Feature #9711 » copy-test-unit-minitest.patch
test/lib/minitest/.document

# Ignore README.txt, it is included in the minitest documentation.
*.rb

test/lib/minitest/README.txt

= minitest/{unit,spec,mock,benchmark}

home :: https://github.com/seattlerb/minitest
rdoc :: http://docs.seattlerb.org/minitest
vim  :: https://github.com/sunaku/vim-ruby-minitest

== DESCRIPTION:

minitest provides a complete suite of testing facilities supporting
TDD, BDD, mocking, and benchmarking.

    "I had a class with Jim Weirich on testing last week and we were
     allowed to choose our testing frameworks. Kirk Haines and I were
     paired up and we cracked open the code for a few test
     frameworks...

     I MUST say that minitest is *very* readable / understandable
     compared to the 'other two' options we looked at. Nicely done and
     thank you for helping us keep our mental sanity."

    -- Wayne E. Seguin

minitest/unit is a small and incredibly fast unit testing framework.
It provides a rich set of assertions to make your tests clean and
readable.

minitest/spec is a functionally complete spec engine. It hooks onto
minitest/unit and seamlessly bridges test assertions over to spec
expectations.

minitest/benchmark is an awesome way to assert the performance of your
algorithms in a repeatable manner. Now you can assert that your newb
co-worker doesn't replace your linear algorithm with an exponential
one!

minitest/mock, by Steven Baker, is a beautifully tiny mock (and stub)
object framework.

minitest/pride shows pride in testing and adds coloring to your test
output. I guess it is an example of how to write IO pipes too. :P
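
One minimal way to turn it on, for example, is to require it next to
autorun in your test helper:

  require 'minitest/autorun'
  require 'minitest/pride'   # colorized test output from here on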

minitest/unit is meant to have a clean implementation for language
implementors that need a minimal set of methods to bootstrap a working
test suite. For example, there is no magic involved for test-case
discovery.

    "Again, I can't praise enough the idea of a testing/specing
     framework that I can actually read in full in one sitting!"

    -- Piotr Szotkowski

Comparing to rspec:

    rspec is a testing DSL. minitest is ruby.

    -- Adam Hawkins, "Bow Before MiniTest"

minitest doesn't reinvent anything that ruby already provides, like:
classes, modules, inheritance, methods. This means you only have to
learn ruby to use minitest and all of your regular OO practices like
extract-method refactorings still apply.
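
For example (an illustrative sketch, not part of the original README),
a shared check is just a method you extract and call like any other:

  class TestGreeting < MiniTest::Unit::TestCase
    # plain-Ruby extract-method refactoring: a shared assertion helper
    def assert_enthusiastic answer
      assert_match(/!\z/, answer, "expected an enthusiastic answer")
    end

    def test_greeting
      assert_enthusiastic "OHAI!"
    end
  end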

== FEATURES/PROBLEMS:

* minitest/autorun - the easy and explicit way to run all your tests.
* minitest/unit - a very fast, simple, and clean test system.
* minitest/spec - a very fast, simple, and clean spec system.
* minitest/mock - a simple and clean mock/stub system.
* minitest/benchmark - an awesome way to assert your algorithm's performance.
* minitest/pride - show your pride in testing!
* Incredibly small and fast runner, but no bells and whistles.

== RATIONALE:

See design_rationale.rb to see how specs and tests work in minitest.

== SYNOPSIS:

Given that you'd like to test the following class:

  class Meme
    def i_can_has_cheezburger?
      "OHAI!"
    end

    def will_it_blend?
      "YES!"
    end
  end

=== Unit tests

  require 'minitest/autorun'

  class TestMeme < MiniTest::Unit::TestCase
    def setup
      @meme = Meme.new
    end

    def test_that_kitty_can_eat
      assert_equal "OHAI!", @meme.i_can_has_cheezburger?
    end

    def test_that_it_will_not_blend
      refute_match /^no/i, @meme.will_it_blend?
    end

    def test_that_will_be_skipped
      skip "test this later"
    end
  end

=== Specs

  require 'minitest/autorun'

  describe Meme do
    before do
      @meme = Meme.new
    end

    describe "when asked about cheeseburgers" do
      it "must respond positively" do
        @meme.i_can_has_cheezburger?.must_equal "OHAI!"
      end
    end

    describe "when asked about blending possibilities" do
      it "won't say no" do
        @meme.will_it_blend?.wont_match /^no/i
      end
    end
  end

For matchers support check out:

  https://github.com/zenspider/minitest-matchers

=== Benchmarks

Add benchmarks to your regular unit tests. If the unit tests fail, the
benchmarks won't run.

  # optionally run benchmarks, good for CI-only work!
  require 'minitest/benchmark' if ENV["BENCH"]

  class TestMeme < MiniTest::Unit::TestCase
    # Override self.bench_range or default range is [1, 10, 100, 1_000, 10_000]
    def bench_my_algorithm
      assert_performance_linear 0.9999 do |n| # n is a range value
        @obj.my_algorithm(n)
      end
    end
  end

Or add them to your specs. If you make benchmarks optional, you'll
need to wrap your benchmarks in a conditional since the methods won't
be defined.

  describe Meme do
    if ENV["BENCH"] then
      bench_performance_linear "my_algorithm", 0.9999 do |n|
        100.times do
          @obj.my_algorithm(n)
        end
      end
    end
  end

outputs something like:

  # Running benchmarks:

  TestBlah               100     1000    10000
  bench_my_algorithm     0.006167 0.079279 0.786993
  bench_other_algorithm  0.061679 0.792797 7.869932

Output is tab-delimited to make it easy to paste into a spreadsheet.

=== Mocks

  class MemeAsker
    def initialize(meme)
      @meme = meme
    end

    def ask(question)
      method = question.tr(" ","_") + "?"
      @meme.__send__(method)
    end
  end

  require 'minitest/autorun'

  describe MemeAsker do
    before do
      @meme = MiniTest::Mock.new
      @meme_asker = MemeAsker.new @meme
    end

    describe "#ask" do
      describe "when passed an unpunctuated question" do
        it "should invoke the appropriate predicate method on the meme" do
          @meme.expect :will_it_blend?, :return_value
          @meme_asker.ask "will it blend"
          @meme.verify
        end
      end
    end
  end

=== Stubs

  def test_stale_eh
    obj_under_test = Something.new
    refute obj_under_test.stale?

    Time.stub :now, Time.at(0) do # stub goes away once the block is done
      assert obj_under_test.stale?
    end
  end

A note on stubbing: In order to stub a method, the method must
actually exist prior to stubbing. Use a singleton method to create a
new non-existing method:

  def obj_under_test.fake_method
    ...
  end
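
For example (an illustrative sketch; +Something+ and +fake_method+ are
placeholder names), define the singleton method first and then stub it
as usual:

  def test_fake_method
    obj_under_test = Something.new

    def obj_under_test.fake_method  # now the method exists...
      :real
    end

    obj_under_test.stub :fake_method, :faked do  # ...so it can be stubbed
      assert_equal :faked, obj_under_test.fake_method
    end
  end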

=== Customizable Test Runner Types:

MiniTest::Unit.runner=(runner) provides an easy way of creating custom
test runners for specialized needs. Justin Weiss provides the
following real-world example to create an alternative to regular
fixture loading:

  class MiniTestWithHooks::Unit < MiniTest::Unit
    def before_suites
    end

    def after_suites
    end

    def _run_suites(suites, type)
      begin
        before_suites
        super(suites, type)
      ensure
        after_suites
      end
    end

    def _run_suite(suite, type)
      begin
        suite.before_suite
        super(suite, type)
      ensure
        suite.after_suite
      end
    end
  end

  module MiniTestWithTransactions
    class Unit < MiniTestWithHooks::Unit
      include TestSetupHelper

      def before_suites
        super
        setup_nested_transactions
        # load any data we want available for all tests
      end

      def after_suites
        teardown_nested_transactions
        super
      end
    end
  end

  MiniTest::Unit.runner = MiniTestWithTransactions::Unit.new

== FAQ

=== How to test SimpleDelegates?

The following implementation and test:

  class Worker < SimpleDelegator
    def work
    end
  end

  describe Worker do
    before do
      @worker = Worker.new(Object.new)
    end

    it "must respond to work" do
      @worker.must_respond_to :work
    end
  end

outputs a failure:

  1) Failure:
  Worker#test_0001_must respond to work [bug11.rb:16]:
  Expected #<Object:0x007f9e7184f0a0> (Object) to respond to #work.

Worker is a SimpleDelegator, which in 1.9+ is a subclass of BasicObject.
Expectations are put on Object (one level down), so the Worker
(SimpleDelegator) hits `method_missing` and delegates down to the
`Object.new` instance. That object doesn't respond to work, so the test
fails.

You can bypass `SimpleDelegator#method_missing` by extending the worker
with `MiniTest::Expectations`. You can either do that in your setup at
the instance level, like:

  before do
    @worker = Worker.new(Object.new)
    @worker.extend MiniTest::Expectations
  end

or you can extend the Worker class (within the test file!), like:

  class Worker
    include ::MiniTest::Expectations
  end

== Known Extensions:

capybara_minitest_spec :: Bridge between Capybara RSpec matchers and MiniTest::Spec expectations (e.g. page.must_have_content('Title')).
minispec-metadata :: Metadata for describe/it blocks
                     (e.g. `it 'requires JS driver', js: true do`)
minitest-ansi :: Colorize minitest output with ANSI colors.
minitest-around :: Around block for minitest. An alternative to setup/teardown dance.
minitest-capistrano :: Assertions and expectations for testing Capistrano recipes
minitest-capybara :: Capybara matchers support for minitest unit and spec
minitest-chef-handler :: Run Minitest suites as Chef report handlers
minitest-ci :: CI reporter plugin for MiniTest.
minitest-colorize :: Colorize MiniTest output and show failing tests instantly.
minitest-context :: Defines contexts for code reuse in MiniTest
                    specs that share common expectations.
minitest-debugger :: Wraps assert so failed assertions drop into
                     the ruby debugger.
minitest-display :: Patches MiniTest to allow for an easily configurable output.
minitest-emoji :: Print out emoji for your test passes, fails, and skips.
minitest-english :: Semantically symmetric aliases for assertions and expectations.
minitest-excludes :: Clean API for excluding certain tests you
                     don't want to run under certain conditions.
minitest-firemock :: Makes your MiniTest mocks more resilient.
minitest-great_expectations :: Generally useful additions to minitest's assertions and expectations
minitest-growl :: Test notifier for minitest via growl.
minitest-implicit-subject :: Implicit declaration of the test subject.
minitest-instrument :: Instrument ActiveSupport::Notifications when
                       test method is executed
minitest-instrument-db :: Store information about speed of test
                          execution provided by minitest-instrument in database
minitest-libnotify :: Test notifier for minitest via libnotify.
minitest-macruby :: Provides extensions to minitest for macruby UI testing.
minitest-matchers :: Adds support for RSpec-style matchers to minitest.
minitest-metadata :: Annotate tests with metadata (key-value).
minitest-mongoid :: Mongoid assertion matchers for MiniTest
minitest-must_not :: Provides must_not as an alias for wont in MiniTest
minitest-nc :: Test notifier for minitest via Mountain Lion's Notification Center
minitest-predicates :: Adds support for .predicate? methods
minitest-rails :: MiniTest integration for Rails 3.x
minitest-rails-capybara :: Capybara integration for MiniTest::Rails
minitest-reporters :: Create customizable MiniTest output formats
minitest-should_syntax :: RSpec-style +x.should == y+ assertions for MiniTest
minitest-shouldify :: Adding all manner of shoulds to MiniTest (bad idea)
minitest-spec-context :: Provides rspec-ish context method to MiniTest::Spec
minitest-spec-magic :: Minitest::Spec extensions for Rails and beyond
minitest-spec-rails :: Drop in MiniTest::Spec superclass for ActiveSupport::TestCase.
minitest-stub-const :: Stub constants for the duration of a block
minitest-tags :: add tags for minitest
minitest-wscolor :: Yet another test colorizer.
minitest_owrapper :: Get tests results as a TestResult object.
minitest_should :: Shoulda style syntax for minitest test::unit.
minitest_tu_shim :: minitest_tu_shim bridges between test/unit and minitest.
mongoid-minitest :: MiniTest matchers for Mongoid.
pry-rescue :: A pry plugin w/ minitest support. See pry-rescue/minitest.rb.

== Unknown Extensions:

Authors... Please send me a pull request with a description of your minitest extension.

* assay-minitest
* detroit-minitest
* em-minitest-spec
* flexmock-minitest
* guard-minitest
* guard-minitest-decisiv
* minitest-activemodel
* minitest-ar-assertions
* minitest-capybara-unit
* minitest-colorer
* minitest-deluxe
* minitest-extra-assertions
* minitest-rails-shoulda
* minitest-spec
* minitest-spec-should
* minitest-sugar
* minitest_should
* mongoid-minitest
* spork-minitest

== REQUIREMENTS:

* Ruby 1.8, maybe even 1.6 or lower. No magic is involved.

== INSTALL:

  sudo gem install minitest

On 1.9, you already have it. To get newer candy you can still install
the gem, but you'll need to activate the gem explicitly to use it:

  require 'rubygems'
  gem 'minitest' # ensures you're using the gem, and not the built in MT
  require 'minitest/autorun'

  # ... usual testing stuffs ...

DO NOTE: There is a serious problem with the way that ruby 1.9/2.0
packages their own gems. They install a gem specification file, but
don't install the gem contents in the gem path. This messes up
Gem.find_files and many other things (gem which, gem contents, etc).

Just install minitest as a gem for real and you'll be happier.

== LICENSE:

(The MIT License)

Copyright (c) Ryan Davis, seattle.rb

Permission is hereby granted, free of charge, to any person obtaining
a copy of this software and associated documentation files (the
'Software'), to deal in the Software without restriction, including
without limitation the rights to use, copy, modify, merge, publish,
distribute, sublicense, and/or sell copies of the Software, and to
permit persons to whom the Software is furnished to do so, subject to
the following conditions:

The above copyright notice and this permission notice shall be
included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED 'AS IS', WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY
CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT,
TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

test/lib/minitest/autorun.rb

# encoding: utf-8

######################################################################
# This file is imported from the minitest project.
# DO NOT make modifications in this repo. They _will_ be reverted!
# File a patch instead and assign it to Ryan Davis.
######################################################################

begin
  require 'rubygems'
  gem 'minitest'
rescue Gem::LoadError
  # do nothing
end

require 'minitest/unit'
require 'minitest/spec'
require 'minitest/mock'

MiniTest::Unit.autorun

test/lib/minitest/benchmark.rb

# encoding: utf-8

######################################################################
# This file is imported from the minitest project.
# DO NOT make modifications in this repo. They _will_ be reverted!
# File a patch instead and assign it to Ryan Davis.
######################################################################

require 'minitest/unit'
require 'minitest/spec'

class MiniTest::Unit # :nodoc:
  def run_benchmarks # :nodoc:
    _run_anything :benchmark
  end

  def benchmark_suite_header suite # :nodoc:
    "\n#{suite}\t#{suite.bench_range.join("\t")}"
  end

  class TestCase
    ##
    # Returns a set of ranges stepped exponentially from +min+ to
    # +max+ by powers of +base+. Eg:
    #
    #   bench_exp(2, 16, 2) # => [2, 4, 8, 16]
    def self.bench_exp min, max, base = 10
      min = (Math.log10(min) / Math.log10(base)).to_i
      max = (Math.log10(max) / Math.log10(base)).to_i

      (min..max).map { |m| base ** m }.to_a
    end

    ##
    # Returns a set of ranges stepped linearly from +min+ to +max+ by
    # +step+. Eg:
    #
    #   bench_linear(20, 40, 10) # => [20, 30, 40]
    def self.bench_linear min, max, step = 10
      (min..max).step(step).to_a
    rescue LocalJumpError # 1.8.6
      r = []; (min..max).step(step) { |n| r << n }; r
    end

    ##
    # Returns the benchmark methods (methods that start with bench_)
    # for that class.
    def self.benchmark_methods # :nodoc:
      public_instance_methods(true).grep(/^bench_/).map { |m| m.to_s }.sort
    end

    ##
    # Returns all test suites that have benchmark methods.
    def self.benchmark_suites
      TestCase.test_suites.reject { |s| s.benchmark_methods.empty? }
    end

    ##
    # Specifies the ranges used for benchmarking for that class.
    # Defaults to exponential growth from 1 to 10k by powers of 10.
    # Override if you need different ranges for your benchmarks.
    #
    # See also: ::bench_exp and ::bench_linear.
    def self.bench_range
      bench_exp 1, 10_000
    end

    ##
    # Runs the given +work+, gathering the times of each run. Range
    # and times are then passed to a given +validation+ proc. Outputs
    # the benchmark name and times in tab-separated format, making it
    # easy to paste into a spreadsheet for graphing or further
    # analysis.
    #
    # Ranges are specified by ::bench_range.
    #
    # Eg:
    #
    #   def bench_algorithm
    #     validation = proc { |x, y| ... }
    #     assert_performance validation do |n|
    #       @obj.algorithm(n)
    #     end
    #   end
    def assert_performance validation, &work
      range = self.class.bench_range

      io.print "#{__name__}"

      times = []

      range.each do |x|
        GC.start
        t0 = Time.now
        instance_exec(x, &work)
        t = Time.now - t0

        io.print "\t%9.6f" % t
        times << t
      end
      io.puts

      validation[range, times]
    end

    ##
    # Runs the given +work+ and asserts that the times gathered fit to
    # match a constant rate (eg, linear slope == 0) within a given
    # +threshold+. Note: because we're testing for a slope of 0, R^2
    # is not a good determining factor for the fit, so the threshold
    # is applied against the slope itself. As such, you probably want
    # to tighten it from the default.
    #
    # See http://www.graphpad.com/curvefit/goodness_of_fit.htm for
    # more details.
    #
    # Fit is calculated by #fit_linear.
    #
    # Ranges are specified by ::bench_range.
    #
    # Eg:
    #
    #   def bench_algorithm
    #     assert_performance_constant 0.9999 do |n|
    #       @obj.algorithm(n)
    #     end
    #   end
    def assert_performance_constant threshold = 0.99, &work
      validation = proc do |range, times|
        a, b, rr = fit_linear range, times
        assert_in_delta 0, b, 1 - threshold
        [a, b, rr]
      end

      assert_performance validation, &work
    end

    ##
    # Runs the given +work+ and asserts that the times gathered fit to
    # match an exponential curve within a given error +threshold+.
    #
    # Fit is calculated by #fit_exponential.
    #
    # Ranges are specified by ::bench_range.
    #
    # Eg:
    #
    #   def bench_algorithm
    #     assert_performance_exponential 0.9999 do |n|
    #       @obj.algorithm(n)
    #     end
    #   end
    def assert_performance_exponential threshold = 0.99, &work
      assert_performance validation_for_fit(:exponential, threshold), &work
    end

    ##
    # Runs the given +work+ and asserts that the times gathered fit to
    # match a logarithmic curve within a given error +threshold+.
    #
    # Fit is calculated by #fit_logarithmic.
    #
    # Ranges are specified by ::bench_range.
    #
    # Eg:
    #
    #   def bench_algorithm
    #     assert_performance_logarithmic 0.9999 do |n|
    #       @obj.algorithm(n)
    #     end
    #   end
    def assert_performance_logarithmic threshold = 0.99, &work
      assert_performance validation_for_fit(:logarithmic, threshold), &work
    end

    ##
    # Runs the given +work+ and asserts that the times gathered fit to
    # match a straight line within a given error +threshold+.
    #
    # Fit is calculated by #fit_linear.
    #
    # Ranges are specified by ::bench_range.
    #
    # Eg:
    #
    #   def bench_algorithm
    #     assert_performance_linear 0.9999 do |n|
    #       @obj.algorithm(n)
    #     end
    #   end
    def assert_performance_linear threshold = 0.99, &work
      assert_performance validation_for_fit(:linear, threshold), &work
    end

    ##
    # Runs the given +work+ and asserts that the times gathered curve
    # fit to match a power curve within a given error +threshold+.
    #
    # Fit is calculated by #fit_power.
    #
    # Ranges are specified by ::bench_range.
    #
    # Eg:
    #
    #   def bench_algorithm
    #     assert_performance_power 0.9999 do |x|
    #       @obj.algorithm
    #     end
    #   end
    def assert_performance_power threshold = 0.99, &work
      assert_performance validation_for_fit(:power, threshold), &work
    end

    ##
    # Takes an array of x/y pairs and calculates the general R^2 value.
    #
    # See: http://en.wikipedia.org/wiki/Coefficient_of_determination
    def fit_error xys
      y_bar  = sigma(xys) { |x, y| y } / xys.size.to_f
      ss_tot = sigma(xys) { |x, y| (y - y_bar) ** 2 }
      ss_err = sigma(xys) { |x, y| (yield(x) - y) ** 2 }

      1 - (ss_err / ss_tot)
    end

    ##
    # To fit a functional form: y = ae^(bx).
    #
    # Takes x and y values and returns [a, b, r^2].
    #
    # See: http://mathworld.wolfram.com/LeastSquaresFittingExponential.html
    def fit_exponential xs, ys
      n     = xs.size
      xys   = xs.zip(ys)
      sxlny = sigma(xys) { |x,y| x * Math.log(y) }
      slny  = sigma(xys) { |x,y| Math.log(y) }
      sx2   = sigma(xys) { |x,y| x * x }
      sx    = sigma xs

      c = n * sx2 - sx ** 2
      a = (slny * sx2 - sx * sxlny) / c
      b = ( n * sxlny - sx * slny ) / c

      return Math.exp(a), b, fit_error(xys) { |x| Math.exp(a + b * x) }
    end

    ##
    # To fit a functional form: y = a + b*ln(x).
    #
    # Takes x and y values and returns [a, b, r^2].
    #
    # See: http://mathworld.wolfram.com/LeastSquaresFittingLogarithmic.html
    def fit_logarithmic xs, ys
      n     = xs.size
      xys   = xs.zip(ys)
      slnx2 = sigma(xys) { |x,y| Math.log(x) ** 2 }
      slnx  = sigma(xys) { |x,y| Math.log(x) }
      sylnx = sigma(xys) { |x,y| y * Math.log(x) }
      sy    = sigma(xys) { |x,y| y }

      c = n * slnx2 - slnx ** 2
      b = ( n * sylnx - sy * slnx ) / c
      a = (sy - b * slnx) / n

      return a, b, fit_error(xys) { |x| a + b * Math.log(x) }
    end

    ##
    # Fits the functional form: a + bx.
    #
    # Takes x and y values and returns [a, b, r^2].
    #
    # See: http://mathworld.wolfram.com/LeastSquaresFitting.html
    def fit_linear xs, ys
      n   = xs.size
      xys = xs.zip(ys)
      sx  = sigma xs
      sy  = sigma ys
      sx2 = sigma(xs)  { |x|   x ** 2 }
      sxy = sigma(xys) { |x,y| x * y }

      c = n * sx2 - sx ** 2
      a = (sy * sx2 - sx * sxy) / c
      b = ( n * sxy - sx * sy ) / c

      return a, b, fit_error(xys) { |x| a + b * x }
    end

    ##
    # To fit a functional form: y = ax^b.
    #
    # Takes x and y values and returns [a, b, r^2].
    #
    # See: http://mathworld.wolfram.com/LeastSquaresFittingPowerLaw.html
    def fit_power xs, ys
      n       = xs.size
      xys     = xs.zip(ys)
      slnxlny = sigma(xys) { |x, y| Math.log(x) * Math.log(y) }
      slnx    = sigma(xs)  { |x| Math.log(x) }
      slny    = sigma(ys)  { |y| Math.log(y) }
      slnx2   = sigma(xs)  { |x| Math.log(x) ** 2 }

      b = (n * slnxlny - slnx * slny) / (n * slnx2 - slnx ** 2)
      a = (slny - b * slnx) / n

      return Math.exp(a), b, fit_error(xys) { |x| (Math.exp(a) * (x ** b)) }
    end
    ##
    # Enumerates over +enum+ mapping +block+ if given, returning the
    # sum of the result. Eg:
    #
    #   sigma([1, 2, 3])                # => 1 + 2 + 3 => 6
    #   sigma([1, 2, 3]) { |n| n ** 2 } # => 1 + 4 + 9 => 14
    def sigma enum, &block
      enum = enum.map(&block) if block
      enum.inject { |sum, n| sum + n }
    end
##
|
||
# Returns a proc that calls the specified fit method and asserts
|
||
# that the error is within a tolerable threshold.
|
||
def validation_for_fit msg, threshold
|
||
proc do |range, times|
|
||
a, b, rr = send "fit_#{msg}", range, times
|
||
assert_operator rr, :>=, threshold
|
||
[a, b, rr]
|
||
end
|
||
end
|
||
end
|
||
end
|
||
class MiniTest::Spec
|
||
##
|
||
# This is used to define a new benchmark method. You usually don't
|
||
# use this directly and is intended for those needing to write new
|
||
# performance curve fits (eg: you need a specific polynomial fit).
|
||
#
|
||
# See ::bench_performance_linear for an example of how to use this.
|
||
def self.bench name, &block
|
||
define_method "bench_#{name.gsub(/\W+/, '_')}", &block
|
||
end
|
||
##
|
||
# Specifies the ranges used for benchmarking for that class.
|
||
#
|
||
# bench_range do
|
||
# bench_exp(2, 16, 2)
|
||
# end
|
||
#
|
||
# See Unit::TestCase.bench_range for more details.
|
||
def self.bench_range &block
|
||
return super unless block
|
||
meta = (class << self; self; end)
|
||
meta.send :define_method, "bench_range", &block
|
||
end
|
||
##
|
||
# Create a benchmark that verifies that the performance is linear.
|
||
#
|
||
# describe "my class" do
|
||
# bench_performance_linear "fast_algorithm", 0.9999 do |n|
|
||
# @obj.fast_algorithm(n)
|
||
# end
|
||
# end
|
||
def self.bench_performance_linear name, threshold = 0.99, &work
|
||
bench name do
|
||
assert_performance_linear threshold, &work
|
||
end
|
||
end
|
||
##
|
||
# Create a benchmark that verifies that the performance is constant.
|
||
#
|
||
# describe "my class" do
|
||
# bench_performance_constant "zoom_algorithm!" do |n|
|
||
# @obj.zoom_algorithm!(n)
|
||
# end
|
||
# end
|
||
def self.bench_performance_constant name, threshold = 0.99, &work
|
||
bench name do
|
||
assert_performance_constant threshold, &work
|
||
end
|
||
end
|
||
##
|
||
# Create a benchmark that verifies that the performance is exponential.
|
||
#
|
||
# describe "my class" do
|
||
# bench_performance_exponential "algorithm" do |n|
|
||
# @obj.algorithm(n)
|
||
# end
|
||
# end
|
||
def self.bench_performance_exponential name, threshold = 0.99, &work
|
||
bench name do
|
||
assert_performance_exponential threshold, &work
|
||
end
|
||
end
|
||
end
|

test/lib/minitest/hell.rb

# encoding: utf-8

######################################################################
# This file is imported from the minitest project.
# DO NOT make modifications in this repo. They _will_ be reverted!
# File a patch instead and assign it to Ryan Davis.
######################################################################

require "minitest/parallel_each"

# :stopdoc:
class Minitest::Unit::TestCase
  class << self
    alias :old_test_order :test_order

    def test_order
      :parallel
    end
  end
end
# :startdoc:

test/lib/minitest/mock.rb

# encoding: utf-8

######################################################################
# This file is imported from the minitest project.
# DO NOT make modifications in this repo. They _will_ be reverted!
# File a patch instead and assign it to Ryan Davis.
######################################################################

class MockExpectationError < StandardError; end # :nodoc:

##
# A simple and clean mock object framework.

module MiniTest # :nodoc:

  ##
  # All mock objects are an instance of Mock

  class Mock
    alias :__respond_to? :respond_to?

    skip_methods = %w(object_id respond_to_missing? inspect === to_s)

    instance_methods.each do |m|
      undef_method m unless skip_methods.include?(m.to_s) || m =~ /^__/
    end

    def initialize # :nodoc:
      @expected_calls = Hash.new { |calls, name| calls[name] = [] }
      @actual_calls   = Hash.new { |calls, name| calls[name] = [] }
    end

    ##
    # Expect that method +name+ is called, optionally with +args+ or a
    # +blk+, and returns +retval+.
    #
    #   @mock.expect(:meaning_of_life, 42)
    #   @mock.meaning_of_life # => 42
    #
    #   @mock.expect(:do_something_with, true, [some_obj, true])
    #   @mock.do_something_with(some_obj, true) # => true
    #
    #   @mock.expect(:do_something_else, true) do |a1, a2|
    #     a1 == "buggs" && a2 == :bunny
    #   end
    #
    # +args+ is compared to the expected args using case equality (ie, the
    # '===' operator), allowing for less specific expectations.
    #
    #   @mock.expect(:uses_any_string, true, [String])
    #   @mock.uses_any_string("foo") # => true
    #   @mock.verify # => true
    #
    #   @mock.expect(:uses_one_string, true, ["foo"])
    #   @mock.uses_one_string("bar") # => true
    #   @mock.verify # => raises MockExpectationError

    def expect(name, retval, args=[], &blk)
      if block_given?
        raise ArgumentError, "args ignored when block given" unless args.empty?
        @expected_calls[name] << { :retval => retval, :block => blk }
      else
        raise ArgumentError, "args must be an array" unless Array === args
        @expected_calls[name] << { :retval => retval, :args => args }
      end
      self
    end
    def __call name, data # :nodoc:
      case data
      when Hash then
        "#{name}(#{data[:args].inspect[1..-2]}) => #{data[:retval].inspect}"
      else
        data.map { |d| __call name, d }.join ", "
      end
    end

    ##
    # Verify that all methods were called as expected. Raises
    # +MockExpectationError+ if the mock object was not called as
    # expected.

    def verify
      @expected_calls.each do |name, calls|
        calls.each do |expected|
          msg1 = "expected #{__call name, expected}"
          msg2 = "#{msg1}, got [#{__call name, @actual_calls[name]}]"

          raise MockExpectationError, msg2 if
            @actual_calls.has_key?(name) and
            not @actual_calls[name].include?(expected)

          raise MockExpectationError, msg1 unless
            @actual_calls.has_key?(name) and
            @actual_calls[name].include?(expected)
        end
      end
      true
    end

    def method_missing(sym, *args) # :nodoc:
      unless @expected_calls.has_key?(sym) then
        raise NoMethodError, "unmocked method %p, expected one of %p" %
          [sym, @expected_calls.keys.sort_by(&:to_s)]
      end

      index = @actual_calls[sym].length
      expected_call = @expected_calls[sym][index]

      unless expected_call then
        raise MockExpectationError, "No more expects available for %p: %p" %
          [sym, args]
      end

      expected_args, retval, val_block =
        expected_call.values_at(:args, :retval, :block)

      if val_block then
        raise MockExpectationError, "mocked method %p failed block w/ %p" %
          [sym, args] unless val_block.call(args)

        # keep "verify" happy
        @actual_calls[sym] << expected_call
        return retval
      end

      if expected_args.size != args.size then
        raise ArgumentError, "mocked method %p expects %d arguments, got %d" %
          [sym, expected_args.size, args.size]
      end

      fully_matched = expected_args.zip(args).all? { |mod, a|
        mod === a or mod == a
      }

      unless fully_matched then
        raise MockExpectationError, "mocked method %p called with unexpected arguments %p" %
          [sym, args]
      end

      @actual_calls[sym] << {
        :retval => retval,
        :args => expected_args.zip(args).map { |mod, a| mod === a ? mod : a }
      }

      retval
    end

    def respond_to?(sym, include_private = false) # :nodoc:
      return true if @expected_calls.has_key?(sym.to_sym)
      return __respond_to?(sym, include_private)
    end
  end
end

class Object # :nodoc:

  ##
  # Add a temporary stubbed method replacing +name+ for the duration
  # of the +block+. If +val_or_callable+ responds to #call, then it
  # returns the result of calling it, otherwise returns the value
  # as-is. Cleans up the stub at the end of the +block+. The method
  # +name+ must exist before stubbing.
  #
  #   def test_stale_eh
  #     obj_under_test = Something.new
  #     refute obj_under_test.stale?
  #
  #     Time.stub :now, Time.at(0) do
  #       assert obj_under_test.stale?
  #     end
  #   end

  def stub name, val_or_callable, &block
    new_name = "__minitest_stub__#{name}"

    metaclass = class << self; self; end

    if respond_to? name and not methods.map(&:to_s).include? name.to_s then
      metaclass.send :define_method, name do |*args|
        super(*args)
      end
    end

    metaclass.send :alias_method, new_name, name

    metaclass.send :define_method, name do |*args|
      if val_or_callable.respond_to? :call then
        val_or_callable.call(*args)
      else
        val_or_callable
      end
    end

    yield self
  ensure
    metaclass.send :undef_method, name
    metaclass.send :alias_method, name, new_name
    metaclass.send :undef_method, new_name
  end
end

test/lib/minitest/parallel_each.rb

# encoding: utf-8

######################################################################
# This file is imported from the minitest project.
# DO NOT make modifications in this repo. They _will_ be reverted!
# File a patch instead and assign it to Ryan Davis.
######################################################################

##
# Provides a parallel #each that lets you enumerate using N threads.
# Use environment variable N to customize. Defaults to 2. Enumerable,
# so all the goodies come along (tho not all are wrapped yet to
# return another ParallelEach instance).

class ParallelEach
  require 'thread'
  include Enumerable

  ##
  # How many Threads to use for this parallel #each.

  N = (ENV['N'] || 2).to_i

  ##
  # Create a new ParallelEach instance over +list+.

  def initialize list
    @queue = Queue.new # *sigh*... the Queue api sucks sooo much...

    list.each { |i| @queue << i }
    N.times { @queue << nil }
  end

  def grep pattern # :nodoc:
    self.class.new super
  end

  def select(&block) # :nodoc:
    self.class.new super
  end

  alias find_all select # :nodoc:

  ##
  # Starts N threads that yield each element to your block. Joins the
  # threads at the end.

  def each
    threads = N.times.map {
      Thread.new do
        Thread.current.abort_on_exception = true
        while job = @queue.pop
          yield job
        end
      end
    }
    threads.map(&:join)
  end

  def count
    [@queue.size - N, 0].max
  end

  alias_method :size, :count
end

class MiniTest::Unit
  alias _old_run_suites _run_suites

  ##
  # Runs all the +suites+ for a given +type+. Runs suites declaring
  # a test_order of +:parallel+ in parallel, and everything else
  # serial.

  def _run_suites suites, type
    parallel, serial = suites.partition { |s| s.test_order == :parallel }

    ParallelEach.new(parallel).map { |suite| _run_suite suite, type } +
      serial.map { |suite| _run_suite suite, type }
  end
end

test/lib/minitest/pride.rb

# encoding: utf-8

######################################################################
# This file is imported from the minitest project.
# DO NOT make modifications in this repo. They _will_ be reverted!
# File a patch instead and assign it to Ryan Davis.
######################################################################

require "minitest/unit"

##
# Show your testing pride!

class PrideIO
  # Start an escape sequence
  ESC = "\e["

  # End the escape sequence
  NND = "#{ESC}0m"

  # The IO we're going to pipe through.
  attr_reader :io

  def initialize io # :nodoc:
    @io = io
    # stolen from /System/Library/Perl/5.10.0/Term/ANSIColor.pm
    # also reference http://en.wikipedia.org/wiki/ANSI_escape_code
    @colors ||= (31..36).to_a
    @size   = @colors.size
    @index  = 0
    # io.sync = true
  end

  ##
  # Wrap print to colorize the output.

  def print o
    case o
    when "." then
      io.print pride o
    when "E", "F" then
      io.print "#{ESC}41m#{ESC}37m#{o}#{NND}"
    else
      io.print o
    end
  end

  def puts(*o) # :nodoc:
    o.map! { |s|
      s.to_s.sub(/Finished tests/) {
        @index = 0
        'Fabulous tests'.split(//).map { |c|
          pride(c)
        }.join
      }
    }

    super
  end

  ##
  # Color a string.

  def pride string
    string = "*" if string == "."
    c = @colors[@index % @size]
    @index += 1
    "#{ESC}#{c}m#{string}#{NND}"
  end

  def method_missing msg, *args # :nodoc:
    io.send(msg, *args)
  end
end

##
# If you thought the PrideIO was colorful...
#
# (Inspired by lolcat, but with clean math)

class PrideLOL < PrideIO
  PI_3 = Math::PI / 3 # :nodoc:

  def initialize io # :nodoc:
    # walk red, green, and blue around a circle separated by equal thirds.
    #
    # To visualize, type this into wolfram-alpha:
    #
    #   plot (3*sin(x)+3), (3*sin(x+2*pi/3)+3), (3*sin(x+4*pi/3)+3)

    # 6 has wide pretty gradients. 3 == lolcat, about half the width
    @colors = (0...(6 * 7)).map { |n|
      n *= 1.0 / 6
      r = (3 * Math.sin(n) + 3).to_i
      g = (3 * Math.sin(n + 2 * PI_3) + 3).to_i
      b = (3 * Math.sin(n + 4 * PI_3) + 3).to_i

      # Then we take rgb and encode them in a single number using base 6.
      # For some mysterious reason, we add 16... to clear the bottom 4 bits?
      # Yes... they're ugly.

      36 * r + 6 * g + b + 16
    }

    super
  end

  ##
  # Make the string even more colorful. Damnit.

  def pride string
    c = @colors[@index % @size]
    @index += 1
    "#{ESC}38;5;#{c}m#{string}#{NND}"
  end
end

klass = ENV['TERM'] =~ /^xterm|-256color$/ ? PrideLOL : PrideIO
MiniTest::Unit.output = klass.new(MiniTest::Unit.output)

test/lib/minitest/spec.rb

# encoding: utf-8

######################################################################
# This file is imported from the minitest project.
# DO NOT make modifications in this repo. They _will_ be reverted!
# File a patch instead and assign it to Ryan Davis.
######################################################################

#!/usr/bin/ruby -w

require 'minitest/unit'

class Module # :nodoc:
  def infect_an_assertion meth, new_name, dont_flip = false # :nodoc:
    # warn "%-22p -> %p %p" % [meth, new_name, dont_flip]
    self.class_eval <<-EOM
      def #{new_name} *args
        case
        when Proc === self then
          MiniTest::Spec.current.#{meth}(*args, &self)
        when #{!!dont_flip} then
          MiniTest::Spec.current.#{meth}(self, *args)
        else
          MiniTest::Spec.current.#{meth}(args.first, self, *args[1..-1])
        end
      end
    EOM
  end

  ##
  # infect_with_assertions has been removed due to excessive clever.
  # Use infect_an_assertion directly instead.

  def infect_with_assertions(pos_prefix, neg_prefix,
                             skip_re,
                             dont_flip_re = /\c0/,
                             map = {})
    abort "infect_with_assertions is dead. Use infect_an_assertion directly"
  end
end

module Kernel # :nodoc:
  ##
  # Describe a series of expectations for a given target +desc+.
  #
  # TODO: find good tutorial url.
  #
  # Defines a test class subclassing from either MiniTest::Spec or
  # from the surrounding describe's class. The surrounding class may
  # subclass MiniTest::Spec manually in order to easily share code:
  #
  #     class MySpec < MiniTest::Spec
  #       # ... shared code ...
  #     end
  #
  #     class TestStuff < MySpec
  #       it "does stuff" do
  #         # shared code available here
  #       end
  #       describe "inner stuff" do
  #         it "still does stuff" do
  #           # ...and here
  #         end
  #       end
  #     end

  def describe desc, additional_desc = nil, &block # :doc:
    stack = MiniTest::Spec.describe_stack
    name  = [stack.last, desc, additional_desc].compact.join("::")
    sclas = stack.last || if Class === self && is_a?(MiniTest::Spec::DSL) then
                            self
                          else
                            MiniTest::Spec.spec_type desc
                          end

    cls = sclas.create name, desc

    stack.push cls
    cls.class_eval(&block)
    stack.pop
    cls
  end
  private :describe
end

##
# MiniTest::Spec -- The faster, better, less-magical spec framework!
#
# For a list of expectations, see MiniTest::Expectations.

class MiniTest::Spec < MiniTest::Unit::TestCase

  ##
  # Oh look! A MiniTest::Spec::DSL module! Eat your heart out DHH.

  module DSL
    ##
    # Contains pairs of matchers and Spec classes to be used to
    # calculate the superclass of a top-level describe. This allows for
    # automatically customizable spec types.
    #
    # See: register_spec_type and spec_type

    TYPES = [[//, MiniTest::Spec]]

    ##
    # Register a new type of spec that matches the spec's description.
    # This method can take either a Regexp and a spec class or a spec
    # class and a block that takes the description and returns true if
    # it matches.
    #
    # Eg:
    #
    #   register_spec_type(/Controller$/, MiniTest::Spec::Rails)
    #
    # or:
    #
    #   register_spec_type(MiniTest::Spec::RailsModel) do |desc|
    #     desc.superclass == ActiveRecord::Base
    #   end

    def register_spec_type(*args, &block)
      if block then
        matcher, klass = block, args.first
      else
        matcher, klass = *args
      end

      TYPES.unshift [matcher, klass]