Filing this one as an improvement.
I feel like the current way single tests are run can give users the "false" sense that all tests in a given test case are passing, while in reality some of them might have been broken by the changes made to get that single test to pass.
Steps to reproduce
Create a test case and write a new test:
test1
	self assert: self foo equals: 1
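For reference, the test case holding these tests could be defined like this (the class name #MyFooTest and the category are assumptions for illustration):
TestCase subclass: #MyFooTest
	instanceVariableNames: ''
	classVariableNames: ''
	category: 'MyFooTest-Tests'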
Run #test1. It should fail because #foo has not been implemented yet.
Implement it so the test passes:
foo
	^ 1
Write another test:
test2
	self assert: self foo equals: 2
Now click on the gray button by the test name to run only #test2.
#test2 fails (as expected).
Now go back to #foo and change it so #test2 passes:
foo
	^ 2
Run #test2 again (it should pass).
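At this point, running the whole class from a playground shows that #test1 is now broken. MyFooTest stands for the assumed test class name from the sketch above; both expressions use the standard SUnit API:
"Evaluate in a playground."
MyFooTest suite run.	"the returned TestResult reports one failure: #test1"
(MyFooTest selector: #test1) run.	"runs only #test1, which now fails"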
Expected Behaviour
The test status icon for #test1 should turn gray, informing the user that the test should be run again. #test1 might have been broken during our efforts to make #test2 pass (that's exactly what we did).
In fact, all tests except the last one run should turn gray, inviting the user to run them again to make sure the changes made to pass the last test did not break anything else that was working before.
Actual Behaviour
The test result icon for the test that was run is updated, but the icons for the rest of the tests in the test class remain the same (in this scenario, the icon for #test1 stays green, even though #test1 is now broken).
Analysis
When running a single test, TestResult>>#updateResultsInHistory is sent, which appends the result to the test case history without resetting the statuses of the other tests.
The entire test history should instead be replaced with a new one containing only the result of the last run.
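A possible direction, sketched below with assumed selector names (this is not a tested patch against the actual SUnit code): before recording the new results, drop the stored history of every test class involved in the run, so that only the tests that were just executed keep a status and everything else falls back to the neutral 'not run' state.
TestResult >> replaceResultsInHistory
	"Hypothetical sketch: #resetHistory is an assumed helper that puts the
	class-side history back to an empty state. After the reset, only the tests
	belonging to this result get a recorded status, so the UI can show every
	other test as not run (gray) instead of a stale green."
	| involvedClasses |
	involvedClasses := (self passed , self defects)
		collect: [ :testCase | testCase class ] as: Set.
	involvedClasses do: [ :testClass | testClass resetHistory ].
	self updateResultsInHistory
Whether this belongs in TestResult or closer to the UI that draws the icons is open; the key point is that a partial run should invalidate, rather than preserve, the statuses of the tests it did not execute.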
Priority: 5 – Fix If Time
Status: Working On
Assigned to: Nicolás Papagna Maldonado
Milestone: Pharo6.0