I'm trying to write a unit test for this simple function:
import CoreGraphics

extension CGPoint {
    /// Rotate the coordinate of the CGPoint (treated as a vector) by an angle in radians
    /// https://matthew-brett.github.io/teaching/rotation_2d.html
    func rotateCoordinate(by alpha: CGFloat) -> CGPoint {
        // x2 = cos(β)·x1 − sin(β)·y1
        let newX = cos(alpha) * x - sin(alpha) * y
        // y2 = sin(β)·x1 + cos(β)·y1
        let newY = sin(alpha) * x + cos(alpha) * y
        return CGPoint(x: newX, y: newY)
    }
}
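For reference, the two comment lines in the function are the component form of the rotation derived on the linked page; in matrix form the same operation is

$$\begin{pmatrix} x_2 \\ y_2 \end{pmatrix} = \begin{pmatrix} \cos\beta & -\sin\beta \\ \sin\beta & \cos\beta \end{pmatrix} \begin{pmatrix} x_1 \\ y_1 \end{pmatrix}$$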
The test case is very simple:
XCTAssert(CGPoint(x: 1, y: 0).rotateCoordinate(by: -CGFloat.pi) == CGPoint(x: -1, y: 0))
However, my test failed. It's weird, so I debugged the operation and got this:
When running normally, the rotation is still correct, yet the XCTestCase yields a failure. Shouldn't sin(pi) == 0 and cos(pi) == -1?
In the debug session, the lldb debugger calculated sin(pi) as nan and an unexpected value for cos(pi). And in the actual result (left panel of the debugger), newY is a weird number as well.
I was curious and tried a few other variations, but they failed too:
XCTAssert(sin(-CGFloat.pi) == 0) // failed
XCTAssert(cos(-CGFloat.pi / 2) == 0) // failed
(lldb) po cos(Double.pi)
-0.5707963267948966
(lldb) po cos(Double.pi / 2)
0.21460183660255172
(lldb) po cos(-Double.pi)
2.5707963267948966
I'm starting to wonder if I got my math wrong. Any idea how this happened?
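For comparison, here is a minimal sketch (not part of the original post) that evaluates the same expressions in compiled Swift rather than through lldb's po. The results are not the strange numbers above; they are tiny nonzero residuals, because CGFloat.pi only approximates π to finite precision:

import CoreGraphics
import Foundation

// Evaluate the trig calls in compiled code instead of the debugger.
// Mathematically sin(-π) == 0 and cos(-π/2) == 0, but CGFloat.pi is a
// rounded approximation of π, so the results are tiny but nonzero.
print(sin(-CGFloat.pi))      // ≈ -1.2e-16, not exactly 0
print(cos(-CGFloat.pi / 2))  // ≈ 6.1e-17, not exactly 0
print(cos(-CGFloat.pi))      // -1.0 (the error here is below Double precision)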
Here is a way to write the test so it doesn't fail because of floating-point issues. CGFloat.pi is only an approximation of π, so sin(-CGFloat.pi) is a tiny nonzero value rather than exactly 0; compare the components with a small accuracy instead of requiring exact equality:
func testRotated() {
    let sut = CGPoint(x: 1, y: 0).rotateCoordinate(by: -CGFloat.pi)
    let expected = CGPoint(x: -1, y: 0)
    XCTAssertEqual(sut.x, expected.x, accuracy: 0.000001)
    XCTAssertEqual(sut.y, expected.y, accuracy: 0.000001)
}
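If more tests need this kind of comparison, one option is to wrap the two accuracy checks in a small helper. This is only an illustrative sketch; assertEqual(_:_:accuracy:) is a made-up name, not an XCTest API:

import XCTest
import CoreGraphics

// Hypothetical helper: compares two CGPoints component-wise with a tolerance
// and reports failures at the caller's location via file/line.
func assertEqual(_ lhs: CGPoint,
                 _ rhs: CGPoint,
                 accuracy: CGFloat = 0.000001,
                 file: StaticString = #filePath,
                 line: UInt = #line) {
    XCTAssertEqual(lhs.x, rhs.x, accuracy: accuracy, file: file, line: line)
    XCTAssertEqual(lhs.y, rhs.y, accuracy: accuracy, file: file, line: line)
}

// Usage inside a test case:
// assertEqual(CGPoint(x: 1, y: 0).rotateCoordinate(by: -CGFloat.pi),
//             CGPoint(x: -1, y: 0))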