I'm developing an iOS app that handles a Bluetooth SensorTag. The SensorTag is based on the TI BLE SensorTag, but we had some of the sensors removed.
In the source code of the original iOS app from TI, the XYZ values are calculated as follows; KXTJ9_RANGE is defined as 1.0 in my implementation, and KXTJ9 is the accelerometer built into the SensorTag:
+(float) calcXValue:(NSData *)data {
    char scratchVal[data.length];
    [data getBytes:&scratchVal length:3];
    return ((scratchVal[0] * 1.0) / (64 / KXTJ9_RANGE));
}
The data comes in as hexadecimal, e.g. "fe850d", and the method splits it into 3 bytes. Now I'm trying to convert this method to Swift, but I get the wrong numbers back: e.g. fe should return something around 0.02, which is what the Objective-C code does.
My Swift code so far:
class Sensor: NSObject {
    let Range: Float = 1.0
    var data: NSData
    var bytes: [Byte] = [0x00, 0x00, 0x00]

    init(data: NSData) {
        self.data = data
        data.getBytes(&bytes, length: data.length)
    }

    func calcXValue() -> Float {
        return ((Float(bytes[0]) * 1.0) / (64.0 / Range))
    }

    ...
}
I believe the problem must lie in my Float(bytes[0]), because it makes 254 out of fe, whereas scratchVal[0] in Objective-C is around 64.
But my main problem is that I was completely new to iOS programming when I had to start this project, so I chose Swift to learn it and to write the app with it. Right now I use the original Objective-C code from TI with our SensorTag, but I would prefer to use Swift for every part of the app.
On all current iOS and OS X platforms, char is a signed quantity, so the input fe is treated as a negative number. Byte, on the other hand, is an alias for UInt8, which is unsigned.
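For illustration, here is what the raw byte 0xfe from your example becomes under both interpretations (a small sketch, assuming KXTJ9_RANGE = 1.0; the results are shown in the comments):

let raw: UInt8 = 0xfe                          // the type behind Byte

let asUnsigned = Float(raw)                    // 254.0: what the [Byte] version computes
let asSigned   = Float(Int8(bitPattern: raw))  // -2.0: what the Objective-C char yields

let x = asSigned / (64.0 / 1.0)                // -0.03125 after the scaling in calcXValue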
Use an [Int8] array instead to get the same behaviour as in your Objective-C code.
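A minimal sketch of how your Sensor class could look with that change (only the element type of bytes and the initializer order are adjusted; the Range constant and the 3-byte layout are taken from your post):

class Sensor: NSObject {
    let Range: Float = 1.0
    var data: NSData
    var bytes: [Int8] = [0, 0, 0]   // signed bytes, matching the Objective-C char buffer

    init(data: NSData) {
        self.data = data
        super.init()
        data.getBytes(&bytes, length: 3)   // copies at most the 3 bytes the buffer can hold
    }

    func calcXValue() -> Float {
        // bytes[0] is now in the range -128...127, so 0xfe becomes -2
        return (Float(bytes[0]) * 1.0) / (64.0 / Range)
    }
}

With that, fe is interpreted as a signed value again, and calcXValue() produces the same result as the TI Objective-C method.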