I am getting accelerometer data from a BLE device in the form of 6 bytes, and I want to convert it into a floating point value that is as precise as possible.
Each pair of bytes represents a different axis (x, y, and z respectively). Instead of a number like 982, I want a number like 0.98243873.
I've tried converting the bytes into a Float with

let data = characteristic.value!
let floatX = Float(bitPattern: UInt32(littleEndian: data[0...1].withUnsafeBytes { $0.pointee }))

or

let floatX = data[0...1].withUnsafeBytes { $0.pointee } as Float
but I am getting weird numbers like -6.777109e-21 when the Int16 value is 1047, and I am expecting something like 1.04793283. Would it have something to do with the bytes being signed? Can I even get precision like this from two bytes?
The problem is that you're trying to turn little-endian UInt32 values into Float values merely by "reinterpreting" the same bit pattern as a new type (that's what Float(bitPattern:) is for), but that's not at all how Float stores its data. Swift's Float and Double types are implementations of the 32- and 64-bit floating point formats from IEEE 754. There are plenty of online resources that explain the details, but the TL;DR is that they store numbers in a form similar to scientific notation: a significand (mantissa) multiplied by a power-of-two exponent. An integer's raw bits mean something entirely different when read through that lens.

As for your precision question: two bytes can distinguish only 65,536 values, so about four to five significant decimal digits is all the information they can carry; the extra digits in something like 1.04793283 would be noise.
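To make the bit-pattern point concrete, here is a tiny standalone snippet you can run in a playground, comparing reinterpretation against actual value conversion:

let raw: UInt32 = 1047
// Reinterpreting the integer's bits as a Float drops them into the
// exponent/mantissa fields, producing a tiny subnormal number.
print(Float(bitPattern: raw))                     // ≈ 1.467e-42
// Converting the *value* does what you'd expect.
print(Float(raw))                                 // 1047.0
// The bit pattern that actually encodes 1047.0 is completely different.
print(String(Float(1047).bitPattern, radix: 16))  // 4482e000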
I think part of your difficulty comes from trying to do too much at once. Break it down into small pieces. Write a function that takes your data and decomposes it into the three Int16 components (your 6 bytes are three signed 2-byte values, one per axis). Then write a separate function that does whatever transformation you want on those components, such as turning them into floats. Here's a rough example:
import Foundation

func createTestData(x: Int16, y: Int16, z: Int16) -> Data {
    return [x, y, z]
        .map { $0.littleEndian } // force little-endian byte order for the wire format
        .withUnsafeBufferPointer { Data(buffer: $0) }
}

func decode(data: Data) -> (x: Int16, y: Int16, z: Int16) {
    let values = data.withUnsafeBytes { bufferPointer in
        bufferPointer
            .bindMemory(to: Int16.self)
            .map { rawBitPattern in
                // interpret each 2-byte chunk as a little-endian signed integer
                Int16(littleEndian: rawBitPattern)
            }
    }
    assert(values.count == 3)
    return (x: values[0], y: values[1], z: values[2])
}

func transform(ints: (x: Int16, y: Int16, z: Int16))
    -> (x: Float, y: Float, z: Float) {
    let transform: (Int16) -> Float = { Float($0) / 1000 } // define whatever scaling your device's datasheet specifies
    return (transform(ints.x), transform(ints.y), transform(ints.z))
}

let testData = createTestData(x: 123, y: 456, z: 789)
print(testData)       // => 6 bytes
let decodedVector = decode(data: testData)
print(decodedVector)  // => (x: 123, y: 456, z: 789)
let intsToFloats = transform(ints: decodedVector)
print(intsToFloats)   // => (x: 0.123, y: 0.456, z: 0.789)
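Since Int16(littleEndian:) preserves the sign bit, negative readings come through correctly too; createTestData(x: -982, y: 0, z: 0) decodes back to -982, which the transform above turns into -0.982. One caveat: bindMemory(to:) assumes the underlying buffer is suitably aligned for Int16, which a Data slice doesn't strictly guarantee. If you're on Swift 5.7 or later, loadUnaligned sidesteps that; here's a sketch of an alternative decoder (the name decodeUnaligned is just for illustration):

func decodeUnaligned(data: Data) -> (x: Int16, y: Int16, z: Int16)? {
    guard data.count >= 6 else { return nil } // expect 3 axes × 2 bytes
    let values: [Int16] = data.withUnsafeBytes { buffer in
        (0..<3).map { axis in
            // loadUnaligned reads 2 bytes at any byte offset, no alignment required
            Int16(littleEndian: buffer.loadUnaligned(fromByteOffset: axis * 2,
                                                     as: Int16.self))
        }
    }
    return (x: values[0], y: values[1], z: values[2])
}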