I've created a custom Decoder
that decodes a very specific data format, following this tutorial.
Please note that my data format is not XML or JSON.
I started with:
public final class MyDecoder: Decoder {
    public var codingPath: [CodingKey] = []
    public var userInfo: [CodingUserInfoKey: Any] = [:]

    public let input: String

    public init(_ input: String) {
        self.input = input
    }

    // all stubs are below
I then added all the stubs needed for the protocol. Following the video, I also added a struct that conforms to KeyedDecodingContainerProtocol and filled out the required parsing in decode(_ type: String.Type, forKey key: Key).
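For reference, the stubs I mean are the three container requirements of the Decoder protocol. A simplified sketch, not my exact code (MyKeyedContainer is just a placeholder name; my real container(keyedBy:) returns the KeyedDecodingContainerProtocol struct mentioned above):

extension MyDecoder {
    public func container<Key>(keyedBy type: Key.Type) throws -> KeyedDecodingContainer<Key> where Key: CodingKey {
        // Placeholder: the real code wraps my keyed container, e.g.
        // KeyedDecodingContainer(MyKeyedContainer<Key>(decoder: self))
        fatalError("sketch only")
    }

    public func unkeyedContainer() throws -> UnkeyedDecodingContainer {
        fatalError("my format has no unkeyed values")
    }

    public func singleValueContainer() throws -> SingleValueDecodingContainer {
        fatalError("my format has no single values")
    }
}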
To get the info out, I do this:
let decoder = MyDecoder(input)
let info = try! MyRecord(from: decoder)
So far so good; this works.
What I want to do next is to see whether it is possible to do this the way it is done with a JSONDecoder:
let decoder = MyDecoder()
guard let info = try? decoder.decode(MyInfo.self, from: input) else {
    fatalError("Failed to decode data")
}
But of course that doesn't work, because MyDecoder doesn't have a decode() function.
So, my question is, if I add this function, what should go in it?
So based on the very useful comments, I changed my code as follows. First I added a top-level MyDecoder struct and changed the name of the original decoder to _MyDecoder.
import Combine

public struct MyDecoder: TopLevelDecoder {
    public typealias Input = String

    public init() {}

    public func decode<T>(_ type: T.Type, from input: String) throws -> T where T: Decodable {
        let decoder = _MyDecoder(input)
        return try MyRecord(from: decoder) as! T
    }
}
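A nice side effect of conforming to TopLevelDecoder is that the decoder can be plugged straight into Combine's decode operator. A minimal sketch, assuming MyRecord is Decodable (the Just publisher and its input string are only illustrative):

import Combine

// Decode a published raw input string into a MyRecord value.
let cancellable = Just("raw input goes here")
    .decode(type: MyRecord.self, decoder: MyDecoder())
    .sink(
        receiveCompletion: { completion in print("completion: \(completion)") },
        receiveValue: { record in print("decoded: \(record)") }
    )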
Alternatively, this also works without using the TopLevelDecoder protocol (and thus can run on < iOS 13):
//...
public func decode<T: Decodable>(_ type: T.Type, from input: String) throws -> T {
    let decoder = _MyDecoder(input)
    return try MyRecord(from: decoder) as! T
}
And now I can call the decoder as follows:
let decoder = MyDecoder()
return try! decoder.decode(MyRecord.self, from: input)
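One further refinement I may make later: since decode is generic over T, the hard-coded MyRecord and the forced cast can be dropped by initialising T directly from the inner decoder. A sketch, assuming _MyDecoder can drive any Decodable type I pass in:

public func decode<T: Decodable>(_ type: T.Type, from input: String) throws -> T {
    let decoder = _MyDecoder(input)
    // T is Decodable, so it can be initialised straight from the decoder;
    // no forced cast and no hard-coded MyRecord required.
    return try T(from: decoder)
}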