I need to use AVSpeechSynthesizer to read text to my users. The user will control playback with AirPods, so I also need MPRemoteCommandCenter.

For now, I prepare audio files using AVSpeechSynthesizer.write(_:toBufferCallback:), then create a playlist and play it with AVQueuePlayer (roughly as sketched below).

It works, but preparing the audio files takes time. I would prefer to call AVSpeechSynthesizer.speak(_:) directly in background mode and drive it via MPRemoteCommandCenter commands.

Is this possible? Or is there any workaround? Thank you!
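For reference, this is roughly my current approach (a minimal sketch, not my exact code; `outputURL` and the surrounding plumbing are placeholders):

```swift
import AVFoundation

// Render one utterance to an audio file via the buffer callback.
// NB: keep a strong reference to the synthesizer until writing finishes.
let synthesizer = AVSpeechSynthesizer()

func render(_ text: String, to outputURL: URL, completion: @escaping () -> Void) {
    let utterance = AVSpeechUtterance(string: text)
    var audioFile: AVAudioFile?

    synthesizer.write(utterance) { buffer in
        guard let pcmBuffer = buffer as? AVAudioPCMBuffer else { return }
        if pcmBuffer.frameLength == 0 {
            // A zero-length buffer marks the end of the utterance.
            completion()
            return
        }
        if audioFile == nil {
            audioFile = try? AVAudioFile(forWriting: outputURL,
                                         settings: pcmBuffer.format.settings)
        }
        try? audioFile?.write(from: pcmBuffer)
    }
}

// Then queue the rendered files:
// let player = AVQueuePlayer(items: urls.map { AVPlayerItem(url: $0) })
// player.play()
```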
I have the same issue, and it turns out the only thing you need to do is call

```swift
try? AVAudioSession.sharedInstance().setCategory(.playback)
```

at the beginning. Don't use .duckOthers or .mixWithOthers.
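For completeness, a minimal session setup might look like this (a sketch; the function name is illustrative, and it assumes the audio background mode is enabled in your target's capabilities):

```swift
import AVFoundation

// Call once before any speech starts. The non-mixable .playback
// category is what lets the app receive remote-control events;
// mixable options (.duckOthers, .mixWithOthers) disable them.
func configureAudioSession() {
    let session = AVAudioSession.sharedInstance()
    try? session.setCategory(.playback)
    try? session.setActive(true)
}
```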
Then add targets for the remote commands:
```swift
func addRemoteCommandCenter() {
    let rcc = MPRemoteCommandCenter.shared()
    // Pause command
    rcc.pauseCommand.addTarget(self, action: #selector(playOrPauseEvent(_:)))
    // Play command
    rcc.playCommand.addTarget(self, action: #selector(playOrPauseEvent(_:)))
    // Next track
    rcc.nextTrackCommand.addTarget(self, action: #selector(nextCommandEvent(_:)))
    // Previous track
    rcc.previousTrackCommand.addTarget(self, action: #selector(previousCommandEvent(_:)))
    // Headphone play/pause toggle (e.g. an AirPods tap)
    rcc.togglePlayPauseCommand.addTarget(self, action: #selector(togglePlayPauseCommand(_:)))
}
```
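The handler methods aren't shown above; here is a minimal sketch of what they might do, assuming `synthesizer` is the AVSpeechSynthesizer instance on the same object (what next/previous actually switch to is app-specific):

```swift
@objc func playOrPauseEvent(_ event: MPRemoteCommandEvent) -> MPRemoteCommandHandlerStatus {
    if synthesizer.isPaused {
        synthesizer.continueSpeaking()
    } else {
        synthesizer.pauseSpeaking(at: .word)
    }
    return .success
}

@objc func togglePlayPauseCommand(_ event: MPRemoteCommandEvent) -> MPRemoteCommandHandlerStatus {
    return playOrPauseEvent(event)
}

@objc func nextCommandEvent(_ event: MPRemoteCommandEvent) -> MPRemoteCommandHandlerStatus {
    // Stop the current utterance, then speak the next one (app-specific).
    synthesizer.stopSpeaking(at: .immediate)
    return .success
}

@objc func previousCommandEvent(_ event: MPRemoteCommandEvent) -> MPRemoteCommandHandlerStatus {
    // Stop the current utterance, then speak the previous one (app-specific).
    synthesizer.stopSpeaking(at: .immediate)
    return .success
}
```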
Then, in speechSynthesizer(_:didStart:) and speechSynthesizer(_:willSpeakRangeOfSpeechString:utterance:), update the now-playing UI:
```swift
let infoCenter = MPNowPlayingInfoCenter.default()
infoCenter.nowPlayingInfo = [
    MPMediaItemPropertyTitle: "Title",
    MPMediaItemPropertyArtist: "Artist",
    MPMediaItemPropertyAlbumTitle: "",
    MPMediaItemPropertyArtwork: MPMediaItemArtwork(
        boundsSize: CGSize(width: 120, height: 120),
        requestHandler: { _ in UIImage() }
    )
]
```
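Putting it together, the delegate side might look like this (a sketch; `SpeechController` and `updateNowPlayingInfo(title:)` are hypothetical names, not API):

```swift
import AVFoundation
import MediaPlayer
import UIKit

final class SpeechController: NSObject {
    let synthesizer = AVSpeechSynthesizer()

    override init() {
        super.init()
        synthesizer.delegate = self
    }

    // Hypothetical helper: push the given title to the lock screen.
    func updateNowPlayingInfo(title: String) {
        MPNowPlayingInfoCenter.default().nowPlayingInfo = [
            MPMediaItemPropertyTitle: title
        ]
    }
}

extension SpeechController: AVSpeechSynthesizerDelegate {
    func speechSynthesizer(_ synthesizer: AVSpeechSynthesizer,
                           didStart utterance: AVSpeechUtterance) {
        updateNowPlayingInfo(title: utterance.speechString)
    }

    func speechSynthesizer(_ synthesizer: AVSpeechSynthesizer,
                           willSpeakRangeOfSpeechString characterRange: NSRange,
                           utterance: AVSpeechUtterance) {
        // Refresh the lock-screen info as speech progresses.
        updateNowPlayingInfo(title: utterance.speechString)
    }
}
```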