I am new to Flutter and I am trying to create a speech-to-text app. I looked at the documentation and tutorials and did some research on this issue, but I was unable to solve it. If someone could help me solve it, that would be really great!
Below is the log information:
C:\abc\app\speachtotext>flutter clean
Deleting build... 4,266ms (!)
Deleting .dart_tool... 36ms
Deleting Generated.xcconfig... 6ms
Deleting flutter_export_environment.sh... 11ms
C:\abc\app\speachtotext>flutter run
Running "flutter pub get" in speachtotext... 1.8s
Using hardware rendering with device AOSP on IA Emulator. If you notice graphics artifacts, consider enabling software
rendering with "--enable-software-rendering".
Launching lib\main.dart on AOSP on IA Emulator in debug mode...
Note: C:\Users\abc\AppData\Local\Pub\Cache\hosted\pub.dartlang.org\speech_recognition-0.3.0+1\android\src\main\java\bz\rxla\flutter\speechrecognition\SpeechRecognitionPlugin.java uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
Running Gradle task 'assembleDebug'...
Running Gradle task 'assembleDebug'... Done 67.4s
√ Built build\app\outputs\flutter-apk\app-debug.apk.
Installing build\app\outputs\flutter-apk\app.apk... 2.7s
Waiting for AOSP on IA Emulator to report its views... 17ms
D/EGL_emulation( 8963): eglMakeCurrent: 0xdfa70ac0: ver 3 0 (tinfo 0xe1576e70)
D/eglCodecCommon( 8963): setVertexArrayObject: set vao to 0 (0) 1 0
I/flutter ( 8963): _MyAppState.activateSpeechRecognizer...
Syncing files to device AOSP on IA Emulator... 681ms
D/SpeechRecognitionPlugin( 8963): Current Locale : en_US
Flutter run key commands.
r Hot reload.
R Hot restart.
h Repeat this help message.
d Detach (terminate "flutter run" but leave application running).
c Clear the screen
q Quit (terminate the application on the device).
An Observatory debugger and profiler on AOSP on IA Emulator is available at: http://127.0.0.1:64049/-_rQJ6XA0Ms=/
I/flutter ( 8963): _platformCallHandler call speech.onCurrentLocale en_US
I/flutter ( 8963): _MyAppState.onCurrentLocale... en_US
I/flutter ( 8963): _MyAppState.start => result true
D/SpeechRecognitionPlugin( 8963): onRmsChanged : -2.12
D/SpeechRecognitionPlugin( 8963): onRmsChanged : -2.12
D/SpeechRecognitionPlugin( 8963): onReadyForSpeech
I/flutter ( 8963): _platformCallHandler call speech.onSpeechAvailability true
D/SpeechRecognitionPlugin( 8963): onRmsChanged : -2.12
D/SpeechRecognitionPlugin( 8963): onRmsChanged : -2.0
D/SpeechRecognitionPlugin( 8963): onRmsChanged : -2.12
D/SpeechRecognitionPlugin( 8963): onError : 2
I/flutter ( 8963): _platformCallHandler call speech.onSpeechAvailability false
I/flutter ( 8963): _platformCallHandler call speech.onError 2
I/flutter ( 8963): Unknowm method speech.onError
I am using the same example code that ships with the speech_recognition package.
import 'package:flutter/material.dart';
import 'package:speech_recognition/speech_recognition.dart';

void main() {
  runApp(new MyApp());
}

const languages = const [
  const Language('Francais', 'fr_FR'),
  const Language('English', 'en_US'),
  const Language('Pусский', 'ru_RU'),
  const Language('Italiano', 'it_IT'),
  const Language('Español', 'es_ES'),
];

class Language {
  final String name;
  final String code;

  const Language(this.name, this.code);
}

class MyApp extends StatefulWidget {
  @override
  _MyAppState createState() => new _MyAppState();
}

class _MyAppState extends State<MyApp> {
  SpeechRecognition _speech;

  bool _speechRecognitionAvailable = false;
  bool _isListening = false;

  String transcription = '';

  //String _currentLocale = 'en_US';
  Language selectedLang = languages.first;

  @override
  initState() {
    super.initState();
    activateSpeechRecognizer();
  }

  // Platform messages are asynchronous, so we initialize in an async method.
  void activateSpeechRecognizer() {
    print('_MyAppState.activateSpeechRecognizer... ');
    _speech = new SpeechRecognition();
    _speech.setAvailabilityHandler(onSpeechAvailability);
    _speech.setCurrentLocaleHandler(onCurrentLocale);
    _speech.setRecognitionStartedHandler(onRecognitionStarted);
    _speech.setRecognitionResultHandler(onRecognitionResult);
    _speech.setRecognitionCompleteHandler(onRecognitionComplete);
    _speech
        .activate()
        .then((res) => setState(() => _speechRecognitionAvailable = res));
  }

  @override
  Widget build(BuildContext context) {
    return new MaterialApp(
      home: new Scaffold(
        appBar: new AppBar(
          title: new Text('SpeechRecognition'),
          actions: [
            new PopupMenuButton<Language>(
              onSelected: _selectLangHandler,
              itemBuilder: (BuildContext context) => _buildLanguagesWidgets,
            )
          ],
        ),
        body: new Padding(
          padding: new EdgeInsets.all(8.0),
          child: new Center(
            child: new Column(
              mainAxisSize: MainAxisSize.min,
              crossAxisAlignment: CrossAxisAlignment.stretch,
              children: [
                new Expanded(
                    child: new Container(
                        padding: const EdgeInsets.all(8.0),
                        color: Colors.grey.shade200,
                        child: new Text(transcription))),
                _buildButton(
                  onPressed: _speechRecognitionAvailable && !_isListening
                      ? () => start()
                      : null,
                  label: _isListening
                      ? 'Listening...'
                      : 'Listen (${selectedLang.code})',
                ),
                _buildButton(
                  onPressed: _isListening ? () => cancel() : null,
                  label: 'Cancel',
                ),
                _buildButton(
                  onPressed: _isListening ? () => stop() : null,
                  label: 'Stop',
                ),
              ],
            ),
          ),
        ),
      ),
    );
  }

  List<CheckedPopupMenuItem<Language>> get _buildLanguagesWidgets => languages
      .map((l) => new CheckedPopupMenuItem<Language>(
            value: l,
            checked: selectedLang == l,
            child: new Text(l.name),
          ))
      .toList();

  void _selectLangHandler(Language lang) {
    setState(() => selectedLang = lang);
  }

  Widget _buildButton({String label, VoidCallback onPressed}) => new Padding(
      padding: new EdgeInsets.all(12.0),
      child: new RaisedButton(
        color: Colors.cyan.shade600,
        onPressed: onPressed,
        child: new Text(
          label,
          style: const TextStyle(color: Colors.white),
        ),
      ));

  void start() => _speech
      .listen(locale: selectedLang.code)
      .then((result) => print('_MyAppState.start => result $result'));

  void cancel() =>
      _speech.cancel().then((result) => setState(() => _isListening = result));

  void stop() =>
      _speech.stop().then((result) => setState(() => _isListening = result));

  void onSpeechAvailability(bool result) =>
      setState(() => _speechRecognitionAvailable = result);

  void onCurrentLocale(String locale) {
    print('_MyAppState.onCurrentLocale... $locale');
    setState(
        () => selectedLang = languages.firstWhere((l) => l.code == locale));
  }

  void onRecognitionStarted() => setState(() => _isListening = true);

  void onRecognitionResult(String text) => setState(() => transcription = text);

  void onRecognitionComplete() => setState(() => _isListening = false);
}
Here is my manifest file. It's all the default settings; I just added the permission on top:
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
    package="com.example.speachtotext">
    <!-- io.flutter.app.FlutterApplication is an android.app.Application that
         calls FlutterMain.startInitialization(this); in its onCreate method.
         In most cases you can leave this as-is, but if you want to provide
         additional functionality it is fine to subclass or reimplement
         FlutterApplication and put your custom class here. -->
    <uses-permission android:name="android.permission.RECORD_AUDIO" />
    <application
        android:name="io.flutter.app.FlutterApplication"
        android:label="speachtotext"
        android:icon="@mipmap/ic_launcher">
        <activity
            android:name=".MainActivity"
            android:launchMode="singleTop"
            android:theme="@style/LaunchTheme"
            android:configChanges="orientation|keyboardHidden|keyboard|screenSize|smallestScreenSize|locale|layoutDirection|fontScale|screenLayout|density|uiMode"
            android:hardwareAccelerated="true"
            android:windowSoftInputMode="adjustResize">
            <!-- Specifies an Android theme to apply to this Activity as soon as
                 the Android process has started. This theme is visible to the user
                 while the Flutter UI initializes. After that, this theme continues
                 to determine the Window background behind the Flutter UI. -->
            <meta-data
                android:name="io.flutter.embedding.android.NormalTheme"
                android:resource="@style/NormalTheme"
                />
            <!-- Displays an Android View that continues showing the launch screen
                 Drawable until Flutter paints its first frame, then this splash
                 screen fades out. A splash screen is useful to avoid any visual
                 gap between the end of Android's launch screen and the painting of
                 Flutter's first frame. -->
            <meta-data
                android:name="io.flutter.embedding.android.SplashScreenDrawable"
                android:resource="@drawable/launch_background"
                />
            <intent-filter>
                <action android:name="android.intent.action.MAIN"/>
                <category android:name="android.intent.category.LAUNCHER"/>
            </intent-filter>
        </activity>
        <!-- Don't delete the meta-data below.
             This is used by the Flutter tool to generate GeneratedPluginRegistrant.java -->
        <meta-data
            android:name="flutterEmbedding"
            android:value="2" />
    </application>
</manifest>
Here is the pubspec.yaml:
name: speachtotext
description: speach to text app
version: 1.0.0+1

environment:
  sdk: ">=2.7.0 <3.0.0"

dependencies:
  flutter:
    sdk: flutter

  # The following adds the Cupertino Icons font to your application.
  # Use with the CupertinoIcons class for iOS style icons.
  cupertino_icons: ^0.1.3

dev_dependencies:
  flutter_test:
    sdk: flutter
  speech_recognition: ^0.3.0+1

# For information on the generic Dart part of this file, see the
# following page: https://dart.dev/tools/pub/pubspec

# The following section is specific to Flutter.
flutter:
  # The following line ensures that the Material Icons font is
  # included with your application, so that you can use the icons in
  # the material Icons class.
  uses-material-design: true
It's all just the regular example code for a speech recognition app; I didn't add anything on top of it. Also, I did grant the necessary permission in the emulator. When I click the mic icon I can hear the listening sound, but it immediately throws this error and does not listen or transcribe anything. (Per the Android SpeechRecognizer documentation, the onError : 2 in the log corresponds to ERROR_NETWORK.)
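Side note: since RECORD_AUDIO is a runtime permission on Android 6+, one way to double-check it programmatically would be a sketch like the one below. This assumes the permission_handler package, which is not part of my project above, and ensureMicPermission is just an illustrative name:

import 'package:permission_handler/permission_handler.dart';

// Hypothetical helper: requests the microphone permission if it has not
// been granted yet and reports whether we ended up with it.
Future<bool> ensureMicPermission() async {
  final status = await Permission.microphone.request();
  return status.isGranted;
}

In my case the permission was already granted in the emulator settings, so I don't believe this is the failing step.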
Below are the details from flutter doctor -v. Since I have the VS Code IDE on my machine without the Flutter extension, flutter doctor flags that plugin issue.
C:\abc\app\speachtotext>flutter doctor -v
[√] Flutter (Channel master, 1.20.0-1.0.pre.207, on Microsoft Windows [Version 10.0.17763.1217], locale en-US)
• Flutter version 1.20.0-1.0.pre.207 at C:\src\flutter
• Framework revision 91bdf15858 (11 hours ago), 2020-06-24 23:38:01 -0400
• Engine revision 0c14126211
• Dart version 2.9.0 (build 2.9.0-18.0.dev d8eb844e5d)
[√] Android toolchain - develop for Android devices (Android SDK version 29.0.3)
• Android SDK at C:\Users\af81193\AppData\Local\Android\Sdk
• Platform android-29, build-tools 29.0.3
• ANDROID_HOME = C:\Users\af81193\AppData\Local\Android\Sdk
• ANDROID_SDK_ROOT = C:\Users\af81193\AppData\Local\Android\Sdk
• Java binary at: C:\Android\jre\bin\java
• Java version OpenJDK Runtime Environment (build 1.8.0_242-release-1644-b01)
• All Android licenses accepted.
[√] Android Studio (version 4.0)
• Android Studio at C:\Android
• Flutter plugin version 46.0.2
• Dart plugin version 193.7361
• Java version OpenJDK Runtime Environment (build 1.8.0_242-release-1644-b01)
[!] VS Code, 64-bit edition (version 1.27.1)
• VS Code at C:\Program Files\Microsoft VS Code
X Flutter extension not installed; install from
https://marketplace.visualstudio.com/items?itemName=Dart-Code.flutter
[√] Connected device (1 available)
• AOSP on IA Emulator • emulator-5554 • android-x86 • Android 9 (API 28) (emulator)
! Doctor found issues in 1 category.
I'd appreciate any response. Thanks!
Update: error information after switching to the speech_to_text plugin, based on Sagar's suggestion:
I/flutter (20582): Received listener status: listening, listening: true
I/flutter (20582): Received error status: SpeechRecognitionError msg: error_network, permanent: true, listening: true
There may be an issue with the plugin itself, so also try it on a real device. I would suggest this alternative plugin: https://pub.dev/packages/speech_to_text
The plugin you are using is no longer maintained, so it may well have issues.
You can check out the sample code for the above-mentioned plugin, which works well.
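As a rough illustration, a minimal wiring of speech_to_text looks something like the sketch below. It uses the plugin's public API (initialize, listen, recognizedWords); the function name startListening and the print messages are just mine:

import 'package:speech_to_text/speech_to_text.dart' as stt;

final stt.SpeechToText _speech = stt.SpeechToText();

Future<void> startListening() async {
  // initialize() must complete successfully once before listen() is usable.
  bool available = await _speech.initialize(
    onStatus: (status) => print('status: $status'),
    onError: (error) => print('error: $error'), // error_network surfaces here
  );
  if (available) {
    _speech.listen(
      // Called with partial and final results as words are recognized.
      onResult: (result) => print('heard: ${result.recognizedWords}'),
    );
  } else {
    print('Speech recognition is unavailable on this device');
  }
}

Note that your update shows error_network from this plugin as well, which points at the emulator's connection to the on-device recognition service rather than your Dart code; that is another reason to test on a real device.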