Tags: flutter, speech-recognition, speech-to-text

flutter 'Object' can't be assigned to type 'Widget?' error


I keep getting this error.

I know that the problem is occurring because of Navigator.pushNamed, but I have no idea how to fix it.

Widget build(BuildContext context) {
    return MaterialApp(
      debugShowCheckedModeBanner: false,
      routes: {
        '/temp': (context) => const Temp(),
        '/menu': (context) => const Menu(),
        '/eat': (context) => const Eat(),
        '/pay': (context) => const Pay(),
        '/main': (context) => const Startpage(),
        '/Veganeat': (context) => const VeganEat(),
        '/Vegancheck': (context) => const VeganChk(),
        '/cheeseeat': (context) => const CheEat(),
        '/cheesecheck': (context) => const CheChk(),
        '/thanks': (context) => const Thanks(),
      },
      home: const Startpage(),
    );
  }
Container(
          height: 200,
          width: 400,
          child: _speechToText.isListening
              ? _lastWords.contains("vegan burger")
                  ? Navigator.pushNamed(context, 'veganeat')
                  : _lastWords.contains("cheese burger")
                      ? Navigator.pushNamed(context, 'cheeseeat')
                      : Text("tell me which burger to order")
              : Text("loading"))

I tried some other ways to move to the other pages, but they didn't work either.


Solution

  • You have to get the recognized words properly using the https://pub.dev/packages/speech_to_text package; you can obtain them as follows.

    Watch a video tutorial here: https://www.youtube.com/watch?v=6SP9xu5p7rk

    After that, check for the word you want to filter on using contains, then navigate. Add the listener properly as follows.

    The other mistake you made was assigning Navigator.pushNamed(..) as the Container's child. pushNamed returns a Future, not a Widget, which is exactly why you get the "'Object' can't be assigned to type 'Widget?'" error.

    You can add the "_onSpeechResult" listener as in the example below and then navigate to the page from there (see the sketch after the example).

    import 'package:flutter/material.dart';
    import 'package:speech_to_text/speech_recognition_result.dart';
    import 'package:speech_to_text/speech_to_text.dart';
    
    void main() {
      runApp(MyApp());
    }
    
    class MyApp extends StatelessWidget {
      @override
      Widget build(BuildContext context) {
        return MaterialApp(
          title: 'Flutter Demo',
          home: MyHomePage(),
        );
      }
    }
    
    class MyHomePage extends StatefulWidget {
      MyHomePage({Key? key}) : super(key: key);
    
      @override
      _MyHomePageState createState() => _MyHomePageState();
    }
    
    class _MyHomePageState extends State<MyHomePage> {
      SpeechToText _speechToText = SpeechToText();
      bool _speechEnabled = false;
      String _lastWords = '';
    
      @override
      void initState() {
        super.initState();
        _initSpeech();
      }
    
      /// This has to happen only once per app
      void _initSpeech() async {
        _speechEnabled = await _speechToText.initialize();
        setState(() {});
      }
    
      /// Each time to start a speech recognition session
      void _startListening() async {
        await _speechToText.listen(onResult: _onSpeechResult);
        setState(() {});
      }
    
      /// Manually stop the active speech recognition session
      /// Note that there are also timeouts that each platform enforces
      /// and the SpeechToText plugin supports setting timeouts on the
      /// listen method.
      void _stopListening() async {
        await _speechToText.stop();
        setState(() {});
      }
    
      /// This is the callback that the SpeechToText plugin calls when
      /// the platform returns recognized words.
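      /// This callback is also the place to check the recognized words and
      /// navigate from (see the sketch after this example).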
      void _onSpeechResult(SpeechRecognitionResult result) {
        setState(() {
          _lastWords = result.recognizedWords;
        });
      }
    
      @override
      Widget build(BuildContext context) {
        return Scaffold(
          appBar: AppBar(
            title: Text('Speech Demo'),
          ),
          body: Center(
            child: Column(
              mainAxisAlignment: MainAxisAlignment.center,
              children: <Widget>[
                Container(
                  padding: EdgeInsets.all(16),
                  child: Text(
                    'Recognized words:',
                    style: TextStyle(fontSize: 20.0),
                  ),
                ),
                Expanded(
                  child: Container(
                    padding: EdgeInsets.all(16),
                    child: Text(
                      // If listening is active show the recognized words
                      _speechToText.isListening
                          ? '$_lastWords'
                          // If listening isn't active but could be tell the user
                          // how to start it, otherwise indicate that speech
                          // recognition is not yet ready or not supported on
                          // the target device
                          : _speechEnabled
                              ? 'Tap the microphone to start listening...'
                              : 'Speech not available',
                    ),
                  ),
                ),
              ],
            ),
          ),
          floatingActionButton: FloatingActionButton(
            onPressed:
                // If not yet listening for speech start, otherwise stop
                _speechToText.isNotListening ? _startListening : _stopListening,
            tooltip: 'Listen',
            child: Icon(_speechToText.isNotListening ? Icons.mic_off : Icons.mic),
          ),
        );
      }
    }
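
    For your use case, here is a minimal sketch of _onSpeechResult with the navigation added. The route names '/Veganeat' and '/cheeseeat' are taken from the route table in your question (note that your original code pushes 'veganeat' without the leading slash, which would also fail, because the name passed to pushNamed must match the registered route exactly); the word-matching logic is illustrative only.

    void _onSpeechResult(SpeechRecognitionResult result) {
      setState(() {
        _lastWords = result.recognizedWords;
      });
      // Navigate here, in the callback, instead of inside build().
      // Navigator.pushNamed returns a Future, so it can never be used
      // where a Widget is expected.
      final words = _lastWords.toLowerCase();
      if (words.contains('vegan burger')) {
        Navigator.pushNamed(context, '/Veganeat');
      } else if (words.contains('cheese burger')) {
        Navigator.pushNamed(context, '/cheeseeat');
      }
    }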