
FileNotFoundError When Deploying a Python Streamlit Application on the streamlit.io Platform


This code works on my local computer, but it is not working on the streamlit.io platform. It displays the following error message:

File "/home/appuser/venv/lib/python3.8/site-packages/streamlit/runtime/scriptrunner/script_runner.py", line 565, in _run_script
    exec(code, module.__dict__)
File "/app/lipreading/app/streamlitapp.py", line 20, in <module>
    options = os.listdir(os.path.join('..', 'data', 's1'))

Problem code:

import streamlit as st
import os
from moviepy.editor import VideoFileClip
import imageio
import tensorflow as tf
from utils import load_data, num_to_char
from modelutil import load_model

# Set the layout to the Streamlit app as wide
st.set_page_config(layout='wide')

# Setup the sidebar
with st.sidebar:
    st.image('https://www.onepointltd.com/wp-content/uploads/2020/03/inno2.png')
    st.markdown("<h1 style='text-align: center; color: white;'>Abstract</h1>", unsafe_allow_html=True)
    st.info('This project, developed by Amith A G as his MCA final project at KVVS Institute Of Technology, focuses on implementing the LipNet deep learning model for lip-reading and speech recognition. The project aims to demonstrate the capabilities of the LipNet model through a Streamlit application.')

st.markdown("<h1 style='text-align: center; color: white;'>LipNet</h1>", unsafe_allow_html=True)

# Generating a list of options or videos
options = os.listdir(os.path.join('..', 'data', 's1'))
selected_video = st.selectbox('Choose video', options)

# Generate two columns
col1, col2 = st.columns(2)

if options:
    # Rendering the video
    with col1:
        st.info('The video below displays the converted video in mp4 format')
        file_path = os.path.join('..', 'data', 's1', selected_video)
        output_path = os.path.join('test_video.mp4')

        # Convert the video using moviepy
        video_clip = VideoFileClip(file_path)
        video_clip.write_videofile(output_path, codec='libx264')

        # Display the video in the app
        video = open(output_path, 'rb')
        video_bytes = video.read()
        st.video(video_bytes)

    with col2:
        st.info('This is all the machine learning model sees when making a prediction')
        video, annotations = load_data(tf.convert_to_tensor(file_path))
        imageio.mimsave('animation.gif', video, fps=10)
        st.image('animation.gif', width=400)

        st.info('This is the output of the machine learning model as tokens')
        model = load_model()
        yhat = model.predict(tf.expand_dims(video, axis=0))
        decoder = tf.keras.backend.ctc_decode(yhat, [75], greedy=True)[0][0].numpy()
        st.text(decoder)

        # Convert prediction to text
        st.info('Decode the raw tokens into words')
        converted_prediction = tf.strings.reduce_join(num_to_char(decoder)).numpy().decode('utf-8')
        st.text(converted_prediction)

Here's an inline link to the GitHub repository

Requirements:

imageio==2.9.0
numpy==1.22.2
moviepy==1.0.3
opencv-python==4.7.0.72
streamlit==1.22.0
tensorflow==2.12.0

Code explanation: The provided code is a Streamlit application that implements the LipNet deep learning model for lip-reading and speech recognition. When executed, the application launches with a wide layout and displays a sidebar containing an image and an introductory paragraph about the project. The main section of the application showcases the LipNet model with a heading and allows users to choose a video from a list of options. Upon selecting a video, the application renders it in the first column as an mp4 video and presents frames and annotations in the second column. The frames are processed by the LipNet model, which predicts output tokens and displays them, along with the converted text prediction. The raw tokens are further decoded into words. Overall, the application provides a user-friendly interface to explore the lip-reading and speech recognition capabilities of LipNet, offering visual representations and insights into the model's predictions.

os.walk output:

Current Directory: D:\LipReading
Number of subdirectories: 3
Subdirectories: app, data, models
Number of files: 3
Files: .gitattributes, oswalk.py, requirements.txt

Current Directory: D:\LipReading\app
Number of subdirectories: 0
Subdirectories: 
Number of files: 5
Files: animation.gif, modelutil.py, streamlitapp.py, test_video.mp4, utils.py

Current Directory: D:\LipReading\data
Number of subdirectories: 2
Subdirectories: alignments, s1
Number of files: 0
Files: 

Current Directory: D:\LipReading\data\alignments
Number of subdirectories: 1
Subdirectories: s1
Number of files: 0
Files: 

Current Directory: D:\LipReading\data\alignments\s1
Number of subdirectories: 0
Subdirectories:
Number of files: 1000
Files: bbaf2n.align, bbaf3s.align, bbaf4p.align, bbaf5a.align, bbal6n.align, bbal7s.align, bbal8p.align, bbal9a.align, bbas1s.align, bbas2p.align, bbas3a.align, bbaszn.align, bbaz4n.align, bbaz5s.align, bbaz6p.align, bbaz7a.align, bbbf6n.align, bbbf7s.align, bbbf8p.align, bbbf9a.align, ...etc.

Current Directory: D:\LipReading\data\s1
Number of subdirectories: 0
Subdirectories:
Number of files: 1001
Files: bbaf2n.mpg, bbaf3s.mpg, bbaf4p.mpg, bbaf5a.mpg, bbal6n.mpg, bbal7s.mpg, bbal8p.mpg, bbal9a.mpg, bbas1s.mpg, bbas2p.mpg, bbas3a.mpg, bbaszn.mpg, bbaz4n.mpg, bbaz5s.mpg, bbaz6p.mpg, bbaz7a.mpg, bbbf6n.mpg, bbbf7s.mpg, bbbf8p.mpg, bbbf9a.mpg, bbbm1s.mpg, bbbm2p.mpg, bbbm3a.mpg, bbbmzn.mpg, bbbs4n.mpg, bbbs5s.mpg, bbbs6p.mpg, bbbs7a.mpg, bbbz8n.mpg, bbbz9s.mpg, bbie8n.mpg, bbie9s.mpg, bbif1a.mpg, bbifzp.mpg, bbil2n.mpg, bbil3s.mpg, bbil4p.mpg, bbil5a.mpg, bbir6n.mpg, bbir7s.mpg, bbir8p.mpg, bbir9a.mpg, bbiz1s.mpg, bbiz2p.mpg, bbiz3a.mpg, bbizzn.mpg, bbwg1s.mpg, bbwg2p.mpg, bbwg3a.mpg, bbwgzn.mpg, bbwm4n.mpg, bbwm5s.mpg, bbwm6p.mpg, bbwm7a.mpg, bbws8n.mpg, bbws9s.mpg, bbwt1a.mpg, bbwtzp.mpg, bgaa6n.mpg, bgaa7s.mpg, ...etc.

Current Directory: D:\LipReading\models
Number of subdirectories: 1
Subdirectories: __MACOSX
Number of files: 3
Files: checkpoint, checkpoint.data-00000-of-00001, checkpoint.index

Current Directory: D:\LipReading\models\__MACOSX
Number of subdirectories: 0
Subdirectories:
Number of files: 3
Files: ._checkpoint, ._checkpoint.data-00000-of-00001, ._checkpoint.index
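
The listing above can be reproduced with a small os.walk script; a minimal sketch (the root path is whatever your project root is, here the D:\LipReading from the listing):

```python
import os

root = r"D:\LipReading"  # adjust to your project root

for current_dir, subdirs, files in os.walk(root):
    print(f"Current Directory: {current_dir}")
    print(f"Number of subdirectories: {len(subdirs)}")
    print("Subdirectories: " + ", ".join(subdirs))
    print(f"Number of files: {len(files)}")
    print("Files: " + ", ".join(files))
    print()
```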

Solution

  • The path is relative to the current working directory, which is not necessarily the same as the file's location. When you run your code locally, you're probably inside that directory and just doing python my_file.py, but that's not what the platform is doing, so the current working directory is different.

    The current Python file's path can be accessed via __file__. We can then strip the filename from it to get the directory. I like using pathlib for that:

    import pathlib
    code_dir = pathlib.Path(__file__).parent.resolve()
    

    resolve() converts the path to an absolute one, for safety. pathlib lets you do what os.path.join did using the / operator with strings:

    files_location = code_dir / ".." / "data" / "s1"  
    files_location = files_location.resolve()  # because we used .. in path, it's safer to resolve so we get absolute path
    

    Alternative forms of the files_location = code_dir / ".." / "data" / "s1" line:

    • files_location = code_dir / "../data/s1"
    • files_location = code_dir.parent / "data" / "s1"
    • files_location = code_dir.parent / "data/s1"

    Use whichever seems the most intuitive for your use case.
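
    All of these forms produce the same absolute path once resolved; a quick check, using a hypothetical code_dir matching the platform layout from the traceback:

    ```python
    from pathlib import Path

    code_dir = Path("/app/lipreading/app")  # hypothetical script directory

    a = (code_dir / ".." / "data" / "s1").resolve()
    b = (code_dir / "../data/s1").resolve()
    c = (code_dir.parent / "data" / "s1").resolve()

    # all three resolve to the same absolute path, e.g. /app/lipreading/data/s1
    print(a == b == c)  # True
    ```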

    Listing the directory: os.listdir accepts any path-like object, and pathlib.Path is path-like, so we can simply do os.listdir(files_location).

    Other lines to convert:

    • instead of file_path = os.path.join('..', 'data', 's1', selected_video), we can do file_path = files_location / selected_video
    • and output_path = os.path.join('test_video.mp4') becomes output_path = code_dir / 'test_video.mp4' to dump it in the same location as the code file (what you had locally)
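
    Putting it together, the path handling at the top of streamlitapp.py could look like this sketch (selected_video still comes from st.selectbox in the real app; only the path logic changes):

    ```python
    import os
    import pathlib

    # Absolute path of the directory containing this script
    code_dir = pathlib.Path(__file__).parent.resolve()

    # ../data/s1 relative to the script's location, not the working directory
    files_location = (code_dir / ".." / "data" / "s1").resolve()

    # os.listdir accepts path-like objects, so pathlib.Path works directly
    options = os.listdir(files_location)

    selected_video = options[0]  # in the app: st.selectbox('Choose video', options)
    file_path = files_location / selected_video
    output_path = code_dir / "test_video.mp4"  # next to the script, as it was locally
    ```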