Created a message service responsible for searching for the prompts inside the recognized text and sending them to the client. Created a recognizer with two strategies: whisper and Dany's faster-whisper. Implemented a file stack that runs in a separate thread, sends each file to the recognizer, and then sends the resulting message to the client (Rat, for example).
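The pipeline above (file stack drained by a background thread, pluggable recognizer strategy, prompt search over the recognized text) can be sketched as follows. This is a minimal illustration, not the project's actual code: all class and function names (`FasterWhisperRecognizer`, `find_prompts`, `worker`, etc.) are hypothetical, and the transcribe methods are stubbed where the real service would call whisper / faster-whisper.

```python
import queue
import threading

class WhisperRecognizer:
    """Hypothetical strategy; a real version would call openai-whisper."""
    def transcribe(self, path):
        return f"whisper transcript of {path}"

class FasterWhisperRecognizer:
    """Hypothetical strategy; a real version would call faster-whisper."""
    def transcribe(self, path):
        return f"faster-whisper transcript of {path}"

def find_prompts(text, prompts):
    """Return the prompts that occur inside the recognized text."""
    return [p for p in prompts if p.lower() in text.lower()]

def worker(files, recognizer, prompts, send):
    """Drain the file queue in a separate thread: recognize each file,
    then send every matched prompt to the client via `send`."""
    while True:
        path = files.get()
        if path is None:  # sentinel: stop the worker
            break
        text = recognizer.transcribe(path)
        for prompt in find_prompts(text, prompts):
            send(prompt)

# Usage sketch: `messages.append` stands in for sending to the client.
messages = []
files = queue.Queue()
t = threading.Thread(
    target=worker,
    args=(files, FasterWhisperRecognizer(), ["transcript"], messages.append),
)
t.start()
files.put("clip.wav")
files.put(None)
t.join()
# messages now holds the prompts found in the recognized text
```

Swapping `FasterWhisperRecognizer` for `WhisperRecognizer` changes the backend without touching the worker, which is the point of the two-strategy design.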
62 lines
1.3 KiB
Plaintext
asgiref==3.7.2
av==11.0.0
blinker==1.7.0
certifi==2024.2.2
charset-normalizer==3.3.2
click==8.1.7
coloredlogs==15.0.1
ctranslate2==4.0.0
Cython==3.0.8
dtw-python==1.3.1
faster-whisper==1.0.0
filelock==3.13.1
Flask==3.0.2
flatbuffers==23.5.26
fsspec==2024.2.0
huggingface-hub==0.21.3
humanfriendly==10.0
idna==3.6
itsdangerous==2.1.2
Jinja2==3.1.3
llvmlite==0.42.0
MarkupSafe==2.1.5
more-itertools==10.2.0
mpmath==1.3.0
networkx==3.2.1
numba==0.59.0
numpy==1.26.4
nvidia-cublas-cu12==12.1.3.1
nvidia-cuda-cupti-cu12==12.1.105
nvidia-cuda-nvrtc-cu12==12.1.105
nvidia-cuda-runtime-cu12==12.1.105
nvidia-cudnn-cu12==8.9.2.26
nvidia-cufft-cu12==11.0.2.54
nvidia-curand-cu12==10.3.2.106
nvidia-cusolver-cu12==11.4.5.107
nvidia-cusparse-cu12==12.1.0.106
nvidia-nccl-cu12==2.19.3
nvidia-nvjitlink-cu12==12.3.101
nvidia-nvtx-cu12==12.1.105
onnxruntime==1.17.1
openai-whisper @ git+https://github.com/openai/whisper.git@ba3f3cd54b0e5b8ce1ab3de13e32122d0d5f98ab
packaging==23.2
pillow==10.2.0
protobuf==4.25.3
python-dotenv==1.0.1
PyYAML==6.0.1
regex==2023.12.25
requests==2.31.0
scipy==1.12.0
six==1.16.0
sympy==1.12
tiktoken==0.6.0
tokenizers==0.15.2
torch==2.2.1
torchaudio==2.2.1
torchvision==0.17.1
tqdm==4.66.2
triton==2.2.0
typing_extensions==4.10.0
urllib3==2.2.1
Werkzeug==3.0.1
whisper-timestamped==1.15.0