Errors during and after one click installs #293
Unanswered
ChrisCantwell asked this question in Q&A
Replies: 2 comments 1 reply
-
Thanks for reporting!
Voice clone might require the old hydra-core; once I'm back at my workstation I'll try to reproduce it, probably within about 8 hours.
As for the installation: it currently isn't possible to install everything without pip errors, but I test against those "unsupported" versions when I build the app.
How long was the audio? It should be about 5-15 seconds.
…On Sat, Mar 23, 2024, 2:45 AM ChrisCantwell ***@***.***> wrote:
Greetings,
I installed this system through the one-click installer, and when I try to clone a voice I get this error.
- ready started server on 0.0.0.0:3000, url: http://localhost:3000
Downloading HuBERT base model
Downloaded HuBERT
D:\Programs\one-click-installers-tts-6.0\installer_files\env\lib\site-packages\hydra\experimental\initialize.py:43: UserWarning: hydra.experimental.initialize() is no longer experimental. Use hydra.initialize()
deprecation_warning(message=message)
D:\Programs\one-click-installers-tts-6.0\installer_files\env\lib\site-packages\hydra\experimental\initialize.py:45: UserWarning:
The version_base parameter is not specified.
Please specify a compatability version level, or None.
Will assume defaults for version 1.1
self.delegate = real_initialize(
D:\Programs\one-click-installers-tts-6.0\installer_files\env\lib\site-packages\hydra\experimental\compose.py:25: UserWarning: hydra.experimental.compose() is no longer experimental. Use hydra.compose()
deprecation_warning(message=message)
D:\Programs\one-click-installers-tts-6.0\installer_files\env\lib\site-packages\hydra\core\default_element.py:124: UserWarning: In 'config': Usage of deprecated keyword in package header '# @Package _group_'.
See https://hydra.cc/docs/1.2/upgrades/1.0_to_1.1/changes_to_package_header for more information
deprecation_warning(
D:\Programs\one-click-installers-tts-6.0\installer_files\env\lib\site-packages\fairseq\checkpoint_utils.py:432: UserWarning:
'config' is validated against ConfigStore schema with the same name.
This behavior is deprecated in Hydra 1.1 and will be removed in Hydra 1.2.
See https://hydra.cc/docs/1.2/upgrades/1.0_to_1.1/automatic_schema_matching for migration instructions.
state = load_checkpoint_to_cpu(filename, arg_overrides)
D:\Programs\one-click-installers-tts-6.0\installer_files\env\lib\site-packages\hydra\compose.py:56: UserWarning:
The strict flag in the compose API is deprecated.
See https://hydra.cc/docs/1.2/upgrades/0.11_to_1.0/strict_mode_flag_deprecated for more info.
deprecation_warning(
D:\Programs\one-click-installers-tts-6.0\installer_files\env\lib\site-packages\hydra\experimental\initialize.py:43: UserWarning: hydra.experimental.initialize() is no longer experimental. Use hydra.initialize()
deprecation_warning(message=message)
D:\Programs\one-click-installers-tts-6.0\installer_files\env\lib\site-packages\hydra\experimental\initialize.py:45: UserWarning:
The version_base parameter is not specified.
Please specify a compatability version level, or None.
Will assume defaults for version 1.1
self.delegate = real_initialize(
D:\Programs\one-click-installers-tts-6.0\tts-generation-webui\src\bark\clone\tab_voice_clone.py:26: UserWarning:
'config' is validated against ConfigStore schema with the same name.
This behavior is deprecated in Hydra 1.1 and will be removed in Hydra 1.2.
See https://hydra.cc/docs/1.2/upgrades/1.0_to_1.1/automatic_schema_matching for migration instructions.
hubert_model = CustomHubert(
Traceback (most recent call last):
File "D:\Programs\one-click-installers-tts-6.0\installer_files\env\lib\site-packages\gradio\queueing.py", line 407, in call_prediction
output = await route_utils.call_process_api(
File "D:\Programs\one-click-installers-tts-6.0\installer_files\env\lib\site-packages\gradio\route_utils.py", line 226, in call_process_api
output = await app.get_blocks().process_api(
File "D:\Programs\one-click-installers-tts-6.0\installer_files\env\lib\site-packages\gradio\blocks.py", line 1550, in process_api
result = await self.call_function(
File "D:\Programs\one-click-installers-tts-6.0\installer_files\env\lib\site-packages\gradio\blocks.py", line 1185, in call_function
prediction = await anyio.to_thread.run_sync(
File "D:\Programs\one-click-installers-tts-6.0\installer_files\env\lib\site-packages\anyio\to_thread.py", line 33, in run_sync
return await get_asynclib().run_sync_in_worker_thread(
File "D:\Programs\one-click-installers-tts-6.0\installer_files\env\lib\site-packages\anyio\_backends\_asyncio.py", line 877, in run_sync_in_worker_thread
return await future
File "D:\Programs\one-click-installers-tts-6.0\installer_files\env\lib\site-packages\anyio\_backends\_asyncio.py", line 807, in run
result = context.run(func, *args)
File "D:\Programs\one-click-installers-tts-6.0\installer_files\env\lib\site-packages\gradio\utils.py", line 661, in wrapper
response = f(*args, **kwargs)
File "D:\Programs\one-click-installers-tts-6.0\tts-generation-webui\src\bark\clone\tab_voice_clone.py", line 227, in generate_voice
full_generation = get_prompts(wav_file, use_gpu)
File "D:\Programs\one-click-installers-tts-6.0\tts-generation-webui\src\bark\clone\tab_voice_clone.py", line 87, in get_prompts
semantic_prompt = get_semantic_prompt(path_to_wav, device)
File "D:\Programs\one-click-installers-tts-6.0\tts-generation-webui\src\bark\clone\tab_voice_clone.py", line 81, in get_semantic_prompt
semantic_vectors = get_semantic_vectors(path_to_wav, device)
File "D:\Programs\one-click-installers-tts-6.0\tts-generation-webui\src\bark\clone\tab_voice_clone.py", line 47, in get_semantic_vectors
return _get_semantic_vectors(hubert_model, path_to_wav, device)
File "D:\Programs\one-click-installers-tts-6.0\tts-generation-webui\src\bark\clone\tab_voice_clone.py", line 42, in _get_semantic_vectors
return hubert_model.forward(wav, input_sample_hz=sr)
File "D:\Programs\one-click-installers-tts-6.0\installer_files\env\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "D:\Programs\one-click-installers-tts-6.0\installer_files\env\lib\site-packages\bark_hubert_quantizer\pre_kmeans_hubert.py", line 89, in forward
embed = self.model(
File "D:\Programs\one-click-installers-tts-6.0\installer_files\env\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "D:\Programs\one-click-installers-tts-6.0\installer_files\env\lib\site-packages\fairseq\models\hubert\hubert.py", line 467, in forward
x, _ = self.encoder(
File "D:\Programs\one-click-installers-tts-6.0\installer_files\env\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "D:\Programs\one-click-installers-tts-6.0\installer_files\env\lib\site-packages\fairseq\models\wav2vec\wav2vec2.py", line 1003, in forward
x, layer_results = self.extract_features(x, padding_mask, layer)
File "D:\Programs\one-click-installers-tts-6.0\installer_files\env\lib\site-packages\fairseq\models\wav2vec\wav2vec2.py", line 1049, in extract_features
x, (z, lr) = layer(
File "D:\Programs\one-click-installers-tts-6.0\installer_files\env\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "D:\Programs\one-click-installers-tts-6.0\installer_files\env\lib\site-packages\fairseq\models\wav2vec\wav2vec2.py", line 1260, in forward
x, attn = self.self_attn(
File "D:\Programs\one-click-installers-tts-6.0\installer_files\env\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "D:\Programs\one-click-installers-tts-6.0\installer_files\env\lib\site-packages\fairseq\modules\multihead_attention.py", line 539, in forward
return F.multi_head_attention_forward(
File "D:\Programs\one-click-installers-tts-6.0\installer_files\env\lib\site-packages\torch\nn\functional.py", line 5334, in multi_head_attention_forward
attn_output = scaled_dot_product_attention(q, k, v, attn_mask, dropout_p, is_causal)
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 48.95 GiB (GPU 0; 11.99 GiB total capacity; 50.20 GiB already allocated; 0 bytes free; 54.29 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
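The failed allocation above grows roughly with the square of the clip length, because self-attention materializes a frames-by-frames score matrix. A back-of-envelope sketch with assumed, approximate constants (HuBERT emits about 50 feature frames per second of audio; 12 attention heads; float32 scores):

```python
def attn_scores_bytes(seconds, frames_per_s=50, heads=12, bytes_per=4):
    """Rough memory needed for one layer's attention score matrix."""
    frames = int(seconds * frames_per_s)
    return heads * frames * frames * bytes_per

for secs in (15, 60, 600):
    gib = attn_scores_bytes(secs) / 2**30
    print(f"{secs:>4d} s -> ~{gib:.2f} GiB per layer")
```

Under these assumptions a 15-second clip needs tens of megabytes per layer, while a clip of several minutes reaches tens of gigabytes, which is the scale of the failed allocation above.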
I have tried adding --gpu-memory 11000 to the flags in webui.py, like so:
def run_model():
os.chdir("tts-generation-webui")
run_cmd("python server.py --gpu-memory 11000") # put your flags here!
And also added to start_windows.bat
set 'PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:512'
To no avail.
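One possible reason the batch-file attempt had no effect: in a cmd batch file, the quotes in set 'VAR=value' become part of the variable name and value, so the unquoted form set PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:512 is needed, and the variable must exist before PyTorch first touches CUDA. A sketch that sets it from Python instead (hypothetical placement, e.g. at the very top of server.py):

```python
import os

# Must be in the environment before the first `import torch`,
# otherwise the CUDA caching allocator never sees it.
os.environ.setdefault("PYTORCH_CUDA_ALLOC_CONF", "max_split_size_mb:512")
```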
I tried reinstalling, and then I noticed these errors in the install
process.
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
audiocraft 1.3.0a0 requires hydra-core>=1.1, but you have hydra-core 1.0.7 which is incompatible.
audiocraft 1.3.0a0 requires torch==2.1.0, but you have torch 2.0.0 which is incompatible.
rvc-beta 0.1.1 requires scipy==1.9.3, but you have scipy 1.12.0 which is incompatible.
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
styletts2 0.1.7 requires scipy<2.0.0,>=1.10.0, but you have scipy 1.9.3 which is incompatible.
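The versions the resolver complains about can be listed directly from the installed environment; a small stdlib-only sketch:

```python
from importlib import metadata

# Packages named in the resolver warnings above.
for pkg in ("hydra-core", "torch", "scipy", "audiocraft", "rvc-beta", "styletts2"):
    try:
        print(pkg, metadata.version(pkg))
    except metadata.PackageNotFoundError:
        print(pkg, "not installed")
```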
There are support threads similar to this in the oobabooga discussions, but I haven't found anything specific to this system. If there's any direction any of you could provide, I'd be grateful.
I'm using an RTX 4070 (12GB VRAM) and I have 96GB RAM in the PC with an i7
10th gen CPU running Windows 11 Pro.
Thanks!
-
I'll see if I can force the audio clips to be short. Even if it doesn't reject longer ones, the model only uses the last 15 seconds. Either way, I'll add the usage tips; thanks for letting me know.
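Clamping at upload time could look like this minimal stdlib sketch (a hypothetical helper, not part of the webui) that keeps only the last 15 seconds of a WAV file:

```python
import wave

def trim_wav_to_last_seconds(in_path, out_path, max_seconds=15):
    """Copy a WAV file, keeping only its trailing max_seconds of audio."""
    with wave.open(in_path, "rb") as r:
        sr = r.getframerate()
        total = r.getnframes()
        keep = min(total, sr * max_seconds)
        r.setpos(total - keep)          # seek to the trailing window
        frames = r.readframes(keep)
        params = r.getparams()
    with wave.open(out_path, "wb") as w:
        w.setparams(params._replace(nframes=keep))
        w.writeframes(frames)
```

The model would then see a consistent 15-second window regardless of what the user uploads.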
…On Wed, Mar 27, 2024, 1:42 AM ChrisCantwell ***@***.***> wrote:
I have tried a few audio samples, all of which were well over 15 seconds. The shortest I attempted was 1 minute, and it did not succeed.