vLLM reports an error when serving the GLM-5 model

Posted by: what, 2026-04-02   Last updated: 2026-04-02 18:29:28
# VLLM_USE_MODELSCOPE=true vllm serve zai-org/GLM-5-FP8 \
     --tensor-parallel-size 8 \
     --gpu-memory-utilization 0.85 \
     --speculative-config.method mtp \
     --speculative-config.num_speculative_tokens 1 \
     --tool-call-parser glm47 \
     --reasoning-parser glm45 \
     --enable-auto-tool-choice \
     --served-model-name glm-5-fp8
(APIServer pid=1175) INFO 04-02 18:26:57 [utils.py:293] 
(APIServer pid=1175) INFO 04-02 18:26:57 [utils.py:293]        █     █     █▄   ▄█
(APIServer pid=1175) INFO 04-02 18:26:57 [utils.py:293]  ▄▄ ▄█ █     █     █ ▀▄▀ █  version 0.16.0rc2.dev376+gf4af642a6
(APIServer pid=1175) INFO 04-02 18:26:57 [utils.py:293]   █▄█▀ █     █     █     █  model   zai-org/GLM-5-FP8
(APIServer pid=1175) INFO 04-02 18:26:57 [utils.py:293]    ▀▀  ▀▀▀▀▀ ▀▀▀▀▀ ▀     ▀
(APIServer pid=1175) INFO 04-02 18:26:57 [utils.py:293] 
(APIServer pid=1175) INFO 04-02 18:26:57 [utils.py:229] non-default args: {'model_tag': 'zai-org/GLM-5-FP8', 'enable_auto_tool_choice': True, 'tool_call_parser': 'glm47', 'model': 'zai-org/GLM-5-FP8', 'served_model_name': ['glm-5-fp8'], 'reasoning_parser': 'glm45', 'tensor_parallel_size': 8, 'gpu_memory_utilization': 0.85, 'speculative_config': {'method': 'mtp', 'num_speculative_tokens': 1}}
(APIServer pid=1175) 2026-04-02 18:26:58,024 - modelscope - WARNING - Repo zai-org/GLM-5-FP8 not exists on https://www.modelscope.cn, will try on alternative endpoint https://www.modelscope.ai.
(APIServer pid=1175) Downloading Model from https://www.modelscope.ai to directory: /root/.cache/modelscope/hub/models/zai-org/GLM-5-FP8
(APIServer pid=1175) 2026-04-02 18:27:00,030 - modelscope - INFO - Got 7 files, start to download ...
Downloading [tokenizer_config.json]: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████| 760/760 [00:01<00:00, 740B/s]
Downloading [generation_config.json]: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████| 198/198 [00:01<00:00, 186B/s]
Downloading [README.md]: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████| 10.2k/10.2k [00:01<00:00, 9.77kB/s]
Downloading [configuration.json]: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████| 48.0/48.0 [00:01<00:00, 40.8B/s]
Downloading [chat_template.jinja]: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████| 3.05k/3.05k [00:01<00:00, 2.54kB/s]
Downloading [config.json]: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████| 35.0k/35.0k [00:01<00:00, 28.7kB/s]
Downloading [tokenizer.json]: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████| 19.3M/19.3M [00:21<00:00, 949kB/s]
Processing 7 items: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 7.00/7.00 [00:21<00:00, 3.05s/it]
(APIServer pid=1175) 2026-04-02 18:27:21,366 - modelscope - INFO - Download model 'zai-org/GLM-5-FP8' successfully.
(APIServer pid=1175) 2026-04-02 18:27:21,727 - modelscope - ERROR - The request model: zai-org/GLM-5-FP8 does not exist!
(APIServer pid=1175) ERROR 04-02 18:27:21 [repo_utils.py:47] Error retrieving file list: The request model: zai-org/GLM-5-FP8 does not exist!, retrying 1 of 2
(APIServer pid=1175) 2026-04-02 18:27:24,083 - modelscope - ERROR - The request model: zai-org/GLM-5-FP8 does not exist!
(APIServer pid=1175) ERROR 04-02 18:27:24 [repo_utils.py:45] Error retrieving file list: The request model: zai-org/GLM-5-FP8 does not exist!
(APIServer pid=1175) ERROR 04-02 18:27:24 [repo_utils.py:110] Error retrieving file list. Please ensure your `model_name_or_path`, `repo_type`, `token` and `revision` arguments are correctly set. Returning an empty list.
(APIServer pid=1175) 2026-04-02 18:27:24,652 - modelscope - ERROR - The request model: zai-org/GLM-5-FP8 does not exist!
(APIServer pid=1175) ERROR 04-02 18:27:24 [repo_utils.py:47] Error retrieving file list: The request model: zai-org/GLM-5-FP8 does not exist!, retrying 1 of 2
(APIServer pid=1175) 2026-04-02 18:27:26,987 - modelscope - ERROR - The request model: zai-org/GLM-5-FP8 does not exist!
(APIServer pid=1175) ERROR 04-02 18:27:26 [repo_utils.py:45] Error retrieving file list: The request model: zai-org/GLM-5-FP8 does not exist!
(APIServer pid=1175) Traceback (most recent call last):
(APIServer pid=1175)   File "/usr/local/bin/vllm", line 10, in <module>
(APIServer pid=1175)     sys.exit(main())
(APIServer pid=1175)              ^^^^^^
(APIServer pid=1175)   File "/usr/local/lib/python3.12/dist-packages/vllm/entrypoints/cli/main.py", line 73, in main
(APIServer pid=1175)     args.dispatch_function(args)
(APIServer pid=1175)   File "/usr/local/lib/python3.12/dist-packages/vllm/entrypoints/cli/serve.py", line 112, in cmd
(APIServer pid=1175)     uvloop.run(run_server(args))
(APIServer pid=1175)   File "/usr/local/lib/python3.12/dist-packages/uvloop/__init__.py", line 96, in run
(APIServer pid=1175)     return __asyncio.run(
(APIServer pid=1175)            ^^^^^^^^^^^^^^
(APIServer pid=1175)   File "/usr/lib/python3.12/asyncio/runners.py", line 195, in run
(APIServer pid=1175)     return runner.run(main)
(APIServer pid=1175)            ^^^^^^^^^^^^^^^^
(APIServer pid=1175)   File "/usr/lib/python3.12/asyncio/runners.py", line 118, in run
(APIServer pid=1175)     return self._loop.run_until_complete(task)
(APIServer pid=1175)            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(APIServer pid=1175)   File "uvloop/loop.pyx", line 1518, in uvloop.loop.Loop.run_until_complete
(APIServer pid=1175)   File "/usr/local/lib/python3.12/dist-packages/uvloop/__init__.py", line 48, in wrapper
(APIServer pid=1175)     return await main
(APIServer pid=1175)            ^^^^^^^^^^
(APIServer pid=1175)   File "/usr/local/lib/python3.12/dist-packages/vllm/entrypoints/openai/api_server.py", line 471, in run_server
(APIServer pid=1175)     await run_server_worker(listen_address, sock, args, **uvicorn_kwargs)
(APIServer pid=1175)   File "/usr/local/lib/python3.12/dist-packages/vllm/entrypoints/openai/api_server.py", line 490, in run_server_worker
(APIServer pid=1175)     async with build_async_engine_client(
(APIServer pid=1175)                ^^^^^^^^^^^^^^^^^^^^^^^^^^
(APIServer pid=1175)   File "/usr/lib/python3.12/contextlib.py", line 210, in __aenter__
(APIServer pid=1175)     return await anext(self.gen)
(APIServer pid=1175)            ^^^^^^^^^^^^^^^^^^^^^
(APIServer pid=1175)   File "/usr/local/lib/python3.12/dist-packages/vllm/entrypoints/openai/api_server.py", line 96, in build_async_engine_client
(APIServer pid=1175)     async with build_async_engine_client_from_engine_args(
(APIServer pid=1175)                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(APIServer pid=1175)   File "/usr/lib/python3.12/contextlib.py", line 210, in __aenter__
(APIServer pid=1175)     return await anext(self.gen)
(APIServer pid=1175)            ^^^^^^^^^^^^^^^^^^^^^
(APIServer pid=1175)   File "/usr/local/lib/python3.12/dist-packages/vllm/entrypoints/openai/api_server.py", line 122, in build_async_engine_client_from_engine_args
(APIServer pid=1175)     vllm_config = engine_args.create_engine_config(usage_context=usage_context)
(APIServer pid=1175)                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(APIServer pid=1175)   File "/usr/local/lib/python3.12/dist-packages/vllm/engine/arg_utils.py", line 1431, in create_engine_config
(APIServer pid=1175)     model_config = self.create_model_config()
(APIServer pid=1175)                    ^^^^^^^^^^^^^^^^^^^^^^^^^^
(APIServer pid=1175)   File "/usr/local/lib/python3.12/dist-packages/vllm/engine/arg_utils.py", line 1283, in create_model_config
(APIServer pid=1175)     return ModelConfig(
(APIServer pid=1175)            ^^^^^^^^^^^^
(APIServer pid=1175)   File "/usr/local/lib/python3.12/dist-packages/pydantic/_internal/_dataclasses.py", line 121, in __init__
(APIServer pid=1175)     s.__pydantic_validator__.validate_python(ArgsKwargs(args, kwargs), self_instance=s)
(APIServer pid=1175) pydantic_core._pydantic_core.ValidationError: 1 validation error for ModelConfig
(APIServer pid=1175)   Value error, Invalid repository ID or local directory specified: 'zai-org/GLM-5-FP8'.
(APIServer pid=1175) Please verify the following requirements:
(APIServer pid=1175) 1. Provide a valid Hugging Face repository ID.
(APIServer pid=1175) 2. Specify a local directory that contains a recognized configuration file.
(APIServer pid=1175)    - For Hugging Face models: ensure the presence of a 'config.json'.
(APIServer pid=1175)    - For Mistral models: ensure the presence of a 'params.json'.
(APIServer pid=1175)  [type=value_error, input_value=ArgsKwargs((), {'model': ...rocessor_plugin': None}), input_type=ArgsKwargs]
(APIServer pid=1175)     For further information visit https://errors.pydantic.dev/2.12/v/value_error
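Reading the log: the ModelScope download only fetched 7 small config/tokenizer files (no weight shards), and the subsequent file-listing API call rejects the repo ID, so vLLM's `ModelConfig` validation fails. One possible workaround is to pass vLLM a local directory instead of the repo ID. The helper below is a hypothetical sketch, not part of vLLM: it just reproduces the check the error message describes (a usable local model directory must contain `config.json` for Hugging Face-style models or `params.json` for Mistral-format models).

```python
from pathlib import Path

def looks_like_local_model(model_dir: str) -> bool:
    """Mirror the requirement from vLLM's ValidationError message:
    a local model directory must contain a recognized config file
    (config.json for HF-style models, params.json for Mistral models)."""
    d = Path(model_dir)
    return d.is_dir() and (
        (d / "config.json").is_file() or (d / "params.json").is_file()
    )

# Example: check the ModelScope cache directory the partial download
# wrote to before pointing `vllm serve` at it as a local path.
cache_dir = "/root/.cache/modelscope/hub/models/zai-org/GLM-5-FP8"
if looks_like_local_model(cache_dir):
    print(f"try: vllm serve {cache_dir} ...")
else:
    print("directory incomplete: weights were never downloaded, "
          "fetch the full model first")
```

Note that even if `config.json` is present, serving will still fail without the weight shards; the log above shows only ~19 MB of metadata was downloaded, so the full model needs to be fetched (e.g. from a mirror that actually hosts `zai-org/GLM-5-FP8`) before a local-path launch can work.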