I've recently been running experiments with multimodal LLMs. The codebase I picked uses LLaVA-1.5 as the backbone, but that model is clearly dated, so I wanted to adapt it to Qwen3.5.
The components of a LLaVA-style model are cleanly separated: add a model under llava/language_model/ that inherits from LlavaMetaModel and uses Qwen3.5 as the actual backbone, and add Qwen's corresponding visual model under llava/multimodal_encoder/ to serve as the ViT.
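As a minimal structural sketch of that wiring (the class names, file paths, and stub bodies below are my assumptions modeled on LLaVA's existing `LlavaLlamaModel` pattern, not the real implementations):

```python
# Stand-in stubs: in the real repo these come from llava/model/llava_arch.py
# and from the Qwen3.5 modeling code pulled into the tree.
class Qwen3_5Model:                      # stand-in for the Qwen3.5 backbone
    def __init__(self, config):
        self.config = config

class LlavaMetaModel:                    # stand-in for LLaVA's meta model
    def __init__(self, config):
        super().__init__(config)         # cooperative init continues down the MRO
        self.vision_tower = None         # real LLaVA builds the ViT/projector here

class LlavaQwenModel(LlavaMetaModel, Qwen3_5Model):
    # Hypothetical: would live in llava/model/language_model/llava_qwen.py
    def __init__(self, config):
        super().__init__(config)
```

The multiple-inheritance order mirrors LLaVA's own pattern: the meta model's multimodal plumbing wraps the language backbone via the MRO.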
So I pulled down the Qwen3.5/Qwen3 source code and had Copilot make the changes based on the source and my intent. Little did I know Copilot would bring me no end of trouble...
The first problem was gradients turning into NaN:
```bash
After: tensor(0, device='cuda:2') tensor(0, device='cuda:2')
Before: tensor(0, device='cuda:1') tensor(0, device='cuda:1')
After: tensor(0, device='cuda:1') tensor(0, device='cuda:1')
[step 1] loss=4.1013
GOLD : Medium.
PRED : Based<|im_end|>
[step 1] loss=3.9050
GOLD : Cylindrical.
PRED : Basedylindrical<|im_end|>
[step 1] loss=2.8436
GOLD : Closed.
PRED : The<|im_end|>
Before: tensor(0, device='cuda:0') tensor(0, device='cuda:0')
After: tensor(0, device='cuda:0') tensor(0, device='cuda:0')
[step 1] loss=4.7032
GOLD : Cabinet.
PRED : Basedabinet<|im_end|>
Before: tensor(0, device='cuda:0') tensor(0, device='cuda:0')
After: tensor(0, device='cuda:0') tensor(0, device='cuda:0')
Before: tensor(0, device='cuda:1') tensor(0, device='cuda:1')
After: tensor(0, device='cuda:1') tensor(0, device='cuda:1')
Before: tensor(0, device='cuda:2') tensor(0, device='cuda:2')
After: tensor(0, device='cuda:2') tensor(0, device='cuda:2')
Before: tensor(0, device='cuda:3') tensor(0, device='cuda:3')
After: tensor(0, device='cuda:3') tensor(0, device='cuda:3')
[step 2] loss=2.7932
GOLD : The corridor is lined with a series of doors along its length, indicating the way to adjacent rooms. there are no visible windows, which is common in corridor design. the walls are decorated in light colors, the floor is dark hardwood, and the ceiling is a flat plane.
PRED : Based corridor region a with ** ** of doors, its length, leading it room to other rooms. The are no windows windows in as is typical for interior design to The walls are typically with a colors, and floor is made,, and the ceiling is white light white with
[step 2] loss=4.6449
GOLD : Standing upright.
PRED : Based on on
[step 2] loss=2.2299
GOLD : No.
PRED : No,
Before: tensor(0, device='cuda:1') tensor(0, device='cuda:1')
After: tensor(0, device='cuda:1') tensor(0, device='cuda:1')
Before: tensor(0, device='cuda:2') tensor(0, device='cuda:2')
After: tensor(0, device='cuda:2') tensor(0, device='cuda:2')
Before: tensor(0, device='cuda:0') tensor(0, device='cuda:0')
After: tensor(0, device='cuda:0') tensor(0, device='cuda:0')
Before: tensor(0, device='cuda:3') tensor(0, device='cuda:3')
After: tensor(0, device='cuda:3') tensor(0, device='cuda:3')
[step 3] loss=5.1326
GOLD : <chair 14> used in conjunction with table to meet additional seating needs, highlighting its functional role.
PRED : Basedanswer>1>><|im_end|> as the with < provide<|im_end|> needs needs<|im_end|> typically its role role in
[step 3] loss=2.4497
GOLD : White.
PRED : Based<|im_end|>
[step 3] loss=5.7281
GOLD : Yes.
PRED : No,
[step 3] loss=2.8701
GOLD : No, there isn't.
PRED : No, there is't a
Before: tensor(0, device='cuda:2') tensor(0, device='cuda:2')
After: tensor(0, device='cuda:2') tensor(0, device='cuda:2')
Before: tensor(0, device='cuda:0') tensor(0, device='cuda:0')
After: tensor(0, device='cuda:0') tensor(0, device='cuda:0')
Before: tensor(0, device='cuda:1') tensor(0, device='cuda:1')
After: tensor(0, device='cuda:1') tensor(0, device='cuda:1')
Before: tensor(0, device='cuda:3') tensor(0, device='cuda:3')
After: tensor(0, device='cuda:3') tensor(0, device='cuda:3')
[step 4] loss=5.2262
GOLD : In good condition.
PRED : Based the condition<|im_end|>
[step 4] loss=3.8528
GOLD : No, the window is furniture and the window is decorations.
PRED : Based, the two does not, the chair is not.
[step 4] loss=2.3563
GOLD : No, there isn't.
PRED : No, there is't a
[step 4] loss=3.6326
GOLD : Table.
PRED : Based<|im_end|>
{'loss': '3.64', 'grad_norm': '1.414', 'learning_rate': '0', 'epoch': '0.1481'}
 14%|██▏       | 1/7 [00:18<01:46, 17.75s/it]
Before: tensor(11368960, device='cuda:1') tensor(0, device='cuda:1')
After: tensor(0, device='cuda:1') tensor(0, device='cuda:1')
Before: tensor(11048960, device='cuda:2') tensor(0, device='cuda:2')
After: tensor(0, device='cuda:2') tensor(0, device='cuda:2')
Before: tensor(10127360, device='cuda:0') tensor(0, device='cuda:0')
After: tensor(0, device='cuda:0') tensor(0, device='cuda:0')
Before: tensor(9446400, device='cuda:3') tensor(0, device='cuda:3')
After: tensor(0, device='cuda:3') tensor(0, device='cuda:3')
[step 5] loss=nan
GOLD : Closed.
PRED : !!
[step 5] loss=nan
GOLD : This staircase has a classic design with a dark wood finish complemented by white steps. It takes on a rectangular shape, with each step having rounded edges. The dark handrail, crafted from polished wood, follows a curved design, providing both safety and aesthetic appeal. The staircase is of standard size, running vertically and positioned inside a room. It is in good condition, showing no obvious signs of wear or damage. Leading upwards, the staircase indicates access to the floor above. Functionally, it serves as a pathway for moving between floors. A design highlight is the contrast between the dark wood steps and the white walls, adding depth and dimension to the space.
PRED : !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
[step 5] loss=nan
GOLD : Turned on.
PRED : !!!!
[step 5] loss=nan
GOLD : No.
PRED : !!
Before: tensor(11530240, device='cuda:1') tensor(0, device='cuda:1')
After: tensor(0, device='cuda:1') tensor(0, device='cuda:1')
Before: tensor(13811200, device='cuda:2') tensor(0, device='cuda:2')
After: tensor(0, device='cuda:2') tensor(0, device='cuda:2')
Before: tensor(11425280, device='cuda:3') tensor(0, device='cuda:3')
After: tensor(0, device='cuda:3') tensor(0, device='cuda:3')
Before: tensor(11440640, device='cuda:0') tensor(0, device='cuda:0')
After: tensor(0, device='cuda:0') tensor(0, device='cuda:0')
[step 6] loss=nan
GOLD : Provides hot and cold water for kitchen sink use.
PRED : !!!!!!!!!!
```
As you can see, after only a few steps the loss blows up to NaN and the model starts emitting meaningless output.
The first suspect was NaNs in the inputs or labels. In principle I hadn't touched the data preprocessing at all, but to be safe I checked that code anyway: it does filter NaNs, and inspecting the actual model inputs turned up nothing wrong either.
Next, under the joint coaching of several LLMs, I added forward and backward hooks to pinpoint the layer at which activations and gradients first turned NaN. The results:
```
NaN gradient in model.layers.28.linear_attn.out_proj.weight
NaN gradient in model.layers.28.linear_attn.dt_bias
NaN gradient in model.layers.28.linear_attn.A_log
NaN gradient in model.layers.28.linear_attn.conv1d.weight
NaN gradient in model.layers.28.linear_attn.in_proj_a.weight
NaN gradient in model.layers.28.linear_attn.in_proj_b.weight
NaN gradient in model.layers.28.linear_attn.in_proj_qkv.weight
NaN gradient in model.layers.28.input_layernorm.weight
NaN gradient in model.layers.27.mlp.down_proj.weight
NaN gradient in model.layers.27.mlp.up_proj.weight
NaN gradient in model.layers.27.mlp.gate_proj.weight
NaN gradient in model.layers.27.post_attention_layernorm.weight
NaN gradient in model.layers.27.self_attn.o_proj.weight
NaN gradient in model.layers.27.self_attn.v_proj.weight
NaN gradient in model.layers.27.self_attn.k_norm.weight
NaN gradient in model.layers.27.self_attn.k_proj.weight
NaN gradient in model.layers.27.self_attn.q_norm.weight
NaN gradient in model.layers.27.self_attn.q_proj.weight
NaN gradient in model.layers.27.input_layernorm.weight
NaN gradient in model.layers.26.mlp.down_proj.weight
NaN gradient in model.layers.26.mlp.up_proj.weight
NaN gradient in model.layers.26.mlp.gate_proj.weight
NaN gradient in model.layers.26.post_attention_layernorm.weight
NaN gradient in model.layers.26.linear_attn.out_proj.weight
NaN gradient in model.layers.26.linear_attn.dt_bias
NaN gradient in model.layers.26.linear_attn.A_log
NaN gradient in model.layers.26.linear_attn.conv1d.weight
NaN gradient in model.layers.26.linear_attn.in_proj_a.weight
NaN gradient in model.layers.26.linear_attn.in_proj_b.weight
NaN gradient in model.layers.26.linear_attn.in_proj_z.weight
NaN gradient in model.layers.26.linear_attn.in_proj_qkv.weight
NaN gradient in model.layers.26.input_layernorm.weight
NaN gradient in model.layers.25.mlp.down_proj.weight
NaN gradient in model.layers.25.mlp.up_proj.weight
NaN gradient in model.layers.25.mlp.gate_proj.weight
NaN gradient in model.layers.25.post_attention_layernorm.weight
NaN gradient in model.layers.25.linear_attn.out_proj.weight
NaN gradient in model.layers.25.linear_attn.dt_bias
NaN gradient in model.layers.25.linear_attn.A_log
NaN gradient in model.layers.25.linear_attn.conv1d.weight
NaN gradient in model.layers.25.linear_attn.in_proj_a.weight
NaN gradient in model.layers.25.linear_attn.in_proj_b.weight
NaN gradient in model.layers.25.linear_attn.in_proj_z.weight
NaN gradient in model.layers.25.linear_attn.in_proj_qkv.weight
NaN gradient in model.layers.25.input_layernorm.weight
[Qwen3_5RMSNorm] NaN in OUTPUT
NaN gradient in model.layers.24.mlp.down_proj.weight
NaN gradient in model.layers.24.mlp.up_proj.weight
```

(deduplicated; the raw log prints each entry once per data-parallel rank, all interleaved)
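The hook check described above can be sketched in plain PyTorch like this (a minimal version, not the actual instrumented code; the message format is simplified):

```python
import torch
import torch.nn as nn

def attach_nan_hooks(model: nn.Module):
    """Record where NaNs first show up in activations and in gradients."""
    reports = []

    def forward_hook(name):
        def hook(module, inputs, output):
            outs = output if isinstance(output, tuple) else (output,)
            for o in outs:
                if torch.is_tensor(o) and torch.isnan(o).any():
                    reports.append(f"NaN in OUTPUT of {name}")
        return hook

    def grad_hook(name):
        def hook(grad):
            if torch.isnan(grad).any():
                reports.append(f"NaN gradient in {name}")
        return hook

    for name, module in model.named_modules():
        if name:  # skip the root module itself
            module.register_forward_hook(forward_hook(name))
    for name, param in model.named_parameters():
        if param.requires_grad:
            param.register_hook(grad_hook(name))
    return reports
```

Calling `attach_nan_hooks(model)` once before training and dumping `reports` every step narrows the blow-up down to a specific layer, which is exactly how the per-layer trail above was obtained.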
Now things got strange: my Qwen code has no essential difference from the LLaVA-1.5 code, yet one trains fine and the other does not. The difference must be on the model side. Qwen3.5 uses linear attention and its weights are natively stored in BF16; LLaVA-1.5 is an older architecture natively stored in FP32, though it also trains in BF16. Given these differences, and the order in which the NaNs appeared above, Copilot and I naturally suspected a numerical-precision problem inside linear_attn, so I forced it to run in FP32 and inserted an extra RMSNorm layer into the network.
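The shape of that patch was roughly the following (my reconstruction, not the exact code; Qwen's real RMSNorm differs in details): an RMSNorm that upcasts to FP32 for the reduction, then casts back to the input dtype.

```python
import torch
import torch.nn as nn

class RMSNorm(nn.Module):
    """RMSNorm that computes the mean-of-squares in FP32 for stability."""
    def __init__(self, dim: int, eps: float = 1e-6):
        super().__init__()
        self.weight = nn.Parameter(torch.ones(dim))
        self.eps = eps

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        in_dtype = x.dtype
        h = x.float()  # upcast: the squared-sum reduction is the fragile part in BF16
        h = h * torch.rsqrt(h.pow(2).mean(-1, keepdim=True) + self.eps)
        return (self.weight * h).to(in_dtype)
```

The idea is to bound the magnitude of linear_attn's output before it feeds the next layer, while keeping the reduction itself out of BF16.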
The result: training now ran fine, but inference broke instead! NaNs showed up at inference time. What fresh nonsense was this...
I felt this couldn't go on. After all, Qwen3.5 ships natively in BF16, so BF16 itself should not, in theory, be the problem.
By sheer chance I switched to plain torchrun for training, and everything worked! So the problem could be pinned on the DeepSpeed configuration. With Codex's help I then found the final fix; the ZeRO-2 config is as follows:
```json
{
    "fp16": {
        "enabled": "auto"
    },
    "bf16": {
        "enabled": "auto",
        "check_grad_overflow": true
    },
    "communication_data_type": "fp32",
    "grad_accum_dtype": "fp32",
    "train_micro_batch_size_per_gpu": "auto",
    "train_batch_size": "auto",
    "gradient_accumulation_steps": "auto",
    "gradient_clipping": "auto",
    "zero_optimization": {
        "stage": 2,
        "overlap_comm": false,
        "contiguous_gradients": false,
        "sub_group_size": 1e9,
        "reduce_bucket_size": 50000000
    }
}
```
Evidently it was a precision problem in gradient communication: DeepSpeed does not enable overflow checking for BF16 by default, and once the communication and gradient-accumulation dtypes were switched to FP32, training ran normally.
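The underlying numerics are easy to demonstrate: BF16 keeps FP32's exponent range but has only an 8-bit mantissa, so when many gradient terms are reduced into one bucket, small contributions are silently rounded away; accumulating in FP32 avoids this. A toy illustration:

```python
import torch

# Above 256, adjacent BF16 values are 2 apart, so adding 1 is a no-op:
bumped = torch.tensor(256.0, dtype=torch.bfloat16) + torch.tensor(1.0, dtype=torch.bfloat16)
print(bumped.item())  # 256.0

# Toy model of a gradient reduction: many small terms added into one running
# sum. The BF16 sum stalls once it dwarfs each term; the FP32 sum does not.
terms = torch.full((10_000,), 1e-3)
bf16_sum = torch.zeros((), dtype=torch.bfloat16)
for t in terms.to(torch.bfloat16):
    bf16_sum = bf16_sum + t
fp32_sum = terms.sum()
print(bf16_sum.item(), fp32_sum.item())  # BF16 sum ends far below the true 10.0
```

This is the same effect `"communication_data_type": "fp32"` and `"grad_accum_dtype": "fp32"` guard against in the config above.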
One more trap: LLaVA uses CLIP as the visual encoder, and CLIP expects images of a fixed size. When Copilot made its changes it failed to notice that Qwen uses dynamic image resolutions, so Qwen had been training without ever actually looking at the images...
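The mismatch shows up directly in visual-token counts (illustrative numbers: CLIP-ViT-L/336 yields a fixed 24×24 grid, while a Qwen-style dynamic-resolution ViT produces a grid that follows the input size; the 14-pixel patch and 2×2 merge here are assumptions for the sketch):

```python
def clip_vision_tokens(image_size: int = 336, patch: int = 14) -> int:
    # CLIP resizes every image to one fixed square, so the count is constant.
    return (image_size // patch) ** 2

def dynamic_vision_tokens(h: int, w: int, patch: int = 14, merge: int = 2) -> int:
    # Dynamic-resolution ViT: the grid follows the actual image, then patches
    # are merged merge x merge before reaching the language model.
    return (h // patch) * (w // patch) // (merge * merge)

print(clip_vision_tokens())             # 576, for every image
print(dynamic_vision_tokens(448, 672))  # 384, and it changes per image
```

If the projector or the image-placeholder expansion assumes the fixed count, the dynamically sized features end up dropped or misaligned, which is consistent with the model "training without seeing the image."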
The one advantage humans still hold over AI is a longer effective context and the ability to compress memories automatically. And improvements in AI tooling really do boost productivity (Copilot → Codex). So using AI to modify code well is genuinely a craft of its own; I hope to make fewer mistakes like these.