The problem was caused by a faulty modification; the data in this discussion is invalid #73
Replies: 3 comments 15 replies
-
Perhaps your DenseNet-BC improvement mainly benefits from the larger batch size that fits in limited VRAM (at least for the ResNet version, a larger batch size improves results).
-
Is it questionable whether downsampling the mahjong features actually helps? I tried something similar in the ResNet version but the results were poor, and since the data has only one spatial dimension, the reduction in compute is not that large either.
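To put a number on the "compute savings are small in 1D" point: downsampling the 34-long tile axis by 2 only halves the per-layer multiply-accumulates, whereas in 2D the same stride would quarter them. A rough MAC count (channel counts here are made up for illustration, not taken from Mortal):

```python
def conv1d_macs(c_in, c_out, kernel, length):
    # multiply-accumulates for one Conv1d layer (stride 1, "same" padding)
    return c_in * c_out * kernel * length

# hypothetical 256-channel layer on the 34-position tile axis
full = conv1d_macs(256, 256, 3, 34)   # full resolution
half = conv1d_macs(256, 256, 3, 17)   # after one 2x downsample

print(full, half, full / half)  # the ratio is only 2.0
```

So each downsampling stage saves half the work of the layers after it, which on a length-34 input is a modest absolute saving.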
-
@hyskylord @adsf0427 @Nitasurin @smly Found the cause of the oversized pt: my modification had broken the ResNet version of the model, so its outputs were abnormal. With the faulty code fixed, below are match results from a healthy model. These are only preliminary: the number of games is low and the model has not fully converged, so they do not mean much. The pooling layers were replaced with stride-2 convolution layers.
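The pooling-to-strided-convolution swap mentioned above can be sketched as follows. This is a minimal stand-alone example, not Mortal's actual transition code, and the channel sizes are made up:

```python
import torch
import torch.nn as nn

# transition that downsamples with average pooling (the original form)
pool_transition = nn.Sequential(
    nn.BatchNorm1d(96), nn.Mish(),
    nn.Conv1d(96, 48, kernel_size=1),
    nn.AvgPool1d(kernel_size=2, stride=2))

# transition where the pooling is folded into a stride-2 convolution,
# so the downsampling weights become learnable
conv_transition = nn.Sequential(
    nn.BatchNorm1d(96), nn.Mish(),
    nn.Conv1d(96, 48, kernel_size=2, stride=2))

x = torch.randn(1, 96, 34)          # (batch, channels, 34 tile positions)
print(pool_transition(x).shape)     # torch.Size([1, 48, 17])
print(conv_transition(x).shape)     # torch.Size([1, 48, 17])
```

Both variants produce the same output shape; the strided-convolution form simply lets the network learn how to weight neighbouring positions instead of averaging them uniformly.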
-
Accuracy... not sure (neither run used online training), but the model I trained earlier could not match the web version of Mortal 4.0.
Also, my Rust setup seems to be broken: the libriichi.pyd I compile cannot be loaded (even after re-downloading the libriichi source and rebuilding), although builds compiled earlier still work. Upgrading Rust to the latest version does not help either.
Finally...
DenseNet-BC:
conv_channels = 48
num_blocks = 12 (the Brain contains 4 DenseBlocks in total)
These values were chosen to match the original
nn.Linear(32 * 34, 1024), i.e. nn.Linear(1088, 1024);
with this configuration the closest achievable is nn.Linear(1086, 1024).
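The flattened feature count of a DenseNet-BC-style stack can be checked from the growth rate, the per-block layer counts, and the usual 0.5 transition compression. The split of the 12 layers across the 4 blocks below, and the initial channel count, are my guesses, so the total differs from the 1086 quoted above; the point is the counting procedure, not the exact number:

```python
def densenet_flat_features(init_channels, growth, layers_per_block, length):
    """Channels * length after a DenseNet-BC-style stack with 0.5 compression.

    Assumed layout: one dense block per entry of layers_per_block, with a
    channel-halving + AvgPool1d(2) transition after all blocks but the last.
    """
    c = init_channels
    for i, n in enumerate(layers_per_block):
        c += n * growth                      # each layer adds `growth` channels
        if i < len(layers_per_block) - 1:    # transition: halve channels, halve length
            c //= 2
            length //= 2
    return c * length

# hypothetical split: 12 layers over 4 blocks, growth rate 48, tile axis length 34
print(densenet_flat_features(96, 48, [3, 3, 3, 3], 34))  # -> 1128
```

Running this kind of count for the candidate configurations is how one would find the setting whose flattened size lands closest to the original 1088.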
enable_cudnn_benchmark must be turned off (otherwise it is extremely slow).
Below the code that reads version, add for example:

```python
version = config['control']['version']
dense = config['control'].get('dense', False)
```
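The matching config entry would then presumably sit under `[control]` in the training config file; a sketch (only the `dense` key comes from the snippet above, the rest is placeholder):

```toml
[control]
# ...existing keys such as version stay as they are...
dense = true  # enable the DenseNet-BC brain
```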
Changes to model.py:

```python
import torch
import torch.nn as nn


class BottleneckBlock(nn.Module):
    def __init__(self, in_planes, out_planes):
        super(BottleneckBlock, self).__init__()
        inter_planes = out_planes * 4
        self.bn1 = nn.BatchNorm1d(in_planes)
        self.mish = nn.Mish(inplace=True)
        self.conv1 = nn.Conv1d(in_planes, inter_planes, kernel_size=1, stride=1,
                               padding=0, bias=False)
        self.bn2 = nn.BatchNorm1d(inter_planes)
        self.conv2 = nn.Conv1d(inter_planes, out_planes, kernel_size=3, stride=1,
                               padding=1, bias=False)

    # forward was missing from the post; reconstructed as the standard
    # DenseNet pattern (pre-activation, then concatenate with the input)
    def forward(self, x):
        out = self.conv1(self.mish(self.bn1(x)))
        out = self.conv2(self.mish(self.bn2(out)))
        return torch.cat([x, out], dim=1)


def TransitionBlock(input_channels, num_channels):
    return nn.Sequential(
        nn.BatchNorm1d(input_channels), nn.Mish(),
        nn.Conv1d(input_channels, num_channels, kernel_size=1),
        nn.AvgPool1d(kernel_size=2, stride=2))


class DenseBlock(nn.Module):
    def __init__(self, num_convs, input_channels, num_channels):
        super(DenseBlock, self).__init__()
        layer = []
        for i in range(num_convs):
            layer.append(BottleneckBlock(
                num_channels * i + input_channels, num_channels))
        # reconstructed: register the layers so they are actually trained
        self.net = nn.Sequential(*layer)

    def forward(self, x):
        return self.net(x)


class DenseNet(nn.Module):
    def __init__(
        self,
        in_channels,
        conv_channels,
        num_blocks,
    ):
        super().__init__()
        z = 4
        # (the rest of DenseNet and the Brain changes were cut off
        # in the original post)


class Brain(nn.Module):
    ...
```
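A quick stand-alone way to check the dense connectivity (each layer carries its input through unchanged and concatenates `out_planes` new channels, leaving the length-34 tile axis untouched) is a minimal version of the bottleneck with the `torch.cat` forward; this is a sketch for shape-checking only, not the code above:

```python
import torch
import torch.nn as nn

class MiniBottleneck(nn.Module):
    """Minimal DenseNet bottleneck: BN-Mish-Conv1x1 -> BN-Mish-Conv3, then concat."""
    def __init__(self, in_planes, out_planes):
        super().__init__()
        inter = out_planes * 4
        self.body = nn.Sequential(
            nn.BatchNorm1d(in_planes), nn.Mish(),
            nn.Conv1d(in_planes, inter, kernel_size=1, bias=False),
            nn.BatchNorm1d(inter), nn.Mish(),
            nn.Conv1d(inter, out_planes, kernel_size=3, padding=1, bias=False))

    def forward(self, x):
        # dense connectivity: input channels pass through unchanged
        return torch.cat([x, self.body(x)], dim=1)

x = torch.randn(2, 64, 34)          # (batch, channels, 34 tile positions)
y = MiniBottleneck(64, 48)(x)
print(y.shape)                      # torch.Size([2, 112, 34]): 64 + 48 channels
```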