Issue with predict and HLS Compilation #676
Replies: 11 comments
-
Hi @LordScarface, thanks for your post. Do you have padding in the conv2d layers of your ResNet model? Would you mind providing your model file as well? Thanks.
-
Hi and thank you for the reply! Yes, the Conv2D layers have padding set to 'same'. Here is the code used for generating the model:

from tensorflow import keras
from tensorflow.keras.layers import (Input, Conv2D, BatchNormalization, Activation,
                                     Add, MaxPooling2D, Reshape, GlobalAveragePooling2D,
                                     Dense, Dropout)
from tensorflow.keras.optimizers import Adam

def resnet_block(input_data, filters, conv_size):
    x = Conv2D(filters, 1, activation=None, padding='same')(input_data)
    x = BatchNormalization()(x)
    x = Activation('relu')(x)
    x = Conv2D(filters, conv_size, activation=None, padding='same')(x)
    x = BatchNormalization()(x)
    x = Activation('relu')(x)
    x = Conv2D(filters, conv_size, activation=None, padding='same')(x)
    x = BatchNormalization()(x)
    x = Add()([x, input_data])
    x = Activation('relu')(x)
    y = Conv2D(filters, conv_size, activation=None, padding='same')(x)
    y = BatchNormalization()(y)
    y = Activation('relu')(y)
    y = Conv2D(filters, conv_size, activation=None, padding='same')(y)
    y = BatchNormalization()(y)
    y = Add()([y, x])
    y = Activation('relu')(y)
    z = MaxPooling2D(2, strides=(2, 1), padding='same')(y)
    return z
num_resnet_blocks = 4
num_filters = 32
kernel_size = (5, 1)

rf_input = Input(shape=input_shp, name='rf_input')
x = Conv2D(num_filters, kernel_size, activation=None, padding='same')(rf_input)
x = BatchNormalization()(x)
x = Activation('relu')(x)
for i in range(num_resnet_blocks):
    x = resnet_block(x, num_filters, kernel_size)
x = Conv2D(num_filters, kernel_size, activation=None, padding='same')(x)
x = BatchNormalization()(x)
x = Activation('relu')(x)
# use if number of resnet blocks = 6
#x = Reshape((4,4,num_filters), input_shape=(16,1,num_filters))(x)
# use if number of resnet blocks = 4
x = Reshape((8,8,num_filters), input_shape=(32,1,num_filters))(x)
x = GlobalAveragePooling2D()(x)
dense_1 = Dense(256, activation='relu')(x)
dropout_1 = Dropout(0.5)(dense_1)
dense_2 = Dense(128, activation='relu')(dropout_1)
dropout_2 = Dropout(0.5)(dense_2)
dense_3 = Dense(num_classes)(dropout_2)
softmax = Activation('softmax', name='softmax')(dense_3)
optimizer = Adam(learning_rate=0.0005)
model = keras.Model(rf_input, softmax)
model.compile(loss='categorical_crossentropy', optimizer=optimizer, metrics=["accuracy"])

The model summary:

This is the trained model (trained only on the first 200k samples of the dataset).
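A side note on the final Reshape: Keras ignores the `input_shape` argument when a layer is called on an existing tensor, so the reshape to (8, 8, num_filters) only succeeds if the incoming feature map really carries 8 * 8 = 64 spatial positions per channel (the `(32, 1, ...)` hint in the snippet is not checked). A plain-Python sketch, assuming a hypothetical input height of 1024 and width 1 (`input_shp` is not shown in the post), tracks the shape through the stride-(2, 1) poolings:

```python
import math

# Hypothetical input size: input_shp is not shown in the post, so (1024, 1)
# is assumed purely for illustration.
h, w = 1024, 1
num_resnet_blocks = 4

for _ in range(num_resnet_blocks):
    # MaxPooling2D(2, strides=(2, 1), padding='same'):
    # with 'same' padding, output dim = ceil(input dim / stride)
    h = math.ceil(h / 2)
    w = math.ceil(w / 1)

print(h, w)  # spatial positions per channel entering the Reshape
# Reshape((8, 8, num_filters)) needs exactly 8 * 8 elements per channel
assert h * w == 8 * 8
```

Under this assumption the four pooling stages reduce the height 1024 → 512 → 256 → 128 → 64, which matches the 64 positions the reshape requires.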
-
Thanks. I am not sure that 'same' padding means zero padding in your model. Normally, conv2d in hls4ml uses zero padding, but this can be configured. I suggest you test a small model first.
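For reference, Keras's padding='same' is implemented as symmetric zero padding chosen so that the output keeps the input's spatial size at stride 1. A small plain-Python sketch of the standard convolution output-size formulas (generic arithmetic, not hls4ml code) illustrates the difference:

```python
import math

def conv_out_valid(n, k, s=1):
    # 'valid': no padding, the kernel must fit entirely inside the input
    return math.floor((n - k) / s) + 1

def conv_out_same(n, k, s=1):
    # 'same': zero-pad just enough that output = ceil(n / s)
    pad = max((math.ceil(n / s) - 1) * s + k - n, 0)
    return math.floor((n + pad - k) / s) + 1

n, k = 32, 5
print(conv_out_valid(n, k))  # 28: the map shrinks without padding
print(conv_out_same(n, k))   # 32: input size preserved at stride 1
```

The dimensions 32 and 5 here are just example values matching the (5, 1) kernel used in the model.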
-
Thank you for the input! I switched the model to use

Anyway, the model passes synthesis now, but the resource usage seems high (target ZCU104):

Strategy is

Is this to be expected, or is it too high?
-
As far as I know, the Resource strategy is not fully available right now; hls4ml mainly supports the Latency strategy. If you want a good balance between latency and resources, I suggest improving the coding at the hardware level. hls4ml is friendlier to lightweight models.
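One common knob for that balance is ReuseFactor, which time-multiplexes multipliers: a higher value uses fewer DSPs at the cost of more clock cycles per layer. The sketch below edits a dictionary mimicking the structure returned by hls4ml's config_from_keras_model with granularity='name'; the layer names are hypothetical, not taken from the model in this thread:

```python
# Hypothetical hls_config, mimicking the structure produced by
# hls4ml.utils.config_from_keras_model(model, granularity='name').
hls_config = {
    'Model': {'Precision': 'ap_fixed<16,6>', 'ReuseFactor': 1, 'Strategy': 'Latency'},
    'LayerName': {
        'conv2d_1': {},
        'dense_1': {},
        'softmax': {},
    },
}

# Trade latency for resources on the multiplier-heavy layers only.
for name, cfg in hls_config['LayerName'].items():
    if name.startswith(('conv', 'dense')):
        cfg['Strategy'] = 'Resource'
        cfg['ReuseFactor'] = 16   # 16 clock cycles share each multiplier

print(hls_config['LayerName']['dense_1'])
```

Leaving softmax untouched keeps activations on their default settings while the heavy matrix-multiply layers are folded.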
-
The Resource strategy has worked for me in the past. I saw that in #534 you were able to get past synthesis with the VGG-16 model; when choosing the Latency strategy, I get issues with layers containing more than 4096 parameters. How did you address that? My goal right now is to implement the model fully parallel without any optimizations, and then to see how much things can be improved by quantization, pruning, etc.
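On the 4096 issue: with the Latency strategy, hls4ml fully unrolls each layer's multiplications, and layers with more than 4096 weights run into the unroll limit. A quick plain-Python check of the dense head of the model above shows which layers cross that line (the 32 input features come from GlobalAveragePooling2D over num_filters=32 channels; num_classes is assumed to be 24 purely for illustration):

```python
# (n_in, n_out) per Dense layer; weight counts exclude biases.
# num_classes=24 is an assumption -- it is not given in the thread.
layers = [(32, 256), (256, 128), (128, 24)]

for n_in, n_out in layers:
    n_weights = n_in * n_out
    status = 'exceeds 4096' if n_weights > 4096 else 'ok'
    print(f'{n_in:>4} x {n_out:<4} = {n_weights:<6} {status}')
```

Under these assumptions the first two Dense layers (8192 and 32768 weights) would each exceed the limit, which is consistent with the errors described.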
-
Nice, and could you share your Resource strategy for ResNet if possible? There is too much array partitioning here; you may consider modifying it or just commenting it out.
-
For the Resource strategy I just replaced this:

hls_config['Model']['Precision'] = 'ap_fixed<16,6>'
hls_config['Model']['ReuseFactor'] = 1
for Layer in hls_config['LayerName'].keys():
    hls_config['LayerName'][Layer]['Strategy'] = 'Latency'
    hls_config['LayerName'][Layer]['ReuseFactor'] = 1
# Use for best numerical performance with high-accuracy models; the default
# latency strategy is faster but numerically more unstable.
hls_config['LayerName']['softmax']['Strategy'] = 'Stable'

with this:

hls_config['Model']['Strategy'] = 'Resource'
hls_config['LayerName']['softmax']['Strategy'] = 'Stable'
hls_config['LayerName']['dense_28']['ReuseFactor'] = 16

I tried the Latency strategy with commenting
-
Ok, thanks. I commented this out because the size of the layer is over 4096, but I am not sure about your problem. Yes, I tried to compile VGG-16, which ran into the same memory problem, so I think we should reduce the precision to 8 bits. In this way, it may be possible to deploy.
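To get a feel for what the drop from ap_fixed<16,6> to an 8-bit type costs numerically, here is a plain-Python model of fixed-point rounding: for ap_fixed<W,I>, I integer bits (including sign) leave W-I fractional bits, so the quantization step is 2^-(W-I). The choice of ap_fixed<8,3> below is just an example, not a recommendation from the thread:

```python
def quantize(x, width, integer):
    """Round-to-nearest, saturating model of ap_fixed<width, integer>."""
    frac = width - integer
    step = 2.0 ** -frac
    lo = -(2.0 ** (integer - 1))          # most negative representable value
    hi = (2.0 ** (integer - 1)) - step    # most positive representable value
    q = round(x / step) * step
    return min(max(q, lo), hi)

x = 0.7071
print(quantize(x, 16, 6))  # step 2^-10: error stays below ~5e-4
print(quantize(x, 8, 3))   # step 2^-5 = 0.03125: much coarser
```

This also makes the trade-off visible on the integer side: ap_fixed<8,3> saturates at just under 4.0, so batch-norm outputs or accumulators that exceed that range would clip.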
-
Btw, you can also try Vitis HLS 2020.2.
-
Hi, I am also trying to work with the
-
Hi,
I'm trying to implement this model using hls4ml. The model conversion seems to work, but when calling model.predict() with the test data I get:

When I open the generated HLS project it looks fine, and when using backend='VivadoAccelerator' the HLS compilation actually completes without any errors. However, it only takes a minute, the resource usage is around 0%, and the summary reports '?' for latency and interval. I assume maybe only the myproject_axi() wrapper was processed?

When changing the backend to backend='Vivado' and running the HLS compilation, it stops with the error:

So I assume there is something wrong with the code generated by hls4ml?
I'm using version 0.6.0 of hls4ml, but I get the same issue with the master branch and version 0.5.0.
I hope someone can help me out or point me in the right direction. Thank you in advance.
Best regards

My code used for generating the HLS project:

The output generated: