Add QDepthwiseConv2D, DepthwiseConv2D, DepthwiseConv1D support #834
@@ -75,6 +75,22 @@ void depthwise_conv_2d_buffer_cl(
     }
 }
 
+template <class data_T, class res_T, typename CONFIG_T>
+void depthwise_conv_2d_cl(
+    hls::stream<data_T> &data, hls::stream<res_T> &res,
+    typename CONFIG_T::weight_t weights[CONFIG_T::filt_height * CONFIG_T::filt_width * CONFIG_T::n_chan],
+    typename CONFIG_T::bias_t biases[CONFIG_T::n_chan]) {
+    #pragma HLS inline recursive
+    switch (CONFIG_T::implementation) {
+    case conv_implementation::linebuffer:
+        depthwise_conv_2d_buffer_cl<data_T, res_T, CONFIG_T>(data, res, weights, biases);
+        break;
+    case conv_implementation::encoded:
+        depthwise_conv_2d_encoded_cl<data_T, res_T, CONFIG_T>(data, res, weights, biases);
+        break;
+    }
+}
+
 template <class data_T, class res_T, typename CONFIG_T>
 void pointwise_conv_2d_cl(hls::stream<data_T> &data, hls::stream<res_T> &res,
                           typename CONFIG_T::weight_t weights[CONFIG_T::n_chan * CONFIG_T::n_filt],

Review comment (on the new `depthwise_conv_2d_cl` function): Same comment for using this function below, like in the 1D case.
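For reference, a depthwise 2D convolution applies exactly one filter per input channel, with no accumulation across channels; that is the operation the streamed `depthwise_conv_2d_buffer_cl`/`depthwise_conv_2d_encoded_cl` implementations compute. Below is a minimal numpy sketch of the stride-1, valid-padding case; the function name and shapes are illustrative, not part of the hls4ml API.

```python
import numpy as np

def depthwise_conv2d(x, w, b):
    """Depthwise 2D convolution, 'valid' padding, stride 1.

    x: (H, W, C) input; w: (kh, kw, C), one filter per channel;
    b: (C,) per-channel bias. Each channel is convolved only with
    its own filter; there is no summation across channels.
    """
    H, W, C = x.shape
    kh, kw, _ = w.shape
    out = np.zeros((H - kh + 1, W - kw + 1, C))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            for c in range(C):
                out[i, j, c] = np.sum(x[i:i + kh, j:j + kw, c] * w[:, :, c]) + b[c]
    return out

# quick check: a 3x3 all-ones filter over an all-ones image sums 9 per window
x = np.ones((4, 4, 2))
w = np.ones((3, 3, 2))
b = np.array([0.0, 1.0])
y = depthwise_conv2d(x, w, b)
```

Note that with `C` channels the layer has only `kh * kw * C` weights, matching the `filt_height * filt_width * n_chan` weight array size in the C++ signature above (a regular convolution would have `kh * kw * C * n_filt`).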
@@ -4,6 +4,7 @@
 import numpy as np
 import pytest
 from qkeras.qconv2d_batchnorm import QConv2DBatchnorm
+from qkeras.qconvolutional import QDepthwiseConv2D
 from qkeras.qlayers import QActivation, QDense
 from qkeras.quantizers import (
     binary,
@@ -400,6 +401,48 @@ def test_qconv2dbn(randX_100_8_8_1, backend, io_type):
     np.testing.assert_array_equal(y_qkeras, y_hls4ml.reshape(y_qkeras.shape))
 
 
+@pytest.fixture(scope='module')
+def randX_10_32_32_3():
+    return np.random.rand(10, 32, 32, 3)
+
+
+# Currently only Vivado and Vitis are supported for io_stream.
+# Note: qkeras only supports the 2D version of depthwise convolution.
+@pytest.mark.parametrize('backend', ['Vivado', 'Vitis'])
+@pytest.mark.parametrize('io_type', ['io_stream'])
+def test_qdepthwiseconv2d(randX_10_32_32_3, backend, io_type):
+    '''
+    Test proper handling of QDepthwiseConv2D.
+    '''
+    X = randX_10_32_32_3
+    X = np.round(X * 2**10) * 2**-10  # make it an exact ap_fixed<16,6>
+    model = Sequential()
+    model.add(
+        QDepthwiseConv2D(
+            kernel_size=(3, 3),
+            input_shape=(32, 32, 3),
+            depthwise_quantizer='quantized_bits(6, 0, alpha=1)',
+            depthwise_initializer='ones',
+            bias_quantizer='quantized_bits(4, 0, alpha=1)',
+            bias_initializer='zeros',
+            activation='quantized_relu(3, 0)',
+        )
+    )
+    model.compile()
+
+    config = hls4ml.utils.config_from_keras_model(model, granularity='name')
+    output_dir = str(test_root_path / f'hls4mlprj_qkeras_qdepthwiseconv2d_{backend}_{io_type}')
+    hls_model = hls4ml.converters.convert_from_keras_model(
+        model, hls_config=config, output_dir=output_dir, backend=backend, io_type=io_type
+    )
+    hls_model.compile()
+
+    y_qkeras = model.predict(X)
+    y_hls4ml = hls_model.predict(X)
+
+    np.testing.assert_allclose(y_qkeras, y_hls4ml.reshape(y_qkeras.shape), rtol=1e-2, atol=0.01)

Review thread (on `depthwise_initializer='ones'`):

- Reviewer: Isn't this test cheating a bit? All the weights will be 1s and the biases 0s, so there is no point in using quantizers. It doesn't really test the proper handling of quantizers or the accuracy of the implementation.
- Author: That's a leftover from me playing around with the initializer. Let me find better ones. For depthwise, the default ("he_normal") is probably good, so I'll move to that. The default for bias is 0; let me see if I should change it.
- Reviewer: Probably worth changing the bias initializer, just to be sure the bias is not ignored in the C++ implementation.
- Another commenter: Hi @jmitrevs. I'm facing some errors regarding parameters that are bigger than the int256 representation in the parameters.h file.
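A note on the `X = np.round(X * 2**10) * 2**-10` line in the test above: it snaps the random inputs onto a grid with step `2**-10`, so every value has at most 10 fractional bits and, per the test's own comment, is exactly representable in an `ap_fixed<16,6>` input (16 total bits, 6 integer bits, hence 10 fractional bits). This removes input-quantization error from the qkeras-vs-hls4ml comparison. The property can be checked directly with numpy:

```python
import numpy as np

X = np.random.rand(10, 32, 32, 3)
Xq = np.round(X * 2**10) * 2**-10  # snap to multiples of 2**-10

# every quantized value is an exact integer multiple of 2**-10
# (powers of two are exact in float64, so no extra rounding occurs)
assert np.all(Xq * 2**10 == np.round(Xq * 2**10))

# re-quantizing is a no-op: the values already sit on the grid
assert np.array_equal(Xq, np.round(Xq * 2**10) * 2**-10)

# the rounding error introduced is at most half a grid step
assert np.max(np.abs(X - Xq)) <= 2**-11
```

With exact inputs, any residual mismatch caught by the `assert_allclose` tolerance comes from the layer implementation itself rather than from how the test data was encoded.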
 @pytest.mark.parametrize('backend', ['Vivado', 'Vitis', 'Quartus'])
 @pytest.mark.parametrize('io_type', ['io_parallel', 'io_stream'])
 @pytest.mark.parametrize('strategy', ['Latency', 'Resource'])
Review thread:

- Reviewer: For the sake of code brevity, this function can be called directly instead of the switch statement on lines 112-122.
- Author: You are right. The reason is of course historic, since the function on lines 112-122 existed before. But I think I'll make the suggested change.
- Reviewer: We should check whether, used this way, it gets inlined or not; otherwise it costs a cycle.
- Author: Anyway, I pushed the change. I expect it will cause no difference in the produced code.
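The exchange above is about replacing a switch-based dispatcher with a direct call to the chosen implementation: behaviorally the two are equivalent, and the only open question is whether HLS inlines the wrapper (a scheduling cost that software cannot model). The equivalence itself can be sketched in a few lines of Python; all names here are illustrative stand-ins, not hls4ml functions.

```python
def conv_linebuffer(x):
    # stand-in for depthwise_conv_2d_buffer_cl
    return [v * 2 for v in x]

def conv_encoded(x):
    # stand-in for depthwise_conv_2d_encoded_cl: same result,
    # hypothetically produced by a different schedule
    return list(map(lambda v: v * 2, x))

IMPLEMENTATIONS = {'linebuffer': conv_linebuffer, 'encoded': conv_encoded}

def conv_dispatch(x, implementation='linebuffer'):
    # the wrapper: selects an implementation, like the switch on
    # CONFIG_T::implementation in depthwise_conv_2d_cl
    return IMPLEMENTATIONS[implementation](x)

data = [1, 2, 3]
# dispatcher and direct call produce identical results
assert conv_dispatch(data, 'linebuffer') == conv_linebuffer(data)
assert conv_dispatch(data, 'encoded') == conv_encoded(data)
```

In C++ the selection happens at compile time (the switch is on a `CONFIG_T` constant), so the reviewers' concern is only whether the wrapper call itself is flattened by the `#pragma HLS inline recursive`, not which branch is taken.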