[GPU/OpenCL] Initial version of SwiGLU Layer with OpenCL ops #2624

Merged: 1 commit into nnstreamer:main on Jun 25, 2024

Conversation

@niket-agarwal (Contributor) commented Jun 6, 2024

Added initial version of SwiGLU Layer for GPU. This is a basic implementation using a naive kernel.

Changes added with this PR:

  • swiglu_cl.cpp added, containing the new SwiGLULayerCl class for the OpenCL implementation.
  • Added unittest_layers_swiglu_cl.cpp to test Swiglu Layer on GPU.

Signed-off-by: Niket Agarwal [email protected]
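
For reference, a naive SwiGLU kernel computes SiLU(x) * g element-wise over the two inputs. A minimal sketch of what such a kernel could look like is shown below; the kernel name, argument order, and host-side string variable are illustrative assumptions, not the code shipped in swiglu_cl.cpp:

static const char *swiglu_kernel_sketch = R"(
  __kernel void swiglu_naive(__global const float *in1,
                             __global const float *in2,
                             __global float *out) {
    int i = get_global_id(0);
    /* SiLU(in1[i]) = in1[i] * sigmoid(in1[i]) */
    float swish = in1[i] / (1.0f + exp(-in1[i]));
    /* gate with the second input */
    out[i] = swish * in2[i];
  }
)";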

@taos-ci (Collaborator) commented Jun 6, 2024

📝 TAOS-CI Version: 1.5.20200925. Thank you for submitting PR #2624. Please follow the 1 commit/1 PR (one commit per PR) policy to get comments quickly from reviewers. Your PR must pass all verification processes of cibot before the review process by reviewers starts. If you are a new member joining this project, please read the manuals in the documentation folder and wiki page. To monitor the progress of your PR in more detail, visit http://ci.nnstreamer.ai/.

@taos-ci (Collaborator) commented Jun 6, 2024

:octocat: cibot: @niket-agarwal, nntrainer/layers/cl_layers/swiglu_cl.h does not include Doxygen tags such as @file, @brief, @author, and @bug. You must include the Doxygen tags in the source code. Please refer to the Doxygen manual at http://github.com/nnstreamer/TAOS-CI/blob/main/ci/doc/doxygen-documentation.md

@taos-ci (Collaborator) left a comment:

@niket-agarwal, 💯 All CI checkers are successfully verified. Thanks.

@taos-ci (Collaborator) commented Jun 7, 2024

:octocat: cibot: @niket-agarwal, nntrainer/layers/cl_layers/swiglu_cl.h does not include Doxygen tags such as @file, @brief, @author, and @bug. You must include the Doxygen tags in the source code. Please refer to the Doxygen manual at http://github.com/nnstreamer/TAOS-CI/blob/main/ci/doc/doxygen-documentation.md

@taos-ci (Collaborator) commented Jun 7, 2024

:octocat: cibot: @niket-agarwal, the last line of a text file must have a newline character. Please append a newline at the end of nntrainer/layers/cl_layers/swiglu_cl.h.

@niket-agarwal niket-agarwal changed the title [WIP][GPU/OpenCL] Initial version of SwiGLU Layer with OpenCL ops [GPU/OpenCL] Initial version of SwiGLU Layer with OpenCL ops Jun 7, 2024
@niket-agarwal niket-agarwal marked this pull request as ready for review June 7, 2024 13:14
@myungjoo (Member) commented Jun 8, 2024

Your file format is not consistent. Refer to the static check: https://github.com/nnstreamer/nntrainer/actions/runs/9417608692/job/25967822229?pr=2624

You may be using MS-DOS-style (CRLF) line endings. Please use Linux/Unix (LF) line endings; the utility dos2unix will fix this easily. You may also need to configure your code editor properly to prevent this issue.

@myungjoo (Member) commented Jun 8, 2024

Please add proper doxygen tags, too.
Without them, we would have to add doxygen tags for every class and function right before a milestone release, which becomes a huge task if tags are not added with each commit. Please add them in each commit when you write a new function or class:

[ERROR] File name: api/ccapi/include/layer.h, 300 line, SwigluCl(const function needs @brief tag 

[ERROR] File name: nntrainer/layers/cl_layers/swiglu_cl.h, 98 line, void function needs @brief tag 

[ERROR] File name: nntrainer/layers/cl_layers/swiglu_cl.h, 101 line, void function needs @brief tag 

[ERROR] File name: nntrainer/layers/cl_layers/swiglu_cl.h, 104 line, void function needs @brief tag 
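
For context, the kind of file-level block the bot expects looks roughly like the sketch below; only the tag set (@file, @brief, @author, @bug) comes from the bot's message, and the date is a placeholder:

/**
 * @file   swiglu_cl.h
 * @date   <commit date>
 * @brief  SwiGLU layer with OpenCL implementation
 * @author Niket Agarwal <[email protected]>
 * @bug    No known bugs except for NYI items
 */

The flagged helper in layer.h would similarly carry a function-level tag, for example:

/**
 * @brief Helper function to create a SwiGLU layer (OpenCL implementation)
 */
inline std::unique_ptr<Layer>
SwigluCl(const std::vector<std::string> &properties = {});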

"nchw", "fp16", "fp16");

GTEST_PARAMETER_TEST(SwigluGPU16, LayerGoldenTest,
::testing::Values(swiglu_basic_plain_w16a16));
A Member commented on this hunk:

Please add negative test cases with _n prefixes.
To pass the release criteria, the number of negative cases should be larger than or equal to the number of positive cases.

A Member added:

If you are going to add negative test cases after this PR, it's ok.
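
For illustration, a negative case carries the _n marker and exercises a failure path. The sketch below is hypothetical: the property name, the expected exception, and the direct use of the SwiGLULayerCl class are assumptions, not taken from unittest_layers_swiglu_cl.cpp:

TEST(SwigluLayerCl, setProperty_invalid_n) {
  nntrainer::SwiGLULayerCl layer;
  /* an unsupported property should be rejected */
  EXPECT_THROW(layer.setProperty({"invalid_property=true"}),
               std::invalid_argument);
}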

break;
}

size_t dim1_size = sizeof(float) * dim1;
@pallaviNNT (Contributor) commented Jun 10, 2024

Shouldn't this be cl_half instead of float?

@niket-agarwal (Contributor, author) replied:

Yes right. Fixed.
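
In other words, the fp16 path should size its buffers from cl_half (2 bytes) rather than float (4 bytes). A minimal sketch, reusing the variable names from the hunk above (cl_half comes from CL/cl_platform.h):

size_t dim1_size = sizeof(cl_half) * dim1; /* 2 bytes per fp16 element */
size_t dim2_size = sizeof(cl_half) * dim2;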

@taos-ci (Collaborator) left a comment:

@niket-agarwal, 💯 All CI checkers are successfully verified. Thanks.

@s-debadri (Contributor) left a comment:

LGTM!

* @brief Helper function to create SwigluCl layer
*/
inline std::unique_ptr<Layer>
SwigluCl(const std::vector<std::string> &properties = {},
@EunjuYang (Contributor) commented Jun 12, 2024

SwigluCl doesn't seem consistent with the previous implementation (e.g., fc_layer_cl). Naming the layer with Cl implies it only supports the CL kernel. What do you think about this?
I don't think separately exposing a CL-specific layer in the API is proper.
As I understand it, compute_engine is used to hide this.

@niket-agarwal (Contributor, author) replied Jun 12, 2024

The CPU version of SwiGLU is not yet present in NNTrainer. We can update this API once both the CPU and GPU versions are available. If required, I can rename the layer now and drop Cl from it, or that could be done when adding the CPU version. What do you suggest?

@EunjuYang (Contributor) commented Jun 19, 2024

About the API policy, we may need to hear other reviewers' opinions as well. In my opinion, what about leaving a @todo comment so the API can be replaced later?

@niket-agarwal (Contributor, author) replied Jun 20, 2024

Added a generalized name for the SwiGLU layer in the latest commit. Please have a look and let me know if you think a @todo comment is still required.
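
For reference, the compute_engine route mentioned above would let callers pick the backend at layer-creation time instead of through a Cl-suffixed layer name. A rough sketch, assuming a ccapi createLayer overload that accepts a compute engine (the exact enum and signature are assumptions, not confirmed by this PR):

/* request the GPU (OpenCL) implementation of the swiglu layer */
auto swiglu = ml::train::createLayer(
  "swiglu", {}, ml::train::LayerComputeEngine::GPU);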

@taos-ci (Collaborator) left a comment:

@niket-agarwal, 💯 All CI checkers are successfully verified. Thanks.

@djeong20 (Contributor) left a comment:

Please check the size of the buffer in SwiGLULayerCl::swiglu_cl() and swiglu_cl_fp16(), and fix supportBackwarding().
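
A sketch of the supportBackwarding() part of that request, assuming a forward-only layer should report that it cannot backpropagate; whether returning false is the right fix here is the reviewer's call:

bool SwiGLULayerCl::supportBackwarding() const { return false; }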

size_t dim1_size = sizeof(float) * dim1;
size_t dim2_size = sizeof(float) * dim2;
int dim = int(dim1 * dim2);
opencl::Buffer inputA(context.context_inst_, dim1_size * dim2_size, true,
A Contributor commented:

Should the size be as follows?

Suggested change:
  before: opencl::Buffer inputA(context.context_inst_, dim1_size * dim2_size, true,
  after:  opencl::Buffer inputA(context.context_inst_, sizeof(float) * dim1 * dim2, true,

@niket-agarwal (Contributor, author) replied:

Yes right! Corrected in latest commit.

nntrainer/layers/cl_layers/swiglu_cl.h (review thread resolved)
../unittest/layers/unittest_layers_input.cpp \
../unittest/layers/unittest_layers_loss.cpp \
../unittest/layers/unittest_layers_fully_connected_cl.cpp \
A Contributor commented:

any reason to remove this?

@niket-agarwal (Contributor, author) replied:

It's not removed, just shifted to line 445.

@taos-ci (Collaborator) commented Jun 21, 2024

To the contributor: we use the 'Signed-off-by:' notation by default to handle license issues that result from contributions. Note that 'Is there a Signed-off-by line?' is important because lawyers tell us we must have it to cleanly manage open-source license issues, even though it has nothing to do with the code itself.

@taos-ci (Collaborator) left a comment:

@niket-agarwal, 💯 All CI checkers are successfully verified. Thanks.

@taos-ci (Collaborator) left a comment:

@niket-agarwal, 💯 All CI checkers are successfully verified. Thanks.

@myungjoo (Member) commented:

  1. Please rebase onto the recent main.
  2. Any big commit or PR takes longer to merge. Next time, please try to submit smaller PRs.


Layer *create_swiglu_layer_cl() {
auto layer = new SwiGLULayerCl();
std::cout << "swiglu created\n";
A Collaborator commented:

You do not need std::cout here.


Layer *create_swiglu_layer_cl() {
auto layer = new SwiGLULayerCl();
std::cout << "swiglu created\n";
A Collaborator commented:

Please remove this std::cout.

}

void destroy_swiglu_layer_cl(Layer *layer) {
std::cout << "swiglu deleted\n";
A Collaborator commented:

ditto.

@niket-agarwal (Contributor, author) replied:

Resolved.
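
After dropping the debug output, the factory pair would look roughly like the sketch below; the return and delete statements are assumed from the usual pluggable-layer pattern, since they are cut off in the hunks above:

Layer *create_swiglu_layer_cl() {
  auto layer = new SwiGLULayerCl();
  return layer;
}

void destroy_swiglu_layer_cl(Layer *layer) { delete layer; }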

@taos-ci (Collaborator) left a comment:

@niket-agarwal, 💯 All CI checkers are successfully verified. Thanks.

@@ -694,6 +695,10 @@ std::string RunLayerContext::getKernelName(LayerKernel layerKernel) {
return "addition_cl";
case LayerKernel::ADD_FP16:
return "addition_cl_fp16";
case LayerKernel::SWIGLU:
A Collaborator commented:

Not in this PR, but we need to rethink how this kernel name and the actual kernel object are placed in RunContext. Originally, RunContext was just for the input/output/weight tensors to compute, and I think this ClKernel should be handled in the cl layer itself, as with the CPU kernels.

Added naive version of OpenCL implementation for SwiGLU Layer.
Incorporated kernel for ops used.
Added unit test for SwiGLU_layer_cl.

Signed-off-by: Niket Agarwal <[email protected]>
@taos-ci (Collaborator) left a comment:

@niket-agarwal, 💯 All CI checkers are successfully verified. Thanks.

@jijoongmoon (Collaborator) left a comment:

LGTM

@jijoongmoon jijoongmoon merged commit ed2d27f into nnstreamer:main Jun 25, 2024
24 checks passed