
New pruning type for our ICLR22 paper: Grouped Kernel Pruning — a densely structured pruning granularity with better pruning freedom than filter/channel methods. #35

Open
henryzhongsc opened this issue Nov 22, 2022 · 6 comments
Labels
enhancement New feature or request

Comments

@henryzhongsc

henryzhongsc commented Nov 22, 2022

Greetings,

We are the authors of the paper/code in question, and we thank you for including it; the listing has certainly generated some traffic for us.

However, it might be worth noting that our pruning granularity is not F (filter level). Our algorithm prunes at the Grouped Kernel level, which is, to the best of our knowledge, the most fine-grained approach under the constraint of outputting a densely structured pruned network, much like channel or filter pruning.
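To make the granularity difference concrete, here is a minimal NumPy sketch. This is illustrative only, not our actual algorithm: the group assignments and kept-kernel indices below are arbitrary placeholders (choosing them well is the actual research problem).

```python
import numpy as np

# Compare pruning granularities on a conv weight tensor of shape
# (out_channels, in_channels, kH, kW).
rng = np.random.default_rng(0)
W = rng.standard_normal((8, 6, 3, 3))

# Filter pruning removes whole output filters (slices of the first axis);
# every surviving filter keeps all 6 input kernels.
W_filter_pruned = W[[0, 1, 2, 3, 5, 6]]       # shape (6, 6, 3, 3)

# Grouped kernel pruning: partition the filters into groups, then prune
# the SAME subset of input kernels within each group, so every group
# remains a dense tensor. Groups and kept-kernel indices here are
# arbitrary placeholders.
groups = [np.arange(0, 4), np.arange(4, 8)]   # two groups of 4 filters
kept_kernels = [[0, 1, 3, 4], [1, 2, 4, 5]]   # per-group kept input kernels

pruned_groups = [W[g][:, k] for g, k in zip(groups, kept_kernels)]
print([p.shape for p in pruned_groups])       # → [(4, 4, 3, 3), (4, 4, 3, 3)]
```

Because each group keeps an equal-sized, dense set of kernels, the pruned layer maps onto a standard grouped convolution with no sparse kernels needed at inference time, yet individual kernels (not just whole filters/channels) can be removed.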

Since pushing the pruning freedom further while remaining structured is probably our most important contribution, we'd appreciate a simple fix (and maybe a new type category if you're feeling generous, as we'd certainly welcome more adaptations of the grouped kernel pruning framework). Thanks!

@cyfwry

cyfwry commented Nov 22, 2022 via email

@he-y
Owner

he-y commented Nov 22, 2022

@choh Thanks for your comments!
Grouped Kernel level is an interesting direction. Is it possible for you to contribute to the repo with a list of papers in this direction? This will definitely give others a better view.
Thanks a lot!

@he-y he-y added the enhancement New feature or request label Nov 22, 2022
@henryzhongsc
Author

@he-y As far as we know, Grouped Kernel (within the realm of structured pruning) is a pretty novel pruning granularity, and there is no previous adaptation of GKP within the scope of publishers your repo tracks.

The first close adaptation of GKP we know of is [1], but it requires iterative analysis of intermediate feature maps and a special fine-tuning/retraining process with knowledge distillation, and it doesn't provide any experimental results comparable with modern pruning works (only ZF-Net and VGG on ImageNet). We believe these factors might explain why it didn't gain much traction.

There's another one [2] that utilizes grouped convolution + pruning, but its pruned networks are not dense due to unequal group sizes. There's also [3], which is pretty much our ICLR 22 GKP framework without the lottery-driven clustering or the greedy pruning algorithm, but that one also came from our co-authors. We wanted to get it out before our ICLR 22 paper for a couple of administrative reasons, but it ended up as concurrent work.

I am happy to contribute, but none of these are from publishers within your repo's scope, so I don't know whether you'd like me to include them.


At the risk of advertising too actively, we'd humbly argue that our ICLR 22 paper is the first clean adaptation of Grouped Kernel level pruning with SOTA results. No new GKP methods have appeared since ours, but we'd welcome the pruning community to explore the GKP scope further, as we believe it makes sense to pursue a higher degree of pruning freedom while retaining the densely structured property (empirical results also suggest that a one-shot adoption of GKP is often better than most iterative filter pruning methods with much higher retraining/fine-tuning budgets). And given the mass attention this repo receives, we feel one of the best ways to encourage this is to add a GKP tag (perhaps along with some other structured tags, like C for channel pruning).

Let me know what you think, thanks!


[1] Niange Yu et al., Accelerating Convolutional Neural Networks by Group-wise 2D-filter Pruning, IJCNN 2017.
[2] Qingbei Guo et al., Self-grouping Convolutional Neural Networks. Neural Networks 2020.
[3] Guanqun Zhang et al., Group-based network pruning via nonlinear relationship between convolution filters. Applied Intelligence 2022.

@henryzhongsc
Author

henryzhongsc commented Nov 22, 2022

On the note of giving people a better view, we happen to have this pruning granularity illustration, made for rebuttal purposes. We can generalize it to GKP and add a close-up for weight pruning if you'd like.

[pruning granularity illustration]

@he-y
Owner

he-y commented Nov 28, 2022

@choh Thank you for your detailed explanation of this direction! It sits somewhere between weight pruning and filter pruning.
Since research in this direction is still at an early stage, I will leave the issue open so that others who are interested can get more information.
I will add the GKP tag when there are more research papers in this direction. Thanks!

@henryzhongsc henryzhongsc changed the title Correction on pruning type: Revisit Kernel Pruning with Lottery Regulated Grouped Convolutions. New pruning type for our ICLR22 paper: Grouped Kernel Pruning — a densely structured granularity with better pruning freedom than filter/channel methods. Dec 1, 2022
@henryzhongsc
Author

@he-y Thank you, that's fair enough. I have updated my issue title to provide more information, and I will certainly reach out should there be more methods following the GKP direction.

Good day :)

@henryzhongsc henryzhongsc changed the title New pruning type for our ICLR22 paper: Grouped Kernel Pruning — a densely structured granularity with better pruning freedom than filter/channel methods. New pruning type for our ICLR22 paper: Grouped Kernel Pruning — a densely structured pruning granularity with better pruning freedom than filter/channel methods. Dec 1, 2022