
Awesome CS IKUN

[English | 中文]

📖 Overview

  • This is the first collection of academic iKUN papers in the world 🔥🔥🔥
  • We have collected 4 KUNs of papers (1 KUN = 2.5 papers) in Computer Science (CS)
  • We hope to bring joy through this work and inspire new ways of promoting scientific research
  • Your stars motivate us to keep updating, and we welcome everyone to contribute to awesome-cs-ikun via issues

🎉 News

  • June 21, 2024: We released the awesome-cs-ikun repository

📑 Content

🔥 Awesome Papers

🚀 Paper Details

[ICLR 2024] SyncDreamer: Generating Multiview-consistent Images from a Single-view Image

[CVPR 2024] iKUN: Speak to Trackers without Retraining

[ESWA 2024] Human Evolutionary Optimization Algorithm

[ICCV 2023] Tune-A-Video: One-Shot Tuning of Image Diffusion Models for Text-to-Video Generation

[202312] A Challenger to GPT-4V? Early Explorations of Gemini in Visual Expertise

[202310] You Only Train Once: A Unified Framework for Both Full-Reference and No-Reference Image Quality Assessment

[202305] Control-A-Video: Controllable Text-to-Video Generation with Diffusion Models

[202305] Watermarking Diffusion Model

[202106] Federated Learning on Non-IID Data: A Survey

[201811] iQIYI-VID: A Large Dataset for Multi-modal Person Identification

📚 Paper List

| Title | Venue | Year | Code |
| --- | --- | --- | --- |
| SyncDreamer: Generating Multiview-consistent Images from a Single-view Image | ICLR | 2024 | Github |
| iKUN: Speak to Trackers without Retraining | CVPR | 2024 | Github |
| Human Evolutionary Optimization Algorithm | ESWA | 2024 | Github |
| Tune-A-Video: One-Shot Tuning of Image Diffusion Models for Text-to-Video Generation | ICCV | 2023 | Github |
| A Challenger to GPT-4V? Early Explorations of Gemini in Visual Expertise | arXiv | 2023 | Github |
| You Only Train Once: A Unified Framework for Both Full-Reference and No-Reference Image Quality Assessment | arXiv | 2023 | Github |
| Control-A-Video: Controllable Text-to-Video Generation with Diffusion Models | arXiv | 2023 | Github |
| Watermarking Diffusion Model | arXiv | 2023 | - |
| Federated Learning on Non-IID Data: A Survey | Neurocomputing | 2021 | - |
| iQIYI-VID: A Large Dataset for Multi-modal Person Identification | arXiv | 2018 | - |

📊 Analysis

✨ Citation

  • We conducted a citation analysis of iKUN-related papers (as of June 21, 2024) and found that some of them have received a significant number of citations

🌟 Github Star

  • We also conducted a GitHub star analysis of iKUN-related papers (as of June 21, 2024) and found that some of them have a high number of GitHub stars

👨‍💻‍ Author Distribution

  • We found that all first authors of iKUN-related papers are Chinese

  • The main institutions behind these papers include Tencent, NUS, HKU, etc.

  • We believe that including kun kun in papers can bring more exposure to their research, thereby indirectly increasing their influence. This serves as an excellent example of how the entertainment industry can contribute to the promotion of the scientific community 🎉, bringing new inspiration to the way modern research is showcased 🎊

🤝 Acknowledgments

  • Sincere thanks to all CS iKUNs for including kun kun in your papers; you have pioneered a new way of idol-following 👍

📬 Contact

If you have any questions or feedback, or would like to get in touch, please feel free to reach out to us via email at [email protected]
