Yonsei Korean Sign Language Dataset
The Yonsei Korean Sign Language Dataset consists of depth-only videos collected from 20 volunteers performing 50 Korean Sign Language vocabulary words.
Data Collection Environment: Apple iPhone X depth camera
Dataset Size: Approximately 10,000 videos
Approx. 5,000 raw depth-only videos
Approx. 5,000 refactored depth-only videos (a detailed description of the refactoring scheme will be released soon)
Words included in Dataset: Refer to the README in the dataset archive (a sketch for loading the clips follows this list)
Number of Study Participants: 20 people
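
The snippet below is a minimal sketch of how the depth clips could be iterated with OpenCV. The directory layout (raw/<participant>/<word>/<clip>.mp4), the dataset root name yonsei_ksl, and the iter_depth_clips helper are illustrative assumptions, not part of the official release; consult the README in the archive for the actual structure and file format.

import cv2  # pip install opencv-python
from pathlib import Path

# Hypothetical root; replace with the path of the extracted archive.
DATASET_ROOT = Path("yonsei_ksl")

def iter_depth_clips(root):
    """Yield (participant, word, frames) for every clip under root.

    Assumes the layout raw/<participant>/<word>/<clip>.mp4; adjust the
    glob pattern to match the layout described in the archive README.
    """
    for clip_path in sorted(root.glob("raw/*/*/*.mp4")):
        participant, word = clip_path.parts[-3], clip_path.parts[-2]
        cap = cv2.VideoCapture(str(clip_path))
        frames = []
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            # The videos are depth-only; if the container stores them as
            # 3-channel grayscale, collapse them to a single channel.
            if frame.ndim == 3:
                frame = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            frames.append(frame)
        cap.release()
        yield participant, word, frames

# Example: print the number of frames in the first clip.
for participant, word, frames in iter_depth_clips(DATASET_ROOT):
    print(participant, word, len(frames))
    break

Note that standard video containers quantize each frame to 8 bits per channel; if a release ships higher-precision depth maps in another format, a per-frame image reader would be needed instead of a video decoder.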
License: This dataset is licensed under a Creative Commons Attribution 4.0 International (CC BY 4.0) License.
Please cite this dataset using the following publication, in which it was first introduced.
[Bibtex format]
@article{park21sugo,
  author     = {Park, HyeonJung and Lee, Youngki and Ko, JeongGil},
  title      = {Enabling Real-time Sign Language Translation on Mobile Platforms with On-board Depth Cameras},
  journal    = {Proc. ACM Interact. Mob. Wearable Ubiquitous Technol.},
  volume     = {5},
  number     = {2},
  year       = {2021},
  month      = jun,
  issue_date = {June 2021},
  publisher  = {Association for Computing Machinery},
  address    = {New York, NY, USA},
}
[Plain text format]
HyeonJung Park, Youngki Lee, and JeongGil Ko. "Enabling Real-time Sign Language Translation on Mobile Platforms with On-board Depth Cameras", Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies (IMWUT), Volume 5, Issue 2, June 2021.
For further inquiries, please contact HyeonJung Park (hyeonjung@yonsei.ac.kr) or JeongGil Ko (jeonggil.ko@yonsei.ac.kr).