Please use this identifier to cite or link to this item: https://rda.sliit.lk/handle/123456789/3141
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Mohottala, S | - |
dc.contributor.author | Abeygunawardana, S | - |
dc.contributor.author | Samarasinghe, P | - |
dc.contributor.author | Kasthurirathna, D | - |
dc.contributor.author | Abhayaratne, C | - |
dc.date.accessioned | 2023-01-23T10:47:28Z | - |
dc.date.available | 2023-01-23T10:47:28Z | - |
dc.date.issued | 2022-11 | - |
dc.identifier.citation | S. Mohottala, S. Abeygunawardana, P. Samarasinghe, D. Kasthurirathna and C. Abhayaratne, "2D Pose Estimation based Child Action Recognition," TENCON 2022 - 2022 IEEE Region 10 Conference (TENCON), Hong Kong, Hong Kong, 2022, pp. 1-7, doi: 10.1109/TENCON55691.2022.9977799. | en_US |
dc.identifier.issn | 2159-3442 | - |
dc.identifier.uri | https://rda.sliit.lk/handle/123456789/3141 | - |
dc.description.abstract | We present, for the first time, a graph convolutional network with 2D pose estimation applied to the child action recognition task, achieving results on par with a long-term recurrent convolutional network (LRCN) on a benchmark dataset of videos captured in unconstrained environments. (An illustrative sketch of such a pose-graph pipeline follows this table.) | en_US |
dc.language.iso | en | en_US |
dc.publisher | Institute of Electrical and Electronics Engineers Inc. | en_US |
dc.relation.ispartofseries | IEEE Region 10 Annual International Conference, Proceedings/TENCON; | - |
dc.subject | child action recognition | en_US |
dc.subject | graph convolutional networks | en_US |
dc.subject | long-term recurrent convolutional network | en_US |
dc.subject | transfer learning | en_US |
dc.title | 2D Pose Estimation based Child Action Recognition | en_US |
dc.type | Article | en_US |
dc.identifier.doi | 10.1109/TENCON55691.2022.9977799 | en_US |
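The abstract names the key ingredients of the approach: 2D pose estimation feeding a graph convolutional network for child action recognition. The paper's exact architecture is not reproduced in this record, so the following is only a minimal, illustrative PyTorch sketch of a pose-graph classifier of that general kind; the COCO skeleton edges, layer widths, pooling strategy, and five-class head are all assumptions for illustration, not the authors' configuration.

```python
# Illustrative sketch only: a minimal spatial graph convolution over 2D pose
# keypoints, in the general spirit of GCN-based action recognition. The
# skeleton layout, layer sizes, and class count below are assumptions,
# not taken from the paper.
import torch
import torch.nn as nn

# Assumed 17-keypoint COCO-style skeleton (typical output of a 2D pose estimator).
COCO_EDGES = [
    (0, 1), (0, 2), (1, 3), (2, 4), (0, 5), (0, 6), (5, 7), (7, 9),
    (6, 8), (8, 10), (5, 11), (6, 12), (11, 13), (13, 15), (12, 14), (14, 16),
]

def normalized_adjacency(num_joints: int, edges) -> torch.Tensor:
    """Build D^{-1/2} (A + I) D^{-1/2} for the skeleton graph."""
    a = torch.eye(num_joints)
    for i, j in edges:
        a[i, j] = a[j, i] = 1.0
    d_inv_sqrt = a.sum(dim=1).pow(-0.5)
    return d_inv_sqrt.unsqueeze(1) * a * d_inv_sqrt.unsqueeze(0)

class PoseGCN(nn.Module):
    """Graph convolutions over joints, then temporal pooling and a classifier."""
    def __init__(self, num_joints=17, in_ch=2, hidden=64, num_classes=5):
        super().__init__()
        self.register_buffer("adj", normalized_adjacency(num_joints, COCO_EDGES))
        self.gc1 = nn.Linear(in_ch, hidden)
        self.gc2 = nn.Linear(hidden, hidden)
        self.head = nn.Linear(hidden, num_classes)
        self.act = nn.ReLU()

    def forward(self, x):
        # x: (batch, frames, joints, 2) -- 2D keypoint coordinates per frame.
        x = self.act(self.adj @ self.gc1(x))   # spatial message passing over the skeleton
        x = self.act(self.adj @ self.gc2(x))
        x = x.mean(dim=(1, 2))                 # average-pool over frames and joints
        return self.head(x)                    # per-clip action logits

if __name__ == "__main__":
    model = PoseGCN()
    clip = torch.randn(4, 32, 17, 2)  # 4 clips, 32 frames, 17 joints, (x, y)
    print(model(clip).shape)          # -> torch.Size([4, 5])
```

In a full pipeline, the `(frames, joints, 2)` keypoint tensors would come from a 2D pose estimator run on each video frame; ST-GCN-style models further add temporal convolutions and partitioned adjacency matrices, which are omitted here for brevity.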
Appears in Collections: Department of Information Technology
Files in This Item:
File | Description | Size | Format |
---|---|---|---|
2D_Pose_Estimation_based_Child_Action_Recognition.pdf | - | 380.76 kB | Adobe PDF |
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.