The Wild Selfie Dataset (WSD) contains selfie images captured with the cameras of different smartphones, unlike existing datasets where most of the images are captured in a controlled environment. The WSD dataset contains 45,424 images from 42 individuals (i.e., 24 female and 18 male subjects), divided into 40,862 training and 4,562 test images. The average number of images per subject is 1,082, with the minimum and maximum for any subject being 518 and 2,634, respectively. The dataset poses several challenges, including but not limited to augmented reality filtering, mirrored images, occlusion, illumination, scale, expressions, view-point, aspect ratio, blur, partial faces, rotation, and alignment. To obtain the dataset, please visit the Project Github Page.
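For reference, the per-subject statistics above can be checked with a short Python sketch once the images are downloaded; the `<root>/<subject_id>/<image>` folder layout assumed here is hypothetical and may differ from the actual release:

```python
import os
from collections import Counter

def count_images_per_subject(root):
    """Count images in an assumed <root>/<subject_id>/<image> layout."""
    counts = Counter()
    for subject in sorted(os.listdir(root)):
        subject_dir = os.path.join(root, subject)
        if os.path.isdir(subject_dir):
            counts[subject] = len([
                f for f in os.listdir(subject_dir)
                if f.lower().endswith(('.jpg', '.jpeg', '.png'))
            ])
    return counts

# Example usage (paths are hypothetical):
# train_counts = count_images_per_subject('WSD/train')
# print(sum(train_counts.values()))   # expected to total 40,862 training images
# print(min(train_counts.values()), max(train_counts.values()))
```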
Please cite the following article if you use the WSD dataset in your research:
L. Kumarapu, S. R. Dubey, S. Mukherjee, P. Mohan, S. P. Vinnakoti, and S. Karthikeya, "WSD: Wild Selfie Dataset for Face Recognition in Selfie Images", 8th International Conference on Computer Vision and Image Processing (CVIP), Nov 2023.
EMAHA-DB1 is a novel dataset of multi-channel surface electromyography (sEMG) signals for evaluating activities of daily living (ADL). The data were acquired from 25 able-bodied subjects while performing 22 activities categorized according to the functional arm activity behavioral system (FAABOS): 3 full hand gestures, 6 open/close office drawer, 8 grasping and holding of small office objects, 2 flexion and extension of finger movements, 2 writing, and 1 rest. The dataset can be used to benchmark hand activity recognition performance. To obtain the dataset, please visit the Harvard Dataverse repository.
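As an illustration of how such recordings are typically prepared for activity classification, the following Python sketch segments a multi-channel sEMG recording into overlapping windows; the channel count, window length, and step size shown are assumptions for the example, not parameters specified by the dataset:

```python
import numpy as np

def segment_windows(semg, window_len, step):
    """Split a multi-channel sEMG recording (channels x samples) into
    overlapping windows for activity classification."""
    n_channels, n_samples = semg.shape
    windows = []
    for start in range(0, n_samples - window_len + 1, step):
        windows.append(semg[:, start:start + window_len])
    return np.stack(windows)  # shape: (num_windows, channels, window_len)

# Example with synthetic data (channel count and lengths are illustrative):
semg = np.random.randn(5, 2000)                      # 5 channels, 2000 samples
windows = segment_windows(semg, window_len=200, step=100)
print(windows.shape)                                  # (19, 5, 200)
```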
Please cite the following repository and related article if you validate your method on this dataset:
N. K. Karnam, A. C. Turlapaty, S. R. Dubey, and B. Gokaraju, “Electromyography Analysis of Human Activities - DataBase 1 (EMAHA-DB1)”, Harvard Dataverse, 2023. (doi: 10.7910/DVN/R6JJ4Q)
N. K. Karnam, A. C. Turlapaty, S. R. Dubey, and B. Gokaraju, “EMAHA-DB1: A New Upper Limb sEMG Dataset for Classification of Activities of Daily Living”, IEEE Transactions on Instrumentation and Measurement, 2023.
The LEDNet dataset consists of images of a field area captured with a mobile phone camera. Each image covers an area where a PCB containing 6 LEDs is placed. The state of each LED on the PCB represents a binary digit, with the ON state corresponding to 1 and the OFF state corresponding to 0, so the LEDs in sequence represent a binary encoding of an analog value. The images of the experimental setup are collected under different lighting conditions and at various heights. To obtain the dataset, please visit IEEE Data Port. High-resolution data can be found at this drive.
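To make the encoding concrete, the Python sketch below decodes a sequence of six LED states (1 = ON, 0 = OFF) into the integer value it represents; the bit ordering (first LED as the most significant bit) is an assumption for illustration, not something specified by the dataset description:

```python
def decode_led_states(led_states):
    """Convert a sequence of LED states (1 = ON, 0 = OFF) into the integer
    value encoded on the PCB. Assumes the first LED is the most significant bit."""
    value = 0
    for state in led_states:
        value = (value << 1) | int(state)
    return value

# Example: LEDs [ON, OFF, ON, ON, OFF, ON] -> binary 101101 -> 45
print(decode_led_states([1, 0, 1, 1, 0, 1]))  # prints 45
```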
Please cite the following paper if you validate your method on this dataset:
Nehul Rangappa, Yerra Raja Vara Prasad, and Shiv Ram Dubey, “LEDNet: A Deep Learning Based Ground Sensor Data Monitoring System For Wide Area Precision Agriculture Applications”, IEEE Sensors Journal, 22(1):842-850, Jan 2022.
This dataset contains a gallery set and a probe set with face images of seven subjects captured in an unconstrained environment. To obtain the dataset, please click here and download the agreement form. Please fill it in, sign it, and send it to the email ids srdubey@iiits.in and srdubey@iiita.ac.in.
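As a reminder of how gallery and probe sets are typically used for identification, the sketch below assigns each probe the identity of its most similar gallery sample by cosine similarity; the feature extractor is left abstract, and all names, dimensions, and data here are illustrative rather than part of the dataset release:

```python
import numpy as np

def identify(probe_features, gallery_features, gallery_labels):
    """Assign each probe the label of its most similar gallery sample
    (cosine similarity on L2-normalized features)."""
    p = probe_features / np.linalg.norm(probe_features, axis=1, keepdims=True)
    g = gallery_features / np.linalg.norm(gallery_features, axis=1, keepdims=True)
    similarity = p @ g.T                   # (num_probes, num_gallery)
    best = similarity.argmax(axis=1)
    return [gallery_labels[i] for i in best]

# Example with random features for the seven subjects (dimensions are illustrative):
gallery = np.random.randn(7, 128)
labels = [f"subject_{i}" for i in range(7)]
probes = np.random.randn(3, 128)
print(identify(probes, gallery, labels))
```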
Please cite the following paper if you validate your method on this dataset:
Shiv Ram Dubey and Snehasis Mukherjee, “A Multi-Face Challenging Dataset for Robust Face Recognition”, 15th International Conference on Control, Automation, Robotics and Vision (ICARCV), 2018.