Developer Blog

How to Collect Training Data for Face Recognition Part 2


Picking up where we left off, we have successfully gathered a substantial and varied dataset. This time, we look at what happens after the data is collected.

4. Organizing the Data

With the data collection complete, we faced the intricate task of organizing it into a cohesive dataset. To achieve this, we adhered to the TRACE principle, a guideline that keeps our data effectively managed. Here's what each letter in TRACE represents (a brief sketch of how such dataset metadata might be recorded follows the examples below):

Trackable
The dataset must be easily navigable for all users, from AI scientists to decision-makers. This involves clear access to key information such as the number of images, their intended use, collection details, file sizes, update history, and data accessibility.

Readable
Clarity is paramount. We ensure that everything from labels and column names to folder structures and filenames is intuitively understandable, reducing any potential for misinterpretation.

Applicable
The dataset is designed to seamlessly integrate with existing applications and pipelines, thus avoiding future restructuring needs. It’s tailored to be immediately useful in ongoing projects.

Clean
Quality control is crucial. This means excluding data that could detract from the dataset’s effectiveness, like improperly captured images or those likely to introduce errors into AI models.

Extendable
Flexibility for future use is a key consideration. The dataset’s structure is designed to easily accommodate additional data or new categories, anticipating and adapting to evolving use cases.

[Figure: the blank image our thermal camera captures every time it boots up.]
[Figure: an image labeled "handshaking" in the infamous WIDER FACE dataset.]
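To make these principles a little more concrete, here is a minimal sketch of how the "Trackable" and "Readable" information for a dataset might be recorded. The manifest fields, the folder layout, and the datasets/faces_v2 path are illustrative assumptions for this post, not our production schema.

```python
from dataclasses import dataclass, field
from datetime import date
from pathlib import Path


@dataclass
class DatasetManifest:
    """Illustrative manifest for the 'Trackable' facts about a dataset.

    The field names are assumptions for this example, not our production schema.
    """
    name: str                  # human-readable dataset name
    intended_use: str          # e.g. "face recognition training"
    collected_from: str        # collection source or campaign
    num_images: int            # total image count
    total_size_mb: float       # on-disk size
    last_updated: date         # most recent update
    access: str                # who can read or write the data
    changelog: list[str] = field(default_factory=list)  # update history


def count_images(root: Path) -> int:
    """Count image files under a readable, predictable layout such as
    <root>/<split>/<label>/<image>.jpg."""
    return sum(1 for p in root.rglob("*") if p.suffix.lower() in {".jpg", ".png"})


# Hypothetical dataset root used only for this example.
root = Path("datasets/faces_v2")
manifest = DatasetManifest(
    name="faces_v2",
    intended_use="face recognition training",
    collected_from="in-office RGB and thermal cameras",
    num_images=count_images(root),
    total_size_mb=sum(p.stat().st_size for p in root.rglob("*") if p.is_file()) / 1e6,
    last_updated=date.today(),
    access="data science team (read/write), product teams (read)",
)
```

Keeping this metadata next to the images, rather than in someone's head, is what makes the dataset trackable for people outside the team.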

5. Is Manual Review Needed?

A common question in the era of automation is whether manual review is still necessary. Even with our array of automated data-cleaning tools, our data science lead highlights the irreplaceable value of human oversight. "Automated systems are efficient, but they can't catch everything. The human eye plays a crucial role in ensuring data integrity," he explains. "Manual review not only catches overlooked errors but also deepens our understanding of the dataset we've created."
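To illustrate how automated checks and human review can fit together, the sketch below runs a cheap automated pass and queues only the suspicious images, such as near-blank frames like the thermal boot image above, for a human to inspect. The thresholds, the checks, and the datasets/faces_v2 path are assumptions made for this example rather than the exact rules we run in production.

```python
from pathlib import Path

import numpy as np
from PIL import Image

# Heuristic thresholds; illustrative values, not our production settings.
MIN_STD = 5.0    # intensity spread below this suggests a near-blank frame
MIN_SIDE = 64    # images smaller than this are unlikely to be usable


def automated_checks(path: Path) -> list[str]:
    """Return the reasons an image should be sent to manual review."""
    reasons = []
    with Image.open(path) as img:
        gray = np.asarray(img.convert("L"), dtype=np.float32)
    if gray.std() < MIN_STD:
        reasons.append("near-blank image (e.g. a thermal camera boot frame)")
    if min(gray.shape) < MIN_SIDE:
        reasons.append("resolution too low")
    return reasons


def build_review_queue(root: Path) -> dict[Path, list[str]]:
    """Run the automated pass; only flagged images reach a human reviewer."""
    queue = {}
    for path in sorted(root.rglob("*.jpg")):
        reasons = automated_checks(path)
        if reasons:
            queue[path] = reasons
    return queue


if __name__ == "__main__":
    for path, reasons in build_review_queue(Path("datasets/faces_v2")).items():
        print(f"{path}: {', '.join(reasons)}")
```

The point is not the specific heuristics but the division of labor: automation narrows thousands of images down to a short queue, and the human eye makes the final call.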


6. Completing the Dataset

The dataset represents not just a significant deliverable from our data science team, but also a valuable asset for the company. Therefore, several key principles guide us in finalizing a comprehensive dataset:

Balancing Automation with Human Insight
While relying on automated processes for efficiency, we always leave room for human intervention. This approach ensures a blend of technological precision and human discernment, essential for maintaining data quality.

Monitoring Acceptance Rates
Keeping a close eye on the acceptance rate of the data is crucial. It serves as an indicator of the dataset's quality and reliability, helping us continually refine our data collection methods (a brief sketch of this, together with the A/B split, follows this list).

Designing A/B Testing Frameworks Early
Establishing the workflows and splits for A/B testing at an early stage is vital. This proactive planning allows us to effectively test and validate our data, ensuring it meets the rigorous standards required for training accurate and reliable AI models.
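As a rough sketch of what monitoring acceptance rates and planning A/B splits early can look like, the snippet below computes the share of collected images that survived review for each batch and assigns every image to a split by hashing its ID, so the assignment stays stable as new data arrives. The batch counts, the function names, and the 50/50 ratio are assumptions for this example, not figures from our actual collection.

```python
import hashlib


def acceptance_rate(collected: int, accepted: int) -> float:
    """Share of collected images that passed review for one batch."""
    return accepted / collected if collected else 0.0


def ab_split(image_id: str, ratio_a: float = 0.5) -> str:
    """Deterministically assign an image to split "A" or "B".

    Hashing the ID keeps the assignment stable across reruns and as the
    dataset grows, which is what makes later A/B comparisons trustworthy.
    """
    digest = hashlib.sha256(image_id.encode("utf-8")).digest()
    bucket = int.from_bytes(digest[:8], "big") / 2**64  # uniform in [0, 1)
    return "A" if bucket < ratio_a else "B"


# Hypothetical per-batch counts (collected, accepted) used only for illustration.
batches = {"2023-11-cam01": (1200, 1110), "2023-11-cam02": (950, 760)}
for batch, (collected, accepted) in batches.items():
    print(f"{batch}: acceptance {acceptance_rate(collected, accepted):.1%}")

print(ab_split("faces_v2/cam01/000123.jpg"))  # the same ID always lands in the same split
```

A drop in a batch's acceptance rate is often an early signal that something changed at the collection site.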


In our next blog, we will delve deeper into the role of data scientists and explore how they transform this raw data into a powerful tool for training AI models. Stay tuned for an insightful look into the world of data science and its impact on AI development.

Contact

Please feel free to contact us if you have any questions.

About Japan Computer Vision (Known as JCV)

Japan Computer Vision Corp., a subsidiary of SoftBank Corp., is an AI company that leverages image recognition technology to provide cutting-edge solutions for the smart building and smart retail sectors.