In a project aimed at developing smart tools to combat child abuse, computer scientists at the University of Groningen have developed a program that analyses the noise produced by individual cameras. This information can be used to link a video or an image to a particular camera. The results were published in the journal SN Computer Science on 4 June 2022 and in Expert Systems with Applications on 10 May 2022.
The Netherlands is a major distributor of digital content depicting child sexual abuse
This was reported by the Internet Watch Foundation in 2019. To combat this type of abuse, forensic tools are needed that can analyse digital content to determine which images or videos contain child abuse. One as yet unused source of information is the noise in photos or video frames. As part of an EU project, computer scientists at the University of Groningen, together with colleagues at the University of León (Spain), have found a way to extract and classify this noise from a photo or video, revealing a fingerprint of the camera that made it.
“You could compare it to the specific grooves left on a bullet fired from a gun,” says George Azzopardi, assistant professor in the Information Systems research group at the Bernoulli Institute for Mathematics, Computer Science and Artificial Intelligence of the University of Groningen. “Every gun produces a specific pattern, which makes it possible to link two bullets found at different crime scenes to the same weapon.”
Every camera has some flaws in its integrated sensors
“These flaws manifest themselves as noise in all of the frames or images, invisible to the naked eye,” explains Azzopardi. “This produces a noise pattern that is specific to that camera.” Guru Bennabhaktula, a PhD student at both the University of Groningen and the University of León, developed a system to extract this noise. “In computer vision, classifiers are used to extract information on the shape and texture of objects in an image in order to detect and recognize them,” says Bennabhaktula. “We used such classifiers to extract the camera-specific noise.”
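To give a concrete, if simplified, picture of what extracting camera noise can mean, the sketch below subtracts a denoised copy of a frame from the original, leaving a residual that carries traces of the sensor. This is only an illustrative stand-in, not the pipeline described in the papers; the function name and the simple Gaussian denoiser are assumptions made for the example.

```python
# Minimal sketch of extracting a sensor-noise residual from an image.
# NOT the authors' pipeline; it only illustrates the general idea that
# subtracting a denoised version of an image leaves a noise pattern
# carrying traces of the camera's sensor.
import numpy as np
from scipy.ndimage import gaussian_filter


def noise_residual(image: np.ndarray, sigma: float = 1.0) -> np.ndarray:
    """Return the image minus a denoised (Gaussian-smoothed) copy of itself.

    `image` is a 2-D grayscale array of floats in [0, 1]. The Gaussian
    filter is a placeholder for the more sophisticated denoisers used in
    camera-fingerprinting research.
    """
    smooth = gaussian_filter(image.astype(np.float64), sigma=sigma)
    return image - smooth


if __name__ == "__main__":
    # Hypothetical example on random data; in practice one would load
    # real frames, e.g. from the VISION dataset mentioned in the article.
    frame = np.random.rand(480, 640)
    residual = noise_residual(frame)
    print(residual.shape, residual.std())
```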
He created a computer model to extract the camera noise from video frames recorded by 28 different cameras, taken from the publicly available VISION dataset, and used it to train a convolutional neural network. He then tested whether the trained system could recognize frames made with the same camera. “It turned out that we could do this with 72 percent accuracy,” says Bennabhaktula. He also points out that the noise can be attributed to camera brands, to specific models, or to individual devices. In another test case, he achieved 99 percent accuracy in classifying 18 camera models, using images from the publicly available Dresden image database.
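The sketch below shows what a convolutional classifier that maps a frame (or a patch of a frame) to one of 28 source cameras could look like in PyTorch. The architecture, patch size, and layer choices are placeholders for illustration, not the network described in the published work.

```python
# Minimal PyTorch sketch of a CNN that maps an image patch to one of
# 28 source cameras. All architectural choices are placeholders.
import torch
import torch.nn as nn


class CameraCNN(nn.Module):
    def __init__(self, num_cameras: int = 28):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, num_cameras)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: batch of noise-residual patches, shape (N, 1, H, W)
        feats = self.features(x).flatten(1)
        return self.classifier(feats)  # one score per camera


if __name__ == "__main__":
    model = CameraCNN()
    dummy_batch = torch.randn(4, 1, 128, 128)  # four hypothetical patches
    logits = model(dummy_batch)
    print(logits.shape)  # torch.Size([4, 28])
```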
His work became part of the EU project 4NSEEK, in which scientists and law enforcement agencies worked together to create smart tools to help combat child abuse. “Each group was responsible for developing a specific forensic tool,” Azzopardi explains. The model developed by Bennabhaktula could have a very practical application: “When the police obtain a camera from a suspected child abuser, they can link it to photos or videos found in their files.”
The model is also scalable, adds Bennabhaktula. “Using just five frames per video, it is possible to classify five videos per second. The classifier used in the model has been used by others to distinguish over 10,000 different classes in other computer vision applications.” This means that the classifier could compare the noise from tens of thousands of cameras. The 4NSEEK project has now ended, but Azzopardi is still in contact with forensic specialists and law enforcement agencies to continue this line of research. “We are also working on identifying a common source for pairs of images, which poses different challenges. That will be the subject of our next paper on this topic.”
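Because only a handful of frames per video are needed, a per-video decision can be obtained by aggregating per-frame predictions; the sketch below uses a simple majority vote over five sampled frames. The voting rule is an assumption for illustration, not necessarily the aggregation used by the authors.

```python
# Sketch of turning per-frame camera predictions into a per-video decision
# by majority vote over a few sampled frames. Illustrative only.
from collections import Counter
from typing import Sequence


def video_prediction(frame_predictions: Sequence[int]) -> int:
    """Return the camera label predicted for most of the sampled frames."""
    counts = Counter(frame_predictions)
    label, _ = counts.most_common(1)[0]
    return label


if __name__ == "__main__":
    # Hypothetical per-frame labels for five sampled frames of one video.
    print(video_prediction([3, 3, 7, 3, 3]))  # -> 3
```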