- 17,133
- Posts
- 12
- Years
- she / they
- Seen Jan 12, 2024
Just how real is Google DeepMind? Do you actually think your technology is spying on you in new and invasive ways? Are those security updates really looking out for your best interests? Or maybe you're just paranoid!
Regardless, enough concern has been generated that a new AI known as "Neural Voice Camouflage" can prevent your devices from eavesdropping on your conversations. The whole point is to replace any distinguishable language with abstract, indecipherable noise. Companies have been known to use "bossware" to monitor employees whenever they're near their computers, giving employers virtual auditory access to private conversations. There are a multitude of spyware apps recording your phone calls and tracking your personal information. Even home devices such as Amazon's Echo can record everyday conversations.
The new AI is said to use an "adversarial attack" of sorts. The strategy utilizes machine learning, in which algorithms find patterns in data, to tweak sounds in a way that causes an AI, but not the human ear, to mistake them for white noise. Essentially, the idea is to combat one AI listening device with another AI voice-scrambling device.
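To make "adversarial attack" a little more concrete, here's a toy sketch in the spirit of the idea, not the actual Neural Voice Camouflage model: a made-up "speech detector" flags an audio frame as speech when its mean level is positive, and a gradient-sign perturbation much quieter than the waveform itself flips its decision.

```python
import numpy as np

rng = np.random.default_rng(0)

# One second of fake 16 kHz "speech": unit-variance noise with a small DC bias.
# (Purely illustrative data, not real audio.)
x = rng.normal(loc=0.1, scale=1.0, size=16_000)

def detects_speech(frame):
    # Hypothetical detector: score is the mean sample level, threshold at 0.
    return frame.mean() > 0.0

# The score is mean(frame), so its gradient w.r.t. every sample is 1/n.
# A fast-gradient-sign-style step subtracts eps * sign(gradient), i.e. a
# constant offset of eps, chosen far below the waveform's std of 1.0 so a
# listener would barely notice it while the detector's score drops by eps.
grad = np.full_like(x, 1.0 / x.size)
eps = 0.2
x_adv = x - eps * np.sign(grad)

print(detects_speech(x), detects_speech(x_adv))
```

The real system is obviously far more sophisticated (it has to fool a full ASR model, and do it predictively in real time), but the core trick is the same: nudge the input along the model's gradient so the machine's decision changes while the human-audible change stays small.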
The scientists overlaid the output of their system onto recorded speech as it was fed directly into one of the automatic speech recognition (ASR) systems an eavesdropper might use to transcribe it. The camouflage raised the ASR software's word error rate from 11.3% to 80.2%. "I'm nearly starved myself, for this conquering kingdoms is hard work," for example, was transcribed as "im mearly starme my scell for threa for this conqernd kindoms as harenar ov the reson".
The error rates for speech disguised by white noise and by a competing adversarial attack (which, lacking predictive capabilities, masked only what it had just heard, with noise played half a second too late) were only 12.8% and 20.5%, respectively. The work was presented in a paper last month at the International Conference on Learning Representations, which peer reviews manuscript submissions.
But isn't it just as likely that these AIs meant to disrupt human speech are just as capable of recording, recognizing, and employing the same technology their creators insist they eliminate? Where do you draw your personal line when it comes to machine learning being this easily manipulated, and do you see yourself using a technology like this? Maintaining privacy is hard work; where do you see yourself on the spectrum of AI security?
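If you're wondering what "word error rate" actually measures: it's the word-level edit distance (substitutions, insertions, and deletions) between the ASR transcript and the reference, divided by the reference word count, which is why it can exceed 100% when the transcript hallucinates extra words. A minimal sketch (this `wer` helper is my own illustration, not the paper's code):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance / reference length."""
    ref = reference.lower().split()
    hyp = hypothesis.lower().split()
    # d[i][j] = edit distance between the first i reference words
    # and the first j hypothesis words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution
    return d[len(ref)][len(hyp)] / len(ref)

# The example from the article: the garbled transcript barely shares a
# word with the reference, so the WER comes out very high.
ref = "I'm nearly starved myself, for this conquering kingdoms is hard work"
hyp = "im mearly starme my scell for threa for this conqernd kindoms as harenar ov the reson"
print(wer(ref, hyp))
```

Note this naive version counts punctuation and contraction differences ("I'm" vs "im") as errors; real ASR benchmarks normalize the text first.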
It could all be in your head too. ¯\_(ツ)_/¯