Department of Defense Fiscal Year (FY) 2010 Budget Estimates May 2009, USA
```
Silent Talk
(U) Silent Talk will allow user-to-user communication on the battlefield without the use of vocalized speech
through analysis of neural signals. The brain generates word-specific signals prior to sending electrical
impulses to the vocal cords. These signals of “intended speech” will be analyzed and translated into
distinct words, allowing covert person-to-person communication. This program has three major goals: a)
to attempt to identify electroencephalography patterns unique to individual words, b) ensure that those
patterns are generalizable across users in order to prevent extensive device training, and c) construct a
fieldable pre-prototype that would decode the signal and transmit over a limited range.
```
https://commons.wikimedia.org/wiki/File:Fiscal_Year_2010_DAR...
This looks no harder than training a custom Kaldi (circa 2017) phoneme model on brain waves and using the rest of Kaldi's functionality for everything else, except text-to-speech. WaveNet was available for the TTS at that time, with sound quality good enough for (and that can be improved by) radio transmission.
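For a sense of the problem shape only: the classification step imagined here (feature frames in, word/phoneme labels out) can be sketched as a toy nearest-centroid classifier on synthetic data. Everything below is invented for illustration; it is not Kaldi's pipeline or API, and it says nothing about whether real EEG features separate this cleanly:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a "phoneme model on brain waves": each class is a noisy
# cluster in a 16-dim feature space; classification is nearest centroid.
# Kaldi would use proper acoustic models, a lexicon, and a language model;
# this only illustrates the shape of the problem, on synthetic data.
N_CLASSES, N_TRAIN, N_FEATS = 4, 50, 16

centroids_true = rng.normal(size=(N_CLASSES, N_FEATS))
X = np.concatenate([c + 0.1 * rng.normal(size=(N_TRAIN, N_FEATS))
                    for c in centroids_true])
y = np.repeat(np.arange(N_CLASSES), N_TRAIN)

# "Training": one centroid per class, estimated from the noisy samples.
centroids = np.stack([X[y == k].mean(axis=0) for k in range(N_CLASSES)])

def classify(frame):
    """Label of the nearest estimated centroid."""
    return int(np.argmin(np.linalg.norm(centroids - frame, axis=1)))
```

On data this clean the classifier is near-perfect; the entire open question is whether EEG features actually cluster by intended word at all.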
point is - such tech is used right now to neutralize individuals. imagine hearing the word "bread" inescapably, a couple hundred times a day, coming from an unknown source right into your head. for months and (!) years, right at the moment you are trying to conceptualize a slightly harder thought than usual. everywhere you go, 24/7. while there's no help from anywhere (the police haven't answered me for 2 years and counting), as the general public brushes it off as schizophrenia (it's not - the voices completely stopped when a lightning storm took out the electricity) and the Church paints it as the second coming of Christ (or the antichrist, when more suitable).
my mostly uneducated guess of what's going on is: a radio wave gets sent, the human body slightly modulates it, and the same signal gets received back and used to reconstruct (an approximation of?) the EEG from the noise delta. neural models are the secret sauce that makes such signal processing possible
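The one piece of this guess that is standard engineering is the demodulation step: in continuous-wave radar sensing, slow motion of a reflector phase-modulates the returned carrier, and IQ demodulation recovers the slow signal. A toy simulation, with every parameter invented; it shows only that this recovery works in principle, not that EEG can be read this way:

```python
import numpy as np

# Continuous-wave sensing toy: a slow signal phase-modulates a reflected
# carrier; mixing against the transmitted carrier recovers it.
# All parameters below are illustrative assumptions.
fs = 100_000                      # sample rate, Hz
fc = 10_000                       # carrier frequency, Hz
t = np.arange(fs) / fs            # one second of samples

slow = 0.5 * np.sin(2 * np.pi * 2.0 * t)   # 2 Hz "physiological" motion
rx = np.cos(2 * np.pi * fc * t + slow)     # phase-modulated echo

# IQ demodulation: mix with the carrier, low-pass with a moving average.
i = rx * np.cos(2 * np.pi * fc * t)
q = -rx * np.sin(2 * np.pi * fc * t)
kernel = np.ones(101) / 101
i_lp = np.convolve(i, kernel, mode="same")
q_lp = np.convolve(q, kernel, mode="same")

recovered = np.arctan2(q_lp, i_lp)         # phase estimate ~ slow signal
trim = slice(1000, -1000)                  # drop filter edge effects
corr = np.corrcoef(recovered[trim], slow[trim])[0, 1]
```

Recovering millimetre-scale chest motion this way is published radar work; microvolt-scale neural potentials barely perturb an RF field, which is a very different claim.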
The Microwave Auditory Effect
James C. Lin, University of Illinois Chicago (Ph.D. in electrical engineering, University of Washington, Seattle)
```
The preceding sections document that an audible sound originates from within the head when human subjects are exposed to pulsed microwave radiation. The auditory detection of pulsed microwaves in laboratory animals has been confirmed both in behavioral and neurophysiological studies. The site of microwave-to-sound conversion is shown to be in the brain tissue. The primary mechanism of interaction is microwave pulse-induced thermoelastic expansion of brain matter.
```
Neural decoding of music from the EEG, University of Essex
> Using only EEG data, without participant specific fMRI-informed source analysis, we were able to identify the music a participant was listening to with a mean rank accuracy of 59.2%
https://pmc.ncbi.nlm.nih.gov/articles/PMC9837107/
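For reference, "rank accuracy" scores where the correct item lands among the model's ranked candidates: 1.0 means always ranked first, 0.5 is chance. A minimal implementation (the paper's exact tie-handling convention is an assumption here):

```python
import numpy as np

def mean_rank_accuracy(scores, true_idx):
    """scores: (n_trials, n_candidates) model score per candidate item;
    true_idx: index of the correct item for each trial.
    Per trial: fraction of other candidates the true item outscores
    (1.0 = ranked first, 0.5 = chance, 0.0 = ranked last)."""
    scores = np.asarray(scores, dtype=float)
    n = scores.shape[1]
    per_trial = [np.sum(row[t] > row) / (n - 1)
                 for row, t in zip(scores, true_idx)]
    return float(np.mean(per_trial))

# True piece outscores both alternatives -> 1.0; outscores one of two -> 0.5.
print(mean_rank_accuracy([[0.9, 0.1, 0.2]], [0]))   # 1.0
print(mean_rank_accuracy([[0.1, 0.9, 0.2]], [2]))   # 0.5
```

So the 59.2% above is a modest edge over the 50% chance level, not "59% of songs identified".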
Were this the case, it would be trivial to read the internal monologue from your brain activity with a device placed on your head. Can you find me examples of medical devices that can do this?
That’s just a library that applies a CNN to EEG, but that doesn’t show that it can actually extract text reliably. As far as I know, the machines that successfully do that use the nerve signals to vocal muscles, not the brain directly.
and you need to take into account the background I have. (synthetic?) telepathy is what I'm forced to deal with every day (and yes - with nearly no way to prove it to you). radiomyography and the microwave auditory effect are the best and most suitable explanation I've managed to find that is at least somewhat backed up by public scientific papers. no real contradictions; it certainly seems more truthful than "evil shamans hate you and your astral implants". I don't have hard evidence in the form of a working device, nor can I afford one. keep your pills to yourself
this particular library may not be as reliable as one would like. but the approach is fairly simplistic, and likely built without access to powerful data center computation. as far as (!) you know - you have already acknowledged the existence of such machines. what I'm advocating for is the existence of slightly more exotic sensing mechanisms - available for use en masse straight from the telecom towers. RMG as a successful substitute for EMG (which in turn is a substitute for EEG) in the context of deciphering whatever data is captured into an inner monologue.
My logic is pretty simple. If it’s hard or impossible to do something by placing a high conductivity sensor on the surface of a person’s skin, then it’s probably not possible to do it from a long distance right now. Doing this with telecom towers, which are randomly positioned relative to people, would be an absolute technological marvel, sci fi stuff.
and if you wonder why anyone would do such a thing - there's a peculiar coincidence: right before the voices became annoyingly obnoxious, the Russian State Revenue Service got hacked (not to mention there's been an ongoing war in a neighboring country for quite a while). I was not intoxicated and kept up a fairly healthy lifestyle. regardless of who's attacking or who's defending, surely enough - there's enough steam for some casual torture
also - my suspicion is that the promise of this kind of surveillance is precisely the reason for the data center construction boom. that, and augmented generative pornography, with some war simulations on the side
Evaluation of antenna suitability for the use in radiomyography
Dublin Institute of Technology, Dublin, Ireland
```
The envisioned application is radiomyography which aims to detect muscular activity by the means of electromagnetic waves coupled into the human body.
The paper concludes that it is possible to detect changes in the thickness and the properties of the muscle solely by evaluating the reflection coefficient of an antenna structure.
The ability to detect these changes strongly depends on the antenna type.
```
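The measurable quantity in that paper is the antenna's reflection coefficient, Gamma = (Z_load - Z0) / (Z_load + Z0). A toy calculation with invented load impedances shows the kind of shift a tissue change would produce:

```python
# Reflection coefficient of an antenna against a reference impedance:
# Gamma = (Z_load - Z0) / (Z_load + Z0). The two loads below are invented
# stand-ins for "relaxed" vs "flexed" tissue seen by the antenna.

def reflection_coefficient(z_load, z0=50 + 0j):
    return (z_load - z0) / (z_load + z0)

relaxed = reflection_coefficient(60 + 5j)   # hypothetical relaxed-muscle load
flexed = reflection_coefficient(70 + 5j)    # hypothetical flexed-muscle load

# A matched load reflects nothing; an impedance shift changes |Gamma|.
shift = abs(flexed) - abs(relaxed)
```

Note what the paper actually claims: changes in muscle thickness and properties are detectable with an antenna coupled into the body, and this depends strongly on antenna type; it says nothing about range or about neural signals.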
```
Unvoiced or silent speech recognition recognizes speech by observing the EMG activity of muscles associated with speech.
It is targeted for use in noisy environments, and may be helpful for people without vocal cords, with aphasia, with dysphonia, and more.
```
more or less the same crap is achieved via radiomyography from your local telecom towers. down to deciphering your inner monologue
Thanks for the link!
https://ieeexplore.ieee.org/document/9366412 (source of the Lin quote above)
here's an example - you can download and run the code yourself: https://github.com/CNN-for-EEG-classification/CNN-EEG
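What such a repo amounts to is a small convolutional net over multichannel EEG windows. An untrained forward pass in plain numpy (all shapes and sizes are illustrative assumptions, not the repo's actual architecture):

```python
import numpy as np

rng = np.random.default_rng(1)

# Untrained forward pass of a minimal 1-D CNN over one multichannel EEG
# window. Shapes and sizes are illustrative assumptions only.
n_channels, n_samples, n_classes = 8, 256, 4
x = rng.normal(size=(n_channels, n_samples))       # one EEG window

# Conv layer: 16 temporal filters of width 9, summed across channels + ReLU.
n_filters, width = 16, 9
w = rng.normal(size=(n_filters, n_channels, width)) * 0.1

feat = np.empty((n_filters, n_samples - width + 1))
for f in range(n_filters):
    acc = np.zeros(n_samples - width + 1)
    for c in range(n_channels):
        # reversed kernel turns np.convolve into cross-correlation
        acc += np.convolve(x[c], w[f, c][::-1], mode="valid")
    feat[f] = np.maximum(acc, 0.0)

pooled = feat.mean(axis=1)                         # global average pooling
w_out = rng.normal(size=(n_classes, n_filters)) * 0.1
logits = pooled @ w_out.T
probs = np.exp(logits - logits.max())
probs /= probs.sum()                               # softmax over classes
```

Running a CNN on EEG is the easy part; the contested question is how reliable the extracted labels are.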
What makes you a strategic target? I don’t have those voices.
I don't think it's that hard, nor impossible. I mean - here's a dude playing a video game with (!) somebody else's hand, 12 years ago: https://www.washington.edu/news/2013/08/27/researcher-contro...
Novel Muscle Sensing by Radiomyography (RMG) and Its Application to Hand Gesture Recognition
Cornell University, Ithaca, NY
https://pmc.ncbi.nlm.nih.gov/articles/PMC10950291/
https://ieeexplore.ieee.org/document/6711930 (source of the antenna suitability quote above)
https://en.wikipedia.org/wiki/Electromyography
radiomyography ~= form of electromyography
spying on Americans is illegal, so Americans outsource the spying on Americans to the British