[A] team of scientists in Switzerland is claiming that a fully functioning replica of a human brain could be built by 2020. … They are using one of the most powerful computers in the world to replicate the actions of the 100 billion neurons in the human brain. It is this approach - essentially copying how a brain works without necessarily understanding all of its actions - that will lead to success, the team hopes.
While this raises all sorts of fun things—like confounding ethical dilemmas and the singularity—let's see what this might do for language learning.
The article almost takes us directly to language learning:
And there are other questions, too, questions at the centre of the nurture versus nature debate. Would this human mind, for example, automatically feel guilt or would it need to be 'taught' a sense of morality first? And how would it respond to religion? Indeed, are these questions that a human mind asks of its own accord, or must it be taught to ask them first?
To that, I would add, how would this brain learn language? Do you just set it off on the internets and check back in a week? It sounds like they're trying to mimic an adult brain. Would the brain then skip the key developmental stages of infants in language learning?
I'll assume that, if they actually pull off the adult brain, they'll eventually figure out how to get language into that brain. And, if they do, the Turing test should be cake:
It is a simple test in which someone is asked to communicate, using a screen and keyboard, with a computer trying to mimic a human, and another, real human. If the judge cannot tell the machine from the other person, the computer has 'passed' the test. So far, every computer we have built has failed.
If this Swiss team's brain can pass this test, you could effectively have a translator on your computer that would be no different from a real person. Assuming the tech could be made to fit into a package a bit smaller than "one of the most powerful computers in the world", you could potentially have C-3PO (fluent in over six million forms of communication) on your iPhone.
My guess is that this would decrease the demand for language learning; why bother getting a language into your head when a truly effective digital translator could handle it all for you, most likely matching nuances better than most live translators possibly could? (Aside from C-3PO, there weren't very many polyglots in Star Wars, were there?) While part of me thinks that would be a shame (especially given all the time I've spent learning languages), it would also be pretty damn cool.
On the other hand, a digital brain could make a great native-speaker tutor, so I suppose the sword cuts both ways.
[B]rain cells activated by an experience keep one another on biological speed-dial, like a group of people joined in common witness of some striking event. Call on one and word quickly goes out to the larger network of cells, each apparently adding some detail, sight, sound, smell. The brain appears to retain a memory by growing thicker, or more efficient, communication lines between these cells.
My approach to language learning has always been one of multiple types of exposure. Take a new vocab word, for example. Let's say you come across it in a book. You've now got speed dial set up between that book and the word, and perhaps between the word and the sentence, the paragraph, the thing it referred to, and so on. Then you look it up. Now you've got the connections built to the meaning. Let's say you later hear it in a podcast. There's another connection. An example like this would seem to fit into the paradigm they suggest: you're building thicker connections to that word, and are thus more likely to learn it. Apply that to all units of language learning—words, phrases, grammar rules, characters, pronunciation, intonation, etc.—and you can see how varied exposure makes language learning easier.
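The speed-dial idea can be caricatured as a toy model (my own sketch, not anything from the research; the word and the contexts below are made up for illustration): each time you meet a word in a new context, you add a connection, and recall gets easier as the network of connections grows.

```python
from collections import defaultdict

# Toy model (purely illustrative): each exposure to a word in a new
# context adds a "speed-dial" connection between the word and that context.
connections = defaultdict(set)

def expose(word, context):
    """Record an exposure of `word` in a given context (book, podcast, ...)."""
    connections[word].add(context)

def recall_strength(word):
    """Crude proxy for recall: the more distinct contexts, the thicker the network."""
    return len(connections[word])

# The vocab-word example from the text, using a hypothetical word:
expose("saudade", "book")        # first encounter while reading
expose("saudade", "dictionary")  # looking up the meaning
expose("saudade", "podcast")     # hearing it again later
```

Three exposures in three different contexts leave the word with three connections, whereas a word you only ever saw on a flashcard would have one; that, very roughly, is why varied exposure beats rote repetition in this picture.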
A quick look at the ethical issues, and a clip from The Matrix, after the jump.
Beyond learning mechanics, there are some ethical issues involved here as well. Scientists have begun work on chemicals that affect memory, initially aimed at treating disorders, but performance enhancement is only a few steps beyond that. The issues largely parallel steroid use in sports:
“If this [critical memory] molecule is as important as it appears to be, you can see the possible implications,” said Dr. Todd C. Sacktor, a 52-year-old neuroscientist who leads the team at the SUNY Downstate Medical Center, in Brooklyn, which demonstrated its effect on memory. “For trauma. For addiction, which is a learned behavior. Ultimately for improving memory and learning.” … [W]hen scientists find a drug to strengthen memory, will everyone feel compelled to use it? … A substance that improved memory would immediately raise larger social concerns… “We know that people already use smart drugs and performance enhancers of all kinds, so a substance that actually improved memory could lead to an arms race,” Dr. Hyman said.
I can say for sure that I don't want to be one of the first guinea pigs to try something like this out, but if such a drug were truly proven safe and enhanced learning without screwing anything else up, it would certainly be something to consider.
Personally, however, rather than popping some pill and then using other learning techniques to learn a language, I'd rather take my languages the way Neo took his kung fu. Just find a way to upload it straight to my brain, please. Rather than "I know kung fu", I'd be able to open my eyes and say, "I know Korean".
Japan is once again leading the way in sort-of-creepy-but-still-pretty-damn-cool robots. The child-like robot below, known as CB2, has some interesting language-learning abilities:
In coming decades, [Osaka University professor Minoru] Asada expects science will come up with a "robo species" that has learning abilities somewhere between those of a human and other primate species such as the chimpanzee.
And he hopes that this little CB2 may lead the way into this brave new world, with the goal of having the robo-kid speaking in basic sentences within about two years, matching the intelligence of a two-year-old child.
Read the full article here, or just check out the video below. So far, all the robot appears to be able to say is the Japanese syllable え ("e").