1984, here we come!


    Maybe not. But we're on our way to mind reading.

    WHAT are you thinking about? Which memory are you reliving right now? You may think that only you can answer, but by combining brain scans with pattern-detection software, neuroscientists are prying open a window into the human mind.

    In the last few years, patterns in brain activity have been used to successfully predict what pictures people are looking at, their location in a virtual environment or a decision they are poised to make. The most recent results show that researchers can now recreate moving images that volunteers are viewing - and even make educated guesses at which event they are remembering.

    Last week at the Society for Neuroscience meeting in Chicago, Jack Gallant, a leading "neural decoder" at the University of California, Berkeley, presented one of the field's most impressive results yet. He and colleague Shinji Nishimoto showed that they could create a crude reproduction of a movie clip that someone was watching just by viewing their brain activity. Others at the same meeting claimed that such neural decoding could be used to read memories and future plans - and even to diagnose eating disorders.

    Understandably, such developments are raising concerns about "mind reading" technologies, which might be exploited by advertisers or oppressive governments (see "The risks of open-mindedness"). Yet despite - or perhaps because of - the recent progress in the field, most researchers are wary of calling their work mind-reading. Emphasising its limitations, they call it neural decoding.

    They are quick to add that it may lead to powerful benefits, however. These include gaining a better understanding of the brain and improved communication with people who can't speak or write, such as stroke victims or people with neurodegenerative diseases. There is also excitement over the possibility of being able to visualise something highly graphical that someone healthy, perhaps an artist, is thinking.

    So how does neural decoding work? Gallant's team drew international attention last year by showing that brain imaging could predict which of a group of pictures someone was looking at, based on activity in their visual cortex. But simply decoding still images alone won't do, says Nishimoto. "Our natural visual experience is more like movies."

    Nishimoto and Gallant started their most recent experiment by showing two lab members 2 hours of video clips culled from DVD trailers, while scanning their brains. A computer program then mapped different patterns of activity in the visual cortex to different visual aspects of the movies such as shape, colour and movement. The program was then fed over 200 days' worth of YouTube clips, and used the mappings it had gathered from the DVD trailers to predict the brain activity that each YouTube clip would produce in the viewers.

    Finally, the same two lab members watched a third, fresh set of clips which were never seen by the computer program, while their brains were scanned. The computer program compared these newly captured brain scans with the patterns of predicted brain activity it had produced from the YouTube clips. For each second of brain scan, it chose the 100 YouTube clips it considered would produce the most similar brain activity - and then merged them. The result was continuous, very blurry footage, corresponding to a crude "brain read-out" of the clip that the person was watching.
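The procedure described above - fit a feature-to-activity mapping, predict activity for a large candidate pool, then average the best-matching clips - can be sketched in a few lines. This is purely illustrative: the function names are made up, and a plain least-squares linear model stands in for the team's actual fitted encoding models.

```python
import numpy as np

def fit_encoding_model(features, voxels):
    # Least-squares map from per-second video features (shape, colour,
    # movement, ...) to measured voxel responses, learned from the
    # training clips watched in the scanner.
    W, *_ = np.linalg.lstsq(features, voxels, rcond=None)
    return W

def reconstruct_second(scan, cand_features, cand_frames, W, k=100):
    # Predict the activity each candidate clip would evoke, rank the
    # candidates by correlation with the measured scan, and merge the
    # frames of the top k into one blurry "brain read-out".
    predicted = cand_features @ W                      # (n_clips, n_voxels)
    sims = np.array([np.corrcoef(p, scan)[0, 1] for p in predicted])
    top = np.argsort(sims)[-k:]
    return cand_frames[top].mean(axis=0)
```

Averaging the top 100 candidates rather than taking the single best match is what makes the output continuous but blurry: details that the candidates disagree on are washed out, while shared structure (a bright torso, a skyline) survives.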

In some cases, this was more successful than others. When one lab member was watching a clip of the actor Steve Martin in a white shirt, the computer program produced a clip that looked like a moving, human-shaped smudge with a white "torso" - but the blob bore little resemblance to Martin, with nothing corresponding to the moustache he was sporting.

    Another clip revealed a quirk of Gallant and Nishimoto's approach: a reconstruction of an aircraft flying directly towards the camera - and so barely seeming to move - with a city skyline in the background omitted the plane but produced something akin to a skyline. That's because the algorithm is more adept at reading off brain patterns evoked by watching movement than those produced by watching apparently stationary objects.

    "It's going to get a lot better," says Gallant. The pair plan to improve the reconstruction of movies by providing the program with additional information about the content of the videos.

    Team member Thomas Naselaris demonstrated the power of this approach on still images at the conference. For every pixel in a set of images shown to a viewer and used to train the program, researchers indicated whether it was part of a human, an animal, an artificial object or a natural one. The software could then predict where in a new set of images these classes of objects were located, based on brain scans of the picture viewers.
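One way to picture Naselaris's demonstration - a deliberately simplified sketch, with made-up names and a linear decoder standing in for the published analysis - is as one learned map per object class, each turning a brain scan into per-pixel scores for that class:

```python
import numpy as np

CLASSES = ["human", "animal", "artificial", "natural"]

def train_decoders(scans, label_maps):
    # One linear map per class: brain scan -> per-pixel score that the
    # corresponding pixel of the viewed image belongs to that class.
    # scans: (n_images, n_voxels); label_maps: (n_images, n_pixels, n_classes)
    decoders = []
    for c in range(label_maps.shape[2]):
        W, *_ = np.linalg.lstsq(scans, label_maps[:, :, c], rcond=None)
        decoders.append(W)
    return decoders

def locate_objects(scan, decoders):
    # For a new image, seen only via its brain scan, pick the most
    # likely class at each pixel position.
    scores = np.stack([scan @ W for W in decoders], axis=-1)
    return scores.argmax(axis=-1)
```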

    Movies and pictures aren't the only things that can be discerned from brain activity, however. A team led by Eleanor Maguire and Martin Chadwick at University College London presented results at the Chicago meeting showing that our memory isn't beyond the reach of brain scanners.

A brain structure called the hippocampus is critical for forming memories, so Maguire's team focused its scanner on this area while 10 volunteers recalled videos they had watched of different women performing three banal tasks, such as throwing away a cup of coffee or posting a letter. When the volunteers were asked to recall one of these three memories, the researchers could tell which one was being relived with an accuracy of about 50 per cent.

    That's well above chance, says Maguire, but it is not mind reading because the program can't decode memories that it hasn't already been trained on. "You can't stick somebody in a scanner and know what they're thinking." Rather, she sees neural decoding as a way to understand how the hippocampus and other brain regions form and recall a memory.
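With three possible memories, chance is about 33 per cent, so 50 per cent is a real if modest signal. Analyses of this kind are pattern classifiers trained on hippocampal voxel activity; a minimal stand-in (illustrative only, not Maguire's actual method) is a nearest-centroid classifier:

```python
import numpy as np

def nearest_centroid_predict(train_X, train_y, test_X):
    # Average the training scans for each memory to get a voxel
    # "template", then assign each test scan to the memory whose
    # template it is closest to.
    labels = np.unique(train_y)
    centroids = np.stack([train_X[train_y == c].mean(axis=0) for c in labels])
    dists = np.linalg.norm(test_X[:, None, :] - centroids[None, :, :], axis=2)
    return labels[dists.argmin(axis=1)]
```

This also makes Maguire's caveat concrete: the classifier can only choose among templates it was trained on, so it can distinguish the three rehearsed memories but says nothing about an arbitrary, untrained thought.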

Maguire could tackle this by varying key aspects of the clips - the location or the identity of the protagonist, for instance - and seeing how those changes affect her team's ability to decode the memory. She is also keen to determine how memory encoding changes over the weeks, months or years after memories are first formed.

    Meanwhile, decoding how people plan for the future is the hot topic for John-Dylan Haynes at the Bernstein Center for Computational Neuroscience in Berlin, Germany. In work presented at the conference, he and colleague Ida Momennejad found they could use brain scans to predict intentions in subjects planning and performing simple tasks. What's more, by showing people, including some with eating disorders, images of food, Haynes's team could determine which suffered from anorexia or bulimia via brain activity in one of the brain's "reward centres".

Another focus of neural decoding is language. Marcel Just at Carnegie Mellon University in Pittsburgh, Pennsylvania, and his colleague Tom Mitchell reported last year that they could predict which of two nouns - such as "celery" and "airplane" - a subject is thinking of, at rates well above chance. They are now working on two-word phrases.

    Their ultimate goal of turning brain scans into short sentences is distant, perhaps impossible. But as with the other decoding work, it's an idea that's as tantalising as it is creepy.



    Next we need to figure out ways to induce specific impulses in the brain, and then we'll have mind control. Oh wait, we've already got that, too.

    Perhaps you are not particularly worried about the idea of remote-controlled insects spying on you, on behalf of the Pentagon. Darpa-funded researchers at the University of California, Berkeley would like to disabuse you of that notion. They’ve succeeded in "controlling a live rhinoceros beetle by radio," Tech-On reports.

Researchers hooked a series of six electrodes up to the brain and muscles of the insect. Then, during a demonstration at the MEMS 2009 academic conference in Sorrento, Italy, "they equipped the beetle with a module incorporating a circuit to send signals to the electrodes, wireless circuit, microcontroller and battery. The university has so far succeeded in several experiments of electrically controlling insects, but it used a radio control system this time."

    The researchers used rhinoceros beetles in this experiment because they can carry a weight of up to 3 [grams]. And another reason is that they look cool, according to the university.

It’s one of a number of Darpa-backed experiments to develop insect spies. The University of Michigan has its own cyborg beetles. University of Georgia researchers are implanting mini-machines into larval moths, so they can live to a ripe old, remote-controlled age. Then there’s the idea to use sex-starved insects to follow bank robbers. Seriously.
    Click here if you're having trouble sleeping.
    "We confess our little faults to persuade people that we have no large ones." - François de La Rochefoucauld

  • #2
    Come on, guys. We have mind control games now. The future is ****ing here!




    • #3
      To what extent can this help closeted gay men think they're acting on their fantasies, if any?



      • #4
        For that, my friend, you need a holodeck. Unfortunately, those are still a long way off.



        • #5
          More like a holodick



          • #6
            “As a lifelong member of the Columbia Business School community, I adhere to the principles of truth, integrity, and respect. I will not lie, cheat, steal, or tolerate those who do.”
            "Capitalism ho!"

