Monday, 25 October 2010


The robot that reads your mind to train itself

25 October 2010 Last updated at 00:02 GMT
By Lakshmi Sandhana, Technology journalist
Translation by Autumnson Blog


Rajesh Rao is a man who believes that the best type of robotic helper is one who can read your mind.
In fact, he's more than just an advocate of mind-controlled robots; he believes in training them through the power of thought alone.
His team at the Neural Systems Laboratory, University of Washington, hopes to take brain-computer interface (BCI) technology to the next level by attempting to teach robots new skills directly via brain signals.
Robotic surrogates that offer paralysed people the freedom to explore their environment, manipulate objects or simply fetch things have been the holy grail of BCI research for a long time.
Dr Rao's team began by programming a humanoid robot with simple behaviours which users could then select with a wearable electroencephalogram (EEG) cap that picked up their brain activity.
The brain generates what is known as a P300, or P3, signal involuntarily, each time it recognizes an object. This signal is caused by millions of neurons firing together in a synchronised fashion.
This has been used by many researchers worldwide to create BCI-based applications that allow users to spell a word, identify images, select buttons in a virtual environment and more recently, even play in an orchestra or send a Twitter message.
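The P300 selection scheme described above can be pictured with a toy sketch. This is an illustration only, not code from any real BCI system: every function and data value here is invented. The idea is that repeated flashes of each candidate item are averaged sample-by-sample, so the involuntary P300 deflection survives while uncorrelated background EEG averages toward zero, and the item with the strongest averaged response in the expected post-stimulus window is selected.

```python
# Toy sketch of P300-style target selection (illustrative only).
# Assumes epochs_per_item[i] holds EEG traces recorded after each flash
# of item i; real systems use many channels, filtering and trained
# classifiers rather than a bare peak comparison.

def average_epochs(epochs):
    """Average repeated trials sample-by-sample so the P300 stands out
    against background EEG noise (which averages toward zero)."""
    n = len(epochs)
    length = len(epochs[0])
    return [sum(trial[t] for trial in epochs) / n for t in range(length)]

def select_target(epochs_per_item, p300_window):
    """Pick the item whose averaged response peaks highest inside the
    post-stimulus window where a P300 is expected (~250-400 ms)."""
    start, end = p300_window
    best_item, best_peak = None, float("-inf")
    for item, epochs in epochs_per_item.items():
        avg = average_epochs(epochs)
        peak = max(avg[start:end])
        if peak > best_peak:
            best_item, best_peak = item, peak
    return best_item

# Invented toy data: item "B" carries a consistent bump in the window.
flat = [[0.1, -0.2, 0.0, 0.1, -0.1]] * 4
bump = [[0.1, -0.2, 2.0, 2.1, -0.1]] * 4
print(select_target({"A": flat, "B": bump, "C": flat}, (2, 4)))  # -> B
```

Averaging is the key trick: a single trial is far too noisy, so spellers and similar P300 applications repeat each flash many times before deciding.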

Skill set

The team's initial goal was for the user to send a command to the robot to process into a movement.
However, this requires programming the robot with a predefined set of very basic behaviours, an approach which Dr Rao ultimately found to be very limiting.
The team reasoned that giving the robot the ability to learn might just be the trick to allow a greater range of movements and responses.

"What if the user wants the robot to do something new?" Dr Rao asked.

The answer, he said, was to tap into the brain's "hierarchical" system used to control the body.

"The brain is organised into multiple levels of control including the spinal cord at the low level to the neocortex at the high level," he said.

"The low level circuits take care of behaviours such as walking while the higher level allows you to perform other behaviours.

"For example, a behaviour such as driving a car is first learned but later becomes an almost autonomous lower level behaviour, freeing you to recognize and wave to a friend on the street while driving."

To emulate this kind of behaviour - albeit in a more simplistic fashion - Dr Rao and his team are developing a hierarchical brain-computer interface for controlling the robot.

"A behaviour initially taught by the user is translated into a higher-level command. When invoked later, the details of the behaviour are handled by the robot," he said.
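One way to picture such a hierarchy is as a command table in which a taught behaviour is stored under a name and later expands back into the low-level primitives the robot already knows. This is a hypothetical sketch of the idea, not Dr Rao's actual implementation; the primitive names and behaviours are invented.

```python
# Sketch of a hierarchical command set (assumed design, not real code):
# a behaviour taught once becomes a named high-level command that can
# itself appear inside later behaviours.

PRIMITIVES = {"step_forward", "turn_left", "turn_right", "grasp", "release"}

learned = {}  # name -> sequence of commands (primitives or learned names)

def teach(name, sequence):
    """Store a taught behaviour under a new high-level name."""
    learned[name] = list(sequence)

def expand(command):
    """Recursively flatten a command into executable primitives,
    so the robot handles the details when the name is invoked later."""
    if command in PRIMITIVES:
        return [command]
    return [p for sub in learned[command] for p in expand(sub)]

teach("approach_table", ["step_forward", "step_forward", "turn_left"])
teach("fetch_cup", ["approach_table", "grasp", "turn_right"])  # reuses a learned command
print(expand("fetch_cup"))
# -> ['step_forward', 'step_forward', 'turn_left', 'grasp', 'turn_right']
```

This mirrors the driving analogy in the article: once "approach_table" is learned, invoking it no longer requires attending to the individual steps.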

A number of groups worldwide are attempting to create thought-controlled robots for various applications.

Early last year Honda demonstrated how their robot Asimo could lift an arm or a leg through signals sent wirelessly from a system operated by a user with an EEG cap.

Scientists at the University of Zaragoza in Spain are working on creating robotic wheelchairs that can be manipulated by thought.


On-the-job training
Designing a truly adaptive brain-robot interface that allows paralysed patients to directly teach a robot to do something could be immensely helpful, liberating them from the need to use a mouse and keyboard or touchscreen, designed for more capable users.

Using BCIs can also be a time-consuming and clumsy process, since it takes a while for the system to accurately identify the brain signals.

"It does make good sense to teach the robot a growing set of higher-level tasks and then be able to call upon them without having to describe them in detail every time - especially because the interfaces I have seen using... brain input are generally slower and more awkward than the mouse or keyboard interfaces that users without disabilities typically use," says Robert Jacob, professor of computer science at Tufts University.

Rao's latest robot prototype is "Mitra" - meaning "friend". It's a two-foot-tall humanoid that can walk, look for familiar objects and pick up or drop off objects. The team is building a BCI that can be used to train Mitra to walk to different locations within a room.


Once a person puts on the EEG cap they can choose to either teach the robot a new skill or execute a known command through a menu.

In the "teaching" mode, machine learning algorithms are used to map the sensor readings the robot gets to appropriate commands.

If the robot is successful in learning the new behaviour then the user can ask the system to store it as a new high-level command that will appear on the list of available choices the next time.

"The resulting system is both adaptive and hierarchical - adaptive because it learns from the user and hierarchical because new commands can be composed as sequences of previously learned commands," Dr Rao says.
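The adaptive half, where teaching mode maps the robot's sensor readings to appropriate commands, can be sketched with a 1-nearest-neighbour rule standing in for whatever machine-learning algorithm the real system uses. All names and sensor values here are invented for illustration.

```python
# Sketch of learning from the user (assumed design, not the lab's code):
# in teaching mode, the command the user selects is recorded against the
# robot's current sensor reading; afterwards the robot acts by looking up
# the closest stored reading.

import math

examples = []  # (sensor_reading, command) pairs gathered in teaching mode

def teach(reading, command):
    """Teaching mode: remember what the user wanted in this situation."""
    examples.append((list(reading), command))

def act(reading):
    """Return the command whose training reading is closest (Euclidean)."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(examples, key=lambda ex: dist(ex[0], reading))[1]

# Teaching mode: the user steers the robot and the pairs are stored.
teach([0.9, 0.1], "turn_left")   # obstacle sensed on the right
teach([0.1, 0.9], "turn_right")  # obstacle sensed on the left
teach([0.1, 0.1], "go_forward")  # clear path

print(act([0.8, 0.2]))  # closest to the first example -> turn_left
```

Once a behaviour generalises well enough, it could be saved under a name as a new high-level command, which is what makes the system both adaptive and hierarchical.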

The major challenge at the moment is getting the system to be accurate given how noisy EEG signals can be.

"While EEG can be used to teach the robot simple skills such as navigating to a new location, we do not expect to be able to teach the robot complex skills that involve fine manipulation, such as opening a medicine bottle or tying shoelaces," says Rao.

It may be possible to attain a finer degree of control either by utilising an invasive BCI or by allowing the user to select from videos of useful human actions that the robot could attempt to learn.

A parallel effort in the same laboratory is working on imitation-based learning algorithms that would allow a robot to imitate complex actions such as kicking a ball or lifting objects by watching a human do the task.

Dr Rao believes that there are very interesting times ahead as researchers explore whether the human brain can truly break out of the evolutionary confines of the human body to directly exert control over non-biological robotic devices.

"In some ways, our brains have already overcome some of the limitations of the human body by employing cars and airplanes to travel faster than by foot, cell phones to communicate further than by immediate speech, books and the internet to store more information than can fit in one brain," says Rao.

"Being able to exert direct control on the physical environment rather than through the hands and legs might represent the next step in this progression, if the ethical issues involved are adequately addressed."

http://www.bbc.co.uk/news/technology-11457127
Intelligent robots learn to deceive

British scientists plan to develop next-generation computers that may have emotions

DARPA's "PAL" - a personalised assistant that learns on the job

Robots will have their own internet

Robots can create their own language to communicate with each other

Japanese scientists develop a robot that can think and learn
