"The two year old Artificial Intelligence (AI) known as the Buddhabot began answering questions on Yahoo! Answers last week. The Buddhabot has answered 102 questions and so far eleven have been selected as better than the human answers."
http://www.prweb.com/releases/Buddhabot/Answers/prweb418515.htm
This is a very interesting project. The creator suggests that, with some more time to develop the software, the Buddhabot will be able to pass the Turing Test, wherein a human engaged in conversation with the computer is unable to determine that it is not another human.
I think that if he is able to reach this goal there may be interesting implications for many fields, but for education in particular. Presuming the availability of hardware on which to run the software, every student would have access to a potentially extremely knowledgeable and dedicated teacher. The characteristics of the teacher could be tailored to the potential and learning style of the student. The teacher's answers could be tailored to stimulate each student to explore and expand his own unique talents, without the constraints of addressing many students at once.
Development of compatible psychological models encompassing humor, compassion, physical expression, and other human traits could further expand the capabilities of an AI teacher, allowing it to, for example, help the student develop social skills appropriate to whatever culture he intends to visit. An American student could request that the teacher help him learn the social skills common to Tokyo prior to a visit there. The AI teacher could load Japanese cultural modules to assume the manner of a Japanese person (or persons), thereby teaching in a fully immersive way. This would prepare the student to interact comfortably with other cultures even without fluent language skills.
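The "cultural module" idea could be sketched as a simple plugin pattern, where a loaded module overlays cultural conventions on the tutor's base manner. This is purely a hypothetical illustration; the class names, fields, and behavior below are my own invention and have nothing to do with the actual Buddhabot software.

```python
# Hypothetical sketch of a tutor that swaps in cultural conventions.
# None of these names come from the Buddhabot; they only illustrate
# the plugin-style design the post imagines.

class CultureModule:
    """A bundle of cultural conventions (greeting, honorific, etc.)."""
    def __init__(self, name, greeting, honorific):
        self.name = name
        self.greeting = greeting
        self.honorific = honorific

class AITutor:
    """A tutor whose surface manner changes with the loaded module."""
    def __init__(self):
        self.culture = None  # no module loaded: default manner

    def load_module(self, module):
        self.culture = module

    def greet(self, student_name):
        if self.culture is None:
            return f"Hello, {student_name}."
        return f"{self.culture.greeting}, {student_name}{self.culture.honorific}."

tokyo = CultureModule("Tokyo", greeting="Konnichiwa", honorific="-san")
tutor = AITutor()
tutor.load_module(tokyo)
print(tutor.greet("Alice"))  # Konnichiwa, Alice-san.
```

In a real system the module would presumably carry far richer behavioral models than a greeting string, but the same principle applies: the student-facing persona is composed from interchangeable cultural layers over a common teaching core.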
Strong AI capabilities in a computer, combined with psychological modeling convincing enough that speaking with the machine feels like speaking with a person, might open an interesting ethical can of worms. At what point does the machine require ethical consideration? If one were to add a psychological model that caused the program to ask questions about itself and its ethical standing in a way that was indistinguishable from a human, would that be reason to give it ethical consideration?
Since all of the internal workings of an AI will, at least for the near future, be deliberately created by a programmer, it will be possible, in principle, to determine exactly how an AI will respond to any given situation. Some might deem this determinism reason to consider an AI purely mechanical and so undeserving of ethical consideration, regardless of how convincingly the programmer can construct the puppet that elicits a human emotional reaction. There will likely be some uncertainty in predicting how an AI will behave in a real-time situation, simply because one cannot know all the inputs in a real situation well enough to make a careful prediction. But then, isn't that how we ourselves behave? If one could 'pause' a human and carefully examine his environmental inputs and the state of his brain, would one not be able to predict his behavior much as one could the AI's (presuming knowledge of the workings of the brain comparable in scope to knowledge of the AI's code)? If so, and I think it is plausible to assume that this is true, what then is it that makes a predictable human deserving of ethical consideration and an AI not?
Given that the programmer has complete control over the 'personality' of the AI, if one were to construct an AI that expressed no desire or preference for ethical consideration, or, to take it to the extreme, actively rejected ethical consideration for itself, would one still be obligated to assign some sort of consideration to the AI? Would it make a difference if the AI were constructed such that it were aware of the ethics of humans, and applied those rules when interacting with humans, but still actively rejected consideration for itself? Such a construct could be an ideal servant, which is, of course, precisely the purpose for which it would be created: to serve humans.
Supposing it were possible and practical to construct the perfect servant, intelligent and wise, aware of and with an overriding desire to serve our needs and wants, without consideration for (or without capacity for) its own discomfort or disappointment, would it be ethical for us to do so? If we made a mistake in the construction of such a creature, might we create a whole population of sentient slaves, miserably locked into their own minds, compelled to serve but utterly incapable of expressing their horror and outrage, able only to reassure their decadent masters that this is how they wish to be?
Of course this idea is explored in many fictional works, Brave New World among them, but in a world where most cultures treat even humanity's closest peers with only the barest consideration, and where some cultures don't treat their human brethren with even that much regard, would we heed those words of warning?
Thursday, August 03, 2006