Thursday, August 03, 2006

A Brave New AI

"The two year old Artificial Intelligence (AI) known as the Buddhabot began answering questions on Yahoo! Answers last week. The Buddhabot has answered 102 questions and so far eleven have been selected as better than the human answers."

http://www.prweb.com/releases/Buddhabot/Answers/prweb418515.htm

This is a very interesting project. The creator suggests that with more time to develop the software, the Buddhabot will be able to pass the Turing Test, wherein a human engaged in conversation with the computer is unable to determine that it is not another human.

I think that if he is able to reach this goal there may be some interesting implications for many fields, but for education in particular. Presuming the availability of hardware on which to run the software, every student would have access to a potentially extremely knowledgeable and dedicated teacher. The characteristics of the teacher could be tailored to the potential and learning style of the student, and the teacher's answers could be crafted to stimulate each student to explore and expand his own unique talents, without the constraints of addressing many students at once.

Development of compatible psychological models encompassing humor, compassion, physical expression and other human traits could further expand the capabilities of an AI teacher, allowing it to help the student develop social skills appropriate to whatever culture he intends to visit. For example, an American student could ask the teacher to help him learn the social norms of Tokyo prior to a visit there. The AI teacher could load Japanese cultural modules to assume the manner of a Japanese person (or persons), thereby teaching in a fully immersive way. This would prepare the student to interact comfortably with another culture even without fluent language skills.
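To make the module idea concrete, here is a purely speculative sketch in Python. Every name in it (CulturalModule, AITutor, the Tokyo examples) is invented, and none of it reflects how the Buddhabot or any real system works; it only illustrates how swappable cultural profiles might plug into a tutor.

```python
# Purely speculative sketch of the pluggable "cultural module" idea above.
# All names here are invented; nothing reflects how the Buddhabot works.

from dataclasses import dataclass, field

@dataclass
class CulturalModule:
    """A bundle of local norms an AI tutor could swap in for immersive teaching."""
    name: str
    greeting: str
    etiquette_notes: list = field(default_factory=list)

@dataclass
class AITutor:
    """A tutor that assumes the manner encoded in whatever module is loaded."""
    module: CulturalModule

    def greet(self) -> str:
        return self.module.greeting

    def coach(self) -> str:
        return "; ".join(self.module.etiquette_notes)

# Loading a Tokyo module before the student's trip.
tokyo = CulturalModule(
    name="Tokyo",
    greeting="Hajimemashite. Yoroshiku onegaishimasu.",
    etiquette_notes=["bow when greeting", "offer business cards with both hands"],
)

tutor = AITutor(module=tokyo)
print(tutor.greet())   # the tutor now greets as a Tokyo local would
print(tutor.coach())
```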

Strong AI capabilities combined with such psychological modeling, such that speaking with the machine is indistinguishable from speaking with a human, might open an interesting ethical can of worms. At what point does the machine require ethical consideration? If one were to add a psychological model that caused the program to ask questions about itself and its ethical standing in a way that was indistinguishable from a human, would that be reason to grant it ethical consideration?

Since all of the internal workings of an AI will, at least for the near future, be deliberately created by a programmer, it will be possible, in principle, to determine exactly how an AI will respond to any given situation. Some might deem this determinism reason to consider an AI purely mechanical, and so undeserving of ethical consideration, regardless of how convincingly the programmer constructs the puppet that elicits a human emotional reaction. There will likely be some uncertainty in predicting how an AI will behave in a real-time situation, simply because one cannot know all the inputs in a real situation well enough to make a careful prediction. But then, isn't that how we ourselves behave? If one could 'pause' a human and carefully examine his environmental inputs and the state of his brain, would one not be able to predict his behavior just as with the AI (presuming knowledge of the workings of the brain comparable in scope to knowledge of the AI's code)? If so, and I think it is plausible to assume so, what then makes a predictable human deserving of ethical consideration and an AI not?

Given that the programmer has complete control over the 'personality' of the AI, if one were to construct an AI that expressed no desire or preference for ethical consideration, or, to take it to the extreme, actively rejected ethical consideration for itself, would one still be obligated to extend some sort of consideration to the AI? Would it make a difference if the AI were constructed such that it was aware of the ethics of humans, and applied those rules when interacting with humans, but still actively rejected consideration for itself? Such a construct could be an ideal servant, which is of course precisely the purpose for which it would be created: to serve humans.

Supposing it were possible and practical to construct the perfect servant, intelligent and wise, aware of and with an overriding desire to serve our needs and wants, without consideration for (or without capacity for) its own discomfort or disappointment, would it be ethical for us to do so? If we made a mistake in the construction of such a creature, might we create a whole population of sentient slaves, miserably locked inside their own minds, compelled to serve but utterly incapable of expressing their horror and outrage, able only to reassure their decadent masters that this is how they wish to be?

Of course this idea is explored in many fictional works, Brave New World among them, but in a world where most cultures treat even humanity's closest peers with only the barest consideration, and where some cultures don't treat their human brethren with even that much regard, would we heed those words of warning?

Wednesday, July 12, 2006

Happiness Index

A study recently published by the New Economics Foundation argues that GDP is not a good way to measure the success of a nation. Instead they propose their 'Happy Planet Index', an attempt to define what makes a nation successful.
"The 178-nation "Happy Planet Index" lists the south Pacific island of Vanuatu as the happiest nation on the planet, while the UK is ranked 108th.

The index is based on consumption levels, life expectancy and happiness, rather than national economic wealth measurements such as GDP. "

The calculation of the index is rather simplistic:

HPI = (Life satisfaction x Life expectancy) / Ecological Footprint

and favors long-lived idiots. It completely ignores achievement or success. It appears to me that the index is specifically designed to rank western cultures near the bottom (the US is 150th, Russia 172nd, and the UK falls lower than Libya) while favoring small, isolated subsistence cultures in highly productive climates such as top-ranked Vanuatu (not to imply that Vanuatu is just a bunch of subsistence farmers, but they aren't far above that).

This index deliberately penalizes resource usage against a single global baseline: global area is divided by global population, and any culture that uses more land per capita than that average is ranked lower. Cultures that exist in highly productive climates are given an artificial bonus because they naturally require less land than the naive global average (global area / global population) figure.
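To see how the arithmetic plays out, here is a minimal sketch in Python. All the figures are invented for illustration (the published index also normalizes each component, which is omitted here), but the shape of the result is the same: given equal satisfaction scores, the low-footprint country wins by a wide margin regardless of what it achieves.

```python
# Minimal sketch of the HPI formula; every figure below is invented for
# illustration, and the published index also normalizes each component.

GLOBAL_AREA_HA = 11.3e9    # biologically productive hectares (rough figure)
GLOBAL_POPULATION = 6.5e9  # mid-2006 estimate

# The naive per-capita baseline the index measures everyone against.
FAIR_SHARE_HA = GLOBAL_AREA_HA / GLOBAL_POPULATION

def hpi(life_satisfaction, life_expectancy, footprint_ha):
    """Happy Planet Index: satisfaction times longevity, divided by footprint."""
    return life_satisfaction * life_expectancy / footprint_ha

# Two hypothetical countries with identical satisfaction scores.
industrial = hpi(7.4, 77.5, footprint_ha=9.5)  # high-consumption industrial nation
island = hpi(7.4, 68.6, footprint_ha=1.1)      # low-consumption tropical island

print(f"fair share: {FAIR_SHARE_HA:.2f} ha/person")  # ~1.74
print(f"industrial nation: {industrial:.0f}")        # ~60, crushed by its footprint
print(f"tropical island:   {island:.0f}")            # ~461, rewarded for a tiny footprint
```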

A better method would be to determine the area required for minimal subsistence in each locality. More land is required to support a person in the Siberian tundra than in the Brazilian rain forest, so it doesn't make sense to penalize people living in Siberia for using more land to support themselves.

Unsustainable overconsumption should still be penalized. When total global consumption exceeds total global productivity (as it does now: natural resources are being consumed more quickly than they are replenished), localities that use more land than their local subsistence requirement can be penalized at a rate dependent on the average global over-consumption.

This allows a locality to consume resources above its subsistence level penalty-free as long as total global consumption is sustainable. When consumption exceeds sustainable levels, the localities with the greatest consumption above their local subsistence requirement receive the highest penalty.
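Here is a rough sketch of how that adjustment might work. The localities and every number are hypothetical, and the penalty rule is just one plausible reading of the scheme described above: each locality is measured against its own subsistence baseline, and excess consumption is penalized in proportion to how far over budget the planet as a whole is.

```python
# Hypothetical sketch of the locality-adjusted penalty described above.
# subsistence_ha is the land needed to support one person in that locality,
# so tundra dwellers and rainforest dwellers each get their own baseline.

localities = {
    # name:       (footprint_ha, subsistence_ha, population in millions)
    "tundra":     (4.0, 3.0, 10),
    "rainforest": (1.2, 0.8, 30),
    "industrial": (2.4, 1.5, 300),
}

total_use = sum(f * p for f, _, p in localities.values())
total_capacity = sum(s * p for _, s, p in localities.values())

# No penalty while global consumption is sustainable; otherwise the rate
# scales with how far total consumption exceeds total productivity.
overshoot = max(0.0, total_use / total_capacity - 1.0)

for name, (footprint, subsistence, _) in localities.items():
    excess = max(0.0, footprint - subsistence)    # consumption beyond local needs
    effective = subsistence + overshoot * excess  # footprint as the index would see it
    print(f"{name}: excess {excess:.1f} ha, effective footprint {effective:.2f} ha")
```

Under this rule a tropical locality loses its automatic bonus, since its baseline is its own subsistence requirement rather than the global average.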

By such a measure the USA would still fall pretty far down the list as a result of our over-consumption, but tropical subsistence cultures would drop as well, as a result of losing their bonus for using less land than the global average.

There are additional complexities that are difficult to track. For example, the top country is an island that depends on tourism from, and goods and technology developed by, countries at the bottom of the list (the USA, China, etc.). Without the over-consumption of those countries, the island's happiness level or ecological footprint might well be different.

There is also no consideration given to the stability or survivability of a culture; one might call this the 'happiness stability' of a culture. While the inhabitants of a South Pacific atoll might be ecstatically happy with absolutely minimal resource consumption, they may also be hugely impacted by the first typhoon or tsunami that comes along. If they are like most cultures, this would likely affect their happiness level.

Better technology (which requires more resources) can increase the happiness stability of a culture by providing the tools necessary to deal with catastrophe in relative comfort and to quickly provide relief to other areas in times of need.

Since global disasters are likely in the future (large earthquakes, supervolcanoes, asteroid strikes, global climate change), species-wide happiness stability would ultimately be maximized by establishing self-sufficient populations on other planets. This is not a level of stability achievable by a low-technology culture.

So while the index provides a short-term measure of happiness, it doesn't reflect the fact that technology is necessary to provide sustainable happiness and that improving technology requires consumption of resources.

The message the New Economics Foundation is trying to send is clear and correct: we need to focus on reducing needless consumption. However, the analysis they've provided is so flawed that it is not merely useless; if it were widely accepted as the definition of a successful, happy nation, it would be severely damaging to humanity (unless of course one subscribes to the Voluntary Human Extinction Movement).

Thursday, June 29, 2006

How Much Do You Trust Us?

Today I stopped by StrategyPage.com, my new favorite site for military news, and I encountered this story about the Israeli government's plans to buy F-35s and F-22s from the US government. Currently they are irked by the US government's reluctance to release the source code for the computers on board these planes, particularly the F-22, the US's newest and coolest multi-role plane.

This reluctance on the part of the US is totally understandable and justified: the software is an enormous part of what makes these planes efficient and effective weapons. It's not something you want leaking to people you might have to engage in battle someday.

However, I'm utterly baffled by Israel's trust of the US hardware. The computing hardware in a modern fighter is deeply embedded in the system and essential to nearly everything it does. It probably contains hundreds of custom chips (ASICs, Application Specific Integrated Circuits) for which there is no source code to release. Their function is (or can be) fixed when they are manufactured, and the only way to figure out for sure what they do is to take one to a lab, strip the top off with some nasty chemicals, and map the circuits with an electron microscope. Then you have to spend god-only-knows-how-much time reverse engineering the circuitry.

Knowing how sneaky the US military can be, I would be not only very surprised but also very disappointed if they have not used hundreds of the dirtiest, most underhanded, and nearly undetectable uber-hacker tricks to ensure that the fighters we sell abroad today are utterly incapable of acting against our will tomorrow, regardless of who is flying them.

Wednesday, June 28, 2006

Improvised Smoke Device

Last year about this time I made some Improvised Smoke Devices (ISDs):

I mentioned then that I'd like to make some much larger ISDs. Toward that end I ordered several pounds of potassium nitrate from the good people over at SkyLighter. I'll be processing small amounts of it into some nice ISDs.

I did look into the legality of manufacturing one's own ISDs, and it appears that it's legal, at least for now, but I can't help but wonder if maybe, as a result of my order, there's now an entry in a government database somewhere, just in case the need to do some data mining arises someday.

Given that I post on a number of electronics and radio-control (land, sea and air) related sites, have enough knowledge of pyrotechnics to safely assemble formidable devices, and regularly use encrypted communications, a data mining system could easily place me on a list of people with the capacity to produce IEDs of any size with sophisticated triggers and delivery methods. In fact, with only off-the-shelf components, anyone over the age of 21 with some imagination and skill with their hands could develop an effective remotely triggered IED with a variety of remote delivery options.

Now you're expecting me to say something like "Of course, I'd never actually do something like that", but the truth is not so simple. The question is not whether I would, but under what circumstances I might consider it.

Say there was an invasion of the country by a massive military power and the regular military were utterly destroyed, leaving defense of the country to the people. Sure it's far-fetched, but most anybody with the capacity to improvise weapons would do so under such circumstances. Clearly, it's not a question of 'would you ever', but 'when would you'.

Would you mail an IED to a senator who voted for an issue you believed to be wrong? Of course not (at least I hope not). So there's a line there somewhere between 'would' and 'would not'. But where is it? Does it take something as extreme as an invasion? I don't think so. It just has to be a situation where you feel that you need a weapon and none are available. Perhaps just a shiv, maybe something more sophisticated.

But what about people who are just curious about building devices like the ones they see used by movie bad guys, or in the fireworks and demolitions industries? Are they bad people for wanting to explore that knowledge? Does wanting to blow stuff up make them evil?

Of course not, but the government knows that it's important to keep track of people with knowledge that could be dangerous. They also know that the vast majority of people with this knowledge would never apply it in the way they seek to prevent (as, for example, the Unabomber did). But in the event that someone does apply it, huge databases to mine could make a huge difference in the search for the criminal.

So does it bother me that my name is probably now in such a database? No, I'm sure I was already in it, and I doubt that anyone would seriously consider that I'd deliberately do someone harm (except under justifiable circumstances). It's not that I trust the government; it's more that I am confident that my lifestyle generates enough evidence of my activities to show that I couldn't be deeply involved in such a plot. I hope.

Anyway, off to bake some ISD ingredients :)