August 2, 2007

The Tamagotchi-Effect

The Tamagotchi-Effect refers to the tendency of humans to readily see emotions in, and form attachments to, even simple robots and software agents. I have written about it before, focusing on how I believe this acceptance of emotion in robots will become a generation gap - a divide across which we won't understand our children and grandchildren. Now there is a great article in the Washington Post about how the Tamagotchi-Effect works even in the military (a short excerpt is below, but read the whole thing):

Finally it was down to one leg. Still, it pulled itself forward. Tilden was ecstatic. The machine was working splendidly.

The human in command of the exercise, however -- an Army colonel -- blew a fuse. The colonel ordered the test stopped.

Why? asked Tilden. What's wrong? The colonel just could not stand the pathos of watching the burned, scarred and crippled machine drag itself forward on its last leg. This test, he charged, was inhumane.


July 18, 2007

John F. Sowa on Fads and Fallacies about Logic

In a recent issue of IEEE Intelligent Systems, John F. Sowa wrote an interesting article that should be read by anyone interested in the logical side of the Semantic Web. Two quotes I particularly liked:

[...] computational complexity is important. But complexity is a property of algorithms, and only indirectly a property of problems, since most problems can be solved by different algorithms with different complexity. The language in which a problem is stated has no effect on complexity. Reducing the expressive power of a logic does not solve any problems faster; its only effect is to make some problems impossible to state.

and on Language and Logic:

What makes formal logic hard to use is its rigidity and its limited set of operators. Natural languages are richer, more expressive, and much more flexible. That flexibility permits vagueness, which some logicians consider a serious flaw, but a precise statement on any topic is impossible until all the details are determined. As a result, formal logic can only express the final result of a lengthy process of analysis and design. Natural language, however, can express every step from the earliest hunch or tentative suggestion to the finished specification.

In short, there are two equal and opposite fallacies about language and logic:  at one extreme, logic is unnatural and irrelevant; at the opposite extreme, language is incurably vague. A more balanced view must recognize the virtues of both:  logic is the basis for precise reasoning in every natural language; but without vagueness in the early stages of a project, it would be impossible to explore all the design options.

The entire article is available for free as a "preprint" here.


March 8, 2007

On The Parallel Future Of Programming

I wrote about this before, but it bears repeating:

  1. Processors are no longer getting faster at running single-threaded programs. In the past you could be sure that the next CPU generation would execute any program faster - this is no longer true.
  2. CPU development now centers on adding more and more processing cores - hence every compute-intensive application that wants to be fast needs to be multithreaded.
  3. Current programming languages and tools are mostly not well suited to writing concurrent programs (a sketch of what this looks like with today's Java follows below). In the coming years we will see a lot of development to address this shortcoming.
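
To make point 3 concrete, here is a minimal sketch of what parallelizing even a trivial task - summing an array - looks like with today's tools, in this case Java 5's java.util.concurrent library. The class name and the toy task are my own illustration, not taken from any of the linked talks:

    import java.util.ArrayList;
    import java.util.List;
    import java.util.concurrent.Callable;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.Future;

    // Hypothetical example: sum a large array with one task per core.
    public class ParallelSum {
        public static void main(String[] args) throws Exception {
            final long[] data = new long[8000000];
            for (int i = 0; i < data.length; i++) data[i] = i % 10;

            int cores = Runtime.getRuntime().availableProcessors();
            ExecutorService pool = Executors.newFixedThreadPool(cores);
            List<Future<Long>> parts = new ArrayList<Future<Long>>();
            int chunk = data.length / cores;

            for (int c = 0; c < cores; c++) {
                final int from = c * chunk;
                // the last task also takes the remainder of the array
                final int to = (c == cores - 1) ? data.length : from + chunk;
                parts.add(pool.submit(new Callable<Long>() {
                    public Long call() {
                        long sum = 0;
                        for (int i = from; i < to; i++) sum += data[i];
                        return sum;
                    }
                }));
            }

            long total = 0;
            for (Future<Long> part : parts) total += part.get(); // blocks until done
            pool.shutdown();
            System.out.println("sum = " + total);
        }
    }

Even this embarrassingly parallel problem needs explicit thread-pool plumbing, manual work splitting and blocking Future calls - exactly the kind of boilerplate that better control abstractions should hide.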

At FZI we just bought our first quad-core machines - but obviously 4 is not going to be the limit: Intel has already demoed an 80-core chip.

To learn more about this, you can read the posts at O'Reilly Radar here and here.

Google Video also has TechTalks about a proposal to add better control abstractions to Java (which could be a simple step to improve concurrent programming in Java) and about MapReduce - a control abstraction Google uses to more easily take advantage of multiple processors.
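
To make the MapReduce idea concrete, here is a toy, purely sequential sketch of it in Java (the class and method names are my own invention - Google's real framework runs the same two user-supplied functions distributed over thousands of machines):

    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    // Toy word count: map emits (word, 1) pairs, reduce sums all
    // values that share a key. Because map runs independently per
    // line and reduce independently per key, a framework can spread
    // both phases over many cores or machines.
    public class ToyMapReduce {

        static void map(String line, Map<String, List<Integer>> out) {
            for (String word : line.split("\\s+")) {
                if (word.length() == 0) continue;
                List<Integer> values = out.get(word);
                if (values == null) {
                    values = new ArrayList<Integer>();
                    out.put(word, values);
                }
                values.add(1); // emit (word, 1)
            }
        }

        static int reduce(List<Integer> counts) {
            int sum = 0;
            for (int c : counts) sum += c;
            return sum;
        }

        public static void main(String[] args) {
            String[] input = { "to be or not to be", "to see or not to see" };
            Map<String, List<Integer>> intermediate =
                new HashMap<String, List<Integer>>();
            for (String line : input) map(line, intermediate);   // map phase
            for (Map.Entry<String, List<Integer>> e : intermediate.entrySet())
                System.out.println(e.getKey() + ": " + reduce(e.getValue())); // reduce phase
        }
    }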

There's also an enjoyable video about how a modern computer game takes advantage of multiple cores (Alan Wake, the new game from the makers of Max Payne).


February 11, 2007

Apple Knowledge Navigator

Apple's 1987 vision of a computer interface of the future - not that different from descriptions of how people should interact with Semantic Web agents (video below, or watch it at Google Video).


December 12, 2006

Cyc Google TechTalk

Google Video has a video of a talk given by Douglas Lenat, the President and CEO of Cycorp. It's more than 70 minutes long, but worth the time of anyone interested in AI. I want to highlight two parts that I found particularly interesting:

It has been my belief for a while that general-purpose reasoners and theorem provers are only good for very few tasks (such as proving the correctness of a program) and that most real-world tasks instead need faster, task-specific reasoners or heuristics. For me this thought was always motivated by ideas from cognitive psychology (see for example the research into "Fast and Frugal" heuristics by the ABC Research Group in Germany). However, I always lacked good computer science arguments to back up this point - now at least I can say that Cycorp sees it the same way:

There is a single correct monolithic reasoning mechanism, namely theorem proving; but, in fact, it's so deadly slow that, really, if we ever fall back on our theorem prover, we're doing something wrong. By now, we have over 1,000 specialized reasoning modules, and almost all of the time, when Cyc is doing reasoning, it's running one or another of these particular specialized modules. (~32:20)

I also think that humans are almost constantly reorganizing the knowledge structures in their heads - most of the time becoming more effective at reasoning and quicker at learning. An example of this process is the forming of "thought entities" (what cognitive psychologists call chunks). There seems to be a limit on the number of thought entities that humans can manipulate in their short-term memory. This limit seems to be fixed for life and lies somewhere between 5 and 8. What does change with experience is the structure and complexity of these thought entities.

A famous example of the effect of experience on thought entities is the ability of expert chess players to recall chess positions. If you show the positions of the pieces from a normal game to expert chess players and amateurs, the experts will be much better at recalling the exact positions. But when you place the pieces randomly, both groups perform equally badly. The common explanation for this phenomenon is that the expert has more complex thought entities at her disposal. In normal chess positions she can find large familiar patterns - like "white has played opening A in variant B". These large and complex thought entities allow the expert to fit the positions of up to 32 chess pieces into the available slots. When the pieces are placed randomly, the structures familiar to the expert no longer appear, and she loses her advantage.
I have always wondered what the equivalent of this knowledge reorganization process might look like in logic-based systems; Cyc has one interesting answer:

Often what we do in a case like this, if we see the same kind of form occurring again and again and again, is we introduce a new predicate, in this case a relational exists, so that what used to be a complicated looking rule is now a ground atomic formula, in this case a simple ternary assertion in our language. (~21:15)
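
To make this concrete, here is my own hypothetical rendering of such a transformation (the rule and the predicate name are my illustration, not taken from the talk). A quantified rule that recurs for many concept pairs, such as

    (forall ?X) isa(?X, Person) => (exists ?Y) (mother(?X, ?Y) and isa(?Y, FemalePerson))

can be captured once by introducing a new ternary predicate, after which each occurrence of the pattern collapses into a single ground assertion:

    relationAllExists(mother, Person, FemalePerson)

A specialized reasoning module can then handle all assertions of this form directly, without re-expanding the quantifiers every time they are used.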
