Document (#27100)
- Editor
- Lakemeyer, G. and B. Nebel
- Title
- Exploring artificial intelligence in the new millennium
- Imprint
- San Francisco, CA : Morgan Kaufmann
- Year
- 2003
- Pages
- 404 p.
- Isbn
- 1-55860-811-7
- Footnote
- Review in: JASIST 55(2004) no.2, p.180-181 (J. Walker): "My initial reaction to this book was that it would be a useful tool for researchers and students outside of the computer science community who would like a primer on some of the many specialized research areas of artificial intelligence (AI). The book's editors note that over the last couple of decades the AI community has seen significant growth and suffers from a great deal of fragmentation. Someone trying to survey some of the most important research literature from the community would find it difficult to navigate the enormous amount of materials, journal articles, conference papers, and technical reports. There is a genuine need for a book such as this one that attempts to connect the numerous research pieces into a coherent reference source for students and researchers. The papers contained within the text were selected from the International Joint Conference on Artificial Intelligence 2001 (IJCAI-2001). The preface warns that it is not an attempt to create a comprehensive book on the numerous areas of research in AI or its subfields, but instead a reference source for individuals interested in the current state of some research areas within AI in the new millennium. Chapter 1 of the book surveys major robot mapping algorithms; it opens with a brilliant historical overview of robot mapping and a discussion of the most significant problems that exist in the field, with a focus on indoor navigation. The major approaches surveyed are the Kalman filter and an alternative to it, expectation maximization. Sebastian Thrun examines how all modern approaches to robotic mapping are probabilistic in nature. In addition, the chapter concludes with a very insightful discussion of what research issues still exist in the robotic mapping community, specifically in the area of indoor navigation. The second chapter contains very interesting research on developing digital characters based on lessons learned from dog behavior. The chapter begins similarly to chapter one in that the reasoning behind and history of such research are presented in an insightful and concise manner. Bruce M. Blumberg takes his readers on a tour of why developing digital characters in this manner is important by showing how they benefit from the modeling of dog training patterns, and transparently demonstrates how these behaviors are emulated.
In the third chapter, the authors present a preliminary statistical system for identifying the semantic roles of elements contained within a sentence, such as the topic or the individual(s) speaking. The historical context necessary for a reader to gain a true understanding of why the work is needed and what already exists is adequate, but lacking in many areas. For example, the authors examine the tension that exists between statistical systems and logic-based systems in natural language understanding in only a trivial manner. A high expectation is placed on the reader to have a strong knowledge of these two areas of natural language understanding in AI research. In the fourth chapter, Derek Long and Maria Fox examine the debate that has occurred within the AI community regarding automatically extracting domain-specific constraints for planning. The authors discuss two major planning approaches, knowledge-sparse and knowledge-rich. They introduce their own approach, which reuses common features from many planning problems with specialized problem-solvers, a process of recognizing common patterns of behavior using automated technologies. The authors construct a clear and coherent picture of the field of planning within AI as well as demonstrate a clear need for their research. Also, throughout the chapter there are numerous examples that provide readers with a clearer understanding of planning research. The major weakness of this chapter is the lack of discussion of the researchers' earlier versions of their planning system STAN (Static Analysis Planner). They make reference to previous papers that discuss them, but there is little to no direct discussion. As a result, the reader is left wondering how the researchers arrived at the current version, STAN5. In Chapter 5, David J. Fleet et al. look at visual motion analysis focusing on occlusion boundaries, applying probabilistic techniques like Bayesian inference and particle filtering. The work is most applicable in the area of robotic vision. The authors do an outstanding job of developing a smooth narrative flow while simplifying complex models for visual motion analysis. This would be a good chapter for a graduate student who is looking for a research topic in AI. In the sixth chapter, Frank Wolter and Michael Zakharyaschev deal with reasoning about time and space, which is a very difficult area of AI research. These two issues have been examined as separate entities in the past. The authors attempt to explore the two entities as one unit, using different methods to generate qualitative spatiotemporal calculi and drawing on previous results from the area of modal logic. The research is presented in such a way that a reader with inadequate knowledge of AI concepts will be quickly lost in the miasma of the research.
In Chapter 7, Jeff Rickel and W. Lewis Johnson have created a virtual environment with virtual humans for team training. The system is designed to allow a digital character to replace team members who may not be present. The system is also designed to allow students to acquire the skills to occupy a designated role and to help coordinate their activities with their teammates. The paper presents a complex concept in a very manageable fashion. In Chapter 8, Jonathan Yedidia et al. study the initial issues that make up reasoning under uncertainty. This type of reasoning, in which the system takes in facts about a patient's condition and makes predictions about the patient's future condition, is a key issue being looked at by many medical expert system developers. Their research is based on a new form of belief propagation, derived by generalizing existing probabilistic inference methods that are widely used in AI and numerous other areas such as statistical physics. The ninth chapter, by David McAllester and Robert E. Schapire, looks at the basic problem of learning a language model. This is something that would not be challenging for most people, but it can be quite arduous for a machine. The research focuses on a new technique, the leave-one-out estimator, that is used to investigate why statistical language models have had such success in this area of research. In Chapter 10, Peter Baumgartner extends simplified theorem-proving techniques, which have been applied very effectively in propositional logic, to the first-order case. The author demonstrates how his new technique surpasses existing techniques in this area of AI research. The chapter simplifies a complex subject area, so that almost any reader with a basic background in AI could understand the theorem proving. In Chapter 11, David Cohen et al. analyze complexity issues in constraint satisfaction, which is a common problem-solving paradigm. The authors lay out how tractable classes of constraint solvers can be combined to create new classes that are tractable and more expressive than previous classes. This is not a chapter for an inexperienced student or researcher in AI. In Chapter 12, Jaana Kekäläinen and Kalervo Järvelin examine the question of finding the most important documents for any given query in text-based retrieval. The authors put forth two new measures of relevance and attempt to show how expanding user queries based on facets of the domain benefits retrieval. This is a great interdisciplinary chapter for readers who do not have a strong AI background but would like to gain some insights into practical AI research. In Chapter 13, Tony Fountain et al. used machine learning techniques to help lower the cost of functional tests for integrated circuits (ICs) during the manufacturing process. The researchers used a probabilistic model of failure patterns extracted from existing data, which allowed them to generate a decision-theoretic policy to guide and optimize the testing of ICs. This is another great interdisciplinary chapter for a reader interested in an actual physical example of an AI system, though it requires some AI knowledge.
The book does achieve its aim of being a starting point for someone interested in the state of some areas of AI research at the beginning of the new millennium. The book's most irritating feature is the differing writing styles of the authors. The book is organized as a collection of papers, similar to a typical graduate survey course packet, and as a result it does not possess a narrative flow. The book also contains a number of other major weaknesses, such as the lack of an introductory or concluding chapter. The book could greatly benefit from an introductory chapter that would introduce readers to the areas of AI, explain why such a book is needed, and explain why each author's research is important. The manner in which the book currently handles these issues is a preface that addresses some of the above points in a superficial manner. Such an introductory chapter could also be used to expound on what level of AI, mathematical, and statistical knowledge is expected of readers in order to gain maximum benefit from this book. A concluding chapter would be useful to readers interested in the other areas of AI not covered by the book, as well as open issues common to all of the research presented. In addition, most of the contributors come exclusively from the computer science field, which heavily slants the work toward the computer science community. A great deal of the research presented is being used by a number of research communities outside of computer science, such as biotechnology and information technology. A wider audience for this book could have been achieved by including a more diverse range of authors showing the interdisciplinary nature of many of these fields. Also, the book's editors state, "The reader is expected to have basic knowledge of AI at the level of an introductory course to the field" (p. vii), which is not the case for this book. Readers need at least a strong familiarity with many of the core concepts within AI, because a number of the chapters are shallow and terse in their historical overviews. Overall, this book would be a useful tool for a professor putting together a survey course on AI research. Most importantly, the book would be useful for eager graduate students in need of a starting point for their thesis research. This book is best suited as a reference guide for individuals with a strong familiarity with AI."
- Field
- Computer science
Similar documents (content)
-
Williamson, N.J.: Classification in the Millennium (1997)
0.78
-
San Segundo Manuel, R.: From the invalidity of a general classification theory to a new organization of knowledge for the millennium to come (2008)
0.74
-
Special volume on empirical methods (1996)
0.70
-
Frontiers in problem solving : phase transitions and complexity (1996)
0.70
-
Gauch, S.: Intelligent information retrieval : an introduction (1992)
0.66