I left the world of research years ago. And I mean years ago.
This page mostly chronicles old work from my previous life as an academic. Most of the links here are broken; you can find versions that are more likely to work at http://research.google.com/pubs/DavidCohn.html.
(Past) Interests
- Learning structure from document bases – the vast majority of electronic documents reside in unstructured document bases. Hypertext remains woefully inadequate: it is labor-intensive, static, one-directional, and limited to its author's perspective. And once we move beyond traditional text documents to audio, video, and other forms of recorded information, even those rudimentary benefits of hypertext are missing.
- Can we automate the extraction of relationships between documents to generate a dynamic structure, one that lets users navigate and manipulate unstructured document bases at the conceptual level?
- Active learning – many learning problems in pattern recognition and information retrieval give the learner a chance to select or influence the training data it receives. How should it select that data so as to learn most quickly, at the least cost?
- Also dabbled in: thin-client robotics, decision support and optimization.
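The document-structure idea above can be illustrated with a toy sketch: automatically proposing links between documents by comparing their content. This is purely illustrative (the corpus and the bag-of-words cosine measure are invented for the example; the papers below use richer probabilistic models of content and connectivity).

```python
# Toy sketch: propose links between documents by content similarity.
# Bag-of-words cosine is a deliberately simple stand-in for the
# probabilistic models used in the actual research.
import math
from collections import Counter

def vectorize(text):
    """Bag-of-words term counts for one document."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two term-count vectors."""
    dot = sum(a[t] * b[t] for t in a if t in b)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Hypothetical three-document corpus, invented for illustration.
docs = {
    "rl-intro":   "reinforcement learning with markov decision processes",
    "mdp-merge":  "dynamically merging markov decision processes",
    "svm-active": "active learning with support vector machines",
}

# Link each document to its nearest neighbor by content: a crude,
# automatically generated "hypertext" structure.
links = {}
for name, text in docs.items():
    others = [(cosine(vectorize(text), vectorize(t2)), n2)
              for n2, t2 in docs.items() if n2 != name]
    links[name] = max(others)[1]
```

Here the two MDP documents link to each other because they share vocabulary, while the SVM document links elsewhere; a real system would also need to handle synonymy, scale, and the direction and meaning of each link.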
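The active-learning question can likewise be made concrete with a classic toy example: learning a 1-D threshold concept. A learner that always queries the point it is most uncertain about performs binary search, needing only about log(n) labels where passive sampling needs on the order of n. The setup below is a standard textbook illustration, not taken from any specific paper here.

```python
# Toy sketch of pool-based active learning on a 1-D threshold task.
# True concept: label = 1 iff x >= theta. Querying the most uncertain
# point reduces the version space (an interval) by roughly half per query.

def oracle(x, theta=0.5):
    """The costly labeler: returns the true label of x."""
    return 1 if x >= theta else 0

def active_learn(pool, budget):
    """Repeatedly query the unlabeled point nearest the middle of the
    interval where the threshold could still lie."""
    lo, hi = min(pool), max(pool)
    queries = 0
    while queries < budget:
        mid = (lo + hi) / 2
        candidates = [x for x in pool if lo < x < hi]
        if not candidates:
            break
        x = min(candidates, key=lambda c: abs(c - mid))
        if oracle(x):
            hi = x   # threshold is at or below x
        else:
            lo = x   # threshold is above x
        queries += 1
    return (lo + hi) / 2  # estimate of the threshold

pool = [i / 1000 for i in range(1000)]
estimate = active_learn(pool, budget=10)
```

With a pool of 1000 points, ten well-chosen queries pin the threshold down to about one part in a thousand, whereas ten random labels would leave far more uncertainty; the interesting research questions arise when "uncertainty" is harder to define than an interval midpoint.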
The Papers
- David Cohn, Deepak Verma and Karl Pfleger (2006). Recursive Attribute Factoring, in B. Scholkopf and J. Platt and T. Hoffman, eds., Advances in Neural Information Processing Systems 19.
- David Cohn (2003). Informed Projections, in S. Becker et al., eds., Advances in Neural Information Processing Systems 15.
- David Cohn and Thomas Hofmann (2001). The Missing Link – A Probabilistic Model of Document Content and Hypertext Connectivity, in T. Leen et al., eds., Advances in Neural Information Processing Systems 13.
- Adam Berger, Rich Caruana, David Cohn, Dayne Freitag, and Vibhu Mittal (2000). Bridging the lexical chasm: Statistical approaches to answer-finding, Proceedings of the 23rd Annual Conference on Research and Development in Information Retrieval (ACM SIGIR). Athens, Greece.
- David Cohn and Huan Chang (2000). Probabilistically Identifying Authoritative Documents, Proceedings of the Seventeenth International Conference on Machine Learning. Stanford, CA.
- Huan Chang, David Cohn and Andrew McCallum (2000). Creating Customized Authority Lists, Proceedings of the Seventeenth International Conference on Machine Learning. Stanford, CA.
- Greg Schohn and David Cohn (2000). Less is More: Active Learning with Support Vector Machines, Proceedings of the Seventeenth International Conference on Machine Learning. Stanford, CA.
- Brigham Anderson, Andrew Moore and David Cohn (2000). A Nonparametric Approach to Noisy and Costly Optimization, Proceedings of the Seventeenth International Conference on Machine Learning. Stanford, CA.
Earlier Research
- Satinder Singh and David Cohn (1998). How to dynamically merge Markov decision processes, in M. Jordan et al., eds., Advances in Neural Information Processing Systems 10.
- David Cohn (1997). Minimizing Statistical Bias with Queries, in M. Mozer et al., eds., Advances in Neural Information Processing Systems 9. Also appears as AI Lab Memo 1552, CBCL Paper 124.
- David Cohn and Satinder Singh (1997). Predicting lifetimes in dynamically allocated memory, in M. Mozer et al., eds., Advances in Neural Information Processing Systems 9.
- David Cohn, Zoubin Ghahramani, and Michael Jordan (1996). Active learning with statistical models, Journal of Artificial Intelligence Research (4): 129-145.
- David Cohn (1996). Neural network exploration using optimal experiment design, Neural Networks (9)6: 1071-1083. Preliminary version available online as AI Lab Memo 1491, CBCL Paper 99.
- David Cohn, Les Atlas and Richard Ladner (1994). Improving generalization with active learning, Machine Learning 15(2): 201-221.
- David Cohn, Eve Riskin and Richard Ladner (1994). The theory and practice of vector quantizers trained on small training sets, IEEE Transactions on Pattern Analysis and Machine Intelligence 16(1): 54-65.
- David Cohn (1994). Queries and exploration using optimal experiment design, in J. Cowan et al., eds., Advances in Neural Information Processing Systems 6.