artificial intelligence

AI and cognition

Deep learning models function as important tools in cognitive science and in neighboring fields such as linguistics and neuroscience. A major theme of my work to date is how a better understanding of the inductive biases of deep neural networks can ground principled methods for reliable prediction and control with AI models in science. I see this work as continuous with my interest in the philosophy of cognitive science: understanding the inductive biases encoded in functionally connected ensembles of neurons is also a central concern of research on human and animal cognition. In previously published work, I defend the use of deep neural networks as models of mid-level processing in specific brain areas. I plan to extend this work to the current revival of interest in classic debates concerning cognitive architectures and the format of mental representations. My present work on symmetries and inductive biases in deep neural networks provides a framework for understanding the neural mechanisms underlying aspects of cognition such as geometric reasoning and object recognition. This framework stands as a genuine alternative to the symbolic, language-of-thought models currently enjoying a resurgence in the philosophy of cognitive science.

I am also developing a project that closely examines the role of topological structures called manifolds in explaining the functioning of artificial and biological neural networks. Manifolds are continuous, low-dimensional structures embedded in high-dimensional spaces of neural activity, and recent research suggests that they play an important role in neural computation. Yet their ontological and explanatory status remains unclear. My ongoing project, “Data Modelling and Exploratory Concept Formation in Neuroscience,” examines the dimensionality reduction techniques that neuroscientists use to uncover neural manifolds. I argue that the concept of a neural manifold is the subject of an ongoing process of conceptual stabilization, and that getting clear about the ontological status of manifolds and their explanatory role in neuroscience requires new methodological standards. To this end, I advocate a variety of operationalism, construed as a theory of conceptual extension: by connecting the concept of a neural manifold to well-defined operations, it can serve as a necessary epistemic guardrail on the way to a more robust empirical grounding for the concept.

I hope to further integrate this project with my work on deep neural networks in cognitive science by exploring whether and how manifolds fit within the broadly mechanistic paradigm of explanation in contemporary neuroscience. In particular, I am interested in relating manifold learning to interventionist methods for investigating the performance of deep neural networks.
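To make concrete what it means to "uncover" a manifold with dimensionality reduction, the following minimal sketch simulates a population of neurons with von Mises (bump-like) tuning to a circular latent variable, in the spirit of head-direction cells. The tuning model, parameter values, and use of PCA are my own illustrative assumptions, not a reconstruction of any published analysis: the point is only that activity recorded in a 100-dimensional space can be well described by a two-dimensional ring.

```python
# Illustrative sketch: synthetic population activity whose intrinsic
# structure is a ring, recovered by PCA. All parameters are arbitrary.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

n_neurons, n_timepoints = 100, 2000
theta = rng.uniform(0, 2 * np.pi, n_timepoints)          # latent circular variable
preferred = np.linspace(0, 2 * np.pi, n_neurons, endpoint=False)

# Von Mises tuning: each neuron fires most near its preferred angle.
rates = np.exp(2.0 * np.cos(theta[:, None] - preferred[None, :]))
activity = rates + rng.normal(scale=0.5, size=rates.shape)  # noisy observations

# The data live in a 100-dimensional activity space, but most variance
# is confined to ~2 dimensions: the population traces out a ring.
pca = PCA(n_components=10).fit(activity)
embedded = pca.transform(activity)[:, :2]

print("variance explained by first 2 PCs:",
      pca.explained_variance_ratio_[:2].sum().round(3))

# A rotation-invariant check of ring structure: points in the 2-D
# embedding lie at roughly constant radius from the origin.
radius = np.linalg.norm(embedded, axis=1)
print("radius coefficient of variation (small => ring-like):",
      (radius.std() / radius.mean()).round(3))
```

In this toy setting the latent variable is known by construction, so the recovered ring is trivially "real." In actual recordings the latent structure is unknown, which is precisely where the questions about the ontological and explanatory status of the recovered manifold arise.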
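As a toy sketch of the interventionist style of investigation I have in mind, one can ablate individual hidden units of a network and measure how much its outputs change. The untrained random network below stands in for a trained model, and clamping a unit to zero is just one possible intervention; the example is mine and is meant only to exhibit the logic of the probe.

```python
# Illustrative sketch: an interventionist probe that ablates one hidden
# unit at a time and records the resulting output disruption.
import numpy as np

rng = np.random.default_rng(1)

# A random two-layer network stands in for a trained model.
W1 = rng.normal(size=(20, 8))   # input (20-d) -> hidden (8 units)
W2 = rng.normal(size=(8, 3))    # hidden -> output (3-d)

def forward(x, ablate=None):
    """Run the network, optionally clamping one hidden unit to zero."""
    h = np.tanh(x @ W1)
    if ablate is not None:
        h = h.copy()
        h[:, ablate] = 0.0      # the intervention: fix the unit's activity
    return h @ W2

x = rng.normal(size=(500, 20))  # probe inputs
baseline = forward(x)

# The causal contribution of each unit, read off as mean output change.
for unit in range(W1.shape[1]):
    effect = np.abs(forward(x, ablate=unit) - baseline).mean()
    print(f"unit {unit}: mean |delta output| = {effect:.3f}")
```

Zeroing a unit is the computational analogue of a lesion study; replacing its activity with a mean value or with activity recorded on another input would fit the same interventionist schema, and intervening on directions within a learned manifold rather than on single units is one way of connecting such probes to manifold learning.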