Show simple item record

dc.creator: Subotić, Vanja
dc.date.accessioned: 2024-03-13T13:53:42Z
dc.date.available: 2024-03-13T13:53:42Z
dc.date.issued: 2024
dc.identifier.issn: 0039-7857
dc.identifier.uri: http://reff.f.bg.ac.rs/handle/123456789/6276
dc.description.abstract: Deep learning (DL) is a statistical technique for pattern classification through which AI researchers train artificial neural networks containing multiple layers that process massive amounts of data. I present a three-level account of the explanation that can reasonably be expected from DL models in cognitive neuroscience, one that illustrates the explanatory dynamics within a future-biased research program (Feest, Philosophy of Science 84:1165–1176, 2017; Doerig et al., Nature Reviews Neuroscience 24:431–450, 2023). Relying on the mechanistic framework (Craver, Explaining the Brain: Mechanisms and the Mosaic Unity of Neuroscience, Clarendon, 2007; Stinson, in Eppur si muove: Doing History and Philosophy of Science with Peter Machamer, Springer, 2017, and The Routledge Handbook of the Computational Mind, Routledge, 2018), I develop an account that corresponds to the stages of mechanism discovery, i.e., our shifting epistemic position and epistemic goals, and propose a representative model for each level. Generic, theoretical DL models at Level 1 address the general features of a cognitive phenomenon through exploration and provide how-possibly explanations. DL models at Level 2, by contrast, either identify interactions between features or represent the epistemic stage at which the researcher is still unsure whether the modeled features are crucial or arbitrary; these models should provide how-plausibly explanations. Finally, specific DL models of specific brain areas, i.e., ersatz models filled with relevant cognitive and neuroscientific details, sit at Level 3, where a researcher can advance how-actually explanations of cognitive phenomena. The main strength of this account is that it elucidates both global and local explanatory dynamics (cf. Dresow, Synthese 199:10441–10474, 2021). The former occurs when the transition between levels follows the process of obtaining more details about a particular cognitive phenomenon through multiple DL models; the latter involves cases in which the transition between levels takes place within a single DL model by elucidating its internal mechanisms (e.g., using Explainable AI techniques to render the model more transparent).
dc.language.iso: en
dc.publisher: Springer
dc.rights: closedAccess
dc.rights.uri: https://creativecommons.org/licenses/by/4.0/
dc.source: Synthese
dc.subject: connectionism
dc.subject: deep learning
dc.subject: explanatory dynamics
dc.subject: mechanistic explanation
dc.title: Exploring, expounding & ersatzing: a three-level account of deep learning models in cognitive neuroscience.
dc.type: article
dc.rights.license: BY
dc.citation.rank: M21~
dc.citation.spage: 94
dc.citation.volume: 203
dc.identifier.doi: 10.1007/s11229-024-04514-1
dc.type.version: publishedVersion


Documents

Files | Size | Format | View

There are no files associated with this record.

This document appears in the following collections
