Neurons and Cognition

New submissions

[ total of 9 entries: 1-9 ]

New submissions for Fri, 3 May 24

[1]  arXiv:2405.01012 [pdf, other]
Title: Correcting Biased Centered Kernel Alignment Measures in Biological and Artificial Neural Networks
Comments: ICLR 2024 Re-Align Workshop
Subjects: Neurons and Cognition (q-bio.NC); Computer Vision and Pattern Recognition (cs.CV)

Centered Kernel Alignment (CKA) has recently emerged as a popular metric for comparing activations from biological and artificial neural networks (ANNs) in order to quantify the alignment between internal representations derived from stimulus sets (e.g., images, text, video) presented to both systems. In this paper, we highlight issues the community should take into account when using CKA as an alignment metric with neural data. Neural data lie in the low-data, high-dimensionality domain, one of the regimes in which (biased) CKA yields high similarity scores even for pairs of random matrices. Using fMRI and MEG data from the THINGS project, we show that if biased CKA is applied to representations of different sizes in this regime, they are not directly comparable, owing to biased CKA's sensitivity to differing feature-sample ratios rather than to stimulus-driven responses. This situation can arise both when comparing a pre-selected region of interest (ROI) to multiple ANN layers and when determining to which ANN layer multiple ROIs or sensor groups of differing dimensionality are most similar. We show that biased CKA can be artificially driven to its maximum value when using independent random data with different sample-feature ratios. We further show that shuffling sample-feature pairs of real neural data does not drastically alter biased CKA similarity relative to unshuffled data, indicating an undesirable insensitivity to stimulus-driven neural responses. Positive alignment of true stimulus-driven responses is only achieved by using debiased CKA. Lastly, we report findings suggesting that biased CKA is sensitive to the inherent structure of neural data, differing from shuffled data only when debiased CKA detects stimulus-driven alignment.
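For readers who want to check this regime themselves, here is a minimal NumPy sketch of biased (linear-kernel) CKA and its debiased counterpart based on the U-statistic HSIC estimator (Song et al., 2012). This is an illustration of the standard formulas, not the authors' code:

    import numpy as np

    def centre(K):
        # Double-center a Gram matrix: H K H with H = I - 11^T / n.
        n = K.shape[0]
        H = np.eye(n) - np.ones((n, n)) / n
        return H @ K @ H

    def biased_cka(X, Y):
        # Linear-kernel CKA on (n_samples, n_features) matrices.
        Kc, Lc = centre(X @ X.T), centre(Y @ Y.T)
        return np.sum(Kc * Lc) / (np.linalg.norm(Kc) * np.linalg.norm(Lc))

    def unbiased_hsic(K, L):
        # U-statistic HSIC estimator; diagonals are zeroed out.
        n = K.shape[0]
        K, L = K.copy(), L.copy()
        np.fill_diagonal(K, 0.0)
        np.fill_diagonal(L, 0.0)
        return (np.sum(K * L)
                + K.sum() * L.sum() / ((n - 1) * (n - 2))
                - 2.0 * K.sum(0) @ L.sum(0) / (n - 2)) / (n * (n - 3))

    def debiased_cka(X, Y):
        K, L = X @ X.T, Y @ Y.T
        return unbiased_hsic(K, L) / np.sqrt(unbiased_hsic(K, K) * unbiased_hsic(L, L))

    # Few samples, many features: independent noise looks "aligned" under biased CKA.
    rng = np.random.default_rng(0)
    X, Y = rng.standard_normal((20, 5000)), rng.standard_normal((20, 100))
    print(biased_cka(X, Y))    # spuriously high for independent random data
    print(debiased_cka(X, Y))  # near zero (possibly slightly negative), as expected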

[2]  arXiv:2405.01148 [pdf, other]
Title: Qualia and the Formal Structure of Meaning
Comments: 28 pages
Subjects: Neurons and Cognition (q-bio.NC); Artificial Intelligence (cs.AI); Category Theory (math.CT); History and Philosophy of Physics (physics.hist-ph)

This work explores the hypothesis that subjectively attributed meaning constitutes the phenomenal content of conscious experience. That is, phenomenal content is semantic. This form of subjective meaning manifests as an intrinsic, non-representational character of qualia. Empirically, subjective meaning is ubiquitous in conscious experience, and we point to phenomenological studies that support this. Furthermore, this notion of meaning closely relates to what Frege, in metaphysics and the philosophy of language, calls "sense", and it aligns with Peirce's "interpretant" in semiotics. We discuss how Frege's sense can be extended to the raw feels of consciousness; both sense and reference play a role in phenomenal experience. Moreover, within the context of the mind-matter relation, we provide a formalization of the subjective meaning associated with one's mental representations. Identifying the precise maps between the physical and mental domains, we argue that syntactic and semantic structures transcend language and are realized within each of these domains. Formally, meaning is a relational attribute, realized via a map that interprets the syntactic structures of a formal system within an appropriate semantic space. The image of this map within the mental domain is what is relevant for experience and thus comprises the phenomenal content of qualia. We conclude with possible implications for experience-based theories of consciousness.
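As a reading aid, the kind of interpretation map the abstract alludes to can be sketched as follows; the notation below is ours, purely for illustration, not the paper's own:

    % Illustrative sketch; symbols are not the paper's notation.
    \[
      \mu : \mathrm{Syn}(\mathcal{F}) \longrightarrow \mathcal{S},
      \qquad
      Q = \mu\bigl(\mathrm{Syn}(\mathcal{F})\bigr) \cap \mathcal{M},
    \]
    % where $\mathcal{F}$ is a formal system, $\mathcal{S}$ a semantic space,
    % $\mathcal{M}$ the mental domain, and $Q$ the image of the interpretation
    % map within the mental domain, i.e. the phenomenal content of qualia.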

[3]  arXiv:2405.01385 [pdf, other]
Title: Anti-seizure medication tapering is associated with delta band power reduction in a dose, region and time-dependent manner
Subjects: Neurons and Cognition (q-bio.NC)

Anti-seizure medications (ASMs) are the primary treatment for epilepsy, yet medication tapering effects have not been investigated in a dose, region, and time-dependent manner, despite their potential impact on research and clinical practice.
We examined over 3000 hours of intracranial EEG recordings from 32 subjects undergoing long-term monitoring, 22 of whom underwent concurrent ASM tapering. We estimated ASM plasma levels from the known pharmacokinetics of all the major ASM types.
We found an overall decrease in delta band power around the period of maximum medication withdrawal in most (80%) subjects, independent of epilepsy type or medication combination. The degree of withdrawal correlated positively with the magnitude of the delta power decrease. This dose-dependent effect was seen strongly across all recorded cortical regions during the daytime, but not in subcortical regions or during the night. We found no evidence of a differential effect in seizure onset, spiking, or otherwise pathological brain regions.
The finding of decreased delta band power during ASM tapering agrees with previous literature. The dose-dependent effect we observed indicates that monitoring ASM levels from cortical recordings may be feasible for applications such as medication reminder systems or closed-loop ASM delivery systems. ASMs are also used in other neurological and psychiatric conditions, making our findings relevant to a general neuroscience and neurology audience.
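As an illustration of the kind of plasma-level estimate described above, here is a toy one-compartment pharmacokinetic sketch using the Bateman function. The drug parameters and dosing schedule are invented for the example; the paper's actual model may differ:

    import numpy as np

    def relative_plasma_level(doses, t, ka=1.0, ke=np.log(2) / 20.0):
        """Toy one-compartment estimate of relative plasma level.

        doses: list of (time_h, dose_mg); t: time grid in hours;
        ka/ke: first-order absorption/elimination rates (a 20 h
        half-life here -- assumed values, not from the paper).
        """
        level = np.zeros_like(t, dtype=float)
        for t_dose, dose in doses:
            dt = t - t_dose
            m = dt > 0
            # Bateman function: first-order absorption and elimination.
            level[m] += dose * ka / (ka - ke) * (
                np.exp(-ke * dt[m]) - np.exp(-ka * dt[m])
            )
        return level / level.max()  # normalize to a relative level in [0, 1]

    # 500 mg twice daily for three days, then tapered to 250 mg for two days.
    doses = [(h, 500) for h in range(0, 72, 12)] + [(h, 250) for h in range(72, 120, 12)]
    t = np.linspace(0, 168, 1000)
    level = relative_plasma_level(doses, t)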

Cross-lists for Fri, 3 May 24

[4]  arXiv:2405.00679 (cross-list from cs.NE) [pdf, other]
Title: Exploring mechanisms of Neural Robustness: probing the bridge between geometry and spectrum
Subjects: Neural and Evolutionary Computing (cs.NE); Artificial Intelligence (cs.AI); Neurons and Cognition (q-bio.NC)

Backpropagation-optimized artificial neural networks, while precise, lack robustness, leading to unforeseen behaviors that compromise their safety. Biological neural systems already solve some of these problems, so understanding the biological mechanisms of robustness is an important step towards building trustworthy and safe systems. Unlike artificial models, biological neurons adjust their connectivity based on the activity of neighboring cells. Robustness in neural representations is hypothesized to correlate with the smoothness of the encoding manifold, and recent work suggests that power-law covariance spectra, observed in the primary visual cortex of mice, are indicative of a balanced trade-off between accuracy and robustness in representations. Here, we show that unsupervised local learning models with winner-take-all dynamics learn such power-law representations, providing future studies with a mechanistic model exhibiting this characteristic. Our research aims to understand the interplay between geometry, spectral properties, robustness, and expressivity in neural representations. We therefore study the link between representation smoothness and spectrum using weight, Jacobian, and spectral regularization while assessing performance and adversarial robustness. Our work serves as a foundation for future research into the mechanisms underlying power-law spectra and optimally smooth encodings in both biological and artificial systems. The insights gained may elucidate the mechanisms that realize robust neural networks in mammalian brains and inform the development of more stable and reliable artificial systems.
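A minimal sketch of how such a spectral analysis is typically done: compute the eigenspectrum of the stimulus-response covariance and fit the power-law decay exponent on a log-log scale. This is illustrative code, not the authors' analysis pipeline, and the fit range is an assumption:

    import numpy as np

    def spectral_decay_exponent(responses, fit_range=(10, 500)):
        """Fit the power-law exponent of a representation's covariance spectrum.

        responses: (n_stimuli, n_units) activity matrix. Returns alpha such
        that the n-th eigenvalue scales roughly as n**(-alpha); alpha ~ 1 is
        the regime reported for mouse V1 (Stringer et al., 2019).
        """
        X = responses - responses.mean(axis=0, keepdims=True)
        cov = X.T @ X / (X.shape[0] - 1)
        eig = np.linalg.eigvalsh(cov)[::-1]      # eigenvalues, descending
        lo, hi = fit_range
        hi = min(hi, int(np.sum(eig > 0)))       # stay within the positive spectrum
        n = np.arange(lo, hi)
        slope, _ = np.polyfit(np.log(n), np.log(eig[lo:hi]), 1)
        return -slope

    rng = np.random.default_rng(0)
    alpha = spectral_decay_exponent(rng.standard_normal((2000, 800)))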

[5]  arXiv:2405.00719 (cross-list from eess.SP) [pdf, other]
Title: EEG-Deformer: A Dense Convolutional Transformer for Brain-computer Interfaces
Comments: 10 pages, 9 figures. This work has been submitted to the IEEE for possible publication. Copyright may be transferred without notice, after which this version may no longer be accessible
Subjects: Signal Processing (eess.SP); Machine Learning (cs.LG); Neurons and Cognition (q-bio.NC)

Effectively learning the temporal dynamics in electroencephalogram (EEG) signals is challenging yet essential for decoding brain activities using brain-computer interfaces (BCIs). Although Transformers are popular for their long-term sequential learning ability in the BCI field, most methods combining Transformers with convolutional neural networks (CNNs) fail to capture the coarse-to-fine temporal dynamics of EEG signals. To overcome this limitation, we introduce EEG-Deformer, which incorporates two main novel components into a CNN-Transformer: (1) a Hierarchical Coarse-to-Fine Transformer (HCT) block that integrates a Fine-grained Temporal Learning (FTL) branch into Transformers, effectively discerning coarse-to-fine temporal patterns; and (2) a Dense Information Purification (DIP) module, which utilizes multi-level, purified temporal information to enhance decoding accuracy. Comprehensive experiments on three representative cognitive tasks consistently verify the generalizability of our proposed EEG-Deformer, demonstrating that it either outperforms existing state-of-the-art methods or is comparable to them. Visualization results show that EEG-Deformer learns from neurophysiologically meaningful brain regions for the corresponding cognitive tasks. The source code can be found at https://github.com/yi-ding-cs/EEG-Deformer.
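To make the coarse-to-fine idea concrete, here is a loose PyTorch sketch of a block that fuses a coarse self-attention branch (on temporally pooled tokens) with a fine-grained convolutional branch. It is not the authors' HCT/DIP implementation; their actual code is at the linked repository:

    import torch
    import torch.nn as nn

    class CoarseToFineBlock(nn.Module):
        """Illustrative coarse-to-fine fusion: attention over pooled
        (coarse) tokens plus a convolutional (fine) temporal branch."""

        def __init__(self, dim=64, heads=4, pool=2):
            super().__init__()
            self.pool = nn.AvgPool1d(pool)                   # coarsen the time axis
            self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
            self.fine = nn.Conv1d(dim, dim, kernel_size=7, padding=3)
            self.up = nn.Upsample(scale_factor=pool)         # restore time length
            self.norm = nn.LayerNorm(dim)

        def forward(self, x):                                # x: (batch, time, dim)
            coarse = self.pool(x.transpose(1, 2)).transpose(1, 2)
            coarse, _ = self.attn(coarse, coarse, coarse)    # coarse temporal structure
            coarse = self.up(coarse.transpose(1, 2)).transpose(1, 2)
            fine = self.fine(x.transpose(1, 2)).transpose(1, 2)  # fine-grained dynamics
            return self.norm(x + coarse + fine)              # residual fusion

    x = torch.randn(8, 128, 64)            # 8 trials, 128 time steps, 64 feature dims
    print(CoarseToFineBlock()(x).shape)    # torch.Size([8, 128, 64])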

[6]  arXiv:2405.00899 (cross-list from cs.HC) [pdf, other]
Title: Characterising the Creative Process in Humans and Large Language Models
Subjects: Human-Computer Interaction (cs.HC); Artificial Intelligence (cs.AI); Computation and Language (cs.CL); Neurons and Cognition (q-bio.NC)

Large language models appear quite creative, often performing on par with the average human on creative tasks. However, research on LLM creativity has focused solely on products, with little attention to the creative process. Process analyses of human creativity often require hand-coded categories or exploit response times, neither of which applies to LLMs. We provide an automated method to characterise how humans and LLMs explore semantic spaces on the Alternate Uses Task, and contrast this with behaviour in a Verbal Fluency Task. We use sentence embeddings to identify response categories and compute semantic similarities, which we use to generate jump profiles. Our results corroborate earlier work in humans reporting both persistent (deep search in few semantic spaces) and flexible (broad search across multiple semantic spaces) pathways to creativity, where both pathways lead to similar creativity scores. LLMs were biased towards either persistent or flexible paths, varying across tasks. Though LLMs as a population match human profiles, their relationship with creativity differs: the more flexible models score higher on creativity. Our dataset and scripts are available on GitHub: https://github.com/surabhisnath/Creative_Process
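The jump-profile idea can be sketched in a few lines: embed consecutive responses, then flag a semantic jump whenever neighbouring similarity drops below a threshold. The embedding model and threshold below are illustrative assumptions; the authors' actual pipeline is in the linked repository:

    import numpy as np
    from sentence_transformers import SentenceTransformer

    def jump_profile(responses, threshold=0.5, model_name="all-MiniLM-L6-v2"):
        """Mark a semantic 'jump' between consecutive responses whenever
        their cosine similarity falls below the threshold."""
        model = SentenceTransformer(model_name)
        emb = model.encode(responses, normalize_embeddings=True)
        sims = np.sum(emb[:-1] * emb[1:], axis=1)   # cosine, since normalized
        return sims < threshold                     # True = flexible jump

    # Alternate uses of a paperclip, as an Alternate Uses Task example.
    uses = ["hold papers together", "lock pick", "reset a router",
            "sculpture wire", "ear cleaner"]
    print(jump_profile(uses))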

Replacements for Fri, 3 May 24

[7]  arXiv:2310.04420 (replaced) [pdf, other]
Title: BrainSCUBA: Fine-Grained Natural Language Captions of Visual Cortex Selectivity
Comments: ICLR 2024. Project page: this https URL
Subjects: Machine Learning (cs.LG); Neurons and Cognition (q-bio.NC)
[8]  arXiv:2311.10543 (replaced) [pdf, other]
Title: Joint covariance properties under geometric image transformations for spatio-temporal receptive fields according to the generalized Gaussian derivative model for visual receptive fields
Authors: Tony Lindeberg
Comments: 38 pages, 13 figures. Note: From version 4, this paper considers a different form of joint composition of the geometric image transformations than in the earlier versions
Subjects: Computer Vision and Pattern Recognition (cs.CV); Neurons and Cognition (q-bio.NC)
[9]  arXiv:2402.08099 (replaced) [pdf, ps, other]
Title: Time series segmentation for recognition of epileptiform patterns recorded via Microelectrode Arrays in vitro
Comments: submitted to Plos One
Subjects: Signal Processing (eess.SP); Neurons and Cognition (q-bio.NC)