
Neural and Evolutionary Computing

New submissions


New submissions for Fri, 3 May 24

[1]  arXiv:2405.00679 [pdf, other]
Title: Exploring mechanisms of Neural Robustness: probing the bridge between geometry and spectrum
Subjects: Neural and Evolutionary Computing (cs.NE); Artificial Intelligence (cs.AI); Neurons and Cognition (q-bio.NC)

Backpropagation-optimized artificial neural networks, while precise, lack robustness, leading to unforeseen behaviors that affect their safety. Biological neural systems already solve some of these issues, so understanding the biological mechanisms of robustness is an important step towards building trustworthy and safe systems. Unlike artificial models, biological neurons adjust connectivity based on neighboring cell activity. Robustness in neural representations is hypothesized to correlate with the smoothness of the encoding manifold. Recent work suggests that the power-law covariance spectra observed in the primary visual cortex of mice are indicative of a balanced trade-off between accuracy and robustness in representations. Here, we show that unsupervised local learning models with winner-takes-all dynamics learn such power-law representations, providing future studies with a mechanistic model exhibiting this characteristic. Our research aims to understand the interplay between geometry, spectral properties, robustness, and expressivity in neural representations. Hence, we study the link between representation smoothness and spectrum using weight, Jacobian, and spectral regularization while assessing performance and adversarial robustness. Our work serves as a foundation for future research into the mechanisms underlying power-law spectra and optimally smooth encodings in both biological and artificial systems. The insights gained may elucidate the mechanisms that realize robust neural networks in mammalian brains and inform the development of more stable and reliable artificial systems.
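As a concrete illustration of the spectral analysis this abstract describes, the sketch below estimates the decay exponent of a representation's covariance eigenspectrum, the quantity the power-law hypothesis concerns (a spectrum lambda_i ~ i^(-alpha) with alpha near 1 was reported for mouse V1). This is a generic diagnostic under stated assumptions, not the paper's code; the function name and synthetic data are illustrative.

```python
import numpy as np

def spectral_decay_exponent(activations, n_fit=50):
    """Estimate the power-law exponent of the covariance eigenspectrum.

    activations: (n_samples, n_units) array of neural responses.
    Returns alpha, where lambda_i ~ i^(-alpha) is fit in log-log space.
    """
    X = activations - activations.mean(axis=0)      # center responses
    cov = X.T @ X / (X.shape[0] - 1)                # covariance matrix
    eigvals = np.linalg.eigvalsh(cov)[::-1]         # descending eigenvalues
    eigvals = eigvals[eigvals > 0][:n_fit]          # keep leading positive part
    ranks = np.arange(1, len(eigvals) + 1)
    # Linear fit in log-log space: log lambda_i = -alpha * log i + c
    slope, _ = np.polyfit(np.log(ranks), np.log(eigvals), 1)
    return -slope

# Synthetic example: columns scaled so the spectrum decays roughly as 1/i,
# i.e. alpha near the value reported for mouse V1.
rng = np.random.default_rng(0)
acts = rng.normal(size=(2000, 256)) @ np.diag(np.arange(1, 257) ** -0.5)
print(f"estimated decay exponent: {spectral_decay_exponent(acts):.2f}")
```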

[2]  arXiv:2405.00680 [pdf, ps, other]
Title: Comparative approach: Electric distribution optimization with loss minimization algorithm and particle swarm optimization
Subjects: Neural and Evolutionary Computing (cs.NE)

Power systems are very large and complex and can be influenced by many unexpected events, which makes power system optimization problems difficult to solve; hence, methods for solving these problems ought to be an active research topic. This review presents a mathematical comparison of the loss minimization algorithm and the particle swarm optimization algorithm in terms of electric distribution performance.
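For readers unfamiliar with particle swarm optimization, here is a minimal, self-contained sketch of the standard global-best PSO variant applied to a stand-in objective. The hyperparameters and the sphere-function placeholder for a distribution-loss objective are illustrative assumptions, not taken from the review.

```python
import numpy as np

def pso_minimize(loss, dim, n_particles=30, iters=200,
                 w=0.7, c1=1.5, c2=1.5, bounds=(-10.0, 10.0)):
    """Minimal particle swarm optimization sketch (global-best topology)."""
    rng = np.random.default_rng(42)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))     # particle positions
    v = np.zeros_like(x)                            # particle velocities
    pbest = x.copy()                                # personal bests
    pbest_f = np.apply_along_axis(loss, 1, x)
    g = pbest[pbest_f.argmin()].copy()              # global best
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        # Inertia + cognitive pull toward pbest + social pull toward gbest
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        f = np.apply_along_axis(loss, 1, x)
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        g = pbest[pbest_f.argmin()].copy()
    return g, pbest_f.min()

# Stand-in for a feeder-loss objective: the sphere function.
best_x, best_f = pso_minimize(lambda z: float(np.sum(z ** 2)), dim=5)
print(best_x, best_f)
```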

[3]  arXiv:2405.00686 [pdf, ps, other]
Title: Technical Report on BaumEvA Evolutionary Optimization Python-Library Testing
Comments: The paper consists of 30 pages, 37 figures, 5 tables
Subjects: Neural and Evolutionary Computing (cs.NE); Artificial Intelligence (cs.AI); Computer Vision and Pattern Recognition (cs.CV)

This report presents the test results of the Python library BaumEvA, which implements evolutionary algorithms for optimizing various types of problems, including computer vision tasks accompanied by the search for optimal model architectures. Testing was carried out to evaluate the effectiveness and reliability of the proposed methods, as well as to determine their applicability in various fields. During testing, various test functions and parameters of the evolutionary algorithms were used, which made it possible to evaluate their performance under a wide range of conditions. Test results showed that the library provides effective and reliable methods for solving optimization problems. However, some limitations were identified related to computational resources and the execution time of the algorithms on high-dimensional problems. The report includes a detailed description of the tests performed, the results obtained, and conclusions about the applicability of the genetic algorithm to various tasks. Recommendations for choosing algorithm parameters and using the library to achieve the best results are also provided. The report may be useful to developers involved in the optimization of complex computing systems, as well as to researchers studying the possibilities of using evolutionary algorithms in various fields of science and technology.
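The report's API is specific to BaumEvA; as a generic illustration of the kind of genetic-algorithm benchmark run it describes, the sketch below optimizes the Rastrigin test function with truncation selection, uniform crossover, and Gaussian mutation. All parameter choices are illustrative and do not reflect BaumEvA's actual interface.

```python
import numpy as np

def rastrigin(x):
    """Classic multimodal test function; global minimum 0 at the origin."""
    return 10 * len(x) + np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x))

def genetic_algorithm(fitness, dim=10, pop_size=100, gens=300,
                      sigma=0.3, elite=5, bound=5.12):
    rng = np.random.default_rng(1)
    pop = rng.uniform(-bound, bound, (pop_size, dim))
    for _ in range(gens):
        f = np.array([fitness(ind) for ind in pop])
        order = np.argsort(f)
        parents = pop[order[: pop_size // 2]]        # truncation selection
        idx = rng.integers(0, len(parents), (pop_size - elite, 2))
        mask = rng.random((pop_size - elite, dim)) < 0.5
        children = np.where(mask, parents[idx[:, 0]], parents[idx[:, 1]])
        children += rng.normal(0, sigma, children.shape)  # Gaussian mutation
        pop = np.vstack([pop[order[:elite]],              # keep elites
                         np.clip(children, -bound, bound)])
    f = np.array([fitness(ind) for ind in pop])
    return pop[f.argmin()], f.min()

best, val = genetic_algorithm(rastrigin)
print(f"best fitness: {val:.3f}")
```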

[4]  arXiv:2405.00698 [pdf, other]
Title: CUDA-Accelerated Soft Robot Neural Evolution with Large Language Model Supervision
Authors: Lechen Zhang
Comments: 3 pages, 5 figures
Subjects: Neural and Evolutionary Computing (cs.NE); Robotics (cs.RO)

This paper addresses the challenge of co-designing morphology and control in soft robots via a novel neural network evolution approach. We propose an innovative method to implicitly dual-encode soft robots, thus facilitating the simultaneous design of morphology and control. Additionally, we introduce a large language model to serve as the control center during the evolutionary process. This advancement considerably improves evolution speed compared to traditional soft-bodied robot co-design methods. Further complementing our work is the implementation of Gaussian positional encoding, an approach that augments the neural network's comprehension of robot morphology. Our paper offers a new perspective on soft robot design, illustrating substantial improvements in efficiency and comprehension during the design and evolutionary process.
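Gaussian positional encoding, as commonly formulated in the random-Fourier-features literature, projects coordinates through a Gaussian-sampled frequency matrix before applying sinusoids. The sketch below shows that general construction; the dimensions, scale, and use of voxel coordinates are assumptions, not the paper's exact implementation.

```python
import numpy as np

def gaussian_positional_encoding(coords, n_features=64, scale=10.0, seed=0):
    """Random Fourier-feature encoding with Gaussian-sampled frequencies.

    coords: (n_points, d) array of positions (e.g., voxel coordinates of a
    soft-robot morphology). Returns (n_points, 2 * n_features) features.
    """
    rng = np.random.default_rng(seed)
    B = rng.normal(0.0, scale, (coords.shape[1], n_features))  # frequency matrix
    proj = 2 * np.pi * coords @ B
    return np.concatenate([np.sin(proj), np.cos(proj)], axis=1)

# Encode a 4x4 grid of 2-D voxel positions as input for a controller network.
xs, ys = np.meshgrid(np.arange(4), np.arange(4))
grid = np.stack([xs.ravel(), ys.ravel()], axis=1) / 4.0
features = gaussian_positional_encoding(grid)
print(features.shape)  # (16, 128)
```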

[5]  arXiv:2405.00699 [pdf, other]
Title: Direct Training Needs Regularisation: Anytime Optimal Inference Spiking Neural Network
Subjects: Neural and Evolutionary Computing (cs.NE); Artificial Intelligence (cs.AI); Machine Learning (cs.LG)

Spiking Neural Networks (SNNs) are acknowledged as the next generation of Artificial Neural Networks (ANNs) and hold great promise for effectively processing spatial-temporal information. However, the choice of timestep becomes crucial, as it significantly impacts the accuracy of neural network training. Specifically, a smaller timestep favors efficient computing, resulting in reduced latency and fewer operations, but it may also lead to low accuracy due to insufficient information being presented with few spikes. This observation motivates us to develop an SNN that is more reliable across adaptive timesteps by introducing a novel regularisation technique, namely the Spatial-Temporal Regulariser (STR). Our approach regulates the ratio between the strength of spikes and the membrane potential at each timestep. This effectively balances spatial and temporal performance during training, ultimately resulting in an Anytime Optimal Inference (AOI) SNN. Through extensive experiments on frame-based and event-based datasets, our method, in combination with a cutoff based on the softmax output, achieves state-of-the-art performance in terms of both latency and accuracy. Notably, with STR and cutoff, the SNN achieves 2.14x to 2.89x faster inference compared to the pre-configured timestep, with a near-zero accuracy drop of 0.50% to 0.64% on the event-based datasets. Code available: https://github.com/Dengyu-Wu/AOI-SNN-Regularisation
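The exact STR formulation is given in the paper and repository; the PyTorch sketch below only illustrates the general idea of penalising the per-timestep ratio between spike strength and membrane potential. The tensor shapes, target ratio, and loss weighting are illustrative assumptions.

```python
import torch

def spatial_temporal_regulariser(spikes, membrane, target_ratio=1.0):
    """Illustrative penalty on the spike-strength / membrane-potential ratio.

    spikes, membrane: tensors of shape (timesteps, batch, neurons).
    This sketch penalises deviation of the per-timestep ratio from a
    target value; it is an assumption, not the paper's exact STR term.
    """
    spike_strength = spikes.abs().mean(dim=(1, 2))      # per-timestep strength
    potential = membrane.abs().mean(dim=(1, 2)) + 1e-8  # avoid division by zero
    ratio = spike_strength / potential
    return ((ratio - target_ratio) ** 2).mean()

# Hypothetical usage inside a direct-training loop (spk_rec, mem_rec and
# lambda_str are placeholder names, not from the paper's code):
# loss = task_loss + lambda_str * spatial_temporal_regulariser(spk_rec, mem_rec)
```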

[6]  arXiv:2405.00700 [pdf, ps, other]
Title: Oxygen vacancies modulated VO2 for neurons and Spiking Neural Network construction
Comments: 18 pages,4 figures
Subjects: Neural and Evolutionary Computing (cs.NE); Strongly Correlated Electrons (cond-mat.str-el)

Artificial neuronal devices are the basic building blocks of neuromorphic computing systems, which have been motivated by realistic brain emulation. Aiming at these applications, various device concepts have been proposed to mimic neuronal dynamics and functions. However, to date, artificial neuron devices with high efficiency, high stability, and low power consumption remain far from practical application. Owing to its special insulator-metal phase transition, Vanadium Dioxide (VO2) has been considered an ideal candidate for neuronal device fabrication. However, its intrinsic insulating state requires the VO2 neuronal device to be driven under a large bias voltage, resulting in high power consumption and low frequency. In the current study, we address this challenge by preparing an oxygen-vacancy-modulated VO2 film (VO2-x) and fabricating VO2-x neuronal devices for the construction of Spiking Neural Networks (SNNs). Results indicate that the neuron devices can be operated under lower voltage with improved processing speed. The proposed VO2-x based back-propagation SNN (BP-SNN) system, trained on the MNIST dataset, demonstrates excellent accuracy in image recognition. Our study not only demonstrates the VO2-x based neurons and SNN system for practical applications, but also offers an effective way to optimize future neuromorphic computing systems through a defect engineering strategy.
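Device physics aside, threshold-switching neurons like these are often modelled abstractly as leaky integrate-and-fire units. The sketch below is such an abstraction, assuming the insulator-metal transition can be reduced to a fixed firing threshold; lowering the threshold loosely mimics the reduced switching voltage of the oxygen-deficient film. All constants are illustrative.

```python
import numpy as np

def lif_neuron(current, dt=1e-3, tau=20e-3, v_th=1.0, v_reset=0.0):
    """Leaky integrate-and-fire abstraction of a threshold-switching neuron.

    A VO2 device fires when it crosses its insulator-metal transition; here
    the transition is abstracted as a fixed firing threshold v_th. A lower
    v_th stands in for the reduced switching voltage of the VO2-x film.
    """
    v, spikes = v_reset, []
    for i_t in current:
        v += dt / tau * (-v + i_t)      # leaky integration of input current
        if v >= v_th:                   # insulator-metal "switch"
            spikes.append(1)
            v = v_reset                 # device relaxes back to insulating
        else:
            spikes.append(0)
    return np.array(spikes)

drive = np.full(200, 1.5)               # constant input current
print(lif_neuron(drive).sum(), "spikes")  # firing rate rises as v_th drops
```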

[7]  arXiv:2405.01014 [pdf, ps, other]
Title: Proven Runtime Guarantees for How the MOEA/D Computes the Pareto Front From the Subproblem Solutions
Subjects: Neural and Evolutionary Computing (cs.NE)

The decomposition-based multi-objective evolutionary algorithm (MOEA/D) does not directly optimize a given multi-objective function $f$, but instead optimizes $N + 1$ single-objective subproblems of $f$ in a co-evolutionary manner. It maintains an archive of all non-dominated solutions found and outputs it as an approximation to the Pareto front. Once the MOEA/D has found all optima of the subproblems (the $g$-optima), it may still miss Pareto optima of $f$. The algorithm is then tasked to find the remaining Pareto optima directly by mutating the $g$-optima.
In this work, we analyze for the first time how the MOEA/D with only standard mutation operators computes the whole Pareto front of the OneMinMax benchmark when the $g$-optima are a strict subset of the Pareto front. For standard bit mutation, we prove an expected runtime of $O(n N \log n + n^{n/(2N)} N \log n)$ function evaluations. Especially for the second, more interesting phase, in which the algorithm starts with all $g$-optima, we prove an $\Omega(n^{(1/2)(n/N + 1)} \sqrt{N} 2^{-n/N})$ expected runtime. This runtime is super-polynomial if $N = o(n)$, since this leaves large gaps between the $g$-optima, which require costly mutations to cover.
For power-law mutation with exponent $\beta \in (1, 2)$, we prove an expected runtime of $O\left(n N \log n + n^{\beta} \log n\right)$ function evaluations. The $O\left(n^{\beta} \log n\right)$ term stems from the second phase of starting with all $g$-optima, and it is independent of the number of subproblems $N$. This leads to a huge speedup compared to the lower bound for standard bit mutation. In general, our overall bound for power-law mutation suggests that the MOEA/D performs best for $N = O(n^{\beta - 1})$, resulting in an $O(n^\beta \log n)$ bound. In contrast to standard bit mutation, smaller values of $N$ are better for power-law mutation, as it is capable of easily creating missing solutions.
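Power-law (heavy-tailed) mutation, as used in the fast GA, samples the mutation strength $k$ from a power-law distribution before flipping $k$ distinct bits. The sketch below shows this standard operator; the exponent and the support $\{1, \dots, n/2\}$ follow the usual convention and are not necessarily the exact setup analyzed in the paper.

```python
import numpy as np

def power_law_mutation(parent, beta=1.5, rng=None):
    """Heavy-tailed bit mutation: flip k bits with Pr[k] proportional to k^(-beta).

    Sampling the mutation strength from a power law occasionally produces
    the large jumps needed to reach Pareto optima that lie far from the
    g-optima, which standard bit mutation finds exponentially costly.
    """
    rng = rng or np.random.default_rng()
    n = len(parent)
    ks = np.arange(1, n // 2 + 1)
    probs = ks ** (-beta)
    k = rng.choice(ks, p=probs / probs.sum())       # mutation strength
    child = parent.copy()
    flip = rng.choice(n, size=k, replace=False)     # k distinct positions
    child[flip] ^= 1
    return child

parent = np.zeros(64, dtype=int)
print(power_law_mutation(parent).sum(), "bits flipped")
```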

[8]  arXiv:2405.01226 [pdf, other]
Title: Avoiding Redundant Restarts in Multimodal Global Optimization
Subjects: Neural and Evolutionary Computing (cs.NE)

Na\"ive restarts of global optimization solvers when operating on multimodal search landscapes may resemble the Coupon's Collector Problem, with a potential to waste significant function evaluations budget on revisiting the same basins of attractions. In this paper, we assess the degree to which such ``duplicate restarts'' occur on standard multimodal benchmark functions, which defines the \textit{redundancy potential} of each particular landscape. We then propose a repelling mechanism to avoid such wasted restarts with the CMA-ES and investigate its efficacy on test cases with high redundancy potential compared to the standard restart mechanism.

[9]  arXiv:2405.01305 [pdf, other]
Title: Distributed Representations Enable Robust Multi-Timescale Computation in Neuromorphic Hardware
Comments: 16 pages, 6 figures. Supplementary material: 7 pages, 6 figures
Subjects: Neural and Evolutionary Computing (cs.NE); Artificial Intelligence (cs.AI)

Programming recurrent spiking neural networks (RSNNs) to robustly perform multi-timescale computation remains a difficult challenge. To address this, we show how the distributed approach offered by vector symbolic architectures (VSAs), which uses high-dimensional random vectors as the smallest units of representation, can be leveraged to embed robust multi-timescale dynamics into attractor-based RSNNs. We embed finite state machines into the RSNN dynamics by superimposing a symmetric autoassociative weight matrix and asymmetric transition terms. The transition terms are formed by the VSA binding of an input and heteroassociative outer-products between states. Our approach is validated through simulations with highly non-ideal weights; an experimental closed-loop memristive hardware setup; and on Loihi 2, where it scales seamlessly to large state machines. This work demonstrates the effectiveness of VSA representations for embedding robust computation with recurrent dynamics into neuromorphic hardware, without requiring parameter fine-tuning or significant platform-specific optimisation. This advances VSAs as a high-level representation-invariant abstract language for cognitive algorithms in neuromorphic hardware.
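A toy numpy sketch of the construction described above, reduced to one transition edge and sign-threshold dynamics rather than spiking neurons: states are random bipolar hypervectors made fixed-point attractors by a symmetric outer-product matrix, and an input-bound asymmetric term drives the transition when the matching input symbol is presented. The dimensionality and gain are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 4096                                     # hypervector dimensionality

def hv():
    """Random bipolar hypervector (the VSA's smallest unit of representation)."""
    return rng.choice([-1, 1], size=D)

# Two FSM states and one input symbol as random hypervectors.
s_a, s_b, u = hv(), hv(), hv()

# Symmetric autoassociative part: states become fixed-point attractors.
W_auto = np.outer(s_a, s_a) + np.outer(s_b, s_b)

# Asymmetric transition term for the edge  A --u--> B, formed by binding
# (element-wise multiplication) the target state with the input symbol.
W_trans = np.outer(u * s_b, s_a)

def step(x, inp=None, gain=1.5):
    """One recurrent update; the input unbinds the matching transition term."""
    drive = W_auto @ x / D
    if inp is not None:
        drive += gain * inp * (W_trans @ x) / D   # unbind with current input
    return np.sign(drive)

x = s_a
print("stay  :", np.mean(step(x) == s_a))      # ~1.0: A is an attractor
print("switch:", np.mean(step(x, u) == s_b))   # ~1.0: input u drives A -> B
```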

Cross-lists for Fri, 3 May 24

[10]  arXiv:2405.01261 (cross-list from cs.LG) [pdf, other]
Title: Continuously evolving rewards in an open-ended environment
Comments: 30 pages, 8 figures
Subjects: Machine Learning (cs.LG); Neural and Evolutionary Computing (cs.NE)

Unambiguously identifying the rewards driving the behaviours of entities operating in complex, open-ended, real-world environments is difficult, partly because goals and associated behaviours emerge endogenously and are dynamically updated as environments change. Reproducing such dynamics in models would be useful in many domains, particularly where fixed reward functions limit the adaptive capabilities of agents. The simulation experiments described here assess a candidate algorithm for the dynamic updating of rewards, RULE: Reward Updating through Learning and Expectation. The approach is tested in a simplified ecosystem-like setting where experiments challenge the entities' survival, calling for significant behavioural change. The population of entities successfully demonstrates the abandonment of an initially rewarded but ultimately detrimental behaviour, the amplification of beneficial behaviour, and appropriate responses to novel items added to their environment. These adjustments happen through endogenous modification of the entities' underlying reward function, during continuous learning, without external intervention.
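The abstract does not specify RULE's update equations, so the following toy sketch is purely an illustrative assumption of endogenous reward updating: each behaviour's reward weight drifts toward a learned expectation of an observed survival signal, which lets an initially rewarded but detrimental behaviour be abandoned without external intervention.

```python
import numpy as np

# Toy sketch of endogenous reward updating; all names, rates, and the
# survival-signal model below are hypothetical, not RULE's actual rule.
rng = np.random.default_rng(3)
behaviours = ["eat_red", "eat_green", "explore"]
reward_w = {b: 1.0 for b in behaviours}         # initially all rewarded
expectation = {b: 0.0 for b in behaviours}      # learned outcome estimates
alpha, eta = 0.1, 0.05                          # learning / update rates

def survival_signal(b):
    # "eat_red" is initially rewarded but ultimately detrimental.
    return {"eat_red": -1.0, "eat_green": 1.0, "explore": 0.2}[b] + rng.normal(0, 0.1)

for t in range(2000):
    # Noisy greedy choice over current reward weights (Gumbel perturbation).
    b = max(behaviours, key=lambda k: reward_w[k] + rng.gumbel())
    outcome = survival_signal(b)
    expectation[b] += alpha * (outcome - expectation[b])  # update expectation
    reward_w[b] += eta * (expectation[b] - reward_w[b])   # reward follows it

print(reward_w)   # the weight on the detrimental behaviour collapses
```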

Replacements for Fri, 3 May 24

[11]  arXiv:2404.02090 (replaced) [pdf, ps, other]
Title: Already Moderate Population Sizes Provably Yield Strong Robustness to Noise
Comments: Full version of the same-titled paper accepted at GECCO 2024
Subjects: Neural and Evolutionary Computing (cs.NE); Artificial Intelligence (cs.AI)
[12]  arXiv:2404.08786 (replaced) [pdf, other]
Title: NeuroLGP-SM: Scalable Surrogate-Assisted Neuroevolution for Deep Neural Networks
Comments: Accept for IEEE Congress on Evolutionary Computation (CEC) (CEC 2024), Yokohama, Japan, 8 pages, 5 figures, 2 tables (Fixed typo x' -> x*)
Subjects: Neural and Evolutionary Computing (cs.NE); Artificial Intelligence (cs.AI)

