Research at BICA Labs

Areas of Research Interest

AI & Complexity science

Artificially intelligent agents are dynamical systems exhibiting properties such as chaos, strange attractors, fractal dimensions and different forms of entropy. We focus on finding how methods of non-linear science (chaotic time series analysis, fractal calculus, graph theory, catastrophe theory and others) may be applied to enhance machine learning algorithms.
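As a toy illustration of the kind of chaotic time series analysis involved, the largest Lyapunov exponent of the logistic map can be estimated by averaging the log of the local stretching rate along an orbit. This is a standard textbook method; the map and parameter values below are purely illustrative, not taken from our models:

```python
import math

def lyapunov_logistic(r, x0=0.3, n_transient=500, n_iter=5000):
    """Estimate the largest Lyapunov exponent of the logistic map
    x -> r * x * (1 - x) by averaging log|f'(x)| along an orbit."""
    x = x0
    for _ in range(n_transient):      # discard the transient
        x = r * x * (1 - x)
    total = 0.0
    for _ in range(n_iter):
        total += math.log(abs(r * (1 - 2 * x)))   # log of local stretching
        x = r * x * (1 - x)
    return total / n_iter

print(lyapunov_logistic(3.2))   # negative: a stable 2-cycle, no chaos
print(lyapunov_logistic(4.0))   # positive, near ln 2: fully developed chaos
```

A positive exponent certifies sensitive dependence on initial conditions, which is the property such measures would probe in a learning system's state trajectory.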

Communications in multi-agent systems

This area includes collective intelligence and behavior in multi-agent intelligent systems; research on efficient ways of trusted information exchange between artificially intelligent agents; and the swarming of drones and robots as an application of both. In this direction we do a lot of research on the use of blockchain-like protocols as an efficient tool for such tasks.

New BICA architectures

We use a cross-disciplinary approach to bring advances in neuroscience, psychology and psychiatry into mathematical and algorithmic models that can be used to create new, efficient cognitive architectures. Rather than taking large-scale brain properties as a basis for cognitive architectures (as deep neural networks do), we primarily focus on the neurophysiological properties of synapses and neurons (neuroplasticity, receptor dynamics), genetics, emotions and neuroendocrine factors (such as those involved in stress) underlying human intelligence, its formation and learning.

Embedded AI and brain-computer interfaces

Our lab performs foundational research into how brain and computer processes can be connected and how they can exchange information. This includes deciphering different types of brain activity (EEG, neuronal spikes) and their correlates (facial expressions etc.) with specialized machine-learning algorithms; finding ways to transfer information into the brain, or to affect conscious and subconscious states and emotions, with non-invasive transcranial methods; and finding approaches for the development of embedded systems and chips that will run AI efficiently.
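A minimal sketch of the kind of signal decomposition such work starts from: estimating band power in a signal with a naive DFT. The sampling rate, band edges and synthetic test signal below are illustrative assumptions, not our actual pipeline:

```python
import math, cmath

def bandpower(signal, fs, f_lo, f_hi):
    """Naive DFT band power: sum of |X_k|^2 over bins with f_lo <= f <= f_hi."""
    n = len(signal)
    power = 0.0
    for k in range(n // 2 + 1):
        freq = k * fs / n
        if f_lo <= freq <= f_hi:
            xk = sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n)
                     for t in range(n))
            power += abs(xk) ** 2
    return power

fs = 128                                  # hypothetical sampling rate, Hz
t = [i / fs for i in range(fs * 2)]       # 2 s of synthetic "EEG"
sig = [math.sin(2 * math.pi * 10 * ti) + 0.3 * math.sin(2 * math.pi * 25 * ti)
       for ti in t]
alpha = bandpower(sig, fs, 8, 12)         # alpha band holds the strong 10 Hz tone
beta = bandpower(sig, fs, 18, 30)         # beta band holds the weak 25 Hz tone
print(alpha > beta)  # → True
```

Real EEG decoding replaces the toy tones with recorded channels and feeds such band features into a classifier, but the decomposition step is the same in spirit.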

Current Projects

Local-learning neural networks

Progress in both specific and general forms of AI depends on advances in cognitive architectures. We are developing a new paradigm for the artificial neuron derived from the neurophysiological properties of living neural tissue. The proposed paradigm is suitable for building deep artificial neural networks with mixed feed-forward and recurrent properties; its artificial neuron has an intrinsic learning rule that replaces classical back-propagation and is highly suitable for reinforcement learning. We call these networks “Local-learning neural networks”.
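Our neuron model is not published on this page, but the flavour of an intrinsic, local learning rule can be seen in Oja's classic rule, a generic textbook example rather than our proposed paradigm: each weight changes using only the neuron's own input and output, with no error signal propagated back from a global loss:

```python
import random

def oja_train(samples, lr=0.05, epochs=50, seed=0):
    """Single linear neuron trained with Oja's local rule:
    dw = lr * y * (x - y * w).  Every update uses only the neuron's
    own input x and output y -- nothing is back-propagated."""
    rng = random.Random(seed)
    w = [rng.uniform(-0.5, 0.5) for _ in range(len(samples[0]))]
    for _ in range(epochs):
        for x in samples:
            y = sum(wi * xi for wi, xi in zip(w, x))
            w = [wi + lr * y * (xi - y * wi) for wi, xi in zip(w, x)]
    return w

# 2-D inputs varying mostly along the (1, 1) diagonal
data = [(v, v + 0.1 * ((i % 3) - 1)) for i, v in
        enumerate([-1.0, -0.5, 0.0, 0.5, 1.0] * 8)]
w = oja_train(data)
# w converges to a unit vector along the dominant variance direction
```

Oja's rule makes the neuron a principal-component extractor; the point of the example is only that useful learning can emerge from purely local update rules.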

Collective AI for vehicles swarming

A large number of multi-agent intelligence applications are related to collective behavior; this is especially true for agents with movable physical bodies, such as autonomous cars, drones, copters and robots. We develop efficient algorithms for their collective movement coordination (swarming), drawing on examples from biological systems such as insects. Each agent usually has a low level of intelligence individually, but their synergy raises the aggregate intelligence to a new level.
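The snippet below sketches the simplest form of such coordination: a two-rule flock (cohesion toward the group centroid plus velocity alignment), in the spirit of Reynolds' boids. The gains, time step and initial conditions are arbitrary illustrative choices:

```python
def swarm_step(positions, velocities, dt=0.1, k_cohesion=0.05, k_align=0.1):
    """One synchronous update of a minimal two-rule flock: every agent
    accelerates toward the group centroid (cohesion) and toward the
    mean velocity of the group (alignment)."""
    n = len(positions)
    cx = sum(p[0] for p in positions) / n        # centroid
    cy = sum(p[1] for p in positions) / n
    mvx = sum(v[0] for v in velocities) / n      # mean velocity
    mvy = sum(v[1] for v in velocities) / n
    new_vel = [(vx + k_cohesion * (cx - px) + k_align * (mvx - vx),
                vy + k_cohesion * (cy - py) + k_align * (mvy - vy))
               for (px, py), (vx, vy) in zip(positions, velocities)]
    new_pos = [(px + vx * dt, py + vy * dt)
               for (px, py), (vx, vy) in zip(positions, new_vel)]
    return new_pos, new_vel

pos = [(0.0, 0.0), (4.0, 0.0), (0.0, 4.0), (4.0, 4.0)]
vel = [(1.0, 0.0), (-1.0, 0.0), (0.0, 1.0), (0.0, -1.0)]
for _ in range(200):
    pos, vel = swarm_step(pos, vel)
# after enough steps the agents share a common velocity: one flock
```

No agent sees more than two aggregate quantities, yet coherent group motion emerges, which is the kind of synergy the paragraph above refers to; real swarming adds separation, limited sensing radius and obstacle handling.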

Multi-agent evolutionary algorithms with blockchain

The founding idea of this project is presented in our blog post “How Blockchain Relates to Artificial Intelligence?”. In particular, we think that blockchain will do for multi-agent AI what written language did for humans: it should provide a means of accelerated evolution. At BICA Labs we are developing algorithms that may be implemented on top of existing (or custom) blockchain protocols and will enable a trusted environment for synergetic multi-agent information exchange.
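A toy sketch of the underlying trust mechanism: a hash-linked log in which each record commits to its predecessor, so no agent can quietly rewrite shared history. This is a deliberate simplification (no consensus, signatures or networking), with illustrative field names:

```python
import hashlib, json

def append_block(chain, agent_id, payload):
    """Append an agent's message to a toy hash-linked log.  Each block
    commits to the previous block's hash, making tampering detectable."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = {"agent": agent_id, "payload": payload, "prev": prev_hash}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    chain.append({**body, "hash": digest})
    return chain

def verify(chain):
    """Recompute every link; returns False if any block was altered."""
    prev = "0" * 64
    for block in chain:
        body = {k: block[k] for k in ("agent", "payload", "prev")}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if block["prev"] != prev or digest != block["hash"]:
            return False
        prev = block["hash"]
    return True

log = []
append_block(log, "drone-1", {"pos": [0, 0]})
append_block(log, "drone-2", {"pos": [3, 4]})
print(verify(log))                   # → True
log[0]["payload"]["pos"] = [9, 9]    # tamper with history
print(verify(log))                   # → False
```

The tamper-evidence shown here is the property that lets mutually distrusting agents treat the shared log as ground truth for exchanged information.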

Asymmetric dual-component architectures with Hopfield and Grossberg ANNs

Research inspired by large-scale brain organization (hemispheric asymmetry) demonstrates significant potential for creating AI with emotional and intuitive components when deterministic and chaotic components are joined into one cognitive architecture. This kind of architecture has also been shown to form high-level abstractions through unsupervised learning in deep neural networks with Hopfield and Grossberg layers. The project is led by Prof. Olga Chernavskaya from the Lebedev Physical Institute (Moscow) and is based on non-linear science, including the dynamic theory of information by Prof. Dmitry Chernavsky.
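For readers unfamiliar with the Hopfield component, the sketch below shows its defining behavior: Hebbian storage of a pattern and recall from a corrupted cue. It is a minimal single-pattern toy, not the dual-component architecture itself:

```python
def hopfield_train(patterns):
    """Hebbian weight matrix for a binary (+1/-1) Hopfield network."""
    n = len(patterns[0])
    w = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:                      # no self-connections
                    w[i][j] += p[i] * p[j] / len(patterns)
    return w

def hopfield_recall(w, state, steps=20):
    """Synchronous threshold updates until the state settles."""
    n = len(state)
    for _ in range(steps):
        state = [1 if sum(w[i][j] * state[j] for j in range(n)) >= 0 else -1
                 for i in range(n)]
    return state

stored = [1, -1, 1, -1, 1, -1, 1, -1]
w = hopfield_train([stored])
noisy = [1, -1, 1, 1, 1, -1, 1, -1]             # one flipped bit
print(hopfield_recall(w, noisy) == stored)      # → True
```

The stored pattern acts as an attractor of the network dynamics; in the dual-component setting it is the interplay of such deterministic attractor dynamics with a chaotic component that is of interest.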


Mathematical formalisms for cognition modeling

The project is focused on finding which cutting-edge mathematical approaches are suited to machine learning, and especially to modelling highly abstract cognitive processes more efficiently than the models used today (non-linear regression, classical differential calculus on manifolds etc.). In particular, we look into fractional and fractal derivatives, and into graph theory and its applications. We also investigate new areas of mathematics, including differential analysis on discrete topologies with variable fractal dimensions, probabilistic (statistical) manifolds etc.
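As an example of the fractional calculus involved, the Grünwald–Letnikov construction defines a derivative of non-integer order as a limit of generalized finite differences. A direct numerical sketch, with step size and test function chosen purely for illustration:

```python
import math

def gl_derivative(f, alpha, x, h=1e-3):
    """Grunwald-Letnikov fractional derivative of order alpha at x,
    summing back to the lower limit 0:
    D^a f(x) ~ h**(-a) * sum_k (-1)**k * C(a, k) * f(x - k*h)."""
    total, coeff = 0.0, 1.0          # coeff = (-1)**k * binom(alpha, k)
    for k in range(int(round(x / h)) + 1):
        total += coeff * f(x - k * h)
        coeff *= (k - alpha) / (k + 1)   # recurrence for the next coefficient
    return total / h ** alpha

# Known closed form: the half-derivative of f(x) = x is 2 * sqrt(x / pi)
est = gl_derivative(lambda x: x, 0.5, 1.0)
print(est)  # close to 2 / sqrt(pi) ~ 1.1284
```

Unlike an integer-order derivative, the sum reaches all the way back to the lower limit, which is the non-local "memory" property that makes fractional operators attractive for modelling processes with long-range dependence.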

Chaos/fractal measures in ANN learning

The goal of this project is to find complexity measures (fractal dimensions, Lyapunov exponents, the topology of embedding dimensions and bifurcations within them) that correlate with efficient artificial neural network learning dynamics, in order to find better approaches to enhancing the learning process by avoiding local minima, overfitting and overgeneralization.
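As an illustration of one such measure, a box-counting dimension can be estimated from how the number of covering boxes scales with box size. The toy below recovers the known dimension ln 2 / ln 3 of the middle-third Cantor set, standing in for an attractor reconstructed from learning dynamics (the set and depths are illustrative):

```python
import math

def cantor_indices(depth):
    """Base-3 indices (digits 0 or 2 only) of the intervals remaining
    after `depth` stages of the middle-third Cantor set construction."""
    idx = [0]
    for _ in range(depth):
        idx = [3 * i for i in idx] + [3 * i + 2 for i in idx]
    return idx

def box_count(indices, depth, box_depth):
    """Number of boxes of side 3**-box_depth containing points of the set."""
    return len({i // 3 ** (depth - box_depth) for i in indices})

depth = 8
idx = cantor_indices(depth)
n_coarse = box_count(idx, depth, 4)   # boxes of side 3**-4
n_fine = box_count(idx, depth, 7)     # boxes of side 3**-7
dim = math.log(n_fine / n_coarse) / math.log(3 ** (7 - 4))
print(round(dim, 3))  # ln 2 / ln 3 ~ 0.631
```

Applied to weight or activation trajectories rather than a textbook fractal, such scaling estimates are the kind of complexity measure whose correlation with learning quality this project investigates.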

Join the Team

Contribute to the project or apply for a position in the labs

Stay informed

Connect with our social network profiles, discussion groups and mailing lists to keep track of our updates


Support our research efforts in enabling more efficient AI