Section: New Software and Platforms
Positioning
Our previous work in the domain of well-defined distributed asynchronous adaptive computations [43], [40], [45] has already led us to define a library (DANA [39]), closely related to both artificial neural networks and cellular automata. From a conceptual point of view, the computational paradigm supporting the library is grounded in the notion of a unit, which is essentially a (vector of) potential that varies over time under the influence of other units and of learning. These units can be organized into layers, maps and networks.
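As an illustration of this unit/map paradigm, the following minimal Python sketch shows how a map of units whose potentials evolve under the influence of connected maps could be expressed with numpy. The names (Map, connect, update) and the leaky-integration rule are assumptions made for illustration, not the actual DANA API.

    import numpy as np

    class Map:
        """A 2-D sheet of units; each unit holds a potential V (illustrative sketch)."""
        def __init__(self, shape):
            self.V = np.zeros(shape)        # unit potentials
            self.links = []                 # (source map, weight matrix) pairs

        def connect(self, source, weights):
            """Connect this map to a source map through a (learnable) weight matrix."""
            self.links.append((source, weights))

        def update(self, tau=10.0, dt=1.0):
            """Discrete-time leaky integration of the influence of connected units."""
            I = np.zeros(self.V.size)
            for src, W in self.links:
                I += W @ src.V.ravel()
            self.V += dt / tau * (-self.V + I.reshape(self.V.shape))

    # A minimal two-map network: 'input_map' drives 'focus' through random weights.
    input_map = Map((8, 8))
    focus = Map((8, 8))
    focus.connect(input_map, np.random.uniform(-1, 1, (64, 64)))
    input_map.V[...] = np.random.uniform(0, 1, (8, 8))
    for _ in range(100):
        focus.update()

Layers, maps and networks then simply correspond to increasingly large assemblies of such units sharing connection patterns.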
We also gather our numerical and theoretical developments in the middleware EnaS (Event Neural Assembly Simulation; cf. http://gforge.inria.fr/projects/enas ), which allows us to simulate and analyze so-called "event neural assemblies".
We will also have to interact with the High-Performance Computing (HPC) community, since running large-scale simulations at this mesoscopic level is an important challenge in our systemic view of computational neuroscience. Our approach implies emulating the dynamics of thousands, or even millions, of integrated computational units, each playing the role of a whole elementary neural circuit (e.g. the cortical microcolumn). Mesoscopic models are used in such an integrative approach in order to exhibit global dynamical effects that would hardly be reachable with compartment models involving membrane equations, or even with spiking neuron networks.
The vast majority of high-performance computing software for computational neuroscience addresses sub-neural or neural models [30], but coarser-grained population models also call for large-scale simulations, with fully distributed computations and no global memory or time reference, as specified in § 3.2.
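To make this last constraint concrete, the sketch below (an illustrative assumption, not code from any existing simulator) updates a population of coarse-grained units asynchronously: each unit reads only the current state of its local neighbours and units are evaluated one at a time in random order, so no global clock or shared memory structure is required.

    import random

    class Unit:
        """A coarse-grained population unit holding a scalar potential (illustrative only)."""
        def __init__(self):
            self.potential = random.random()
            self.neighbours = []            # local connectivity: (unit, weight) pairs

        def step(self, tau=10.0):
            """Local update: reads only neighbours' current potentials, no global state."""
            drive = sum(w * u.potential for u, w in self.neighbours)
            self.potential += (-self.potential + drive) / tau

    # Build a small random network of units.
    units = [Unit() for _ in range(1000)]
    for u in units:
        u.neighbours = [(random.choice(units), random.uniform(-1, 1)) for _ in range(10)]

    # Asynchronous evaluation: units are updated one at a time, in random order,
    # so there is no global time reference shared by the whole network.
    for _ in range(100 * len(units)):
        random.choice(units).step()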