Axon: The STICK Software Development Kit
The brain, a truly efficient computation machine, encodes and processes information using discrete spikes. Inspired by this, we’ve built Axon, a neuromorphic software framework for building, simulating, and compiling spiking neural networks (SNNs) for general-purpose computation. Axon lets you build complex computations by combining modular computation kernels, avoiding the challenge of training case-specific SNNs while maintaining the sparsity of spike computing.
Axon is an extension of STICK (Spike Time Interval Computational Kernel).
Axon provides an end-to-end pipeline for deploying interval-coded SNNs to ultra-low-power neuromorphic hardware. At Neucom we’re building one such chip, called ADA. ADA is built for embedded deployment, yet flexible enough for rapid prototyping and quick iteration cycles.
The Axon SDK includes:
- A Python-based simulator for cycle-accurate emulation of interval-coded symbolic computation.
- A hardware-aware compiler that translates Python-defined algorithms into spiking circuits, ready for simulation or deployment.
- Tools for resource reporting, cycle estimation, and performance profiling of the deployed algorithms.
If you’re building symbolic SNNs for embedded inference, control, or cryptographic tasks, Axon makes it easy to translate deterministic computations into spiking neural networks.
Axon SDK structure
Component | Description |
---|---|
axon_sdk.primitives | Base classes defining the low-level components and engine used by the spiking networks |
axon_sdk.networks | Library of modular spiking computation kernels |
axon_sdk.simulator | Spiking network simulator to input spikes, simulate dynamics and read outputs |
axon_sdk.compilation | Compiler for transforming high-level algorithms into spiking networks |
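For orientation, here are typical imports, one from each package listed above (all taken from examples later in these docs):
from axon_sdk.primitives import DataEncoder, SpikingNetworkModule
from axon_sdk.networks import MultiplierNetwork
from axon_sdk.simulator import Simulator
from axon_sdk.compilation import Scalar, compile_computation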
Example: multiplication spiking network
from axon_sdk.simulator import Simulator
from axon_sdk.networks import MultiplierNetwork
from axon_sdk.primitives import DataEncoder
encoder = DataEncoder(Tmin=10.0, Tcod=100.0)
net = MultiplierNetwork(encoder)
val1 = 0.1
val2 = 0.5
sim = Simulator(net, encoder, dt=0.01)
# Apply both input values
sim.apply_input_value(val1, neuron=net.input1, t0=10)
sim.apply_input_value(val2, neuron=net.input2, t0=10)
# Simulate long enough to see output
sim.simulate(simulation_time=400)
spikes = sim.spike_log.get(net.output.uid, [])
interval = spikes[1] - spikes[0]
decoded_val = encoder.decode_interval(interval)
decoded_val
>> 0.05
Citation
If you use Axon in your research, please cite:
@misc{axon2025,
title = {Axon: A Software Development Kit for Symbolic Spiking Networks},
author = {Neucom},
howpublished = {\url{https://github.com/neucom/axon}},
year = {2025}
}
License
Axon SDK is open-sourced under a GPLv3 license, preventing its inclusion in closed-source projects.
Reach out to initiate a collaboration if you need to use Axon SDK in your closed-source project.
Getting Started
Installation
Axon SDK can be installed from its source code, open-sourced on GitHub.
git clone https://github.com/neucom/axon-sdk.git
cd axon-sdk
pip install -e .
Axon SDK is built in Python and depends on:
- Python ≥ 3.11
- NumPy
- Matplotlib
- Pytest
The dependencies can be installed using the requirements file:
cd axon-sdk
pip install -r requirements.txt
Quickstart
This tutorial showcases how to use Axon to build a Spiking Neural Network (SNN) that can multiply two signed numbers. In particular, it covers how to define the SNN, input values to it, simulate its execution, and read out the output.
Multiplier and encoder
The spiking neural networks (SNNs) defined with Axon SDK are quite different from what’s conventionally understood by spiking neural networks. They have two fundamental differences:
- Axon’s SNNs don’t need to be trained.
- Axon uses a pair of spikes to encode a value.
Axon’s SNNs are pre-defined networks of neurons and synapses that implement a certain computation; they are not trainable SNNs.
One such computation is a multiplication operation, which is available in Axon:
from axon_sdk.networks import MultiplierNetwork
multiplier_net = MultiplierNetwork(encoder)
Axon abstracts the complexity of the underlying SNN into a modular interface that allows composing computation kernels to achieve larger operations.
The multiplier network requires an encoder to work. The encoder is the component that translates between arithmetic values and spike intervals.
from axon_sdk.primitives import DataEncoder
encoder = DataEncoder(Tmin=10.0, Tcod=100.0)
spikes = encoder.encode_value(0.2)
interval = spikes[1] - spikes[0]
value = encoder.decode_interval(interval)
spikes
>> (0, 30.0)
value
>> 0.2
Simulating the SNN
There is one last fundamental ingredient to execute the SNN - the simulator:
from axon_sdk.simulator import Simulator
sim = Simulator(multiplier_net, encoder, dt=0.01)
The simulator runs a sequential execution of the dynamics of the SNN, hence the need for a simulation timestep parameter (dt).
The simulator also lets you input values to the network. Under the hood, it uses the encoder to do so:
sim.apply_input_value(0.5, neuron=multiplier_net.input1)
sim.apply_input_value(0.5, neuron=multiplier_net.input2)
The last step is to run the simulation and read out the spikes emitted by the output neuron:
sim.simulate(simulation_time=400)
spikes = sim.spike_log.get(multiplier_net.output.uid, [])
interval = spikes[1] - spikes[0]
output_val = encoder.decode_interval(interval)
output_val
>> 0.25
Instead of using spike rates, which encode values over many spikes, Axon uses inter-spike intervals: the delay between a pair of spikes encodes a value. This makes Axon extremely spike-sparse, which optimizes energy consumption (when deploying spiking neural networks to hardware, processing each spike has an energy cost).
Combining computations
Axon SDK provides a library of modular spiking computation kernels that can be combined to achieve complex computations. In this tutorial, we will build a Spiking Multiply-Accumulate module by combining primitive modules.
Spiking Multiply-Accumulate (MAC)
A multiply-accumulate computation over signed scalars takes the following form:
y = a * x + b
Axon provides spiking modules to implement signed multiplication and addition as part of its library. Combining them, we can build a MAC module.
Module | Description |
---|---|
SignedMultiplierNetwork | Multiplies two signed scalars |
AdderNetwork | Adds two signed scalars |
Combining modules requires wiring them together. We will see how to wire the modules together later on.
Signed channels in spiking modules
Signed arithmetic is achieved by having two channels per input/output (one channel for each sign). A pair of spikes in a given channel therefore encodes a signed value.
Each module that supports signed arithmetic contains two neurons per input/output (see the table below and the sketch after it):
Module | Inputs | Outputs |
---|---|---|
SignedMultiplierNetwork | net.input1_plus, net.input1_minus, net.input2_plus, net.input2_minus | net.output_plus, net.output_minus |
AdderNetwork | net.input1_plus, net.input1_minus, net.input2_plus, net.input2_minus | net.output_plus, net.output_minus |
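Inputting a signed value then amounts to choosing the channel. A minimal sketch, assuming a simulator sim and a signed module net as above (apply_signed_input is a hypothetical helper, not part of the SDK):
# Route a signed value to the channel matching its sign
def apply_signed_input(sim, val, plus_neuron, minus_neuron, t0=0):
    if val >= 0:
        sim.apply_input_value(val, neuron=plus_neuron, t0=t0)
    else:
        sim.apply_input_value(-val, neuron=minus_neuron, t0=t0)

apply_signed_input(sim, -0.3, net.input1_plus, net.input1_minus)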
Wiring modules together
Putting together the MAC module is a matter of wiring the SignedMultiplierNetwork and AdderNetwork together, making sure to wire plus to plus and minus to minus.
from axon_sdk.networks import SignedMultiplierNetwork, AdderNetwork
from axon_sdk.primitives import SpikingNetworkModule

class MacNetwork(SpikingNetworkModule):
    def __init__(self, encoder):
        super().__init__()
        we = 10.0
        Tsyn = 1.0
        self.add_net = AdderNetwork(encoder)
        self.mul_net = SignedMultiplierNetwork(encoder)
        # Wire the multiplier output to the adder's first input: plus to plus...
        self.connect_neurons(self.mul_net.output_plus,
            self.add_net.input1_plus,
            synapse_type="V",
            weight=we,
            delay=Tsyn
        )
        # ...and minus to minus
        self.connect_neurons(self.mul_net.output_minus,
            self.add_net.input1_minus,
            synapse_type="V",
            weight=we,
            delay=Tsyn
        )
        self.add_subnetwork(self.add_net)
        self.add_subnetwork(self.mul_net)
Defining the new MAC module, or any new module, requires subclassing SpikingNetworkModule. This base class does basic housekeeping (e.g. making sure each child module has a unique ID).
Note: The call to self.add_subnetwork(...) is important. It’s required for the base class to register the new module as a child. Without it, the simulation of the dynamics (which we’ll do later on) will not work properly.
There are several things to explain from the snippet above: the origin of the values we, Tsyn and V.
The variable we stands for weight excitatory, a term used throughout Axon. It’s the weight of a synapse that will excite (trigger) the following neuron in a single timestep. The value we=10.0 comes from the fact that neurons usually have a voltage threshold Vt=10, defined in each module (look inside signed_multiplier.py).
The variable Tsyn stands for synapse time and is the time delay introduced by the synapse. The value Tsyn=1.0 is used by default throughout Axon. It’s arbitrary and can be changed without affecting the dynamics of the spiking networks.
The synapses used to connect the modules are of type V. V-synapses cause the receiving neuron to spike right after receiving a spike. Hence, they are used to propagate information forward in the network.
The simulator
In order to make the network spike, we need an engine that handles the simulation of the spike dynamics. That engine is the Simulator.
from axon_sdk.simulator import Simulator
from axon_sdk.primitives import DataEncoder
encoder = DataEncoder(Tmin=10.0, Tcod=100.0)
mac_net = MacNetwork(encoder)
sim = Simulator(mac_net, encoder, dt=0.01)
For an explanation of how DataEncoder encodes values, take a look at Core concepts > Interval coding.
The simulator evolves the spiking module sequentially with a timestep of dt. Using dt=0.01 is enough to get accurate simulations for most networks. In some cases, dt=0.001 is also used. In general, we want dt << Tsyn.
Inputs & outputs
The simulator is in charge of inputting spikes to the network.
Let’s set some numeric values for the MAC operation:
a = 0.5
x = 0.3
b = 0.8
Using those:
y = a * x + b = 0.95
We can use the simulator’s method .apply_input_value(val, neuron, t0) to input spikes to the network.
a = 0.5
x = 0.3
b = 0.8
sim.apply_input_value(a, mac_net.mul_net.input1_plus, t0=0)
sim.apply_input_value(x, mac_net.mul_net.input2_plus, t0=0)
sim.apply_input_value(b, mac_net.add_net.input2_plus, t0=0)
The method Simulator.apply_input_value(val, neuron, t0) automatically applies a pair of spikes encoding a value to a neuron at timestep t0. To input an individual spike, there is also Simulator.apply_input_spike(neuron, t).
Since in this example all inputs are positive, we can manually input them to the plus neurons. Inputting values manually is an academic exercise which does not scale to real-world scenarios. In further tutorials we’ll see how to automate this process.
Running the simulation
Now, it’s just a matter of letting the simulation run for a certain amount of time:
sim.simulate(simulation_time=500)
If everything went fine, the plus output of the adder module should have spiked twice, and the interval between the spikes will encode the desired value: 0.95.
spikes_plus = sim.spike_log.get(mac_net.add_net.output_plus.uid, [])
spikes_plus
>> [381.94, 486.94]
encoder.decode_interval(spikes_plus[1] - spikes_plus[0])
>> 0.95
Using the computation compiler
So far, we have manually combined different computational blocks to achieve a larger computation (for example, in the tutorial Combining computations). However, that approach becomes unfeasible for larger computations or full algorithms.
Axon SDK provides a computation compiler that translates a computation defined in Python into the spiking network that implements it. The goal of this tutorial is to introduce the computation compiler.
The Scalar class
Axon SDK provides a class whose purpose is to track the computations applied to scalar values. That class is called Scalar.
The Scalar base class wraps floating-point scalars with the purpose of logging the operations performed on them during algorithms.
from axon_sdk.compilation import Scalar
a = Scalar(0.5)
x = Scalar(0.3)
b = Scalar(0.8)
The Axon compiler works by logging the subsequent computations performed over scalars, building a computation graph and finally, transforming it into its equivalent spiking network.
Once the variables are wrapped by a Scalar, we can use them with normal Python syntax to implement algorithms:
y_mac = a * x + b
We can also visualize the built computation graph:
y_mac.draw_comp_graph(outfile='MAC_computation_graph')
The computation graph is a graphical representation of a computation where each node represents an operation and each edge a scalar value.
It’s possible to run more complex computations over variables wrapped by Scalar. Everything works as expected:
from axon_sdk.compilation import Scalar
def dot_prod(a: list[Scalar], b: list[Scalar]):
    return a[0] * b[0] + a[1] * b[1]
a = [Scalar(0.5), Scalar(0.3)]
b = [Scalar(0.2), Scalar(0.6)]
y_dot = dot_prod(a, b)
y_dot.draw_comp_graph(outfile='dot_computation_graph')
Once we have a computational graph built, we can transform it into a spiking network using the compiler.
From computation graph to spiking network
Axon provides a compilation module with methods for performing the compilation process and objects to hold its output. The compile_computation() method transforms a computation graph into a spiking network:
from axon_sdk.compilation import ExecutionPlan, compile_computation
from axon_sdk.compilation import Scalar
a = Scalar(0.5)
x = Scalar(0.3)
b = Scalar(0.8)
y = a * x + b
exec_plan = compile_computation(y, max_range=1)
Note: Since our variables in this example are bounded by 1, we can use max_range=1. Using a larger range allows computations over an extended numeric range, but at a cost of precision when executing the spiking network.
The output of the compilation process is an ExecutionPlan, which has three fields:
- net
- triggers
- output_reader
The execution plan contains the generated spiking network in net. Besides, it contains triggers, objects that automatically handle inputting data to the spiking net (taking care of value-range normalization and sign), and output_reader, for reading the output of the network.
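As a quick orientation, a sketch of inspecting these fields (attribute names as they appear in the code examples below):
print(exec_plan.net)            # the generated spiking network
print(exec_plan.triggers)       # input triggers handling normalization and sign
print(exec_plan.output_reader)  # reader for decoding the network output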
The execution plan is ready to be simulated:
Simulating the spiking computation
The simulator knows how to run the execution plan produced by the compilation:
from axon_sdk.simulator import Simulator
from axon_sdk.primitives import DataEncoder
encoder = DataEncoder(Tmin=10.0, Tcod=100.0)
sim = Simulator.init_with_plan(exec_plan, encoder)
sim.simulate(simulation_time=600)
All that is left is to read out the output spikes and decode the value:
spikes_plus = sim.spike_log.get(exec_plan.output_reader.read_neuron_plus.uid, [])
spikes_plus
>> [381.94, 486.94]
encoder.decode_interval(spikes_plus[1] - spikes_plus[0])
>> 0.95
As expected, the spiking network outputs spikes that, when decoded, yield 0.5 * 0.3 + 0.8 = 0.95.
Supported operations
The current version of the compiler supports the following scalar operations:
Operation | Description |
---|---|
ADD | a + b |
NEG | - b |
MUL | a * b |
DIV | a / b |
Only algorithms that use the Python operators +, -, * and / can be compiled to spiking networks.
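As a small illustration, here is a sketch that exercises all four supported operations on Scalar values (that binary - lowers to NEG plus ADD is our assumption):
from axon_sdk.compilation import Scalar, compile_computation

a, b, c = Scalar(0.4), Scalar(0.2), Scalar(0.5)
y = (a + b) * c / Scalar(0.8) - b   # ADD, MUL, DIV, NEG
exec_plan = compile_computation(y, max_range=1)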
Other primitive arithmetic operations (EXP, etc.), complex operations (RELU, etc.) and control-flow operations (BEQ, etc.) will be added in future releases.
Feel free to submit requests to extend the supported operations in our GitHub issues.
Using an extended numerical range
In the example before, the input variables were bounded by 1. Both the input values and any intermediate value had to be in the range [-1, 1].
Important: It’s the user’s responsibility to guarantee that any input value and intermediate computation value is in the range [-1, 1]. The current version of the compiler does not detect overflows. Failure to comply will yield an unreliable spiking execution.
However, we can use an extended numerical range, which grants more computational freedom, by adjusting max_range:
a = Scalar(5)
x = Scalar(3)
b = Scalar(8)
y = a * x + b
exec_plan = compile_computation(y, max_range=100)
It’s still the user’s responsibility to guarantee that the input and intermediate values are within [-max_range, max_range].
Behind the scenes, both the simulator and the spiking network are reconfigured to take care of the extended numeric range. Inputs are linearly squeezed into [-1, 1] and intermediate operations compensate for the normalization constant.
For example, multiplications must compensate for an extra max_range in the denominator:
a * b -> (a / max_range) * (b / max_range) -> a * b / (max_range)^2
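To make the bookkeeping concrete, here is the normalization arithmetic in plain Python (no SDK calls, just the math described above):
max_range = 100
a, b = 5.0, 3.0
a_norm = a / max_range               # 0.05
b_norm = b / max_range               # 0.03
prod_norm = a_norm * b_norm          # 0.0015 = a*b / max_range^2
compensated = prod_norm * max_range  # 0.15  = a*b / max_range
print(compensated * max_range)       # 15.0, the de-normalized product a*b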
From a user’s perspective, everything behaves as expected:
encoder = DataEncoder(Tmin=10.0, Tcod=100.0)
sim = Simulator.init_with_plan(exec_plan, encoder)
sim.simulate(simulation_time=600)
spikes_plus = sim.spike_log.get(exec_plan.output_reader.read_neuron_plus.uid, [])
The only difference is that, now, the readout value must be de-normalized:
decoded_val = encoder.decode_interval(spikes_plus[1] - spikes_plus[0])
de_norm_value = decoded_val * exec_plan.output_reader.normalization
de_norm_value
>> 23
Building custom spiking modules
In other tutorials, we have used the spiking modules provided by the Axon SDK library or combined them to define complex computations. However, in certain cases one might need to define modules from scratch. Axon SDK provides an infrastructure to do that.
In this tutorial, we will implement from scratch a module that computes the minimum of two input values: a MinNetwork module, which is not part of the Axon library (yet).
MinNetwork: compute the minimum
Axon is based on the theoretical foundation introduced by the STICK framework, presented in the paper STICK: Spike Time Interval Computational Kernel, A Framework for General Purpose Computation using Neurons, Precise Timing, Delays, and Synchrony. In the paper, there is a design for a spiking network that computes the minimum between two floating point values, which is the one we will implement in this tutorial.
Note: For a deep understanding of the operating principles behind STICK and the workings of the minimum network, the STICK paper contains a great explanation. If you’re interested in learning about it, please, refer to the paper. This tutorial will only address the implementation and simulation of the network in Axon.
This is the spiking network presented in the paper to compute the minimum between two inputs:
Let’s break down the meaning of the graph, step by step.
Each node represents a spiking neuron and each edge, a synaptic connection.
Each synapse has 3 parameters which define its properties:
- type
- weight
- delay
All synapses are black and therefore V-type synapses (as presented in the paper).
The weight of each synapse is plotted next to it. We can see the values we, wi, and combinations of them with a preceding factor. The weight we stands for weight excitatory: a synapse with weight we will cause the receiving neuron to spike immediately (in the next update cycle). The weight wi stands for weight inhibitory and has the contrary effect: after receiving a spike through a wi synapse, a neuron will need 2x we to spike.
The delays of the synapses, indicated by the line type, take the values Tsyn and Tsyn + Tneu. The first, Tsyn, stands for synapse time and is an arbitrary but constant value, usually set to Tsyn=1. The second, Tneu, stands for neuron propagation time: the time a neuron takes to spike after its membrane potential crosses the spiking threshold. Since we’re using a sequential simulator, Tneu = dt.
Presenting the SpikingNetworkModule
All spiking modules must be children of a base class called SpikingNetworkModule. Creating new modules requires subclassing it as well.
from axon_sdk.primitives import SpikingNetworkModule

class MinNetwork(SpikingNetworkModule):
    def __init__(self, encoder, module_name=None):
        super().__init__(module_name)
        ...
The SpikingNetworkModule does basic housekeeping: it keeps track of child submodules and neurons, makes sure submodules have unique IDs, and provides methods to wire larger modules together. In short, it enables modularity and composability when building complex spiking networks.
Building the Minimum Network
We can follow the graph describing the minimum network from the STICK paper and wire up a new spiking module:
class MinNetwork(SpikingNetworkModule):
    def __init__(self, encoder, module_name='min_network'):
        super().__init__(module_name)
        # neuron params
        Vt = 10.0   # threshold voltage
        tm = 100.0  # membrane time constant
        tf = 20.0   # fast synapse time constant
        # synapse params
        we = Vt
        wi = -Vt
        Tsyn = 1.0
        Tneu = 0.01
        self.input1 = self.add_neuron(Vt, tm, tf, neuron_name='input1')
        self.input2 = self.add_neuron(Vt, tm, tf, neuron_name='input2')
        self.smaller1 = self.add_neuron(Vt, tm, tf, neuron_name='smaller1')
        self.smaller2 = self.add_neuron(Vt, tm, tf, neuron_name='smaller2')
        self.output = self.add_neuron(Vt, tm, tf, neuron_name='output')
        # from input1
        self.connect_neurons(self.input1, self.smaller1, "V", 0.5 * we, Tsyn)
        self.connect_neurons(self.input1, self.output, "V", 0.5 * we, 2 * Tsyn + Tneu)
        # from input2
        self.connect_neurons(self.input2, self.smaller2, "V", 0.5 * we, Tsyn)
        self.connect_neurons(self.input2, self.output, "V", 0.5 * we, 2 * Tsyn + Tneu)
        # from smaller1
        self.connect_neurons(self.smaller1, self.input2, "V", wi, Tsyn)
        self.connect_neurons(self.smaller1, self.output, "V", 0.5 * we, Tsyn)
        self.connect_neurons(self.smaller1, self.smaller2, "V", 0.5 * wi, Tsyn)
        # from smaller2
        self.connect_neurons(self.smaller2, self.input1, "V", wi, Tsyn)
        self.connect_neurons(self.smaller2, self.output, "V", 0.5 * we, Tsyn)
        self.connect_neurons(self.smaller2, self.smaller1, "V", 0.5 * wi, Tsyn)
Once built, we can use the simulator to run the dynamics of the network. Refer to the tutorial on Combining computations for a longer explanation about the simulator.
from axon_sdk.simulator import Simulator
from axon_sdk.primitives import DataEncoder
encoder = DataEncoder(Tmin=10.0, Tcod=100.0)
min_net = MinNetwork(encoder, 'min')
sim = Simulator(min_net, encoder)
What’s the smaller value between 0.7 and 0.2? That’s a hard question. Let’s find out:
val1 = 0.7
val2 = 0.2
sim.apply_input_value(val1, min_net.input1, t0=0)
sim.apply_input_value(val2, min_net.input2, t0=0)
sim.simulate(300)
Now, we can readout the spikes produced by the output neuron and decode the computed result:
spikes = sim.spike_log.get(min_net.output.uid, [])
spikes
>> [2.0100000000000002, 32.01]
encoder.decode_interval(spikes[1] - spikes[0])
>> 0.19999999999999996
Core concepts
Neuron Model in Axon
This document details the spiking neuron model used in Axon, which implements the STICK computational paradigm. STICK uses temporal coding, precise spike timing, and synaptic diversity for symbolic and deterministic computation.
Overview
Axon simulates event-driven, integrate-and-fire neurons with:
- Millisecond-precision spike timing
- Multiple synapse types with distinct temporal effects
- Explicit gating to modulate temporal dynamics
The base classes are:
AbstractNeuron
: defines core membrane equationsExplicitNeuron
: tracks spike times and enables connectivitySynapse
: defines delayed, typed connections between neurons
Neuron Dynamics
Each neuron maintains five internal state variables:
Variable | Description |
---|---|
V | Membrane potential |
ge | Persistent excitatory input (constant) |
gf | Fast exponential input (gated) |
gate | Binary gate controlling gf integration |
Vt | Membrane potential threshold |
The membrane potential evolves following the differential equation:
\[ \tau_m \frac{dV}{dt} = g_e + \text{gate} \cdot g_f \] \[ \frac{dg_e}{dt} = 0 \] \[ \tau_f \frac{dg_f}{dt} = -g_f \]
When the membrane potential surpasses the threshold, V > Vt, the neuron emits a spike and resets:
V → Vreset
ge → 0
gf → 0
gate → 0
The reset guarantees clean operation for subsequent intervals.
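A minimal sketch of one integration step under these equations (illustrative only; the SDK's internal update may differ in details):
# One forward-Euler step of the neuron dynamics above
def euler_step(V, ge, gf, gate, dt, tau_m=100.0, tau_f=20.0, Vt=10.0, Vreset=0.0):
    V += (ge + gate * gf) * dt / tau_m   # tau_m * dV/dt = ge + gate*gf
    gf -= gf * dt / tau_f                # tau_f * dgf/dt = -gf
    spiked = V > Vt
    if spiked:
        V, ge, gf, gate = Vreset, 0.0, 0.0, 0   # reset after spiking
    return V, ge, gf, gate, spiked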
Synapse Types
The neuron model supports four synapse types, each with a weight (w).
Type | Effect |
---|---|
V | Immediate change in membrane: V += w |
ge | Adds persistent drive: ge += w |
gf | Adds fast decaying drive: gf += w |
gate | Toggles gate flag (w = ±1) to activate gf |
Each synapse also includes a configurable delay, enabling precise temporal computation.
Numerical Parameters
Typical neuron parameter values used in Axon:
Parameter | Value | Meaning |
---|---|---|
Vt | 10.0 | Spiking threshold |
Vreset | 0.0 | Voltage after reset |
τm | 100.0 | Membrane integration constant |
τf | 20.0 | Fast synaptic decay constant |
Units are in milliseconds and millivolts, matching real-time symbolic processing and neuromorphic feasibility.
Benefits of This Model
This neuron model is designed for interval-coded values. Time intervals between spikes directly encode numeric values.
The neuron model has dynamic behaviours that enable symbolic operations such as memory, arithmetic, and differential-equation solving. The dynamics of this neuron model form a Turing-complete computation framework (for in-depth information, refer to the STICK paper).
This neuron model has the following characteristics:
- Compact: Minimal neurons required for functional blocks
- Precise: Accurate sub-millisecond spike-based encoding
- Composable: Modular design supports hierarchical circuits
- Hardware-Compatible: Ported to digital integrate-and-fire cores
Neuron Model Animation
This animation demonstrates how a single STICK neuron responds over time to different synaptic inputs. Each input type (V, ge, gf, gate) produces distinct changes in membrane dynamics. A ge synapse produces a linear increase of V, while gf produces an exponential one.
Summary of Synapse Effects
Synapse Type | Behavior |
---|---|
V | Instantaneous jump in membrane potential V |
ge | Slow, steady increase in V over time |
gf + gate | Fast, nonlinear voltage rise due to exponential dynamics |
gate | Controls whether gf affects the neuron at all |
Event-by-event explanation
Time (ms) | Type | Value | Description |
---|---|---|---|
t = 20 | V | 10.0 | Instantaneously pushes V to threshold: triggers immediate spike |
t = 60 | ge | 2.0 | Applies constant integration current: slow, linear voltage increase |
t = 100 | gf | 2.5 | Adds fast-decaying input, gated via gate = 1 at same time |
t = 160 | V | 2.0 | Small, instant boost to V |
t = 200 | gate | -1.0 | Disables exponential decay pathway by zeroing the gate signal |
t = 20 ms — V(10.0)
- A V-synapse adds +10.0 mV to V instantly.
- Since Vt = 10.0, this causes an immediate spike.
- The neuron resets: V → 0, ge, gf, gate → 0.
Effect: Demonstrates a direct spike trigger via instantaneous voltage jump.
t = 60 ms — ge(2.0)
- A ge-synapse applies a constant input current.
- Voltage rises linearly over time.
- Alone, this isn’t sufficient to reach Vt, so no spike occurs yet.
Effect: Shows the smooth effect of continuous integration from ge-type input.
t = 100 ms — gf(2.5) and gate(1.0)
- A gf-synapse delivers a fast-decaying input current.
- A gate-synapse opens the gate (gate = 1), activating gf dynamics.
- Voltage rises nonlinearly as gf initially dominates, then decays.
- The combined effect of the earlier ge and gf causes a spike shortly after.
Effect: Demonstrates exponential integration (gf) gated for a temporary burst.
t = 160 ms — V(2.0)
- A small V-synapse bump of +2.0 mV occurs.
- This is not enough to cause a spike, but it shifts V upward instantly.
Effect: Shows subthreshold perturbation from a V-type synapse.
t = 200 ms — gate(-1.0)
- The gate is closed (gate = 0), disabling the gf term.
- Any remaining gf is no longer integrated into V.
Effect: Demonstrates control logic: gf is disabled, computation halts.
Data Encoding in Axon
The DataEncoder class is responsible for converting scalar values into inter-spike intervals (ISIs) and decoding them back. This functionality is central to how Axon implements symbolic computation using the STICK (Spike Time Interval Computational Kernel) framework.
1. Concept
In STICK-based networks, numerical values are encoded not in voltage amplitude or spike rate, but in the time difference between two spikes. This allows for:
- High temporal resolution
- Symbolic logic without rate coding
- Hardware-friendly encoding (e.g., for ADA)
2. Encoding Equation
A normalized value x ∈ [0, 1] is encoded as a spike interval:
Δt = Tmin + x * Tcod
where:
- Tmin is the minimum interval (e.g., 10 ms)
- Tcod is the coding range (e.g., 100 ms)
- Δt is the resulting inter-spike interval (ISI)
Example:
Δt = 10 + 0.4 * 100 = 50 ms
Decoding Equation
To decode a spike interval back into a value:
x = (interval - Tmin) / Tcod
where:
where interval is the time difference between two spikes. This is the inverse of the encoding equation.
3. DataEncoder Class
The DataEncoder class provides methods for encoding and decoding values:
class DataEncoder:
    def __init__(self, Tmin=10.0, Tcod=100.0):
        ...
    def encode_value(self, value: float) -> tuple[float, float]:
        ...
    def decode_interval(self, spiking_interval: float) -> float:
        ...
Attributes:
Attribute | Description |
---|---|
Tmin | Minimum ISI (typically 10 ms) |
Tcod | Duration over which [0,1] is scaled (e.g., 100 ms) |
Tmax | Tmin + Tcod (maximum possible ISI) |
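For intuition, here is a minimal reference implementation of the two equations above (a sketch, not necessarily the SDK’s actual implementation):
# Reference implementation of the encode/decode equations
class SimpleEncoder:
    def __init__(self, Tmin=10.0, Tcod=100.0):
        self.Tmin = Tmin
        self.Tcod = Tcod

    def encode_value(self, x: float) -> tuple[float, float]:
        # First spike at t=0; second spike Tmin + x*Tcod later
        return (0.0, self.Tmin + x * self.Tcod)

    def decode_interval(self, interval: float) -> float:
        return (interval - self.Tmin) / self.Tcod

enc = SimpleEncoder()
print(enc.encode_value(0.6))     # (0.0, 70.0)
print(enc.decode_interval(70.0)) # 0.6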
Integration
The DataEncoder is used during simulation and output processing:
from axon_sdk.simulator import Simulator
from axon_sdk.primitives import DataEncoder
encoder = DataEncoder(Tmin=10.0, Tcod=100.0)
spike_pair = encoder.encode_value(0.6) # returns (0.0, 70.0)
This spike pair can then be injected into the network, and the simulator will handle the timing based on the encoded intervals.
sim.apply_input_value(value=0.6, neuron=input_neuron)
Output Decoding
interval = spike2_time - spike1_time
decoded_value = encoder.decode_interval(interval)
Axon Simulator Engine
The Axon simulator executes symbolic spiking neural networks (SNNs) built with the STICK (Spike Time Interval Computational Kernel) model. This document describes the simulation engine’s architecture, parameters, workflow, and features.
1. Purpose
The Simulator class provides a discrete-time, event-driven environment to simulate:
- Spiking neuron dynamics
- Synaptic event propagation
- Interval-based input encoding and output decoding
- Internal logging of voltages and spikes
It is optimized for symbolic, low-rate temporal computation rather than high-frequency biological modeling.
2. Core Components
Component | Description |
---|---|
net | The user-defined spiking network (a SpikingNetworkModule ) |
dt | Simulation timestep in seconds (default: 0.001 ) |
event_queue | Priority queue managing scheduled synaptic events |
encoder | Object for encoding/decoding interval-coded values |
spike_log | Maps neuron UIDs to their spike timestamps |
voltage_log | Records membrane voltage per neuron per timestep |
3. Simulation Loop
The simulator proceeds in dt-sized increments for a specified duration:
1. Event Queue Check: all scheduled events due at time t are popped.
2. Synaptic Updates: each event updates the target neuron’s state (V, ge, gf, or gate).
3. Neuron Updates: each affected neuron is numerically integrated using
V += (ge + gate * gf) * dt / tau_m
where tau_m is the membrane time constant.
4. Spike Detection & Reset: if V ≥ Vt, the neuron spikes (V ← Vreset, ge ← 0, gf ← 0, gate ← 0) and all outgoing synapses generate future spike events.
5. Activity Tracking: neurons with non-zero ge, gf, or gate are marked active for the next step.
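Putting the five steps together, a pseudocode sketch of the loop (names like pop_due and schedule_outgoing_events are illustrative, not the SDK’s API):
# Illustrative structure of the simulation loop, not the SDK's actual source
t = 0.0
while t < simulation_time:
    for event in event_queue.pop_due(t):               # 1. event queue check
        apply_synaptic_event(event.target, event)      # 2. update V/ge/gf/gate
    for neuron in active_neurons:
        neuron.V += (neuron.ge + neuron.gate * neuron.gf) * dt / tau_m  # 3. integrate
        if neuron.V >= Vt:                             # 4. spike detection & reset
            neuron.reset()                             #    V←Vreset; ge,gf,gate←0
            schedule_outgoing_events(neuron, t)        #    enqueue delayed events
    mark_active(active_neurons)                        # 5. activity tracking
    t += dt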
4. Configuration Knobs
Parameter | Description | Default |
---|---|---|
dt | Time resolution per step (in seconds) | 0.001 (1 ms) |
Tmin | Minimum interspike delay in encoding | 10.0 ms |
Tcod | Encoding range above Tmin | 100.0 ms |
simulation_time | Total simulation duration (in seconds) | user-defined |
These settings are defined at the simulator or encoder level depending on purpose.
5. Inputs & Injection
apply_input_value(value, neuron, t0=0)
Injects a scalar value ∈ [0, 1] into a neuron via an interval-coded spike pair.
apply_input_spike(neuron, t)
Injects a single spike into a neuron at exact time t.
6. Output Decoding
To read results from signed STICK outputs:
from axon_sdk.simulator import decode_output
value = decode_output(simulator, reader)
- Decodes the interval between two spikes on either the + or − output neuron
- Returns a signed scalar in [−1, 1] (scaled by reader.normalization)
7. Logging and Visualization
The simulator maintains:
- spike_log: maps neuron UIDs to spike timestamps: {neuron_uid: [t0, t1, …]}
- voltage_log: maps neuron UIDs to their membrane voltages at each timestep: {neuron_uid: [V0, V1, …]}
Optional visualization can be enabled by setting VIS=1
in your environment.
sim.launch_visualization()
- plot_chronogram(): spike raster and voltage traces
- vis_topology(): interactive network topology visualization
8. Design Flow
- Define Network: Create a SpikingNetworkModule with neurons and synapses.
from axon_sdk.primitives import SpikingNetworkModule
net = SpikingNetworkModule()
- Instantiate Encoder: Create an encoder for interval coding.
from axon_sdk.primitives import DataEncoder
encoder = DataEncoder(Tmin=10.0, Tcod=100.0)
- Instantiate Simulator: Create a Simulator instance with the network and parameters.
sim = Simulator(net, encoder, dt=0.001)
- Apply Inputs: Use apply_input_value() or apply_input_spike() to inject data.
sim.apply_input_value(0.5, neuron, t0=0)
- Run Simulation: Execute the simulation for a specified duration.
sim.simulate(simulation_time=100)
- Analyze Outputs: Use decode_output() to read results from the simulation.
value = decode_output(sim, reader)
Example Usage
from axon_sdk.simulator import Simulator
from axon_sdk.networks import MultiplierNetwork
from axon_sdk.primitives import DataEncoder
encoder = DataEncoder()
net = MultiplierNetwork(encoder)
sim = Simulator(net, encoder, dt=0.001)
a, b = 0.4, 0.25
sim.apply_input_value(a, net.input1)
sim.apply_input_value(b, net.input2)
sim.simulate(simulation_time=0.5)
sim.plot_chronogram()
9. Summary
- Event-driven, millisecond-resolution simulator
- Supports interval-coded STICK networks
- Accurate logging of all internal neuron dynamics
- Integrates seamlessly with compiler/runtime interfaces
Interval Coding in Axon
This document explains how Axon implements the interval-based encoding and computation as defined by the STICK (Spike Time Interval Computational Kernel) model.
1. Neuron & Synapse Model
Axon uses a simplified integrate-and-fire neuron model, supporting three synapse types:
- V-synapses: instantaneously modify the membrane potential (excitatory w_e = V_t or inhibitory w_i = -V_t)
- g_e-synapses: conductance-based, model temporal integration
- g_f-synapses: fast-gated, conductance-based
Each synapse includes a configurable delay (≥ T_syn, the minimal delay).
2. Interval-Based Value Encoding
Values x ∈ [0,1] are encoded in the time difference Δt between two spikes:
Δt = T_min + x · T_cod
x = (Δt − T_min) / T_cod
where:
- T_min: minimum time difference (e.g., 1 ms)
- T_cod: coding interval (e.g., 10 ms)
- Δt: time difference between two spikes
3. Interval-Based Computation
Spiking networks can be built to process these interval-encoded values. The network dynamics are governed by the synaptic weights and delays, allowing for complex computations based on the timing of spikes.
- Value x is represented by the timing between spikes, Δt.
- Spiking networks manipulate these intervals via synaptic delays, integration, and gating, executing operations like addition, multiplication, and memory.
4. Memory & Control Flow Patterns
Axon includes reusable network patterns for symbolic SNN algorithms, such as:
4.1 Volatile Memory
- Uses an accumulator neuron (acc) to store the value in its membrane potential.
- A spike-to-store operation encodes the interval into the potential; recall emits an output with the same interval, once.
Axon SDK API Reference
Welcome to the API reference for the Axon SDK — a simulation framework for spike-timing-based neural computation using the STICK model.
This documentation includes modules for neuron models, synaptic primitives, network architecture, simulation, visualization, and compilation utilities.
Modules
- axon_sdk package
- axon_sdk.primitives package
- axon_sdk.networks package
- axon_sdk.compilation package
- axon_sdk.visualization package
About and contact
Axon SDK is a neuromorphic framework to build spiking neural networks (SNN) for general-purpose computation.
Axon SDK was built by Neucom. At Neucom, we’re building a general-purpose neuromorphic processor that executes deterministic computation tasks with minimal energy consumption.
Axon SDK is based on the theoretical work presented in STICK and expands it in several directions:
- Axon SDK provides a library of spiking computation kernels that can be combined to achieve complex computations.
- Axon SDK extends STICK with an arbitrary computation range, beyond the original constraint to scalars in the [0, 1] range.
- Axon SDK provides new computation primitives, such as modulo, scalar multiplication, and division.
- Axon SDK provides a compiler that translates user-defined algorithms, written in Python syntax, into their spiking version.
- Axon SDK includes a Simulator to emulate the operation of the constructed SNN.
Axon SDK is open-sourced under a GPLv3 license, preventing its inclusion in closed-source projects.
Axon SDK was developed by Iñigo Lara, Francesco Sheiban and Dmitri Lyalikov and belongs to Neucom ApS. Neucom ApS is based in Copenhagen, Denmark.
Contact
If you’re working with Axon or STICK-based hardware and want to share your application, request features, or report issues, reach out via GitHub Issues or contact the Neucom team at francesco@neucom.ai.
Contributing
If you’d like to contribute to Axon SDK, begin by forking the public repository to your own account, committing some code, and opening a PR.
You are also welcome to submit issues to the GitHub repo.
We use main as a stable branch. You should commit your modifications to a new feature branch.
$ git checkout -b feature/my-feature develop
...
$ git commit -m 'This is a verbose commit message.'
Then push your new branch to your repository
$ git push -u origin feature/my-feature
Use the Black code formatter on your final commit; this is a requirement. If your modifications aren’t already covered by a unit test, please include a unit test with your merge request. Unit tests use pytest and go in the tests/ directory.
Then, when you’re ready, open a merge request on GitHub from the feature branch in your fork to the Axon SDK repo.
Building the documentation
The documentation is based on mdbook.
To build a live, locally-hosted HTML version of the docs, use the following commands (you’ll need to install Rust and mdbook):
cd axon-sdk
mdbook serve --open
The docs are built automatically as part of our CI/CD pipeline when a new commit arrives to main.
Running the tests
As part of the merge review process, we’ll check that all the unit tests pass. You can check this yourself (and probably should) by running the unit tests locally.
To run all the unit tests for Axon SDK, use pytest:
cd axon-sdk
pytest tests
The tests are run automatically as part of our CI/CD pipeline on every branch.
References
Axon SDK is based on theoretical work proposed in the following paper by Xavier Lagorce and Ryad Benosman:
Xavier Lagorce, Ryad Benosman; STICK: Spike Time Interval Computational Kernel, a Framework for General Purpose Computation Using Neurons, Precise Timing, Delays, and Synchrony. Neural Comput 2015; 27 (11): 2261–2317. doi: https://doi.org/10.1162/NECO_a_00783