While reading ‘The physics of brain network structure, function and control’ by Chris Lynn and Dani Bassett, I learned that ‘Statistical mechanics of complex networks’ by Réka Albert & Albert-László Barabási was an essential reference [1,2]. But I didn’t know much graph theory and even less about random graphs, which play a central role in that reference. In order to develop an intuition about random graphs, I decided to map this idea to a phenomenon I observe on a daily basis at all levels of society: mimetic behavior.

Here, I propose a simple and tractable mechanism for mimetic desire [3]. When we change our beliefs, we do so not because of their intrinsic value. Rather, our desire to switch from belief $A$ to belief $B$ is proportional to the number of adherents of belief $B$ that we know. Technically, I modeled the problem of two conflicting beliefs that propagate through a network with $N$ nodes in a decentralised manner.

Using vertex notation, two individuals $v_i$ and $v_j$ with identical beliefs are connected with probability $q$, and with probability $1-q$ otherwise. $v_i$ changes its belief with a probability proportional to the number of nodes connected to $v_i$ that hold opposing views. Two key motivating questions are:

  1. Under what circumstances does a belief get completely wiped out?
  2. Under what circumstances does a belief completely dominate (i.e. wipe out) all other beliefs?

In the scenario where there are only two possible beliefs these two questions are equivalent, and I show that on average it’s sufficient that $\frac{1}{2} < q < 1$ and that, initially, one belief has a greater number of adherents than the other.

Representation of the problem:

Virtual weights as a representation of potential connections:

Nodes carrying the first belief were assigned to the set of red vertices, $R$, and nodes carrying the second belief were assigned to the set of blue vertices, $B$. However, I wasn’t satisfied with this representation.

After some reflection, I chose $1$ and $-1$ as labels, the reason being that a change of belief using this representation is equivalent to multiplication by $-1$. As a result, the vertices may be represented by an $N$-dimensional vector:

\begin{equation} \vec{v} \in \{-1,1\}^N \end{equation}

where $N$ is the number of nodes in the network.
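As a quick illustration (the function name is my own), such a belief vector can be sampled directly in Julia:

```julia
## Sample a belief vector in {-1,1}^N: each node is red (+1)
## with probability p and blue (-1) otherwise.
function sample_beliefs(N::Int64, p::Float64)
	return [rand() < p ? 1.0 : -1.0 for _ in 1:N]
end
```

A change of belief at node $i$ is then simply `v[i] *= -1.0`.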

Using this representation, between each pair of vertices we may define a virtual weight matrix $W$:

\begin{equation} w_{ij} = v_i \cdot v_j \end{equation}

where $w_{ij} = 1$ implies identical beliefs and we have $w_{ij} = -1$ otherwise.

Now, we note that $W$ may be conveniently decomposed as follows:

\begin{equation} W= W^+ + W^- \end{equation}

where $W^-$ denotes potential connections between nodes of different colors and $W^+$ denotes potential connections between nodes of identical colors.
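This decomposition can be sketched in Julia as follows (the helper name is mine; self-connections are zeroed out, as in the code further below):

```julia
## Virtual weight matrix w_jk = v_j * v_k with self-connections zeroed,
## decomposed as W = W+ + W-, where W+ holds the +1 entries between
## same-color pairs and W- holds the -1 entries between different-color pairs.
function weight_matrices(v::Vector{Float64})
	n = length(v)
	W = [v[j]*v[k]*(j != k)*1.0 for j in 1:n, k in 1:n]
	Wp = (W .> 0)*1.0
	Wm = -1.0*(W .< 0)
	return W, Wp, Wm
end
```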

Modelling the adjacency matrix as a combination of random matrices:

In order to simulate variations in connectivity we may assume that nodes of the same color are connected with probability:

\begin{equation} \frac{1}{2} < q < 1 \end{equation}

and nodes of different color are connected with probability $1-q$.

Given $q$, we may therefore construct the adjacency matrix $A$ by sampling random matrices:

\begin{equation} M_1, M_2 \sim \mathcal{U}([0,1])^{N \times N} \end{equation}

\begin{equation} M^+ = 1_{[0,q)} \circ M_1 \end{equation}

\begin{equation} M^- = 1_{[0,1-q)} \circ M_2 \end{equation}

where $1_S$ denotes the characteristic function over the set $S$, and then we compute the Hadamard products:

\begin{equation} A^+ = M^+ \circ W^+ \end{equation}

\begin{equation} A^- = M^- \circ W^- \end{equation}

so the adjacency matrix is given by $A = A^+ + A^-$.
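Concretely, this sampling step may be sketched as follows (the helper name is mine; as in the mimesis function further below, the sampled matrix is not symmetrised):

```julia
## Sample a signed adjacency matrix A = A+ + A-: same-color pairs
## (nonzero entries of Wp) connect with probability q, different-color
## pairs (nonzero entries of Wm) with probability 1-q.
function sample_adjacency(Wp::Matrix{Float64}, Wm::Matrix{Float64}, q::Float64)
	n = size(Wp, 1)
	Mp = (rand(n, n) .< q)*1.0       ## indicator of [0,q) applied entrywise
	Mm = (rand(n, n) .< 1 - q)*1.0   ## indicator of [0,1-q) applied entrywise
	return Mp .* Wp + Mm .* Wm       ## Hadamard products
end
```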

Now, in order to simulate the belief dynamics we simply use a majority vote:

\begin{equation} p(v_i^{n+1}=v_i^n) = \frac{\bar{N_i}}{N_i} \end{equation}

\begin{equation} p(v_i^{n+1}=-1 \cdot v_i^n) = 1- \frac{\bar{N_i}}{N_i} \end{equation}

\begin{equation} \bar{N_i} = \lvert A(i,-) > 0 \rvert -1 \end{equation}

\begin{equation} N_i = \bar{N_i} + \lvert A(i,-) < 0 \rvert \end{equation}

where $\bar{N_i}$ denotes the number of connections between $v_i$ and nodes sharing the same belief, without counting a connection of $v_i$ to itself.


Putting everything together with Julia, we may create the following mimesis function, which takes as input the number of nodes in the random graph $n$, the number of iterates of the system $N$, the fraction of nodes that are red $p$, and finally the probability $q$ that nodes of the same color form connections:

function mimesis(n::Int64,N::Int64,p::Float64,q::Float64)
	## n: number of nodes in the random graph
	## N: number of iterations of the random graph coloring process
	## p: the fraction of nodes that are red
	## q: the probability that nodes of the same color form connections

	## initialisation:
	v = zeros(N,n)
	color_ratio = zeros(N)

	## generate positive terms:
	v1 = (rand(n) .< p)*1.0

	## generate negative terms:
	v2 = ones(n) - v1

	v[1,:] = v1 - v2
	color_ratio[1] = sum((v[1,:] .> 0)*1.0)/n

	for i = 1:N-1

		W = zeros(n,n)

		for j = 1:n
			for k = 1:n
				## ignore self-connections:
				W[j,k] = v[i,j]*v[i,k]*(j != k)*1.0
			end
		end

		if abs(sum(v[i,:])) < n

			## split W into positive and negative parts:
			Z1 = (W .< 0)*1.0 ## vertices with different colors
			Z2 = W + Z1 ## vertices with the same color

			## sample random zero-one matrices for new connections:
			M1 = (rand(n,n) .< q)*1.0
			M2 = (rand(n,n) .< 1-q)*1.0

			## use Hadamard products to construct new adjacency matrices:
			A1 = M1 .* Z2
			A2 = M2 .* Z1

			A = A1 - A2

			## generate probabilities for color transformations:
			M_hat = [sum(A[j,:] .> 0) for j = 1:n]
			M = M_hat + [sum(A[j,:] .< 0) for j = 1:n]

			P = [M_hat[j]/(M[j]+1.0) for j = 1:n]

			## zero-one vectors based on P:
			p_q = (rand(n) .< P)*1.0
			Q1 = p_q .* v[i,:]
			Q2 = -1.0*(ones(n) .- p_q) .* v[i,:]

			v[i+1,:] = Q1 + Q2

			color_ratio[i+1] = sum((v[i+1,:] .> 0)*1.0)/n

		else

			## consensus has been reached so the coloring no longer changes:
			v[i+1,:] = v[i,1]*ones(n)
			color_ratio[i+1] = (v[i+1,1] > 0)*1.0

		end

	end

	return color_ratio
end


The above function may be called as follows for example:

c_ratio = mimesis(10, 100, 0.1, 0.6);


The expected number of neighbors:

If we denote the number of red vertices at instant $n$ by $\alpha_n$ and the number of blue vertices by $\beta_n$, we may observe that the expected number of neighbors of a vertex is given by:

\begin{equation} \langle N(v_i \in R) \rangle = q \cdot (\alpha_n -1) + (1-q) \cdot \beta_n \end{equation}

\begin{equation} \langle N(v_i \in B) \rangle = q \cdot (\beta_n -1) + (1-q) \cdot \alpha_n \end{equation}

Using the above equations we may define:

\begin{equation} \langle \alpha_{n+1} \rangle = \alpha_n \left( \frac{q \cdot (\alpha_n -1)}{q \cdot (\alpha_n -1) + (1-q) \cdot \beta_n} \right) + \beta_n \left(\frac{(1-q) \cdot \alpha_n}{q \cdot (\beta_n -1) + (1-q) \cdot \alpha_n} \right) \end{equation}

and we may deduce that $\langle \beta_{n+1} \rangle = N - \langle \alpha_{n+1} \rangle$. It must be noted that this is a mean-field approximation, as it treats $\alpha_n$ and $\beta_n$ as deterministic quantities at each step.
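As a numerical sanity check (the function name is mine), the recurrence can be iterated directly; at $\alpha_n = \beta_n$ the expected count is a fixed point, and for $q > \frac{1}{2}$ the majority belief gains adherents at every step:

```julia
## One step of the expected-red-count recurrence for fixed N and q:
## red vertices stay red in proportion to their same-color neighbors,
## and blue vertices flip in proportion to their red neighbors.
function expected_alpha_step(alpha::Float64, N::Int64, q::Float64)
	beta = N - alpha
	stay = alpha * q * (alpha - 1) / (q*(alpha - 1) + (1 - q)*beta)
	flip = beta * (1 - q) * alpha / (q*(beta - 1) + (1 - q)*alpha)
	return stay + flip
end
```

Starting from a small majority, e.g. $\alpha_0 = 60$ with $N = 100$ and $q = 0.6$, repeated application drives the expected red count toward $N$.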

$\alpha_0 > \beta_0$ implies that $\lim\limits_{n \to \infty} \langle \alpha_{n} \rangle = N$:

Assuming that $\frac{1}{2} < q < 1$, a simple calculation shows that:

\begin{equation} \langle \alpha_{n+1} \rangle - \alpha_n \geq 0 \iff \alpha_n \geq \beta_n \end{equation}

and since:

\begin{equation} \langle \alpha_{n+1} \rangle - \alpha_n= 0 \iff \beta_n = 0 \end{equation}

we may deduce that:

\begin{equation} \lim\limits_{n \to \infty} \langle \alpha_{n} \rangle = N \end{equation}

Analysis of $\Delta \alpha$:

Using the fact that $\beta_n = N - \alpha_n$, we may derive the following continuous-space variant of $\langle \alpha_{n+1} \rangle$, denoted $\Delta \alpha$:

\begin{equation} \Delta \alpha(\alpha,q) = \frac{q \cdot (\alpha^2 -\alpha)}{2q \cdot \alpha + N \cdot (1-q) - \alpha - q} + \frac{(1-q) \cdot \alpha \cdot (N-\alpha)}{q \cdot N -2 \cdot q \cdot \alpha + \alpha - q} \end{equation}

Assuming that $N$ is held constant, I obtained the following graph for various values of $q$:

It’s interesting to note that the curve doesn’t behave in an exactly symmetric manner, which is a little bit surprising. Specifically, the behavior of $\Delta \alpha$ when $\alpha < \frac{N}{2}$ doesn’t resemble the behavior of $\Delta \alpha$ when $\alpha > \frac{N}{2}$.
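The curve itself is a direct transcription of the formula above (the function name and the sampled values of $q$ are my own choices):

```julia
## Continuous-space update Delta_alpha(alpha, q) for fixed N, transcribed
## from the formula above. Both alpha = N/2 and alpha = N are fixed points
## of this map.
function delta_alpha(alpha::Float64, N::Float64, q::Float64)
	t1 = q*(alpha^2 - alpha) / (2*q*alpha + N*(1 - q) - alpha - q)
	t2 = (1 - q)*alpha*(N - alpha) / (q*N - 2*q*alpha + alpha - q)
	return t1 + t2
end

## evaluate the curve on a grid for several values of q:
curves = Dict(q => [delta_alpha(a, 100.0, q) for a in 1.0:99.0]
              for q in (0.55, 0.6, 0.75, 0.9))
```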


  1. Chris Lynn & Dani Bassett. The physics of brain network structure, function and control. 2019.
  2. Réka Albert & Albert-László Barabási. Statistical mechanics of complex networks. 2002.
  3. René Girard. Le Bouc émissaire. 1986.
  4. P. Erdős and A. Rényi. On the evolution of random graphs. 1960.
  5. G. Grimmett and C. McDiarmid. On colouring random graphs. 1975.