Graphs I
Graphs are probably the most versatile and important construction when it comes to representation. They are astonishingly ubiquitous: we can describe practically any system of relations or interactions as a graph. This holds from the nano scale, where we can model individual molecules; through the micro scale, where we can look at interactions between biological entities such as molecules, drugs and proteins modelled in interactomes; up to the macro scale, where we model social graphs of people.
Conceptualization: When we think of a graph, we think about a system of relations and interactions between its elements.
Let $V$ and $E$ be sets. Call an element of $V$ a vertex and an element of $E$ an edge. A graph is given by $V$, $E$, and a mapping that interprets edges as pairs of vertices. Exactly what this means depends on how one defines "mapping that interprets" and "pair". The possibilities are given below. We will need the following notation:

- $\Delta = \{(x, x) : x \in V\}$ is the diagonal subset of $V \times V$, the set of pairs $(x, x)$, so that its complement $V \times V \setminus \Delta$ is the set of pairs $(x, y)$ as above where $x \neq y$.
This gives us the following types of graphs, according to how the vertices of an edge are ordered:
1. Undirected graphs
2. Directed graphs
3. Undirected graphs as directed graphs with an involution

Example of a graph
We can also attach features to the nodes, modelled as $d$-dimensional vectors. For example, in a social network graph, each node might carry information about its user, such as age, height or sex. For the sake of simplicity we are going to ignore features attached to edges, but they could also exist.
Several research communities converged on graph neural networks independently:

| Community | Milestones | | | |
| --- | --- | --- | --- | --- |
| Machine Learners | Labeling RAAM (1994) | Backpropagation through structure (1996) | Graph Neural Networks (2008) | Gated GNN (2015) |
| Graph Theorists | Weisfeiler-Lehman kernels (2009) | k-GNN | GIN (2019) | Provably powerful GNN |
| Chemists | ChemNet (1995) | Neural descriptors (1997) | Molecular graph net (2005) | Molecular fingerprints (2017) |
The term graph neural networks first appeared in a series of papers by the research group of M. Gori and F. Scarselli. Graph theorists, however, argue that graph neural networks are just a rebranding of a graph isomorphism test, the Weisfeiler-Lehman test (1968). This test addresses a classical graph theory problem: determining whether two graphs are isomorphic, i.e. have the same connectivity up to a reordering of the nodes.
NOTE: The Weisfeiler-Lehman test itself runs in polynomial time, but it is only a necessary condition: it can fail to distinguish some non-isomorphic graphs. For the graph isomorphism problem in general, no polynomial-time algorithm is known; the problem was proved by Babai (2015) to be solvable in quasi-polynomial time.
The key property of a graph, like that of a set, is that it is unordered: we do not have a canonical way to order its nodes. Thus when we number the nodes of a graph, for example the one above, we are partially cheating. It is convenient, though, because we can then organize the node features into a matrix $\mathbf{X}$ of dimensions $n \times d$, where $n$ is the number of nodes and $d$ the dimension of the features. This way we prescribe some arbitrary ordering, which we also apply to the adjacency matrix.
The adjacency matrix for a graph with $n$ vertices is an $n \times n$ matrix whose $(i, j)$ entry is $1$ if vertex $i$ and vertex $j$ are connected, and $0$ if they are not.
The representation of the adjacency matrix depends on the ordering selected for the set of nodes.

Adjacency Matrix
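To make this concrete, here is a minimal sketch (in NumPy, on a small made-up graph; the sizes and edges are illustrative assumptions) of how an arbitrary node ordering yields the feature matrix $\mathbf{X}$ and the adjacency matrix $\mathbf{A}$:

```python
import numpy as np

# A small undirected graph on n = 4 nodes with d = 3 features per node.
# The node indices 0..3 are one arbitrary ordering of the unordered node set.
n, d = 4, 3
edges = [(0, 1), (0, 2), (1, 2), (2, 3)]

# Node feature matrix X: one d-dimensional feature vector per row (n x d)
X = np.random.rand(n, d)

# Adjacency matrix A: A[i, j] = 1 if vertices i and j are connected, else 0
A = np.zeros((n, n))
for i, j in edges:
    A[i, j] = A[j, i] = 1  # undirected graph, so A is symmetric
```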
Conclusion: numbering the nodes differently implies that the rows of the feature matrix and the corresponding rows and columns of the adjacency matrix are permuted accordingly. We denote this permutation matrix by $\mathbf{P}$; it corresponds to an element of the permutation group on $n$ elements, which contains $n!$ elements. We can think of $\mathbf{P}$ as a representation of the permutation group.
NOTE: The representation differs with respect to the type of the object: the permutation group acts differently on vectors and on matrices. It acts on the feature matrix by permuting its rows, $\mathbf{X} \mapsto \mathbf{P}\mathbf{X}$, and on the adjacency matrix by permuting both its rows and columns, $\mathbf{A} \mapsto \mathbf{P}\mathbf{A}\mathbf{P}^\top$.
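Continuing the hypothetical NumPy example above, the two actions look like this:

```python
# A permutation of the node indices, e.g. node 2 first, then 0, 3, 1
perm = [2, 0, 3, 1]
P = np.eye(n)[perm]   # the corresponding permutation matrix

X_perm = P @ X        # acts on features: the rows of X are reordered
A_perm = P @ A @ P.T  # acts on adjacency: rows *and* columns are reordered
```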
Strong precondition needed: in order to implement a function acting on a graph that produces a single output for the entire graph, we must ensure that this output is unaffected by the chosen ordering of the input nodes. Whatever ordering of the nodes is chosen, the output computed for the same graph must always be the same.

Permutation-invariant graph functions: $f(\mathbf{P}\mathbf{X}, \mathbf{P}\mathbf{A}\mathbf{P}^\top) = f(\mathbf{X}, \mathbf{A})$
In other scenarios, for instance when we want one output per node, it is convenient that the output of the function changes consistently with a reordering of the input nodes: permuting the nodes must permute the output in the same way. Such functions are called permutation-equivariant.
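A minimal sketch (NumPy, reusing the made-up $\mathbf{X}$, $\mathbf{A}$ and $\mathbf{P}$ from above; the two functions are illustrative choices, not the only ones) checking both properties numerically:

```python
def f_invariant(X, A):
    # Graph-level readout: a sum over nodes ignores their ordering
    return X.sum(axis=0)

def f_equivariant(X, A):
    # Node-level function: sum each node's neighbours' features
    return A @ X

# Invariance: same output for any node ordering
assert np.allclose(f_invariant(P @ X, P @ A @ P.T), f_invariant(X, A))
# Equivariance: permuting the input permutes the output the same way
assert np.allclose(f_equivariant(P @ X, P @ A @ P.T), P @ f_equivariant(X, A))
```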
A multiset is like a set, except that its elements are allowed to have multiplicities. Thus the multiset $\{1, 1, 2\}$ differs from the multiset $\{1, 2\}$, while $\{1, 2\}$ is the same as $\{2, 1\}$.

It is common to define multisets in terms of sets and functions: a multiset can be defined as a set $S$ together with a function from $S$ to a class of nonzero cardinal numbers, giving each element its multiplicity.
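In Python, `collections.Counter` behaves like a multiset, which makes the definition above easy to check:

```python
from collections import Counter

# Counter maps each element to its multiplicity, as in the definition above
assert Counter([1, 1, 2]) != Counter([1, 2])  # multiplicities matter
assert Counter([1, 2]) == Counter([2, 1])     # order does not
```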
Aggregating the multiset of neighbour features is the main operation of a classical Graph Neural Network architecture.

Neighbourhood example for a given node $i$
Question: How does this operation work? Answer: For each node $i$ in the graph we look at its neighbours and take their feature vectors, which together form a multiset. Even though the indices of the neighbours are unique, their feature vectors are not necessarily unique.

Multiset of neighbour features
NOTE: In this example the blue feature appears twice: two nodes have the same feature vector. In this scenario we are dealing with a multiset. (See section 5.1 Multiset)
Now we aggregate these features together with the feature vector of the node itself. This must be done in a permutation-invariant way, because we do not have a canonical ordering of the neighbours in a graph.
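A short sketch (NumPy, on the made-up graph above) of extracting the neighbour multiset of a node from the adjacency matrix:

```python
i = 2
neighbours = np.nonzero(A[i])[0]  # indices j with A[i, j] = 1
X_neigh = X[neighbours]           # multiset of neighbour feature vectors
# Any aggregation of X_neigh must not depend on the row order,
# e.g. X_neigh.sum(axis=0) or X_neigh.max(axis=0)
```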

Aggregation function over a node $i$
If we apply this local function at every node of the graph and stack the results into a feature matrix, we get a permutation-equivariant function $\mathbf{F}(\mathbf{X}, \mathbf{A})$: if we reorder the nodes, the rows of this matrix are permuted accordingly, i.e. $\mathbf{F}(\mathbf{P}\mathbf{X}, \mathbf{P}\mathbf{A}\mathbf{P}^\top) = \mathbf{P}\,\mathbf{F}(\mathbf{X}, \mathbf{A})$.
This local aggregation function in a GNN typically looks as follows:

$$\mathbf{h}_i = \phi\Big(\mathbf{x}_i,\ \bigoplus_{j \in \mathcal{N}_i} \psi(\mathbf{x}_i, \mathbf{x}_j)\Big)$$

- $\bigoplus$: a permutation-invariant aggregation operator; it is often the sum or the maximum.
- $\psi$: transforms the neighbours' features. It is a non-linear function that depends on the feature vectors of both node $i$ and node $j$. Its output can be seen as a message that is sent from node $j$ to update node $i$; that is why this architecture is also called message passing graph neural networks.
- $\phi$: updates the features of node $i$ using the aggregated features of the neighbours.
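A minimal message-passing layer sketch (NumPy; the weight matrices and the choice of sum aggregation are illustrative assumptions, and for simplicity $\psi$ here depends only on the neighbour's features):

```python
def relu(z):
    return np.maximum(z, 0)

# Made-up weights: psi maps d -> d, phi maps the concatenated 2d -> d
W_psi = np.random.rand(d, d)
W_phi = np.random.rand(2 * d, d)

def message_passing_layer(X, A):
    messages = relu(X @ W_psi)  # psi: transform every node's features into messages
    aggregated = A @ messages   # sum-aggregate messages over each node's neighbours
    # phi: update each node from its own features and the aggregate
    return relu(np.concatenate([X, aggregated], axis=1) @ W_phi)

H = message_passing_layer(X, A)  # permutation-equivariant node feature matrix
```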

References: Gilmer et al., 2017 (MPNN); Battaglia et al., 2018; Wang et al., 2018.