SKA Knowledge Flow Explorer

Visualize the knowledge flow per layer across the forward learning steps, and its trajectory in knowledge space.


Definitions

Quantity   Definition
Φ          knowledge flow, ‖ΔZ‖ / η
ΔZ         Zₖ − Zₖ₋₁ (pre-activation change)
η          learning rate, τ / K
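The definitions above can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's implementation; the function name `knowledge_flow` and the array shapes are assumptions.

```python
import numpy as np

def knowledge_flow(z_prev, z_curr, tau=1.0, K=100):
    """Knowledge flow Phi = ||dZ|| / eta, with eta = tau / K.

    z_prev, z_curr: pre-activation arrays at steps k-1 and k
    (illustrative shapes; any matching shapes work).
    """
    eta = tau / K            # learning rate reinterpreted as a time step
    dz = z_curr - z_prev     # pre-activation change ΔZ
    return np.linalg.norm(dz) / eta

# Example: a uniform unit change over a 4x4 pre-activation block
phi = knowledge_flow(np.zeros((4, 4)), np.ones((4, 4)), tau=1.0, K=100)
```

Here ‖ΔZ‖ = 4 (Frobenius norm of a 4×4 matrix of ones) and η = 0.01, so Φ = 400.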

Reference Paper

Abstract

This paper aims to extend the Structured Knowledge Accumulation (SKA) framework recently proposed by Mahi. We introduce two core concepts: the Tensor Net function and the characteristic time property of neural learning. First, we reinterpret the learning rate as a time step in a continuous system. This transforms neural learning from discrete optimization into continuous-time evolution. We show that learning dynamics remain consistent when the product of the learning rate and the number of iteration steps stays constant. This reveals a time-invariant behavior and identifies an intrinsic timescale of the network. Second, we define the Tensor Net function as a measure that captures the relationship between decision probabilities, entropy gradients, and knowledge change. Additionally, we define its zero-crossing as the equilibrium state between decision probabilities and entropy gradients. We show that the convergence of entropy and knowledge flow provides a natural stopping condition, replacing arbitrary thresholds with an information-theoretic criterion. We also establish that SKA dynamics satisfy a variational principle based on the Euler-Lagrange equation. These findings extend SKA into a continuous and self-organizing learning model. The framework links computational learning with physical systems that evolve by natural laws. By understanding learning as a time-based process, we open new directions for building efficient, robust, and biologically-inspired AI systems.
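The time-invariance claim (dynamics are preserved when η · K = τ is held constant) can be illustrated with a toy continuous system. The dynamics below (plain Euler integration of dz/dt = −z) are an assumption for illustration only, not SKA's actual update rule.

```python
def evolve(z0, eta, K):
    """Euler-integrate dz/dt = -z for K steps; eta acts as the time step."""
    z = z0
    for _ in range(K):
        z = z - eta * z
    return z

# Two runs with the same characteristic time tau = eta * K = 1.0:
a = evolve(1.0, 0.01, 100)    # many small steps
b = evolve(1.0, 0.002, 500)   # even more, smaller steps
# Both approximate exp(-1) ~ 0.368: the trajectory depends on tau, not on
# the particular (eta, K) split.
```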




About this App

Knowledge flow Φ measures how fast the pre-activations Z change per layer, normalized by η. The dotted vertical lines on the temporal plot mark the peak of each layer — each layer reaches its maximum knowledge flow at a different step K, revealing a hierarchical learning cascade. The scatter plot traces the trajectory of each layer in knowledge space — darker points are earlier steps. The red dot marks the entropy minimum for each layer, which aligns with the knowledge flow peak: the point where structured knowledge accumulation is optimal. Layer 4 follows a slower, lower trajectory with no distinct peak, reflecting its classification role.
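The per-layer peaks marked by the dotted vertical lines can be located directly from the flow values. A minimal sketch, assuming a hypothetical array `phi` of shape (layers, steps) holding each layer's knowledge flow over the forward learning steps:

```python
import numpy as np

# Hypothetical toy data: phi[layer, k] = knowledge flow of each layer at step k.
phi = np.array([
    [0.2, 0.9, 0.5, 0.3],   # layer 1 peaks early
    [0.1, 0.4, 0.8, 0.6],   # layer 2 peaks one step later
    [0.1, 0.2, 0.5, 0.7],   # layer 3 peaks later still
])

# Step index K* at which each layer's flow is maximal.
peak_steps = phi.argmax(axis=1)
# Successive layers peaking at later steps is the hierarchical learning cascade.
```

A classification-style layer with no distinct peak (like layer 4 in the plot) would show a flat or monotone row, so its `argmax` is not a meaningful cascade marker.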