Yao Zhang - Reading

Artificial intelligence can be studied through a unified mathematical lens in which learning systems are treated as explicit input-output mappings. A network-structure expression describes the major model families (MLPs, CNNs, RNNs/LSTMs, Transformers, GNNs, and PINNs) as compositions of operators that encode key inductive mechanisms: convolution models spatial locality, recurrence captures temporal dependence, self-attention enables global interaction, and message passing represents relational structure. A loss-function expression, with the architecture embedded, defines objectives that combine task-specific losses with regularization or physics-informed constraints. An effective optimization strategy then learns the parameters via backpropagation with optimizers such as SGD or Adam. These methods support important applications in science and engineering, including astrophysics, computer vision, and biomedical engineering.
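The claim that self-attention enables global interaction can be made concrete with a minimal single-head scaled dot-product attention sketch in NumPy. The shapes and weight matrices here are illustrative assumptions, not from the source:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    # Single-head scaled dot-product self-attention: each row of the
    # attention matrix weights contributions from ALL tokens, so every
    # position can interact with every other (global interaction).
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    d = Q.shape[-1]
    A = softmax(Q @ K.T / np.sqrt(d))  # (n_tokens, n_tokens) weights
    return A @ V

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 8))            # 5 tokens, 8-dim embeddings (assumed)
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
Y = self_attention(X, Wq, Wk, Wv)
print(Y.shape)                         # (5, 8)
```

Contrast this with convolution, whose receptive field is local: here a single layer already mixes information across all five token positions.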
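The optimization step can likewise be sketched with plain gradient descent on the same kind of toy model, with the gradients written out by hand (this is what backpropagation automates in deep networks; the learning rate and data are assumed for illustration):

```python
import numpy as np

def sgd_fit(x, y, lr=0.05, steps=500):
    # Full-batch gradient descent on MSE for y ≈ w*x + b.
    # grad_w and grad_b are the analytic gradients of the loss.
    w, b = 0.0, 0.0
    for _ in range(steps):
        err = w * x + b - y
        grad_w = 2.0 * np.mean(err * x)   # dL/dw
        grad_b = 2.0 * np.mean(err)       # dL/db
        w -= lr * grad_w                  # parameter update
        b -= lr * grad_b
    return w, b

rng = np.random.default_rng(1)
x = rng.uniform(-1.0, 1.0, size=100)
y = 3.0 * x + 0.5                         # noiseless targets (assumed)
w, b = sgd_fit(x, y)
print(round(w, 2), round(b, 2))           # w ≈ 3.0, b ≈ 0.5
```

Adam differs only in the update rule (per-parameter adaptive step sizes from running moment estimates); the backpropagated gradients it consumes are the same.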