Presentation

Teaching Organization

DM270
INFORMATICA ORD. 2014

Mathematical models and numerical methods for big data

CFU: 6

Common courses

 

Hours: Lectures 48, Exercises 0, Laboratory 0, Individual Study 94

Period

Year: 1st
Period: 1st semester

Attendance

Optional

Delivery Mode

Conventional

Language

Italian

Teaching Activities Calendar

Start: 30/09/2019
End: 18/01/2020

Type

Type: related/supplementary (affine/integrativo)
Area: none
SSD: MAT/08
CFU: 6


Course Coordinator

Coordinator: TO BE ASSIGNED
SSD / Structure: N.D.

Other Instructors

None

Teaching Support Activities

None

Bulletin

Background on Matrix Theory. Types of matrices: Diagonal, Symmetric, Normal, Positive Definite. Matrix canonical forms: Diagonal, Schur. Matrix spectrum: Kernel, Range, Eigenvalues, Eigenvectors and Eigenspaces. Matrix factorizations: LU, Cholesky, QR, SVD.
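The factorizations listed above can all be computed with standard numerical libraries. A minimal sketch follows; the course labs use Matlab, so Python/SciPy serves here only as an illustrative stand-in, and the matrix is an arbitrary small example, not course material. Each factorization is verified by reassembling the original matrix:

```python
import numpy as np
from scipy.linalg import lu, cholesky, qr, svd

# Build a small symmetric positive definite matrix: B B^T + I is SPD
rng = np.random.default_rng(0)
B = rng.standard_normal((4, 4))
A = B @ B.T + np.eye(4)

# LU factorization with partial pivoting: A = P L U
P, L, U = lu(A)
assert np.allclose(P @ L @ U, A)

# Cholesky factorization (requires A SPD): A = R^T R, R upper triangular
R = cholesky(A)
assert np.allclose(R.T @ R, A)

# QR factorization: A = Q R_q, with Q orthogonal
Q, Rq = qr(A)
assert np.allclose(Q @ Rq, A)

# Singular value decomposition: A = U_s diag(s) V^T
Us, s, Vt = svd(A)
assert np.allclose(Us @ np.diag(s) @ Vt, A)
```

Note that Cholesky exists only because A is positive definite, while LU, QR, and SVD apply to any square (or, for QR and SVD, rectangular) matrix.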

Learning the mathematical and computational foundations of state-of-the-art numerical algorithms that arise in the analysis of big data and in many machine learning applications. Using modern Matlab toolboxes for large and sparse data, the students will be guided through the implementation of the methods on real-life problems arising in network analysis and machine learning.

Lectures supported by exercises and laboratory sessions

Numerical methods for large linear systems
◦ Jacobi and Gauss-Seidel methods
◦ Subspace projection (Krylov) methods
◦ Arnoldi method for linear systems (FOM)
◦ (Optional) Sketches of GMRES
◦ Preconditioning: sparse and incomplete matrix factorizations

Numerical methods for large eigenvalue problems
◦ The power method
◦ Subspace iterations
◦ Krylov-type methods: Arnoldi (and sketches of Lanczos and non-Hermitian Lanczos)
◦ (Optional) Sketches of their block implementations
◦ Singular values vs. eigenvalues
◦ Best rank-k approximation

Large-scale numerical optimization
◦ Steepest descent and Newton's method
◦ Quasi-Newton methods: BFGS
◦ Stochastic steepest descent
◦ Sketches of inexact Newton methods
◦ Sketches of limited-memory quasi-Newton methods

Network centrality
◦ Perron-Frobenius theorem
◦ Centrality based on eigenvectors (HITS and PageRank)
◦ Centrality based on matrix functions

Data and network clustering
◦ K-means algorithm
◦ Principal component analysis and dimensionality reduction
◦ Laplacian matrices, Cheeger constant, nodal domains
◦ Spectral embedding
◦ (Optional) Lovász extension, exact relaxations, nonlinear power method (sketches)

Supervised learning
◦ Linear regression
◦ Logistic regression
◦ Multiclass classification
◦ (Optional) Neural networks (sketches)
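To give a flavor of how several syllabus topics connect (the power method, the Perron-Frobenius theorem, and eigenvector centrality), here is a hedged Python/NumPy sketch. The course labs use Matlab, and the toy graph, damping factor, and function names below are illustrative assumptions, not course material. It computes a PageRank-style centrality vector as the dominant eigenvector of the "Google" matrix:

```python
import numpy as np

def power_method(A, tol=1e-10, max_iter=1000):
    """Approximate the dominant eigenvector of A by repeated
    multiplication and 1-norm normalization."""
    n = A.shape[0]
    x = np.ones(n) / n
    for _ in range(max_iter):
        y = A @ x
        y /= np.linalg.norm(y, 1)
        if np.linalg.norm(y - x, 1) < tol:
            return y
        x = y
    return x

# Toy directed graph on 4 pages: Adj[i, j] = 1 if page i links to page j
Adj = np.array([[0, 1, 1, 0],
                [0, 0, 1, 0],
                [1, 0, 0, 1],
                [0, 0, 1, 0]], dtype=float)

# Column-stochastic transition matrix: column j spreads page j's out-links
P = Adj.T / Adj.sum(axis=1)

# PageRank ("Google") matrix with damping factor alpha
alpha, n = 0.85, 4
G = alpha * P + (1 - alpha) / n * np.ones((n, n))

# G is positive, so by Perron-Frobenius it has a unique positive
# dominant eigenvector (eigenvalue 1): the PageRank vector
rank = power_method(G)
```

Since `G` is column-stochastic and entrywise positive, the iteration converges to a positive vector summing to 1 that satisfies G x = x, which is exactly the stationary ranking vector.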

Written exam
