The new free textbook on Probabilistic Artificial Intelligence by Andreas Krause and Jonas Hübotter looks really great, both in terms of content[1] and presentation, with nicely formatted derivations (often with comments for each step) and clean pseudo-code for algorithms. It also offers a wide variety of side notes, some of which include additional diagrams. These are quite helpful for building intuition and improving understanding of the material.
The book reminds me, in all the good ways, of the classic Elements of Statistical Learning[2]. I recommend it to anyone interested in a more formal introduction to ‘AI algorithms’ that deal with uncertainty.
The one area where I would have loved more detail is causal inference[3]. Maybe this is something to include in a second edition ^^
–
[1] I’ve leafed through the table of contents and checked out the chapters on Markov decision processes and reinforcement learning, which brought back fond memories :-)
[2] By Trevor Hastie, Robert Tibshirani, and Jerome Friedman; freely available online, and with a more hands-on ‘sibling textbook’, An Introduction to Statistical Learning, also freely available here (even including code samples in R and Python).
[3] Here is a relatively recent survey (PDF) of the field; the approach is interesting for biomedical research, e.g. in genetics (see the discussion here). For more information, Judea Pearl, who (as far as I know) developed the approach and coined the term, has written a lot about this topic.