Natural Language Processing: A Machine Learning Perspective

Cover
Cambridge University Press, 7 January 2021 - 484 pages
With a machine learning approach and less focus on linguistic details, this gentle introduction to natural language processing develops fundamental mathematical and deep learning models for NLP under a unified framework. NLP problems are systematically organised by their machine learning nature, including classification, sequence labelling, and sequence-to-sequence problems. Topics covered include statistical machine learning and deep learning models, text classification and structured prediction models, generative and discriminative models, supervised and unsupervised learning with latent variables, neural networks, and transition-based methods. Rich connections are drawn between concepts throughout the book, equipping students with the tools needed to establish a deep understanding of NLP solutions, adapt existing models, and confidently develop innovative models of their own. Featuring a host of examples, intuition, and end-of-chapter exercises, plus sample code available as an online resource, this textbook is an invaluable tool for upper-undergraduate and graduate students.
 

Contents

Introduction  3
Counting Relative Frequencies  25
Feature Vectors  50
Discriminative Linear Classifiers  73
A Perspective from Information Theory  98
Hidden Variables  120
Generative Sequence Labelling  147
Discriminative Sequence Labelling  169
Sequence Segmentation  190
Predicting Tree Structures  212
Transition-Based Methods for Structured Prediction  235
Bayesian Network  259
Neural Network  289
Summary  310
Neural Structured Prediction  343
Working with Two Texts  370
Pretraining and Transfer Learning  396
Deep Latent Variable Models  423
Bibliography  453


About the authors (2021)

Yue Zhang is an associate professor at Westlake University. Before joining Westlake, he worked as a research associate at the University of Cambridge and then as a faculty member at the Singapore University of Technology and Design. His research interests lie in fundamental algorithms for NLP, syntax, semantics, information extraction, text generation, and machine translation. He serves as an action editor for TACL and as an area chair for ACL, EMNLP, COLING, and NAACL. He has given several tutorials at ACL, EMNLP, and NAACL, and won a best paper award at COLING in 2018.

Zhiyang Teng is currently a postdoctoral research fellow in the natural language processing group at Westlake University, China. He obtained his Ph.D. from the Singapore University of Technology and Design (SUTD) in 2018 and his Master's from the University of Chinese Academy of Sciences in 2014. He won the best paper award at CCL/NLP-NABD 2014 and has published papers at venues including ACL, TACL, EMNLP, COLING, NAACL, and TKDE. His research interests include syntactic parsing, sentiment analysis, deep learning, and variational inference.
