
Einführung in die Computerlinguistik: Text Classification and Naive Bayes

Alexander Fraser and Robert Zangenfeind

Center for Information and Language Processing

2020-01-13


This slide set was created by Prof. Dr. Hinrich Schütze.

Errors and shortcomings are solely my responsibility.


Outline

1 Text classification

2 Naive Bayes

3 NB theory

4 Evaluation of TC


A text classification task: Email spam filtering

From: "" <[email protected]>
Subject: real estate is the only way... gem oalvgkay

Anyone can buy real estate with no money down

Stop paying rent TODAY !

There is no need to spend hundreds or even thousands for similar courses

I am 22 years old and I have already purchased 6 properties using the methods outlined in this truly INCREDIBLE ebook.

Change your life NOW !

=================================================
Click Below to order: http://www.wholesaledaily.com/sales/nmd.htm
=================================================

How would you write a program that would automatically detect and delete this type of message?


Formal definition of TC: Training

Given:

A document space X

Documents are represented in this space – typically some type of high-dimensional space.

A fixed set of classes C = {c1, c2, . . . , cJ}

The classes are human-defined for the needs of an application (e.g., spam vs. nonspam).

A training set D of labeled documents. Each labeled document ⟨d, c⟩ ∈ X × C

Using a learning method or learning algorithm, we then wish to learn a classifier γ that maps documents to classes:

γ : X → C
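To make this definition concrete, here is a minimal Python sketch of these objects; the names (Doc, Cls, the spam labels) are illustrative assumptions, not part of the slides:

from typing import Callable

Doc = str   # a document, represented here simply by its text
Cls = str   # a class label from C, e.g. "spam" or "nonspam"

# Training set D of labeled documents <d, c> from X x C:
D: list[tuple[Doc, Cls]] = [
    ("Stop paying rent TODAY", "spam"),
    ("Minutes of yesterday's meeting", "nonspam"),
]

# A classifier gamma maps documents to classes, gamma : X -> C:
Gamma = Callable[[Doc], Cls]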


We can also view sentences as documents – so “document” refers to any piece of text we want to classify.


Formal definition of TC: Application/Testing

Given: a description d ∈ X of a document

Determine: γ(d) ∈ C, that is, determine the class that is most appropriate for d


Topic classification

[Figure: example of topic classification. The classes are grouped into “regions” (UK, China), “industries” (poultry, coffee), and “subject areas” (elections, sports). The training set contains documents with characteristic terms for each class, e.g. “London”, “congestion”, “Big Ben”, “Parliament”, “the Queen”, “Windsor” for UK; “Beijing”, “Olympics”, “Great Wall”, “tourism”, “communist”, “Mao” for China; “chicken”, “feed”, “ducks”, “pate”, “turkey”, “bird flu” for poultry; “beans”, “roasting”, “robusta”, “arabica”, “harvest”, “Kenya” for coffee; “votes”, “recount”, “run-off”, “seat”, “campaign”, “TV ads” for elections; “baseball”, “diamond”, “soccer”, “forward”, “captain”, “team” for sports. The test document d′, containing “first”, “private”, “Chinese”, “airline”, is classified as γ(d′) = China.]


Applications of text classification

Language identification (classes: English vs French vs …)

The automatic detection of spam pages (spam vs nonspam)

Sentiment analysis: Is a movie or product review positive or negative? (positive vs negative)

Topic-specific or vertical search: Restrict search to a “vertical” like “related to health” (classes: relevant to vertical vs not)


Classification methods: 1. Manual

Manual classification was used by Yahoo in the beginning of the web. Also: ODP, PubMed

Very accurate if the job is done by experts

Consistent when the problem size and the team are small

Scaling manual classification is difficult and expensive.

→ We need automatic methods for classification.


Classification methods: 2. Rule-based

E.g., Google Alerts is rule-based classification.

Google Alerts allows the definition of Google queries which are tracked in both News and Web.

There are IDE-type development environments for writing very complex rules efficiently (e.g., Verity).

Often: Boolean combinations (as in Google Alerts)

Accuracy is very high if a rule has been carefully refined over time by a subject expert.

Building and maintaining rule-based classification systems is cumbersome and expensive.
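As a toy illustration of such a Boolean combination, here is a hand-written rule in Python; the rule itself and the class it targets (real-estate spam, echoing the earlier email) are invented for the example and have nothing to do with the actual Verity or Google Alerts syntax:

def real_estate_spam_rule(text: str) -> bool:
    # Fire if the text mentions real estate AND one of two typical come-ons.
    t = text.lower()
    return "real estate" in t and ("no money down" in t or "stop paying rent" in t)

print(real_estate_spam_rule("Anyone can buy real estate with no money down"))  # True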


A Verity topic (a complex classification rule)


Classification methods: 3. Statistical/Probabilistic

This was our definition of the classification problem: Text classification as a learning problem

(i) Supervised learning of the classification function γ and (ii) application of γ to classifying new documents

We will look at one method for doing this: Naive Bayes

No free lunch: requires hand-classified training data

But this manual classification can be done by non-experts.


The Naive Bayes classifier

The Naive Bayes classifier is a probabilistic classifier.

We compute the probability of a document d being in a class c as follows:

P(c|d) ∝ P(c) · ∏_{1≤k≤n_d} P(t_k|c)

n_d is the length of the document (the number of tokens), k an index to the kth token t_k.

P(t_k|c) is the conditional probability (bedingte Wahrscheinlichkeit) of term t_k occurring in a document of class c.

P(t_k|c) is a measure of how much evidence t_k contributes that c is the correct class.

P(c) is the prior probability of c.

If a document’s terms do not provide clear evidence for one class vs. another, we choose the c with highest P(c).


Maximum a posteriori class

Goal in Naive Bayes classification: Find the “best” class

The best class is the most likely or maximum a posteriori (MAP) class c_map:

c_map = argmax_{c∈C} P̂(c|d) = argmax_{c∈C} [P̂(c) · ∏_{1≤k≤n_d} P̂(t_k|c)]


Taking the log

Multiplying lots of small probabilities can result in floating point underflow.

Since log(xy) = log(x) + log(y), we can sum log probabilities instead of multiplying probabilities.

Since log is a monotonic function, the class with the highest score does not change.

So what we usually compute in practice is:

c_map = argmax_{c∈C} [log P̂(c) + ∑_{1≤k≤n_d} log P̂(t_k|c)]
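A small Python illustration of the underflow issue: multiplying many tiny probabilities collapses to 0.0, while the corresponding sum of logs stays perfectly usable (the probability 1e-10 and the document length 400 are arbitrary values for the demo):

import math

p = 1e-10        # a small per-token probability
product = 1.0
log_sum = 0.0
for _ in range(400):        # pretend the document has 400 tokens
    product *= p            # eventually underflows to exactly 0.0
    log_sum += math.log(p)  # stays finite

print(product)   # 0.0
print(log_sum)   # about -9210.34  (= 400 * log(1e-10))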


Naive Bayes classifier

Classification rule:

c_map = argmax_{c∈C} [log P̂(c) + ∑_{1≤k≤n_d} log P̂(t_k|c)]

Simple interpretation:

Each conditional parameter log P̂(t_k|c) is a weight that indicates how good an indicator t_k is for c.

The prior log P̂(c) is a weight that indicates the relative frequency of c.

The sum of log prior and term weights is then a measure of how much evidence there is for the document being in the class.

We select the class with the most evidence.


Parameter estimation take 1: Maximum likelihood

Estimate parameters P̂(c) and P̂(t_k|c) from train data: How?

Prior:

P̂(c) = N_c / N

N_c: number of docs in class c; N: total number of docs

Conditional probabilities:

P̂(t|c) = T_ct / ∑_{t′∈V} T_ct′

T_ct is the number of tokens of t in training documents from class c (includes multiple occurrences)


The problem with maximum likelihood estimates: Zeros

C = China

X1 = “Beijing”, X2 = “and”, X3 = “Taipei”, X4 = “join”, X5 = “WTO”

P(China|d) ∝ P(China) · P(“Beijing”|China) · P(“and”|China) · P(“Taipei”|China) · P(“join”|China) · P(“WTO”|China)

If “WTO” never occurs in class China in the train set:

P̂(“WTO”|China) = T_China,“WTO” / ∑_{t′∈V} T_China,t′ = 0 / ∑_{t′∈V} T_China,t′ = 0


The problem with maximum likelihood estimates: Zeros (cont.)

If there are no occurrences of “WTO” in documents in class China, we get a zero estimate:

P̂(“WTO”|China) = T_China,“WTO” / ∑_{t′∈V} T_China,t′ = 0

→ We will get P(China|d) = 0 for any document that contains “WTO”!


To avoid zeros: Add-one smoothing

Before:

P̂(t|c) = T_ct / ∑_{t′∈V} T_ct′

Now: Add one to each count to avoid zeros:

P̂(t|c) = (T_ct + 1) / ∑_{t′∈V} (T_ct′ + 1) = (T_ct + 1) / ((∑_{t′∈V} T_ct′) + B)

B is the number of bins – in this case the number of different words or the size of the vocabulary |V| = M
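A quick numerical sketch of the effect in Python, with counts chosen to match the worked example further below (term unseen in the class, 8 class tokens, vocabulary size B = 6):

T_ct = 0      # the term never occurs in class c (the "WTO"/China situation)
total = 8     # sum of all token counts in class c
B = 6         # number of bins = vocabulary size |V|

mle = T_ct / total                    # maximum likelihood estimate: 0.0
add_one = (T_ct + 1) / (total + B)    # smoothed estimate: 1/14, about 0.071
print(mle, add_one)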


Naive Bayes: Summary

Estimate parameters from the training corpus using add-one smoothing

For a new document, for each class, compute the sum of (i) the log of the prior and (ii) the logs of the conditional probabilities of the terms

Assign the document to the class with the largest score


Naive Bayes: Training

TrainMultinomialNB(C, D)
  V ← ExtractVocabulary(D)
  N ← CountDocs(D)
  for each c ∈ C
  do N_c ← CountDocsInClass(D, c)
     prior[c] ← N_c / N
     text_c ← ConcatenateTextOfAllDocsInClass(D, c)
     for each t ∈ V
     do T_ct ← CountTokensOfTerm(text_c, t)
     for each t ∈ V
     do condprob[t][c] ← (T_ct + 1) / ∑_{t′} (T_ct′ + 1)
  return V, prior, condprob
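A runnable Python sketch of this training procedure; documents are given as lists of tokens, and the dictionary-based data layout is an assumption for the example, not prescribed by the slides:

from collections import Counter

def train_multinomial_nb(classes, docs):
    # docs: list of (tokens, label) pairs; returns vocabulary, priors, smoothed conditionals.
    vocabulary = {t for tokens, _ in docs for t in tokens}
    n_docs = len(docs)
    prior = {}
    condprob = {t: {} for t in vocabulary}
    for c in classes:
        class_docs = [tokens for tokens, label in docs if label == c]
        prior[c] = len(class_docs) / n_docs                            # N_c / N
        counts = Counter(t for tokens in class_docs for t in tokens)   # T_ct
        denom = sum(counts.values()) + len(vocabulary)                 # (sum_t' T_ct') + B
        for t in vocabulary:
            condprob[t][c] = (counts[t] + 1) / denom                   # add-one smoothing
    return vocabulary, prior, condprob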


Naive Bayes: Testing

ApplyMultinomialNB(C, V, prior, condprob, d)
  W ← ExtractTokensFromDoc(V, d)
  for each c ∈ C
  do score[c] ← log prior[c]
     for each t ∈ W
     do score[c] += log condprob[t][c]
  return argmax_{c∈C} score[c]
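And the corresponding application step in Python, matching the hypothetical train_multinomial_nb sketch above; tokens outside the vocabulary are simply skipped (i.e. Option 1 for unknown words, discussed later):

import math

def apply_multinomial_nb(classes, vocabulary, prior, condprob, tokens):
    # Compute the log-space score for each class and return the best one.
    w = [t for t in tokens if t in vocabulary]   # ExtractTokensFromDoc: keep known terms only
    score = {}
    for c in classes:
        score[c] = math.log(prior[c])
        for t in w:
            score[c] += math.log(condprob[t][c])
    return max(score, key=score.get)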


Exercise: Estimate parameters, classify test set

docID | words in document | in c = China?
training set:
  1 | Chinese Beijing Chinese | yes
  2 | Chinese Chinese Shanghai | yes
  3 | Chinese Macao | yes
  4 | Tokyo Japan Chinese | no
test set:
  5 | Chinese Chinese Chinese Tokyo Japan | ?

P̂(c) = N_c / N

P̂(t|c) = (T_ct + 1) / ∑_{t′∈V} (T_ct′ + 1) = (T_ct + 1) / ((∑_{t′∈V} T_ct′) + B)

(B is the number of bins – in this case the number of different words or the size of the vocabulary |V| = M)

c_map = argmax_{c∈C} [P̂(c) · ∏_{1≤k≤n_d} P̂(t_k|c)]


Page 87: Einführung in die Computerlinguistik Text Classification and …fraser/intro_2019_WS/pdf/13baye… · Text Classification and Naive Bayes Alexander Fraser and Robert Zangenfeind

Example: Parameter estimates

Priors: P̂(c) = 3/4 and P̂(c̄) = 1/4

Conditional probabilities:
P̂(“Chinese”|c) = (5 + 1)/(8 + 6) = 6/14 = 3/7
P̂(“Tokyo”|c) = P̂(“Japan”|c) = (0 + 1)/(8 + 6) = 1/14
P̂(“Chinese”|c̄) = (1 + 1)/(3 + 6) = 2/9
P̂(“Tokyo”|c̄) = P̂(“Japan”|c̄) = (1 + 1)/(3 + 6) = 2/9

The denominators are (8 + 6) and (3 + 6) because the lengths of text_c and text_c̄ are 8 and 3, respectively, and because the constant B is 6, as the vocabulary consists of six terms.


Page 89: Einführung in die Computerlinguistik Text Classification and …fraser/intro_2019_WS/pdf/13baye… · Text Classification and Naive Bayes Alexander Fraser and Robert Zangenfeind

Example: Classification

P̂(c|d5) ∝ 3/4 · (3/7)³ · 1/14 · 1/14 ≈ 0.0003
P̂(c̄|d5) ∝ 1/4 · (2/9)³ · 2/9 · 2/9 ≈ 0.0001

Thus, the classifier assigns the test document to c = China.
The reason for this classification decision is that the three occurrences of the positive indicator “Chinese” in d5 outweigh the occurrences of the two negative indicators “Japan” and “Tokyo”.

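A quick numeric check of this worked example, reusing the two sketch functions from above (the label "not-China" is our own stand-in for the complement class c̄):

docs = [("Chinese Beijing Chinese".split(), "China"),
        ("Chinese Chinese Shanghai".split(), "China"),
        ("Chinese Macao".split(), "China"),
        ("Tokyo Japan Chinese".split(), "not-China")]
vocab, prior, condprob = train_multinomial_nb(docs)
print(prior["China"], condprob["Chinese"]["China"])   # 0.75 and 6/14 ≈ 0.43
d5 = "Chinese Chinese Chinese Tokyo Japan".split()
print(apply_multinomial_nb(prior, vocab, prior, condprob, d5))   # China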


Page 91: Einführung in die Computerlinguistik Text Classification and …fraser/intro_2019_WS/pdf/13baye… · Text Classification and Naive Bayes Alexander Fraser and Robert Zangenfeind

UNK – unknown words

UNK: An UNK is a word that occurs in the test set, but did not occur in the training set.

Option 1: Simply ignore UNKs
Option 2: Add UNK to the training vocabulary
  All counts T_c,UNK are zero (since UNK does not occur in the training set).
  All words in the test set that did not occur in the training set are replaced by “UNK”.

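The two options might look like this in Python (only a sketch; preprocess_test is our own helper, not from the slides). For Option 2, “UNK” has to be part of the vocabulary before the conditional probabilities are estimated, so that add-one smoothing gives it a small nonzero probability in every class.

def preprocess_test(tokens, vocab, option=1):
    if option == 1:
        return [t for t in tokens if t in vocab]           # Option 1: simply ignore UNKs
    return [t if t in vocab else "UNK" for t in tokens]    # Option 2: map unknown words to "UNK"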

Page 92: Einführung in die Computerlinguistik Text Classification and …fraser/intro_2019_WS/pdf/13baye… · Text Classification and Naive Bayes Alexander Fraser and Robert Zangenfeind

Outline

1 Text classification

2 Naive Bayes

3 NB theory

4 Evaluation of TC


Page 93: Einführung in die Computerlinguistik Text Classification and …fraser/intro_2019_WS/pdf/13baye… · Text Classification and Naive Bayes Alexander Fraser and Robert Zangenfeind

Naive Bayes: Analysis

Now we want to gain a better understanding of the properties of Naive Bayes.
We will formally derive the classification rule …
… and make our assumptions explicit.


Page 97: Einführung in die Computerlinguistik Text Classification and …fraser/intro_2019_WS/pdf/13baye… · Text Classification and Naive Bayes Alexander Fraser and Robert Zangenfeind

Derivation of Naive Bayes rule

We want to find the class that is most likely given the document:

c_map = argmax_{c∈C} P(c|d)

Apply Bayes rule P(A|B) = P(B|A) P(A) / P(B):

c_map = argmax_{c∈C} P(d|c) P(c) / P(d)

Drop the denominator since P(d) is the same for all classes:

c_map = argmax_{c∈C} P(d|c) P(c)


Page 99: Einführung in die Computerlinguistik Text Classification and …fraser/intro_2019_WS/pdf/13baye… · Text Classification and Naive Bayes Alexander Fraser and Robert Zangenfeind

Too many parameters / sparseness

c_map = argmax_{c∈C} P(d|c) P(c) = argmax_{c∈C} P(⟨t1, …, tk, …, t_nd⟩|c) P(c)

There are too many parameters P(⟨t1, …, tk, …, t_nd⟩|c), one for each unique combination of a class and a sequence of words.
We would need a very, very large number of training examples to estimate that many parameters.
This is the problem of data sparseness.


Page 103: Einführung in die Computerlinguistik Text Classification and …fraser/intro_2019_WS/pdf/13baye… · Text Classification and Naive Bayes Alexander Fraser and Robert Zangenfeind

Bag of words model

To reduce the number of parameters to a manageable size, we make the Naive Bayes conditional independence (bedingte Unabhängigkeit) assumption:

P(d|c) = P(⟨t1, …, t_nd⟩|c) = ∏_{1≤k≤n_d} P(X_k = t_k|c)

We assume that the probability of observing the conjunction of attributes is equal to the product of the individual probabilities P(X_k = t_k|c).

Recall from earlier the estimates for these conditional probabilities:
P̂(t|c) = (T_ct + 1) / ((Σ_{t′∈V} T_ct′) + B)

This can be referred to as a bag of words model.

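A tiny illustration of what the bag of words assumption means in practice: only the term counts of a document matter, word order is ignored (a sketch, not from the slides).

from collections import Counter

d1 = "Beijing and Taipei join WTO".split()
d2 = "Taipei and Beijing join WTO".split()
print(Counter(d1) == Counter(d2))   # True: under the bag of words model both documents look identical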

Page 104: Einführung in die Computerlinguistik Text Classification and …fraser/intro_2019_WS/pdf/13baye… · Text Classification and Naive Bayes Alexander Fraser and Robert Zangenfeind

Generative model

[Figure: class node C=China generating the positions X1=“Beijing”, X2=“and”, X3=“Taipei”, X4=“join”, X5=“WTO”]

P(c|d) ∝ P(c) · ∏_{1≤k≤n_d} P(t_k|c)

Generate a class with probability P(c)
Generate each of the words (in their respective positions), conditional on the class, but independent of each other, with probability P(t_k|c)
To classify docs, we “reengineer” this process and find the class that is most likely to have generated the doc.

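The generative story can also be run forwards. A sketch, assuming prior and condprob come from the training sketch above; sample_document is our own name.

import random

def sample_document(prior, condprob, vocab, length):
    classes = list(prior)
    c = random.choices(classes, weights=[prior[x] for x in classes])[0]   # generate a class
    terms = list(vocab)
    words = random.choices(terms, weights=[condprob[t][c] for t in terms], k=length)  # generate words independently, given the class
    return c, words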


Page 108: Einführung in die Computerlinguistik Text Classification and …fraser/intro_2019_WS/pdf/13baye… · Text Classification and Naive Bayes Alexander Fraser and Robert Zangenfeind

Naive Bayes is not so naive

Naive Bayes has won some bakeoffs (e.g., KDD-CUP 97)
More robust to nonrelevant features than some more complex learning methods
More robust to concept drift (changing of definition of class over time) than some more complex learning methods
Better than methods like decision trees when we have many equally important features
A good dependable baseline for text classification (but not the best)
Optimal if independence assumptions hold (never true for text, but true for some domains)
Very fast
Low storage requirements


Page 117: Einführung in die Computerlinguistik Text Classification and …fraser/intro_2019_WS/pdf/13baye… · Text Classification and Naive Bayes Alexander Fraser and Robert Zangenfeind

Outline

1 Text classification

2 Naive Bayes

3 NB theory

4 Evaluation of TC


Page 118: Einführung in die Computerlinguistik Text Classification and …fraser/intro_2019_WS/pdf/13baye… · Text Classification and Naive Bayes Alexander Fraser and Robert Zangenfeind

Evaluation on Reuters

[Figure: the Reuters classification task. Three sets of classes – “regions” (e.g., UK, China), “industries” (e.g., poultry, coffee), “subject areas” (e.g., elections, sports) – each class shown with typical words from its training documents (e.g., China: “Beijing”, “Olympics”, “Great Wall”, “communist”; poultry: “chicken”, “feed”, “bird flu”; elections: “votes”, “recount”, “campaign”). A test document d′ containing “first”, “private”, “Chinese”, “airline” is assigned γ(d′) = China.]


Page 119: Einführung in die Computerlinguistik Text Classification and …fraser/intro_2019_WS/pdf/13baye… · Text Classification and Naive Bayes Alexander Fraser and Robert Zangenfeind

Example: The Reuters collection

symbol  statistic                         value
N       documents                         800,000
L       avg. # word tokens per document   200
M       word types                        400,000

type of class   number   examples
region          366      UK, China
industry        870      poultry, coffee
subject area    126      elections, sports


Page 121: Einführung in die Computerlinguistik Text Classification and …fraser/intro_2019_WS/pdf/13baye… · Text Classification and Naive Bayes Alexander Fraser and Robert Zangenfeind

A Reuters document


Page 123: Einführung in die Computerlinguistik Text Classification and …fraser/intro_2019_WS/pdf/13baye… · Text Classification and Naive Bayes Alexander Fraser and Robert Zangenfeind

Evaluating classification

Evaluation must be done on test data that are independent of the training data, i.e., training and test sets are disjoint.
It’s easy to get good performance on a test set that was available to the learner during training (e.g., just memorize the test set).
Measures: Precision, recall, F1, classification accuracy


Page 127: Einführung in die Computerlinguistik Text Classification and …fraser/intro_2019_WS/pdf/13baye… · Text Classification and Naive Bayes Alexander Fraser and Robert Zangenfeind

Precision P and recall R

                                  in the class           not in the class
predicted to be in the class      true positives (TP)    false positives (FP)
predicted to not be in the class  false negatives (FN)   true negatives (TN)

TP, FP, FN, TN are counts of documents. The sum of these four counts is the total number of documents.

precision: P = TP / (TP + FP)
recall:    R = TP / (TP + FN)

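As plain functions (a sketch; the names are ours, not from the slides):

def precision(tp, fp):
    return tp / (tp + fp)

def recall(tp, fn):
    return tp / (tp + fn)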

Page 128: Einführung in die Computerlinguistik Text Classification and …fraser/intro_2019_WS/pdf/13baye… · Text Classification and Naive Bayes Alexander Fraser and Robert Zangenfeind

Precision/recall tradeoff

You can easily increase recall by returning more results.
Recall is a non-decreasing function of the number of results returned.
A system that returns everything has 100% recall!
The converse is also true (usually): It’s easy to get high precision for very low recall.
In most application scenarios, we need both good precision and good recall.
So we need to find a good precision-recall tradeoff.


Page 136: Einführung in die Computerlinguistik Text Classification and …fraser/intro_2019_WS/pdf/13baye… · Text Classification and Naive Bayes Alexander Fraser and Robert Zangenfeind

A combined measure: F1

F1 allows us to trade off precision against recall.

F1 = 1 / (½ · 1/P + ½ · 1/R) = 2PR / (P + R)

This is the harmonic mean of P and R: 1/F = ½ (1/P + 1/R)

The harmonic mean is a kind of “soft” minimum.

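A short check that the harmonic mean behaves like a “soft” minimum (f1 is our own helper):

def f1(p, r):
    return 2 * p * r / (p + r)

print(f1(0.9, 0.9))   # 0.90 – balanced precision and recall pass through
print(f1(0.9, 0.1))   # 0.18 – pulled towards the smaller value; the arithmetic mean would be 0.5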


Page 140: Einführung in die Computerlinguistik Text Classification and …fraser/intro_2019_WS/pdf/13baye… · Text Classification and Naive Bayes Alexander Fraser and Robert Zangenfeind

Accuracy

accuracy = (TP + TN) / (TP + TN + FP + FN)


Page 141: Einführung in die Computerlinguistik Text Classification and …fraser/intro_2019_WS/pdf/13baye… · Text Classification and Naive Bayes Alexander Fraser and Robert Zangenfeind

F1 scores for Naive Bayes vs. other methods

(a)                        NB   Rocchio   kNN   SVM
micro-avg-L (90 classes)   80   85        86    89
macro-avg (90 classes)     47   59        60    60

(b)                        NB   Rocchio   kNN   trees   SVM
earn                       96   93        97    98      98
acq                        88   65        92    90      94
money-fx                   57   47        78    66      75
grain                      79   68        82    85      95
crude                      80   70        86    85      89
trade                      64   65        77    73      76
interest                   65   63        74    67      78
ship                       85   49        79    74      86
wheat                      70   69        77    93      92
corn                       65   48        78    92      90
micro-avg (top 10)         82   65        82    88      92
micro-avg-D (118 classes)  75   62        n/a   n/a     87

Naive Bayes does pretty well, but some methods beat it consistently (e.g., SVM).


Page 143: Einführung in die Computerlinguistik Text Classification and …fraser/intro_2019_WS/pdf/13baye… · Text Classification and Naive Bayes Alexander Fraser and Robert Zangenfeind

Confusion matrix for Reuters-21578

assigned class:   money-fx   trade   interest   wheat   corn   grain
true class:
money-fx          95         0       10         0       0      0
trade             1          1       90         0       1      0
interest          13         0       0          0       0      0
wheat             0          0       1          34      3      7
corn              1          0       2          13      26     5
grain             0          0       2          14      5      10

Example: 14 documents from grain were incorrectly assigned to wheat.

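From such a matrix (rows = true class, columns = assigned class) per-class precision and recall can be read off directly. A sketch restricted to the six classes shown; per_class_pr is our own helper.

confusion = {   # rows: true class, columns: assigned class
    "money-fx": {"money-fx": 95, "trade": 0, "interest": 10, "wheat": 0,  "corn": 0,  "grain": 0},
    "trade":    {"money-fx": 1,  "trade": 1, "interest": 90, "wheat": 0,  "corn": 1,  "grain": 0},
    "interest": {"money-fx": 13, "trade": 0, "interest": 0,  "wheat": 0,  "corn": 0,  "grain": 0},
    "wheat":    {"money-fx": 0,  "trade": 0, "interest": 1,  "wheat": 34, "corn": 3,  "grain": 7},
    "corn":     {"money-fx": 1,  "trade": 0, "interest": 2,  "wheat": 13, "corn": 26, "grain": 5},
    "grain":    {"money-fx": 0,  "trade": 0, "interest": 2,  "wheat": 14, "corn": 5,  "grain": 10},
}

def per_class_pr(matrix, c):
    tp = matrix[c][c]
    fp = sum(matrix[r][c] for r in matrix if r != c)         # other classes wrongly assigned to c
    fn = sum(matrix[c][a] for a in matrix[c] if a != c)      # c wrongly assigned to other classes
    return tp / (tp + fp), tp / (tp + fn)

print(per_class_pr(confusion, "wheat"))   # precision ≈ 0.56, recall ≈ 0.76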

Page 144: Einführung in die Computerlinguistik Text Classification and …fraser/intro_2019_WS/pdf/13baye… · Text Classification and Naive Bayes Alexander Fraser and Robert Zangenfeind

Exercise

Compute precision, recall and F1:

                               in class   not in class
predicted to be in class       TP: 18     FP: 2
predicted not to be in class   FN: 82     TN: 1,000,000,000

precision: P = TP / (TP + FP)
recall:    R = TP / (TP + FN)

F1 = 2PR / (P + R)

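A quick way to check the numbers (a sketch; the huge TN count also shows why classification accuracy would be a misleading measure for this exercise):

tp, fp, fn, tn = 18, 2, 82, 1_000_000_000
p = tp / (tp + fp)                        # precision = 0.9
r = tp / (tp + fn)                        # recall    = 0.18
print(p, r, 2 * p * r / (p + r))          # F1 = 0.3
print((tp + tn) / (tp + tn + fp + fn))    # accuracy ≈ 0.9999999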

Page 145: Einführung in die Computerlinguistik Text Classification and …fraser/intro_2019_WS/pdf/13baye… · Text Classification and Naive Bayes Alexander Fraser and Robert Zangenfeind

Particularly relevant for the exam (besonders klausurrelevant)

What is text classification? (or: What is sentence classification?)
Naive Bayes classification rule
Estimation of Naive Bayes priors and conditionals
Theory: Bag of words model
Maximum likelihood
Add-one = Laplace
Precision, recall, F1
Precision-recall tradeoff
Confusion matrix
