$$\phi_{35000|y=1} = \frac{\sum_{i=1}^{m} 1\{x_{35000}^{(i)} = 1 \wedge y^{(i)} = 1\}}{\sum_{i=1}^{m} 1\{y^{(i)} = 1\}} = 0, \qquad
\phi_{35000|y=0} = \frac{\sum_{i=1}^{m} 1\{x_{35000}^{(i)} = 1 \wedge y^{(i)} = 0\}}{\sum_{i=1}^{m} 1\{y^{(i)} = 0\}} = 0$$

I.e., because it has never seen “nips” before in either spam or non-spam training examples, it thinks the probability of seeing it in either type of email is zero. Hence, when trying to decide if one of these messages containing “nips” is spam, it calculates the class posterior probabilities, and obtains

$$p(y=1|x) = \frac{\prod_{i=1}^{n} p(x_i|y=1)\, p(y=1)}{\prod_{i=1}^{n} p(x_i|y=1)\, p(y=1) + \prod_{i=1}^{n} p(x_i|y=0)\, p(y=0)} = \frac{0}{0}.$$

This is because each of the terms $\prod_{i=1}^{n} p(x_i|y)$ includes a term $p(x_{35000}|y) = 0$ that is multiplied into it. Hence, our algorithm obtains $0/0$, and doesn't know how to make a prediction.
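To make the failure concrete, here is a minimal sketch of the calculation (assuming NumPy; the word probabilities and prior below are made up for illustration, not taken from any real training set):

```python
import numpy as np

# Hypothetical multi-variate Bernoulli parameters for a 4-word vocabulary.
# The last word plays the role of "nips": it never appeared in training,
# so its maximum likelihood estimate p(x_4 = 1 | y) is exactly 0 for both classes.
phi_spam = np.array([0.40, 0.05, 0.30, 0.0])   # p(x_i = 1 | y = 1)
phi_ham  = np.array([0.10, 0.20, 0.25, 0.0])   # p(x_i = 1 | y = 0)
p_spam = 0.3                                    # class prior p(y = 1)

x = np.array([1, 0, 1, 1])    # a new email that does contain the unseen word

def joint(phi, prior):
    """p(y) * prod_i p(x_i | y) under the multi-variate Bernoulli model."""
    per_word = np.where(x == 1, phi, 1.0 - phi)
    return prior * per_word.prod()

numerator   = joint(phi_spam, p_spam)
denominator = joint(phi_spam, p_spam) + joint(phi_ham, 1.0 - p_spam)
print(numerator, denominator)   # both are 0.0, so the posterior p(y=1|x) is 0/0
```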

Stating the problem more broadly, it is statistically a bad idea to estimate the probability of some event to be zero just because you haven't seen it before in your finite training set. Take the problem of estimating the mean of a multinomial random variable $z$ taking values in $\{1, \ldots, k\}$. We can parameterize our multinomial with $\phi_i = p(z = i)$. Given a set of $m$ independent observations $\{z^{(1)}, \ldots, z^{(m)}\}$, the maximum likelihood estimates are given by

$$\phi_j = \frac{\sum_{i=1}^{m} 1\{z^{(i)} = j\}}{m}.$$

As we saw previously, if we were to use these maximum likelihood estimates, then some of the $\phi_j$'s might end up as zero, which was a problem. To avoid this, we can use Laplace smoothing, which replaces the above estimate with

$$\phi_j = \frac{\sum_{i=1}^{m} 1\{z^{(i)} = j\} + 1}{m + k}.$$

Here, we've added 1 to the numerator, and $k$ to the denominator. Note that $\sum_{j=1}^{k} \phi_j = 1$ still holds (check this yourself!), which is a desirable property since the $\phi_j$'s are estimates for probabilities that we know must sum to 1. Also, $\phi_j \neq 0$ for all values of $j$, solving our problem of probabilities being estimated as zero. Under certain (arguably quite strong) conditions, it can be shown that Laplace smoothing actually gives the optimal estimator of the $\phi_j$'s.
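As a concrete illustration, here is a small sketch (assuming NumPy; the observations are made up) comparing the maximum likelihood estimate with its Laplace-smoothed counterpart:

```python
import numpy as np

def phi_mle(z, k):
    """Maximum likelihood: phi_j = sum_i 1{z^(i) = j} / m."""
    z = np.asarray(z)
    return np.array([(z == j).sum() for j in range(1, k + 1)]) / len(z)

def phi_laplace(z, k):
    """Laplace smoothing: phi_j = (sum_i 1{z^(i) = j} + 1) / (m + k)."""
    z = np.asarray(z)
    counts = np.array([(z == j).sum() for j in range(1, k + 1)])
    return (counts + 1) / (len(z) + k)

z = [1, 1, 2, 3, 3, 3]          # the outcome 4 is never observed
print(phi_mle(z, k=4))          # [0.333 0.167 0.5   0.   ]  -- a zero estimate
print(phi_laplace(z, k=4))      # [0.3   0.2   0.4   0.1  ]  -- no zeros, still sums to 1
```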

Returning to our Naive Bayes classifier, with Laplace smoothing, we therefore obtain the following estimates of the parameters:

$$\phi_{j|y=1} = \frac{\sum_{i=1}^{m} 1\{x_j^{(i)} = 1 \wedge y^{(i)} = 1\} + 1}{\sum_{i=1}^{m} 1\{y^{(i)} = 1\} + 2}, \qquad
\phi_{j|y=0} = \frac{\sum_{i=1}^{m} 1\{x_j^{(i)} = 1 \wedge y^{(i)} = 0\} + 1}{\sum_{i=1}^{m} 1\{y^{(i)} = 0\} + 2}$$

(In practice, it usually doesn't matter much whether we apply Laplace smoothing to $\phi_y$ or not, since we will typically have a fair fraction each of spam and non-spam messages, so $\phi_y$ will be a reasonable estimate of $p(y = 1)$ and will be quite far from 0 anyway.)
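Putting the smoothed estimates together, a minimal training sketch might look like the following (assuming NumPy, with X an m-by-n binary matrix of word indicators and y a 0/1 label vector; the function name is mine, not from the notes):

```python
import numpy as np

def fit_bernoulli_nb(X, y):
    """Laplace-smoothed parameter estimates for the multi-variate Bernoulli model.

    X : (m, n) binary matrix, X[i, j] = 1 iff word j appears in email i
    y : (m,) array of 0/1 labels, 1 = spam
    """
    phi_y = y.mean()                                          # estimate of p(y = 1)
    # Add 1 to each numerator and 2 to each denominator, per the formulas above.
    phi_j_y1 = (X[y == 1].sum(axis=0) + 1) / ((y == 1).sum() + 2)
    phi_j_y0 = (X[y == 0].sum(axis=0) + 1) / ((y == 0).sum() + 2)
    return phi_y, phi_j_y1, phi_j_y0
```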

Event models for text classification

To close off our discussion of generative learning algorithms, let's talk about one more model that is specifically for text classification. While Naive Bayes as we've presented it will work well for many classification problems, for text classification there is a related model that does even better.

In the specific context of text classification, Naive Bayes as presented uses what's called the multi-variate Bernoulli event model. In this model, we assumed that the way an email is generated is that first it is randomly determined (according to the class priors $p(y)$) whether a spammer or non-spammer will send you your next message. Then, the person sending the email runs through the dictionary, deciding whether to include each word $i$ in that email independently and according to the probabilities $p(x_i = 1|y) = \phi_{i|y}$. Thus, the probability of a message was given by $p(y) \prod_{i=1}^{n} p(x_i|y)$.

Here's a different model, called the multinomial event model. To describe this model, we will use a different notation and set of features for representing emails. We let $x_i$ denote the identity of the $i$-th word in the email. Thus, $x_i$ is now an integer taking values in $\{1, \ldots, |V|\}$, where $|V|$ is the size of our vocabulary (dictionary). An email of $n$ words is now represented by a vector $(x_1, x_2, \ldots, x_n)$ of length $n$; note that $n$ can vary for different documents. For instance, if an email starts with “A NIPS ...,” then $x_1 = 1$ (“a” is the first word in the dictionary), and $x_2 = 35000$ (if “nips” is the 35000th word in the dictionary).

In the multinomial event model, we assume that the way an email is generated is via a random process in which spam/non-spam is first determined (according to $p(y)$) as before. Then, the sender of the email writes the email by first generating $x_1$ from some multinomial distribution over words ($p(x_1|y)$). Next, the second word $x_2$ is chosen independently of $x_1$ but from the same multinomial distribution, and similarly for $x_3$, $x_4$, and so on, until all $n$ words of the email have been generated. Thus, the overall probability of a message is given by $p(y) \prod_{i=1}^{n} p(x_i|y)$. Note that this formula looks like the one we had earlier for the probability of a message under the multi-variate Bernoulli event model, but the terms in the formula now mean very different things. In particular, $x_i|y$ is now a multinomial, rather than a Bernoulli distribution.
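To spell out how the terms differ, here is a small sketch (function and variable names are mine) that scores one email under the multinomial event model, working in log space to avoid underflow on long documents:

```python
import numpy as np

def log_joint(doc, log_prior, log_phi):
    """log[ p(y) * prod_i p(x_i | y) ] for one class under the multinomial event model.

    doc       : list of word indices x_1, ..., x_n, each in {1, ..., |V|}
    log_prior : log p(y) for this class
    log_phi   : (|V|,) array with log_phi[k-1] = log p(x_j = k | y)
    """
    return log_prior + sum(log_phi[k - 1] for k in doc)
```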

The parameters for our new model are $\phi_y = p(y)$ as before, $\phi_{k|y=1} = p(x_j = k|y=1)$ (for any $j$) and $\phi_{k|y=0} = p(x_j = k|y=0)$. Note that we have assumed that $p(x_j|y)$ is the same for all values of $j$ (i.e., that the distribution according to which a word is generated does not depend on its position $j$ within the email).

If we are given a training set $\{(x^{(i)}, y^{(i)}); i = 1, \ldots, m\}$ where $x^{(i)} = (x_1^{(i)}, x_2^{(i)}, \ldots, x_{n_i}^{(i)})$ (here, $n_i$ is the number of words in the $i$-th training example), the likelihood of the data is given by

$$\mathcal{L}(\phi_y, \phi_{k|y=0}, \phi_{k|y=1}) = \prod_{i=1}^{m} p(x^{(i)}, y^{(i)}) = \prod_{i=1}^{m} \left( \prod_{j=1}^{n_i} p(x_j^{(i)}|y; \phi_{k|y=0}, \phi_{k|y=1}) \right) p(y^{(i)}; \phi_y).$$

Maximizing this yields the maximum likelihood estimates of the parameters:

$$\phi_{k|y=1} = \frac{\sum_{i=1}^{m} \sum_{j=1}^{n_i} 1\{x_j^{(i)} = k \wedge y^{(i)} = 1\}}{\sum_{i=1}^{m} 1\{y^{(i)} = 1\}\, n_i}, \qquad
\phi_{k|y=0} = \frac{\sum_{i=1}^{m} \sum_{j=1}^{n_i} 1\{x_j^{(i)} = k \wedge y^{(i)} = 0\}}{\sum_{i=1}^{m} 1\{y^{(i)} = 0\}\, n_i}, \qquad
\phi_y = \frac{\sum_{i=1}^{m} 1\{y^{(i)} = 1\}}{m}.$$

If we were to apply Laplace smoothing (which is needed in practice for good performance) when estimating $\phi_{k|y=0}$ and $\phi_{k|y=1}$, we add 1 to the numerators and $|V|$ to the denominators, and obtain:

$$\phi_{k|y=1} = \frac{\sum_{i=1}^{m} \sum_{j=1}^{n_i} 1\{x_j^{(i)} = k \wedge y^{(i)} = 1\} + 1}{\sum_{i=1}^{m} 1\{y^{(i)} = 1\}\, n_i + |V|}, \qquad
\phi_{k|y=0} = \frac{\sum_{i=1}^{m} \sum_{j=1}^{n_i} 1\{x_j^{(i)} = k \wedge y^{(i)} = 0\} + 1}{\sum_{i=1}^{m} 1\{y^{(i)} = 0\}\, n_i + |V|}.$$
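A minimal training sketch for these smoothed estimates might look like the following (assuming each document is given as a list of word indices in {1, ..., |V|}; the function name is mine, not from the notes):

```python
import numpy as np

def fit_multinomial_nb(docs, y, V):
    """Laplace-smoothed parameter estimates for the multinomial event model.

    docs : list of m documents, each a list of word indices in {1, ..., V}
    y    : length-m array of 0/1 labels, 1 = spam
    V    : vocabulary size |V|
    """
    y = np.asarray(y)
    phi_y = y.mean()
    phi = {}
    for label in (0, 1):
        counts = np.zeros(V)          # numerator counts for each word k
        total_words = 0               # sum of n_i over documents with this label
        for doc, lab in zip(docs, y):
            if lab == label:
                for k in doc:
                    counts[k - 1] += 1
                total_words += len(doc)
        # add 1 to each numerator and |V| to the denominator
        phi[label] = (counts + 1) / (total_words + V)
    return phi_y, phi[1], phi[0]
```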

While not necessarily the very best classification algorithm, the Naive Bayes classifier often works surprisingly well. It is often also a very good “first thing to try,” given its simplicity and ease of implementation.
