This project proposes the use of Natural Language Processing (NLP) techniques to identify misleading "fake" news reports that originate from unreliable sources. From each article, only a count vector or a TF-IDF (Term Frequency-Inverse Document Frequency) matrix is generated, in which word counts are weighted by how often the words appear in the other articles in the dataset, and the classifier learns from these features alone. Such models, however, ignore aspects such as word order and meaning; two documents with completely different word counts can refer to the same thing. As a result, the data science community has devised various measures to address this problem. Facebook, for example, is participating in a challenge on Kaggle to remove fabricated news stories from feeds on its social network using AI. Framed this way, combating fake news is a straightforward classification task: can "real" news be separated from "fake" news? Thus, the proposed study takes fake and real news datasets as input and uses a Naive Bayes classifier to construct a model that classifies articles by the words they contain. Owing to the increasing number of online information sources, it is difficult to know what is correct and what is not, and the issue of "fake news" has therefore gained further publicity. This research reviews historical and contemporary approaches for determining truth and falsity in text, as well as how and why fake news arises. The paper combines a Naive Bayes classifier, support vector machines, and semantic analysis to identify fake news, resulting in a system of three parts.

Keywords: Machine learning, Twitter, fake profiles, online social networks, detection, friends, followers
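
The abstract does not specify an implementation, but the pipeline it describes (TF-IDF bag-of-words features fed to a Naive Bayes classifier) can be illustrated with a minimal sketch. The use of scikit-learn, and the toy articles and "real"/"fake" labels below, are assumptions for illustration only, not the study's actual tooling or data.

```python
# Minimal sketch (assumed scikit-learn implementation, toy placeholder data):
# vectorize each article with TF-IDF and classify it with Naive Bayes.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

articles = [
    "Scientists confirm the new vaccine passed its phase three trial",
    "Shocking secret cure that doctors do not want you to know about",
    "City council approves the annual budget after a public hearing",
    "Celebrity reveals miracle diet that melts fat overnight",
]
labels = ["real", "fake", "real", "fake"]  # placeholder labels, not study data

# TF-IDF weights each word count by how rare the word is across the corpus,
# so distinctive terms contribute more than common filler words.
model = make_pipeline(TfidfVectorizer(stop_words="english"), MultinomialNB())
model.fit(articles, labels)

print(model.predict(["Miracle pill melts fat overnight, doctors stunned"]))
# With only four training samples the output is purely illustrative.
```

As the abstract notes, such a bag-of-words model ignores word order and meaning, which is why the proposed system supplements it with support vector machines and semantic analysis.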