3. Word2Vec¶
3.1. Using Pretrained Word2Vec Vectors¶
Gensim provides functions for downloading pretrained Word2Vec vectors. Let’s use another visualisation tool (sklearn.manifold.TSNE) to see how word embeddings can help us identify clusters of words with similar meaning.
from sklearn.manifold import TSNE
import numpy as np
import matplotlib.pyplot as plt
import gensim.downloader as api
wv = api.load('word2vec-google-news-300')
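Once loaded, wv is a gensim KeyedVectors object mapping each word to a 300-dimensional vector. A quick sanity check (the similarity value below is approximate):
print(wv['king'].shape)                # (300,)
print(wv.similarity('king', 'queen'))  # cosine similarity, roughly 0.65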
3.1.1. Analogy Test: A is to B as C is to D¶
king - man + woman = ?
Google has released an analogy test set that you can use to evaluate the performance of word embeddings.
Your Turn
Try some other sets of words and see how well the pretrained embeddings perform. Consider writing an evaluation function using the analogy test set: if the first word returned for an analogy matches the answer in the test set, it counts as a HIT. Since only the most similar word is considered, this evaluation is called HIT@1 (a sketch follows the output below).
wv.most_similar(positive=["king", "woman"], negative=["man"])
[('queen', 0.7118193507194519),
('monarch', 0.6189674735069275),
('princess', 0.5902431011199951),
('crown_prince', 0.5499460697174072),
('prince', 0.5377322435379028),
('kings', 0.5236844420433044),
('Queen_Consort', 0.5235945582389832),
('queens', 0.5181134939193726),
('sultan', 0.5098593235015869),
('monarchy', 0.5087411403656006)]
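Below is a minimal HIT@1 evaluation sketch for a file in the Google questions-words format (four words per line, a b c d, meaning a is to b as c is to d, with ":"-prefixed section headers); the file path is an assumption. Gensim’s KeyedVectors also ships a ready-made evaluate_word_analogies method that computes this kind of accuracy for you.
def hit_at_1(model, analogy_path):
    # a HIT is when the top-1 prediction for "b - a + c" equals the expected answer d
    hits, total = 0, 0
    with open(analogy_path) as f:
        for line in f:
            parts = line.split()
            if len(parts) != 4 or line.startswith(':'):
                continue  # skip blanks and section headers
            a, b, c, d = parts
            if any(w not in model.key_to_index for w in (a, b, c, d)):
                continue  # skip analogies with out-of-vocabulary words
            pred = model.most_similar(positive=[b, c], negative=[a], topn=1)[0][0]
            hits += (pred == d)
            total += 1
    return hits / total if total else 0.0
# hypothetical local path to the Google analogy test set
# print(hit_at_1(wv, 'questions-words.txt'))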
3.1.2. Plot using TSNE¶
def display_closestwords_tsnescatterplot(model, word):
    arr = np.empty((0, 300), dtype='f')
    word_labels = [word]
    # get the closest words to the query word
    close_words = model.similar_by_word(word)
    # add the vector for the query word and each of its closest words
    arr = np.append(arr, np.array([model[word]]), axis=0)
    for wrd_score in close_words:
        wrd_vector = model[wrd_score[0]]
        word_labels.append(wrd_score[0])
        arr = np.append(arr, np.array([wrd_vector]), axis=0)
    # find tsne coords for 2 dimensions
    # (perplexity must be smaller than the number of points, 11 here)
    tsne = TSNE(n_components=2, random_state=0, perplexity=5)
    np.set_printoptions(suppress=True)
    Y = tsne.fit_transform(arr)
    x_coords = Y[:, 0]
    y_coords = Y[:, 1]
    # display scatter plot, with a small margin so edge labels are not clipped
    plt.scatter(x_coords, y_coords)
    for label, x, y in zip(word_labels, x_coords, y_coords):
        plt.annotate(label, xy=(x, y), xytext=(0, 0), textcoords='offset points')
    plt.xlim(x_coords.min() - 10, x_coords.max() + 10)
    plt.ylim(y_coords.min() - 10, y_coords.max() + 10)
    plt.title(f'Words closest to: {word}')
    plt.show()
display_closestwords_tsnescatterplot(wv, "math")
3.1.3. Interactive Visualisation using bokeh¶
from bokeh.plotting import figure, show
from bokeh.io import push_notebook, output_notebook
from bokeh.models import ColumnDataSource, LabelSet
import pandas as pd
def interactive_tsne(text_labels, tsne_array):
    '''makes an interactive scatter plot with text labels for each point'''
    # define a dataframe to be used by the bokeh context
    bokeh_df = pd.DataFrame(tsne_array, text_labels, columns=['x', 'y'])
    bokeh_df['text_labels'] = bokeh_df.index
    # interactive controls to include in the plot
    TOOLS = "hover, zoom_in, zoom_out, box_zoom, undo, redo, reset, box_select"
    p = figure(tools=TOOLS, plot_width=700, plot_height=700)
    # define the data source for the plot
    source = ColumnDataSource(bokeh_df)
    # scatter plot
    p.scatter('x', 'y', source=source, fill_alpha=0.6,
              fill_color="#8724B5",
              line_color=None)
    # text labels
    labels = LabelSet(x='x', y='y', text='text_labels', y_offset=8,
                      text_font_size="8pt", text_color="#555555",
                      source=source, text_align='center')
    p.add_layout(labels)
    # show plot inline
    output_notebook()
    show(p)
vocab = ['math', 'computing', 'physics']
input_vocab = [word for word in vocab if word in wv.key_to_index]
X = wv[input_vocab]
# find tsne coords for 2 dimensions
# (again, perplexity must be smaller than the number of points)
tsne = TSNE(n_components=2, random_state=0, perplexity=2)
X_tsne = tsne.fit_transform(X)
print(input_vocab)
interactive_tsne(input_vocab, X_tsne)
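With only three words the projection is not very informative. Here is a sketch (reusing wv and interactive_tsne from above) that also pulls in each seed word’s nearest neighbours before projecting:
# expand each seed word with its top-5 neighbours, then project everything
seed_words = ['math', 'computing', 'physics']
expanded = []
for w in seed_words:
    if w in wv.key_to_index:
        expanded.append(w)
        expanded.extend(neighbour for neighbour, _ in wv.most_similar(w, topn=5))
X = wv[expanded]
# keep perplexity below the number of points
X_tsne = TSNE(n_components=2, random_state=0,
              perplexity=min(5, len(expanded) - 1)).fit_transform(X)
interactive_tsne(expanded, X_tsne)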
3.2. Training your own Word2Vec Embeddings¶
3.2.1. Data Crawling from the Web¶
from bs4 import BeautifulSoup
import urllib.request
f = urllib.request.urlopen("https://en.wikipedia.org/wiki/Natural_language_processing")
document = BeautifulSoup(f.read(), 'html.parser').get_text()
print(document)
Natural language processing - Wikipedia
Natural language processing
From Wikipedia, the free encyclopedia
Jump to navigation
Jump to search
This article is about natural language processing done by computers. For the natural language processing done by the human brain, see Language processing in the brain.
Field of computer science and linguistics
An automated online assistant providing customer service on a web page, an example of an application where natural language processing is a major component.[1]
Natural language processing (NLP) is a subfield of linguistics, computer science, and artificial intelligence concerned with the interactions between computers and human language, in particular how to program computers to process and analyze large amounts of natural language data. The goal is a computer capable of "understanding" the contents of documents, including the contextual nuances of the language within them. The technology can then accurately extract information and insights contained in the documents as well as categorize and organize the documents themselves.
Challenges in natural language processing frequently involve speech recognition, natural language understanding, and natural language generation.
... [output truncated: the rest of the scraped page runs through the full article body, reference list, navigation menus and footer] ...
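Notice how much navigation and footer text get_text() picks up. For training embeddings you usually want only the article body; one possible cleanup, using the same BeautifulSoup API, is to keep just the paragraph tags:
# keep only <p> paragraphs, dropping menus, reference lists and footers
html = urllib.request.urlopen("https://en.wikipedia.org/wiki/Natural_language_processing").read()
soup = BeautifulSoup(html, 'html.parser')
document = '\n'.join(p.get_text() for p in soup.find_all('p'))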
3.2.2. Pre-processing¶
import gensim
import re
from gensim.corpora import Dictionary
doc_tokenized = gensim.utils.simple_preprocess(str(document), deacc=True)
doc_tokenized[:10]
['natural',
'language',
'processing',
'wikipedia',
'natural',
'language',
'processing',
'from',
'wikipedia',
'the']
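simple_preprocess flattens the whole page into a single token list. For training Word2Vec later you typically want one token list per sentence; a rough sketch using a simple regex split (the sentence splitter here is an assumption, any proper sentence tokenizer would do):
# one token list per sentence: the shape gensim's Word2Vec training expects
sentences = [gensim.utils.simple_preprocess(sent, deacc=True)
             for sent in re.split(r'(?<=[.!?])\s+', document)
             if sent.strip()]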
dictionary = Dictionary()
BoW_corpus = dictionary.doc2bow(doc_tokenized, allow_update=True)
BoW_corpus = [(dictionary[id], freq) for id, freq in BoW_corpus]
BoW_corpus[:10]
[('aaron', 1),
('abandoned', 1),
('abbreviations', 1),
('abelson', 1),
('able', 2),
('about', 5),
('above', 3),
('abstract', 1),
('academy', 1),
('accents', 1)]
def convert(pairs, mapping):
    # copy the (word, frequency) pairs into the given dict
    for word, freq in pairs:
        mapping[word] = freq
    return mapping
BoW_corpus_dict = dict()
convert(BoW_corpus, BoW_corpus_dict)  # equivalently: BoW_corpus_dict = dict(BoW_corpus)
{'aaron': 1,
'abandoned': 1,
'abbreviations': 1,
'abelson': 1,
'able': 2,
'about': 5,
'above': 3,
'abstract': 1,
'academy': 1,
'accents': 1,
 ...}
'four': 1,
'fr': 1,
'frac': 1,
'fragments': 1,
'frame': 1,
'framenet': 1,
'frames': 2,
'framework': 1,
'frameworks': 2,
'france': 1,
'free': 4,
'french': 1,
'frequently': 2,
'from': 25,
'front': 3,
'fulfill': 1,
'full': 2,
'fully': 2,
'function': 1,
'functional': 1,
'fundamental': 1,
'funding': 1,
'further': 5,
'furthermore': 1,
'future': 3,
'general': 7,
'generally': 5,
'generate': 1,
'generated': 3,
'generation': 6,
'generative': 1,
'generic': 1,
'george': 2,
'georgetown': 1,
'german': 2,
'given': 24,
'glove': 1,
'goal': 3,
'goals': 1,
'goldberg': 1,
'gone': 1,
'goodfellow': 1,
'google': 2,
'gov': 1,
'government': 1,
'governmental': 1,
'gpt': 1,
'gracefully': 1,
'gradual': 1,
'gram': 1,
'grammar': 17,
'grammars': 2,
'grammatical': 6,
'graph': 1,
'great': 2,
'greatly': 1,
'greek': 1,
'grounded': 2,
'grounds': 1,
'grouped': 1,
'growth': 1,
'guessing': 1,
'guida': 1,
'gyansetu': 1,
'had': 3,
'half': 1,
'hall': 1,
'hand': 6,
'handbook': 1,
'handling': 1,
'handwritten': 4,
'hard': 1,
'hardly': 1,
'haruechaiyasak': 1,
'has': 15,
'have': 15,
'he': 1,
'head': 2,
'health': 1,
'healthcare': 1,
'heavily': 1,
'heidelberg': 1,
'help': 1,
'helplearn': 1,
'helsinki': 1,
'hererelated': 1,
'heritage': 1,
'heuristic': 1,
'hey': 1,
'hidden': 2,
'higher': 4,
'highly': 1,
'hillsdale': 1,
'hinrich': 2,
'historical': 1,
'history': 5,
'hours': 1,
'house': 2,
'how': 4,
'however': 13,
'hpsg': 1,
'hrvatskibahasa': 1,
'html': 1,
'https': 2,
'human': 11,
'humans': 1,
'humor': 1,
'hundreds': 1,
'hungarian': 1,
'hurts': 2,
'hutchins': 1,
'ian': 1,
'ibm': 1,
'idea': 2,
'ideas': 2,
'identical': 1,
'identification': 4,
'identified': 1,
'identifiers': 1,
'identify': 7,
'identifying': 3,
'ieee': 1,
'if': 3,
'ignore': 1,
'ijcai': 2,
'image': 1,
'images': 1,
'impactful': 1,
'impaired': 1,
'implemented': 1,
'implementing': 1,
'implicit': 3,
'imply': 2,
'importance': 2,
'important': 2,
'improve': 2,
'in': 105,
'inaccessible': 1,
'inaccurate': 1,
'inc': 1,
'include': 3,
'included': 2,
'includes': 3,
'including': 2,
'increase': 3,
'increases': 1,
'increasing': 3,
'increasingly': 6,
'index': 1,
'indexing': 1,
'indian': 1,
'individual': 8,
'induction': 3,
'inference': 3,
'inferior': 1,
'inflected': 1,
'inflectional': 2,
'information': 17,
'informationcite': 1,
'inherent': 1,
'innovation': 1,
'input': 13,
'inquiry': 1,
'insights': 1,
'instance': 1,
'instead': 2,
'insufficient': 2,
'integrated': 1,
'intelligence': 6,
'intelligent': 1,
'intended': 2,
'intent': 4,
'intents': 1,
'interaction': 2,
'interactions': 1,
'interactive': 1,
'interest': 3,
'interface': 2,
'intermediate': 2,
'international': 2,
'internet': 1,
'interpretation': 1,
'intertwined': 1,
'into': 16,
'intro': 1,
'introduced': 1,
'introduction': 3,
'investigating': 1,
'investigation': 2,
'involve': 2,
'involved': 1,
'involves': 3,
'involving': 1,
'ion': 2,
'is': 82,
'isbn': 13,
'issn': 3,
'issues': 1,
'iste': 2,
'it': 11,
'italian': 1,
'item': 1,
'items': 1,
'its': 8,
'jabberwacky': 1,
'jair': 1,
'james': 1,
'japan': 1,
'japanese': 2,
'joao': 1,
'john': 3,
'johnson': 1,
'joseph': 1,
'joshi': 1,
'journal': 3,
'jozefowicz': 1,
'july': 1,
'jump': 2,
'jurafsky': 1,
'karpathy': 1,
'kimmo': 1,
'kishorjit': 1,
'klein': 2,
'knowledge': 9,
'known': 3,
'kongthon': 1,
'kongyoung': 1,
'kook': 1,
'koskenniemi': 1,
'kurdi': 2,
'labeling': 1,
'labelling': 6,
'lack': 1,
'lakoff': 3,
'lam': 1,
'language': 105,
'languages': 14,
'languageuser': 1,
'large': 4,
'largely': 3,
'larger': 4,
'last': 1,
'late': 3,
'latent': 3,
'latest': 1,
'law': 1,
'laws': 1,
'lawyers': 1,
'lccn': 1,
'le': 1,
'learn': 4,
'learning': 37,
'least': 1,
'led': 1,
'left': 1,
'lehnert': 2,
'lemma': 1,
'lemmatisation': 1,
'lemmatization': 3,
'lesk': 1,
'less': 2,
'lessening': 1,
'letter': 1,
'letters': 1,
'level': 6,
'levels': 1,
'lexical': 5,
'lexico': 1,
'libraries': 1,
'license': 1,
'life': 1,
'like': 6,
'likewise': 1,
'limit': 1,
'limitation': 1,
'limited': 2,
'limits': 2,
'lines': 1,
'linguistic': 4,
'linguistics': 27,
'link': 2,
'linked': 1,
'linkpage': 1,
'links': 2,
'lippi': 1,
'list': 2,
'lithium': 2,
'little': 1,
'local': 1,
'location': 3,
'log': 1,
'logged': 1,
'logic': 1,
'logical': 1,
'long': 3,
'lookup': 1,
'loper': 1,
'low': 2,
'luisa': 1,
'machine': 34,
'machinery': 1,
'macquarie': 1,
'made': 3,
'main': 3,
'mainstream': 1,
'maint': 2,
'maintained': 1,
'mairal': 1,
'major': 5,
'make': 5,
'makes': 1,
'making': 1,
'man': 1,
'management': 3,
'manipulate': 1,
'manipuri': 1,
'manning': 4,
'many': 13,
'map': 2,
'marco': 1,
'margie': 1,
'mark': 3,
'marked': 1,
'marketed': 2,
'marketing': 1,
'marking': 2,
'markov': 1,
'marks': 1,
'married': 1,
'martin': 1,
'matches': 1,
'matching': 2,
'mathematical': 1,
'mathematics': 1,
'matrix': 1,
'mauri': 1,
'may': 5,
'meaning': 8,
'means': 1,
'measure': 3,
'measured': 1,
'measures': 1,
'medes': 1,
'media': 3,
'medicine': 1,
'meehan': 1,
'mehri': 1,
'meitei': 1,
'member': 3,
'mental': 1,
'mentions': 1,
'menu': 1,
'met': 1,
'metamodel': 1,
'metaphor': 1,
'metaphorically': 1,
'methodology': 1,
'methods': 15,
'mid': 2,
'might': 2,
'mike': 1,
'million': 1,
'millions': 1,
'mind': 3,
'mining': 8,
'misspelled': 1,
'mit': 2,
'mobile': 1,
'model': 6,
'modeling': 4,
'models': 17,
'mohamed': 2,
'moore': 1,
'more': 23,
'morpheme': 1,
'morphemes': 2,
'morphological': 3,
'morphology': 10,
'most': 10,
'much': 4,
'multi': 1,
'multilingual': 1,
'multilinguality': 1,
'multimodal': 2,
'multimodality': 1,
'multiple': 4,
'mumbai': 1,
'must': 1,
'my': 1,
'name': 1,
'named': 7,
'names': 5,
'namespaces': 1,
'national': 2,
'native': 1,
'natural': 66,
...}
3.2.3. Visualisation of Pre-trained Embeddings¶
We only plot the non-stopwords from our corpus that are also present in the pretrained Word2Vec vocabulary.
import nltk
from nltk.corpus import stopwords

nltk.download('stopwords', quiet=True)
stop_words = set(stopwords.words('english'))  # avoid shadowing the imported module

# sort the corpus vocabulary by frequency, descending
vocab_sorted = dict(sorted(BoW_corpus_dict.items(), key=lambda item: item[1], reverse=True))

# keep only non-stopwords that exist in the pretrained Word2Vec vocabulary
input_vocab = [word for word in vocab_sorted
               if word in wv.key_to_index and word not in stop_words]
points = len(input_vocab)
X = wv[input_vocab]

# the earlier TSNE instance was local to the plotting function, so create a new one here
tsne = TSNE(n_components=2, random_state=0)
X_tsne = tsne.fit_transform(X)

interactive_tsne(input_vocab[:points], X_tsne)
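With a larger vocabulary, t-SNE gets slow and the plot cluttered. Since input_vocab is sorted by corpus frequency, one option (a sketch; the top_n cap below is an arbitrary choice, not part of the original notebook) is to plot only the most frequent words:
top_n = 200  # illustrative cap on how many words to embed and plot
X_top = wv[input_vocab[:top_n]]
X_tsne_top = TSNE(n_components=2, random_state=0).fit_transform(X_top)
interactive_tsne(input_vocab[:top_n], X_tsne_top)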
3.2.4. Training your own Word2Vec¶
import multiprocessing
from gensim.models import Word2Vec

cores = multiprocessing.cpu_count()
model = Word2Vec(min_count=1,        # keep every word; our corpus is tiny
                 window=2,           # context window on each side of the target word
                 vector_size=100,    # dimensionality of the embeddings
                 sample=6e-5,        # threshold for downsampling very frequent words
                 alpha=0.03,         # initial learning rate
                 min_alpha=0.0007,   # learning rate floor reached at the end of training
                 negative=20,        # number of negative samples per positive example
                 workers=cores-1)    # leave one core free
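By default gensim trains the CBOW variant; passing sg=1 switches to skip-gram, which tends to do better with small corpora and rare words. A minimal variation (not used in the rest of this notebook):
# Skip-gram variant (sg=1); all other hyperparameters as above
model_sg = Word2Vec(sg=1, min_count=1, window=2, vector_size=100,
                    sample=6e-5, alpha=0.03, min_alpha=0.0007,
                    negative=20, workers=cores-1)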
from time import time

t = time()
# doc_tokenized is a single tokenized document; Word2Vec expects an
# iterable of token lists, so we wrap it in a list.
model.build_vocab([doc_tokenized], progress_per=10)
print('Time to build vocab: {} mins'.format(round((time() - t) / 60, 2)))
Time to build vocab: 0.0 mins
t = time()
# train on the same one-document corpus that the vocabulary was built from
model.train([doc_tokenized], total_examples=model.corpus_count, epochs=1000, report_delay=1)
print('Time to train the model: {} mins'.format(round((time() - t) / 60, 2)))
Time to train the model: 0.16 mins
model.wv.key_to_index.keys()
dict_keys(['the', 'of', 'and', 'to', 'language', 'in', 'is', 'natural', 'for', 'as', 'processing', 'text', 'nlp', 'on', 'that', 'learning', 'or', 'machine', 'with', 'such', 'speech', 'words', 'are', 'linguistics', 'be', 'this', 'systems', 'from', 'statistical', 'semantic', ...])
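To reuse the trained model later without repeating the 1000-epoch training, it can be saved to and reloaded from disk (the filename below is illustrative):
# persist the trained model (path is illustrative)
model.save('word2vec_nlp.model')

# ...and reload it in a later session
loaded_model = Word2Vec.load('word2vec_nlp.model')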
3.2.5. Comparing the purposely trained and the pre-trained vectors¶
We can see that, due to the lack of training data, the semantic information captured by our own embeddings is not as meaningful as that captured by the pre-trained ones.
# Our own domain-specific embeddings
model.wv.most_similar(positive=["language"])
[('generated', 0.3286932706832886),
('cache', 0.29854971170425415),
('charge', 0.27286359667778015),
('segmentation', 0.26987093687057495),
('assisted', 0.26623496413230896),
('drop', 0.2609272599220276),
('on', 0.25290459394454956),
('successfully', 0.2467595636844635),
('any', 0.24194024503231049),
('easier', 0.2374984174966812)]
# Pretrained embeddings
wv.most_similar(positive=["language"])
[('langauge', 0.747669517993927),
('Language', 0.6695358157157898),
('languages', 0.6341331601142883),
('English', 0.6120712757110596),
('CMPB_Spanish', 0.6083105802536011),
('nonnative_speakers', 0.6063110828399658),
('idiomatic_expressions', 0.5889802575111389),
('verb_tenses', 0.5841568112373352),
('Kumeyaay_Diegueno', 0.5798824429512024),
('dialect', 0.5724599957466125)]
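To make the contrast concrete, compare the cosine similarity of the same word pair under both models (a minimal sketch; the score from our own model will vary between runs because training is stochastic):
# same word pair, two models
print(model.wv.similarity('language', 'linguistics'))  # our small domain-trained model
print(wv.similarity('language', 'linguistics'))        # pretrained Google News vectors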