Terminology spectrum analysis of natural-language chemical documents: Term-like phrases retrieval routine
Full article title | Terminology spectrum analysis of natural-language chemical documents: Term-like phrases retrieval routine
---|---
Journal | Journal of Cheminformatics
Author(s) | Alperin, Boris L.; Kuzmin, Andrey O.; Ilina, Ludmila Y.; Gusev, Vladimir D.; Salomatina, Natalia V.; Parmon, Valentin N.
Author affiliation(s) | Boreskov Institute of Catalysis, Sobolev Institute of Mathematics, Novosibirsk State University
Primary contact | Email: kuzmin [at] catalysis.ru
Year published | 2016
Volume and issue | 8
Page(s) | 22
DOI | 10.1186/s13321-016-0136-4
ISSN | 1758-2946
Distribution license | Creative Commons Attribution 4.0 International
Website | http://jcheminf.springeropen.com/articles/10.1186/s13321-016-0136-4
Download | http://jcheminf.springeropen.com/track/pdf/10.1186/s13321-016-0136-4 (PDF)
Abstract
Background: This study seeks to develop, test and assess a methodology for the automatic extraction of a complete set of "term-like phrases" and for creating a terminology spectrum from a collection of natural language PDF documents in the field of chemistry. Term-like phrases are defined as one or more consecutive words and/or alphanumeric string combinations, with unchanged spelling, which convey specific scientific meanings. A terminology spectrum for a natural language document is an indexed list of tagged entities, including recognized general scientific concepts, terms linked to existing thesauri, names of chemical substances/reactions and term-like phrases. The retrieval routine is based on n-gram textual analysis with the sequential execution of various "accept and reject" rules, while taking into account morphological and structural information.
Results: The assessment of the retrieval process, expressed quantitatively with precision (P), recall (R) and F1-measure values calculated by professional chemical scientists from a limited set of documents (the full set of text abstracts belonging to five EuropaCat events was processed), has proved the effectiveness of the developed approach. The term-like phrase parsing efficiency is quantified with precision (P = 0.53), recall (R = 0.71) and F1-measure (F1 = 0.61) values.
Conclusion: The paper suggests using such terminology spectra to perform various types of textual analysis across document collections. This sort of terminology spectrum may be successfully employed for text information retrieval, for reference database development, to analyze research trends in subject fields, and to look for similarity between documents.
Keywords: terminology spectrum, natural language text analysis, n-gram analysis, term-like phrases retrieval, text information retrieval
Background
The current situation in chemistry, as in any other field of natural science, is characterized by a substantial growth of texts in natural languages (research papers, conference proceedings, patents, etc.), which remain the most important sources of scientific knowledge and experimental data, of information about modern research trends, and of the terminology used in the subject areas of science. This greatly increases the value of such powerful information systems as Scopus®, SciFinder®, and Reaxys®, which are capable of handling large text document databases, especially those fitted with advanced text information retrieval capabilities. In fact, both the efficiency and productivity of modern scientific research in chemistry depend critically on the quality and completeness of its information support, which is oriented first of all toward advanced and flexible reference searching, discovery and analysis of text information to afford the most relevant answers to user questions (substances, reactions, relevant patents or journal articles). The main ideas and developments in information retrieval methods coupled with techniques of full text analysis are now well described and examined.[1]
In conventional information systems, the majority of text information retrieval and discovery methods are based on specific sets of pre-defined document metadata, e.g. keywords or indexes of terms characterizing the text content. User queries are converted, using an index, into information requests expressed as combinations of Boolean terms, bringing into play the vector space model and term weights. Probabilistic approaches may also be employed to take into account such features as term distribution, co-occurrence information and the relationships derived from information retrieval thesauri (IRT), and to include them in the analytic process. Such indexes have primarily had to be produced and updated manually by trained experts, but the possibility of automated index development is now attracting closer attention.
It is assumed that the structural foundation of any scientific text is its terminology, which may, in principle, be represented by an advanced IRT. However, limitations inherent in conventional IRTs lead to difficulties in applying them in practical text analysis procedures. Typically, such thesauri are made manually in a very labor-intensive process and are often constructed to reflect only general terminology. Terms from thesauri originally represent a formally written description of scientific concepts and definitions, which may not exactly match the real usage and spelling found in scientific texts. Moreover, a thesaurus developed for one type of text may be less efficient, or not applicable at all, when used with another. A good example is the IUPAC Gold Book compendium of chemical nomenclature, terminology, units and definition recommendations.[2] The terminology drafted by IUPAC experts spans a wide range of chemistry but does not describe any field in detail, representing only a well-established upper level of scientific terminology. In summary, IRT-based text analysis alone is unable to cope with the variability of scientific texts written in natural languages, because the accuracy of matching thesaurus terms with real text phrases leaves much to be desired.
It should also be noted that the language of science evolves faster than general natural language, especially in chemistry and molecular biology. Thus, the analysis of the terminology of a subject text collection should be done automatically, using both primitive extraction and sophisticated knowledge-based parsing. Only automated data analysis can process and reveal the variety of term-like word combinations in the constantly changing world of scientific publications. Automated parsing and analysis of document collections or isolated documents for term-like phrases can also help to discover the various contexts in which the same scientific terminology is used in different publications, or even in different parts of the same publication.
There is nothing new in the idea of automated term retrieval. Typically, the terminology analysis of text content is focused on the recognition of chemical entities and on automatic keyphrase extraction aimed at providing a limited set of keywords which might characterize and classify the document as a whole. Two main strategies are usually applied: machine self-learning, and the usage of various dictionaries with automated selection rules (heuristics) coupled with calculated features[3], such as TF-IDF.[4][5] Accordingly, keyphrase retrieval procedures typically involve the following stages: initial text pre-processing; selecting a candidate keyphrase; applying rules to each candidate; and compiling a list of keyphrases.[6] A few existing systems have been analyzed in terms of the precision (P), recall (R) and F1-score attainable on existing keyphrase extraction datasets. For such well-known systems as Wingnus, Sztergak, and KP-Miner, these values are reported as P = 0.34–0.40, R = 0.11–0.14, and F1 = 0.17–0.20.[6] Open-Source Chemistry Analysis Routines (OSCAR4)[7] and the ChemicalTagger NLP tool[8] may also be mentioned as tools for the recognition of named chemical entities and for parsing and tagging the language of text publications in chemistry.
However, there are some inherent shortcomings in the above-mentioned keyphrase extraction approaches, due to a significant number of cases where a limited set of automatically selected top-ranked keyphrases does not properly describe the document in detail (e.g., a paper may contain the description of a specific catalyst preparation procedure that is not the main subject of the paper). It may also be seen from the aforementioned values of P, R and F1 that in many cases the extracted keyphrases do not match the keyphrases selected by experts to an adequate degree. Exact matching of keyphrases is a rather rare event, partially due to the difficulty of taking into account nearly similar phrases, for instance, semantically similar ones. On the other hand, even though the widely used n-gram analysis can build a full spectrum of the token sequences present in a text, it may also produce a great level of noise, making the results difficult to use. Some attempts have been made to take into account the semantic similarity of n-grams and to differentiate between rubbish and candidates for plausible keyphrases.[9][10]
The problem of automatic recognition of scientific terms in natural language texts has been explored over recent decades.[11] That research has shown that taking linguistic information into account may improve term extraction efficiency. Information about the grammatical structure of multi-word scientific terms, their text variants, and the context of their usage may be represented as a set of lexico-syntactic patterns. For instance, P, R and F-measure values of 73.1, 53.6 and 61.8 percent, respectively, were obtained for term extraction from scientific texts (in Russian only) on computer science and physics.[12]
A "terminology spectrum" of a natural language publication may be defined as an indexed list of tagged token sequences with calculated weights, such as recognized general scientific notions, terms linked to existing thesauri, names of chemical entities and "term-like phrases." Term-like phrases are not exactly keyphrases or terms in the usual sense (like those published in thesauri). They are defined here as one or more consecutive tokens (represented by words and/or alphanumeric string combinations) which convey a specific scientific meaning, with spelling and context unchanged from the real text document. For instance, a term-like phrase may look similar to a specific, generally used term but with different spelling or word order, reflecting the usage of the term in a different context in a natural language environment. Consequently, term-like phrases may describe real text content and the essence of the real processes that the scientific research handles, which makes their analysis extremely useful. That sort of terminology spectrum of a natural language publication may be considered a kind of knowledge representation of a text and may be successfully employed in various information retrieval strategies, text analysis and reference systems.[13]
The present work aims to develop and test a methodology for the automated retrieval of a full terminology spectrum from any natural language chemical text collection in PDF format, with term-like phrase selection being the central part of the procedure. The retrieval routine is based on n-gram text analysis with the sequential execution of a complex grouping of "accept" and "reject" rules, while taking into account morphological and structural information. The term "n-gram" denotes here a text string or a sequence of n consecutive words or tokens present in a text. The numerical assessment of the efficiency of the automated term-like phrase retrieval process is performed by comparing automatically extracted term-like phrases with those manually selected by experts.
Methods
Text collection used for experiments
Chemical catalysis is a foundation of the chemical industry and represents a very complex field of scientific and technological research, spanning chemistry, various subject fields of physics, chemical engineering, materials science and much more. One of the most representative research conferences in catalysis is the European Congress on Catalysis, or EuropaCat, which has been chosen as a source of scientific texts covering a wide range of research themes. A set of abstracts from the EuropaCat conferences of 2013, 2011, 2009, 2007, and 2005 (about 6000 documents from all five Congress events) has been used for textual analysis in the present study. All abstracts are in PDF format.
General description of terminology spectrum retrieval process
The developed system of terminology spectrum analysis consists of the following sequentially running procedures or steps, as depicted in Fig. 1.
The server side of the terminology spectrum analysis system runs on the Java SE 6 platform, and the client is a PHP web application used to view texts and the results of terminology analysis. To store all data collected in the terminology retrieval process, the cross-platform document-oriented database MongoDB is used.[14] The choice in favor of MongoDB was motivated by the need to process nested n-gram structures up to level seven.
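As an illustration of such storage, the following minimal sketch inserts one n-gram record with the modern MongoDB Java sync driver. The database name, collection name and field layout are our assumptions for illustration; the paper does not publish its actual document schema.

```java
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import org.bson.Document;

import java.util.Arrays;

public class NgramStore {
    public static void main(String[] args) {
        // Connect to a local MongoDB instance (the address is an assumption).
        try (MongoClient client = MongoClients.create("mongodb://localhost:27017")) {
            MongoCollection<Document> ngrams =
                    client.getDatabase("termspectrum").getCollection("ngrams");
            // Hypothetical record for one 3-gram; field names are illustrative only.
            Document record = new Document("n", 3)
                    .append("tokens", Arrays.asList("radial", "concentration", "gradient"))
                    .append("pos", Arrays.asList("JJ", "NN", "NN"))
                    .append("absFreq", 7)                        // occurrences over the whole set
                    .append("textFreq", Arrays.asList(2, 1, 4)); // per-text occurrence vector
            ngrams.insertOne(record);
        }
    }
}
```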
The main stages and analytic methods involved in the process are discussed in the following sections.
Text materials conversion with PdfTextStream library
Scientific texts are mainly published in PDF format, which typically does not contain any information about document structure and is therefore not suitable for immediate text analysis. Thus, at first a document has to be preprocessed by converting the PDF file into text format and analyzing its structure (highlighting titles, authors, headings, references, etc.), with the aim of making the text suitable for further content information retrieval (see Fig. 2). The following steps are used with the PdfTextStream library[15] (stages 1–2 in Fig. 1) to make such a PDF transformation (for a detailed example, see Additional file 1):
1. Isolate text blocks which have the same formatting (e.g. bold, underline, etc.).
2. Remove empty blocks and merge blocks located on the same text row.
3. Analyze the document structure by classifying each block as containing information about the publication title, the headings, authors, organizations, e-mails, references and content. To perform such analysis a set of special taggers has been developed which are executed sequentially to analyze and tag each text block. Taggers utilize such features as the position of the first and last rows of text block, its text formatting, the position of a block of text on a page, etc. All developed taggers have been adjusted to handle each conference event individually.
4. Filter text blocks to remove unclassified ones, for instance those situated before the publication title, because such blocks typically contain useless and already known information about a conference or journal.
5. Unify special symbols (such as variants of the dash, hyphen, and quote characters), remove space characters placed before brackets in the notation of crystal indexes, etc. Regular expressions are used, as sketched below.
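A minimal sketch of step 5, assuming typical Unicode dash and quote variants; the authors' actual rule set and regular expressions are not published, so the patterns below are illustrative stand-ins.

```java
import java.util.regex.Pattern;

public class SymbolUnifier {
    // Map typographic dash variants (en dash, em dash, Unicode minus, etc.) to a plain hyphen.
    private static final Pattern DASHES = Pattern.compile("[\u2010\u2011\u2012\u2013\u2014\u2212]");
    private static final Pattern SINGLE_QUOTES = Pattern.compile("[\u2018\u2019]");
    private static final Pattern DOUBLE_QUOTES = Pattern.compile("[\u201C\u201D]");
    // Drop a stray space before an opening bracket, as in crystal index notation:
    // "TiO2 (101)" -> "TiO2(101)".
    private static final Pattern SPACE_BEFORE_BRACKET = Pattern.compile("(?<=\\w) \\(");

    public static String unify(String text) {
        text = DASHES.matcher(text).replaceAll("-");
        text = SINGLE_QUOTES.matcher(text).replaceAll("'");
        text = DOUBLE_QUOTES.matcher(text).replaceAll("\"");
        return SPACE_BEFORE_BRACKET.matcher(text).replaceAll("(");
    }

    public static void main(String[] args) {
        System.out.println(unify("TiO2 (101) \u2013 \u201Canatase\u201D")); // TiO2(101) - "anatase"
    }
}
```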
Text pre-processing
The text pre-processing stage (step three in Fig. 1) is used to transform a text document obtained from stages one and two into a unified structured format with markup. During this stage the text is split into individual words and sentences (tokenization), followed by a morphological analysis that includes highlighting objects such as formulas and chemical entities, removing unnecessary words and meaningless combinations of symbols, and recognizing general English words and tokens with special meaning (units, stable isotopes, acronyms, etc.). The result of this stage is a fully marked, structured text to be stored in the database. The following steps are involved in the text pre-processing stage.
Tokenization
A tokenizer from the OSCAR4 library is used for splitting a text into words, phrases and other meaningful elements. The tokenizer has been adapted for better handling of chemical texts.
The present study established that the original OSCAR4 tokenizer, in view of our needs, had some shortcomings. The first issue was the separation of tokens at a hyphen ("-"), which often led to mistakes in recognizing compound terms. To overcome this issue, the parts of the source code responsible for splitting tokens at hyphens were commented out (see Additional file 2). The next problem was that some complex tokens representing various chemical compositions were treated by the tokenizer as sequences of tokens (see Fig. 3). In such cases it was necessary to combine those isolated tokens into an integrated one. The modified tokenizing procedure now merges tandem tokens separated by either the "/" or ":" character, provided that they are marked by the OSCAR4 tag CM or incorporate a chemical element symbol. Additionally, tokens that look like "number %" and are situated at the beginning of such a phrase describing a chemical composition are merged into the integral token too (see Fig. 3).
Figure 3: https://static-content.springer.com/image/art%3A10.1186%2Fs13321-016-0136-4/MediaObjects/13321_2016_136_Fig3_HTML.gif
An example of the work of the modified tokenizer is shown in Fig. 3. Blue frames hold the tokens identified by the modified OSCAR4 tokenizer. Additional red frames outline tokens which are combined into integral ones. Such tokens are marked with the isolated tag COMP. This tag is used by the accept rule ChemUnigramRule to identify one-word n-grams describing chemical compositions; a sketch of the merging behaviour is given below. Then the position of each token in the text is determined. Splitting the series of tokens into sentences finalizes the tokenization process, which is realized with the help of the WordToSentenceAnnotator routine of the Stanford CoreNLP library.[16][17]
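The following sketch mimics (rather than reproduces) the described merging of tokens joined by "/" or ":". The isChemical helper is hypothetical, standing in for the real check "carries the OSCAR4 tag CM or contains a chemical element symbol".

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class CompositionMerger {
    /** Glue "prev SEP next" into one token when "/" or ":" joins two chemical-looking tokens. */
    public static List<String> merge(List<String> tokens) {
        List<String> out = new ArrayList<>();
        for (int i = 0; i < tokens.size(); i++) {
            String t = tokens.get(i);
            boolean separator = t.equals("/") || t.equals(":");
            if (separator && !out.isEmpty() && i + 1 < tokens.size()
                    && isChemical(out.get(out.size() - 1)) && isChemical(tokens.get(i + 1))) {
                // Merge the previous token, the separator and the next token into one.
                out.set(out.size() - 1, out.get(out.size() - 1) + t + tokens.get(++i));
            } else {
                out.add(t);
            }
        }
        return out;
    }

    /** Hypothetical stand-in for "has OSCAR tag CM or contains a chemical element symbol". */
    private static boolean isChemical(String t) {
        return t.matches(".*[A-Z][a-z]?\\d*.*");
    }

    public static void main(String[] args) {
        // A supported-catalyst notation split by the stock tokenizer is re-glued into one token.
        System.out.println(merge(Arrays.asList("Pt", "/", "Al2O3", "catalyst"))); // [Pt/Al2O3, catalyst]
    }
}
```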
Morphological analysis and labeling tokens with their POS tags
Morphological analysis (the Stanford CoreNLP library[18] is used) maps each word to a set of part-of-speech tags (the Penn Treebank Tag Set[19] of Stanford CoreNLP is used). Typical tags used in the research are NN for a noun (plural NNS), VB for a verb, JJ for an adjective, CD for a cardinal numeral, etc. For full information about the POS tags used by the terminology spectrum building procedure, see Table 4 (later in the paper).
Lemmatization
Lemmatization is the process of grouping together different inflected word forms so they can be treated as a single item. In the present work, however, lemmatization is only used to replace nouns in the plural form with their lemmas. Preliminary experiments demonstrated that further lemmatization is not helpful and leads to a significant loss of meaningful information (for example, "reforming process" yields the lemmas "reform" and "process", losing the name of a very important modern industrial chemical process in refining).
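A minimal Stanford CoreNLP sketch of the combined tokenization, sentence splitting, POS tagging and plural-only lemmatization described above (the ssplit annotator performs the sentence splitting; the example sentence is invented):

```java
import edu.stanford.nlp.ling.CoreAnnotations;
import edu.stanford.nlp.ling.CoreLabel;
import edu.stanford.nlp.pipeline.Annotation;
import edu.stanford.nlp.pipeline.StanfordCoreNLP;
import edu.stanford.nlp.util.CoreMap;

import java.util.Properties;

public class PosAndPluralLemmas {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.setProperty("annotators", "tokenize,ssplit,pos,lemma");
        StanfordCoreNLP pipeline = new StanfordCoreNLP(props);

        Annotation doc = new Annotation("The reforming processes over Pt catalysts were studied.");
        pipeline.annotate(doc);

        for (CoreMap sentence : doc.get(CoreAnnotations.SentencesAnnotation.class)) {
            for (CoreLabel token : sentence.get(CoreAnnotations.TokensAnnotation.class)) {
                String pos = token.get(CoreAnnotations.PartOfSpeechAnnotation.class);
                // Replace only plural nouns (NNS/NNPS) with their lemmas; keep all other tokens verbatim.
                String surface = pos.equals("NNS") || pos.equals("NNPS")
                        ? token.get(CoreAnnotations.LemmaAnnotation.class)
                        : token.word();
                System.out.println(surface + "\t" + pos);
            }
        }
    }
}
```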
Recognition of names of chemical entities
Meta-information about the names of chemical entities is very important in various term-like phrase retrieval strategies. The open source OSCAR4 (Open Source Chemistry Analysis Routines)[7][20] software package is applied for the selection and semantic annotation of chemical entities across a text. Among the variety of tags and attributes utilized by the OSCAR4 routine, only the following are used in the present study:
1. CM — chemical term (chemical name, formula or acronym);
2. RN — reaction (for example, epoxidation, dehydrogenation, hydrolysis, etc.);
3. ONT — ontology term (for example, glass, adsorption, cation, etc.).
When a token is part of a recognized chemical entity, the token gets the same OSCAR4 tag as the whole entity.
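A minimal sketch of chemical entity recognition with the OSCAR4 facade class, following the usage published for the library[7]; the example sentence and the printed output are illustrative only.

```java
import uk.ac.cam.ch.wwmm.oscar.Oscar;
import uk.ac.cam.ch.wwmm.oscar.document.NamedEntity;

public class ChemEntityTagging {
    public static void main(String[] args) {
        Oscar oscar = new Oscar();
        String text = "The dehydrogenation of propane over Pt/Al2O3 was studied.";
        for (NamedEntity ne : oscar.findNamedEntities(text)) {
            // Expected to print entries such as "dehydrogenation RN" and "Pt/Al2O3 CM".
            System.out.println(ne.getSurface() + "\t" + ne.getType());
        }
    }
}
```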
Recognition of tokens with special meaning
A significant part of the text pre-processing stage is the selection of individual tokens that are words of general English, and the recognition of various meaningful text strings, namely: general scientific terms (actually recognized at the final terminology spectrum building stage but described here for convenience); tokens denoting chemical elements, stable isotopes and measurement units; and tokens which cannot be part of any term in any way. This part of the work is performed using specially developed dictionaries, described in detail in Table 1.
Some extra explanation needs to be given on the general English dictionary, the stop list dictionary and the procedure of recognition of general scientific terms.
More than 560 words either found in scientific terminology (for instance: "acid", "alcohol", "aldehyde", "alloy", "aniline", etc.) or occurring in composite terms (for example, "abundant" may be part of the term "most abundant reactive intermediates") were excluded from the original version of the Corncob Lowercase Dictionary.
The IUPAC Compendium of Chemical Terminology (the only well-known and time-proven dictionary) is used as the source of general chemistry terms. To find the best way to match an n-gram to a scientific term from the compendium, a number of experiments were performed, resulting in the following criteria:
1. An n-gram is considered a general scientific term if all n-gram tokens are the words of a certain IUPAC Gold Book term, regardless of their order; and
2. If (n − 1) of n-gram tokens coincide with the (n − 1) words of an IUPAC Gold Book term, and the remaining word is among other terms in the dictionary, then the n-gram is considered a general scientific term too.
Some examples may be given. The n-gram "RADIAL CONCENTRATION GRADIENT" is a general scientific term because the phrase "concentration gradient" is in the compendium and the word "radial" is part of the term "radial development." The n-gram "CONTENT CATALYTIC ACTIVITY" is a general term because the term "catalytic activity content" is present in the compendium and differs from the n-gram only by word order. The n-gram "TOLUENE ADSORPTION CAPACITY" is not considered a general term, despite the fact that two words coincide with the term "absorption capacity," because the remaining word "TOLUENE" is special and is not found in the compendium. The n-gram "COBALT ACETATE DECOMPOSITION" is not considered a general term either as only the term "decomposition" may be found.
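The two criteria can be sketched as follows; the toy dictionary stands in for the IUPAC Gold Book, and word-set comparison is our simplified reading of the order-free matching described above.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class GoldBookMatcher {
    private final List<Set<String>> termWordSets = new ArrayList<>(); // each term as a word set
    private final Set<String> allTermWords = new HashSet<>();         // every word of every term

    public GoldBookMatcher(List<String> terms) {
        for (String term : terms) {
            Set<String> words = new HashSet<>(Arrays.asList(term.toLowerCase().split("\\s+")));
            termWordSets.add(words);
            allTermWords.addAll(words);
        }
    }

    public boolean isGeneralTerm(List<String> ngramTokens) {
        Set<String> ngram = new HashSet<>();
        for (String w : ngramTokens) ngram.add(w.toLowerCase());
        for (Set<String> term : termWordSets) {
            // Criterion 1: the n-gram and a dictionary term share the same words, order ignored.
            if (term.equals(ngram)) return true;
            // Criterion 2: a term covers all but one n-gram word, and the leftover word
            // occurs somewhere else in the dictionary.
            if (term.size() == ngram.size() - 1 && ngram.containsAll(term)) {
                for (String w : ngram) {
                    if (!term.contains(w) && allTermWords.contains(w)) return true;
                }
            }
        }
        return false;
    }

    public static void main(String[] args) {
        GoldBookMatcher gb = new GoldBookMatcher(Arrays.asList(
                "concentration gradient", "radial development", "adsorption capacity", "decomposition"));
        System.out.println(gb.isGeneralTerm(Arrays.asList("radial", "concentration", "gradient"))); // true
        System.out.println(gb.isGeneralTerm(Arrays.asList("toluene", "adsorption", "capacity")));   // false
        System.out.println(gb.isGeneralTerm(Arrays.asList("cobalt", "acetate", "decomposition")));  // false
    }
}
```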
A final comment concerns the stop list dictionary, which at first glance may look like a set of arbitrary words. It is actually based on a series of observations of the term-like phrases wrongly identified by an earlier version of the terminology analysis system.
Strict filtering
The last step in the text pre-processing stage is strict filtering, developed to remove unnecessary words and meaningless combinations of symbols. If at least one n-gram token is labeled with the strict filtering tag ("rubbish" : "true"), then that n-gram is not considered a term-like phrase. At this stage, the procedure looks for certain character sequences, as described by the filtering rules (Table 2) and not exempted by the list of exceptions (Table 3): successive digits, special symbols, measurement units, symbols of chemical elements, brackets and so on. Custom regular expressions and the standard dictionaries described in Table 1 are used for this procedure. A general scheme of strict filtering parsing is illustrated in Fig. 4.
Table notes: EL = designation of any chemical element; IS = designation of any stable isotope.
The following examples may be given to illustrate the decision-making process of defining a token as "valid" or "rubbish" (Fig. 5).
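Complementing the examples of Fig. 5, here is a minimal sketch of strict filtering in the spirit of Fig. 4. The patterns shown are illustrative stand-ins for the full rule and exception sets of Tables 2 and 3.

```java
import java.util.Arrays;
import java.util.List;
import java.util.regex.Pattern;

public class StrictFilter {
    // Illustrative filtering rules in the spirit of Table 2; the real system uses a
    // larger rule set plus the dictionaries of Table 1.
    private static final List<Pattern> RUBBISH = Arrays.asList(
            Pattern.compile("^\\d+([.,]\\d+)?$"),    // bare numeric values
            Pattern.compile("^\\p{Punct}+$"),        // punctuation-only tokens
            Pattern.compile("^\\d+\\s*(K|mL|wt%)$")  // number + measurement unit
    );
    // Illustrative exception in the spirit of Table 3: composition-like tokens (COMP) survive.
    private static final Pattern EXCEPTION = Pattern.compile(".*%?[A-Z][a-z]?\\d*([/:].+)+");

    public static boolean isRubbish(String token) {
        if (EXCEPTION.matcher(token).matches()) return false; // exceptions override the rules
        for (Pattern p : RUBBISH) {
            if (p.matcher(token).matches()) return true;
        }
        return false;
    }

    public static void main(String[] args) {
        for (String t : Arrays.asList("350", "(", "Pt/Al2O3", "selectivity")) {
            System.out.println(t + " -> " + (isRubbish(t) ? "rubbish" : "valid"));
        }
    }
}
```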
Summary of pre-processing stage
The final result of the text pre-processing stage is a marked and structured text with tagged tokens. These tags are then used by the various rules for term-like phrase selection. Since there is no need for all the tags from OSCAR4 and the Penn Treebank Tag Set, only a few of them are used in the term-like phrase retrieval procedure. The consolidated list of all tags which may be assigned to tokens at the different steps of the text pre-processing stage is given in Table 4.
As an illustration of tag assignment, the following example may be given. Figure 6 shows an example sentence where a few tokens have been tagged. For instance, the token 2.7 %CO/10.0 %H2O/He carries the tags (pos = "CD"; lemma = "2.7 %CO/10.0 %H2O/He"; oscar = "CM"; rubbish = "false"; exception = "comp"). Every token has at least two tags: pos (holding the part-of-speech information) and lemma (corresponding to the lemma of the token). In addition, some tokens related to chemistry (indicating chemical substances, formulas, reactions, etc.) have a tag oscar taking the value CM or ONT. Last but not least is the tag rubbish ("true" or "false"), marking tokens to which strict filtering is to be applied.
N-grams spectrum retrieval procedure
As defined earlier in our study, the term "n-gram of length n" denotes a sequence or string of n consecutive tokens situated within the same sentence, with the omission of useless tokens (at the moment, only definite/indefinite articles). The n-gram set is obtained by moving a window of n tokens through an entire sentence, token by token, and this process is repeated over all sentences of all texts in a set.

Within a set of texts, each n-gram may be characterized by its textual frequency of occurrence, F_t (the total number of the n-gram's occurrences within a given text), and by its absolute frequency of occurrence, F_a (the total number of the n-gram's occurrences over the whole set). As a result, each n-gram may be described by a vector of its textual frequencies across a set of texts, enabling the development of additional procedures for n-gram filtering and text information analysis.

The full n-gram data set is redundant, which creates difficulties for analysis. For specific purposes, different filtration procedures are to be applied; for instance, threshold filtering based on the values of F_t and F_a may be used.
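A minimal sketch of the moving-window n-gram retrieval with both frequency counts, under the definitions above; the toy texts are invented, and F_t is kept as a per-text occurrence vector.

```java
import java.util.*;

public class NgramSpectrum {
    /** All n-grams of one tokenized sentence, obtained by moving an n-token window. */
    static List<String> ngrams(List<String> sentence, int n) {
        List<String> out = new ArrayList<>();
        for (int i = 0; i + n <= sentence.size(); i++) {
            out.add(String.join(" ", sentence.subList(i, i + n)));
        }
        return out;
    }

    public static void main(String[] args) {
        // Two toy "texts"; each text is a list of tokenized sentences (articles already dropped).
        List<List<List<String>>> texts = Arrays.asList(
                Arrays.asList(Arrays.asList("propane", "dehydrogenation", "catalyst")),
                Arrays.asList(Arrays.asList("novel", "propane", "dehydrogenation", "route")));

        int n = 2;
        Map<String, Integer> absFreq = new HashMap<>();        // F_a: occurrences over the whole set
        Map<String, List<Integer>> textFreq = new HashMap<>(); // F_t: per-text occurrence vector
        for (int t = 0; t < texts.size(); t++) {
            Map<String, Integer> perText = new HashMap<>();
            for (List<String> sentence : texts.get(t)) {
                for (String g : ngrams(sentence, n)) {
                    perText.merge(g, 1, Integer::sum);
                    absFreq.merge(g, 1, Integer::sum);
                }
            }
            for (Map.Entry<String, Integer> e : perText.entrySet()) {
                textFreq.computeIfAbsent(e.getKey(),
                        k -> new ArrayList<>(Collections.nCopies(texts.size(), 0)))
                        .set(t, e.getValue());
            }
        }
        System.out.println(absFreq);  // e.g. {propane dehydrogenation=2, ...}
        System.out.println(textFreq); // e.g. {propane dehydrogenation=[1, 1], ...}
    }
}
```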
Module of terminology spectrum building
The final stage of the analysis is to distinguish, among the scores of n-grams, the term-like phrases, general chemistry scientific terms, names of chemical entities and useless n-grams. The calculation of the textual and absolute frequencies of term occurrence finishes the terminology spectrum building.
To select term-like n-grams, sets of accept and reject rules are applied. They are all based on the token tags assigned at previous steps and on the developed dictionaries (Table 1). The intention of each set of rules is to determine whether an n-gram of a defined length is a term-like phrase or not by analyzing its structure. All rules are applied in a consecutive manner. If an n-gram conforms to an accept or reject rule in the rule sequence, the procedure stops and the n-gram is declared either a non-term-like or a term-like phrase, possibly having a special meaning (e.g. a general chemistry scientific term or chemical entity). If no rule is applicable, the n-gram is considered a term-like phrase too. There are a few general rules that can be used for the analysis of n-grams of any length, as well as tailored sets of rules for 1-grams (Table 5), 2-grams (Table 6) and longer (n > 2) n-grams (Table 7); a sketch of the sequential rule evaluation is given below.
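The sequential accept/reject evaluation can be sketched as follows. The two rules shown are crude stand-ins for the rule sets of Tables 5–7 (the second loosely echoes the ChemUnigramRule for COMP tokens); the real rules operate on the full token tags.

```java
import java.util.Arrays;
import java.util.List;

public class RuleChain {
    enum Verdict { ACCEPT, REJECT, PASS }

    /** One accept-or-reject rule over an n-gram (token tags omitted for brevity). */
    interface Rule {
        Verdict apply(List<String> ngram);
    }

    /** Rules run in order; the first ACCEPT or REJECT decides. If no rule fires,
        the n-gram is kept as a term-like phrase (the paper's default). */
    static boolean isTermLike(List<String> ngram, List<Rule> rules) {
        for (Rule r : rules) {
            Verdict v = r.apply(ngram);
            if (v == Verdict.ACCEPT) return true;
            if (v == Verdict.REJECT) return false;
        }
        return true;
    }

    public static void main(String[] args) {
        Rule rejectBareNumbers = g -> g.stream().anyMatch(t -> t.matches("\\d+"))
                ? Verdict.REJECT : Verdict.PASS;
        Rule acceptComposition = g -> g.size() == 1 && g.get(0).contains("/")
                ? Verdict.ACCEPT : Verdict.PASS;
        List<Rule> rules = Arrays.asList(rejectBareNumbers, acceptComposition);

        System.out.println(isTermLike(Arrays.asList("Pt/Al2O3"), rules));              // true
        System.out.println(isTermLike(Arrays.asList("350", "K"), rules));              // false
        System.out.println(isTermLike(Arrays.asList("catalytic", "cracking"), rules)); // true (default)
    }
}
```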
The following examples may be given to illustrate the decision-making process of whether an n-gram may be considered a term-like phrase or not (Fig. 7).
The next step in the terminology analysis stage is the tagging of term-like phrases to describe their roles as entities having a special meaning. The following tags exist at the moment: term-like phrase, general chemistry term, and chemical entity. The final step is an additional filtration procedure aimed at reducing the number of term-like phrases, performed by removing short term-like phrases which are parts of longer n-grams. The criterion for applying the filter is equality of the absolute frequencies of occurrence of the short and the long n-gram.
Results and discussion
An example of automatic term-like phrase retrieval is shown in Fig. 8, with some term-like and filtered-off n-grams highlighted. For the filtered-off n-grams, the reject rules used are given as well. For the detailed results of terminology analysis for one preselected Congress abstract, see Additional file 1.
To understand the overall performance of the term-like phrase retrieval routine, the full set of text abstracts belonging to five EuropaCat events was processed, and the obtained data were statistically analyzed (see Table 8). It may be seen that the term-like phrase retrieval procedure reduces the total number of all available n-grams to 1–3 percent, depending on the n-gram length n.
Table 8 demonstrates that the maximum absolute number of term-like n-grams corresponds to n = 2 (bigrams), which is in good accordance with the well-known average term length in scientific texts. On the other hand, term indexes are often limited to n-gram lengths n = 1, 2, 3. The limit n = 3 looks good enough for general science vocabulary (see the NGS value in Table 8, the number of general scientific terms found), but it is not sufficient for a specialized thesaurus (e.g. for catalysis). The number of term-like n-grams with the COMP tag is also large for different n, including n > 3. Summarizing, it should be said that long-length term retrieval is a distinctive feature of the suggested approach.
It is also seen from Table 8 that nearly half of the total number of 1-grams have the OSCAR tag CM. It should also be noted that if a plausible term-like phrase has just one token with an OSCAR tag, the system considers the whole phrase to have the same tag. This may explain the close values (in percentages) for phrases of different lengths.
To assess the overall effectiveness of the term-like phrase retrieval procedure, it seems necessary to quantitatively answer the question of what precision and recall values can be achieved. To do that, a preliminary comparison between automatically and manually selected term-like phrases was performed with the help of two professional chemical scientists, who picked out the term-like phrases from a limited set of arbitrarily selected documents. To include a phrase in the list of term-like phrases, agreement between both experts was required. It should be noted that the experts were not required to follow the same procedure of moving a window of n tokens through an entire sentence that is used for n-gram isolation. Moreover, the experts took into account and analyzed the information carried by some simple grammatical structures typical for scientific texts, such as structures with enumeration and so on. This leads to additional differences between the sets of expert-selected and automatically selected term-like phrases (for an example, see Fig. 9).
The data obtained through expert terminological analysis were compared with the automatically retrieved terms, and the precision (P), recall (R) and F-measure values were calculated. In this paper, precision[21] indicates the fraction of automatically retrieved term-like phrases which coincide with the expert-selected ones, while recall is the fraction of the expert-selected term-like phrases that are retrieved by the system.
Both precision and recall may therefore be used as measures of the relevance and efficiency of the term-like phrase retrieval process. In simple terms, a high precision value means that substantially more genuine term-like phrases are selected than erroneous ones, while a high recall value means that most of the term-like phrases present in the text are selected.
Very often these two measures (P and R) are combined into a single value, the F1-measure[22], to provide an overall characteristic of system performance. The F1-measure is the harmonic mean of P and R, reaching 1 as its best and 0 as its worst value:

F1 = 2PR / (P + R)
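A minimal sketch of this evaluation arithmetic on toy phrase sets, using exact string matching as in the paper's coincidence counts; the phrases are invented.

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

public class RetrievalMetrics {
    public static void main(String[] args) {
        Set<String> expert = new HashSet<>(Arrays.asList(
                "zeolite framework", "propane dehydrogenation", "coke deposition"));
        Set<String> retrieved = new HashSet<>(Arrays.asList(
                "zeolite framework", "propane dehydrogenation", "reaction was studied"));

        Set<String> hits = new HashSet<>(retrieved);
        hits.retainAll(expert);                             // exact matches between the two sets

        double p = (double) hits.size() / retrieved.size(); // precision
        double r = (double) hits.size() / expert.size();    // recall
        double f1 = 2 * p * r / (p + r);                    // harmonic mean of P and R
        System.out.printf("P=%.2f R=%.2f F1=%.2f%n", p, r, f1); // P=0.67 R=0.67 F1=0.67
    }
}
```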
The results on the numbers of expert-selected and automatically retrieved term-like phrases, the number of coincidences, and the calculated P, R and F1 values are presented in Table 9. For the detailed results of terminology analysis for one preselected text, see Additional file 1.
It may therefore be concluded that further improvements in term-like phrase retrieval efficiency can be made by bringing into consideration knowledge of the typical grammatical structures used in scientific texts[12][23], as well as the numeric values of both the textual and absolute frequencies of n-gram occurrence.
It is also seen that the first version of the terminology analysis system delivers sufficiently high precision and recall values in the term-like phrase retrieval process. Some comparison can be made with the values P = 0.34–0.40, R = 0.11–0.14 and F1 = 0.17–0.20 reported[6] for such well-known keyphrase retrieval systems as Wingnus, Sztergak and KP-Miner, although such a comparison is not consistent enough to be fully credible, due to the different goals of the systems being compared (term-like phrase vs. keyphrase retrieval).
Conclusions
As mentioned in the introduction, scientific publications are still the most important sources of scientific knowledge, and new methods aimed at retrieving meaningful information from natural language documents are particularly welcome today. The structural foundation of any such publication is its widely accepted terms and term-like phrases, conveying useful facts and shades of meaning of the document's content.
The present study was aimed at developing, testing and assessing a methodology for the automated extraction of a full terminology spectrum from natural language chemical PDF documents, retrieving as many term-like phrases as possible. Term-like phrases are defined as one or more consecutive words and/or alphanumeric string combinations which convey a specific scientific meaning, with spelling and context unchanged from the real text. The terminology spectrum of a natural language publication is defined as an indexed list of tagged entities: recognized general science notions, terms linked to existing thesauri, names of chemical substances/reactions and term-like phrases. The retrieval routine is based on n-gram text analysis with the sequential application of complex accept and reject rules. The main distinctive feature of the suggested approach is that it picks out all parsable term-like phrases, rather than selecting a limited set of keyphrases meeting predefined criteria. The next step is to build an extensive term index of a text collection. The developed approach neither takes into account semantic similarity nor differentiates between similar term-like phrases (distinct evaluation metrics may be employed to do this at later stages). The approach, which comprises a number of sequentially running procedures, appears to show good results in terminology spectrum retrieval as compared with well-known keyphrase retrieval systems.[6] The term-like phrase parsing efficiency is quantified with precision (P = 0.53), recall (R = 0.71) and F1-measure (F1 = 0.61) values calculated from a limited set of documents manually processed by professional chemical scientists.
Terminology spectrum retrieval may be used to perform various types of text analysis across document collections. We believe that this sort of terminology spectrum may be successfully employed for text information retrieval and for reference database development. For example, it may be used to develop thesauri, to analyze research trends in subject fields by registering changes in terminology, to derive inference rules in order to understand particular text content, to look for similarity between documents by comparing their terminology spectra within an appropriate vector space, and to develop methods to automatically map a document to a reference database field.
For instance, if a set contains a collection of texts from different time periods (in our research, several different events of the EuropaCat research conference were used), the analysis of the textual and absolute frequencies of occurrence makes it possible to follow the "life cycle" of each term-like phrase on a quantitative level (term usage increasing, decreasing and so on). That gives a unique capability to discover research trends and new concepts in the subject field by registering changes in terminology usage in the most rapidly developing areas of research. Moreover, similar dynamics of change over time for different terms often indicates the existence of an associative linkage between them (e.g. between a new process and a developed catalyst or methodology).[24] Indicator words or phrases such as "for the first time," "unique," and "distinctive feature" may also be used to detect things like new recipes or catalyst compositions for the explored process.
The usage of terminology spectra for information retrieval will be the subject of our subsequent publications.
Declarations
Authors' contributions
BA contributed to software development and architecture. AK conceived of the project and the tasks to be solved. AK and LI designed and performed the experiments, tested the applications and offered feedback as chemical experts. NS and VG were responsible for the n-gram analysis algorithm and scientific feedback. VP conceived and coordinated the study. All authors contributed to the scientific and methodological progress of this project. All authors read and approved the final manuscript.
Acknowledgements
Financial assistance provided by the Russian Academy of Sciences (Project No. V.46.4.4) is gratefully acknowledged.
Competing interests
The authors declare that they have no competing interests.
Open access
This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.
Additional files
Additional file 1. The detailed example of PDF transformation with terminology analysis performed by experts and by automatic analysis: 13321_2016_136_MOESM1_ESM.pdf
Additional file 2. OSCAR4 tokenizer modification: 13321_2016_136_MOESM2_ESM.pdf
Additional file 3. List of excluded words from general English Corncob-Lowercase list: 13321_2016_136_MOESM3_ESM.pdf
Additional file 4. List of stop words used: 13321_2016_136_MOESM4_ESM.pdf
Additional file 5. List of stable isotopes: 13321_2016_136_MOESM5_ESM.pdf
Additional file 6. List of chemical element symbols: 13321_2016_136_MOESM6_ESM.pdf
Additional file 7. List of measurement units: 13321_2016_136_MOESM7_ESM.pdf
References
- ↑ Salton, G. (1991). "Developments in Automatic Text Retrieval". Science 253 (5023): 974–980. doi:10.1126/science.253.5023.974. PMID 17775340.
- ↑ "IUPAC Gold Book". International Union of Pure and Applied Chemistry. 2014. http://goldbook.iupac.org/.
- ↑ Hussey, R.; Williams, S.; Mitchell, R. (2012). "Automatic keyphrase extraction: A comparison of methods". eKNOW, Proceedings of The Fourth International Conference on Information Process, and Knowledge Management: 18–23. ISBN 9781612081816.
- ↑ Eltyeb, S.; Salim, N. (2014). "Chemical named entities recognition: a review on approaches and applications". Journal of Cheminformatics 6: 17. doi:10.1186/1758-2946-6-17. PMC PMC4022577. PMID 24834132. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4022577.
- ↑ Gurulingappa, H.; Mudi, A.; Toldo, L.; Hofmann-Apitius, M.; Bhate, J. (2013). "Challenges in mining the literature for chemical information". RSC Advances 3: 16194–16211. doi:10.1039/C3RA40787J.
- ↑ 6.0 6.1 6.2 6.3 Kim, S.N.; Medelyan, O.; Kan, M.-Y.; Baldwin, T. (2013). "Automatic keyphrase extraction from scientific articles". Language Resources and Evaluation 47 (3): 723–742. doi:10.1007/s10579-012-9210-3.
- ↑ 7.0 7.1 Jessop, D.M.; Adams, S.E.; Willighagen, E.L.; Hawizy, L.; Murray-Rust, P. (2011). "OSCAR4: A flexible architecture for chemical text-mining". Journal of Cheminformatics 3: 41. doi:10.1186/1758-2946-3-41. PMC PMC3205045. PMID 21999457. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3205045.
- ↑ Hawizy, L.; Jessop, D.M.; Adams, N.; Murray-Rust, P. (2011). "ChemicalTagger: A tool for semantic text-mining in chemistry". Journal of Cheminformatics 3: 17. doi:10.1186/1758-2946-3-17. PMC PMC3117806. PMID 21575201. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3117806.
- ↑ "Re-examining automatic keyphrase extraction approaches in scientific articles". MWE '09 Proceedings of the Workshop on Multiword Expressions: Identification, Interpretation, Disambiguation and Applications: 9–16. 2009. ISBN 9781932432602.
- ↑ "Approximate matching for evaluating keyphrase extraction". RANLP '09: International Conference on Recent Advances in Natural Language Processing: 484–489. 2009.
- ↑ Castellvi, M.T.C.; Bagot, R.E.; Palatresi, J.V. (2001). "Automatic term detection: A review of current systems". In Bourigault, D.; Jacquemin, C.; L'Homme, M.-C.. Recent Advances in Computational Terminology. John Benjamins Publishing Company. pp. 53–87. doi:10.1075/nlp.2.04cab. ISBN 9789027298164.
- ↑ 12.0 12.1 Bolshakova, E.I.; Efremova, N.E. (2015). "A Heuristic Strategy for Extracting Terms from Scientific Texts". In Khachay, M.Y.; Konstantinova, N.; Panchenko, A.; Ignatov, D.I.; Labunets, V.G.. Analysis of Images, Social Networks and Texts. Springer International Publishing. pp. 297-307. doi:10.1007/978-3-319-26123-2_29. ISBN 9783319261232.
- ↑ Salton, G.; Buckley, C. (1991). "Global Text Matching for Information Retrieval". Science 253 (5023): 1012–1015. doi:10.1126/science.253.5023.1012. PMID 17775345.
- ↑ Chodorow, K.; Dirolf, M. (2010). MongoDB: The Definitive Guide. O'Reilly Media. ISBN 9781449381561.
- ↑ "PDFxStream". Snowtide Informatics Systems, Inc. 2016. https://www.snowtide.com/.
- ↑ "Stanford CoreNLP – A suite of core NLP tools". Github. 2016. http://stanfordnlp.github.io/CoreNLP/.
- ↑ Manning, C.D.; Surdeanu, M.; Bauer, J.; Finkel, J.; Bethard, S.J.; McClosky, D. (2014). "The Stanford CoreNLP Natural Language Processing Toolkit". Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics: System Demonstrations: 55–60. doi:10.3115/v1/P14-5010.
- ↑ Toutanova, K.; Klein, D.; Manning, C.D.; Singer, Y. (2003). "Feature-rich part-of-speech tagging with a cyclic dependency network". NAACL '03 Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology 1: 173–180. doi:10.3115/1073445.1073478.
- ↑ Taylor, A.; Marcus, M.; Santorini, B. (2003). "The Penn Treebank: An Overview". In Abeillé, A.. Text, Speech and Language Technology. 20. Springer Netherlands. pp. 5–22. doi:10.1007/978-94-010-0201-1_1. ISBN 978-94-010-0201-1.
- ↑ "Semantic enrichment of journal articles using chemical named entity recognition". ACL '07 Proceedings of the 45th Annual Meeting of the ACL on Interactive Poster and Demonstration Sessions: 45–48. 2007.
- ↑ "Precision and recall". Wikimedia Foundation. https://en.wikipedia.org/wiki/Precision_and_recall.
- ↑ "F1 score". Wikimedia Foundation. https://en.wikipedia.org/wiki/F1_score.
- ↑ Bolshakova, E.; Efremova, N.; Noskov, A. (2010). "LSPL-patterns as a tool for information extraction from natural language texts". In Markov, K.; Ryazanov, V.; Velychko, V.; Aslanyan, L.. New Trends in Classification and Data Mining. ITHEA. pp. 110–118. ISBN 9789541600429.
- ↑ Gusev, V.D.; Salomatina, N.V.; Kuzmin, A.O.; Parmon, V.N. (2012). "An express analysis of the term vocabulary of a subject area: The dynamics of change over time". Automatic Documentation and Mathematical Linguistics 46 (1): 1–7. doi:10.3103/S0005105512010025.
Notes
This presentation is faithful to the original, with only a few minor changes to presentation. In some cases important information was missing from the references, and that information was added. Numerous grammar errors were also corrected throughout the entire text. Finally, the original document on SpringerOpen includes a reference that doesn't clearly get placed inline. It's assumed the final citation from Gusev et al. was meant to be placed in the last paragraph, which is where we have put it.