A simple and fast framework for
- Preprocessing or cleaning of text
- Extracting top words or reducing the vocabulary
- Feature extraction
- Word vectorization
Update: The package is now published on PyPI. Install it using pip.
Uses parallel execution, leveraging Python's multiprocessing library, for the text cleaning, top-word extraction, and feature extraction modules. Both sequential and parallel modes are provided (sequential for less CPU-intensive workloads), with a user-defined number of processes.
PS: There is no multiprocessing support for word vectorization.
Cleaning Text
- Cleans text through a number of defined stages, implemented using standardized techniques in Natural Language Processing (NLP)

Vocab Reduction
- Finds the top words in the corpus, lets you choose a threshold for which words stay in the corpus, and replaces the others

Feature Extraction
- Extracts features from a corpus of text using spaCy

Word Vectorization
- Simple code to convert words to vectors (TF-IDF, Word2Vec, GloVe) using Scikit-learn and Gensim
Uses NLTK for a few of the stages defined below. The various cleaning stages include (a usage sketch follows the table):
Stage | Description |
---|---|
remove_tags_nonascii | Removes HTML tags, emails, URLs, and non-ASCII characters, and converts accented characters |
lower_case | Converts the text to lower case |
expand_contractions | Expands word contractions |
remove_punctuation | Removes punctuation from the text; sentences stay separated by ' . ' |
remove_esacape_chars | Removes escape characters like \n, \t, etc. |
remove_stopwords | Removes stopwords using NLTK |
remove_numbers | Removes all digits in the text |
lemmatize | Uses WordNetLemmatizer to lemmatize the text |
stemming | Uses SnowballStemmer to stem the text |
min_word_len | Minimum word length to keep in the text |
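A minimal usage sketch is below. The function names `preprocess_nlp` (sequential) and `asyn_call_preprocess` (parallel) come from this repo, but the argument shapes (the stage list and `n_processes`) are assumptions; see `Preprocessing_Example_Notebook.ipynb` and the docstrings for the real signatures.

```python
# Hypothetical usage sketch; the argument shapes are assumptions,
# not the exact signatures (see Preprocessing_Example_Notebook.ipynb).
from preprocess_nlp import preprocess_nlp, asyn_call_preprocess

corpus = ["<p>Visit https://example.com!</p>", "He isn't going, is he?"]

# Stages assumed to be applied in the order listed.
stages = ["remove_tags_nonascii", "lower_case", "expand_contractions",
          "remove_punctuation", "remove_stopwords", "lemmatize"]

cleaned = preprocess_nlp(corpus, stages)                            # sequential
cleaned_par = asyn_call_preprocess(corpus, stages, n_processes=4)   # parallel
```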
Shortlists the top words based on an input percentage, and efficiently replaces the words that were not shortlisted. Supports both parallel and sequential processing. A plain-Python sketch of the idea follows.
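The sketch below shows the underlying idea rather than this repo's API: count word frequencies, keep the top slice of the vocabulary, and replace everything else with a placeholder token (the `<UNK>` token and the function name here are illustrative).

```python
from collections import Counter

def reduce_vocab(tokenized_docs, keep_pct=90.0, replacement="<UNK>"):
    """Keep the most frequent words covering keep_pct percent of the
    unique vocabulary; replace every other word with a placeholder."""
    counts = Counter(word for doc in tokenized_docs for word in doc)
    n_keep = int(len(counts) * keep_pct / 100)
    keep = {word for word, _ in counts.most_common(n_keep)}
    return [[word if word in keep else replacement for word in doc]
            for doc in tokenized_docs]

docs = [["the", "cat", "sat"], ["the", "dog", "ran", "far"]]
print(reduce_vocab(docs, keep_pct=50.0))
# [['the', 'cat', 'sat'], ['the', '<UNK>', '<UNK>', '<UNK>']]
# (order among equal-frequency words may vary)
```

Building the kept set once and doing a set-membership test per word is what keeps the replacement pass efficient, even on large corpora.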
Uses spaCy's pipe method to avoid unnecessary parsing and increase speed. The various feature extraction stages include (an illustrative example follows the table):
Stage | Description |
---|---|
nouns | Extracts the list of nouns from the given string |
verbs | Extracts the list of verbs from the given string |
adjs | Extracts the list of adjectives from the given string |
noun_phrases | Extracts the list of noun phrases (noun chunks) from the given string |
keywords | Uses YAKE to extract keywords from the text |
ner | Extracts Person, Location, and Organization named entities |
numbers | Extracts all digits in the text |
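Here is a sketch of what these extractions look like in raw spaCy (not this repo's wrappers); the model name `en_core_web_sm` and the entity-label filter are assumptions.

```python
import spacy

# Assumes the small English model is installed:
#   python -m spacy download en_core_web_sm
nlp = spacy.load("en_core_web_sm")

texts = ["Apple is looking at buying a U.K. startup for $1 billion."]

# nlp.pipe streams texts through the pipeline in batches, avoiding the
# overhead of parsing each string with a separate nlp() call.
for doc in nlp.pipe(texts):
    nouns = [t.text for t in doc if t.pos_ == "NOUN"]
    verbs = [t.text for t in doc if t.pos_ == "VERB"]
    adjs = [t.text for t in doc if t.pos_ == "ADJ"]
    noun_phrases = [chunk.text for chunk in doc.noun_chunks]
    # spaCy labels locations as GPE/LOC; mapping these to "Location"
    # is an assumption about how the ner stage works.
    ents = [(e.text, e.label_) for e in doc.ents
            if e.label_ in ("PERSON", "GPE", "LOC", "ORG")]
    print(nouns, verbs, adjs, noun_phrases, ents)
```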
Functions written in Python to convert words to vectors using Scikit-learn and Gensim. Four vectorization techniques are covered: CountVectorizer (bag-of-words model), TfidfVectorizer, Word2Vec, and GloVe. Additional helpers return the top words by IDF score, similar words with their similarity scores, and average sentence-wise vectors. A short sketch of these building blocks follows.
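The sketch below uses the underlying library calls directly (Scikit-learn and Gensim 4.x APIs), not this repo's function signatures; GloVe vectors are typically loaded pretrained (e.g. via `gensim.downloader`) rather than trained from scratch.

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from gensim.models import Word2Vec

corpus = ["the cat sat on the mat", "the dog sat on the log"]

# Bag-of-words counts.
bow = CountVectorizer().fit_transform(corpus)

# TF-IDF weights; the top words by IDF score can be read off tfidf.idf_.
tfidf = TfidfVectorizer()
X = tfidf.fit_transform(corpus)
top_by_idf = sorted(zip(tfidf.get_feature_names_out(), tfidf.idf_),
                    key=lambda pair: -pair[1])

# Word2Vec over tokenized sentences; similar words with similarity scores.
tokens = [doc.split() for doc in corpus]
w2v = Word2Vec(sentences=tokens, vector_size=50, window=3, min_count=1)
print(w2v.wv.most_similar("cat", topn=3))

# An average sentence-wise vector: the mean of the sentence's word vectors.
sent_vec = np.mean([w2v.wv[w] for w in tokens[0]], axis=0)
```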
Various Python files and their purposes are listed below:

preprocess_nlp.py
- Contains functions built around existing techniques for preprocessing or cleaning text. Defines both sequential and parallel ways of executing the preprocessing code.

Preprocessing_Example_Notebook.ipynb
- How-to-use example notebook for the preprocessing/cleaning stages

requirements.txt
- Libraries required to run the project

vocab_elimination_nlp.py
- Contains functions built around existing techniques for shortlisting top words and reducing vocabulary size

Vocab_Elimination_Example_Notebook.ipynb
- How-to-use example notebook for vocabulary reduction/elimination or replacement

feature_extraction.py
- Contains functions built around spaCy for extracting features from text

Feature_Extraction_Example_Notebook.ipynb
- How-to-use example notebook for the feature extraction stages

vectorization_nlp.py
- Contains functions built around existing techniques for vectorizing words

Vectorization_Example_Notebook.ipynb
- How-to-use example notebook for word vectorization and the additional helper functions
- `pip install -r requirements.txt`
- `pip install preprocess-nlp`
- Import the functions and start using them
- `pip install -r requirements.txt`
- Import `preprocess_nlp.py` and use the functions `preprocess_nlp` (for sequential) and `asyn_call_preprocess` (for parallel) as defined in the notebook
- Import `vocab_elimination_nlp.py` and use the functions as defined in the notebook `Vocab_Elimination_Example_Notebook.ipynb`
- Import `feature_extraction.py` and use the functions as defined in the notebook `Feature_Extraction_Example_Notebook.ipynb`
- Import `vectorization_nlp.py` and use the functions as defined in the notebook `Vectorization_Example_Notebook.ipynb`
- Sequential - Processes records one after another; consumes little memory but is slower than parallel processing
- Parallel - Creates multiple processes (a customizable/user-defined number) to preprocess the text in parallel; memory-intensive but faster (see the sketch below)
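The general pattern looks like the sketch below, a plain `multiprocessing` illustration of the trade-off rather than this repo's internal implementation.

```python
from multiprocessing import Pool

def clean_record(text):
    # Stand-in for the real per-record cleaning stages.
    return text.lower().strip()

records = ["  Hello World  ", "  FOO bar  "] * 1000

if __name__ == "__main__":
    # Sequential: low memory footprint, slower on large corpora.
    cleaned_seq = [clean_record(r) for r in records]

    # Parallel: a user-defined number of worker processes; faster, but
    # memory-intensive since the data is shipped out to each process.
    with Pool(processes=4) as pool:
        cleaned_par = pool.map(clean_record, records)
```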
Refer to the docstrings in the code for further function-level documentation.
Cheers :)