- Go to the DBLP releases page and download a recent release (e.g., dblp-2020-11-01.xml.gz); the accompanying .md5 file can be used to verify the download.
- Unzip it (dblp-2020-11-01.xml) and copy it into /assets/data/
- Create and activate a Python virtual environment. We have tested with Python 3.11.
python -m pip install --upgrade pip setuptools wheel
python -m pip install numpy --no-use-pep517
python -m pip install pandas --no-use-pep517
(The --no-use-pep517 flag is for M1 Macs; on other platforms, try without it.)
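The "create and activate a virtual environment" step above is not spelled out; a minimal sketch, assuming a Unix-like shell and the (illustrative) environment name .venv:

```shell
python3 -m venv .venv        # create the environment ("python" on some systems)
. .venv/bin/activate         # activate it (on Windows: .venv\Scripts\activate)
python -c "import sys; print(sys.prefix)"   # should print a path inside .venv
```

Once activated, the pip commands above install into the environment rather than the system Python.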
This file defines various configurations such as the path to the data, the path to the output file, the venues of interest, etc.
The DBLP file contains raw & characters, which cause issues with the lxml parser. Exec'ing this script replaces instances of & between <ee></ee> tags with a special tag, %26. This tag is replaced back to & in a later step.
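The replacement described above can be sketched with a regular expression. This is a simplified illustration (it assumes plain <ee> tags without attributes), not the actual script:

```python
import re

def escape_ee_ampersands(xml_text):
    """Replace raw '&' characters inside <ee>...</ee> tags with '%26'
    so that the lxml parser does not trip over unescaped entities."""
    def _fix(match):
        return "<ee>" + match.group(1).replace("&", "%26") + "</ee>"
    return re.sub(r"<ee>(.*?)</ee>", _fix, xml_text)

# A raw '&' in an electronic-edition URL
line = "<ee>https://example.org/paper?id=1&v=2</ee>"
print(escape_ee_ampersands(line))
# <ee>https://example.org/paper?id=1%26v=2</ee>
```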
Exec'ing this script iterates through the DBLP dataset and persists a list of venue types (e.g., booktitle, journal) and article types (e.g., inproceedings, article, incollection).
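The iteration can be sketched with a streaming parse. The sketch below uses the standard library's ElementTree for self-containment (the pipeline itself uses lxml), and the tag names are the usual DBLP ones:

```python
import xml.etree.ElementTree as ET
from io import StringIO

ARTICLE_TAGS = {"article", "inproceedings", "incollection"}
VENUE_TAGS = {"journal", "booktitle"}

def collect_types(xml_stream):
    """Stream the XML and record which article and venue tags occur."""
    article_types, venue_types = set(), set()
    for _, elem in ET.iterparse(xml_stream, events=("end",)):
        if elem.tag in ARTICLE_TAGS:
            article_types.add(elem.tag)
            for child in elem:
                if child.tag in VENUE_TAGS:
                    venue_types.add(child.tag)
            elem.clear()  # free memory; the real file is several GB
    return article_types, venue_types

sample = StringIO(
    "<dblp>"
    "<article key='a'><journal>TVCG</journal></article>"
    "<inproceedings key='p'><booktitle>CHI</booktitle></inproceedings>"
    "</dblp>"
)
print(collect_types(sample))
```

Note that parsing the real DBLP release also requires resolving the entities declared in dblp.dtd, which this toy sample sidesteps.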
Exec'ing this script creates a .tsv file with the venues of interest (e.g., VIS, CHI) and their details as filtered from the DBLP dataset. Attributes such as abstract, citation_count, and keywords that are scraped in the subsequent step are also initialized here.
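The initialization can be sketched with the standard csv module; the file name, columns, and venue filter below are illustrative placeholders, not the script's actual ones:

```python
import csv

VENUES_OF_INTEREST = {"TVCG", "CHI"}           # hypothetical filter
records = [                                    # hypothetical DBLP-derived rows
    {"title": "A VIS Paper", "venue": "TVCG", "year": "2021"},
    {"title": "Out of Scope", "venue": "Other", "year": "2020"},
]

with open("venues.tsv", "w", newline="") as f:
    writer = csv.DictWriter(
        f,
        fieldnames=["title", "venue", "year", "abstract", "citation_count", "keywords"],
        delimiter="\t",
    )
    writer.writeheader()
    for rec in records:
        if rec["venue"] in VENUES_OF_INTEREST:
            # abstract, citation_count, and keywords start empty and are
            # filled in by the scrapers in the next step
            writer.writerow({**rec, "abstract": "", "citation_count": "", "keywords": ""})
```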
Exec'ing this script calls the abstract, citation_count, and keywords scrapers in the scrapers/ directory and updates the .tsv file created in the step above.
Exec'ing this file postprocesses authors and keywords for analysis purposes, e.g., decoding UTF-8 author names to an ASCII form.
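One common way to do the UTF-8-to-ASCII decoding is to strip accents via Unicode decomposition with the standard unicodedata module; the actual script may use a different approach:

```python
import unicodedata

def to_ascii(name):
    """Decompose accented characters (NFKD), then drop the
    non-ASCII combining marks that remain."""
    return unicodedata.normalize("NFKD", name).encode("ascii", "ignore").decode("ascii")

print(to_ascii("Jörg Müller"))  # Jorg Muller
```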
Exec'ing this file creates a list of unique keywords.
Exec'ing this file creates a list of unique author names.
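Both of the deduplication steps above boil down to the same operation; a minimal sketch with illustrative data:

```python
def unique_sorted(values):
    """Return the distinct values, sorted for a stable output order."""
    return sorted(set(values))

keywords = ["visualization", "nlp", "visualization", "transformers"]
authors = ["Arpit Narechania", "Emily Wall", "Arpit Narechania"]

print(unique_sorted(keywords))  # ['nlp', 'transformers', 'visualization']
print(unique_sorted(authors))   # ['Arpit Narechania', 'Emily Wall']
```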
These files contain the scraper code to scrape abstracts, citations, and keywords for different venues.
The scrapers access digital libraries (e.g., IEEE Xplore, ACM Digital Library) and download the abstracts, keywords, and citation counts for different articles. These data are readily available and publicly accessible, i.e., they do not require any subscription, paid or free. We do not own the rights to the scraped data and make them available for research purposes only. Also, before running the scrapers, please review the bot policies of the target websites (e.g., robots.txt) so as not to overwhelm their servers or violate their terms.
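One way to honor those bot policies is to consult robots.txt before each fetch, e.g., with the standard library's urllib.robotparser. The policy below is inline for illustration; in practice you would load the target site's actual robots.txt via set_url() and read():

```python
from urllib.robotparser import RobotFileParser

rp = RobotFileParser()
rp.parse([
    "User-agent: *",
    "Disallow: /private/",
    "Crawl-delay: 5",
])

print(rp.can_fetch("*", "https://example.org/abstract/123"))  # True
print(rp.can_fetch("*", "https://example.org/private/x"))     # False
delay = rp.crawl_delay("*")  # seconds to wait between requests, if specified
```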
vitaLITy was created by Arpit Narechania, Alireza Karduni, Ryan Wesslen, and Emily Wall.
@article{narechania2021vitality,
title={vitaLITy: Promoting Serendipitous Discovery of Academic Literature with Transformers \& Visual Analytics},
author={Narechania, Arpit and Karduni, Alireza and Wesslen, Ryan and Wall, Emily},
journal={IEEE Transactions on Visualization and Computer Graphics},
year={2022},
doi={10.1109/TVCG.2021.3114820},
publisher={IEEE}
}
The software is available under the MIT License.
If you have any questions, feel free to open an issue or contact Arpit Narechania.