diff --git a/.gitignore b/.gitignore
index 4b7498a..6503faf 100644
--- a/.gitignore
+++ b/.gitignore
@@ -171,4 +171,5 @@ model2vec_models
 results/*
 counts/*
 results_old/*
+local/*
 lightning_logs/*
diff --git a/README.md b/README.md
index b2759cc..f0b7f90 100644
--- a/README.md
+++ b/README.md
@@ -278,7 +278,7 @@ As can be seen, Model2Vec models outperform the GloVe and WL256 models on all cl
 
 The figure below shows the relationship between the number of sentences per second and the average classification score. The circle sizes correspond to the number of parameters in the models (larger = more parameters). This plot shows that the Model2Vec models are much faster than the other models, while still being competitive in terms of classification performance with the [all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) model.
 
-| ![Description](assets/images/speed_vs_accuracy.png) |
+| ![Description](assets/images/speed_vs_accuracy_v2.png) |
 |:--:|
 |*Figure: The average accuracy over all classification datasets plotted against sentence per second. The circle size indicates model size.*|
diff --git a/assets/images/speed_vs_accuracy_v2.png b/assets/images/speed_vs_accuracy_v2.png
new file mode 100644
index 0000000..448c0a7
Binary files /dev/null and b/assets/images/speed_vs_accuracy_v2.png differ