Showing 2 changed files with 29 additions and 28 deletions.
@@ -104,33 +104,34 @@ permalink: /docs/lsc/leaderboards/
 | 6 | **MolNet_Ensemble** | Yes | 0.0753 | 0.0797 | polixir.ai | [zouxiaochuan](mailto:[email protected]) (polixir.ai) | [Paper](https://github.com/zouxiaochuan/code_ogblsc2022/blob/main/molnet2022_arxiv.pdf), [Code](https://github.com/zouxiaochuan/code_ogblsc2022) | 32,047,874 | 8 RTX3090 | Nov 1, 2022 |
 | 7 | **Global-ViSNet** | No | 0.0766 | 0.0784 | ViSNet | [Tong Wang](mailto:[email protected]) (Microsoft Research AI4Science) | [Paper](https://github.com/ogb-visnet/Global-ViSNet/blob/master/ViSNet_Tech_Report.pdf), [Code](https://github.com/ogb-visnet/Global-ViSNet) | 78,450,692 | 4 NVIDIA A100 GPUs | Oct 26, 2022 |
 | 8 | **Transformer-M** | No | 0.0782 | 0.0772 | FML Lab@PKU | [Shengjie Luo](mailto:[email protected]) (Peking University) | [Paper](https://arxiv.org/abs/2210.01765), [Code](https://github.com/lsj2408/Transformer-M) | 68,957,249 | 4 NVIDIA Tesla A100 GPUs (40GB) | Oct 5, 2022 |
-| 9 | **GEM-2** | No | 0.0806 | 0.0793 | PaddleHelix | [Donglong He](mailto:[email protected]) (Baidu) | [Paper](https://arxiv.org/abs/2208.05863), [Code](https://github.com/PaddlePaddle/PaddleHelix/tree/dev/apps/pretrained_compound/ChemRL/GEM-2) | 32,086,707 | 16 NVIDIA A100 | Aug 11, 2022 |
-| 10 | **GPTrans-L** | No | 0.0821 | 0.0809 | IMAGINE@NJU | [Zhe Chen](mailto:[email protected]) (Nanjing University) | [Paper](https://arxiv.org/abs/2305.11424), [Code](https://github.com/czczup/GPTrans/tree/main) | 85,995,713 | 8 NVIDIA A100 GPUs | Jun 6, 2023 |
-| 11 | **GPTrans-T** | No | 0.0842 | 0.0833 | IMAGINE@NJU | [Zhe Chen](mailto:[email protected]) (Nanjing University) | [Paper](https://arxiv.org/abs/2305.11424), [Code](https://github.com/czczup/GPTrans/tree/main) | 6,577,697 | 8 NVIDIA A100 GPUs | Jun 14, 2023 |
-| 12 | **Deep graph transformer** | Yes | 0.0843 | 0.0891 | NVIDIA-PCQM4Mv2 | [Jiwei Liu](mailto:[email protected]) (NVIDIA) | [Paper](https://github.com/daxiongshu/PCQM4Mv2_subs/blob/main/tech_report.pdf), [Code](https://github.com/daxiongshu/PCQM4Mv2_subs) | 63,600,000 | 1 NVIDIA V100 GPU 32 GB | Oct 26, 2022 |
-| 13 | **Deep graph transformer** | Yes | 0.0844 | 0.0891 | NVIDIA-PCQM4Mv2 | [Jiwei Liu](mailto:[email protected]) (NVIDIA) | [Paper](https://github.com/daxiongshu/PCQM4Mv2_subs/blob/main/tech_report.pdf), [Code](https://github.com/daxiongshu/PCQM4Mv2_subs) | 63,600,000 | 1 NVIDIA V100 GPU 32 GB | Oct 19, 2022 |
-| 14 | **Deep graph transformer** | Yes | 0.0852 | 0.0891 | NVIDIA-PCQM4Mv2 | [Jiwei Liu](mailto:[email protected]) (NVIDIA) | [Paper](https://github.com/daxiongshu/PCQM4Mv2_subs/blob/main/tech_report.pdf), [Code](https://github.com/daxiongshu/PCQM4Mv2_subs) | 63,600,000 | 1 NVIDIA V100 GPU 32 GB | Oct 11, 2022 |
-| 15 | **GraphGPT(MLM pretrained)** | No | 0.0856 | 0.0847 | Alibaba-DT | [Qifang Zhao](mailto:[email protected]) (Alibaba) | [Paper](https://arxiv.org/abs/2401.00529), [Code](https://github.com/alibaba/graph-gpt) | 453,388,801 | 4 Nvidia L40S | Aug 11, 2024 |
-| 16 | **EGT** | No | 0.0862 | 0.0857 | EFT-AIRC | [Md Shamim Hussain](mailto:[email protected]) (RPI / IBM) | [Paper](https://arxiv.org/abs/2108.03348), [Code](https://github.com/shamim-hussain/egt_pytorch) | 89,326,465 | 8 Tesla V100 (32GB) | Jun 24, 2022 |
-| 16 | **GPS** | No | 0.0862 | 0.0852 | Mila | [Ladislav Rampasek](mailto:[email protected]) (Mila / Universite de Montreal) | [Paper](https://arxiv.org/abs/2205.12454), [Code](https://github.com/rampasek/GraphGPS) | 13,807,345 | 1 NVIDIA A100 (40GB) | Nov 10, 2022 |
-| 17 | **CoAtGIN-base** | No | 0.0866 | 0.0859 | xfcui@sdu | [Xuefeng Cui](mailto:[email protected]) (Shandong University) | [Paper](https://www.biorxiv.org/content/10.1101/2022.08.26.505499v1), [Code](https://github.com/xfcui/CoAtGIN) | 9,938,433 | 8 NVIDIA A40 | Sep 30, 2022 |
-| 18 | **EGT+LSPE+HIERA_clustering** | No | 0.0868 | 0.0863 | rwlspegate2 | [YEOM JE YOON](mailto:[email protected]) (University of Seoul) | [Paper](https://github.com/emforce77/rwlspegate2/blob/c3b83ac0a1f0d1dc67e3f59d928eb092dd797cd4/technical_report.pdf), [Code](https://github.com/emforce77/rwlspegate2.git) | 90,102,913 | NVIDIA A100 80GB HBM2E SXM4 | Feb 16, 2023 |
-| 19 | **EGT** | No | 0.0872 | 0.0869 | EGT-AIRC | [Md Shamim Hussain](mailto:[email protected]) (RPI / IBM) | [Paper](https://arxiv.org/abs/2108.03348), [Code](https://github.com/shamim-hussain/egt_pytorch) | 89,326,465 | 8 Tesla V100 (32GB) | Jan 26, 2022 |
-| 20 | **GRPE-Large** | No | 0.0876 | 0.0867 | WonWoo | [Wonpyo Park](mailto:[email protected]) (SNU / Standigm) | [Paper](https://arxiv.org/abs/2201.12787), [Code](https://github.com/lenscloth/GRPE) | 118,300,000 | 8 A100-SXM4-40GB | Aug 14, 2022 |
-| 21 | **GraphSelfAttention** | No | 0.0898 | 0.0890 | WonWoo | [Wonpyo Park](mailto:[email protected]) (Standigm) | [Paper](https://arxiv.org/abs/2201.12787), [Code](https://github.com/lenscloth/GraphSelfAttention) | 46,199,041 | 4 A100 GPU | Nov 15, 2021 |
-| 22 | **Deep graph transformer** | Yes | 0.0904 | 0.0891 | NVIDIA-PCQM4Mv2 | [Jiwei Liu](mailto:[email protected]) (NVIDIA) | [Paper](https://github.com/daxiongshu/PCQM4Mv2_subs/blob/main/tech_report.pdf), [Code](https://github.com/daxiongshu/PCQM4Mv2_subs) | 63,600,000 | 1 NVIDIA V100 GPU 32 GB | Oct 3, 2022 |
-| 23 | **CoAtGIN-tiny** | No | 0.0908 | 0.0901 | xfcui@sdu | [Xuefeng Cui](mailto:[email protected]) (Shandong University) | [Paper](https://www.biorxiv.org/content/10.1101/2022.08.26.505499v1), [Code](https://github.com/xfcui/CoAtGIN) | 6,395,393 | 1 NVIDIA A40 | Sep 20, 2022 |
-| 24 | **TokenGT (Lap)** | No | 0.0919 | 0.0910 | vl-kaist | [Jinwoo Kim](mailto:[email protected]) (KAIST) | [Paper](https://arxiv.org/abs/2207.02505), [Code](https://github.com/jw9730/tokengt) | 48,492,289 | NVIDIA RTX 3090 x 8 | Aug 8, 2022 |
-| 25 | **CoAtGIN-tiny** | No | 0.0921 | 0.0916 | xfcui@sdu | [Xuefeng Cui](mailto:[email protected]) (Shandong University) | [Paper](https://www.biorxiv.org/content/10.1101/2022.08.26.505499v1), [Code](https://github.com/xfcui/CoAtGIN) | 6,183,425 | 1 NVIDIA A40 | Sep 8, 2022 |
-| 26 | **CoAtGIN-tiny** | No | 0.0935 | 0.0933 | xfcui@sdu | [Xuefeng Cui](mailto:[email protected]) (Shandong University) | [Paper](https://www.biorxiv.org/content/10.1101/2022.08.26.505499v1), [Code](https://github.com/xfcui/CoAtGIN) | 5,460,225 | NVIDIA A40 | Aug 29, 2022 |
-| 27 | **HFAGNN** | No | 0.1010 | 0.1005 | Zhuque Zhejianglab | [Can Xu](mailto:[email protected]) (Zhejiang Lab) | [Paper](https://github.com/Tigerrr07/HFAGNN/blob/master/technical_report.pdf), [Code](https://github.com/Tigerrr07/HFAGNN) | 3,949,959 | 1 NVIDIA TITAN V GPU (12GB memory) | Oct 30, 2022 |
-| 28 | **GIN-virtual** | No | 0.1084 | 0.1083 | OGB-LSC | [Weihua Hu](mailto:[email protected]) (Stanford) | [Paper](https://openreview.net/pdf?id=qkcLxoC52kL), [Code](https://github.com/snap-stanford/ogb/tree/master/examples/lsc/pcqm4m-v2) | 6,656,406 | 1 GeForce RTX 2080 (11GB GPU) | Sep 8, 2021 |
-| 29 | **GCN-virtual** | No | 0.1152 | 0.1153 | OGB-LSC | [Weihua Hu](mailto:[email protected]) (Stanford) | [Paper](https://openreview.net/pdf?id=qkcLxoC52kL), [Code](https://github.com/snap-stanford/ogb/tree/master/examples/lsc/pcqm4m-v2) | 4,850,401 | 1 GeForce RTX 2080 (11GB GPU) | Sep 8, 2021 |
-| 30 | **GIN** | No | 0.1218 | 0.1195 | OGB-LSC | [Weihua Hu](mailto:[email protected]) (Stanford) | [Paper](https://openreview.net/pdf?id=qkcLxoC52kL), [Code](https://github.com/snap-stanford/ogb/tree/master/examples/lsc/pcqm4m-v2) | 3,761,406 | 1 GeForce RTX 2080 (11GB GPU) | Sep 8, 2021 |
-| 31 | **GraphGPS without PE (subset)** | No | 0.1395 | 0.1334 | GraphLARS | [Xu Wang](mailto:[email protected]) (4Paradigm && EE Tsinghua) | [Paper](https://arxiv.org/abs/2205.12454), [Code](https://github.com/rampasek/GraphGPS) | 6,155,089 | 1 RTX3090 (24GB GPU) | Sep 2, 2022 |
-| 32 | **GCN** | No | 0.1398 | 0.1379 | OGB-LSC | [Weihua Hu](mailto:[email protected]) (Stanford) | [Paper](https://openreview.net/pdf?id=qkcLxoC52kL), [Code](https://github.com/snap-stanford/ogb/tree/master/examples/lsc/pcqm4m-v2) | 1,955,401 | 1 GeForce RTX 2080 (11GB GPU) | Sep 8, 2021 |
-| 33 | **MLP-Fingerprint** | No | 0.1760 | 0.1753 | OGB-LSC | [Weihua Hu](mailto:[email protected]) (Stanford) | [Paper](https://openreview.net/pdf?id=qkcLxoC52kL), [Code](https://github.com/snap-stanford/ogb/tree/master/examples/lsc/pcqm4m-v2) | 16,107,201 | 1 GeForce RTX 2080 (11GB GPU) | Sep 8, 2021 |
-| 34 | **MolNet_Ensemble** | Yes | 0.9899 | 0.0847 | polixir.ai | [xiaochuan zou](mailto:[email protected]) (polixir.ai) | [Paper](https://github.com/zouxiaochuan/code_ogblsc2022/blob/main/molnet2022_arxiv.pdf), [Code](https://github.com/zouxiaochuan/code_ogblsc2022) | 32,047,874 | 8 RTX3090 | Oct 24, 2022 |
+| 9 | **GraphGPT(MLM tv10k)** | No | 0.0804 | 0.0840 | Alibaba-DT | [Qifang Zhao](mailto:[email protected]) (Alibaba) | [Paper](https://arxiv.org/abs/2401.00529), [Code](https://github.com/alibaba/graph-gpt) | 453,388,801 | 4 Nvidia L40S | Aug 21, 2024 |
+| 10 | **GEM-2** | No | 0.0806 | 0.0793 | PaddleHelix | [Donglong He](mailto:[email protected]) (Baidu) | [Paper](https://arxiv.org/abs/2208.05863), [Code](https://github.com/PaddlePaddle/PaddleHelix/tree/dev/apps/pretrained_compound/ChemRL/GEM-2) | 32,086,707 | 16 NVIDIA A100 | Aug 11, 2022 |
+| 11 | **GPTrans-L** | No | 0.0821 | 0.0809 | IMAGINE@NJU | [Zhe Chen](mailto:[email protected]) (Nanjing University) | [Paper](https://arxiv.org/abs/2305.11424), [Code](https://github.com/czczup/GPTrans/tree/main) | 85,995,713 | 8 NVIDIA A100 GPUs | Jun 6, 2023 |
+| 12 | **GPTrans-T** | No | 0.0842 | 0.0833 | IMAGINE@NJU | [Zhe Chen](mailto:[email protected]) (Nanjing University) | [Paper](https://arxiv.org/abs/2305.11424), [Code](https://github.com/czczup/GPTrans/tree/main) | 6,577,697 | 8 NVIDIA A100 GPUs | Jun 14, 2023 |
+| 13 | **Deep graph transformer** | Yes | 0.0843 | 0.0891 | NVIDIA-PCQM4Mv2 | [Jiwei Liu](mailto:[email protected]) (NVIDIA) | [Paper](https://github.com/daxiongshu/PCQM4Mv2_subs/blob/main/tech_report.pdf), [Code](https://github.com/daxiongshu/PCQM4Mv2_subs) | 63,600,000 | 1 NVIDIA V100 GPU 32 GB | Oct 26, 2022 |
+| 14 | **Deep graph transformer** | Yes | 0.0844 | 0.0891 | NVIDIA-PCQM4Mv2 | [Jiwei Liu](mailto:[email protected]) (NVIDIA) | [Paper](https://github.com/daxiongshu/PCQM4Mv2_subs/blob/main/tech_report.pdf), [Code](https://github.com/daxiongshu/PCQM4Mv2_subs) | 63,600,000 | 1 NVIDIA V100 GPU 32 GB | Oct 19, 2022 |
+| 15 | **Deep graph transformer** | Yes | 0.0852 | 0.0891 | NVIDIA-PCQM4Mv2 | [Jiwei Liu](mailto:[email protected]) (NVIDIA) | [Paper](https://github.com/daxiongshu/PCQM4Mv2_subs/blob/main/tech_report.pdf), [Code](https://github.com/daxiongshu/PCQM4Mv2_subs) | 63,600,000 | 1 NVIDIA V100 GPU 32 GB | Oct 11, 2022 |
+| 16 | **GraphGPT(MLM pretrained)** | No | 0.0856 | 0.0847 | Alibaba-DT | [Qifang Zhao](mailto:[email protected]) (Alibaba) | [Paper](https://arxiv.org/abs/2401.00529), [Code](https://github.com/alibaba/graph-gpt) | 453,388,801 | 4 Nvidia L40S | Aug 11, 2024 |
+| 17 | **EGT** | No | 0.0862 | 0.0857 | EFT-AIRC | [Md Shamim Hussain](mailto:[email protected]) (RPI / IBM) | [Paper](https://arxiv.org/abs/2108.03348), [Code](https://github.com/shamim-hussain/egt_pytorch) | 89,326,465 | 8 Tesla V100 (32GB) | Jun 24, 2022 |
+| 17 | **GPS** | No | 0.0862 | 0.0852 | Mila | [Ladislav Rampasek](mailto:[email protected]) (Mila / Universite de Montreal) | [Paper](https://arxiv.org/abs/2205.12454), [Code](https://github.com/rampasek/GraphGPS) | 13,807,345 | 1 NVIDIA A100 (40GB) | Nov 10, 2022 |
+| 18 | **CoAtGIN-base** | No | 0.0866 | 0.0859 | xfcui@sdu | [Xuefeng Cui](mailto:[email protected]) (Shandong University) | [Paper](https://www.biorxiv.org/content/10.1101/2022.08.26.505499v1), [Code](https://github.com/xfcui/CoAtGIN) | 9,938,433 | 8 NVIDIA A40 | Sep 30, 2022 |
+| 19 | **EGT+LSPE+HIERA_clustering** | No | 0.0868 | 0.0863 | rwlspegate2 | [YEOM JE YOON](mailto:[email protected]) (University of Seoul) | [Paper](https://github.com/emforce77/rwlspegate2/blob/c3b83ac0a1f0d1dc67e3f59d928eb092dd797cd4/technical_report.pdf), [Code](https://github.com/emforce77/rwlspegate2.git) | 90,102,913 | NVIDIA A100 80GB HBM2E SXM4 | Feb 16, 2023 |
+| 20 | **EGT** | No | 0.0872 | 0.0869 | EGT-AIRC | [Md Shamim Hussain](mailto:[email protected]) (RPI / IBM) | [Paper](https://arxiv.org/abs/2108.03348), [Code](https://github.com/shamim-hussain/egt_pytorch) | 89,326,465 | 8 Tesla V100 (32GB) | Jan 26, 2022 |
+| 21 | **GRPE-Large** | No | 0.0876 | 0.0867 | WonWoo | [Wonpyo Park](mailto:[email protected]) (SNU / Standigm) | [Paper](https://arxiv.org/abs/2201.12787), [Code](https://github.com/lenscloth/GRPE) | 118,300,000 | 8 A100-SXM4-40GB | Aug 14, 2022 |
+| 22 | **GraphSelfAttention** | No | 0.0898 | 0.0890 | WonWoo | [Wonpyo Park](mailto:[email protected]) (Standigm) | [Paper](https://arxiv.org/abs/2201.12787), [Code](https://github.com/lenscloth/GraphSelfAttention) | 46,199,041 | 4 A100 GPU | Nov 15, 2021 |
+| 23 | **Deep graph transformer** | Yes | 0.0904 | 0.0891 | NVIDIA-PCQM4Mv2 | [Jiwei Liu](mailto:[email protected]) (NVIDIA) | [Paper](https://github.com/daxiongshu/PCQM4Mv2_subs/blob/main/tech_report.pdf), [Code](https://github.com/daxiongshu/PCQM4Mv2_subs) | 63,600,000 | 1 NVIDIA V100 GPU 32 GB | Oct 3, 2022 |
+| 24 | **CoAtGIN-tiny** | No | 0.0908 | 0.0901 | xfcui@sdu | [Xuefeng Cui](mailto:[email protected]) (Shandong University) | [Paper](https://www.biorxiv.org/content/10.1101/2022.08.26.505499v1), [Code](https://github.com/xfcui/CoAtGIN) | 6,395,393 | 1 NVIDIA A40 | Sep 20, 2022 |
+| 25 | **TokenGT (Lap)** | No | 0.0919 | 0.0910 | vl-kaist | [Jinwoo Kim](mailto:[email protected]) (KAIST) | [Paper](https://arxiv.org/abs/2207.02505), [Code](https://github.com/jw9730/tokengt) | 48,492,289 | NVIDIA RTX 3090 x 8 | Aug 8, 2022 |
+| 26 | **CoAtGIN-tiny** | No | 0.0921 | 0.0916 | xfcui@sdu | [Xuefeng Cui](mailto:[email protected]) (Shandong University) | [Paper](https://www.biorxiv.org/content/10.1101/2022.08.26.505499v1), [Code](https://github.com/xfcui/CoAtGIN) | 6,183,425 | 1 NVIDIA A40 | Sep 8, 2022 |
+| 27 | **CoAtGIN-tiny** | No | 0.0935 | 0.0933 | xfcui@sdu | [Xuefeng Cui](mailto:[email protected]) (Shandong University) | [Paper](https://www.biorxiv.org/content/10.1101/2022.08.26.505499v1), [Code](https://github.com/xfcui/CoAtGIN) | 5,460,225 | NVIDIA A40 | Aug 29, 2022 |
+| 28 | **HFAGNN** | No | 0.1010 | 0.1005 | Zhuque Zhejianglab | [Can Xu](mailto:[email protected]) (Zhejiang Lab) | [Paper](https://github.com/Tigerrr07/HFAGNN/blob/master/technical_report.pdf), [Code](https://github.com/Tigerrr07/HFAGNN) | 3,949,959 | 1 NVIDIA TITAN V GPU (12GB memory) | Oct 30, 2022 |
+| 29 | **GIN-virtual** | No | 0.1084 | 0.1083 | OGB-LSC | [Weihua Hu](mailto:[email protected]) (Stanford) | [Paper](https://openreview.net/pdf?id=qkcLxoC52kL), [Code](https://github.com/snap-stanford/ogb/tree/master/examples/lsc/pcqm4m-v2) | 6,656,406 | 1 GeForce RTX 2080 (11GB GPU) | Sep 8, 2021 |
+| 30 | **GCN-virtual** | No | 0.1152 | 0.1153 | OGB-LSC | [Weihua Hu](mailto:[email protected]) (Stanford) | [Paper](https://openreview.net/pdf?id=qkcLxoC52kL), [Code](https://github.com/snap-stanford/ogb/tree/master/examples/lsc/pcqm4m-v2) | 4,850,401 | 1 GeForce RTX 2080 (11GB GPU) | Sep 8, 2021 |
+| 31 | **GIN** | No | 0.1218 | 0.1195 | OGB-LSC | [Weihua Hu](mailto:[email protected]) (Stanford) | [Paper](https://openreview.net/pdf?id=qkcLxoC52kL), [Code](https://github.com/snap-stanford/ogb/tree/master/examples/lsc/pcqm4m-v2) | 3,761,406 | 1 GeForce RTX 2080 (11GB GPU) | Sep 8, 2021 |
+| 32 | **GraphGPS without PE (subset)** | No | 0.1395 | 0.1334 | GraphLARS | [Xu Wang](mailto:[email protected]) (4Paradigm && EE Tsinghua) | [Paper](https://arxiv.org/abs/2205.12454), [Code](https://github.com/rampasek/GraphGPS) | 6,155,089 | 1 RTX3090 (24GB GPU) | Sep 2, 2022 |
+| 33 | **GCN** | No | 0.1398 | 0.1379 | OGB-LSC | [Weihua Hu](mailto:[email protected]) (Stanford) | [Paper](https://openreview.net/pdf?id=qkcLxoC52kL), [Code](https://github.com/snap-stanford/ogb/tree/master/examples/lsc/pcqm4m-v2) | 1,955,401 | 1 GeForce RTX 2080 (11GB GPU) | Sep 8, 2021 |
+| 34 | **MLP-Fingerprint** | No | 0.1760 | 0.1753 | OGB-LSC | [Weihua Hu](mailto:[email protected]) (Stanford) | [Paper](https://openreview.net/pdf?id=qkcLxoC52kL), [Code](https://github.com/snap-stanford/ogb/tree/master/examples/lsc/pcqm4m-v2) | 16,107,201 | 1 GeForce RTX 2080 (11GB GPU) | Sep 8, 2021 |
+| 35 | **MolNet_Ensemble** | Yes | 0.9899 | 0.0847 | polixir.ai | [xiaochuan zou](mailto:[email protected]) (polixir.ai) | [Paper](https://github.com/zouxiaochuan/code_ogblsc2022/blob/main/molnet2022_arxiv.pdf), [Code](https://github.com/zouxiaochuan/code_ogblsc2022) | 32,047,874 | 8 RTX3090 | Oct 24, 2022 |
@@ -82,4 +82,4 @@ time,unique_identifier,dataset,method,ensemble,approved,test_performance,val_per
 11/16/2023,65565301854b3,PCQM4Mv2,EGT+Tri. Attn.+RDKit Coords.,No,Yes,0.068258088757189,0.0671,EGT-AIRC,RPI / IBM,Md Shamim Hussain,[email protected],[email protected],1.3.5,https://github.com/shamim-hussain/egt_triangular,https://github.com/shamim-hussain/egt_triangular/blob/master/Report.pdf,203945093.0,"lr: [0.001, 0.002*], num_layers: [12,18,24*], hidden_channels: [512, 768*], source_dropout: [0.1,0.2, 0.3*], path_dropout: [0.1, 0.2*]",32 NVIDIA V100 (32 GB),8 Intel Xeon Gold 6248 CPU (3072 GB Memory),36 hours,1 hour
 11/23/2023,656035b4348b6,PCQM4Mv2,EGT+Tri. Attn. (Pure Neural),No,Yes,0.0698127663234968,0.0686,EGT-AIRC,RPI / IBM,Md Shamim Hussain,[email protected],[email protected],1.3.5,https://github.com/shamim-hussain/egt_triangular,https://github.com/shamim-hussain/egt_triangular/blob/master/Report.pdf,203894787.0,"lr: [0.001, 0.002*], num_layers: [12,18,24*], hidden_channels: [512, 768*], source_dropout: [0.1,0.2, 0.3*], path_dropout: [0.1, 0.2*]",32 NVIDIA V100 (32 GB),8 Intel Xeon Gold 6248 CPU (3072 GB Memory),36 hours,1 hour
 08/11/2024,66b994a9585c7,PCQM4Mv2,GraphGPT(MLM pretrained),No,Yes,0.0856246531063338,0.0847,Alibaba-DT,Alibaba,Qifang Zhao,[email protected],[email protected],1.3.6,https://github.com/alibaba/graph-gpt,https://arxiv.org/abs/2401.00529,453388801.0,"lr: [0.0001*, 0.0003], num_layers: [12,24,48*], hidden_channels: [512,768*], dropout: [0*, 0.1], path-dropout: [0, 0.1, 0.2*], epochs: [16,32*,40]",4 Nvidia L40S,1 Intel(R) Xeon(R) Platinum 8163 CPU @ 2.50GHz,30 hours,30 mins
-2024/08/21,66c6b8fa3a6e2,PCQM4Mv2,GraphGPT(MLM tv10k),No,Yes,0.0803508622332648,0.084,Alibaba-DT,Alibaba,Qifang Zhao,[email protected],[email protected],1.3.6,https://github.com/alibaba/graph-gpt,https://arxiv.org/abs/2401.00529,453388801.0,"lr: [0.0001*, 0.0003], num_layers: [12,24,48*], hidden_channels: [512,768*], dropout: [0*, 0.1], path-dropout: [0, 0.1, 0.2*], epochs: [16,32*,40], mask: [0.8,0,0.2]",4 Nvidia L40S,1 Intel(R) Xeon(R) Platinum 8163 CPU @ 2.50GHz,30 hours,30 mins
+08/21/2024,66c6b8fa3a6e2,PCQM4Mv2,GraphGPT(MLM tv10k),No,Yes,0.0803508622332648,0.084,Alibaba-DT,Alibaba,Qifang Zhao,[email protected],[email protected],1.3.6,https://github.com/alibaba/graph-gpt,https://arxiv.org/abs/2401.00529,453388801.0,"lr: [0.0001*, 0.0003], num_layers: [12,24,48*], hidden_channels: [512,768*], dropout: [0*, 0.1], path-dropout: [0, 0.1, 0.2*], epochs: [16,32*,40], mask: [0.8,0,0.2]",4 Nvidia L40S,1 Intel(R) Xeon(R) Platinum 8163 CPU @ 2.50GHz,30 hours,30 mins
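
The only substantive change in this CSV hunk swaps a `YYYY/MM/DD` date (`2024/08/21`) for the file's `MM/DD/YYYY` convention (`08/21/2024`). A minimal sketch of a normalizer that tolerates both forms when ingesting the `time` column; the `normalize_date` helper is hypothetical, not part of any repository shown here:

```python
from datetime import datetime

def normalize_date(raw: str) -> str:
    """Return a submission date in MM/DD/YYYY, accepting the stray
    YYYY/MM/DD variant seen in one leaderboard row."""
    # Try the canonical format first so ambiguous strings keep their meaning.
    for fmt in ("%m/%d/%Y", "%Y/%m/%d"):
        try:
            return datetime.strptime(raw, fmt).strftime("%m/%d/%Y")
        except ValueError:
            continue
    raise ValueError(f"unrecognized date format: {raw!r}")

print(normalize_date("2024/08/21"))  # → 08/21/2024
print(normalize_date("11/16/2023"))  # → 11/16/2023
```

Trying the canonical `%m/%d/%Y` pattern first means already-correct rows pass through unchanged, and only strings it rejects (a four-digit leading year cannot be a month) fall through to the variant pattern.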