- (06/06/2024) Initial release.
We provide PFMLP models pretrained on ImageNet 2012.
| Model | Parameters | FLOPs | Top-1 Acc. | Download |
| --- | --- | --- | --- | --- |
| PFMLP-N | 19M | 2.4G | 82.0% | model |
| PFMLP-T | 33M | 4.2G | 83.1% | model |
| PFMLP-S | 50M | 6.6G | 83.8% | model |
| PFMLP-B | 71M | 9.4G | 84.1% | model |
- PyTorch 1.7.0+ and torchvision 0.8.1+
- timm
- lmdb
- thop (optional, for FLOPs calculation)
```
pip install timm lmdb thop
```
Download and extract ImageNet train and val images from http://image-net.org/. The directory structure is:
```
path/to/imagenet/
├── train/
│   ├── n01440764/
│   │   ├── n01440764_10026.JPEG
│   │   ├── n01440764_10027.JPEG
│   │   ├── ...
│   ├── ...
├── val/
│   ├── n01440764/
│   │   ├── ILSVRC2012_val_00000293.JPEG
│   │   ├── ILSVRC2012_val_00002138.JPEG
│   │   ├── ...
│   ├── ...
```
To evaluate a pretrained PFMLP-T on the ImageNet validation set with a single GPU, run:
```
python main.py --eval true --model tiny --resume path/to/PFMLP_Tiny.pth --data-path /path/to/imagenet
```
To train PFMLP-T on ImageNet on a single node with 8 GPUs for 300 epochs, run:
```
python -m torch.distributed.launch --nproc_per_node=8 main.py --model tiny --epochs 300 --batch-size 128 --update_freq 4 --use_amp true --data-path /path/to/imagenet --output_dir /path/to/save
```
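Assuming `--update_freq` is a gradient-accumulation factor (as in ConvNeXt-style training scripts; this repo's semantics may differ), the effective global batch size of the command above works out to:

```python
# Effective batch size = GPUs x per-GPU batch x gradient-accumulation steps.
n_gpus = 8           # --nproc_per_node
batch_per_gpu = 128  # --batch-size
update_freq = 4      # assumed gradient-accumulation factor
print(n_gpus * batch_per_gpu * update_freq)  # 4096
```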
PFMLP is released under the MIT License.