Hi, thanks for sharing this great work.
I was trying to use your model following the HF example here, but it predicts poorly, even after trying several model variants on ADE20K training samples.
import torch
from PIL import Image
from transformers import OneFormerProcessor, OneFormerModel
image = Image.open('ADE_train_00002024.jpg')
processor = OneFormerProcessor.from_pretrained("shi-labs/oneformer_ade20k_swin_large")
model = OneFormerModel.from_pretrained("shi-labs/oneformer_ade20k_swin_large")
inputs = processor(image, ["semantic"], return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
# naive decoding: argmax over the query dimension of the raw decoder masks
mask_predictions = outputs.transformer_decoder_mask_predictions
pred = mask_predictions[0].argmax(0)
The results are very different from the Colab demo.
Is there a way to access the Colab pretrained weights through the Hugging Face interface?
Hi @tldrafael, the HF weights are the same as in our colab demo. However, the result does seem strange. Could you share the image with me so I can try it myself? Thanks.
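One likely source of the discrepancy: taking `argmax` over the query dimension of `transformer_decoder_mask_predictions` indexes queries, not semantic classes. Semantic post-processing instead weights each query's mask by its class probabilities (dropping the "no object" class) and takes a per-pixel argmax over classes. A minimal sketch of that combination step on toy tensors (the function name `semantic_map` and the toy shapes are illustrative, not part of the library API):

```python
import torch

def semantic_map(class_logits, mask_logits):
    # class_logits: [num_queries, num_classes + 1]; last entry is the "no object" class
    # mask_logits:  [num_queries, H, W] raw mask predictions
    class_probs = class_logits.softmax(-1)[..., :-1]  # drop the null class
    mask_probs = mask_logits.sigmoid()
    # weight each query's mask by its class probabilities: [num_classes, H, W]
    seg = torch.einsum("qc,qhw->chw", class_probs, mask_probs)
    return seg.argmax(dim=0)  # per-pixel class index map, [H, W]

# toy example: 5 queries, 3 classes (+1 null), 4x4 resolution
torch.manual_seed(0)
cls = torch.randn(5, 4)
msk = torch.randn(5, 4, 4)
pred = semantic_map(cls, msk)
print(pred.shape)  # torch.Size([4, 4])
```

In practice, `processor.post_process_semantic_segmentation(outputs, target_sizes=...)` performs this combination (plus resizing) for you, so the query-level masks never need to be decoded by hand.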