mirror of https://www.modelscope.cn/shiertier/ComfyUI-segformer_b2_clothes.git
synced 2026-04-02 22:32:52 +08:00
Commit: 'upload model'
README.md (144 lines changed)
@@ -1,47 +1,109 @@
Removed: the auto-generated ModelScope placeholder card (translated here from the original Chinese):

---
license: Apache License 2.0

# (commented-out template fields for model-type, domain, language, metrics,
# tags, and tools, with examples such as gpt/phi/llama/chatglm/baichuan,
# nlp/cv/audio/multi-modal, CIDEr/BLEU/ROUGE, pretrained/fine-tuned,
# and vllm/fastchat/llamacpp/AdaSeq)
---
### The contributor of this model has not provided a more detailed introduction. Model files and weights are available on the "Model Files" page.
#### You can download the model with the git clone command below, or through the ModelScope SDK.

SDK download
```bash
# install ModelScope
pip install modelscope
```
```python
# download the model via the SDK
from modelscope import snapshot_download
model_dir = snapshot_download('shiertier/ComfyUI-segformer_b2_clothes')
```
Git download
```
# download the model via git
git clone https://www.modelscope.cn/shiertier/ComfyUI-segformer_b2_clothes.git
```

<p style="color: lightgrey;">If you are a contributor to this model, we invite you to complete the model card promptly, following the <a href="https://modelscope.cn/docs/ModelScope%E6%A8%A1%E5%9E%8B%E6%8E%A5%E5%85%A5%E6%B5%81%E7%A8%8B%E6%A6%82%E8%A7%88" style="color: lightgrey; text-decoration: underline;">model contribution documentation</a>.</p>

Added: the upstream model card:

---
license: mit
tags:
- vision
- image-segmentation
widget:
- src: https://images.unsplash.com/photo-1643310325061-2beef64926a5?ixlib=rb-4.0.3&ixid=MnwxMjA3fDB8MHxzZWFyY2h8Nnx8cmFjb29uc3xlbnwwfHwwfHw%3D&w=1000&q=80
  example_title: Person
- src: https://freerangestock.com/sample/139043/young-man-standing-and-leaning-on-car.jpg
  example_title: Person
datasets:
- mattmdjaga/human_parsing_dataset
---

# Segformer B2 fine-tuned for clothes segmentation

SegFormer model fine-tuned on the [ATR dataset](https://github.com/lemondan/HumanParsing-Dataset) for clothes segmentation; it can also be used for human segmentation. The dataset on Hugging Face is called "mattmdjaga/human_parsing_dataset".

**NEW** - **[Training code](https://github.com/mattmdjaga/segformer_b2_clothes)**. Right now it only contains the pure code with some comments, but soon I'll add a Colab notebook version and a blog post to make it more friendly.

```python
from transformers import SegformerImageProcessor, AutoModelForSemanticSegmentation
from PIL import Image
import requests
import matplotlib.pyplot as plt
import torch.nn as nn

processor = SegformerImageProcessor.from_pretrained("mattmdjaga/segformer_b2_clothes")
model = AutoModelForSemanticSegmentation.from_pretrained("mattmdjaga/segformer_b2_clothes")

url = "https://plus.unsplash.com/premium_photo-1673210886161-bfcc40f54d1f?ixlib=rb-4.0.3&ixid=MnwxMjA3fDB8MHxzZWFyY2h8MXx8cGVyc29uJTIwc3RhbmRpbmd8ZW58MHx8MHx8&w=1000&q=80"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(images=image, return_tensors="pt")

outputs = model(**inputs)
logits = outputs.logits.cpu()

# upsample the low-resolution logits back to the input image size
upsampled_logits = nn.functional.interpolate(
    logits,
    size=image.size[::-1],
    mode="bilinear",
    align_corners=False,
)

pred_seg = upsampled_logits.argmax(dim=1)[0]
plt.imshow(pred_seg)
```

Labels: 0: "Background", 1: "Hat", 2: "Hair", 3: "Sunglasses", 4: "Upper-clothes", 5: "Skirt", 6: "Pants", 7: "Dress", 8: "Belt", 9: "Left-shoe", 10: "Right-shoe", 11: "Face", 12: "Left-leg", 13: "Right-leg", 14: "Left-arm", 15: "Right-arm", 16: "Bag", 17: "Scarf"
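
Since `pred_seg` holds one label index per pixel, any single category can be pulled out as a binary mask. A minimal sketch continuing the snippet above (the label ids come from the list of labels; `pred_seg` and `image` are the variables from the example):

```python
import numpy as np

# binary mask for one class, e.g. Upper-clothes (label 4)
upper_clothes = (pred_seg == 4).numpy()

# black out everything except the selected garment
cutout = np.array(image) * upper_clothes[..., None]
```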

### Evaluation

| Label Index | Label Name    | Category Accuracy | Category IoU |
|:-----------:|:-------------:|:-----------------:|:------------:|
| 0           | Background    | 0.99              | 0.99         |
| 1           | Hat           | 0.73              | 0.68         |
| 2           | Hair          | 0.91              | 0.82         |
| 3           | Sunglasses    | 0.73              | 0.63         |
| 4           | Upper-clothes | 0.87              | 0.78         |
| 5           | Skirt         | 0.76              | 0.65         |
| 6           | Pants         | 0.90              | 0.84         |
| 7           | Dress         | 0.74              | 0.55         |
| 8           | Belt          | 0.35              | 0.30         |
| 9           | Left-shoe     | 0.74              | 0.58         |
| 10          | Right-shoe    | 0.75              | 0.60         |
| 11          | Face          | 0.92              | 0.85         |
| 12          | Left-leg      | 0.90              | 0.82         |
| 13          | Right-leg     | 0.90              | 0.81         |
| 14          | Left-arm      | 0.86              | 0.74         |
| 15          | Right-arm     | 0.82              | 0.73         |
| 16          | Bag           | 0.91              | 0.84         |
| 17          | Scarf         | 0.63              | 0.29         |

Overall Evaluation Metrics:

- Evaluation Loss: 0.15
- Mean Accuracy: 0.80
- Mean IoU: 0.69
### License

The license for this model can be found [here](https://github.com/NVlabs/SegFormer/blob/master/LICENSE).

### BibTeX entry and citation info

```bibtex
@article{DBLP:journals/corr/abs-2105-15203,
  author     = {Enze Xie and
                Wenhai Wang and
                Zhiding Yu and
                Anima Anandkumar and
                Jose M. Alvarez and
                Ping Luo},
  title      = {SegFormer: Simple and Efficient Design for Semantic Segmentation with
                Transformers},
  journal    = {CoRR},
  volume     = {abs/2105.15203},
  year       = {2021},
  url        = {https://arxiv.org/abs/2105.15203},
  eprinttype = {arXiv},
  eprint     = {2105.15203},
  timestamp  = {Wed, 02 Jun 2021 11:46:42 +0200},
  biburl     = {https://dblp.org/rec/journals/corr/abs-2105-15203.bib},
  bibsource  = {dblp computer science bibliography, https://dblp.org}
}
```
config.json (new file, 110 lines)
@@ -0,0 +1,110 @@
{
  "_name_or_path": "nvidia/mit-b2",
  "architectures": ["SegformerForSemanticSegmentation"],
  "attention_probs_dropout_prob": 0.0,
  "classifier_dropout_prob": 0.1,
  "decoder_hidden_size": 768,
  "depths": [3, 4, 6, 3],
  "downsampling_rates": [1, 4, 8, 16],
  "drop_path_rate": 0.1,
  "hidden_act": "gelu",
  "hidden_dropout_prob": 0.0,
  "hidden_sizes": [64, 128, 320, 512],
  "id2label": {
    "0": "Background", "1": "Hat", "2": "Hair", "3": "Sunglasses",
    "4": "Upper-clothes", "5": "Skirt", "6": "Pants", "7": "Dress",
    "8": "Belt", "9": "Left-shoe", "10": "Right-shoe", "11": "Face",
    "12": "Left-leg", "13": "Right-leg", "14": "Left-arm", "15": "Right-arm",
    "16": "Bag", "17": "Scarf"
  },
  "image_size": 224,
  "initializer_range": 0.02,
  "label2id": {
    "Background": 0, "Bag": 16, "Belt": 8, "Dress": 7, "Face": 11,
    "Hair": 2, "Hat": 1, "Left-arm": 14, "Left-leg": 12, "Left-shoe": 9,
    "Pants": 6, "Right-arm": 15, "Right-leg": 13, "Right-shoe": 10,
    "Scarf": 17, "Skirt": 5, "Sunglasses": 3, "Upper-clothes": 4
  },
  "layer_norm_eps": 1e-06,
  "mlp_ratios": [4, 4, 4, 4],
  "model_type": "segformer",
  "num_attention_heads": [1, 2, 5, 8],
  "num_channels": 3,
  "num_encoder_blocks": 4,
  "patch_sizes": [7, 3, 3, 3],
  "reshape_last_stage": true,
  "semantic_loss_ignore_index": 255,
  "sr_ratios": [8, 4, 2, 1],
  "strides": [4, 2, 2, 2],
  "torch_dtype": "float32",
  "transformers_version": "4.24.0"
}
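
The id2label/label2id maps are what give the classifier head its 18 output channels. A quick way to sanity-check them after downloading, as a sketch:

```python
from transformers import SegformerConfig

# "." assumes the repo has been cloned locally; a hub repo id works too
config = SegformerConfig.from_pretrained(".")
print(config.num_labels)       # 18
print(config.id2label[4])      # Upper-clothes
print(config.label2id["Bag"])  # 16
```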
configuration.json (new file, 1 line)
@@ -0,0 +1 @@
{"framework": "pytorch", "task": "other"}
handler.py (new file, 39 lines)
@@ -0,0 +1,39 @@
from typing import Dict, List, Any
from PIL import Image
from io import BytesIO
from transformers import AutoModelForSemanticSegmentation, AutoFeatureExtractor
import base64
import torch
from torch import nn


class EndpointHandler():
    def __init__(self, path="."):
        self.device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
        self.model = AutoModelForSemanticSegmentation.from_pretrained(path).to(self.device).eval()
        self.feature_extractor = AutoFeatureExtractor.from_pretrained(path)

    def __call__(self, data: Dict[str, Any]) -> List[List[int]]:
        """
        data args:
            inputs["image"] (:obj:`str`): base64-encoded image
        Return:
            A nested :obj:`list` of per-pixel label indices, height x width.
        """
        inputs = data.pop("inputs", data)

        # decode base64 image to PIL
        image = Image.open(BytesIO(base64.b64decode(inputs['image'])))

        # preprocess image
        encoding = self.feature_extractor(images=image, return_tensors="pt")
        pixel_values = encoding["pixel_values"].to(self.device)
        with torch.no_grad():
            outputs = self.model(pixel_values=pixel_values)
        logits = outputs.logits

        # upsample logits to the original image size, then take per-pixel argmax
        upsampled_logits = nn.functional.interpolate(
            logits,
            size=image.size[::-1],
            mode="bilinear",
            align_corners=False,
        )
        pred_seg = upsampled_logits.argmax(dim=1)[0]
        return pred_seg.tolist()
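
For local testing, the handler can be exercised directly. A hedged sketch ("person.jpg" is a hypothetical test image):

```python
import base64

from handler import EndpointHandler

handler = EndpointHandler(path=".")

# the endpoint expects the image as a base64 string under inputs["image"]
with open("person.jpg", "rb") as f:
    payload = {"inputs": {"image": base64.b64encode(f.read()).decode("utf-8")}}

seg = handler(payload)        # nested list of per-pixel label ids
print(len(seg), len(seg[0]))  # height, width of the input image
```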
model.safetensors (new file, Git LFS pointer)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:8f86fd90c567afd4370b3cc3a7e81ed767a632b2832a738331af660acc0c4c68
size 109493236
onnx/config.json (new file, 109 lines)
@@ -0,0 +1,109 @@
{
  "_name_or_path": "mattmdjaga/segformer_b2_clothes",
  "architectures": ["SegformerForSemanticSegmentation"],
  "attention_probs_dropout_prob": 0.0,
  "classifier_dropout_prob": 0.1,
  "decoder_hidden_size": 768,
  "depths": [3, 4, 6, 3],
  "downsampling_rates": [1, 4, 8, 16],
  "drop_path_rate": 0.1,
  "hidden_act": "gelu",
  "hidden_dropout_prob": 0.0,
  "hidden_sizes": [64, 128, 320, 512],
  "id2label": {
    "0": "Background", "1": "Hat", "2": "Hair", "3": "Sunglasses",
    "4": "Upper-clothes", "5": "Skirt", "6": "Pants", "7": "Dress",
    "8": "Belt", "9": "Left-shoe", "10": "Right-shoe", "11": "Face",
    "12": "Left-leg", "13": "Right-leg", "14": "Left-arm", "15": "Right-arm",
    "16": "Bag", "17": "Scarf"
  },
  "image_size": 224,
  "initializer_range": 0.02,
  "label2id": {
    "Background": 0, "Bag": 16, "Belt": 8, "Dress": 7, "Face": 11,
    "Hair": 2, "Hat": 1, "Left-arm": 14, "Left-leg": 12, "Left-shoe": 9,
    "Pants": 6, "Right-arm": 15, "Right-leg": 13, "Right-shoe": 10,
    "Scarf": 17, "Skirt": 5, "Sunglasses": 3, "Upper-clothes": 4
  },
  "layer_norm_eps": 1e-06,
  "mlp_ratios": [4, 4, 4, 4],
  "model_type": "segformer",
  "num_attention_heads": [1, 2, 5, 8],
  "num_channels": 3,
  "num_encoder_blocks": 4,
  "patch_sizes": [7, 3, 3, 3],
  "reshape_last_stage": true,
  "semantic_loss_ignore_index": 255,
  "sr_ratios": [8, 4, 2, 1],
  "strides": [4, 2, 2, 2],
  "transformers_version": "4.34.0"
}
onnx/model.onnx (new file, Git LFS pointer)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:a93a8dac171b5c1fcc53632a8bfc180bfd9759ea69a3e207451bb07f76add54f
size 110039290
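
The ONNX export can be run without PyTorch through ONNX Runtime. A minimal sketch, assuming the preprocessing described by onnx/preprocessor_config.json below (bilinear resize to 512x512, rescale by 1/255, normalize with ImageNet mean/std); the input name is looked up rather than assumed, and the output shape comment reflects SegFormer's 1/4-resolution logits:

```python
import numpy as np
import onnxruntime as ort
from PIL import Image

session = ort.InferenceSession("onnx/model.onnx")
input_name = session.get_inputs()[0].name

# preprocessing mirroring onnx/preprocessor_config.json
image = Image.open("person.jpg").convert("RGB")  # hypothetical test image
resized = image.resize((512, 512), Image.BILINEAR)
x = np.asarray(resized, dtype=np.float32) / 255.0
x = (x - [0.485, 0.456, 0.406]) / [0.229, 0.224, 0.225]
x = x.transpose(2, 0, 1)[None].astype(np.float32)  # HWC -> NCHW

logits = session.run(None, {input_name: x})[0]  # (1, 18, 128, 128)
pred = logits.argmax(axis=1)[0]                 # per-pixel label ids
```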
onnx/preprocessor_config.json (new file, 24 lines)
@@ -0,0 +1,24 @@
{
  "do_normalize": true,
  "do_reduce_labels": false,
  "do_rescale": true,
  "do_resize": true,
  "feature_extractor_type": "SegformerFeatureExtractor",
  "image_mean": [0.485, 0.456, 0.406],
  "image_processor_type": "SegformerFeatureExtractor",
  "image_std": [0.229, 0.224, 0.225],
  "resample": 2,
  "rescale_factor": 0.00392156862745098,
  "size": {"height": 512, "width": 512}
}
optimizer.pt (new file, Git LFS pointer)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:4f642f5c29cb7c9ac0ff242ccf94220c88913f4a65db4727b2530a987ce14d9a
size 219104837
preprocessor_config.json (new file, 18 lines)
@@ -0,0 +1,18 @@
{
  "do_normalize": true,
  "do_resize": true,
  "feature_extractor_type": "SegformerFeatureExtractor",
  "image_mean": [0.485, 0.456, 0.406],
  "image_std": [0.229, 0.224, 0.225],
  "reduce_labels": false,
  "resample": 2,
  "size": 512
}
pytorch_model.bin (new file, Git LFS pointer)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:934543143c97acf3197b030bb0ba046f6c713757467a7dcf47f27ce8c0d6264d
size 109579005
rng_state.pth (new file, Git LFS pointer)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:a7c38376dfee2c075efd2b37186139541f47970794c545ba17f510796313aaa8
size 14575
scheduler.pt (new file, Git LFS pointer)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:7a9a297dec0fe2336eab64ac3bbd47e4936655c43239740a40cfe5f4623a0657
size 627
trainer_state.json (new file, 12226 lines)
File diff suppressed because it is too large
training_args.bin (new file, Git LFS pointer)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:210f58c34439201a03f7a2e923b10e2a9b03a8943740f452ae4e8f57ebcfc186
size 3323