Avg loss decreases and increases when training on my dataset

abdou31
(@abdou31)
New Member
Joined: 1 year ago
Posts: 1
03/05/2019 4:47 am  

Hello,

I have set up a project that should detect the iris region (in the eye) in real time using deep learning. I have cloned the yolo_segmentation project from GitHub: https://github.com/ArtyZe/yolo_segmentation

I compiled the project using make -j4, and I am now trying to train on my own dataset with it.

System specifications:

Ubuntu 19.04
CUDA 10.1
Nvidia GeForce 840M

My dataset is organized as follows:

/darknet-yolo-segmentation
 |-->images
 |    |-->C12...jpg   # JPG image
 |    ...
 |    |-->C12...bmp   # BMP mask
 |-->data
 |    |-->test.list   # contains the image paths for testing
 |    |-->train.list  # contains the image paths for training
 |    |-->obj.data    # contains the paths of train, test and backup
 |    |-->obj.names   # contains the classes (I have one class, "iris")
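
In case it helps, obj.data follows the usual darknet layout. A minimal sketch of mine looks like this (paths are illustrative, not my exact ones):

 classes = 1
 train = data/train.list
 valid = data/test.list
 names = data/obj.names
 backup = backup/

train.list and test.list simply hold one image path per line (e.g. images/C12...jpg), and each .jpg has its .bmp mask next to it in images.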


The command that I execute:
./darknet segmenter train data/obj.data segment.cfg segment.backup
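
In case it matters, segment.cfg opens with a [net] block in the standard darknet format. Mine is close to the usual darknet defaults, roughly like this (values are illustrative, not necessarily what the repo ships):

 [net]
 batch=64
 subdivisions=16
 width=416
 height=416
 channels=3
 momentum=0.9
 decay=0.0005
 # peak learning rate, reached after burn_in iterations
 learning_rate=0.001
 # rate ramps up from ~0 over the first 1000 iterations
 burn_in=1000
 max_batches=500200
 policy=steps
 steps=400000,450000
 scales=.1,.1

As far as I understand, learning_rate and burn_in are what drive the rate column in the training output.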


I also have the two files that you sent, segment12.backup and segment12.weights. When I train on my own dataset I get illogical values of avg.

Sometimes avg decreases and then increases again (it should keep decreasing), while rate should increase after a number of iterations. Sometimes I even get negative values for avg, which is illogical.
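
For context, the values I mean come from darknet's per-iteration status line, which in standard darknet looks roughly like this (numbers invented, and this fork may format it slightly differently):

 9: 142.551315, 156.781326 avg, 0.000010 rate, 3.2 seconds, 576 images

The second number is the running average loss and the third is the current learning rate; with the default policy the rate climbs from ~0 toward learning_rate during burn_in, which is why I expect rate to increase over the first iterations.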

How can I solve this issue?

Thanks.


Screenshot of the training output: https://i.imgur.com/sQTukMF.png


