Hi,
So far I am getting good results on image segmentation with CNTK, and I have tried different loss functions to tweak the result.
Using fixed class weights is simple enough (unless I want to change the weights during training, which I don't see how to do). My output is a per-class probability map of shape [numclasses, height, width].
For example, a Dice coefficient weighted per class, where weight is a NumPy array of shape [numclasses]:
    def wdice_coefficient(x, y, weight, smooth=1.0):
        intersection = C.reduce_sum(x * y, axis=(1, 2))
        weighted = ((2.0 * intersection + smooth) / (C.reduce_sum(x, axis=(1, 2)) + C.reduce_sum(y, axis=(1, 2)) + smooth)) * weight
        return C.reduce_sum(weighted)
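As a sanity check of what that expression computes, here is the same weighted Dice coefficient in plain NumPy (shapes as above: x and y are [numclasses, h, w], weight is [numclasses]). A perfect prediction scores exactly sum(weight):

```python
import numpy as np

def wdice_coefficient_np(x, y, weight, smooth=1.0):
    # Same formula as the CNTK version: per-class Dice, scaled by weight,
    # then summed over classes.
    intersection = (x * y).sum(axis=(1, 2))
    denom = x.sum(axis=(1, 2)) + y.sum(axis=(1, 2)) + smooth
    return (((2.0 * intersection + smooth) / denom) * weight).sum()

# Perfect prediction: per-class Dice is 1, so the result is weight.sum().
y = np.zeros((2, 2, 2))
y[0] = 1.0            # class 0 everywhere, class 1 nowhere
x = y.copy()
print(wdice_coefficient_np(x, y, np.array([2.0, 3.0])))  # 5.0
```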
But I want to use a weight map per pixel. The only way I can make that work is to compute the map inside the loss function from the label map; it would be better to precompute the weight maps and pass the weight image as an input with each training batch.
    def w_map_logistic(x, y, weight):  # weight is still a [numclasses] array
        w_map = y * weight + 1
        w_map = w_map / C.reduce_mean(w_map, axis=(1, 2))
        perclasslogistic = C.reduce_sum((y * C.log(x) + (1 - y) * C.log(1 - x)) * w_map, axis=(1, 2))
        return -C.reduce_sum(perclasslogistic)
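The w_map construction inside that loss can be checked in plain NumPy (y is a [numclasses, h, w] one-hot label map, weight is [numclasses]; the `[:, None, None]` reshape stands in for CNTK's broadcasting, and `keepdims=True` mirrors `reduce_mean` keeping the reduced axes):

```python
import numpy as np

def build_w_map(y, weight):
    # Foreground pixels of class c get weight[c] + 1, background pixels get 1,
    # then each class plane is normalized to mean 1.
    w_map = y * weight[:, None, None] + 1
    return w_map / w_map.mean(axis=(1, 2), keepdims=True)

y = np.zeros((2, 2, 2))
y[0, 0, 0] = 1.0              # a single class-0 pixel
y[1] = 1.0 - y[0]             # class 1 is the complement
w_map = build_w_map(y, np.array([3.0, 1.0]))
print(w_map.mean(axis=(1, 2)))  # each class plane averages to ~1
```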
I want it to be like this instead:
    def w_map_logistic(x, y, w_map):  # w_map is here a [numclasses, h, w] array
        perclasslogistic = C.reduce_sum((y * C.log(x) + (1 - y) * C.log(1 - x)) * w_map, axis=(1, 2))
        return -C.reduce_sum(perclasslogistic)
where w_map is supplied fresh with each call to train_minibatch:
    results = []
    for i in range(0, int(numsamples / minibatch_size)):
        data_x, data_y = subslice_minibatch(trimages, trlabels, i, minibatch_size)
        data_x, data_y = augment(data_x.copy(), data_y.copy(), AugmentPercent)
        # assign new weight maps here somehow
        trainer.train_minibatch({x: data_x, y: data_y})
        results.append(trainer.previous_minibatch_loss_average)
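For reference, the desired per-pixel loss itself is unproblematic; here is a plain-NumPy version (same shapes: x, y, and w_map are all [numclasses, h, w]) showing that a precomputed map just multiplies element-wise before the reductions:

```python
import numpy as np

def w_map_logistic_np(x, y, w_map):
    # Per-pixel weighted binary cross-entropy, summed over all classes.
    perclass = ((y * np.log(x) + (1 - y) * np.log(1 - x)) * w_map).sum(axis=(1, 2))
    return -perclass.sum()

x = np.full((1, 2, 2), 0.5)               # maximally uncertain prediction
y = np.array([[[1.0, 0.0], [0.0, 1.0]]])
loss = w_map_logistic_np(x, y, np.ones_like(x))
print(loss)                               # 4 * log(2) ~ 2.7726
```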
I also tried the built-in function weighted_binary_cross_entropy, which accepts a weight map of shape [1, h, w] or [numclasses, h, w] when you define the loss function, but it gives a dynamic-axis error during training.
I assume this is because the batch-specific maps are not supplied before calling train_minibatch, but I cannot find an example of how to do that.
I tried making the weight map a named C.constant (instead of a NumPy array) and assigning a new value to that constant, and different variations of unpack_batch and reconcile_dynamic_axes (so I get [batchsize, numclasses, h, w] arrays), but I just can't make it work: I get different variations of dynamic-axis and matrix-dimension errors.
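For what it's worth, the direction I would expect to work is to declare the weight map as a third input_variable (alongside x and y) and feed a per-sample batch of maps in the same dictionary passed to train_minibatch. This is an untested sketch assuming the CNTK 2.x API; compute_weight_maps is a hypothetical helper returning a [batchsize, numclasses, h, w] array:

```python
# Untested sketch, CNTK 2.x API assumed.
wm = C.input_variable((numclasses, h, w), name='w_map')  # third model input
loss = w_map_logistic(model_output, y, wm)

for i in range(0, int(numsamples / minibatch_size)):
    data_x, data_y = subslice_minibatch(trimages, trlabels, i, minibatch_size)
    data_w = compute_weight_maps(data_y)  # hypothetical helper
    trainer.train_minibatch({x: data_x, y: data_y, wm: data_w})
```

Since wm then carries the same dynamic batch axis as x and y, this would avoid the dynamic-axis mismatch that a C.constant or unpacked batch produces.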