My question is about the U-Net implementation on the Wolfram Neural Net Repository. The construction notebook on that page (link: http://www.wolframcloud.com/files/1737200a-b043-413c-ad37-477e208472ad?contentDisposition=attachment) contains all the functions needed to construct the net, but it does not include the procedure for training it.
I am trying to implement a simple training procedure so that I can first train the net myself on the same dataset it was originally trained on (https://www.dropbox.com/sh/8dcqxlj94fyyop0/AADib7XPcVkJ1PHddD2Nm9Moa?dl=0). After that, I would like to train it on a different dataset.
Please download the construction notebook before proceeding. The only code I have added to it is shown below:
(* Load the images, resize them, and augment them to produce the training
   dataset; the background is labelled 1 and the foreground cells 2. *)
fnamesimages = Import["C:\\Users\\aliha\\Downloads\\dataset\\images\\"];
ordering = Ordering@Flatten@StringCases[fnamesimages, (p : DigitCharacter ..) ~~ ".tif" :> FromDigits@p];
fnamesimages = fnamesimages[[ordering]];
images = Import["C:\\Users\\aliha\\Downloads\\dataset\\images\\" <> #] & /@ fnamesimages;
images = ImageResize[#, {388, 388}] & /@ images;
masks = Import["C:\\Users\\aliha\\Downloads\\dataset\\segmentation\\" <> #] & /@ fnamesimages;
allmasks = Flatten@Table[ImageRotate[j, i], {j, masks}, {i, {0, Pi/2, Pi, 3/2 Pi}}];
allmasks = Join[allmasks, ImageReflect /@ allmasks];
maskres = ImageResize[#, {388, 388}] & /@ allmasks;
m = ArrayComponents[ImageData@#, 2, {0. -> 1, n_ /; n != 0. -> 2}] & /@ maskres;
allimages = Flatten@Table[ImageRotate[j, i], {j, images}, {i, {0, Pi/2, Pi, 3/2 Pi}}];
allimages = Join[allimages, ImageReflect /@ allimages];
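Before training, it may be worth sanity-checking that the encoded masks still contain both classes and that the augmented images and masks stay aligned; a silent label problem at this stage would push the net toward predicting only background. This is a hedged check using only the variables defined above:

```mathematica
(* Hypothetical sanity check: the first encoded mask should contain both
   label 1 (background) and label 2 (cells), and the augmented image and
   mask lists should have equal length. *)
Counts[Flatten[m[[1]]]]        (* expect keys 1 and 2 in the association *)
Length[allimages] == Length[m] (* expect True *)
```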
(* Using a small subset of the images and segmentation masks because of a
   GPU memory crash with the full set. *)
trained = NetTrain[unet, allimages[[1 ;; 50]] -> m[[1 ;; 50]], All, BatchSize -> 5, MaxTrainingRounds -> 1, TargetDevice -> "GPU"];
trainedNet = trained["TrainedNet"];
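To narrow down whether the problem lies in training or in the evaluation wrapper, one possible check (a sketch, assuming `trainedNet` and `allimages` from above) is to run the freshly trained net on one of its own training images and count the predicted class labels:

```mathematica
(* Hypothetical check: decode the net output into class labels for one
   training image; if class 2 never appears even here, the net has already
   collapsed to predicting background during training. *)
pred = NetReplacePart[trainedNet,
    "Output" -> NetDecoder[{"Class", Range[2], "InputDepth" -> 3}]][allimages[[1]]];
Counts[Flatten[pred]]
```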

In addition, I am using the code from the example notebook (available on the same page) to evaluate the trained net on a test image.
Clear@netevaluate;
netevaluate[img_, device_ : "CPU"] :=
 Block[{net = trainedNet, dims = ImageDimensions[img], pads, mask},
  pads = Map[{Floor[#], Ceiling[#]} &, Mod[4 - dims, 16]/2];
  mask = NetReplacePart[net,
     {"Input" ->
       NetEncoder[{"Image", Ceiling[dims - 4, 16] + 188,
         ColorSpace -> "Grayscale"}],
      "Output" ->
       NetDecoder[{"Class", Range[2], "InputDepth" -> 3}]}][
    ImagePad[ColorConvert[img, "Grayscale"], pads + 92,
     Padding -> "Reversed"],
    TargetDevice -> device
    ];
  Take[mask, {1, -1} Reverse[pads[[2]] + 1], {1, -1} (pads[[1]] + 1)]
  ]
We can now load the test image and apply the net:

testimg = Import["C:\\Users\\aliha\\Downloads\\dataset\\test image\\t099.tif"];
netevaluate[testimg] // Colorize

Unfortunately, I do not get any segmentation back; the output is just background. Could someone kindly point out where the issue might be? Thanks!