
Architecture of ResNet34-UNet model

UNet architecture for semantic segmentation with ResNet34 as the encoder (feature extractor). ResNet34 forms the contracting path, and the corresponding symmetric expanding path predicts the dense segmentation output.
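
A minimal sketch of how such a model could be wired up in PyTorch/torchvision (the framework, class names and the single-class output are assumptions for illustration, not details given in the text):

import torch
import torch.nn as nn
from torchvision.models import resnet34

class DecoderBlock(nn.Module):
    # Upsample, concatenate the matching encoder feature map, then convolve.
    def __init__(self, in_ch, skip_ch, out_ch):
        super().__init__()
        self.up = nn.ConvTranspose2d(in_ch, out_ch, kernel_size=2, stride=2)
        self.conv = nn.Sequential(
            nn.Conv2d(out_ch + skip_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        )
    def forward(self, x, skip):
        x = self.up(x)
        x = torch.cat([x, skip], dim=1)
        return self.conv(x)

class ResNet34UNet(nn.Module):
    # Expects input sizes divisible by 32.
    def __init__(self, num_classes=1):
        super().__init__()
        base = resnet34(weights=None)  # ImageNet weights can be loaded here if desired
        # Contracting path: reuse the ResNet34 stages as encoder blocks.
        self.stem = nn.Sequential(base.conv1, base.bn1, base.relu)  # 64 ch, 1/2 res
        self.pool = base.maxpool                                    #         1/4 res
        self.enc1 = base.layer1                                     # 64 ch,  1/4 res
        self.enc2 = base.layer2                                     # 128 ch, 1/8 res
        self.enc3 = base.layer3                                     # 256 ch, 1/16 res
        self.enc4 = base.layer4                                     # 512 ch, 1/32 res
        # Expanding path mirrors the encoder and fuses the skip connections.
        self.dec4 = DecoderBlock(512, 256, 256)
        self.dec3 = DecoderBlock(256, 128, 128)
        self.dec2 = DecoderBlock(128, 64, 64)
        self.dec1 = DecoderBlock(64, 64, 64)
        self.head = nn.Sequential(
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            nn.Conv2d(64, num_classes, kernel_size=1),              # dense per-pixel output
        )
    def forward(self, x):
        s0 = self.stem(x)
        e1 = self.enc1(self.pool(s0))
        e2 = self.enc2(e1)
        e3 = self.enc3(e2)
        e4 = self.enc4(e3)
        d = self.dec4(e4, e3)
        d = self.dec3(d, e2)
        d = self.dec2(d, e1)
        d = self.dec1(d, s0)
        return self.head(d)

A forward pass on a 256x256 image, for example, returns a segmentation map of the same spatial size: ResNet34UNet()(torch.randn(1, 3, 256, 256)) has shape (1, 1, 256, 256).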

Architecture of VGG16-UNet model

UNet architecture for semantic segmentation with VGG16 as the encoder (feature extractor). VGG16 forms the contracting path, and the corresponding symmetric expanding path predicts the dense segmentation output.
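
Since only the encoder changes, the VGG16 variant can be built in the same way as above by swapping in VGG16's convolutional stages. As a shorter sketch, assuming the third-party segmentation_models_pytorch package (an assumption; the text does not name a library), a UNet decoder can be paired with a VGG16 encoder in a few lines:

import torch
import segmentation_models_pytorch as smp

model = smp.Unet(
    encoder_name="vgg16",        # VGG16 as the contracting path / feature extractor
    encoder_weights="imagenet",  # pretrained encoder weights (illustrative choice)
    in_channels=3,
    classes=1,                   # dense per-pixel segmentation output
)
mask = model(torch.randn(1, 3, 256, 256))  # -> (1, 1, 256, 256)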

Architecture of ResNet34-FCN model

In this model, ResNet34 performs feature extraction while the FCN head remains unchanged. A convenient property of the ResNet architecture is exploited: just as in VGG, each time the number of filters doubles, the feature map size is halved. This makes ResNet structurally similar to VGG while supporting deeper networks, mitigating the vanishing gradient problem, and being faster. The fully connected layer at the output of ResNet34 is not used; it is instead converted to a fully convolutional layer by means of a 1×1 convolution.
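
A minimal FCN-32s-style sketch in PyTorch/torchvision (an illustration under assumed names and a single output class, not the exact model described): the average-pool and fully connected layer of ResNet34 are dropped, a 1×1 convolution produces per-class score maps, and the coarse scores are upsampled back to the input resolution.

import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import resnet34

class ResNet34FCN(nn.Module):
    def __init__(self, num_classes=1):
        super().__init__()
        base = resnet34(weights=None)  # ImageNet weights can be loaded here if desired
        # Keep everything up to the last residual stage (512 ch at 1/32 resolution).
        self.backbone = nn.Sequential(*list(base.children())[:-2])
        # 1x1 convolution replaces the fully connected classifier.
        self.classifier = nn.Conv2d(512, num_classes, kernel_size=1)
    def forward(self, x):
        h, w = x.shape[-2:]
        features = self.backbone(x)          # (N, 512, H/32, W/32)
        scores = self.classifier(features)   # (N, num_classes, H/32, W/32)
        # Upsample the coarse scores to the input size for dense prediction.
        return F.interpolate(scores, size=(h, w), mode="bilinear", align_corners=False)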

Architecture of VGG16-FCN model

In this model, VGG16 performs feature extraction and thus acts as the encoder. The fully connected layers of VGG16 are not used; they are instead converted to a fully convolutional layer by means of a 1×1 convolution.
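
The corresponding sketch for VGG16, under the same assumptions: the convolutional feature extractor is kept, the fully connected classifier is dropped, and a 1×1 convolution followed by upsampling yields the dense output.

import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import vgg16

class VGG16FCN(nn.Module):
    def __init__(self, num_classes=1):
        super().__init__()
        # vgg16().features ends after the fifth pooling stage: 512 ch at 1/32 resolution.
        self.backbone = vgg16(weights=None).features  # ImageNet weights can be loaded if desired
        # 1x1 convolution in place of the fully connected classifier.
        self.classifier = nn.Conv2d(512, num_classes, kernel_size=1)
    def forward(self, x):
        h, w = x.shape[-2:]
        scores = self.classifier(self.backbone(x))   # (N, num_classes, H/32, W/32)
        return F.interpolate(scores, size=(h, w), mode="bilinear", align_corners=False)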
