Introduction

Earth Observation aims to collect data regarding the Earth’s systems at several spatio-temporal resolutions. These data allow scientists to understand Earth’s processes, such as greenhouse gas emissions and land cover change. Segmentation is among the most used unsupervised processing methods for extracting information from satellite imagery (Hossain and Chen 2019). Segmentation is the process by which objects are extracted using image features. This process consists of delineating groups of adjacent pixels with similar characteristics such as intensity, color, and texture. Numerous segmentation algorithms are available in the Remote Sensing scientific literature (see e.g. Kotaridis and Lazaridou 2021).

Assessing segmentation results is difficult due to under- and over-segmentation errors. Undersegmentation occurs when the segmentation algorithm fails to separate a contiguous pixel group, while oversegmentation is the opposite: the segmentation algorithm unnecessarily splits a pixel group (H. Costa, Foody, and Boyd 2018). Both types of error can be measured with metrics that target segments’ characteristics such as area, shape, and position. One way to assess these errors is to use supervised quality metrics (H. Costa, Foody, and Boyd 2018). Supervised metrics compare segments to reference data, measuring their similarity or discrepancy in terms of under- and over-segmentation (Clinton et al. 2010).
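As a minimal illustration of these two error types, the sketch below contrasts one reference object with one segment. It uses axis-aligned rectangles (a simplifying assumption so intersection areas can be computed in base R, without a geometry library) and the OS/US formulas of Clinton et al. (2010):

```r
# A rectangle is c(xmin, ymin, xmax, ymax); rectangles stand in for polygons
rect_area <- function(r) (r[3] - r[1]) * (r[4] - r[2])

# Intersection area of two axis-aligned rectangles (0 if disjoint)
intersect_area <- function(a, b) {
  w <- max(0, min(a[3], b[3]) - max(a[1], b[1]))
  h <- max(0, min(a[4], b[4]) - max(a[2], b[2]))
  w * h
}

x <- c(0, 0, 4, 4)  # reference polygon, area 16
y <- c(1, 1, 6, 3)  # segment, area 10; overlap is 3 x 2 = 6

os <- 1 - intersect_area(x, y) / rect_area(x)  # oversegmentation:  1 - 6/16
us <- 1 - intersect_area(x, y) / rect_area(y)  # undersegmentation: 1 - 6/10
```

A segment that perfectly matches its reference would give OS = US = 0; here the segment both misses part of the reference (OS = 0.625) and spills outside it (US = 0.4).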

One significant challenge in the domain of Earth Observation data is the scarcity of software tools specifically designed for segmentation assessment. Several packages, such as imageseg (Niedballa et al. 2022), ExpImage (Azevedo 2022), SuperpixelImageSegmentation (Mouselimis 2022b), OpenImageR (Mouselimis 2022a), and image.Otsu (Wijffels 2020), enable users to segment images, but they offer, at best, a limited set of facilities to assess the accuracy of the segmentation, and some of them are not tailored to the needs of Earth Observation data or applications. This often requires users to adapt the code from these packages for their own purposes, which can be time-consuming and unrelated to their primary research goals.

In this paper, we introduce the segmetric package, which addresses the lack of R tools for assessing segmentation of Earth Observation data and provides a coherent set of metrics that can be used to compare and contrast different assessment methods for evaluating segmentation. Additionally, segmetric provides innovative visualization tools to assist qualitative spatial analysis as well as metrics that can be used to tune and assess segmentation algorithms.

Supervised segmentation metrics

Supervised metrics use reference data to assess segmentation accuracy. These metrics are grouped into two categories: geometric, which use the geometry of the objects (i.e. polygons) to determine the similarity between the segments and the reference data; and thematic, which use instead objects’ attributes such as the land cover label associated with each object (H. Costa, Foody, and Boyd 2018). The segmetric package focuses on geometric methods that require two sets of polygons as inputs, one for the segments and the other for the reference data.

The segments’ polygons, denoted by \(Y = \{y_j: j = 1, ..., m\}\), are obtained from a segmentation method, and the reference polygons, denoted by \(X = \{x_i: i = 1, ..., n\}\), are typically collected in-situ by specialists. The quality metrics are defined considering different subsets of \(X\) and \(Y\). The subsets of \(Y\) used to compute metrics for each reference polygon \(i\) are defined as follows 1:

Likewise, the subsets of \(X\) used to compute metrics for each segment \(j\) are defined as:

To illustrate these subset definitions, we depict some of them in Figure 1. Each subset contains all elements for which a metric value has to be computed. To obtain a single metric value, a summary function can be applied to all values, typically a mean or a weighted mean. The range of possible values can differ from metric to metric, and the optimal value also varies for each metric. Table 1 lists all metrics implemented in segmetric, their ranges, and their optimal values. The corresponding subsets used to compute the values are shown in the formula definitions.
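Once per-object metric values are available, summarizing them is straightforward in base R; the values and weights below are made-up numbers for illustration only:

```r
# Hypothetical per-reference metric values and the reference areas used as weights
qr_values <- c(0.10, 0.35, 0.20)
ref_areas <- c(100, 400, 250)

mean(qr_values)                          # simple mean
weighted.mean(qr_values, w = ref_areas)  # area-weighted mean
```

The area-weighted mean lets large reference objects dominate the summary, which is often desirable when object sizes vary widely.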


Subsets used to compute segmentation metrics. The red squares and black hexagons represent reference and segmented polygons, respectively. (a) is made of the segmentation polygons overlapping any reference polygon. (b) Y is made of the segmentation polygon that overlaps most of a reference polygon. (c) Y* is made of the segmentation polygons whose centroids fall inside the reference polygon, that cover or overlap more than half of the reference polygon, or that contain the reference polygon’s centroid. (d) is made of the reference polygons overlapping any segmentation polygon. (e) X is made of the reference polygons that overlap the most with the segmentation polygons. Yellow represents the intersection between the references and segments included in a subset. The magenta areas are excluded from the reference-segmentation intersection but included in the subset. Adapted from .
Metrics implemented
Metric Range Opt. References
\(\rm{OS1}_{ij}=1-\frac{area(x_i\cap y_j)}{area(x_i)},\ y_j\in {Y^*}_i\) \([0,1]\) 0 Clinton et al. (2010)
\(\rm{OS2}_{ij}=1-\frac{area(x_i\cap y_j)}{area(x_i)},\ y_j\in {Y^{'}}_i\) \([0,1]\) 0 Persello and Bruzzone (2010)
\(\rm{OS3}_{ij}=1-\frac{area(x_i\cap y_j)}{area(x_i)},\ y_j\in {Ycd}_i\) \([0,1]\) 0 Yang, Li, and He (2014)
\(\rm{US1}_{ij}=1-\frac{area(x_i\cap y_j)}{area(y_j)},\ y_j\in {Y^*}_i\) \([0,1]\) 0 Clinton et al. (2010)
\(\rm{US2}_{ij}=1-\frac{area(x_i\cap y_j)}{area(y_j)},\ y_j\in {Y^{'}}_i\) \([0,1]\) 0 Persello and Bruzzone (2010)
\(\rm{US3}_{ij}=1-\frac{area(x_i\cap y_j)}{area(y_j)},\ y_j\in {Ycd}_i\) \([0,1]\) 0 Yang, Li, and He (2014)
\(\rm{AFI}_{ij}=\frac{area(x_i)-area(y_j)}{area(x_i)},\ y_j\in {Y^{'}}_i\) \((-\infty,1]\) 0 Clinton et al. (2010)
\(\rm{QR}_{ij}=1-\frac{area(x_i\cap y_j)}{area(x_i\cup y_j)},\ y_j\in {Y^*}_i\) \([0,1]\) 0 Clinton et al. (2010)
\(\rm{D}_{ij}=\sqrt{\frac{\rm{OS}_{ij}^2+\rm{US}_{ij}^2}{2}}\) \([0,1]\) 0 Clinton et al. (2010)
\(\rm{precision}_{ij}=\frac{area(x_i\cap y_j)}{area(y_j)},\ y_j\in {Y^{'}}_i\) \([0,1]\) 1 Zhang et al. (2015)
\(\rm{recall}_{ij}=\frac{area(x_i\cap y_j)}{area(x_i)},\ y_j\in {Y^{'}}_i\) \([0,1]\) 1 Zhang et al. (2015)
\(\rm{UMerging}_{ij}=\frac{area(x_i)-area(x_i\cap y_j)}{area(x_i)},\ y_j\in {Y^*}_i\) \([0,1]\) 0 Clinton et al. (2010)
\(\rm{OMerging}_{ij}=\frac{area(y_j)-area(x_i\cap y_j)}{area(x_i)},\ y_j\in {Y^*}_i\) \([0,\infty)\) 0 Clinton et al. (2010)
\(\rm{M}_{ij}=\sqrt{\frac{area(x_i\cap y_j)^2}{area(x_i)area(y_j)}},\ y_j\in Y^{'}_i\) \([0,1]\) 1 Feitosa et al. (2010)
\(\rm{E}_{ij}=\frac{area(y_j)-area(x_i\cap y_j)}{area(y_j)}\times 100,\ x_i\in {X^{'}}_j\) \([0,100]\) 0 Carleer, Debeir, and Wolff (2005)
\(\rm{RAsub}_{ij}=\frac{area(x_i\cap y_j)}{area(x_i)},\ y_j\in \tilde{Y}_i\) \([0,1]\) 1 Clinton et al. (2010)
\(\rm{RAsuper}_{ij}=\frac{area(x_i\cap y_j)}{area(y_j)},\ y_j\in \tilde{Y}_i\) \([0,1]\) 1 Clinton et al. (2010)
\(\rm{PI}_{i}=\sum_{j=1}^{m}{\frac{area(x_i\cap y_j)^2}{area(x_i)area(y_j)}},\ y_j\in \tilde{Y}_i\) \([0,1]\) 1 Van Coillie, Verbeke, and De Wulf (2008)
\(\rm{Fitness}_{ij}=\frac{area(x_i)+area(y_j) - 2 \: area(x_i\cap y_j)}{area(y_j)},\ x_i\in X^{'}_j\) \([0,\infty)\) 0 G. A. O. P. Costa et al. (2008)
\(\rm{ED3}_{ij}=\sqrt{\frac{\rm{OS3}_{ij}^2+\rm{US3}_{ij}^2}{2}}\) \([0,1]\) 0 Yang, Li, and He (2014)
\(\rm{F{\text -}measure}_{ij}\)* \(=\frac{1}{\frac{\alpha}{\rm{precision}_{ij}}+\frac{(1-\alpha)}{\rm{recall}_{ij}}}\) \([0,1]\) 1 Zhang et al. (2015)
\(\rm{IoU}_{ij}=\frac{area(x_i\cap y_j)}{area(x_i\cup y_j)},\ y_j\in {Y^{'}}_i\) \([0,1]\) 1 Rezatofighi et al. (2019)
\(\rm{SimSize}_{ij}=\frac{min(area(x_i),area(y_j))}{max(area(x_i),area(y_j))},\ y_j\in {Y^*}_i\) \([0,1]\) 1
\(\rm{qLoc}_{ij}=dist(centroid(x_i),centroid(y_j)),\ y_j\in {Y^*}_i\) \([0,\infty)\) 0
\(\rm{RPsub}_{ij}=dist(centroid(x_i),centroid(y_j)),\ y_j\in {\tilde{Y}}_i\) \([0,\infty)\) 0 Clinton et al. (2010)
\(\rm{RPsuper}_{ij}=\frac{dist(centroid(x_i),centroid(y_j))}{max_j(dist(centroid(x_i),centroid(y_j)))},\ y_j\in {Y^*}_i\) \([0,1]\) 0 Clinton et al. (2010)
\(\rm{OI2}_{i}=max_j\left(\frac{area(x_i\cap y_j)}{area(x_i)}\frac{area(x_i\cap y_j)}{area(y_j)}\right),\ y_j\in {\tilde{Y}}_i\) \([0,1]\) 1
\(\rm{Dice}_{ij}=\frac{2\:area(x_i\cap y_j)}{area(x_i)+area(y_j)},\ y_j\in {Y^{'}}_i\) \([0,1]\) 1

* F-measure takes the optional weight argument \(\alpha\in[0,1]\) (the default is 0.5).
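This weighted harmonic mean of precision and recall can be sketched in plain R (f_measure here is an illustrative helper, not a package function):

```r
# F-measure as the weighted harmonic mean of precision and recall
f_measure <- function(precision, recall, alpha = 0.5) {
  1 / (alpha / precision + (1 - alpha) / recall)
}

f_measure(0.8, 0.8)        # equal precision and recall give their common value
f_measure(0.9, 0.6, 0.25)  # alpha < 0.5 weights recall more heavily
```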

Metrics can be computed either from scratch using subsets or by combining other metrics. Examples of metrics computed from subsets include: Oversegmentation (OS), Undersegmentation (US), Area Fit Index (AFI), Quality Rate (QR), Precision, Recall, Undermerging (UMerging), Overmerging (OMerging), Match (M), Evaluation measure (E), Relative Area (RAsub and RAsuper), Purity Index (PI), and Fitness Function (Fitness). The metrics computed by combining other metrics include: Index D (D), Euclidean Distance (ED3), and F-measure (F_measure). Some metrics, such as Relative Position (RPsub and RPsuper), are not intended to be summarized.
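For example, the index D is obtained by combining per-object OS and US values; a minimal sketch (d_index is our name for illustration, not a package function):

```r
# Index D: root mean square of over- and undersegmentation values
d_index <- function(os, us) sqrt((os^2 + us^2) / 2)

d_index(0.3, 0.1)  # single value balancing both error types
```

Because D is a root mean square, it penalizes a segmentation that is bad on either axis, rather than letting a good OS hide a bad US.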

The segmetric package

Installation

The stable release of the segmetric package can be installed from CRAN using:

install.packages("segmetric")

Computing metrics

segmetric depends on the sf package (Pebesma 2018) to open and manipulate geographic vector data sets. sf is an implementation of a standard issued by the Open Geospatial Consortium (OGC 2011), which was further formalized in ISO 19125-1 (2004). This standard defines a common way to store and access spatial data in the context of geographic information systems.

To start with segmetric, users should create a segmetric object using sm_read(ref_sf, seg_sf) passing to it a reference spatial data set and a segmentation spatial data set. The parameters ref_sf and seg_sf should be either sf objects or paths to a supported file vector format (e.g., shapefile).

library(segmetric)

# load example data sets
data("sample_ref_sf", package = "segmetric")
data("sample_seg_sf", package = "segmetric")

# create a segmetric object
m <- sm_read(ref_sf = sample_ref_sf, seg_sf = sample_seg_sf)

To compute a metric, users should call the function sm_compute(m, metric_id, ...), where m is a segmetric object and metric_id identifies a metric in segmetric. Any extra parameter necessary to compute a metric can be passed through the ellipsis argument. The list of available metrics can be obtained using sm_list_metrics(), which returns a character vector listing all registered metrics.

The sm_compute() function can compute a set of metrics either by passing a vector of values to the metric_id parameter or by chaining a sequence of function calls with a pipe operator. The two examples below produce equivalent results:

# compute three metrics
sm_compute(m, c("AFI", "OS1", "US1"))

# compute the same three metrics as above,
# chained with a pipe operator (e.g. magrittr's %>%)
sm_compute(m, "AFI") %>%
  sm_compute("OS1") %>%
  sm_compute("US1")

Most metrics are computed by feature (i.e., by reference or segment). To summarize the values of a set of metrics, users can run the function summary(object, ...), which computes aggregated values for the metrics returned by sm_compute().

# compute three metrics and summarize them
sm_compute(m, c("AFI", "OS1", "US1")) %>%
  summary()

Once created, a segmetric object stores in the cache every computed subset. Further subset requests are retrieved from the cache, speeding up the computation.

How to extend segmetric

The segmetric package is extensible, providing functions to implement new metrics. To implement a new metric, users can call sm_new_metric() to create a new metric object and register it using the sm_reg_metric() function. Users can type ?sm_reg_metric to find more details on how new metrics can be implemented. The following example implements the Jaccard index (Jaccard 1912), also known as Intersection over Union (IoU) (Rezatofighi et al. 2019), which ranges from 0 to 1 (optimal):

# register the 'IoU' metric
sm_reg_metric(
  metric_id = "IoU",
  entry = sm_new_metric(
    fn = function(m, s, ...) {
      # m is the metric object, s is the subset;
      # for IoU, s is equivalent to sm_yprime(m)
      sm_area(s) / sm_area(sm_subset_union(s))
    },
    fn_subset = sm_yprime,
    name = "Intersection over Union",
    optimal = 1,
    description = "Values from 0 to 1 (optimal)",
    reference = "Jaccard (1912); Rezatofighi et al. (2019)"
  )
)

# describe the 'IoU' metric
sm_desc_metric("IoU")
#> * IoU (Intersection over Union)
#>   Values from 0 to 1 (optimal)
#>   reference: Jaccard (1912); Rezatofighi et al. (2019)

Contributions to the package are welcome on GitHub 2, and more details on how to contribute can be found on the segmetric homepage at https://michellepicoli.github.io/segmetric.

Package segmetric in action

The specific steps involved in a segmentation workflow can vary depending on the researchers’ goals, the characteristics of the input data, and the task requirements. In general, a segmentation workflow includes the steps shown in Figure 2. First, researchers need to obtain satellite images and preprocess them via methods such as radiometric and geometric corrections, image mosaicking, cloud masking, index computation, and texture extraction. Second, a segmentation method is used to obtain the segments. Typically, researchers can use supervised and unsupervised machine learning methods such as convolutional neural networks (Fukushima 1980), U-Net (Ronneberger, Fischer, and Brox 2015), multi-resolution segmentation (Baatz and Schape 2000), and watershed segmentation (Beucher 1992). In this step, the segments can be stored in a vector format. Finally, the accuracy of the segmentation can be assessed by supervised quality metrics: using reference data, researchers compute metrics to evaluate the segmentation. The last two steps may be iterated until the desired level of accuracy is reached.

General steps of segmentation workflow.

In the following section, we demonstrate an application of the segmetric package to assess several segmentation parameters and guide users to select the most accurate one.

Data

In agriculture studies, mapping characteristics such as the size and number of fields can provide information about productivity and other important variables such as food security, socioeconomic status, and environmental status. To demonstrate segmetric, we used data on the Luís Eduardo Magalhães (LEM) municipality, west of Bahia state, Brazil. This municipality belongs to the Brazilian agricultural frontier known as MATOPIBA, which includes the states of Maranhão (MA), Tocantins (TO), Piauí (PI), and Bahia (BA) (Figure 3).

Study area in Luís Eduardo Magalhães municipality, west of Bahia state, Brazil (Google Earth imagery). Reference data (in red) was provided by Oldoni et al. (2020).

We used three PlanetScope images acquired on Feb 18, 2020, with a 3.7-meter resolution and four spectral bands (blue, green, red, and near-infrared). Radiometric and geometric corrections were applied to the images (level 3B) (Planet Team 2017). The images were in the same projection (UTM zone 23S) and we mosaicked them.

We segmented the image applying a multi-resolution segmentation approach (Baatz and Schape 2000). We tested four scale parameters (SP) to segment the image: 200, 500, 800, and 1000, with the shape parameter set to 0.9 and compactness set to 0.1. The resulting polygons were simplified using the Douglas-Peucker algorithm (Douglas and Peucker 1973) (distance parameter: 10 meters) in the QGIS software (version 3.22.2). Self-intersections were removed using SAGA’s Polygon Self-Intersection tool (version 7.8.2). The final segmentation set is composed of polygons intersecting the reference data with an area-perimeter ratio above 25. The segmentation results are provided as part of the segmetric package.
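The Douglas-Peucker simplification step can be sketched in base R as follows (an illustrative implementation for open polylines, not the code QGIS runs): vertices farther than a tolerance from the chord between the endpoints are kept, recursively.

```r
# Perpendicular distance from point p to the line through a and b
perp_dist <- function(p, a, b) {
  d <- b - a
  len <- sqrt(sum(d^2))
  if (len == 0) return(sqrt(sum((p - a)^2)))
  abs(d[1] * (a[2] - p[2]) - d[2] * (a[1] - p[1])) / len
}

# Douglas-Peucker: recursively keep vertices farther than `tol` from the chord
douglas_peucker <- function(pts, tol) {
  n <- nrow(pts)
  if (n < 3) return(pts)
  dists <- vapply(2:(n - 1), function(i)
    perp_dist(pts[i, ], pts[1, ], pts[n, ]), numeric(1))
  i_max <- which.max(dists) + 1          # farthest interior vertex
  if (dists[i_max - 1] > tol) {
    left  <- douglas_peucker(pts[1:i_max, , drop = FALSE], tol)
    right <- douglas_peucker(pts[i_max:n, , drop = FALSE], tol)
    rbind(left[-nrow(left), , drop = FALSE], right)  # drop duplicated joint
  } else {
    pts[c(1, n), , drop = FALSE]         # all interior vertices removable
  }
}
```

Polygon rings need extra care (the start/end vertex is shared), which is why in practice this step is delegated to GIS tooling.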

The reference data set (ref_sf), provided by Oldoni et al. (2020), was collected in two fieldwork campaigns in March and August 2020. Oldoni et al. (2020) drew the field boundaries in-situ on top of Sentinel-2 images, with a spatial resolution of 10 meters. segmetric includes only a portion of this data set. The spatial data sets can be loaded into R as sf objects. To create a segmetric object, use the function sm_read():

library(segmetric)

# load data sets
data("ref_sf", package = "segmetric")
data("seg200_sf", package = "segmetric")
data("seg500_sf", package = "segmetric")
data("seg800_sf", package = "segmetric")
data("seg1000_sf", package = "segmetric")

# create segmetric objects
m200 <- sm_read(ref_sf = ref_sf, seg_sf = seg200_sf)
m500 <- sm_read(ref_sf = ref_sf, seg_sf = seg500_sf)
m800 <- sm_read(ref_sf = ref_sf, seg_sf = seg800_sf)
m1000 <- sm_read(ref_sf = ref_sf, seg_sf = seg1000_sf)

Analysis

This analysis assesses four segmentations produced with different Scale Parameters (SP) to verify which one best fits the reference polygons. First, we visualize the reference polygons and the four segmentations individually using the plot() function (Figure 4).

# plot layers
plot(m200, layers = "ref_sf", plot_centroids = FALSE)
plot(m200, layers = "seg_sf", plot_centroids = FALSE)
plot(m500, layers = "seg_sf", plot_centroids = FALSE)
plot(m800, layers = "seg_sf", plot_centroids = FALSE)
plot(m1000, layers = "seg_sf", plot_centroids = FALSE)


(a) reference polygons; (b) segmentation using SP = 200; (c) segmentation using SP = 500; (d) segmentation using SP = 800; (e) segmentation using SP = 1000.

The metrics available in the package can be consulted using the function sm_list_metrics(). In this example, the metrics chosen to evaluate the accuracy of the segmentations and to identify the best scale parameter value were: Area Fit Index (AFI) (Carleer, Debeir, and Wolff 2005), F-measure (Rijsbergen 1979; Zhang et al. 2015), Quality Rate (QR) (Weidner 2008; Clinton et al. 2010), Oversegmentation (OS) (Clinton et al. 2010), and Undersegmentation (US) (Clinton et al. 2010).

# compute all metrics
metrics <- c("QR", "F_measure", "IoU", "M", "OS2", "US2")
m200 <- sm_compute(m200, metrics)
m500 <- sm_compute(m500, metrics)
m800 <- sm_compute(m800, metrics)
m1000 <- sm_compute(m1000, metrics)

# results
summary(m200)
#>        QR F_measure       IoU         M       OS2       US2
#> 0.7394817 0.6988555 0.4988198 0.6569973 0.2948025 0.2585708

summary(m500)
#>         QR  F_measure        IoU          M        OS2        US2
#> 0.50380348 0.80671198 0.56837519 0.70140431 0.07982693 0.37207120

summary(m800)
#>         QR  F_measure        IoU          M        OS2        US2
#> 0.47487615 0.78764418 0.54923433 0.68297970 0.04300207 0.43014287

summary(m1000)
#>         QR  F_measure        IoU          M        OS2        US2
#> 0.50311268 0.75742922 0.51745883 0.65548524 0.03679037 0.46524463

The computed metrics are presented in Table 2; the optimal value is 0 for QR, OS2, and US2, and 1 for F-measure, M, and IoU. These results indicate that the segmentation using SP equal to 200 had the highest oversegmentation, while the one using SP equal to 1000 had the highest undersegmentation. Observing the metrics F-measure, IoU, and M, we conclude that the best SP is 500.

Users must pay attention to which metric better fits their goals of accuracy assessment. For more information, we suggest the user consult comparative studies dedicated to geometric metrics such as Clinton et al. (2010), Räsänen et al. (2013), Yang et al. (2015), H. Costa, Foody, and Boyd (2018), and Jozdani and Chen (2020).

Accuracy metrics of Quality Rate (QR), F-measure, Intersection over Union (IoU), Match (M), Oversegmentation (OS2), and Undersegmentation (US2) for four segmentations with different Scale Parameters (SP).
QR F_measure IoU M OS2 US2
seg 200 0.739 0.699 0.499 0.657 0.295 0.259
seg 500 0.504 0.807 0.568 0.701 0.080 0.372
seg 800 0.475 0.788 0.549 0.683 0.043 0.430
seg 1000 0.503 0.757 0.517 0.655 0.037 0.465

The segmetric package allows users to visualize the subsets used to compute metrics. The example in Figure 5 plots the subset Y_tilde over the reference and segmentation polygons (SP = 500), which allows analyzing the overlap between the reference and segmentation polygons visually.

plot(
  x = m500,
  type = "subset",
  subset_id = "Y_tilde",
  plot_centroids = FALSE,
  plot_legend = TRUE,
  extent = sm_seg(m500)
)

Overlapping between reference polygons and segmentation objects (SP = 500).

It is also possible to visualize the metrics for each segment in choropleth maps using the plot() function:

plot(
  x = m500,
  type = "choropleth",
  metric_id = c("QR", "IoU", "M", "OS2", "US2"),
  break_style = "jenks",
  choropleth_palette = "RdYlBu",
  plot_centroids = FALSE
)

The legend bars of the choropleth maps are generated automatically, and users can further customize them with options such as the number of breaks and the palette. The legend consistently uses the same color for the optimal metric value (for example, in Figure 6, blue is better while red is worse), except for those metrics whose optimal value is in the middle of the color scale (e.g., AFI). The size and number of intervals in each color scale change according to the metric values present in the data set. Users can choose the method used to compute the intervals; to check the available options, see the break_style parameter in ?plot.segmetric.


Spatial distribution of the metrics: (a) Quality Rate, (b) Intersection over Union, (c) Match, (d) Oversegmentation, and (e) Undersegmentation.

Figure 6 presents the spatialized results of the computed metrics. The F-measure metric was not plotted because it is a global metric with a single value for all objects. Figures 6a, d, and e show the similarity between the results of the QR, OS, and US metrics, for which the ideal value is zero. In these three plots, the objects with the best results (values close to zero) are located in the southeast part of the study area. The IoU and M metric maps (Figures 6b and c), for which the ideal value is 1, are also similar. For these two metrics, the objects located in the southwest part of the study area have values close to 1. The figure shows differences in the number of objects plotted for each metric, as the subsets used to compute them differ.

Summary

The segmetric package provides 28 metrics that can be used to evaluate and compare the results of segmentation methods. The package also offers innovative visualization options to assist qualitative spatial assessment, allowing diagnostics of the quality, issues, and potential biases of the segmentation. Plotting the segmented objects along their reference polygons and spatially visualizing the metrics may help users to evaluate and improve segmentation procedures, select segmentation parameters, and decide on adequate validation metrics.

To the best of our knowledge, segmetric is the first available package in R that provides several supervised metrics based on reference polygons. segmetric also enables users to implement new metrics. In the future, we plan to add more supervised metrics and other ways to visualize metrics, and to use parallel processing to speed up computations.

Acknowledgments

This research was supported by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation program (Grant agreement No 677140 MIDLAND); the Amazon Fund through the financial collaboration of the Brazilian Development Bank (BNDES) and the Foundation for Science, Technology and Space Applications (FUNCATE) (Process 17.2.0536.1); and Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq) (Process 350820/2022-8).

Azevedo, Alcinei Mistico. 2022. ExpImage: Tool for Analysis of Images in Experiments. https://cran.r-project.org/web/packages/ExpImage.
Baatz, M., and A. Schape. 2000. “Multiresolution Segmentation: An Optimization Approach for High Quality Multi-Scale Image Segmentation.” In Proceedings of Angewandte Geographische Informationsverarbeitung, XII, edited by J. Strobl, T. Blaschke, and G. Griesbner, 12–23. Salzburg: Herbert Wichmann Verlag.
Beucher, Serge. 1992. “The Watershed Transformation Applied to Image Segmentation.” Scanning Microscopy 1992 (6): 28.
Carleer, A. P., O. Debeir, and E. Wolff. 2005. “Assessment of Very High Spatial Resolution Satellite Image Segmentations.” Photogrammetric Engineering & Remote Sensing 71 (11): 1285–94. https://doi.org/10.14358/PERS.71.11.1285.
Clinton, Nicholas, Ashley Holt, James Scarborough, Li Yan, and Peng Gong. 2010. “Accuracy Assessment Measures for Object-based Image Segmentation Goodness.” Photogrammetric Engineering & Remote Sensing 76 (3): 289–99. https://doi.org/10.14358/PERS.76.3.289.
Costa, G. A. O. P., R. Q. Feitosa, T. B. Cazes, and B. Feijó. 2008. “Genetic adaptation of segmentation parameters.” In Object-Based Image Analysis: Spatial Concepts for Knowledge-Driven Remote Sensing Applications, 679–95. Berlin, Heidelberg: Springer Berlin Heidelberg. https://doi.org/10.1007/978-3-540-77058-9_37.
Costa, Hugo, Giles M. Foody, and Doreen S. Boyd. 2018. “Supervised Methods of Image Segmentation Accuracy Assessment in Land Cover Mapping.” Remote Sensing of Environment 205: 338–51. https://doi.org/10.1016/j.rse.2017.11.024.
Douglas, David H, and Thomas K Peucker. 1973. “Algorithms for the reduction of the number of points required to represent a digitized line or its caricature.” Cartographica: The International Journal for Geographic Information and Geovisualization 10 (2): 112–22. https://doi.org/10.3138/FM57-6770-U75U-7727.
Feitosa, RQ, RS Ferreira, CM Almeida, FF Camargo, and GAOP Costa. 2010. “Similarity Metrics for Genetic Adaptation of Segmentation Parameters.” In 3rd International Conference on Geographic Object-Based Image Analysis (GEOBIA 2010). Vol. 29. Ghent: The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences.
Fukushima, Kunihiko. 1980. “Neocognitron: A Self-Organizing Neural Network Model for a Mechanism of Pattern Recognition Unaffected by Shift in Position.” Biological Cybernetics 36 (4): 193–202.
Hossain, Mohammad D., and Dongmei Chen. 2019. “Segmentation for Object-Based Image Analysis (OBIA): A Review of Algorithms and Challenges from Remote Sensing Perspective.” ISPRS Journal of Photogrammetry and Remote Sensing 150: 115–34. https://doi.org/10.1016/j.isprsjprs.2019.02.009.
ISO 19125-1. 2004. “Geographic information - Simple feature access - Part 1: Common architecture.” International Standard Organization. https://www.iso.org/standard/40114.html.
Jaccard, Paul. 1912. “The Distribution of the Flora in the Alpine Zone.” New Phytologist 11 (2): 37–50. https://doi.org/10.1111/j.1469-8137.1912.tb05611.x.
Jozdani, Shahab, and Dongmei Chen. 2020. “On the versatility of popular and recently proposed supervised evaluation metrics for segmentation quality of remotely sensed images: An experimental case study of building extraction.” ISPRS Journal of Photogrammetry and Remote Sensing 160 (November 2019): 275–90. https://doi.org/10.1016/j.isprsjprs.2020.01.002.
Kotaridis, Ioannis, and Maria Lazaridou. 2021. “Remote sensing image segmentation advances: A meta-analysis.” ISPRS Journal of Photogrammetry and Remote Sensing 173 (March): 309–22. https://doi.org/10.1016/j.isprsjprs.2021.01.020.
Mouselimis, Lampros. 2022a. OpenImageR: An Image Processing Toolkit. https://CRAN.R-project.org/package=OpenImageR.
———. 2022b. SuperpixelImageSegmentation: Image Segmentation Using Superpixels, Affinity Propagation and Kmeans Clustering. https://CRAN.R-project.org/package=SuperpixelImageSegmentation.
Niedballa, Jürgen, Jan Axtner, Timm Fabian Döbert, Andrew Tilker, An Nguyen, Seth T. Wong, Christian Fiderer, Marco Heurich, and Andreas Wilting. 2022. “Imageseg: An r Package for Deep Learning-Based Image Segmentation.” Methods in Ecology and Evolution 13 (11): 2363–71. https://doi.org/10.1111/2041-210X.13984.
OGC. 2011. “Simple Feature Access-Part 1: Common Architecture.” Open Geospatial Consortium. http://www.opengeospatial.org/standards/sfa.
Oldoni, Lucas Volochen, Ieda Del’Arco Sanches, Michelle Cristina A. Picoli, Renan Moreira Covre, and José Guilherme Fronza. 2020. “LEM+ dataset: For agricultural remote sensing applications.” Data in Brief 33: 106553. https://doi.org/10.1016/j.dib.2020.106553.
Pebesma, Edzer. 2018. “Simple Features for R: Standardized Support for Spatial Vector Data.” The R Journal 10 (1): 439–46. https://doi.org/10.32614/RJ-2018-009.
Persello, Claudio, and Lorenzo Bruzzone. 2010. “A Novel Protocol for Accuracy Assessment in Classification of Very High Resolution Images.” IEEE Transactions on Geoscience and Remote Sensing 48 (3): 1232–44. https://doi.org/10.1109/TGRS.2009.2029570.
Planet Team. 2017. “Planet Application Program Interface: In Space for Life on Earth.” San Francisco, CA. https://api.planet.com.
Räsänen, Aleksi, Antti Rusanen, Markku Kuitunen, and Anssi Lensu. 2013. “What makes segmentation good? A case study in boreal forest habitat mapping.” International Journal of Remote Sensing 34 (23): 8603–27. https://doi.org/10.1080/01431161.2013.845318.
Rezatofighi, Hamid, Nathan Tsoi, JunYoung Gwak, Amir Sadeghian, Ian Reid, and Silvio Savarese. 2019. “Generalized intersection over union: A metric and a loss for bounding box regression.” In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 658–66. https://doi.org/10.1109/CVPR.2019.00075.
Rijsbergen, CJ van. 1979. Information Retrieval. 2nd ed. Butterworths.
Ronneberger, Olaf, Philipp Fischer, and Thomas Brox. 2015. “U-Net: Convolutional Networks for Biomedical Image Segmentation.” In Medical Image Computing and Computer-Assisted Intervention – MICCAI 2015, edited by Nassir Navab, Joachim Hornegger, William M. Wells, and Alejandro F. Frangi, 234–41. Springer International Publishing.
Van Coillie, F. M. B., L. P. C. Verbeke, and R. R. De Wulf. 2008. “Semi-Automated Forest Stand Delineation Using Wavelet Based Segmentation of Very High Resolution Optical Imagery.” In Object-Based Image Analysis: Spatial Concepts for Knowledge-Driven Remote Sensing Applications, 237–56. Berlin, Heidelberg: Springer Berlin Heidelberg. https://doi.org/10.1007/978-3-540-77058-9_13.
Weidner, Uwe. 2008. “Contribution to the Assessment of Segmentation Quality for Remote Sensing Applications.” In XXI International Society for Photogrammetry and Remote Sensing Congress (XXI ISPRS 2008), 479–84. Beijing: International Society for Photogrammetry; Remote Sensing.
Wijffels, Jan. 2020. Image.otsu: Otsu’s Image Segmentation Method. https://CRAN.R-project.org/package=image.Otsu.
Yang, Jian, Yuhong He, John Caspersen, and Trevor Jones. 2015. “A Discrepancy Measure for Segmentation Evaluation from the Perspective of Object Recognition.” ISPRS Journal of Photogrammetry and Remote Sensing 101: 186–92. https://doi.org/10.1016/j.isprsjprs.2014.12.015.
Yang, Jian, Peijun Li, and Yuhong He. 2014. “A multi-band approach to unsupervised scale parameter selection for multi-scale image segmentation.” ISPRS Journal of Photogrammetry and Remote Sensing 94: 13–24. https://doi.org/10.1016/j.isprsjprs.2014.04.008.
Zhang, Xueliang, Xuezhi Feng, Pengfeng Xiao, Guangjun He, and Liujun Zhu. 2015. “Segmentation Quality Evaluation Using Region-Based Precision and Recall Measures for Remote Sensing Images.” ISPRS Journal of Photogrammetry and Remote Sensing 102: 73–84. https://doi.org/10.1016/j.isprsjprs.2015.01.009.

  1. We are following the notation used by Clinton et al. (2010) and H. Costa, Foody, and Boyd (2018).

  2. https://github.com/michellepicoli/segmetric