STARS Project
Knowledge Portal

Semi-automatic detection of field boundaries from high-resolution satellite imagery


Information on field boundaries has always been important: it can serve as legal proof of ownership and provides valuable input for applications such as crop monitoring and yield forecasting. The attributes of interest are the precise location of the boundary and the size of the enclosed area. This information is available in many western parts of the world, where land is expensive. In less developed regions it is just as valuable to the owners of fields, but it is often lacking or incomplete: cadastral systems there are often underdeveloped, and collecting the information is laborious work that requires highly skilled surveyors equipped with expensive instruments. This research therefore investigates whether very high resolution imagery from the WorldView-2 satellite sensor can be processed to make field boundary information available more efficiently. The results are verified against other data sources and methods: walking the boundaries in the study area with a handheld GPS, imaging the area with a fixed-wing Unmanned Aerial Vehicle (UAV) platform equipped with a Canon S110 NIR camera, and manual on-screen digitizing. The study area in Sougoumba, Mali, posed large challenges in the process because of the heterogeneity of the landscape. The final results show that the image segmentation methodology is not accurate enough for direct extraction of the exact location of the boundaries and the enclosed area. On the other hand, many boundaries are well delineated, providing a useful aid to the existing practice of manual on-screen digitizing. As a side result, UAVs equipped with lightweight cameras and (preferably) RTK positioning systems appear very promising as a valuable contribution.

Using spectral and textural differences

To derive field boundary information from satellite imagery, full advantage was taken of the change over time in the spectral and textural behaviour of the different landscape elements, such as trees, rock, soil, bush and cropland. A key role in this process was played by the NDVI and SAVI derived from WorldView-2 images. Figure 1 presents the colour composites of the NDVI- and SAVI-stacked multilayer images. Both stacks are based on three input vegetation index images from 22 May, 26 June and 18 October 2014, shown in the stacks as red, green and blue respectively, so the different colours in the result represent the change in vegetation index values. Because crops in one field generally develop at a different pace than crops in neighbouring fields, the agricultural fields appear in unique colours in the stacks.
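The index-and-stack idea can be sketched in a few lines of numpy. This is a minimal illustration with toy reflectance values, not the actual WorldView-2 processing chain; the NDVI and SAVI formulas are the standard ones, and the SAVI soil factor L = 0.5 is the commonly used default:

```python
import numpy as np

def ndvi(nir, red):
    """Normalised Difference Vegetation Index."""
    return (nir - red) / (nir + red)

def savi(nir, red, L=0.5):
    """Soil-Adjusted Vegetation Index; L damps the soil background signal."""
    return (nir - red) / (nir + red + L) * (1.0 + L)

# Toy reflectance bands for one acquisition (values in [0, 1]).
red = np.array([[0.10, 0.20], [0.30, 0.05]])
nir = np.array([[0.50, 0.40], [0.35, 0.60]])

# One index image per date; here the same toy bands stand in for the
# 22 May, 26 June and 18 October 2014 acquisitions.
dates = [(nir, red)] * 3

# Stack the three NDVI images as the R, G and B channels of a composite,
# so colour in the composite encodes temporal change of the index.
composite = np.dstack([ndvi(n, r) for n, r in dates])
print(composite.shape)  # (2, 2, 3): rows, columns, dates
```

A field whose crop greens up late shows low NDVI in the red channel and high NDVI in the blue channel, so it takes on a distinct colour from an early-developing neighbour.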

Figure 1: NDVI and SAVI composites of a part of the study area. Different agricultural fields appear to be represented by different colours.

Segmentation: texture, size, shape and spectral information

Image segmentation was applied to the results of the different processing steps in order to see their effect on the segmentation result. In this way the difference between single and multiple scenes as input could be evaluated, as well as the differences between the NDVI and SAVI indices. Segmentation was used because it is known to reduce the within-class spectral variation of high resolution imagery and can increase classification and statistical accuracy if conducted at an appropriate scale (Blaschke, 2010; Addink et al., 2007; Drǎguţ et al., 2010). In general the method considers spectral content as well as segment texture, size and shape in its merging decisions, provides direct control over the pixel/segment ratio, and allows both minimum and maximum segment size constraints. Full Lambda Schedule (FLS) segmentation as provided in ERDAS IMAGINE 2014 was used for this research, because this software lets the user control the parameters of the segmentation process relatively easily. The pixel-to-segment ratio determines the average output size of the segments and was set to 3000 pixels, which approximates the size of a common small field in the study area. Spectral content and size were both considered important, so these parameters were given a weight of 0.9 on a scale from 0 to 1. The texture parameter was also set to 0.9, in order to take full advantage of all information in the images. The shape parameter was kept low at 0.1, putting the emphasis on colour differences, because features in the landscape can have virtually any shape, ranging from highly symmetric to non-symmetric. Figure 2 shows the result of segmentation applied to a single WV-2 scene of 18 October 2014.
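The merging criterion usually given for Full Lambda Schedule segmentation weighs region size, spectral difference and shared boundary length: cheap merges join small, spectrally similar regions with a long common edge. The sketch below illustrates how that cost behaves; the function name and toy numbers are illustrative, not the ERDAS IMAGINE implementation:

```python
import numpy as np

def merge_cost(size_i, size_j, mean_i, mean_j, shared_boundary):
    """Lambda-schedule merging cost for two adjacent regions.

    The schedule repeatedly merges the region pair with the lowest cost,
    so small, similar regions with long shared edges disappear first,
    while large, dissimilar regions survive as separate segments.
    """
    area_term = (size_i * size_j) / (size_i + size_j)
    spectral_term = np.sum((np.asarray(mean_i) - np.asarray(mean_j)) ** 2)
    return area_term * spectral_term / shared_boundary

# Two small, spectrally similar segments with a long shared edge ...
low = merge_cost(50, 60, [0.40, 0.35], [0.42, 0.36], shared_boundary=30)
# ... versus two large, dissimilar segments with a short shared edge.
high = merge_cost(3000, 3000, [0.10, 0.60], [0.55, 0.20], shared_boundary=5)
print(low < high)  # True
```

Weighting spectral content and texture at 0.9 while keeping shape at 0.1, as in this research, effectively tells the schedule to base the spectral term on colour and texture differences rather than segment geometry.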


Figure 2: FLS segmentation result for a part of the study area, based on a single WV-2 scene from October 2014. Many field boundaries are well delineated.


Based on this research there seems to be no direct relationship between the information derived from the remote sensing imagery used and the field boundaries in the study area. The different approaches to detecting and deriving these boundaries (PCA, segmentation, edge detection filters, using NDVI or SAVI, masking trees or not) do not yield good results. The accuracy (defined in terms of area, position and boundary length) of the methods applied in this research is limited by the difficulty of the heterogeneous landscape. However, many boundaries were derived, giving hope that, given the complexity of this study area, the applied method will work better in less complex landscapes.


One important recommendation concerns the use of UAV images as a reference/validation source. The results of the UAV-derived boundaries could be improved by adding height information, since this is collected within the UAV photos during acquisition. Based on the height per location, Digital Elevation Models (DEMs) were derived per mosaicked cluster. However, only information from the end of the growing season was available. If imagery from the start of the season becomes available as well, height differences could be derived, providing a solution for the vague boundaries caused by low plant densities. Figure 3 shows the DEM image and a 3D extruded model, zoomed in around a particular field, derived from the eBee photos (October 2014) from which the mosaic was composed. When DEM data is available for different dates, field boundaries could be extracted by height-image differencing, which would also remove the influence of the elevation of the terrain itself.
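The height-image differencing idea can be sketched with numpy. The DEM values and the 0.5 m change threshold below are hypothetical, chosen only to show the principle: subtracting co-registered start- and end-of-season DEMs cancels the terrain, leaving crop growth, so pixels with little height change mark candidate paths and boundaries:

```python
import numpy as np

# Hypothetical co-registered DEMs (metres) from the start and end of the
# growing season. Crop growth raises the surface inside fields; bare
# paths and field boundaries change little.
dem_start = np.array([[300.0, 300.1, 300.0],
                      [300.1, 300.0, 300.1],
                      [300.0, 300.1, 300.0]])
dem_end   = np.array([[301.2, 300.1, 301.1],
                      [300.1, 300.0, 300.2],
                      [301.0, 300.1, 301.3]])

# Differencing removes the terrain elevation itself; only surface
# change (crop growth) remains.
growth = dem_end - dem_start

# Pixels with little height change (illustrative 0.5 m threshold)
# are candidate boundary/path pixels.
boundary_mask = np.abs(growth) < 0.5
print(boundary_mask.sum())  # 5 low-growth pixels in this toy grid
```

In this toy grid the low-growth pixels form a cross between four "field" corners, which is exactly the pattern of paths between fields that the differencing is meant to reveal.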

Figure 3: using the height information from the fixed-wing UAV recordings over the study area. Trees are clearly visible, as well as boundaries defined by paths or higher vegetation.