Wednesday, October 30, 2013

Lab 4: Miscellaneous Image Functions


 Goals and Background
                The goal of Lab 4 was to explore miscellaneous image functions that can be used in remote sensing to present and enhance remotely sensed images. For this lab the class worked with ERDAS Imagine 2013 software. The techniques used included image subsetting, pan-sharpening, linking to Google Earth, and resampling. These techniques allow users to optimize visual representation by delineating study areas and enhancing an image and its properties.
Methods
                Part one of this lab involved creating an area of interest from a study area by performing an image subset. There are two ways to create a subset: the first is simple and uses an inquire box to select a specified area; the second, shown in Figure 1, creates an area of interest from a shapefile, a boundary file created in ArcGIS. For this example we used a shapefile covering Chippewa and Eau Claire counties.
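Below is a minimal sketch of this kind of shapefile-based subset in Python, assuming the rasterio and fiona packages and hypothetical file names; the lab itself was done through the ERDAS Imagine interface, not code.

import fiona
import rasterio
from rasterio.mask import mask

# Read the polygon geometries from the county-boundary shapefile
with fiona.open("chippewa_eau_claire.shp") as shp:
    shapes = [feature["geometry"] for feature in shp]

# Clip the satellite image to the area covered by the polygons
with rasterio.open("study_area_scene.img") as src:
    out_image, out_transform = mask(src, shapes, crop=True)
    out_meta = src.meta.copy()

# Update the metadata so the output file matches the clipped array
out_meta.update(height=out_image.shape[1],
                width=out_image.shape[2],
                transform=out_transform)

with rasterio.open("subset_aoi.img", "w", **out_meta) as dst:
    dst.write(out_image)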
                Part two covers image fusion. In this section we used ERDAS Imagine to create a higher-resolution image through a technique called pan-sharpening; the "pan" comes from the word panchromatic. In image fusion, two images are combined to create a new output image. In this case, the panchromatic image has a higher resolution (15 m by 15 m for Landsat) than the false color infrared image (30 m by 30 m), and fusing the two creates a sharper, more refined image. This method is often used in applications such as Google Earth to enhance imagery. Figure 2 in the results section shows this technique.
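As a rough illustration of how the fusion works, the sketch below applies a Brovey transform, one common pan-sharpening method; ERDAS Imagine offers several fusion options, so this is not necessarily the exact algorithm used in the lab. It assumes pan is a 2-D panchromatic array and ms is a 3-band multispectral array already co-registered and resampled to the 15 m panchromatic grid.

import numpy as np

def brovey_pansharpen(ms, pan):
    # Each multispectral band is rescaled by the ratio of the pan band
    # to the summed multispectral intensity, injecting the pan detail.
    ms = ms.astype(np.float64)
    pan = pan.astype(np.float64)
    intensity = ms.sum(axis=0)
    intensity[intensity == 0] = 1.0      # avoid division by zero
    return ms * (pan / intensity)

# Stand-in random arrays; real data would be the Landsat bands from the lab
ms = np.random.rand(3, 200, 200)
pan = np.random.rand(200, 200)
sharpened = brovey_pansharpen(ms, pan)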
                Part three involves radiometric enhancement techniques, which can be used to correct haze in an image. Haze causes an image to look washed out; running the haze reduction tool in ERDAS Imagine corrects this by producing an output image with more saturation and contrast. Figure 3 is an example of haze reduction.
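The sketch below shows a simplified dark-object subtraction, a common first-order way to remove the roughly uniform brightness that haze adds to each band. It is only an illustrative stand-in; the ERDAS Imagine Haze Reduction tool uses its own algorithm.

import numpy as np

def dark_object_subtract(bands, percentile=1.0):
    # Estimate a per-band "dark object" value from a low percentile and
    # subtract it, on the assumption that haze adds a near-uniform offset.
    corrected = np.empty_like(bands, dtype=np.float64)
    for i, band in enumerate(bands):
        dark = np.percentile(band, percentile)
        corrected[i] = np.clip(band - dark, 0, None)
    return corrected

# Stand-in 3-band image; real data would be the hazy scene from the lab
hazy = np.random.rand(3, 100, 100) * 200 + 30
clearer = dark_object_subtract(hazy)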
                Part four of this lab allowed us to explore how ERDAS Imagine can interact with Google Earth to compare images or build an image interpretation key. Because Google Earth imagery has a higher resolution, thanks to its data sources and pan-sharpening, it can often be used to help interpret lower-quality images obtained through remote sensing. Figure 4 shows how ERDAS Imagine can connect to Google Earth and create a linked view of the same area. In remote sensing, an image interpretation key is often used to identify features. Two types of keys can be used: a selective key or an elimination key. An elimination key uses a flowchart to rule out possible objects, while Google Earth works more like a selective key because it lets us compare similar images to identify features in our study area.
                Part five focused on the resampling tool. Resampling allows us to change the pixel size, which does not change the image's underlying spatial resolution. There are two forms: resampling up reduces the pixel size and creates a larger file, while resampling down increases the pixel size and creates a smaller file. For this lab we compared two methods in ERDAS, nearest neighbor and bilinear interpolation. I will admit that I am still a little confused by resampling and hope to explore this technique further. Figures 5 and 6 show this technique.
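For a rough feel of the difference between the two methods, the sketch below resamples a stand-in 30 m band to 20 m pixels with SciPy, using nearest neighbor (order=0) and bilinear interpolation (order=1); the lab itself used ERDAS Imagine's resample tool rather than code.

import numpy as np
from scipy import ndimage

band = np.random.rand(100, 100)        # stand-in for one 30 m band

factor = 30 / 20                       # resample up: 30 m pixels -> 20 m pixels
nearest = ndimage.zoom(band, factor, order=0)   # nearest neighbor
bilinear = ndimage.zoom(band, factor, order=1)  # bilinear interpolation

print(band.shape, nearest.shape, bilinear.shape)  # (100, 100) -> (150, 150)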
Results

Figure 1 Screen capture of a subset image. Eau Claire and Chippewa county boundaries were used to create a shapefile that outlines this area of interest.
Figure 2 Result of pan-sharpening an image. The image on the right is pan-sharpened and has a higher resolution.
 
Figure 3 Example of haze reduction. The image on the right has had the haze reduction tool applied and appears more saturated in color and has more contrast.

Figure 4 A synchronized view of ERDAS Imagine (left) and Google Earth (right). Google Earth has a higher resolution and can be used as an image interpretation key.
Figure 5 Difference in pixels between the input image (left) and the bilinear interpolation resampled image (right), whose pixel size was reduced from 30 m to 20 m.
Figure 6 The input image is on the left and the two resampled images are on the right. I noted the pixel edges in red; I believe the resampled image has coarser pixel edges than the original.

 Conclusion

      Overall, learning the various miscellaneous image functions was a great exercise. I thoroughly enjoyed learning how to pan-sharpen an image as well as how to synchronize views with Google Earth and use it as an interpretation key. All of the techniques are useful and will prove beneficial in future assignments.
