DATA COMPRESSION:
Data compression in a network GIS refers to the compression of geospatial data within the network GIS so that the volume of data transmitted across the network can be reduced. Typically, a properly chosen compression algorithm can reduce data size to 5-10% of the original for images, and to 10-20% for vector and textual data. Such compression ratios result in significant performance improvements.
Data compression algorithms can be categorized as lossless or lossy. Bit streams generated by a lossless compression algorithm can be faithfully recovered to the original data. If the loss of a single bit may cause serious and unpredictable consequences for the original data (for example, in text and medical image compression), a lossless compression algorithm should be applied. If data consumers can tolerate some degree of distortion of the original data, lossy compression algorithms are usually the better choice because they can achieve much higher compression ratios than lossless ones.
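As an illustration of lossless compression, the minimal Python sketch below uses the standard-library zlib module (a DEFLATE implementation) to compress a repetitive textual payload and verify bit-exact recovery; the GML-like sample string is hypothetical and chosen only for demonstration.

    import zlib

    # A repetitive textual/vector payload, e.g. a GML-like fragment
    # (hypothetical sample data for illustration).
    original = (b"<gml:posList>102.0 0.5 103.0 1.0 104.0 0.5"
                b" 105.0 1.0</gml:posList>") * 100

    compressed = zlib.compress(original, level=9)
    restored = zlib.decompress(compressed)

    assert restored == original   # lossless: bit-exact recovery
    ratio = len(compressed) / len(original)
    print(f"compressed to {ratio:.1%} of original size")

Highly repetitive data such as this compresses far below the typical 10-20% figure quoted above for textual data; real-world geospatial payloads usually contain less redundancy.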
Data compression in a network GIS is similar to data compression on other distributed computing platforms. Image compression algorithms such as JPEG have been applied since the first Web-based GIS emerged in 1993. Compression of vector data was introduced much later, for example through the Douglas-Peucker line simplification algorithm and the work done in 2001 by Bertolotto and Egenhofer.
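The Douglas-Peucker algorithm simplifies a polyline by recursively discarding points that lie within a tolerance epsilon of the chord joining a segment's endpoints. Below is a minimal Python sketch of the classic recursive formulation; the sample coastline coordinates and the tolerance value are made up for demonstration.

    import math

    def perpendicular_distance(pt, start, end):
        """Distance from pt to the line through start and end."""
        (x, y), (x1, y1), (x2, y2) = pt, start, end
        dx, dy = x2 - x1, y2 - y1
        seg_len = math.hypot(dx, dy)
        if seg_len == 0.0:  # degenerate segment: fall back to point distance
            return math.hypot(x - x1, y - y1)
        return abs(dy * x - dx * y + x2 * y1 - y2 * x1) / seg_len

    def douglas_peucker(points, epsilon):
        """Simplify a polyline, keeping only points farther than
        epsilon from the chord joining the segment's endpoints."""
        if len(points) < 3:
            return list(points)
        # Find the intermediate point farthest from the chord.
        max_dist, index = 0.0, 0
        for i in range(1, len(points) - 1):
            d = perpendicular_distance(points[i], points[0], points[-1])
            if d > max_dist:
                max_dist, index = d, i
        if max_dist > epsilon:
            # Recurse on both halves; drop the duplicated joint point.
            left = douglas_peucker(points[:index + 1], epsilon)
            right = douglas_peucker(points[index:], epsilon)
            return left[:-1] + right
        # All intermediate points are within tolerance: keep endpoints only.
        return [points[0], points[-1]]

    # Hypothetical coastline polyline, simplified with tolerance 1.0.
    coastline = [(0, 0), (1, 0.1), (2, -0.1), (3, 5), (4, 6),
                 (5, 7), (6, 8.1), (7, 9), (8, 9), (9, 9)]
    print(douglas_peucker(coastline, 1.0))

Because the output retains only the geometrically significant vertices, the simplified polyline is a lossy compression of the original vector data: larger tolerances yield fewer points and higher compression at the cost of geometric fidelity.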
SCIENTIFIC FUNDAMENTALS
Data compression originates from information theory, which concentrates on systematic research into the problems that arise when analog signals are converted to and from digital signals, and when digital signals are coded and transmitted over digital channels. One of the most significant theoretical results in information theory is the so-called source coding theorem, which asserts that there exists a compression ratio limit (the entropy of the source) that can be approached but never exceeded by any lossless compression algorithm. For most practical signals it is difficult even to obtain compression algorithms whose performance is near this limit. However, compression ratio is by no means the only principle in the development of a compression algorithm. Other important principles include high compression speed, low resource consumption, simple implementation, error resilience, adaptability to different signals, etc.
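To make the source coding theorem concrete, the following Python sketch estimates the empirical entropy of a byte string, which lower-bounds the average number of bits per symbol that any lossless coder can achieve for that symbol distribution; the toy message is hypothetical.

    import math
    from collections import Counter

    def shannon_entropy(data: bytes) -> float:
        """Empirical entropy in bits per symbol:
        H = -sum over symbols x of p(x) * log2 p(x)."""
        counts = Counter(data)
        n = len(data)
        return -sum((c / n) * math.log2(c / n) for c in counts.values())

    msg = b"AAAABBBCCD"   # toy source with skewed symbol frequencies
    h = shannon_entropy(msg)
    print(f"entropy: {h:.3f} bits/symbol")
    print(f"lower bound: {h * len(msg) / 8:.1f} bytes"
          f" vs {len(msg)} raw bytes")

For this skewed distribution the entropy is well below 8 bits per symbol, so a good entropy coder (e.g., Huffman or arithmetic coding) can shrink the message, but no lossless coder can beat the entropy bound on average.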