Investigation of acoustic communication

Investigation of acoustic communication in an ad hoc swarm-network. Studienarbeit by Sascha Friedrich, Fakultät für Wirtschaftswissenschaften, degree program Wirtschaftsingenieurwesen, submitted on 31.03.2013 to the Institut für Angewandte Informatik und Formale Beschreibungsverfahren of the Karlsruher Institut für Technologie. Supervisor: PD Dr.-Ing. Sanaz Mostaghim



Contents

1 Introduction
2 Physical background
  2.1 Sine waves
  2.2 Traveling waves as transverse and longitudinal waves
    2.2.1 Transverse and longitudinal waves
    2.2.2 Traveling waves
  2.3 Standing waves and definitions
    2.3.1 Standing waves
    2.3.2 Fundamental and harmonics
    2.3.3 Reflection and transmission
    2.3.4 Energy, power, intensity
    2.3.5 Absorption
    2.3.6 Refraction
    2.3.7 Dispersion
    2.3.8 Resonance
    2.3.9 Fourier analysis and frequency spectrum
    2.3.10 Huygens-Fresnel principle, interference and diffraction
    2.3.11 Doppler effect
3 Biology - Evolution and application of acoustic communication
  3.1 Signal transduction techniques
  3.2 Evolution of acoustic communication
  3.3 Acoustic operators in nature
  3.4 Infra- and ultrasound
  3.5 The human ear
  3.6 CIRCE
4 Underwater acoustic communication
  4.1 Electromagnetic waves in water
  4.2 Challenges in underwater acoustic communication
  4.3 Deep water and shallow water acoustic communication
  4.4 Network
  4.5 Network stack
    4.5.1 Data link layer: Multiple access protocols
    4.5.2 Data link layer: Medium access control protocols
    4.5.3 Data link layer: Error control protocols
    4.5.4 Network layer: Routing
    4.5.5 How to improve UAN
  4.6 Acoustic modem
5 Terrestrial acoustic swarm communication
  5.1 Challenges
  5.2 Hardware
  5.3 Historical side note: Acoustic coupler
6 Conclusion
7 Appendix


1 Introduction

This paper investigates the use of acoustic communication in an ad hoc terrestrial swarm-network. The goal of this investigation is to find an energy-efficient algorithm and hardware implementation that establishes a low-cost, energy-efficient, reliable, ad hoc network for a massive number of mobile swarm-agents. Possible applications of this type of swarm-network are, for example, localization and agent tracking.

Today, terrestrial communication is dominated by the use of electromagnetic waves (EM waves) such as radio, infrared and visible light (in Chapter 4 we will explain the properties of the different waves). The largest part of the used wave spectrum consists of radio waves. In biology, however, with its evolution over billions of years, electromagnetic communication at wavelengths in the order of radio waves is hardly applied at all, apart from the possibility that some insects and bats may use it. Is there a reason, or is it only chance? Although creatures can sense electromagnetic waves in the form of visible light, and some animals such as bees even ultraviolet light, the majority of creatures have no equivalent means of transmitting that type of wave. Only a few creatures, such as the anglerfish in the deep sea and the firefly on land, have the ability to emit light signals via bioluminescence for hunting or mating.

Maybe acoustic communication offers advantages that electromagnetic waves cannot. One well-known fact, for example, is that electromagnetic waves do not travel well through media like (sea)water (in Chapter 4 we will explain why). Maybe there is a relationship between the missing evolutionary step towards EM communication and the long period during which all creatures lived in the sea. On the other hand, sensing of visible light was established even in this long period. Light and radio are short-ranged under water, whereas pressure waves such as sound waves allow communication over massively greater distances.

In air and in vacuum this relation is reversed. There, EM waves travel far more easily than pressure waves, especially in outer space, where no medium for pressure waves is available. Nevertheless, successful species like humans use acoustic communication for bidirectional information exchange without further technology (an example of further technology would be mail or books, which enable indirect bidirectional communication).

Since there could be advantages to acoustics, this paper examines the pros and cons of this type of communication and investigates whether it can be implemented in swarm-networks.


Paper walkthrough:

Chapter 2 covers the physical background necessary to understand the different kinds of waves. Beginning with traveling waves, we bring up general wave theory and its definitions for characterizing waves. Based on standing waves, the second part of Chapter 2 introduces the properties that waves exhibit; among other phenomena, attenuation and the Doppler effect are two parts of this section.

In Chapter 3, the role of acoustic waves for communication in biology is discussed and compared to other kinds of information transfer. In particular, two animals, namely bats and elephants, will be analyzed.

From biology to technology: in Chapter 4, underwater acoustics (UWA) is explained, which is a dominant area of acoustic communication. Adapting UWA to terrestrial environments and transferring its advantages, such as low energy consumption and low cost, is explained in Chapter 5 of this paper. Furthermore, the advantages and disadvantages of acoustics in this environment will be discussed.

In Chapter 6 we summarize acoustic communication in an ad hoc network and discuss its possible application in a low-power swarm.


2 Physical background

To fully understand acoustic communication, we need to investigate the physics of pressure waves or, more generally, the physics of waves. Because all possible waveforms can be approximated through superposition of sine waves in a Fourier analysis (we will explain this later), the first section describes the simplest form of a wave, the sine wave. Afterwards we look at propagating waves, traveling waves, and compose two of them, in the third section, into a standing wave. Finally, we distinguish different kinds of waves, such as EM waves, pressure waves, et cetera.

2.1 Sine waves

The following figure shows a typical sine wave and its properties. Subsequently we outline the different variables for future usage.

Figure 1: Sine wave

• Amplitude (A): "Amplitude, in physics, the maximum displacement or distance moved by a point on a vibrating body or wave measured from its equilibrium position."[30]

• Phase (φ): "Phase in sinusoidal functions or in waves has two different, but closely related, meanings. One is the initial angle of a sinusoidal function at its origin and is sometimes called phase offset. Another usage is the fraction of the wave cycle which has elapsed relative to the origin."[6]

• Wavelength (λ): "Wavelength, distance between corresponding points of two consecutive waves. 'Corresponding points' refers to two points or particles in the same phase, i.e., points that have completed identical fractions of their periodic motion. Usually, in transverse waves (waves with points oscillating at right angles to the direction of their advance), wavelength is measured from crest to crest or from trough to trough; in longitudinal waves (waves with points vibrating in the same direction as their advance), it is measured from compression to compression or from rarefaction to rarefaction."[24]

• Period (T): "The period is the duration of one cycle in a repeating event, so the period is the reciprocal of the frequency."[86]

• Frequency (f): "Frequency, in physics, number of waves that pass a fixed point in unit time; also the number of cycles or vibrations undergone during one unit of time by a body in periodic motion. A body in periodic motion is said to have undergone one cycle or one vibration after passing through a series of events or positions and returning to its original state."[20]

• Velocity / wave velocity (v): "Wave velocity, distance traversed by a periodic, or cyclic, motion per unit time (in any direction). Wave velocity in common usage refers to speed, although, properly, velocity implies both speed and direction."[22] In Chapter 2.3.7 we will distinguish between phase velocity and group velocity.

• Angular frequency (ω): "In physics, angular frequency ω (also referred to by the terms angular speed, radial frequency, circular frequency, orbital frequency, radian frequency, and pulsatance) is a scalar measure of rotation rate. Angular frequency (or angular speed) is the magnitude of the vector quantity angular velocity. The term angular frequency vector is sometimes used as a synonym for the vector quantity angular velocity."[49]

• Wavenumber (k): "In the physical sciences, the wavenumber is the spatial frequency of a wave. It can be envisaged as the number of waves that exist over a specified distance (analogous to frequency being the number of wave oscillations that take place in a specified time). More formally, it is the reciprocal of the wavelength. It is also the magnitude of the wave vector."[50]

There are some important relations between these variables that we need to know:

f = 1 / T    (1)

λ = v · T = v · 2π / ω    (2)

k = 2π / λ    (3)
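Relations (1)-(3) can be checked numerically. The following small Python sketch uses assumed example values (a 440 Hz tone in air at roughly 20 °C; both values are illustrative choices, not part of the text above):

```python
# Relations (1)-(3): f = 1/T, lambda = v*T = v*2*pi/omega, k = 2*pi/lambda,
# checked for a hypothetical 440 Hz tone in air.
import math

f = 440.0                     # frequency in Hz (assumed example value)
v = 343.0                     # speed of sound in air at ~20 C, m/s (assumed)
T = 1.0 / f                   # period, relation (1)
wavelength = v * T            # wavelength, relation (2), first form
omega = 2 * math.pi * f       # angular frequency
k = 2 * math.pi / wavelength  # wavenumber, relation (3)

# second form of (2): lambda = v * 2*pi / omega
assert abs(wavelength - v * 2 * math.pi / omega) < 1e-12
# omega = k * v ties all three relations together
assert abs(omega - k * v) < 1e-9
```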


2.2 Traveling waves as transverse and longitudinal waves

2.2.1 Transverse and longitudinal waves

A wave can oscillate in two different directions, and therefore we have to distinguish between transverse waves and longitudinal waves (l-waves).

• Transverse wave: "Transverse wave, motion in which all points on a wave oscillate along paths at right angles to the direction of the wave's advance. Surface ripples on water, seismic S (secondary) waves, and electromagnetic (e.g., radio and light) waves are examples of transverse waves."[23]

• Longitudinal wave (l-wave): "Longitudinal wave, wave consisting of a periodic disturbance or vibration that takes place in the same direction as the advance of the wave. A coiled spring which is compressed at one end and then released experiences a wave of compression that travels its length, followed by a stretching; a point on any coil of the spring will move with the wave and return along the same path, passing through the neutral position and then reversing its motion again."[21]

To transform a longitudinal sound (pressure / acoustic) wave into a transverse wave, we look at pressure levels over time instead of molecule positions over time. In this way a longitudinal wave can be handled like a transverse wave. Figure 2 shows the mapping of a longitudinal pressure wave to a transverse wave.

Figure 2: Mapping an l-wave to a transverse wave


2.2.2 Traveling waves

Whether longitudinal or transverse, if a wave propagates in space with local displacements as a function of time and distance, we are dealing with a traveling wave.

• Traveling wave: A traveling wave is a wave that propagates its shape with velocity v in a designated direction. Energy moves in the direction of propagation. If a molecule moves away, it will recover its position later. Therefore the wave propagates, but no mass follows. In the case of l-waves, mass actually moves back and forth, but it never loses track of its initial position.

To get the simplest one-dimensional traveling wave, you can take a rope, tie one end to a pole and move the other end of the rope once up and down, back to its initial position. In this way a 1D version of a traveling wave can be seen. Without further oscillation, only one wave will travel through the rope; the rope will be straight and tight in front of and behind the wave. Figure 3 illustrates this behavior.

Figure 3: Traveling wave modeled through connected molecules


Mathematically, a traveling sine wave propagates in the positive or negative x-direction. It can therefore be formulated as a traveling wave to the "left" or to the "right"; especially in the next section (standing waves) this will be important. The following equations, for the 1D case of a traveling sine wave, will be used frequently in this paper.

Traveling wave to the right:

y(x, t) = A sin( (2π/λ) (x − vt) )    (4)

Traveling wave to the right (simplified):

y(x, t) = A sin(kx − ωt)    (5)

with k = 2π/λ and ω = kv.

Traveling wave to the left (simplified):

y(x, t) = A sin(kx + ωt)    (6)
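Equation (5) describes a shape that moves rigidly to the right: the profile at time t is the t = 0 profile shifted by vt. A minimal Python sketch (amplitude, wavelength and speed are arbitrary illustrative values) makes this explicit:

```python
# Check that the rightward traveling wave (5) satisfies y(x, t) = y(x - v*t, 0),
# i.e. the whole profile shifts right with speed v.
import math

A, wavelength, v = 1.0, 2.0, 5.0       # assumed example values
k = 2 * math.pi / wavelength           # wavenumber, relation (3)
omega = k * v                          # angular frequency

def y_right(x, t):
    # Equation (5): wave traveling in the +x direction
    return A * math.sin(k * x - omega * t)

t = 0.3
for x in (0.0, 0.25, 1.1):
    # the profile at time t equals the t = 0 profile shifted right by v*t
    assert abs(y_right(x, t) - y_right(x - v * t, 0.0)) < 1e-12
```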


2.3 Standing waves and definitions

2.3.1 Standing waves

Now that we know what a traveling wave is and which properties it has, it is time to go a step further, to standing waves. To get a standing wave, we need two traveling waves moving in opposite directions. As mentioned before, these two traveling waves can be defined mathematically as y(x, t) = A sin(kx − ωt) and y(x, t) = A sin(kx + ωt). For easier reading, we will use the notation Trav_right and Trav_left. We use superposition to combine these two waves:

y_tot = Trav_left + Trav_right = 2A sin(kx) cos(ωt)    (7)

Furthermore, both waves must have the same amplitude, phase and frequency; otherwise a partial standing wave would be created. A standing sound wave can be made by two loudspeakers facing each other, or, for a given system such as a partially closed cylinder with one open and one closed end, via boundary conditions combined with the right frequencies (multiples of the fundamental). In contrast to a traveling wave, the time and position variables are coupled multiplicatively. As a consequence, there are locations in the superimposed wave with no movement at all: at points with sin(kx) = 0 we get static points called nodes. The opposite of nodes are antinodes, the points of the wave with the highest displacement. In reality it is very difficult to achieve a pure standing wave; most created waves are partial standing waves, which consist of a traveling wave superimposed on a standing wave.
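The superposition in (7) is just the trigonometric identity sin(a) + sin(b) = 2 sin((a+b)/2) cos((a−b)/2). A short Python sketch (with arbitrary example values for A, k and ω) verifies the identity and the node condition sin(kx) = 0:

```python
# Verify equation (7): sin(kx - wt) + sin(kx + wt) = 2 sin(kx) cos(wt),
# and that nodes sit where sin(kx) = 0.
import math

A, k, omega = 1.0, 2.0, 3.0   # arbitrary illustrative values

for x in (0.1, 0.7, 2.3):
    for t in (0.0, 0.4, 1.5):
        trav_right = A * math.sin(k * x - omega * t)
        trav_left = A * math.sin(k * x + omega * t)
        standing = 2 * A * math.sin(k * x) * math.cos(omega * t)  # eq. (7)
        assert abs((trav_right + trav_left) - standing) < 1e-12

# nodes: x = n*pi/k has zero displacement at every time t
x_node = math.pi / k
assert abs(2 * A * math.sin(k * x_node)) < 1e-12
```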

Figure 4: Harmonic wave [51]


2.3.2 Fundamental and harmonics

To distinguish the different harmonic waves, they are divided into harmonics/overtones. Every harmonic has a frequency that is a multiple of the fundamental frequency. "The fundamental frequency, often referred to simply as the fundamental, is defined as the lowest frequency of a periodic waveform."[85] But why do we mention harmonics and fundamentals? Because in acoustic communication we need to consider overtones¹.

Figure 5: Harmonics - nodes at the ends [84]

¹"Overtone, in acoustics, tone sounding above the fundamental tone when a string or air column vibrates as a whole, producing the fundamental, or first harmonic. If it vibrates in sections, it produces overtones, or harmonics. The listener normally hears the fundamental pitch clearly; with concentration, overtones may be heard. Harmonics are a series of overtones resulting when the frequencies are exact multiples of the fundamental frequency. The frequencies of the upper harmonics form simple ratios with the frequency of the first harmonic (e.g., 2:1, 3:1, 4:1)."[35]


Example: Pipe with two open ends

Boundary conditions:
• atmospheric pressure at the open end (pressure node) at x = 0.
• atmospheric pressure at the open end (pressure node) at x = L.

Facts:
• length of the pipe = L
• speed of sound (0 °C) = 331.3 m/s
• amplitude = A

Calculation:

y(0, t) = 0
y(L, t) = 0

0 = 2A sin(k · 0) cos(ωt)
0 = 2A sin(kL) cos(ωt)

But this must hold independently of t:

0 = 2A sin(kL)
0 = sin(kL)

k = nπ / L

λ = 2π / k = 2L / n

ω = kv = nπv / L

f_n = nv / (2L)

Result: harmonic frequencies f_n = nv / (2L); the fundamental is f_1 = v / (2L).
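The result f_n = nv/(2L) is easy to tabulate. A small Python sketch (the 0.5 m pipe length is an assumed example value; the speed of sound 331.3 m/s at 0 °C is taken from the facts above):

```python
def pipe_open_open_frequencies(L, v=331.3, n_max=4):
    """Harmonic frequencies f_n = n*v/(2L) of a pipe open at both ends.
    L in metres; v is the speed of sound (331.3 m/s at 0 degrees C)."""
    return [n * v / (2 * L) for n in range(1, n_max + 1)]

freqs = pipe_open_open_frequencies(L=0.5)   # hypothetical 0.5 m pipe
# the fundamental is f_1 = v/(2L), and every harmonic is an integer
# multiple of it, as derived above
assert abs(freqs[0] - 331.3 / (2 * 0.5)) < 1e-9
assert all(abs(f - (i + 1) * freqs[0]) < 1e-9 for i, f in enumerate(freqs))
```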


2.3.3 Reflection and transmission

In the last section we introduced standing waves as the superposition of two traveling waves with opposite velocity vectors. If a traveling wave, for example a Trav_right, moves from one medium (medium 1) into another (medium 2), reflection and transmission can occur. One part of the amplitude travels onward through medium 2; the part of the amplitude that is reflected by medium 2 travels back as a Trav_left. And not only the amplitude changes: the wavelength will be different, too.

y_incident(x, t) = A_incident sin(ω1 t − k1 x)    (8)

y_reflected(x, t) = A_reflected sin(ω1 t + k1 x)    (9)

y_transition(x, t) = A_transition sin(ω1 t − k2 x)    (10)

A_incident = A_transition + A_reflected    (11)

Because the media are connected, the following conditions must hold at the transition from y_incident(x, t) to y_transition(x, t):

1. continuity: y_incident(x_connect, t) = y_transition(x_connect, t)

2. differentiability: ∂y_incident(x_connect, t)/∂x = ∂y_transition(x_connect, t)/∂x

If the velocity in medium 2 differs from that in medium 1, then A_transition will differ from A_incident and reflection will happen, too. But if the velocity in both media is the same, no reflection occurs and the whole amplitude passes into the other medium. To characterize the degree of reflection and transmission, we need the following formulas:

Reflection coefficient: A_r / A_i = (v1 − v2) / (v1 + v2)    (12)

Transmission coefficient: A_t / A_i = 2 v2 / (v1 + v2)    (13)

It is crucial to notice that the reflection coefficient is sign-sensitive. The sign reflects the fact that the reflected traveling wave can be π out of phase and therefore negative.
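Coefficients (12) and (13) are consistent with the amplitude relation (11), since their sum is always 1. A minimal Python sketch (the wave speeds are arbitrary example values):

```python
def reflection_transmission(v1, v2):
    """Amplitude coefficients (12) and (13) for a wave arriving from
    medium 1 (speed v1) at the boundary to medium 2 (speed v2)."""
    r = (v1 - v2) / (v1 + v2)   # A_r / A_i, sign-sensitive
    t = 2 * v2 / (v1 + v2)      # A_t / A_i
    return r, t

r, t = reflection_transmission(v1=2.0, v2=1.0)   # slower second medium
assert r > 0 and 0 < t < 1
assert abs((r + t) - 1.0) < 1e-12   # consistent with A_i = A_t + A_r, eq. (11)

r2, t2 = reflection_transmission(v1=1.0, v2=1.0)
assert r2 == 0.0 and t2 == 1.0      # equal speeds: no reflection at all
```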


Figure 6: Reflection and transmission for a traveling wave with v1 < v2 [42]

Figure 7: Reflection and transmission for a traveling wave with v1 > v2 [42]


2.3.4 Energy, power, intensity

Energy is a very important quantity for communication purposes, since the energy needed to create waves is drawn from the potential chemical energy of a battery. That is why we consider the energy properties of all kinds of waves (EM, pressure, et cetera). First of all, a transverse wave moves molecules locally in a distinct direction, and therefore kinetic energy E_kinetic must be present. Second, molecules are shifted from one potential to another, and as a consequence the potential energy of these molecules changes. But it is important to note that molecules only move locally: energy travels with the wave, but no mass is really moving in that direction (there are only local displacements).

Summarized:

E_total = E_potential + E_kinetic    (14)

For a Trav_right in a rope with mass m, tension T and linear mass density μ, the energy of one wavelength λ is:

v_y² = (∂y(x, t)/∂t)² = (−Akv cos(k(x − vt)))²    (15)

E_kinetic = ∫ (1/2) v_y² dm = (1/2) μ A² k² v² ∫₀^λ cos²(k(x − vt)) dx    (16)

E_kinetic = A² π² T / λ    (17)

E_potential = E_kinetic = A² π² T / λ    (18)

E_total = 2 A² π² T / λ    (19)

In contrast to a traveling wave, a standing wave only possesses E_potential:

E_total^standing = E_potential^standing = A² π² T / λ    (20)

The power² that is needed to create a Trav_right is (one wavelength of energy passes a point in the time λ/v = 1/f):

P = δE/δt = E_total / (λ/v) = (2 A² π² T / λ) · (v/λ) = (2 A² π² T / λ) · f    (21)

As a side note for engineers: E_total ∝ A² can be applied.

²Energy per unit time (δE/δt)


The intensity of a wave is defined as the power of the wave per unit area.

Intensity for a spherical sound source:

A_sphere = 4πr²

I = P / A

I ∝ 1 / r²    (22)

But sound intensity is not the same as sound pressure. "Hearing for example is directly sensitive to sound pressure."[64] With p = pressure:

I ∝ p², so p ∝ 1 / r    (23)

Intensity for a cylindrical sound source (line array of sources):

A_cylinder = 2πrl

I = P / A

I ∝ 1 / (rl)    (24)
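The inverse-square law (22) and the pressure relation (23) can be illustrated in a few lines of Python (a 1 W source power is an assumed example value):

```python
# Spherical spreading: I = P / (4*pi*r^2), eq. (22); pressure falls as 1/r, eq. (23).
import math

def intensity_spherical(P, r):
    return P / (4 * math.pi * r ** 2)

P = 1.0                               # watt, assumed example value
I1 = intensity_spherical(P, 1.0)
I2 = intensity_spherical(P, 2.0)

# doubling the distance quarters the intensity (I ~ 1/r^2) ...
assert abs(I1 / I2 - 4.0) < 1e-12
# ... while sound pressure, with I ~ p^2 and hence p ~ 1/r, only halves
p1, p2 = math.sqrt(I1), math.sqrt(I2)
assert abs(p1 / p2 - 2.0) < 1e-12
```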


2.3.5 Absorption

If a wave travels through a frictional medium, its amplitude will decrease. When a medium exhibits only tiny friction for a wave, it is transparent for this kind of waveform; otherwise the medium is opaque for this kind of waveform (mechanical or electromagnetic).[25] The transition from a medium 1 to a medium 2 can also entail a dramatic change in friction for a traveling wave, or vice versa. Quantitatively, this fact is described by:

Absorp_λ = −log₁₀( I_transition / I_induced )    (25)

I_induced is the intensity of the wave before it enters medium 2 and I_transition is the intensity of the wave after it has passed through.
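Equation (25) is a logarithmic attenuation measure, so each factor of ten in intensity loss adds one unit. A two-line Python sketch with illustrative intensity values:

```python
# Equation (25): Absorp = -log10(I_transition / I_induced)
import math

def absorbance(I_induced, I_transition):
    return -math.log10(I_transition / I_induced)

# a medium letting through 1% of the incoming intensity has absorbance 2
assert abs(absorbance(1.0, 0.01) - 2.0) < 1e-12
# a fully transparent medium (no intensity loss) has absorbance 0
assert absorbance(1.0, 1.0) == 0.0
```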

2.3.6 Refraction

This topic is very important for underwater acoustic communication. In this environment sound waves are reflected and refracted by various surfaces (the water surface, the bottom, obstacles and different pressure layers of water), which complicates signal decoding for received transmissions. But what does refraction mean? Refraction is the notion for a change in direction of a wave that passes through an inhomogeneous medium or transitions from a medium 1 to a medium 2. To picture refraction, you can visualize a car driving on a dirt road and then entering a tarred street at an angle (not perpendicularly). One tire reaches the good road first and rotates faster than the other tire, which is still on the bad road; therefore the car changes its direction. This illustrates refraction.

Mathematically it is described by Snell’s law:

sin(σ1) / sin(σ2) = v1 / v2 = n2 / n1    (26)
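Snell's law (26) can be turned into a small Python helper. Note that when the wave enters a faster medium, incidence angles beyond the critical angle yield no transmitted wave (total reflection); the air and water sound speeds below are approximate illustrative values:

```python
# Snell's law (26): sin(theta1)/sin(theta2) = v1/v2.
import math

def refraction_angle(theta1_deg, v1, v2):
    """Returns the refraction angle theta2 in degrees,
    or None in the case of total reflection."""
    s = math.sin(math.radians(theta1_deg)) * v2 / v1
    if abs(s) > 1.0:
        return None  # no transmitted wave beyond the critical angle
    return math.degrees(math.asin(s))

# sound entering a faster medium bends away from the normal
theta2 = refraction_angle(10.0, v1=340.0, v2=1480.0)  # air -> water, approx.
assert theta2 is not None and theta2 > 10.0
# a steeper incidence into the much faster medium is totally reflected
assert refraction_angle(30.0, v1=340.0, v2=1480.0) is None
```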

Figure 8: Khan Academy: Snell’s law and refraction - Dirty road example can be seen [39]


2.3.7 Dispersion

First of all, we have to introduce the concepts of phase velocity and group velocity in order to understand the definition of dispersion.

Phase velocity is the velocity of a crest in a wave. The "group velocity of a wave is the velocity with which the overall shape of the wave's amplitudes, known as the modulation or envelope of the wave, propagates through space."[52]

If we take two traveling waves Trav_right1 and Trav_right2, which both propagate in the same direction, we derive the following mathematical definitions:

y_total(x, t) = y1 + y2 = 2A sin( ((k1 + k2)/2) x − ((ω1 + ω2)/2) t ) · cos( (Δk/2) x − (Δω/2) t )    (27)

The factor 2A sin(k̄x − ω̄t), with the mean values k̄ = (k1 + k2)/2 and ω̄ = (ω1 + ω2)/2, can be seen as a traveling wave, and cos((Δk/2) x − (Δω/2) t) as the envelope that propagates through space.

Therefore the phase velocity and the group velocity are:

v_p = ω / k    (28)

v_g = Δω / Δk    (29)
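Definitions (28) and (29) can be tried out on a concrete dispersion relation. The sketch below assumes, purely as a standard textbook example, deep-water gravity waves with ω(k) = √(gk), as in Figure 9; for them the envelope moves at exactly half the crest speed:

```python
# Phase velocity (28) and group velocity (29) for the assumed dispersion
# relation omega(k) = sqrt(g*k) (deep-water gravity waves).
import math

g = 9.81  # m/s^2

def omega(k):
    return math.sqrt(g * k)

k = 2 * math.pi / 100.0                 # a 100 m wavelength component
vp = omega(k) / k                        # phase velocity, eq. (28)
dk = 1e-8
vg = (omega(k + dk) - omega(k - dk)) / (2 * dk)  # eq. (29) as a difference quotient

# analytically d(omega)/dk = (1/2)*sqrt(g/k) = vp/2 for this relation
assert abs(vg - vp / 2.0) < 1e-4
```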

Figure 9: Phase velocities and group velocity - "Frequency dispersion of surface gravity waves on deep water. The superposition of three wave components, with respectively 22, 25 and 29 wavelengths fitting in a horizontal domain of 2,000 meter length, is shown. The wave amplitudes of the components are respectively 1, 2 and 1 meter. The differences in wavelength and phase speed of the components results in a changing pattern of wave groups, due to amplification where the components are in phase, and reduction where they are in anti-phase." [52]


Figure 10: A dispersive prism [75]

Having illustrated group and phase velocity, we can define the term dispersion: "Dispersion, in wave motion, any phenomenon associated with the propagation of individual waves at speeds that depend on their wavelengths."[34] "The name 'dispersion relation' originally comes from optics. It is possible to make the effective speed of light dependent on wavelength by making light pass through a material which has a non-constant index of refraction, or by using light in a non-uniform medium such as a waveguide. In this case, the waveform will spread over time, such that a narrow pulse will become an extended pulse[...]"[88]

Since sound waves interact with a medium, we have to take dispersion into account.


2.3.8 Resonance

As in other disciplines, resonance plays an important role in every technology, because resonance has an amplifying effect: it can be utilized to push a weak signal to a higher level. But we should keep in mind that resonance also has a destructive side if the resulting amplitude is too strong for a given system. To make a frictionless mass-spring system resonate, we have to drive the system at a frequency identical to its characteristic frequency; thereby we get the highest deflection for this system. In acoustics, resonators are used to exploit this amplifying effect.

Resonator: "[...]acoustical device for reinforcing sound, as the sounding board of a piano, the belly of a stringed instrument, the air mass of an organ pipe, and the throat, nose, and mouth cavities of a vocal animal. In addition to augmenting acoustic power, resonators may also, by altering relative intensities of overtones, change the quality of a tone. See also soundboard. The Helmholtz resonator is an enclosed volume of air communicating with the outside through a small opening. The enclosed air resonates at a single frequency that depends on the volume of the vessel and the geometry of its opening."[26]

Figure 11: Resonance - amplitudes and frequencies [53]


2.3.9 Fourier analysis and frequency spectrum

In acoustics, sound waves most often do not have the shape of pure sine waves. Fortunately, every wave can be decomposed into several sine and cosine waves which approximate the original wave. This process is called Fourier analysis. Instruments like a violin produce sound through strings with the boundary conditions of two fixed ends. This means a standing wave is produced in the string, which compresses and decompresses the air. Because the surface of a string is not very large, the created alterations in pressure level have a small intensity. To produce a louder sound (higher amplitude), a resonator is needed, as mentioned in the subsections before. The resonator increases the surface of the vibration, which can now move more air. But the sound wave produced by the string and resonator contains several frequencies with different amplitudes. A tuned violin, for example, plays a pure harmonic with a high amplitude combined with other harmonics and some inharmonics; these harmonics and inharmonics together provide the characteristic tone of an instrument. In acoustics the harmonic spectrum often plays an important role, owing to the fact that many "oscillators, including the human voice, a bowed violin string, or a Cepheid variable star, are more or less periodic, and so composed of harmonics."[54]
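The square wave of Figures 12 and 13 is the classic example: its Fourier series contains only the odd harmonics, with amplitudes falling off as 4/(πn). The pure-Python sketch below (a plain DFT, no libraries; the sample count is an arbitrary choice) checks this numerically:

```python
# Fourier analysis of one period of a square wave via a plain DFT.
import cmath
import math

N = 1024
square = [1.0 if n < N // 2 else -1.0 for n in range(N)]

def dft_magnitude(signal, h):
    """Amplitude of harmonic h of a periodic, real-valued signal."""
    length = len(signal)
    c = sum(signal[n] * cmath.exp(-2j * math.pi * h * n / length)
            for n in range(length))
    return 2 * abs(c) / length

# the square wave contains only odd harmonics, with amplitude ~ 4/(pi*n) ...
for n in (1, 3, 5):
    assert abs(dft_magnitude(square, n) - 4 / (math.pi * n)) < 0.01
# ... while the even harmonics vanish
for n in (2, 4, 6):
    assert dft_magnitude(square, n) < 0.01
```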

Figure 12: Fourier analysis of a square wave [55]

Figure 13: Spectrum of a square wave [41]


2.3.10 Huygens-Fresnel principle, interference and diffraction

The Huygens-Fresnel principle explains how wave propagation can be modeled via point sources of spherical waves. A (new) wavefront acts as the origin of many point sources of spherical waves; the tangent of these spherical waves forms a new wavefront in the direction of propagation. This new wavefront in turn acts as the origin of new point sources of spherical waves for the next iteration. The spherical waves interfere with each other, which in turn creates wavefronts and zones of higher and lower amplitude.

Diffraction: If, for example, a water wave hits an obstacle, as shown in Figure 15, it will not cut off at the edge, but rather travels in every direction behind the obstacle. This property of waves is responsible for smooth shadows and for perceiving sound even around a corner. The spreading applies to all kinds of waves (EM waves, pressure waves, ...).

Figure 14: Refraction explained via the Huygens-Fresnel principle [56]

Figure 15: Diffraction of plane water waves [56]


2.3.11 Doppler effect

To observe a Doppler effect, we need a sound source that continuously creates spherical waves. If the sound source does not move at all, the sound can be sensed with the same frequency at every location, at any time. But once the sound source is in motion, the sensed frequency increases at all locations in the direction of the velocity vector of the sound emitter: the sound source moves in the same direction as the sound waves and thereby shortens the gap between successive wavefronts. That means sound will be perceived with a higher frequency if the sound emitter travels and the sensing location lies in the direction of propagation of the sound source. In the opposite direction of the velocity vector of the emitting source, the sensed frequency decreases, since at these locations the gap is enlarged. This effect of increasing and decreasing frequency due to movement is called the Doppler effect. As a special side note for pressure waves in air, Mach 1 is reached when the velocity of the sound source exceeds the velocity of sound in air. All created sound waves then travel behind the emitting source, and no wave creates turbulence in front of it anymore (shown in Figure 16). The Doppler effect is used in radar, flow measurement, sonar, and in astronomy as redshift or blueshift, et cetera. The Doppler effect is a problem in communication, but can be handled via Doppler shift compensation³.
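The shortening and stretching of the wavefront gaps described above is captured by the textbook relation f' = f · v / (v − v_s) for a stationary observer and a source moving directly towards (v_s > 0) or away from (v_s < 0) the observer; this formula is not stated in the text above and is added here only as an illustrative sketch:

```python
def doppler_observed(f_source, v_source, v_sound=343.0):
    """Frequency heard by a stationary observer: f' = f * v / (v - v_s).
    Positive v_source: source approaches; negative: source recedes.
    Valid only below Mach 1, i.e. for |v_source| < v_sound."""
    return f_source * v_sound / (v_sound - v_source)

f = 1000.0                                  # Hz, assumed example tone
assert doppler_observed(f, 34.3) > f        # approaching: pitch rises
assert doppler_observed(f, -34.3) < f       # receding: pitch falls
assert doppler_observed(f, 0.0) == f        # source at rest: unchanged
```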

Figure 16: Doppler effect: f = const. in (left); change in f (mid); reaching Mach 1 (right) [57]

³"When an echolocating bat approaches a target, its outgoing sounds return as echoes, which are Doppler shifted upward in frequency. In certain species of bats, which produce constant frequency (CF) echolocation calls, the bats compensate for the Doppler shift by lowering their call frequency as they approach a target. This keeps the returning echo in the same frequency range of the normal echolocation call. This dynamic frequency modulation is called the Doppler shift compensation (DSC), and was discovered by Hans Schnitzler in 1968."[1][58]


Figure 17: Doppler frequency shift for different locations of an observer: from red to blue, the distance from the sound source at v = 0 to an observer is shorter [48]


3 Biology - Evolution and application of acoustic communication

Biology has generated many optimized creatures, which have adapted excellently to their environment. If we want to find a solution that performs well, we should look at nature to find a possibly already existing answer to our problems. In contrast to our human knowledge, nature had billions of years to optimize by trial and error.
That is the reason why this chapter covers signal transduction techniques and instruments of living beings that are in some way special and perhaps helpful for this paper.
First of all, we will look at transduction techniques in nature. Subsequently, we will discuss advantages and disadvantages of these techniques. A brief insight into evolution and some special animals will follow. We will especially have a closer look at a bionic project (CIRCE), which tries to emulate a bat.

3.1 Signal transduction techniques

To communicate information or to distribute signals, the following techniques are prevalent in the realm of living things: acoustic, chemical and optical.

Chemical communication, or rather signal transmission, is used by a high proportion of creatures. The main applications for this kind of signal are sexual enticement (pheromones) and territory marking. For insects like ants, it is also used for optimization in path finding (chemicals like pheromones can increase or decrease signal levels, because they are accumulative as well as evaporative over time).
Chemical signals are very persistent and do not need a line of sight. This is an enormous advantage if information should last for longer terms and / or should be received even behind obstacles and at greater distances. The disadvantage of this is that no selective communication can be established; the information acts only like a broadcast in a computer network.
Because every individual creates its own, very distinct chemical signature, pheromones can be applied for authorization purposes. To indicate affiliation between individuals, a shared group scent is a typical means to this end.
However, the passive message transfer is a great issue for this type of communication. Only if another individual gets in range of observable scents can it perceive the message that is left behind. Certainly, wind can help to spread the signal, but the producer has no control over direction and duration.


Optical communication is another way to distribute information. Light emission (bioluminescence)4 is only marginally meant in this context; light reflection is the more common case. Body colors, gesture and facial expression can reliably transport information, but only in the line of sight. This is a major disadvantage for broadcasts, but convenient for designated information transfer. A dog, for example, lowers its tail to show its inner emotional state to others in its nearby neighborhood.
In nature, this kind of transduction technique is used for communication rather in the near field.
Optical signals, like facial expression and gesture, are most often only temporarily available. Long-lasting signals, like plumage color etcetera, can be utilized for long-dated messages.
Some fishes in the deep sea and the firefly can even emit active light for sexual enticement and hunting. But why do deep sea fishes utilize light? Possible reasons are the special conditions in deep seawater: there is no light, pressure is high and radiated light is very short ranged. Light is conspicuous in this environment and therefore curious prey can easily be caught.

Acoustic communication is the first of all three techniques that is applied for broader information exchange between creatures, if we except passive exchange via books and other tools. The best example is the human being with his ability to exchange a great amount of information through speech alone. That of course does not only require pressure waves; a vocabulary and a grammar are needed too. Nonetheless, acoustic transmission is quite often found in animals too. If we look at birds, dolphins, whales, dogs etcetera, all of them use some kind of speech to exchange information.
But why do we produce sounds to transmit information? A major advantage is that lots of information can be transmitted. Furthermore, sound needs no line of sight, but can be applied directionally if preferred (speaking cone). It can also be used very flexibly.
Sound consists of pressure waves and therefore needs a medium to propagate. On earth, that is not really a challenging problem. On the other hand, long-lasting information, like a signpost or warning label, cannot be created, owing to the evanescent nature of sound. Information is only temporarily available, but while obtainable, the sound source calls attention to itself (broadcast). That is not always a good feature if you belong to the class of prey. Therefore, animals often apply sound only after a closer look at the nearby environment. Or they use sound for localization of other individuals and switch in the more dangerous close range to optical communication.
Another disadvantage of sound waves is the interference of the original information with environmental noise and waves from other sound sources.

4 ”bioluminescence, the emission of light by an organism or by a test-tube biochemical system derived from an organism. It could be the ghostly glow of bacteria on decaying meat or fish, the shimmering phosphorescence of protozoans in tropical seas, or the flickering signals of fireflies. The phenomenon occurs sporadically in a wide range of protists and animals, from bacteria and fungi to insects, marine invertebrates, and fish; but it is not known to exist naturally in true plants or in amphibians, reptiles, birds, or mammals.”[27]


Figure 18: Firefly (left)[60] and flashlightfish (right)[59]

Table 1: Comparison of communication technologies [15]

           Advantages               Disadvantages            Application field
Chemical   large specificity,       passive transportation,  warning, sexual attracting,
           persistence              inflexible               scent, group scent
Optical    reliable                 needs line of sight      diverse applications
Acoustic   flexible, lots of        evanescent, revealing    diverse applications
           information, needs
           no line of sight


3.2 Evolution of acoustic communication

Evolution is a fascinating topic in biology. Evolution is characterized by optimal adaptation of living beings to their corresponding habitat.
We will start with the first vertebrates, fishes in the sea, which do not use acoustic communication at all, except for some rare species like snarling gouramis. Only later did amphibians, like frogs and toads, develop lungs as the first animals. They went ashore to lay their eggs in a more secure environment.
Equipped with lungs, the first self-created sounds appeared. Reptiles like crocodiles, turtles or geckos went a step further and spent more time onshore, but no remarkable progress in acoustic communication was achieved. Later generations, however, refined sound creation substantially. Mammals and especially birds can nowadays produce a variety of sounds.
There are two types of instruments (vocal apparatus) which generate tones: the syrinx5 and the larynx6.

It should be mentioned that some larynx animals readapted to the sea environment. Species like whales and dolphins turned back towards the ocean and had to adapt the apparatus to a medium with a much higher density and sound velocity. This adaptation happened through special tissue structures in both species.

In contrast to transmitting sound, perception most often developed with higher priority. But why? Prey needed the ability to detect possible safety hazards, and predators needed to ’look’ for possible victims. Instruments like membranes, outer ears etcetera were needed. But merely sensing signals does not suffice, which is why signal processing was required too. Signal processing had to fulfill the following tasks: localization, tracking, early cognition, alertness and estimating the distance to a sound source.

In fact, acoustic communication gains in importance as social complexity increases. As a single example of complex communication patterns, we will have a closer look at vervet monkeys (Chlorocebus pygerythrus).
Observations showed that these animals have mainly three predators: snakes, birds of prey and big cats. All these species mainly hunt in different locations: birds attack from above, snakes crawl in holes and big cats assault on the ground. The observations showed that vervets not only make a warning cry to alarm other members of a group; they all (the warned and the watchman) look in the supposed direction of the traced predator. Consequently, the monkeys have to possess a special vocabulary (referential signals) to label their enemies.

5 ”[...] the vocal organ of birds, located at the base of the windpipe (trachea), where the trachea divides into the bronchi (tubes that connect the trachea with the lungs). The syrinx is lacking in the New World vultures (Cathartidae), which can only hiss and grunt, but reaches great complexity in the songbirds, in which it consists of paired specialized cartilages and membranes (the inner, or medial, walls of the bronchi), controlled by as many as six pairs of minute muscles.”[29]

6 ”[...] also called voice box, a hollow, tubular structure connected to the top of the windpipe (trachea); air passes through the larynx on its way to the lungs. The larynx also produces vocal sounds and prevents the passage of food and other foreign particles into the lower respiratory tracts.”[28]


In 1982, the American zoologist Moynihan observed several species to determine their communication repertoire (not only acoustic). At this time, science supposed a correlation between the phylogenetic stage of a species and its communication repertoire. But astonishingly, Moynihan revealed that the correlation actually lies between the communication repertoire and the quantity of problems in their social domain which they have to solve. Figure 19 shows this insight.

Figure 19: Correlation: number of social tasks (x-axis) and number of signals used (y-axis) [15]


3.3 Acoustic operators in nature

From the preceding sections we know that there are only syrinx and larynx living beings which make use of airflows in combination with vibration of specific body parts. Moreover, these creatures are the specialists for acoustic communication.
Insects differ from syrinx and larynx species, since their respiratory system consists of spiracles. But they apply sounds for communication, too. A cricket, for example, utilizes its legs to produce its typical chirping. Bees and other flying insects make use of their wings to produce sounds through vibrations.
It is curious that swarm animals like ants do not exhibit acoustic communication, despite the fact that they have to coordinate a massive number of individuals. Maybe there is a tipping point for acoustic collaboration, beyond which efficient verbal communication is no longer achievable. For small groups, like clans of stone age men, the application of sounds was not really a challenging problem, since the quantity of communicating individuals in a clan was not very large. Another good comparison for mass collaboration together with information exchange is a computer network (LAN). If a large number of clients are plugged into a LAN, packet collisions occur very often and retransmitting is needed. Maybe communication in a swarm should generally be kept as low as possible. That could, for example, be achieved through rule-based agents, mainly known as reflex agents. If an agent knows how to react in a certain situation, it does not have to coordinate with other agents via communication (example: a swarm of birds in the air is rule based; that is applied in particle swarm optimization (PSO), too). A major advantage of reflex-based agents is the real-time response. This is observable in every creature in perilous situations, which for time reasons forbid extended thinking.
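The idea of a reflex agent can be illustrated with a minimal sketch. All rules, names and thresholds below are illustrative assumptions, not taken from the thesis; the point is only that the agent reacts to its local perception without any communication:

```python
def reflex_step(position, neighbors, danger=None, min_dist=1.0):
    """Purely rule-based reaction of one swarm agent: no coordination with
    other agents is required. position, neighbors and danger are 2-D points;
    the function returns a movement vector. Rules and thresholds are
    illustrative, not from the thesis."""
    dx, dy = 0.0, 0.0
    # Rule 1: flee a perceived threat immediately (real-time response).
    if danger is not None:
        dx += position[0] - danger[0]
        dy += position[1] - danger[1]
    # Rule 2: keep a minimum distance to each neighbor (separation, as in boids/PSO).
    for nx, ny in neighbors:
        dist = ((position[0] - nx) ** 2 + (position[1] - ny) ** 2) ** 0.5
        if 0 < dist < min_dist:
            dx += (position[0] - nx) / dist
            dy += (position[1] - ny) / dist
    return dx, dy

# An agent with a neighbor too close on its right moves to the left:
print(reflex_step((0.0, 0.0), [(0.5, 0.0)]))  # → (-1.0, 0.0)
```

Because each step depends only on the agent's own perception, the response is immediate; no messages need to be exchanged first.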

Figure 20: Larynx (left) [66], Syrinx (right) [65]


3.4 Infra- and ultrasound

Humans can detect sounds in the range from 15-20 Hz up to 10-20 kHz. Above this bandwidth lies ultrasound, and below it lies infrasound.

Figure 21: Sound frequencies[67]

Infrasound is sensed by many animals, but only a few can produce this type of sound themselves. Earthquakes, tsunamis and other natural disasters are signaled by low frequency sounds. As a consequence, many living beings have adapted to low frequencies to exploit this warning signal.
Elephants and giraffes are animals which additionally take advantage of bidirectional infrasound information exchange. Since low frequencies are less absorbed by oscillating air molecules than higher frequencies, these elephant sounds can travel a maximum range of around 10 kilometers. Especially at dusk and dawn, when the air at ground level cools down, a sound channel is formed and triples the range for acoustic infrasound communication7. The effect of a sound channel is exploited by radio waves, too.8

Finally, we should mention a new observation in homing pigeons. Sometimes homing pigeons cannot find their destination if their starting position exhibits certain conditions. It had been a mystery, until J.T. Hagstrum discovered in 2013 the ability of homing pigeons to use infrasound as a beacon for their home location. ”Hagstrum came up with a possible solution to the problem when he read that pigeons can hear incredibly low frequency infrasound. Explaining that infrasound which can be generated by minute vibrations of the planet surface caused by waves deep in the ocean travels for thousands of kilometers, [...]” [14]. Therefore starting points that are shielded against these infrasonic signals cause homing pigeons to lose track of their final destination.

7 Sound is emitted in colder air (medium 1) and is reflected as it reaches the warmer atmosphere (medium 2). Therefore reflection and refraction happen and less sound is lost to the upper atmosphere through the curvature of the earth.

8 ”During unusual upper atmospheric conditions, FM signals are occasionally reflected back towards the Earth by the ionosphere, resulting in long distance FM reception”[68]


Ultrasound is located in the higher frequency spectrum and carries, owing to its small wavelength, a high amount of energy. Technologies that take advantage of these properties are for example: range finders, ultrasound identification (USID), real time locating systems (RTLS), indoor positioning systems (IPS), medical sonography and many more9.
As you can see, there are plenty of applications for these higher frequency sound waves. No surprise, then, that many animals utilize these waves: dolphins, for example, utilize sonar10 and bats echolocation11.
Lastly, we want to mention a remarkable animal which uses communication via sound in a special manner. Odorrana tormota (the concave-eared torrent frog), native to China, has its habitat in the Huangshan Mountains in Anhui and in Jiande and Anji counties in northern Zhejiang. It lives in and near fast-flowing streams (for example waterfalls) with low-frequency background noise. It has therefore adapted itself to its environment by deploying ultrasound.12

9 More examples can be found at http://en.wikipedia.org/wiki/Ultrasound

10 ”sonar, (from sound navigation ranging), technique for detecting and determining the distance and direction of underwater objects by acoustic means. Sound waves emitted by or reflected from the object are detected by sonar apparatus and analyzed for the information they contain.”[32]

11 ”echolocation, a physiological process for locating distant or invisible objects (such as prey) by means of sound waves reflected back to the emitter (such as a bat) by the objects. Echolocation is used for orientation, obstacle avoidance, food procurement, and social interactions.”[31]

12 Further information about these frogs can be found in: ”Ultrasonic communication in frogs” by Feng, Albert S.; Narins, Peter M.; Xu, Chun-He; Lin, Wen-Yu; Yu, Zu-Lin; Qiu, Qiang; Xu, Zhi-Min; Shen, Jun-Xian, published by Nature Publishing Group


3.5 The human ear

In this subsection we will have a closer look at human speech and sound perception. Sound creation in humans (larynx) was previously explained, but how humans sense acoustic signals has not yet been described. Hence we will explain the human perception apparatus in detail.

Sound waves, as we know by now, are pressure waves that arrive first of all at our outer ear. The outer ear consists, from outside to inside, of the pinna, the ear canal and the eardrum. The pinna is especially important for sound localization and frequency filtering. Sound localization makes use of the head, which produces a shadow for waves arriving from one side. Second, the pinna causes different reflections and interferences according to the origin of the sound source. Therefore the frequency response of the pinna contains direction information. It works best with wavelengths on the order of the pinna size (f = several kHz or λ = 1 cm).
The ear canal is approximately a cylinder with an open end and a closed end sealed by a membrane. In Chapter 2 we introduced the definitions of standing waves and resonance, and exactly this behavior of a harmonic system holds for the ear canal. The ear canal has a resonance frequency at about 2-3 kHz, which enormously increases the sensitivity in this frequency range. The sensitivity of the ear is astonishing: an opening of 1 cm² lets power only on the order of 10^-16 watts through the ear canal.
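The 2-3 kHz resonance can be estimated from the geometry: a tube open at one end and closed (here by the eardrum) at the other resonates when a quarter wavelength fits inside, i.e. f = c / (4L). A short sketch, where the ear-canal lengths of roughly 2.5-3 cm are assumed illustrative values:

```python
def quarter_wave_resonance(length_m, c=343.0):
    """Fundamental resonance of a tube closed at one end: a quarter
    wavelength fits into the tube, so f = c / (4 L)."""
    return c / (4.0 * length_m)

# Assumed ear-canal lengths (illustrative values):
print(round(quarter_wave_resonance(0.030)))  # → 2858 (Hz)
print(round(quarter_wave_resonance(0.025)))  # → 3430 (Hz)
```

Both values land near the 2-3 kHz range cited above, consistent with the standing-wave picture from Chapter 2.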

Figure 22: The outer ear [40]


The middle ear consists of the eardrum and the auditory ossicles (little bones). After reaching the flexible membrane called the eardrum, the pressure variations in the air are translated into pressure waves in water. If sound waves encountered water directly, the waves would mostly be reflected and effectively nothing would be transmitted further (see Chapter 2). Thus, the transition from medium 1 to medium 2 is mediated by mechanical motion, through a membrane with the attached auditory ossicles. The advantages of this transition from air to water are: higher pressure in water, and a force on the extensive eardrum that is furthermore concentrated onto a smaller area at the end of the ossicle chain. This results in an amplified transferred pressure wave in water.

Figure 23: The middle ear [40]

The last part of the ear is the inner ear. Its main part is the snail-shaped cochlea, filled with liquid. Since the liquid in the cochlea is not really compressible, the vibrations can travel from the oval window through the inner ear to the ”round window” at the end. The ”round window” is a flexible part that compensates the energy of arriving pressure waves. Between the two pipes of the cochlea the basilar membrane is located. This membrane is flexible and deformable through pressure. Attached to the basilar membrane are little hairs, which themselves produce, when vibrated, discrete electrical pulses. Frequency filtering is done by the varying stiffness and cross-section of the basilar membrane: high frequencies produce most vibration at the part located near the oval window, and low frequencies at the far end.

Figure 24: The inner ear - Cochela [40]


3.6 CIRCE

To understand CIRCE, we have to explain the functionality of a biosonar. To localize objects in the environment, sounds are emitted, which are reflected by objects, and the echoes are received. The distance to an object is therefore: d = (v_sound · Δt) / 2. To get an accurate picture of the environment, high frequencies are used, since small wavelengths are reflected by smaller objects, too. Therefore a finer resolution can be achieved.
Bats use frequencies in the range of 8-160 kHz to scan their habitat. Vertical localization is captured through interference at the tragus13 and by independent alignment of the ears. For horizontal localization, the shadow of the head is used, which results in different traveling times for waves reflected from an object. Some bats adapt their frequency bandwidth as they approach possible prey: at a distance they use sounds with lower frequencies, and as they approach their victims they widen their frequency spectrum, such that the emitted bandwidth generates a high-resolution picture. These bats are called frequency-modulated bats. The other group of bats are constant-frequency bats.
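The range equation above is simple enough to sketch directly (the function name and example values are illustrative):

```python
def echo_distance(delta_t, v_sound=343.0):
    """Distance to a reflecting object from the round-trip echo delay:
    the sound travels to the target and back, hence the division by 2."""
    return v_sound * delta_t / 2.0

# An echo arriving 10 ms after emission (in air):
print(echo_distance(0.010))  # → 1.715 (meters)
```

The same formula applies underwater with v_sound ≈ 1482 m/s, which is why sonar ranges scale accordingly.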

CIRCE stands for ”Chiroptera-Inspired Robotic Cephaloid” and is a project of the following consortium of universities:
• Universiteit Antwerpen (UA, coordinator)
• University of Bath (UoB)
• Eberhard-Karls-Universität Tübingen (EKUT)
• University of Edinburgh (UEDIN)
• Friedrich-Alexander Universität Erlangen (FAU)
• Katholieke Universiteit Leuven (KULeuven)
• Syddansk Universitet (MIP)

”The goal of the CIRCE project is to reproduce, at a functional level, the echolocation system of bats by constructing a bionic bat head that can then be used to investigate how the world is actively explored by bats. This bionic bat head must be of similar size to a real bat head to reproduce the relevant physics, i.e. wave phenomena, and capable of generating / processing realistic bat vocalisations in real-time, as well as performing realistic pinnae movements. Once constructed, the bionic head will be used to gain more insight into neural sensory-data encoding from using it in echolocation tasks routinely executed by bats.”[9]
Components for this bionic head were, on the one hand, an emission and reception subsystem with an ultrasonic transducer, for a manageable beam, and driver electronics. On the other hand, a realistic bat pinna was needed to achieve an optimal directivity pattern. Especially for the pinna, many bat ears were scanned and blueprints created from these scans. Furthermore, an actuating system was applied to the ear to enhance sound localization. A neuromorphic cochlea for extracting biologically plausible features was developed, too.
We will now look step by step at the different components and their properties.

13 ”The tragus is a small pointed eminence of the external ear, situated in front of the concha, and projecting backward over the meatus.”[80]


Figure 25: The complete bionic bat head[9]

Ultrasonic Transducer
First, we examine the emitter part of the transducer. The emitter has a bandwidth of 20-200 kHz and a size of about 2 cm². It consists of a cellular polymer film, called electromechanical film (EMFi)14, fixed on top of a printed circuit board (PCB). This piezoelectric emitter had to be developed because the ultrasonic transducers purchasable at that time had only a low bandwidth, which was not suitable for this project. But since the resonance of the created piezoelectric element lies at 300 kHz, it gains no resonant advantage at 20-200 kHz and shows only a flat response.

Figure 26: Transmitter (nose) [9]

The receiver is composed of the optimized pinna, with the attached actuators, and the same EMFi material as mentioned before to perceive the transmitted signals. A preamplifier circuit built with SMD technology was used to shrink the size of the receiver. This can be seen as analogous to the amplification effect in the human ear, where pressure waves in air are converted into pressure waves in water.

14 ”ferroelectricity, property of certain nonconducting crystals, or dielectrics, that exhibit spontaneous electric polarization (separation of the centre of positive and negative electric charge, making one side of the crystal positive and the opposite side negative) that can be reversed in direction by the application of an appropriate electric field. Ferroelectricity is named by analogy with ferromagnetism, which occurs in such materials as iron.”[33]


Figure 27: Receiver (ear)[9]

Another part of the bionic head is the neuromorphic processing of the received ultrasonic sound. In the CIRCE project a simplified cochlear model was used (electronic only), instead of an exactly modeled one for bats[9]. ”It consists of linear bandpass filters (BPF), half wave rectifiers (HWR), lowpass filters (LPF), automatic gain control (AGC) and neural spike generation”[9]. It is followed by a signal demodulation (envelope extraction) realized by a half-wave rectifier and lowpass filter. Figure 28 shows an overview of the different applications of filters in this system.
The last part of the bionic bat head is the biosonar task of identifying spikes and correlating the spectrogram with location information. The neuromimetic cochlear model was divided into the filter, AGC and spike generation blocks. Based on the spectral information contained in the echoes, localization was established via a multi-spiking model.
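The demodulation chain (half-wave rectifier followed by a lowpass filter) can be sketched as follows. This is a simplified illustration, not the CIRCE implementation: the bandpass stage and AGC are omitted, and the one-pole lowpass with its smoothing factor is an assumption:

```python
import math

def envelope(signal, alpha=0.05):
    """Envelope extraction as described: half-wave rectification followed
    by a simple one-pole lowpass filter (alpha is an assumed smoothing factor)."""
    out, y = [], 0.0
    for x in signal:
        rectified = max(x, 0.0)           # half-wave rectifier (HWR)
        y += alpha * (rectified - y)      # one-pole lowpass filter (LPF)
        out.append(y)
    return out

# A 40 kHz carrier sampled at 1 MHz whose amplitude ramps up over time:
sig = [(n / 1000) * math.sin(2 * math.pi * 40e3 * n / 1e6) for n in range(1000)]
env = envelope(sig)
print(env[-1] > env[100])  # the envelope tracks the growing amplitude
```

The rectifier discards the negative half-waves; the lowpass then smooths out the carrier oscillation, leaving an estimate of the slowly varying amplitude that the spike-generation stage would operate on.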

Figure 28: Filtering, amplifying and spike tree production [9]


Essential for CIRCE was to find the optimal shape of the outer ear: ”It has been known for some years that pinna and related structures play an important role in the generation of auditory sensory cues. Of these, two cues stand out as having generally recognized importance for target localization: inter-aural time differences, due largely to diffraction of sound around the head; and inter-aural level differences, caused by the shadowing effect of the head and diffraction of the incoming sound field in interaction with the structures of the head and pinna. The variation of inter-aural level differences with sound frequency is important for determination of acoustic target elevation in many species”[9].


4 Underwater acoustic communication

4.1 Electromagnetic waves in water

Underwater acoustics is the most important source of technologies for terrestrial acoustic communication, since in deep sea water only pressure waves can effectively be applied. As we mentioned before, water is, except for visible light, a very absorbing medium for electromagnetic waves. If we examine Figures 29 and 30, we see the different absorption rates for distinct frequencies. Only for visible light is water ”almost” transparent. But if you dive 1000 meters and deeper into the sea, there is complete darkness, and a vertical range of less than 1000 meters is not really convenient for underwater communication. Therefore visible light is not useful for longer distance information exchange, besides the narrow beam characteristics of light and the resulting need for directivity.
As a side note, we see in Figure 29 that blue light is absorbed least by water; as a consequence we perceive water as blue colored, the deeper we dive into the sea (in the end all wavelengths are absorbed and black is the resulting perceived color).

Figure 29: Absorption for different EM frequencies. Look at the visible light bump. [61]

Figure 30: Zoomed-version for absorption. Limitation to visible light. [61]


A more precise explanation of the absorption effect is provided by [16]:
”We know that electromagnetic radiation of wavelength around λ > 2 cm (i.e., long microwaves and radio waves) interacts strongly with the dipoles of H2O molecules, and that the energy of these interactions is dissipated in the work done shifting these dipoles in the electric field of the wave [...]”
This effect is used in microwave ovens to heat up food containing water, since absorption results in heat.
In addition, most cosmic x-rays are absorbed by the large water surfaces on earth and converted into heat. This is a major component for climatology.
In air, with scattered water molecules, this is not a big problem. In moist air, water molecules absorb radio waves too, but to a lower degree than in the sea. Salt and other inorganic substances in the sea are another problem and enhance the absorption rate via their high electrical conductivity.

Summarized: water has absorbing effects on electromagnetic waves, but is less absorbing for visible light. Sea water, however, does not consist only of pure water, and thereby absorption is further increased. If additionally pressure and / or temperature create different layers in the water, reflection, transmission and absorption happen. This lowers the range of electromagnetic waves even further. As a consequence, acoustics is mainly utilized in sea environments15.

Figure 31: Different layers in the water. Blue light survives longest [46]

15[3] showed reliable short range communication underwater with LEDs up to 5 - 10 meters


4.2 Challenges in underwater acoustic communication

Underwater acoustic communication (UWAC) is characterized by low available bandwidth, highly varying multipath (resulting in severe intersymbol interference [ISI]16), large propagation delays and large Doppler shifts relative to radio channels (EM waves). But in contrast to electromagnetic waves, pressure waves in water possess a lower absorption coefficient and can therefore be applied for longer distance information exchange.
Subsequently, we will explain the challenges for sound transmission in the sea in detail.

In underwater acoustics (UWA), frequencies between 10 Hz and 1 MHz are used. Sound waves below 10 Hz would penetrate too deep into the seabed, and above 1 MHz sound waves are too vigorously absorbed by water. This results in a small usable bandwidth for communication purposes. Since bandwidth is directly proportional to the amount of data transmitted or received per unit of time, UWAC is problematic for high amounts of data. Figure 32 illustrates a comparison of EM and sound waves in seawater. An attenuation of 3 dB/km corresponds to a halving of transmission power per kilometer.
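The correspondence between a decibel attenuation rate and the remaining power can be made explicit (a sketch; the function name and values are illustrative):

```python
def received_power(p0_watts, attenuation_db_per_km, distance_km):
    """Remaining power after propagation with a given attenuation rate,
    from the decibel definition: 10 * log10(P0 / P) = rate * distance."""
    return p0_watts * 10 ** (-attenuation_db_per_km * distance_km / 10.0)

# 3 dB/km over 1 km roughly halves the power:
print(round(received_power(1.0, 3.0, 1.0), 3))  # → 0.501
# Over 2 km only about a quarter remains:
print(round(received_power(1.0, 3.0, 2.0), 3))  # → 0.251
```

Since 10^(-0.3) ≈ 0.5, each 3 dB of loss indeed corresponds to a bisection of the power, as stated above.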

Figure 32: Damping factor for EM waves (blue) and sound waves (red) in seawater [2]

16 ”In telecommunication, intersymbol interference (ISI) is a form of distortion of a signal in which one symbol interferes with subsequent symbols. This is an unwanted phenomenon as the previous symbols have a similar effect as noise, thus making the communication less reliable. ISI is usually caused by multipath propagation or the inherent non-linear frequency response of a channel causing successive symbols to ”blur” together. The presence of ISI in the system introduces errors in the decision device at the receiver output. Therefore, in the design of the transmitting and receiving filters, the objective is to minimize the effects of ISI, and thereby deliver the digital data to its destination with the smallest error rate possible. Ways to fight intersymbol interference include adaptive equalization and error correcting codes.”[87]


In contrast to air, the ocean consists of considerably more layers with distinct properties. First of all, the pressure level increases with depth and creates layers. Second, temperature and ocean salinity differ and contribute to different water-area configurations. Furthermore, the water surface acts as a medium with a different sound velocity, and therefore reflection, refraction and dispersion are added to the system. All this together makes pressure waves in water, unlike in air, scatter at closer intervals and results in curvilinear propagation paths. For communication, this results in various multipaths, delays and distortions. Figures 33 and 34 show the reflection and refraction (internal reflection) problem in the sea.

Figure 33: Shallow water reflection and refraction, which result in several multipaths [2]

Figure 34: Transmission loss per reflection at a surface [63]


To improve interference resistance, digital transmission can be applied. Digital technology is characterized by distinct low and high levels, which are separated by a forbidden zone. Bit transmission in UWAC is applied through a modulated carrier wave: either the amplitude or the phase of the carrier is modulated by the digital signal. Figure 35 illustrates analog amplitude modulation (AM) and analog frequency modulation (FM) of a signal. Amplitude modulation has a major disadvantage: different amplitudes imply different power levels. A disadvantage of both techniques is time synchronization. That is relatively simple to solve for EM waves with c = 299,792 km/s, but for sound waves with c_water = 1482 m/s (at 1 atm and 20°C) it is a much more challenging problem, not to mention the time delays caused by reflection, refraction and dispersion. Even with the digital static discipline applied, the receiver gets signals like those in Figure 37. Only through a learning period can the interpretation of signals be improved.
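
Amplitude modulation of the kind shown in Figure 35 can be sketched in a few lines; all parameters (carrier frequency, sample rate, bit duration, amplitude levels) are illustrative choices of ours, not values from the text:

```python
import math

def am_modulate(bits, carrier_hz=10_000.0, sample_rate=96_000.0, bit_s=0.001):
    """Amplitude modulation of a sine carrier: each bit selects one of two
    distinct carrier amplitudes for the duration of the bit."""
    samples = []
    per_bit = int(sample_rate * bit_s)
    for bit in bits:
        amplitude = 1.0 if bit else 0.2   # two distinct power levels
        for i in range(per_bit):
            t = i / sample_rate
            samples.append(amplitude * math.sin(2 * math.pi * carrier_hz * t))
    return samples

signal = am_modulate([1, 0, 1])
```

The drawback noted above is visible here: the 0-bit is sent with only a fifth of the amplitude, i.e. far less power, which makes it more vulnerable to noise.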

Large Doppler shifts, relative to radio channels, emerge if the receiver or transmitter moves. For EM waves with c = 299,792 km/s, in contrast to sound waves with c_water = 1482 m/s, this is easy to handle through small changes in frequency. In the ocean, however, it is a big problem for UWA, since tides, water waves, etcetera are sources of motion for the transducers. Underwater vehicles also move by themselves and create Doppler shifts. This effect is handled by Doppler shift compensation.
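
A quick calculation makes the asymmetry concrete. The sketch below uses the first-order Doppler approximation for a receiver closing at speed v (valid for v much smaller than c); the carrier frequencies are illustrative assumptions:

```python
def doppler_shift(f_source_hz: float, v_relative_ms: float, c_ms: float) -> float:
    """First-order Doppler approximation: received frequency for a receiver
    approaching the source at v_relative (assumes v_relative << c)."""
    return f_source_hz * (1 + v_relative_ms / c_ms)

C_SOUND_WATER = 1482.0        # m/s, from the text (1 atm, 20 degrees C)
C_LIGHT = 299_792_458.0       # m/s

# The same 2 m/s relative motion shifts a 20 kHz acoustic carrier by a far
# larger fraction of its frequency than it shifts a 2.4 GHz radio carrier.
shift_acoustic = doppler_shift(20_000.0, 2.0, C_SOUND_WATER) - 20_000.0
shift_radio = doppler_shift(2.4e9, 2.0, C_LIGHT) - 2.4e9
```

The relative acoustic shift (about 0.13% of the carrier) is roughly five orders of magnitude larger than the relative radio shift, which is why compensation is unavoidable underwater but often negligible for EM links.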

Figure 35: Amplitude modulation (AM) and frequency modulation (FM) [2]


The biggest challenge in UWA is nevertheless power consumption, since batteries in apparatus deep in the sea cannot easily be recovered. To change a battery in an apparatus at the seabed, ships, submarines, etcetera are needed, which would blow up any available budget. As a result, UWAC must be energy conservative. This requirement is the main reason why we look at this scientific area: for swarm technologies on land we need exactly this energy restriction. A single transmitter needs more energy to send a message to a faraway receiver than several transmitters in an ad hoc network that simply relay the signal to its final destination. Therefore hop networks are favored in UWAC. The main energy saving is achieved through optimized network protocols and hierarchies. Since every transmission costs energy, it is assigned a penalty factor in UWAC. Retransmissions are a result of packet collisions, which is why we will look at a variety of network hierarchies and network protocols in the next subsection.

Figure 36: Amplitude modulation (left) and phase modulation (right) [2]

Figure 37: Received signals before learning (left) and after (right) [2]


4.3 Deep water- and shallow water acoustic communication

There are two main regions in which underwater sensor networks operate: deep water and shallow water. Each region has its own special conditions. We will start by defining deep water, which is a difficult task, because ”deep” is a relative term, as can be seen in [37]. Historically, deep water starts at 200 meters for Europeans. But with depths of more than 2,500 meters available, defining 200 meters as ”deep” does not make much sense. In deep water acoustic networks we have to take many layers of different water pressure into account. If we are near the seabed, reflections from this surface must be considered too. Especially in the vertical direction, we have to deal with a lot of refraction, created by the large number of layers. Furthermore, underwater mountains block the line of sight for horizontal data transmission; but since data is needed at the hub, which is located at the water surface, this problem is most often negligible. A more challenging area are shallow water acoustic networks. They have to deal with refraction produced by the different water layers, dispersion in water, and especially reflection from two surfaces. Reflections from the seabed and the water surface result in many multipaths. Hence, shallow water is the most difficult territory for high-throughput acoustic networks. When we talk about UAN, we will look mainly at long-range communication, established through expensive underwater modems. Inexpensive short-range UAN are the subject of recent research [1].


4.4 Network

”Underwater acoustic networks (UAN) are generally formed by acoustically connected ocean-bottom sensors, autonomous underwater vehicles, and a surface station, which provides a link to an on-shore control center.”[11] UWA has to maximize throughput as well as reliability and at the same time minimize power consumption. In the old days, oceanography consisted of placing sensors on the bottom of the ocean or in a particular layer of the sea, collecting data, and retrieving the data containers after several months of recording. But several problems turned this approach into a lottery. First of all, sometimes there were malfunctions in the sensors, or the battery ran out and data was collected for only a few days. To wait eagerly for months and obtain only a few days of observation was not really gratifying. Sometimes special phenomena were present in the sea and researchers would have wanted to collect more data during these periods; but with no connection to the apparatus, no reconfiguration could be made. Therefore a real-time connection to the sensor nodes had to be established, and UAN was born.

Figure 38: Structure of Seaweb, an underwater acoustic network [13]

A typical UAN topology is shown in Figures 38-39. As you can see, it is a multihop peer-to-peer ad hoc topology. Optimal routes are calculated via a shortest path algorithm, like the Bellman-Ford algorithm or Dijkstra's algorithm. But why a tree-like structure? Because one strong transmitter is needed, which transfers all the data to the mainland and vice versa. This strong transmitter node acts as a hub for all other nodes that want to send information to the mainland. The disadvantage of this type of topology is obviously the single point of failure for data exchange with the base station. However, compared with the other nodes, which lie underwater, this hub can be retrieved relatively easily. In addition, the hub makes use of EM waves, for example radio waves, to transmit over longer distances to the main station. As mentioned before, this topology has a variable tree structure. Only one point of failure is present in the system, and nodes can easily be added or removed. This topology is robust, apart from this single point of failure, as well as flexible.
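
Dijkstra's algorithm, one of the two shortest path algorithms named above, can be sketched as follows; the four-node example network and its link costs are hypothetical, not taken from the figures:

```python
import heapq

def dijkstra(graph, source):
    """Cheapest transmission cost from source to every reachable node.
    graph maps node -> {neighbour: link_cost}; costs must be non-negative."""
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue                      # stale queue entry, already improved
        for neighbour, cost in graph.get(node, {}).items():
            nd = d + cost
            if nd < dist.get(neighbour, float("inf")):
                dist[neighbour] = nd
                heapq.heappush(heap, (nd, neighbour))
    return dist

# hypothetical UAN: surface hub "H", underwater sensor nodes "A", "B", "C"
uan = {
    "H": {"A": 1.0},
    "A": {"H": 1.0, "B": 2.0, "C": 5.0},
    "B": {"A": 2.0, "C": 1.0},
    "C": {},
}
costs_from_hub = dijkstra(uan, "H")
```

Note how the route H-A-B-C (total cost 4.0) beats the direct link A-C (cost 5.0 after reaching A), which is exactly the energy argument for multihop forwarding made in the text.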


Why not, then, use a fully connected peer-to-peer topology? The reason is that the acoustic range of a single modem, which varies from 10 to 90 km [11], can be massively extended through hopping. The output power needed for direct communication between two distant nodes would be enormous. Additionally, a node transmitting to a very distant receiver would block transmissions from other nodes in its neighborhood; this is called the near-far problem. A major drawback of hopping, however, is the extended time delay added by each forwarding node. Since energy consumption is weighted higher than latency, peer-to-peer multihop networks are preferred for UAN.

Figure 39: Network topology of an UAN [11]


4.5 Network stack

In the following paragraphs, we will walk through the layers of the network stack. These three layers are: the physical layer, the data link layer, and the network layer.

The physical layer converts bits (0 and 1) into signals that can be transmitted over the communication channel. At the receiving end, the transmitted signal, now distorted by multipath, noise and Doppler shifts, is reconstructed and converted back into bits.

The data link layer frames packets and corrects transmission errors. Bits are grouped into packets, which consist of data, synchronization preambles, a source address and a destination address. Error detection in this layer is most often handled via a cyclic redundancy check (CRC): redundant bits are calculated from the bits of a packet and attached to it. The receiver checks the checksum, and if an error occurred, retransmission is demanded. This procedure of requesting retransmission is called automatic repeat request (ARQ). Commonly used protocols are Stop & Wait, Go Back N and the selective repeat protocol. They form the logical link control (LLC), a sublayer of the data link layer. Since nodes are not connected via dedicated cables, they have to share the communication medium with other nodes. This is handled by medium access control (MAC), a further sublayer of the data link layer.
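
The CRC framing described above can be sketched with the standard CRC-32 from Python's standard library (the frame layout, a 4-byte big-endian checksum appended to the payload, is our illustrative choice):

```python
import zlib

def frame(payload: bytes) -> bytes:
    """Append a CRC-32 checksum (4 bytes, big-endian) to the payload."""
    return payload + zlib.crc32(payload).to_bytes(4, "big")

def verify(framed: bytes) -> bool:
    """Recompute the CRC over the payload and compare with the received one."""
    payload, received_crc = framed[:-4], framed[-4:]
    return zlib.crc32(payload).to_bytes(4, "big") == received_crc

pkt = frame(b"sensor-data")
corrupted = bytes([pkt[0] ^ 0x01]) + pkt[1:]   # a single bit flipped in transit
```

A receiver that finds `verify()` failing would trigger the ARQ retransmission request described in the text.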

The last part of the UAN protocol stack is the network layer. It handles routing procedures and, like every layer above, does not question the correctness of the preceding layers. There are static and dynamic routing algorithms, or a combination of both. We will now look in detail at the data link and network layers and their configurations adapted for acoustic communication purposes.

Figure 40: OSI-Model [82]


4.5.1 Data link layer: Multiple access protocols

In many networks, data exchange is bursty and most of the time the system is in an idle state. Since bandwidth in UAN is very valuable and the transmission medium is accessible to all nodes, time and frequencies have to be allocated in an efficient and multi-accessible manner. For that reason, we will describe in the following paragraphs: frequency division multiple access (FDMA), time division multiple access (TDMA) and code division multiple access (CDMA).

Figure 41: Multiple Access Protocols [73]


In frequency division multiple access (FDMA) the bandwidth is divided into subbands. A specific channel is usable by, and locked to, one node until the transmission is finished. In the presence of fading17 and bursty traffic, however, this protocol should be discarded.

Figure 42: FDMA for electromagnetic mobile application [73]

17 ”The presence of reflectors in the environment surrounding a transmitter and receiver create multiple paths that a transmitted signal can traverse. As a result, the receiver sees the superposition of multiple copies of the transmitted signal, each traversing a different path. Each signal copy will experience differences in attenuation, delay and phase shift while traveling from the source to the receiver. This can result in either constructive or destructive interference, amplifying or attenuating the signal power seen at the receiver. Strong destructive interference is frequently referred to as a deep fade and may result in temporary failure of communication due to a severe drop in the channel signal-to-noise ratio.”[69]


Time division multiple access (TDMA), on the other hand, divides a time period, called a frame, into separate time slots. Each time slot is assigned to a separate node. To avoid packet collisions between adjacent time slots, an additional guard time is applied. Guard times have to be proportional to the propagation delay in the channel. Since UWAC has to deal with high delay, overhead increases remarkably. Furthermore, the large amount of idle time causes low throughput. As an example of FDMA in combination with TDMA we can look at the European GSM (global system for mobile communications). In GSM the bandwidth is divided into subbands (carriers), and every subband is in turn used as the available bandwidth for TDMA. This results in higher resistance against frequency-based fading and better usage of the available bandwidth. Because in TDMA data has to be buffered until a time slot is available for transmission, bursty network traffic is caused, which requires the use of adaptive equalizers.18,19

The transmitter can be set into idle mode during such waiting intervals to decrease power consumption and save battery power. However, bursty transmission needs higher bitrates than FDMA and boosts ISI. The major disadvantage of TDMA is the required time synchronization. As mentioned before, this is difficult in UWA. It can be established by sending periodic probe signals and by keeping time delays in mind when calculating the time slots.
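
The guard-time overhead argued above can be quantified with a minimal sketch; the packet duration, link distance and sound/light speeds are illustrative assumptions:

```python
def tdma_efficiency(data_s: float, guard_s: float, nodes: int) -> float:
    """Fraction of a TDMA frame that actually carries data; the guard time
    (proportional to the worst-case propagation delay) is pure overhead."""
    return (nodes * data_s) / (nodes * (data_s + guard_s))

# guard time set to the worst-case one-way propagation delay over 1 km
guard_underwater = 1000 / 1482.0          # ~0.67 s for sound in water
guard_radio = 1000 / 299_792_458.0        # ~3.3 microseconds for EM waves
```

With a 0.1 s data slot and ten nodes, the acoustic frame spends most of its time in guard intervals, while the radio frame is almost pure data, which is exactly why TDMA overhead is called "remarkable" for UWAC above.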

Figure 43: TDMA for electromagnetic mobile application [73]

18 ”An adaptive equalizer is an equalizer that automatically adapts to time-varying properties of the communication channel.[1] It is frequently used with coherent modulations such as phase shift keying, mitigating the effects of multipath propagation and Doppler spreading.”[70]

19 ”In digital communications, its purpose is to reduce intersymbol interference to allow recovery of the transmit symbols. It may be a simple linear filter or a complex algorithm”[71]


Code division multiple access (CDMA) lets all nodes use the whole frequency bandwidth and operate simultaneously. This is possible because every node is assigned its own pseudonoise code, which is added to its transmissions; nodes are thus distinguished from each other by their specific pseudonoise. There are two spreading techniques that spread the spectrum: direct sequence (DS) and frequency hopping (FH). In DS, the pure signal is linearly modulated by wideband pseudonoise. In FH, the carrier frequency is varied according to a pseudonoise sequence. But spreading by adding pseudonoise to the signal or carrier leaves less usable bandwidth for pure data. Problems occur via multiple access interference (MAI), which is caused by the increased noise of simultaneously sending nodes that accidentally interfere. A careful design with large spreading gains can mitigate this effect. Furthermore, CDMA is vulnerable to near-far effects20. This can in turn be reduced by joint detection (multiuser detection), which we will not explain further in this paper, and by power control algorithms, which adjust the signal strength and the SNR for all nodes, regardless of whether they are near or far from the receiver. A transmitter therefore calculates the minimal signal strength towards a specific node, to interfere as little as possible with other nodes in the neighborhood. That lowers interference and is in any case important for the low power consumption of a node. For further information see [11]. CDMA and spread-spectrum signaling are the most promising multiple access techniques underwater.
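
The direct-sequence idea can be illustrated over binary symbols: each data bit is XORed with the node's chip code, and the receiver recovers the bit by a majority vote against the known code. The 7-chip code below is a hypothetical example, not an actual pseudonoise sequence:

```python
def spread(bits, chip_code):
    """Direct-sequence spreading: XOR every data bit with each chip of the code."""
    return [b ^ c for b in bits for c in chip_code]

def despread(chips, chip_code):
    """Recover the data bits by majority vote against the known chip code;
    the redundancy tolerates a few corrupted chips per bit."""
    n = len(chip_code)
    bits = []
    for i in range(0, len(chips), n):
        votes = sum(chips[i + j] ^ chip_code[j] for j in range(n))
        bits.append(1 if votes > n // 2 else 0)
    return bits

code = [1, 0, 1, 1, 0, 0, 1]   # hypothetical 7-chip code of one node
tx = spread([1, 0, 1], code)
```

A receiver using a different chip code sees only noise-like chips, which is the mechanism that lets all CDMA nodes share the full bandwidth simultaneously.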

Figure 44: CDMA for electromagnetic mobile application [73]

20 ”The near-far problem is a condition in which a receiver captures a strong signal and thereby makes it impossible for the receiver to detect a weaker signal.[1] The near-far problem is particularly difficult in CDMA systems, where transmitters share transmission frequencies and transmission time. By contrast, FDMA and TDMA systems are less vulnerable.”[72]


4.5.2 Data link layer: Medium access control protocols

The scarce resources of a limited underwater channel should be used in an efficient manner by a medium access protocol. We will therefore review in detail the different medium access protocols for UWAC.

The original ALOHA protocol consists of random access for the nodes. If a node has information to transmit, it sends it immediately. If the transmission is successful, meaning no collision with other packets occurred, the receiver sends an acknowledgment (ACK) back to the transmitter. Otherwise, after waiting a random time, the transmitting node resends its information. Because of the random character of these waiting times, further collisions are relatively unlikely. The maximum throughput of ALOHA lies around 18% [11].

The advanced, slotted ALOHA synchronizes the clocks of the nodes and allows transmissions only at the beginning of designated time slots. Since the probability of collisions decreases when transmissions start only at designated points in time, this results in a throughput of about 36%. In bursty networks, however, both ALOHA protocols result in too many collisions and therefore too much power consumption.
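
The 18% and 36% figures quoted above are the classical ALOHA throughput maxima, S = G e^(-2G) for pure ALOHA and S = G e^(-G) for slotted ALOHA, which peak at offered loads G = 0.5 and G = 1 respectively. A minimal sketch:

```python
import math

def aloha_throughput(g: float, slotted: bool = False) -> float:
    """Expected fraction of successful transmissions at offered load g:
    S = g*e^-g (slotted) or S = g*e^-2g (pure ALOHA)."""
    return g * math.exp(-g) if slotted else g * math.exp(-2 * g)

peak_pure = aloha_throughput(0.5)                  # 1/(2e) ~ 0.184
peak_slotted = aloha_throughput(1.0, slotted=True)  # 1/e  ~ 0.368
```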


Carrier sense multiple access (CSMA) puts out its feelers to sense for ongoing carriers of other nodes. Only if a node believes the channel is free does it try to send its packet. There are two problematic scenarios in which collisions can nevertheless occur: the hidden node and the exposed node scenario. We will explain both on the basis of Figure 45. If node A sends a packet to B, C does not sense any active carrier and therefore sends its own packet to B. This results in a collision at B. Since A was hidden from C, this scenario is denoted the hidden node scenario. If the destination of the packet from node C were not B, B should not experience a collision, provided it can handle the interference caused by two messages sent at the same time. On the other hand, if B sends a message to A, C is blocked, even if it is not the destination of B's packet. That defines the exposed node scenario. CSMA can only handle these two effects via guard times, which are proportional to the maximum transmission delay present in the network. Because acoustic information exchange in water involves high propagation delay due to the low velocity of sound, CSMA results in long idle times and small throughput. As a consequence, CSMA is not applicable for UWN.

Figure 45: Example node arrangement in an UAN [11]


Multiple access with collision avoidance (MACA) uses two signaling packets to handle packet traffic: request-to-send (RTS) and clear-to-send (CTS). If A wants to send a message to B, it sends an RTS containing the length of the message A wants to transmit. When B receives the RTS, it sends back a CTS. Once A receives the CTS, it begins to transmit its packet. This resolves both scenarios, hidden node and exposed node. Automated power control can be implemented via the minimal transmission power learned from RTS-CTS sequences for every node. The exposed node scenario can then be mitigated by the decreased signal strength used for transmission to a nearby node: the signal fades out before it reaches a distant node that sends a CTS. This decreases idle periods and increases throughput. As a disadvantage, it adds overhead to the system; but the reduction in idle periods overcompensates for this and increases throughput.

Figure 46: Example node arrangement for the next two figures [8]

Figure 47: A handshake between A and B [8]

Figure 48: RTS from C collides with data packet from A [8]


A modified version of MACA is MACAW. MACAW uses RTS-CTS-DATA packets. As you can see in Figure 49, no acknowledgments of properly received packets are involved. The task of error handling is delegated to the upper layer, the transport layer. This can only be applied in the presence of highly reliable links; it decreases overhead and increases throughput. But if the communication channel is of poor quality, error handling should be performed at the data link layer. MACAW is therefore not preferable for UAN.

Figure 49: RTS timeout and retransmission of a new RTS [10]


Slotted floor acquisition multiple access (Slotted FAMA)
Slotted FAMA is based on FAMA, which takes carrier sensing into account to avoid packet collisions. The original FAMA, however, requires two premises: first, the RTS length must be at least the maximum propagation delay in the network; second, the CTS length has to be greater than the RTS length plus twice the maximum propagation delay plus the hardware transmit-to-receive transition time. But this produces far too long a delay in a UWAC system. Hence the protocol has been adjusted to the underwater environment.

Slotted FAMA is established by slotting time into distinct slots. Only at the beginning of each slot can an RTS, CTS, DATA or ACK be transmitted. That removes the asynchronous character of the original FAMA, but decreases delivery delay. The slot length is defined by:

t_timeslot = max(Δt_propagation delay) + t_CTS    (30)

Therefore an RTS or CTS, including delay and other impairments, can be received within one slot. To prevent out-of-sync effects, guard times can be implemented.
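
Equation (30) can be evaluated numerically; the network diameter, CTS packet size, bitrate and guard time below are illustrative assumptions of ours:

```python
def fama_slot_length(max_distance_m: float, cts_bits: int, bitrate_bps: float,
                     c_ms: float = 1482.0, guard_s: float = 0.05) -> float:
    """Slot length per Eq. (30): worst-case propagation delay plus the CTS
    transmission time, with an optional guard time against clock drift."""
    max_propagation_s = max_distance_m / c_ms
    t_cts_s = cts_bits / bitrate_bps
    return max_propagation_s + t_cts_s + guard_s

# a 1 km network at 1000 bit/s with a 200-bit CTS needs slots of ~0.92 s
slot_s = fama_slot_length(1000.0, 200, 1000.0)
```

The dominant term is the propagation delay, which is why slot lengths that are trivial for radio links become the bottleneck underwater.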

Figure 50: A successful handshake between A and B in slotted FAMA [8]

Figure 51: Priority applied to traffic handling; the child node therefore gets a CTS first [10]


4.5.3 Data link layer: Error control protocols

Automatic repeat request (ARQ) looks for errors in the data link layer. As mentioned before, error handling in this layer is to be favored for UAN. There are three types of ARQ: Stop & Wait, Go Back N and the selective repeat protocol.

In Stop & Wait ARQ, a node waits for an ACK before further packet transmission, until a predefined timeout is reached. On timeout, it retransmits the packet to the other node. A disadvantage of this approach is the long idle time and consequently a smaller throughput. In full duplex21 mode, the transmitting node can continuously resend its packet on channel 1 until it receives an ACK through channel 2. But this consumes a lot of energy and is therefore not preferable for UAN.

In Go Back N, a window size is defined within which packets can be transmitted regardless of whether an ACK has been received. The receiver sends back the number of the last error-free packet, so that the sender only has to retransmit packets above that number.

The selective repeat protocol extends Go Back N with a buffer. Packets can now be accepted even out of order, and the transmitting node must only resend the incorrect packets. This ARQ scheme is the most effective of the three. The selective repeat protocol needs full duplex mode, that is, one channel for ACKs and another for DATA, and consequently decreases the frequency bandwidth available for pure data exchange. As so often before, this generally decreases throughput, but the reduction in idle time increases it significantly.
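
The idle-time penalty of Stop & Wait can be made concrete with the standard link-utilization estimate, one packet time followed by a full round trip of waiting (ACK transmission time neglected); the packet duration and distance are illustrative:

```python
def stop_and_wait_utilization(packet_s: float, propagation_s: float) -> float:
    """Fraction of time the sender actually transmits under Stop & Wait:
    packet time divided by packet time plus the ACK round trip."""
    return packet_s / (packet_s + 2 * propagation_s)

# 1 km acoustic link: the ~0.67 s one-way delay dwarfs a 0.1 s packet
u_acoustic = stop_and_wait_utilization(0.1, 1000 / 1482.0)
# the same geometry over radio is essentially fully utilized
u_radio = stop_and_wait_utilization(0.1, 1000 / 299_792_458.0)
```

This is the quantitative reason the text prefers windowed schemes (Go Back N, selective repeat) underwater: they keep transmitting during the round trip instead of idling.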

21 ”A half-duplex (HDX) system provides communication in both directions, but only one direction at a time (not simultaneously). Typically, once a party begins receiving a signal, it must wait for the transmitter to stop transmitting, before replying (antennas are of trans-receiver type in these devices, so as to transmit and receive the signal as well). A full-duplex (FDX), or sometimes double-duplex system, allows communication in both directions, and, unlike half-duplex, allows this to happen simultaneously. Land-line telephone networks are full-duplex, since they allow both callers to speak and be heard at the same time, the transition from four to two wires being achieved by a Hybrid coil. A good analogy for a full-duplex system would be a two-lane road with one lane for each direction.”[74]


4.5.4 Network layer: Routing

The last part of this chapter is dedicated to routing. Routing needs shortest path algorithms. Such a shortest path algorithm is either handled by a central omniscient coordinator node at the beginning of the routing initialization of a network, or is calculated in a distributed manner at every node separately. For distributed shortest path algorithms, the nodes need to exchange neighborhood hop information (costs for sending to a neighboring node). In UAN, nodes are allowed to move, so we obtain a so-called ad hoc network. Properties of an ad hoc network are time-varying costs in the local routing tables and a non-predefined network constellation. Thus a high amount of overhead is added to the network through continuous transmission-cost updates for neighbors, which have to be sent to all members of the network. Routing protocols can be divided into circuit switching and packet switching.

Packet switching differs from circuit switching in the ability to chop packets into smaller packets that can take individual routes or a predefined one through the network, instead of a predefined path for undivided packets, as in circuit switching [17].

Packet switching can be further divided into routing schemes called virtual circuit routing and datagram routing.

Virtual circuit routing is much the same as circuit switching from a theoretical point of view; but from an engineering point of view, circuit switching is hard-wired while virtual circuit routing is software-based.

In datagram routing, every node can send the divided packets to different next-hop nodes, but with the same final destination.


Subsequently, we will look at the following ad hoc routing algorithms: destination-sequenced distance vector (DSDV), temporally ordered routing algorithm (TORA), dynamic source routing (DSR) and ad hoc on-demand distance vector (AODV).

In destination-sequenced distance vector (DSDV) routing, every node has its own database with the optimal next hop for every destination node. Thus, periodic update broadcasts from every node are needed.

The temporally ordered routing algorithm (TORA) is a distributed routing algorithm with local optimality. Routes are discovered on demand, and several routes are provided from one node to a destination. It reduces overhead at the cost of suboptimal routing.

Dynamic source routing (DSR), on the other hand, eliminates the local route databases at every node by adding the predefined route to the header of a packet. Therefore no routing decisions at the intermediate nodes are needed.

Ad hoc on-demand distance vector (AODV) combines the on-demand route discovery of DSR with the hop-by-hop routing tables of DSDV.


4.5.5 How to improve UAN

What else can improve acoustic communication in UAN?

Error handling: For environments with a high BER22, ACK packets can be applied, instead of forwarding error handling to an upper layer that exhibits point-to-point connection properties. For further detailed information see [8].

Backoff algorithm: If a sender transmits an RTS and does not receive a CTS after two time slots, it tries to resend its request after a randomly chosen multiple of the slot interval. In the original FAMA, every time a node sends an RTS and does not receive a CTS, it enters a backoff state. After a randomly chosen time, it leaves the backoff state and tries to resend its RTS. But if a carrier is sensed, it changes to the receiving state, followed by a renewed backoff state with a reset random waiting period. In high-traffic environments, that is, with many neighbors, this can result in a deadlock situation for the node. To avoid this lock, the random time could be decreased, but this would result in further collisions, because the probability of different nodes sending at the same time would increase (the number of different transmission times decreases over a smaller random interval). The better solution is not to reset the waiting timer after a node returns to the previous backoff state.
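
The "do not reset the timer" fix can be sketched as a small state holder; the class, its fixed slot count and the tick interface are our illustrative constructs, not part of the FAMA specification:

```python
class BackoffTimer:
    """Backoff whose waiting time survives interruptions: when the node leaves
    backoff to receive a packet, it later resumes with the remaining time
    instead of drawing a fresh random period (the fix suggested above)."""

    def __init__(self, backoff_slots: int, slot_s: float):
        # backoff_slots would normally be drawn at random; fixed here for clarity
        self.remaining_s = backoff_slots * slot_s

    def tick(self, elapsed_s: float) -> bool:
        """Advance the timer by elapsed_s; returns True once backoff expired."""
        self.remaining_s = max(0.0, self.remaining_s - elapsed_s)
        return self.remaining_s == 0.0
```

While a carrier is sensed, the node simply stops calling `tick()`; no new random period is drawn, so the expected waiting time stays bounded even under heavy neighbor traffic.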

Transmission priority: To smooth the flow of packets that need many hops through the network, a priority assignment can be added. This happens by excluding the backoff state after receiving a packet. Thereby a node that has received a packet can immediately send its new RTS, if data is available in the buffer, at the next possible time slot.

Trains of packets: If a node's buffer contains several packets for the same destination, a train request can be sent as a flag in the RTS. This signals that the transmitter wants to send more than one data packet in a row. But this needs to be coordinated between all nodes, since the maximum delay for several packets sent one after another differs from the time delay for only one. Through a flag in the ACK / NACK, a node that is ready to send will interpret this like a perceived CTS and will wait before sending its RTS. To smooth the flow further, older packets are favored for sending.

Slot time and packet priority: Because the time interval in which an RTS / CTS can be received is the length of a slot, which is greater than the length of one RTS or CTS, several control packets can arrive at the receiver. As a result, priority rules have to be set. For higher throughput, CTS for the receiving node and CTS for other nodes must have the highest priority.

22 ”In digital transmission, the number of bit errors is the number of received bits of a data stream over a communication channel that have been altered due to noise, interference, distortion or bit synchronization errors. The bit error rate or bit error ratio (BER) is the number of bit errors divided by the total number of transferred bits during a studied time interval. BER is a unitless performance measure, often expressed as a percentage.”[81]


4.6 Acoustic modem

To communicate underwater, we need a transmitter and a receiver, or a combination of both, which is called a transducer. But we should not confuse a sonar with an acoustic underwater modem. Both produce and sense sound, but a sonar produces pulses and looks for echoes for localization; it is not used for communication purposes like a modem. The following figure shows some commercial underwater modems.

Figure 52: Commercial underwater acoustic modems [38]

Figure 53: Low cost acoustic transducer [1]

Most underwater modems (UWM) consist of a piezoelectric material used for both transmitting and receiving. If you look at Figure 52, you will see some piezoelectric transducers. Some scientific projects make use of separate piezoelectric modules for transmission and reception (Figure 55 shows such a UWM and Figure 54 shows the separate modules). Furthermore, this module is combined with electronics such as amplifiers, equalizers, filters, etcetera. Essential for acoustic modems is choosing the right impedance of the material attached to the piezoelectric element, to avoid reflection and refraction and thereby transmit most of the power straight through the medium to the transceiver, and vice versa. Manufacturers keep the exact structure of their modems secret, because building a good underwater modem is not easy. There is much magic in creating an underwater acoustic modem, such as good mechanical and analog engineering, which is not common knowledge.23

23 For further information on low cost underwater acoustic modems: Link: Presentation of ”A Low-Cost Acoustic Modem for Underwater Sensor Networks”


Figure 54: Modules of a low cost acoustic underwater transducer [1]

Figure 55: Low cost underwater modem, but with separate receiver and transmitter [5]


5 Terrestrial acoustic swarm communication

In this chapter we will put together everything we have described so far and adapt it to the medium of air. Terrestrial acoustic communication is characteristically employed by many biological creatures. Technically, however, acoustics is only used for music, sound-signal production and recording; for the transmission of information there is no application worth mentioning. But why is there no technological use of acoustics for information exchange? That is simple: electromagnetic waves are preferred in modern times, mainly because of their high velocity and the relatively few significant natural noise sources. If the velocity is high, Doppler effects, transmission delay and synchronization can most often be neglected. For acoustic waves, especially in a non-ideal gas such as air, it is totally different. Compared to water, we have to deal with an even lower propagation speed for the waves. This results in larger Doppler effects, synchronization problems and long time delays. As an advantage in air, however, we do not have to deal with a large number of layers that produce reflection and refraction. In air we can even assume only a low dispersion factor for sound waves: if air consisted only of nitrogen and oxygen it would be a non-dispersive medium, but CO2 makes it slightly dispersive at ultrasonic frequencies [4]. In the next chapter we will look in detail at the challenges we have to deal with in terrestrial acoustic networks.

Table 2: Comparison of acoustic and electromagnetic communication

Property       Sound waves                                       EM waves
Velocity       c_sound = 343.2 m/s                               c_light = 299,792,458 m/s
Dependencies   temperature, wind, pressure, density, humidity    electric fields, magnetic fields, conductors, insulators
SNR            many acoustic noise sources                       electromagnetic noise
Effects        refraction, reflection, transmission,             refraction, reflection, transmission,
               dispersion, absorption, Doppler effect            dispersion, absorption, Doppler effect


5.1 Challenges

As mentioned several times before, the speed of pressure waves in dry air at atmospheric pressure and a temperature of 20 °C is c_sound = 343.2 m/s. This causes problems for time synchronization in slotted protocols such as Slotted-FAMA. Furthermore, propagation delays lag transmissions and lengthen waiting phases in which the nodes have to remain in backoff state. Either we apply asynchronous transmission, or we accept these delays and the lower throughput. Asynchronous transmission, however, results in more collisions if we allow nodes to send whenever they believe the channel might be free, for example when no carrier is sensed and no CTS, RTS or ACK/NACK is present. In UANs this trade-off has been decided in favor of fewer collisions and therefore synchronous communication; however, the speed of sound in water is about 4.3 times higher than in air, so this choice should be a subject of further research for air. Time delay, and with it the loss in throughput, is proportional to the distance:

v = s / t  ⇒  t = s / v        (31)

As a consequence, smaller distances should be preferred. But if we place nodes in an environment full of nearby neighbors, communication becomes complicated as soon as the transmission range covers too many neighboring nodes. If we instead reduce the transmission range to a minimum, we save battery power and cause fewer problems among direct neighbors. Further advantages of short-distance communication are a shorter lag in the system and less noise attached to the raw signal.
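The consequence of Equation (31) for protocol design can be made concrete with a minimal sketch (Python is used here purely for illustration; the only input taken from the text is c_sound = 343.2 m/s):

```python
# Propagation delay for an acoustic link in air, following Eq. (31): t = s / v.
C_SOUND = 343.2  # m/s, dry air at 20 degrees C and 1 atm (value used in the text)

def propagation_delay(distance_m: float, speed: float = C_SOUND) -> float:
    """One-way propagation delay in seconds for a given distance in meters."""
    return distance_m / speed

# Delay grows linearly with distance, which is why short-range
# neighborhoods ease time synchronization and reduce backoff phases.
for d in (1.0, 10.0, 100.0):
    print(f"{d:6.1f} m -> {propagation_delay(d) * 1000:7.2f} ms")
```

At 100 m the one-way delay is already close to 0.3 s, roughly six orders of magnitude above the corresponding EM propagation delay.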

Figure 56: Dependency of speed of sound on temperature [76]


Acoustic noise is omnipresent in most environments, vacuum excepted. By acoustic noise, all frequency ranges are meant: infrasound, produced for example by waves at the seabed that penetrate through water and air; audible frequencies, produced by a variety of things and creatures; and ultrasound, created for example by pressing a keyboard button. The amount of noise as a function of frequency should be an area of further research as well. Every environment has its own noise characteristics, so distinct environments should be considered in modeling.

Surfaces that absorb, reflect and refract the sound signals have to be considered, too. For small distances absorption should be a minor problem, since a large swarm scatters around such objects, but reflection and refraction from these objects result in multipaths that degrade the SNR.

For dynamic nodes and environments, ad hoc capability is a must in order to react to changing conditions and node locations; this calls for adaptive routing. Because air is a multiple-access medium, protocols of that type have to be applied, as in UANs. Temperature and humidity should also be measured and abstracted for the protocols, since both change the speed of sound and the attenuation coefficient; this can be seen in Table 3 and Figures 57-58. Velocity and maximum range will therefore change when temperature and/or humidity increase or decrease. These sensors are cheap and can easily be integrated into a node.
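Since the protocol layer should abstract over temperature, a node could recompute the local speed of sound from its temperature sensor. The dry-air ideal-gas approximation below is a common textbook formula, not taken from this document, and neglects the humidity and pressure corrections shown in Figure 57:

```python
import math

# Local speed of sound from a node's temperature sensor, using the common
# dry-air ideal-gas approximation (a textbook formula, not from this text;
# humidity and pressure corrections are neglected here).
def speed_of_sound(temp_c: float) -> float:
    """c(T) ~= 331.3 * sqrt(1 + T / 273.15) m/s, with T in degrees Celsius."""
    return 331.3 * math.sqrt(1.0 + temp_c / 273.15)

# Reproduces the c_sound = 343.2 m/s used throughout this text at 20 degrees C.
for t in (0.0, 10.0, 20.0, 30.0):
    print(f"{t:5.1f} C -> {speed_of_sound(t):6.1f} m/s")
```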

Figure 57: Different air layers and the impact of humidity, pressure and temperature on v [76]


But which frequencies should be used for a terrestrial acoustic communication network? Frequencies in the audible region of humans can be ruled out, as can those audible to the animals of a specific environment. Since infrasound lies below this region, we inspect the properties of this frequency band first. Low frequencies can travel long distances, since their attenuation coefficient is low, but low frequencies (clock rates) decrease throughput significantly and leave only a very small bandwidth for communication and for bypassing noise through frequency changes. For very low bit rates, infrasound can still be considered, since such long waves are reflected by fewer surfaces (see the chapter on CIRCE). Ultrasound, in contrast, is already applied in many applications, as mentioned in Chapter 3.4. This frequency band features higher intensity, higher throughput, a higher attenuation coefficient, more reflective surfaces and a more directional, beam-like channel propagation. If we apply ultrasound only over small distances, we should not expect problems with fast-fading signals, but instead obtain a high throughput. The maximum usable ultrasound frequency depends on the achievable pressure intensity, the transducer capabilities and the intended maximum communication range. Figure 58 shows the damping factor for different frequencies.

Figure 58: Damping factor for different frequencies in air [90]

Since we can control amplitude and frequency, we are able to adapt battery life to our requirements. Higher frequencies result in more energetic acoustic waves, but as a consequence more power is needed to create them. A higher amplitude affects the loudness, and the lower the produced amplitude, the lower the load on the battery. The optimal frequency therefore has to be an area of research for the optimal application of sound waves: the applied frequency must be the optimal trade-off between throughput, needed bandwidth, energy consumption and environmental noise.
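One part of this trade-off can be sketched numerically: combining spherical spreading with the atmospheric absorption coefficients of Table 3 yields an attenuation-limited range for each candidate frequency. The 80 dB link budget below is a made-up placeholder, not a measured figure:

```python
import math

# Attenuation-limited range sketch: spherical spreading (20*log10 r, re 1 m)
# plus atmospheric absorption alpha in dB/km (values taken from Table 3).
# The 80 dB link budget is a hypothetical placeholder, not a measured figure.
def total_loss_db(r_m: float, alpha_db_per_km: float) -> float:
    """Total path loss in dB at distance r_m for a given absorption alpha."""
    return 20.0 * math.log10(r_m) + alpha_db_per_km * r_m / 1000.0

def max_range_m(budget_db: float, alpha_db_per_km: float) -> float:
    """Largest distance whose loss stays within the budget (bisection search)."""
    lo, hi = 1.0, 1.0e6
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if total_loss_db(mid, alpha_db_per_km) <= budget_db:
            lo = mid
        else:
            hi = mid
    return lo

# Table 3 at 20 C / 50 % rel. hum.: alpha = 4.66 dB/km at 1 kHz, 104 dB/km at 8 kHz.
print(f"1 kHz: {max_range_m(80.0, 4.66):8.0f} m")
print(f"8 kHz: {max_range_m(80.0, 104.0):8.0f} m")
```

With the same budget, the higher frequency reaches only a fraction of the range, which is the damping trend Figure 58 illustrates.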


Table 3: Atmospheric attenuation coefficient α (dB/km) at selected frequencies at 1 atm [90]

T      Rel. hum. (%)  62.5 Hz  125 Hz  250 Hz  500 Hz  1000 Hz  2000 Hz  4000 Hz  8000 Hz
30 °C  10             0.362    0.958   1.82    3.40    8.67     28.5     96.0     260
30 °C  20             0.212    0.725   1.87    3.41    6.00     14.5     47.1     165
30 °C  30             0.147    0.543   1.68    3.67    6.15     11.8     32.7     113
30 °C  50             0.091    0.351   1.25    3.57    7.03     11.7     24.5     73.1
30 °C  70             0.065    0.256   0.963   3.14    7.41     12.7     23.1     59.3
30 °C  90             0.051    0.202   0.775   2.71    7.32     13.8     23.5     53.5
20 °C  10             0.370    0.775   1.58    4.25    14.1     45.3     109      175
20 °C  20             0.260    0.712   1.39    2.60    6.53     21.5     74.1     215
20 °C  30             0.192    0.615   1.42    2.52    5.01     14.1     48.5     166
20 °C  50             0.123    0.445   1.32    2.73    4.66     9.86     29.4     104
20 °C  70             0.090    0.339   1.13    2.80    4.98     9.02     22.9     76.6
20 °C  90             0.071    0.272   0.966   2.71    5.30     9.06     20.2     62.6
10 °C  10             0.342    0.788   2.29    7.52    21.6     42.3     57.3     69.4
10 °C  20             0.271    0.579   1.20    3.27    11.0     36.2     91.5     154
10 °C  30             0.225    0.551   1.05    2.28    6.77     23.5     76.6     187
10 °C  50             0.160    0.486   1.05    1.90    4.26     13.2     46.7     155
10 °C  70             0.122    0.411   1.04    1.93    3.66     9.66     32.8     117
10 °C  90             0.097    0.348   0.996   2.00    3.54     8.14     25.7     92.4
0 °C   10             0.424    1.30    4.00    9.25    14.0     16.6     19.0     26.4
0 °C   20             0.256    0.614   1.85    6.16    17.7     34.6     47.0     58.1
0 °C   30             0.219    0.469   1.17    3.73    12.7     36.0     69.0     95.2
0 °C   50             0.181    0.411   0.821   2.08    6.83     23.8     71.0     147
0 °C   70             0.151    0.390   0.763   1.61    4.64     16.1     55.5     153
0 °C   90             0.127    0.367   0.760   1.45    3.66     12.1     43.2     138


5.2 Hardware

As in UWA, there are two options to build an acoustic modem: either we take a single transducer, or we separate the receiver from the transmitter, as seen before. Regardless of which option we choose, the material should be piezoelectric, because of the energy conversion efficiency24 of this substance. Furthermore, mechanical stress is low and very high frequencies can be achieved; only cooling problems caused by strong vibrations and a too small wavelength stand in the way of unlimited frequency extension.

But why not use the highest frequency for high throughput? Because damping increases with frequency and, together with the distance-dependent attenuation, limits the possible communication range. Furthermore, higher frequencies result in a higher directivity of sound, as can be seen in Figure 59. For very low frequencies we obtain an omnidirectional sphere of equal intensity; with increasing frequency, however, the intensity distribution differs a lot.

Figure 59: Directivity and frequency (red = high intensity) [78]
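The directivity trend of Figure 59 can be illustrated with the standard baffled circular piston model (a textbook idealization, not the transducer actually discussed here): the beam narrows once the radiating surface becomes large compared to the wavelength.

```python
import math

C_SOUND = 343.2  # m/s, speed of sound in air as used in this text

# Baffled circular piston model (textbook idealization): the first null of
# the main beam lies at sin(theta) = 0.61 * lambda / a, with piston radius a.
def first_null_angle_deg(freq_hz: float, radius_m: float) -> float:
    """First-null half-angle in degrees; 180 means no null (omnidirectional)."""
    wavelength = C_SOUND / freq_hz
    s = 0.61 * wavelength / radius_m
    if s >= 1.0:
        return 180.0  # wavelength large vs. source: essentially omnidirectional
    return math.degrees(math.asin(s))

# A hypothetical piston of 1 cm radius: audible frequencies spread out,
# while high ultrasound forms a narrow beam.
for f in (1_000.0, 10_000.0, 100_000.0):
    print(f"{f:9.0f} Hz -> first null at {first_null_angle_deg(f, 0.01):6.1f} deg")
```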

24 "Piezoelectric transducers are extremely efficient due to the direct conversion of electrical to mechanical energy in a single step. Direct application of the power to the piezoelectrically active ceramic causes it to change shape and create the sound wave. Energy losses in the ceramic due to internal friction and heat are typically less than 5%. This means that up to 95% of the power delivered to the transducer is used to do cleaning. Modern ultrasonic generators used to drive piezoelectric transducers are generally over 75% efficient making the overall system efficiency 70% or higher." [36]


In that case, why not use two or more sound sources at a gigantic frequency? The answer is simple: as Figure 60 shows, the propagating waves would interfere with each other, producing areas of complete cancellation. The optimal number of sources and the optimal frequency therefore have to be a subject of further research.

Figure 60: Interference: Two sound sources [89]
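The cancellation regions of Figure 60 can be checked with a minimal sketch, assuming two in-phase point sources in a free field: destructive interference occurs wherever the path-length difference is an odd multiple of half a wavelength.

```python
C_SOUND = 343.2  # m/s, speed of sound in air as used in this text

# Two in-phase point sources (free field assumed): a receiver at distances
# d1 and d2 from the sources sits in a null when the path-length difference
# is an odd multiple of half a wavelength.
def is_cancellation(d1_m: float, d2_m: float, freq_hz: float,
                    tol: float = 1e-3) -> bool:
    """True if the path difference lands on a half-wavelength offset."""
    wavelength = C_SOUND / freq_hz
    frac = (abs(d1_m - d2_m) / wavelength) % 1.0  # difference in wavelengths
    return abs(frac - 0.5) < tol

# At 343.2 Hz the wavelength is exactly 1 m, so a 1.5 m path difference cancels.
print(is_cancellation(2.0, 3.5, 343.2))  # True
print(is_cancellation(2.0, 3.0, 343.2))  # False
```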

In this context polar diagrams have to be mentioned. A polar diagram or beam pattern shows the directivity of a transmitter or receiver. The lines are obtained by placing sensors or speakers at different angles but the same radial distance around the object under examination and measuring the sound pressure values. This results in a characteristic map for different frequencies. The following figures show different polar diagrams.

Figure 61: Point sound source that radiates omnidirectionally [79]

Figure 62: Bidirectional sound source [79]


For a swarm, we need energy-efficient and low-cost transducers or speakers and microphones. In Figure 63 we look at a pure piezoelectric element, which can be seen as a transducer. No filtering or amplification through specially formed and attached material is available, but this type can be self-produced and adapted to our special requirements. From the product catalog:

”The piezoceramic element is the heart of every piezoelectric sound generator. In generalit consists of a piezoceramic layer, glued together with a flexible metal plate. An alternatingelectric signal, connected between metal plate and ceramic layer, causes the vibration of thepiezo element by means of the piezo effect.”

Figure 63: Piezoelectric element (Endrich)

The piezoelectric speaker has an even greater surface to create louder sound and has a broader frequency range. But at high frequencies the attached material begins to produce small vibrations of its own and therefore emits waves that interfere with the transmission signal.

Figure 64: Piezoelectric speaker (Endrich)

The last one is a piezoelectric transducer (buzzer). It is very cost-efficient and robust, but it has only a low bandwidth: the range of this commercial transducer lies between 2 kHz and 5 kHz.

Figure 65: Piezoelectric Buzzer (Endrich)


5.3 Historical side note: Acoustic coupler

Prior to 1984, the Bell System owned a monopoly over telecommunication in the United States and imposed strict rules for access to its network: only equipment from Bell was permitted to be directly attached to the wires. In response to this restriction, Robert Weitbrecht invented a workaround for Bell's network, an acoustic coupler, which converted electrical signals to acoustic ones. No handshake protocol or error correction was used; only the pure modulated signal on a carrier was transmitted. For further information, Defcon17 25 featured an interesting contribution on an old acoustic coupler, in which ASCII code was transferred through a telephone line [44].

Figure 66: An old acoustic coupler [77]

25 "DEF CON (also written as DEFCON or Defcon) is one of the world's largest annual hacker conventions, held every year in Las Vegas, Nevada. The first DEF CON took place in June 1993. Many of the attendees at DEF CON include computer security professionals, journalists, lawyers, federal government employees, security researchers, and hackers with a general interest in software, computer architecture, phone phreaking, hardware modification, and anything else that can be 'hacked.' The event consists of several tracks of speakers about computer- and hacking-related subjects, as well as social events and contests in everything from creating the longest Wi-Fi connection and cracking computer systems to who can most effectively cool a beer in the Nevada heat." [47]


6 Conclusion

To summarize acoustic communication, we can state the following properties: acoustic communication needs a medium, otherwise mechanical waves cannot be transmitted through the area of interest. Air is a medium that offers little dispersion, which improves the SNR. As a major disadvantage, however, we have to deal with low velocity, low usable bandwidth and a high number of reflections, which worsen the SNR. Because of these facts, we should consider the following quotation:

”The reasons why mainly electromagnetic waves are used to transfer information in the classicwireless air channel lie in their fast propagation speed, in the wide usable frequency spectrumand in the small environment noise compared for example with acoustics, factors that all leadto high possible data rates. Furthermore, the electromagnetic wave has the ability to propa-gate without a carrier medium and the electric-magnetic field conversion enables in generalvery large communication ranges.”[7]

In water, EM waves cannot be used, because of the beam-like data transmission of visible light and the low range of lower frequencies. Acoustics therefore has its place in UANs; for mobile agents in the medium air, however, this cannot conclusively be said, because not only delay and throughput are crucial, but also energy consumption. In any case, we should strongly consider acoustic communication in environments full of EM noise and/or with a large number of EM-transmitting nodes. Even very humid environments can be an area of operation (see Table 3 in Chapter 5.1). Shielded environments, such as Faraday cages, conducting nets and closed conducting surfaces, do not let EM waves pass, because of the dynamically changing magnetic field characteristic of EM waves. In such environments, for example gas bottles or copper water pipes, we can therefore benefit from acoustic communication, since pressure waves also penetrate solid structures. As a side effect, the velocity of sound in solid material is even higher than in water. But the transition, for example from air to copper and vice versa, causes refraction and reflection and has to be kept in mind.


Figure 67: Electromagnetic wave spectrum [62]

For environments dominated by EM apparatus, only economic and energetic properties can be reasons to deploy acoustic communication. But what are the cost-cutting or energy-saving features of acoustic apparatus compared to wireless modems? The only major difference between the two modem types is the process by which energy is transmitted and therefore converted. In Wi-Fi EM modems, an electrical signal is converted into electromagnetic waves via an antenna. The antenna must be in the order of the intended wavelength because of impedance matching26, but for higher frequencies that results in small antennas that can easily be accommodated. Furthermore, additional windings in the antenna on the receiver side increase the current flowing through it and amplify the signal. In acoustic modems, the transducer is most often a piezoelectric material (in UWA with an attached material of equal impedance to reduce reflection and refraction). The conversion from electrical energy to pressure energy is accompanied by heat, and lost energy is thus introduced into the system. But since piezoelectric materials are very energy-efficient, the energy lost can be as low as 5-12%, whereas the loss in commercial antennas lies around 20%-70%. Further research should be done on an energetic comparison between antennas and piezoelectric transducers. And are there cost-cutting features in acoustic modems? Is the piezoelectric material cheaper than the conducting material of an antenna? If we look at the price of a small antenna for 2.4 GHz, we find results in the order of several EUR, compared to 1.0 to several EUR for one piezoelectric transducer. Whether these transducers meet our requirements must still be investigated.

26 "As an electro-magnetic wave travels through the different parts of the antenna system (radio, feed line, antenna, free space) it may encounter differences in impedance (E/H, V/I, etc.). At each interface, depending on the impedance match, some fraction of the wave's energy will reflect back to the source,[11] forming a standing wave in the feed line. The ratio of maximum power to minimum power in the wave can be measured and is called the standing wave ratio (SWR). A SWR of 1:1 is ideal. A SWR of 1.5:1 is considered to be marginally acceptable in low power applications where power loss is more critical, although an SWR as high as 6:1 may still be usable with the right equipment. Minimizing impedance differences at each interface (impedance matching) will reduce SWR and maximize power transfer through each part of the antenna system." [83]
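The SWR figures quoted in the footnote translate directly into reflected power via the standard relation |Γ| = (SWR − 1)/(SWR + 1), a textbook formula not taken from this document:

```python
# Reflected power at an impedance mismatch, from the SWR values quoted in
# the footnote: |Gamma| = (SWR - 1) / (SWR + 1), reflected power = |Gamma|^2.
def reflected_power_fraction(swr: float) -> float:
    """Fraction of forward power reflected back for a given SWR (>= 1)."""
    gamma = (swr - 1.0) / (swr + 1.0)
    return gamma * gamma

for swr in (1.0, 1.5, 6.0):
    print(f"SWR {swr:3.1f}:1 -> {reflected_power_fraction(swr) * 100:5.1f} % reflected")
```

An SWR of 1.5:1 reflects only 4 % of the power, while the 6:1 case quoted as "still usable" already reflects about half.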


For ad hoc swarm networks, all the properties of sound waves imply the need for small-distance neighborhoods between individuals, to increase throughput, improve SNR, decrease power consumption and decrease the lag in the system. But small distances are a problem for high-frequency transducers, since directivity increases. In this area further research is required.

Whether sound waves can be an alternative to electromagnetic waves depends on the environment in which the swarm will operate. In any case, a dense swarm network can partially compensate for the low velocity and small bandwidth of sound in air. In water and the other application environments predestined for sound waves mentioned before, acoustics can and should be used for swarm networks.

In air, only special conditions favor sound waves as a carrier for information exchange if we look mainly at throughput and real-time communication; regarding energy consumption and cost-effectiveness, further investigation is required.


7 Appendix

Figure 68: EM absorption through the atmosphere [62]


References

[1] B. Benson, Y. Li, R. Kastner, and B. Faunce. Design of a low-cost, underwater acoustic modem for short-range sensor networks. IEEE, May 2010.
[2] Johann Friedrich Böhme. Warum Kommunikation unter Wasser? Warum akustische Kommunikation? Digitale Übertragung von Daten, 2001.
[3] Heather Brundage. Designing a wireless underwater optical communication system. 2010.
[4] E. A. Dean. Atmospheric effects on the speed of sound. 1979.
[5] L. Freitag, M. Grund, S. Singh, J. Partan, P. Koski, and K. Ball. The WHOI Micro-Modem: An acoustic communications and navigation system for multiple platforms. Proceedings of OCEANS 2005 MTS/IEEE, pages 1-7.
[6] G. Ballou et al. Handbook for Sound Engineers. Howard W. Sams & Company, 1991.
[7] J. Maye and E. Hagmann. Design of a high speed, short range underwater communication system. students.asl.ethz.ch, 2009.
[8] Marcal Molins and Milica Stojanovic. Slotted FAMA: a MAC protocol for underwater acoustic networks. OCEANS 2006 - Asia Pacific, pages 1-7, May 2006.
[9] Herbert Peremans. CIRCE - Chiroptera-Inspired Robotic Cephaloid: a Novel Tool for Experiments in Synthetic Biology. Final report. Technical Report 3, March 2006.
[10] J. G. Proakis and E. M. Sozer. Shallow water acoustic networks. Communications ..., 2001.
[11] E. M. Sozer, M. Stojanovic, and J. G. Proakis. Underwater acoustic networks. IEEE Journal of Oceanic Engineering, 25(1):72-83, January 2000.
[12] Milica Stojanovic and James Preisig. Underwater acoustic communication channels: Propagation models and statistical characterization. IEEE Communications Magazine, (January):84-89, 2009.
[13] Scott R. Thompson, Joseph A. Rice, and John A. Colosi. Naval Postgraduate School PhD thesis, 2009.
[14] BAS TO. Inside JEB. mysite.science.uottawa.ca, 2013.
[15] Dietmar Todt. Akustische Kommunikation: Interaktives Problemloesen oder Schritte auf dem Weg zur Sprache?
[16] B. Wozniak and J. Dera. Light Absorption in Sea Water. Atmospheric and Oceanographic Sciences Library. Springer Science+Business Media, LLC, 2007.
[17] Yan Zhang. Virtual circuit routing. Class report, (October 2003), pages 1-11, 2003.
[18] M. Hazas and A. Hopper. Broadband ultrasonic location systems for improved indoor positioning. IEEE Transactions on Mobile Computing, 5(5):536-547, May 2006.
[19] Encyclopædia Britannica Online. Dispersion. http://www.britannica.com/EBchecked/topic/165792/dispersion, March 2013.
[20] Encyclopædia Britannica Online. Frequency. http://www.britannica.com/EBchecked/topic/219573/frequency, March 2013.
[21] Encyclopædia Britannica Online. Longitudinal wave. http://www.britannica.com/EBchecked/topic/347557/longitudinal-wave, March 2013.
[22] Encyclopædia Britannica Online. Wave velocity. http://www.britannica.com/EBchecked/topic/637913/wave-velocity, March 2013.
[23] Encyclopædia Britannica Online. Transverse wave. http://www.britannica.com/EBchecked/topic/603299/transverse-wave, March 2013.
[24] Encyclopædia Britannica Online. Wavelength. http://www.britannica.com/EBchecked/topic/637928/wavelength, March 2013.
[25] Encyclopædia Britannica Online. Absorption. http://www.britannica.com/EBchecked/topic/1868/absorption, March 2013.
[26] Encyclopædia Britannica Online. Resonator. http://www.britannica.com/EBchecked/topic/499474/resonator, March 2013.
[27] Encyclopædia Britannica Online. Bioluminescence. http://www.britannica.com/EBchecked/topic/66087/bioluminescence, March 2013.
[28] Encyclopædia Britannica Online. Larynx. http://www.britannica.com/EBchecked/topic/330791/larynx, March 2013.
[29] Encyclopædia Britannica Online. Syrinx. http://www.britannica.com/EBchecked/topic/579069/syrinx, March 2013.
[30] Encyclopædia Britannica Online. Amplitude. http://www.britannica.com/EBchecked/topic/21711/amplitude, March 2013.
[31] Encyclopædia Britannica Online. Echolocation. http://www.britannica.com/EBchecked/topic/178017/echolocation, March 2013.
[32] Encyclopædia Britannica Online. Sonar. http://www.britannica.com/EBchecked/topic/554214/sonar, March 2013.
[33] Encyclopædia Britannica Online. Ferroelectricity. http://www.britannica.com/EBchecked/topic/205124/ferroelectricity, March 2013.
[34] Encyclopædia Britannica Online. Dispersion. http://www.britannica.com/EBchecked/topic/165792/dispersion, March 2013.
[35] Encyclopædia Britannica Online. Overtone. http://www.britannica.com/EBchecked/topic/436017/overtone, March 2013.
[36] Cleaning Technology Group. Power efficiency. http://www.ctgclean.com/blog/technology-library/articles/magnetostrictive-versus-piezoelectric-transducers-for-power-ultrasonic-applications/, March 2013.
[37] Deepwater. Deep water. http://www.deepwater.co.uk/info.htm, March 2013.
[38] Evologics. Underwater modems. http://www.evologics.de, March 2013.
[39] Khan Academy. Refraction of seismic waves. https://www.khanacademy.org/science/physics/waves-and-optics/v/refraction-and-snell-s-law, March 2013.
[40] School of Physics, The University of New South Wales, Sydney, Australia. Physclips learning platform. http://www.animations.physics.unsw.edu.au/waves-sound/, March 2013.
[41] Simon Fraser University. Square wave. http://www.sfu.ca/sonic-studio/handbook/Square_Wave.html, March 2013.
[42] The Open Door Web Site. Traveling waves. http://www.saburchill.com/physics/chapters2/0005.html, March 2013.
[43] YouTube. Kastern - low cost underwater acoustic modem. http://www.youtube.com/watch?v=iAymeTCpPQI, March 2013.
[44] YouTube. Defcon17. http://www.youtube.com/watch?v=RxM_0BguTkE, March 2013.
[45] Wikipedia (de). Echoortung (Tiere). http://de.wikipedia.org/wiki/Echoortung_(Tiere), March 2013.
[46] Wikipedia (de). Tauchphysik. http://de.wikipedia.org/wiki/Tauchphysik, March 2013.
[47] Wikipedia (en). DEF CON. http://en.wikipedia.org/wiki/DEF_CON, March 2013.
[48] Wikipedia (de). Dopplereffekt. http://de.wikipedia.org/wiki/Dopplereffekt, March 2013.
[49] Wikipedia (en). Angular frequency. http://en.wikipedia.org/wiki/Angular_frequency, March 2013.
[50] Wikipedia (en). Wavenumber. http://en.wikipedia.org/wiki/Wavenumber, March 2013.
[51] Wikipedia (en). Standing wave. http://en.wikipedia.org/wiki/Standing_wave, March 2013.
[52] Wikipedia (en). Group velocity. http://en.wikipedia.org/wiki/Group_velocity, March 2013.
[53] Wikipedia (en). Resonance. http://en.wikipedia.org/wiki/Resonance, March 2013.
[54] Wikipedia (en). Harmonic. http://en.wikipedia.org/wiki/Harmonic, March 2013.
[55] Wikipedia (en). Fourier analysis. http://www.brains-minds-media.org/archive/289, March 2013.
[56] Wikipedia (en). Huygens-Fresnel principle. http://en.wikipedia.org/wiki/Huygens%E2%80%93Fresnel_principle, March 2013.
[57] Wikipedia (en). Doppler effect. http://en.wikipedia.org/wiki/Doppler_effect, March 2013.
[58] Wikipedia (en). Doppler shift compensation. http://en.wikipedia.org/wiki/Doppler_Shift_Compensation, March 2013.
[59] Wikipedia (en). Deep sea. http://en.wikipedia.org/wiki/Deep_sea, March 2013.
[60] Wikipedia (en). Bioluminescence. http://en.wikipedia.org/wiki/Bioluminescence, March 2013.
[61] Wikipedia (en). Electromagnetic absorption by water. http://en.wikipedia.org/wiki/Electromagnetic_absorption_by_water, March 2013.
[62] Wikipedia (en). Electromagnetic radiation. http://en.wikipedia.org/wiki/Electromagnetic_radiation, March 2013.
[63] Wikipedia (en). Underwater acoustics. http://en.wikipedia.org/wiki/Underwater_acoustics, March 2013.
[64] Wikipedia (en). Sound intensity. http://en.wikipedia.org/wiki/Sound_intensity, March 2013.
[65] Wikipedia (en). Syrinx (bird anatomy). http://en.wikipedia.org/wiki/Syrinx_(bird_anatomy), March 2013.
[66] Wikipedia (en). Larynx. http://en.wikipedia.org/wiki/Larynx, March 2013.
[67] Wikipedia (en). Ultrasound. http://en.wikipedia.org/wiki/Ultrasound, March 2013.
[68] Wikipedia (en). Radio. http://en.wikipedia.org/wiki/Radio, March 2013.
[69] Wikipedia (en). Fading. http://en.wikipedia.org/wiki/Fading, March 2013.
[70] Wikipedia (en). Adaptive equalizer. http://en.wikipedia.org/wiki/Adaptive_equalizer, March 2013.
[71] Wikipedia (en). Equalizer (communications). http://en.wikipedia.org/wiki/Equalizer_(communications), March 2013.
[72] Wikipedia (en). Near-far problem. http://en.wikipedia.org/wiki/Near-far_problem, March 2013.
[73] DolceraWiki (en). CDMA basics. http://www.dolcera.com/wiki/index.php?title=CDMA_Basics, March 2013.
[74] Wikipedia (en). Duplex (telecommunications). http://en.wikipedia.org/wiki/Duplex_(telecommunications), March 2013.
[75] Wikipedia (en). Dispersion (optics). http://en.wikipedia.org/wiki/Dispersion_(optics), March 2013.
[76] Wikipedia (en). Speed of sound. http://en.wikipedia.org/wiki/Speed_of_sound, March 2013.
[77] Wikipedia (en). Acoustic coupler. http://en.wikipedia.org/wiki/Acoustic_coupler, March 2013.
[78] Wikipedia (en). Loudspeaker. http://en.wikipedia.org/wiki/Loudspeaker, March 2013.
[79] Wikipedia (en). Microphone. http://en.wikipedia.org/wiki/Microphone, March 2013.
[80] Wikipedia (en). Tragus (ear). http://en.wikipedia.org/wiki/Tragus_(ear), March 2013.
[81] Wikipedia (en). Bit error rate. http://en.wikipedia.org/wiki/Bit_error_rate, March 2013.
[82] Wikipedia (en). OSI model. http://en.wikipedia.org/wiki/OSI_model, March 2013.
[83] Wikipedia (en). Antenna (radio): Impedance. http://en.wikipedia.org/wiki/Antenna_(radio)#Impedance, March 2013.
[84] Wikipedia (en). Harmonic wave. http://en.wikipedia.org/wiki/Harmonic_wave, March 2013.
[85] Wikipedia (en). Fundamental frequency. https://en.wikipedia.org/wiki/Fundamental_frequency, March 2013.
[86] Wikipedia (en). Frequency. http://en.wikipedia.org/wiki/Frequency, March 2013.
[87] Wikipedia (en). Intersymbol interference. http://en.wikipedia.org/wiki/Intersymbol_interference, March 2013.
[88] Wikipedia (en). Dispersion relation. http://en.wikipedia.org/wiki/Dispersion_relation, March 2013.
[89] Wikipedia (en). Interference (wave propagation). http://en.wikipedia.org/wiki/Interference_(wave_propagation), March 2013.
[90] Wikibooks (en). Outdoor sound propagation. http://en.wikibooks.org/wiki/Engineering_Acoustics/Outdoor_Sound_Propagation, March 2013.


List of Figures1 Sine Wave . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62 l-wave . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83 Traveling wave modeled through connected molecules . . . . . . . . . . . . 94 Harmonic wave [51] . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115 Harmonics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 126 Reflection and transmission for a traveling wave with v1 < v2 [42]

. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 157 Reflection and transmission for a traveling wave with v1 > v2 [42] . . . . . . 158 Khan Academy: Snell’s law and refraction - Dirty road example can be seen

[39] . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 189 Harmonics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1910 A dispersive prism [75] . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2011 Resonance - amplitudes and frequencies [53] . . . . . . . . . . . . . . . . . 2112 Fourier analysis of a sign wave . . . . . . . . . . . . . . . . . . . . . . . . . 2213 Spectrum of a square wave [41] . . . . . . . . . . . . . . . . . . . . . . . . 2214 Refraction explained via Hygens-Fresnel principle [56] . . . . . . . . . . . . 2315 Diffraction of plane water waves [56] . . . . . . . . . . . . . . . . . . . . . 2316 Doppler effect: f = const. in (left); change in f (mid); reaching Mach 1 (right)

[57] . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2417 Doppler frequency shift for different locations of an observer: from red to blue

the distance from the sound source at v = 0 to an observer is shorter [48] . . 2518 Firefly (left)[60] and flashlightfish (right)[59]

. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2819 Correlation: number social tasks (x-axis) and number of signals used (y-axis)

[15] . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3020 Larynx (left) [66], Syrinx (right) [65] . . . . . . . . . . . . . . . . . . . . . . 3121 Sound frequencies[67] . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3222 The outer ear [40] . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3423 The middle ear [40] . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3524 The inner ear - Cochela [40] . . . . . . . . . . . . . . . . . . . . . . . . . . 3525 The complete bionic bat head[9] . . . . . . . . . . . . . . . . . . . . . . . . 3726 Transmitter (nose) [9] . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3727 Receiver (ear)[9] . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3828 Filtering, amplifying and spike tree production [9] . . . . . . . . . . . . . . . 3829 Absorption for different EM frequencies. Look at the visible light bump. [61] 40


30 Zoomed version of the absorption plot, limited to visible light [61] . . . . . 40
31 Different layers in the water; blue light survives longest [46] . . . . . . . . 41
32 Damping factor for EM waves (blue) and sound waves (red) in seawater [2] . 42
33 Shallow-water reflection and refraction, resulting in several multipaths [2] . . 43
34 Transmission loss per reflection at a surface [63] . . . . . . . . . . . . . . . 43
35 Amplitude modulation (AM) and frequency modulation (FM) [2] . . . . . . . 44
36 Amplitude modulation (left) and phase modulation (right) [2] . . . . . . . . 45
37 Received signals before learning (left) and after (right) [2] . . . . . . . . . . 45
38 Structure of Seaweb, an underwater acoustic network [13] . . . . . . . . . . 47
39 Network topology of a UAN [11] . . . . . . . . . . . . . . . . . . . . . . . 48
40 OSI model [82] . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
41 Multiple access protocols [73] . . . . . . . . . . . . . . . . . . . . . . . . . 50
42 FDMA for electromagnetic mobile applications [73] . . . . . . . . . . . . . 51
43 TDMA for electromagnetic mobile applications [73] . . . . . . . . . . . . . 52
44 CDMA for electromagnetic mobile applications [73] . . . . . . . . . . . . . 53
45 Example node arrangement in a UAN [11] . . . . . . . . . . . . . . . . . . 55
46 Example node arrangement for the next two figures [8] . . . . . . . . . . . . 56
47 A handshake between A and B [8] . . . . . . . . . . . . . . . . . . . . . . . 56
48 RTS from C collides with a data packet from A [8] . . . . . . . . . . . . . . 56
49 RTS timeout and retransmission of a new RTS [10] . . . . . . . . . . . . . . 57
50 A successful handshake between A and B in slotted FAMA [8] . . . . . . . . 58
51 Priority applied to traffic handling; therefore the child node gets a CTS first [10] 58
52 Commercial underwater acoustic modems [38] . . . . . . . . . . . . . . . . 63
53 Low-cost acoustic transducer [1] . . . . . . . . . . . . . . . . . . . . . . . . 63
54 Modules of a low-cost acoustic underwater transducer [1] . . . . . . . . . . . 64
55 Low-cost underwater modem with separate receiver and transmitter [5] . . . 64
56 Dependency of the speed of sound on temperature [76] . . . . . . . . . . . . 66
57 Different air layers and the impact of humidity, pressure and temperature on v [76] 67
58 Damping factor for different frequencies in air [90] . . . . . . . . . . . . . . 68
59 Directivity and frequency (red = high intensity) [78] . . . . . . . . . . . . . 70
60 Interference: two sound sources [89] . . . . . . . . . . . . . . . . . . . . . 71
61 Point sound source radiating omnidirectionally [79] . . . . . . . . . . . . . . 71
62 Bidirectional sound source [79] . . . . . . . . . . . . . . . . . . . . . . . . 71
63 Piezoelectric element (Endrich) . . . . . . . . . . . . . . . . . . . . . . . . 72
64 Piezoelectric speaker (Endrich) . . . . . . . . . . . . . . . . . . . . . . . . 72
65 Piezoelectric buzzer (Endrich) . . . . . . . . . . . . . . . . . . . . . . . . . 72
66 An old acoustic coupler [77] . . . . . . . . . . . . . . . . . . . . . . . . . . 73
67 Electromagnetic wave spectrum [62] . . . . . . . . . . . . . . . . . . . . . . 75
68 EM absorption through the atmosphere [62] . . . . . . . . . . . . . . . . . . 77


List of Tables

1 Comparison of communication technologies [15] . . . . . . . . . . . . . . . 28
2 Comparison of acoustic and electromagnetic communication . . . . . . . . . 65
3 Atmospheric attenuation coefficient α (dB/km) at selected frequencies at 1 atm [90] 69