Einführung Computergraphik (SS 2021)
Martin Held
FB Computerwissenschaften, Universität Salzburg
A-5020 Salzburg, [email protected]
February 17, 2021
Computational Geometry and Applications Lab
Universität Salzburg
Personalia
Instructor: M. Held.
Email: [email protected]
Base-URL: http://www.cosy.sbg.ac.at/~held.
Office: Universität Salzburg, Computerwissenschaften, Rm. 1.20, Jakob-Haringer Str. 2, 5020 Salzburg-Itzling.
Phone number (office): (0662) 8044-6304.
Phone number (secr.): (0662) 8044-6328.
© M. Held (Univ. Salzburg), Einführung Computergraphik (SS 2021)
Formalia
URL of course: Base-URL/teaching/einfuehrung_graphik/cg.html.
Lecture times (VO): Friday 11:30–13:00.
Lecture times (PS): Friday 10:30–11:15.
Venue: Online via eduMEET: https://edumeet.geant.org/held_sbg_lectures. I recommend using Google's "Chrome" browser.
Electronic Slides and Online Material
In addition to these slides, you are encouraged to consult the WWW home page of this lecture:
http://www.cosy.sbg.ac.at/~held/teaching/einfuehrung_graphik/cg.html.
In particular, this WWW page contains links to online manuals, slides, and code.
A Few Words of Warning
I hope that these slides will serve as a practice-minded introduction to various aspects of computer graphics. I would like to warn you explicitly not to regard these slides as the sole source of information on the topics of my course. It may and will happen that I'll use the lecture for talking about subtle details that are not covered in these slides! That is, by making these slides available to you I do not intend to encourage you to attend the lecture on an irregular basis.
Acknowledgments
Several students contributed to the genesis of these slides, by assembling reports on graphics projects, producing electronic transcripts of my own lectures, and by writing LaTeX code and generating computer-based figures:
Richard Bauer, Stephan Czermak, Gerd Dauenhauer, Mohamed Elkattahf, Christian Gasperi, Martin Hargassner, Claudia Horner, Christian
Koidl, Florian Krisch, Claudio Landerer, Lothar Mausz, Kathrin Meisl, Oskar Schobesberger, Roland Schorn, Rolf Sint, Alex Stumpfl, Oliver
Suter, Florian Treml, Christian Zödl, and Gerhard Zwingenberger; Matthias Ausweger, Günther Gschwendtner, Herwig Höfle, Balthasar
Laireiter, Bernhard Salzlechner, and Gerald Wiesbauer; and Markus Amersdorfer, Martin Angerer, Matthias Ausweger, Richard Bauer,
Fritz Bischof, Ronald Blaschke, Michael Brachtl, Markus Chalupar, Walter Chalupar, Werner Dietl, Johann Edtmayr, Gregor Haberl, Dorly
Harringer, Sandor Herramhof, Martin Hinterseer, Hermann Huber, Gyasi Johnson, Wolfgang Klier, August Mayer, Albert Meixner, Christof
Meerwald, Michael Neubacher, Michael Noisternig, Christoph Oberauer, Christoph Obermair, Peter Palfrader, Marc Posch, Christopher
Rettenbacher, Herwig Rittsteiger, Gerhard Scharfetter, Josef Schmidbauer, Ingrid Schneider, Harald Schweiger, Stefan Sodar, Gerald
Stieglbauer, Marc Strapetz, Johanna Temmel, Christopher Vogl, Werner Weiser, Gerald Wiesbauer, Franz Wilhelmstötter.
I would like to express my thankfulness for their help with these slides. My apologies go to all those who should be on this list and whom I omitted by mistake. This revision and extension was carried out by myself, and I am responsible for all errors.
Salzburg, February 2021 Martin Held
Legal Fine Print and Disclaimer
To the best of our knowledge, these slides do not violate or infringe upon somebody else's copyrights. If copyrighted material appears in these slides then it was considered to be available in a non-profit manner and as an educational tool for teaching at an academic institution, within the limits of the "fair use" policy. For copyrighted material we strive to give references to the copyright holders (if known). Of course, any trademarks mentioned in these slides are properties of their respective owners.
Please note that these slides are copyrighted. The copyright holder(s) grant you the right to download and print them for your personal use. Any other use, including non-profit instructional use and re-distribution in electronic or printed form of significant portions of them, beyond the limits of "fair use", requires the explicit permission of the copyright holder(s). All rights reserved.
These slides are made available without warranty of any kind, either express or implied, including but not limited to the implied warranties of merchantability and fitness for a particular purpose. In no event shall the copyright holder(s) and/or their respective employers be liable for any special, indirect or consequential damages or any damages whatsoever resulting from loss of use, data or profits, arising out of or in connection with the use of information provided in these slides.
c© M. Held (Univ. Salzburg) Einführung Computergraphik (SS 2021) 7
Computational Geometry and Applications Lab
UNIVERSITAT SALZBURG
Recommended Textbooks I
E. Angel, D. Shreiner.
Interactive Computer Graphics: A Top-Down Approach with Shader-Based WebGL.
Addison-Wesley, 7th edition, 2015; ISBN 978-0133574845.
https://www.cs.unm.edu/~angel/BOOK/INTERACTIVE_COMPUTER_GRAPHICS/SEVENTH_EDITION/.

S. Guha.
Computer Graphics Through OpenGL: From Theory to Experiments.
CRC Press, 3rd edition, Jan 2019; ISBN 978-1138612648.

J. Kessenich, G. Sellers, and D. Shreiner.
The OpenGL Programming Guide.
Addison-Wesley, 9th edition, 2016; ISBN 978-0134495491.
http://www.opengl-redbook.com/

G. Sellers, R.S. Wright, and N. Haemel.
OpenGL SuperBible.
Addison-Wesley, 7th edition, 2015; ISBN 978-0672337475.
http://www.openglsuperbible.com/
Recommended Textbooks II
T. Akenine-Möller, E. Haines, N. Hoffman, A. Pesce, M. Iwanicki, and S. Hillaire.
Real-Time Rendering.
CRC Press, 4th edition, 2018; ISBN 978-1138627000.
http://www.realtimerendering.com

W. Engel.
GPU Pro 360 Guide to Rendering.
CRC Press, 1st edition, July 2018; ISBN 978-0815365501.

W. Engel.
GPU Pro 360 Guide to Geometry Manipulation.
CRC Press, 1st edition, July 2018; ISBN 978-1138568242.

M. Pharr, W. Jakob, and G. Humphreys.
Physically Based Rendering.
Morgan Kaufmann, 3rd edition, 2016; ISBN 978-0-12-800645-0.
https://www.pbrt.org/, http://www.pbr-book.org/.

S. Marschner, P. Shirley.
Fundamentals of Computer Graphics.
A K Peters/CRC Press, 4th edition, Dec 2015; ISBN 978-1482229394.
Recommended Textbooks III
D.D. Hearn, M.P. Baker, and W. Carithers.
Computer Graphics with OpenGL.
Pearson, 4th edition, 2014; ISBN 978-1-292-02425-7.

P. Shirley.
Ray Tracing mini-series. 2016.
https://github.com/petershirley/raytracinginoneweekend/releases.
https://github.com/petershirley/raytracingthenextweek/releases.
https://github.com/petershirley/raytracingtherestofyourlife/releases.
Table of Contents
1 Introduction
2 OpenGL
3 Representation and Modeling
4 Raster Graphics
5 Basic Rendering Techniques
6 Ray Tracing
1 Introduction
A First Step into Computer Graphics
Basics
What is Computer Graphics?
The term “computer graphics” was coined by William Fetter in 1960 to describe the work he was pursuing at Boeing.
“ . . . a consciously managed and documented technology directed towardcommunicating information accurately and descriptively.”
William A. Fetter, “Computer Graphics” (1960).
Computer graphics is generally regarded as the creation, storage and manipulation of objects for the purpose of generating images of those objects.
“. . . the use of computers to produce pictorial images. The images produced can be printed documents or animated motion pictures, but the term computer graphics refers particularly to images displayed on a video display screen, or display monitor.”
Encyclopedia Britannica.
Why Computer Graphics?
Humans enjoy visual information.
Visual information is easy to comprehend.
Visual information is difficult to generate manually.
Photorealism
What is a realistic image? What does it mean for a picture, whether painted, photographed, or computer-generated, to be “realistic”?
The answer is subject to much scholarly debate!
Hall & Greenberg (1983):
“Our goal in realistic image synthesis is to generate an image that evokes from the visual perception system a response indistinguishable from that evoked by the actual environment.”
Physical properties of objects have to be taken into account!
The term “photorealism” is normally used to refer to a picture that captures many of the effects of light interacting with real physical objects.
It is an attempt to synthesize the field of light intensities that would be focused on the film plane of a camera aimed at the objects depicted.
Photorealism
There exist applications, however, where perfection is not such a mandatory feature. For instance, flight simulators need a fairly believable output, but need not be perfect in every detail. The dominant challenge here is real-time interactive control.
A more realistic picture is not necessarily a more desirable or useful one. E.g., to convey information: A picture that is free of the complications of shadows and reflections may well be more successful than a tour de force of photorealism!
In molecular modeling, the realistic depictions are not of “real” atoms, but rather of stylized ball-and-stick and volumetric models that permit special effects, such as animated vibrating bonds and color changes representing reactions.
In many applications reality is intentionally altered for esthetic effect or to fulfill a naïve viewer’s expectation.
Visualization
Visualization is the art of producing images of objects that could not (or could hardly) be seen otherwise, e.g., because they are too small, too abstract, too slow or too fast, or simply invisible for some other reason.
Typical examples include weather-forecast charts in meteorology, hearts, brains and bones of living creatures in medicine, temperature distributions on brakes, the growth of plants over years, and geological changes like volcano eruptions or continental movements.
Applications of Computer Graphics
Entertainment: games, commercials, movies (Tron, Toy Story, Jurassic Park, Ants, A Bug’s Life, Star Wars, Toy Story 2, Titanic, Gladiator, Troy, . . .), Virtual Reality.
Computer-Aided Design (CAD, CAM): One of the earliest applications.
Car parts, Boeing 777, submarine design.
City models, architectural walk-throughs.
Control of robots and manufacturing cells.
Education and Training: simulated environments, Virtual Reality, Augmented Reality.
Flight simulation, pilot training.
Maintenance and assembly training.
Military training (digitized battlefields, mission rehearsal).
Telemedicine:
3D models of heart, brain, skeleton, etc.
Haptic interfaces.
Minimally invasive surgery.
Applications of Computer Graphics: Augmented Reality
The term “augmented reality” was coined around 1990 by Thomas Caudell and David Mizell at Boeing.
[Image credit: The Boeing Company.]
An example of augmented reality in today’s consumer products: head-up displays in cars.
Applications of Computer Graphics
Scientific Visualization and Data Analysis:
Molecular graphics (protein structures).
Geographic information systems (maps, topographic maps).
Turbulence, temperature, stress, etc.
Weather models.
Business Visualization: Data mining, visualization of massive commercial data.
Graphical User Interfaces (GUI).
History of Computer Graphics: 1950s, 1960s, and 1970s
early 1950s The US military used an interactive CRT graphics system called SAGE.
1959 Computer drawing system DAC-1 by IBM and GM.
1961 Sketchpad developed by Ivan Sutherland at MIT.
1963 Douglas Engelbart used a mouse as an input device.
1965 Jack Bresenham introduced his line-drawing algorithm.
1966 First computer-controlled head-mounted display (HMD) designed byIvan Sutherland.
1971 Henri Gouraud developed Gouraud shading.
1972 2D raster display for PC workstations at Xerox.
1973 First SIGGRAPH Conference. Roughly 600 attendees.
1974 Ed Catmull introduced texture mapping (and z-buffering).
1974 Bui Tuong Phong developed Phong shading.
1975 Benoit Mandelbrot published the paper “A Theory of Fractal Sets”.
1977 Nintendo entered the graphics market.
History of Computer Graphics: 1980s
1980 “Vol Libre” (by Loren Carpenter) shown at SIGGRAPH.
1980 Ray tracing developed by Turner Whitted.
1982 “Tron” produced by Disney; Perlin noise.
1982 Silicon Graphics founded by Jim Clark. Sun Microsystems, Autodesk,and Adobe Systems founded.
1982 AutoCAD developed by John Walker and Dan Drake.
1984 Radiosity method developed at Cornell University by Ben Battaile,Cindy Goral, Don Greenberg, Ken Torrance.
1985 Adobe Systems introduced PostScript.
1986 Pixar founded.
1988 Pat Hanrahan implemented and released RenderMan.
History of Computer Graphics: 1990s
1990 Autodesk introduced 3D Studio.
1991 “Terminator 2” was released.
1991 JPEG and MPEG standards were introduced.
1992 SGI specified OpenGL.
1992 Wavelets were used for radiosity.
1994 Industrial Light and Magic won an Academy Award for Technical Achievement for its special-effects work on “Jurassic Park”.
1994 Sun Microsystems introduced Java.
1995 Pixar released “Toy Story”.
1997 SIGGRAPH’97 had more than 48 000 attendees.
1997 “Titanic” released.
1997 Ken Perlin won an Academy Award for Technical Achievement for“Perlin noise”.
1998 “Ants” and “A Bug’s Life” released.
History of Computer Graphics: 2000 and Beyond
2001 Microsoft’s “Xbox console” (based on NVIDIA graphics) makes its debut.
2003 Graphics cards (NVIDIA, ATI, Matrox, . . .) become widely available.
2004 “Half Life 2” released; graphics cards for mobile phones and PDAs.
2004 OpenGL Shading Language formally included into OpenGL 2.0.
2005 “Star Wars Episode III”.
2007 CUDA (Compute Unified Device Architecture) released by NVIDIA.
2008 OpenCL (Open Computing Language) specified by the Khronos Group.
2010 GPUs with native 64-bit floating-point precision and support for massively parallel computing become widely available.
2014 OpenGL 4.5 released.
2015 Vulkan introduced as “next generation OpenGL” at GDC 2015.
2016 Vulkan 1.0 released.
2017 OpenGL 4.6 released.
2020 Hardware-accelerated ray tracing on NVIDIA/AMD GPUs.
20?? Real-time radiosity rendering? Photo-realistic consumer graphics? Realistically rendered humans?
Foundations of Computer Graphics
Computer Science: algorithms, data structures, programming, software engineering, architecture, artificial intelligence, 3D modeling.
Mathematics: linear algebra, analytical geometry, complex analysis, numerical analysis, differential geometry, topology, 3D modeling.
Physics: optics, fluid dynamics, energy, kinematics and dynamics.
Psychology: human light and color perception.
Biology: human body, behavioral and cognitive systems, nervous system.
Art: realism, esthetics.
3D Graphics System: Software Components
A system for 3D graphics consists of three major (software) parts:
the modeler,
the renderer,
image handling and display.
Image Handling
Image handling is often only a device driver to make the computed image visible for the user on a screen or on a hard-copy device.
It can also be an image processing system to improve the quality of images or to alter or transform them before displaying.
Modeling System
Geometry-based modeling:
Lines, polygons, polyhedra,
Free-form curves and surfaces,
Quadtrees, octrees, bounding volumes.
Physics-based modeling:
Kinematics and dynamics (contact detection, contact resolution, force calculation, natural gait),
Fluid dynamics (e.g., for modeling water and waves),
Gas, smoke, fire,
Deformable objects (e.g., clothes, cords),
Haptics (e.g., touch sensors).
Cognitive-based modeling:
Domain knowledge, learning.
Interaction with the real world.
Widespread simple modelers:
CAD systems,
3D editors,
object description languages.
Rendering System
The output of the modeler is used as input to the rendering system.
Rendering is the process of reducing the 3D information of the scene to a 2D representation, the image.
A camera definition is necessary to project the 3D scene onto the desired image plane.
Conventional rendering is very simple: visibility determination and simple shading are performed.
In a more realistic image-synthesis system the rendering consists of several parts:
Visibility determination,
Shading,
Texturing,
Anti-aliasing.
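The camera definition mentioned above can be illustrated with a minimal pinhole-camera projection: a 3D point is mapped onto an image plane at distance d in front of a camera sitting at the origin and looking down the negative z-axis. This is a generic textbook sketch, not part of any rendering API; all names below are illustrative.

```c
typedef struct { double x, y, z; } Vec3;
typedef struct { double x, y; } Vec2;

/* Pinhole-camera projection: camera at the origin looking down the
 * negative z-axis, image plane at distance d. The perspective divide
 * by -z makes distant objects appear smaller. Hypothetical helper. */
Vec2 project_point(Vec3 p, double d)
{
    Vec2 q;
    q.x = d * p.x / -p.z;   /* perspective divide */
    q.y = d * p.y / -p.z;
    return q;
}
```

Note that points with z = 0 (in the camera plane) cannot be projected; a real renderer clips against a near plane before this divide.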
Rendering System
Shading: It is the main part of every rendering system.
Shadows must be determined.
The intensity and color of light leaving an object, given the incoming light distribution and the surface properties, must be computed.
Texturing: It achieves surface details which, in reality, are caused by varying optical properties of the object.
Such variations are often stored in texture maps for fast access.
Texture mapping is the process of determining the transformation from a texture map onto the surface of an object and then onto the screen.
Anti-aliasing: It tries to correct errors caused by aliasing.
Aliasing can be caused during various steps of the image-generation process.
Aliasing is, more or less, usually due to a discretization of a continuum.
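The shading computation described above can be sketched with the standard Lambertian diffuse term, intensity = kd · max(0, N·L); this is a generic textbook formula, and all function and parameter names below are hypothetical.

```c
#include <math.h>

typedef struct { double x, y, z; } Vec3d;

static double dot3(Vec3d a, Vec3d b)
{
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

/* Lambertian diffuse shading: the light leaving the surface is
 * proportional to the cosine of the angle between the surface
 * normal n and the light direction l (both assumed to be unit
 * vectors). Real shading adds ambient and specular terms. */
double lambert_diffuse(Vec3d n, Vec3d l, double kd, double light)
{
    double ndotl = dot3(n, l);
    if (ndotl < 0.0)
        ndotl = 0.0;   /* light source behind the surface */
    return kd * light * ndotl;
}
```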
3D Graphics System: Hardware Components
1 Input devices (keyboard, mouse, joystick, data glove, eye tracker, . . .)
2 Processor (CPU)
3 CPU memory
4 Graphics Processing Unit (GPU)
5 GPU memory
6 Frame buffer
7 Output devices (monitor, printer, . . .)
[Figure: data flow between input devices, CPU, CPU memory, GPU, GPU memory, frame buffer, and output devices.]
Frame Buffer
A common graphics technique is to use a frame buffer or refresh buffer.
In its simplest meaning, a frame buffer is simply the video memory that holds the pixels from which the video display (frame) is refreshed.
The frame buffer can be manipulated by the rendering algorithm, and its contents can be moved to the screen when desired.
Double buffering is a technique whereby the graphics system displays one finished buffer while the hardware renders the next frame into a second buffer. Without double buffering, the process of drawing is visible as it progresses.
Double buffering generally produces flicker-free rendering for simple animations.
Triple buffering makes use of one front buffer and two back buffers, thus enabling the graphics hardware and the rendering algorithm to progress at their own speeds.
Quadruple buffering means the use of double buffering for stereoscopic images.
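The double-buffering scheme above can be illustrated with a toy sketch: rendering writes into the back buffer, and a pointer swap then makes the finished frame the displayed (front) buffer. The buffer size, pixel type, and all names are assumptions for illustration only.

```c
#include <string.h>

#define W 4
#define H 4

/* Toy double buffering: two pixel arrays plus front/back pointers. */
typedef struct {
    unsigned char buf0[W * H], buf1[W * H];
    unsigned char *front;   /* currently displayed frame */
    unsigned char *back;    /* frame being rendered      */
} DoubleBuffer;

void db_init(DoubleBuffer *db)
{
    memset(db->buf0, 0, sizeof db->buf0);
    memset(db->buf1, 0, sizeof db->buf1);
    db->front = db->buf0;
    db->back  = db->buf1;
}

/* "Render" a frame: fill the back buffer with one gray value.
 * The front buffer stays untouched, so no partial drawing is visible. */
void db_render(DoubleBuffer *db, unsigned char value)
{
    memset(db->back, value, W * H);
}

/* Swap: the finished frame becomes visible; the old front buffer
 * becomes the back buffer for the next frame. */
void db_swap(DoubleBuffer *db)
{
    unsigned char *tmp = db->front;
    db->front = db->back;
    db->back  = tmp;
}
```

Triple buffering would simply add a second back buffer, so the renderer never has to wait for the swap.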
Graphics Processing Unit
By today’s definition, a “graphics processing unit” (GPU) is a graphics output device that manipulates the frame buffer and provides accelerated 2D or 3D graphics.
The frame buffer is usually stored in the memory chips on the GPU.
Modern GPUs provide much more than z-buffer memory! (E.g., a stencil buffer for computing shadows and reflections has become common.)
Modern GPUs also offer (comparatively cheap) hardware for massively parallel computations, at a floating-point precision of 64 bits.
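The z-buffer mentioned above stores, per pixel, the depth of the nearest fragment drawn so far, so that hidden surfaces are discarded regardless of drawing order. A minimal CPU-side sketch; the buffer size and the “smaller z is nearer” convention are illustrative assumptions.

```c
#include <float.h>

#define ZW 4
#define ZH 4

/* Toy z-buffer: per-pixel depth of the nearest fragment plus its color. */
typedef struct {
    double depth[ZW * ZH];
    unsigned char color[ZW * ZH];
} ZBuffer;

void zb_clear(ZBuffer *zb)
{
    for (int i = 0; i < ZW * ZH; i++) {
        zb->depth[i] = DBL_MAX;   /* "infinitely far away" */
        zb->color[i] = 0;
    }
}

/* Depth test: write a fragment only if it is nearer than the
 * fragment currently stored at that pixel. */
void zb_fragment(ZBuffer *zb, int x, int y, double z, unsigned char c)
{
    int i = y * ZW + x;
    if (z < zb->depth[i]) {
        zb->depth[i] = z;
        zb->color[i] = c;
    }
}
```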
Device-Independent Graphics Primitives
Since graphics output devices are many and diverse, it is imperative to achieve device independence.
Thus, it is generally preferred to work in world coordinates rather than device coordinates.
Typical graphics commands will be similar to the following commands:
DrawLine(x1, y1, x2, y2);
DrawCircle(x1, y1, r);
DrawPolygon(PointArray);
DrawText(x1, y1, "A Message");
where x1, y1, x2, y2, r are specified in world coordinates.
Graphics primitives have attributes, such as style, thickness and color for a line, or font, size and color for a text.
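Working in world coordinates requires a window-to-viewport mapping to device (pixel) coordinates before drawing. A minimal sketch of such a mapping; the function and parameter names are illustrative and not part of any graphics API. Note the flipped y-axis, since screen coordinates usually grow downwards.

```c
typedef struct { double x, y; } WPoint;   /* world coordinates  */
typedef struct { int x, y; } DPoint;      /* device coordinates */

/* Map world coordinates in [wxmin,wxmax] x [wymin,wymax] onto device
 * pixels in [0,width-1] x [0,height-1], flipping the y-axis and
 * rounding to the nearest pixel. */
DPoint world_to_device(WPoint p,
                       double wxmin, double wymin,
                       double wxmax, double wymax,
                       int width, int height)
{
    DPoint d;
    d.x = (int)((p.x - wxmin) / (wxmax - wxmin) * (width - 1) + 0.5);
    d.y = (int)((wymax - p.y) / (wymax - wymin) * (height - 1) + 0.5);
    return d;
}
```

With such a mapping in place, DrawLine and friends can accept world coordinates and convert only at the last moment, which keeps the application code device-independent.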
Application Programmer’s Interface (API)
Graphics APIs provide the programmer with procedures for handling menus, windows and, of course, graphics.
Well-known APIs for 3D graphics:
OpenGL,
WebGL,
Direct3D,
Java3D,
Vulkan.
Vulkan can be expected to replace OpenGL on standard consumer GPUs within the next few years.
Vulkan offers a better CPU/GPU balance (than OpenGL) and parallel processing, but it is considerably more low-level than OpenGL.
We will use OpenGL for practical work in this course.
A Note on Direct3D and Java3D
Direct3D:
Advantages:
More high-level functionality;
Better control of resources.
Disadvantages:
Only supported by MS Windows machines;
Lack of backwards compatibility of newer versions.
Fahrenheit was an attempt by Microsoft and SGI to unify OpenGL and Direct3D in the 1990s, but it was cancelled.
Java3D:
Advantages:
Based on a true object-oriented approach.
Ties natively into Java.
Open-source code since 2004.
Disadvantages:
Runs atop Java OpenGL (JOGL); delay in the use of new GPU features.
Reputation and use were badly hit by a pause in development during 2003 and 2004.
2 OpenGL
Introduction to OpenGL
Basic OpenGL
Coordinates and Transformations
Event-Handling and Callbacks
Textures
Loading 3D Models
What is OpenGL?
OpenGL stands for “Open Graphics Library”.
Designed by Silicon Graphics Inc. (SGI); version 1.0 was released in 1992.
Initial design in 1982 (“IRIS GL”).
For many years, the development of OpenGL had been coordinated by an Architectural Review Board (ARB).
In 2006, the ARB and the Khronos Board of Directors voted to transfer control of the OpenGL API standard to the non-profit technology consortium Khronos Group.
As of February 2021, the following companies were promoter members of the Khronos Group: AMD, Apple, ARM, Epic Games, Google, HUAWEI, IKEA, Imagination Technologies Group, Intel, Nvidia, Qualcomm, Samsung, Sony, Valve, VeriSilicon.
The Khronos Group now controls the adaptation/extension of OpenGL to reflect new hardware and software advances,
“. . . to bring advanced 3D graphics to all hardware platforms and operating systems — from supercomputers to jet fighters to cell phones.”
Official website: https://www.opengl.org/.
What is OpenGL?
OpenGL is a high-performance system interface to graphics hardware.
It is the most widely used library for high-end platform-independent computer graphics; the de-facto industry standard.
It runs on different operating systems (including Unix/Linux, Windows, macOS) without requiring changes to the source code.
Platform-specific features can be implemented via extensions.
OpenGL is a C library of several hundred distinct functions.
OpenGL is not object-oriented.
Several (commercial) versions of an OpenGL library have been implemented.
OpenGL functionality is also provided by Mesa, http://www.mesa3d.org, which is free. Mesa 20.x implements the OpenGL 4.6 API. (OpenGL 3.3 and Mesa 10.x would be perfectly fine for this course, though!)
What is OpenGL?
OpenGL takes advantage of graphics hardware where it exists; whether or not hardware acceleration is used depends on the availability of suitable drivers.
OpenGL does not come with any windowing functionality; it has to rely on additional libraries (such as GLUT or GLFW).
It ties into standard C/C++; various other language bindings exist, too. In particular, OpenGL can be used from within C, C++, Java, Python, Fortran, and Ada.
OpenGL supports
polygon rendering,
texture mapping,
anti-aliasing,
shader-level operations.
OpenGL does not provide or (directly) support high-level graphics like
ray tracing,
radiosity calculations,
volume rendering.
OpenGL 3.x/4.x versus OpenGL 2.x
Note that OpenGL 1.x and 2.x differ substantially from OpenGL 3.x and OpenGL 4.x:
Modern OpenGL is entirely shader-based.
Modern OpenGL no longer relies on tons of state variables.
Be careful . . .
. . . when studying tutorials on the Web! A surprisingly large number of tutorials still teach old-style “legacy” OpenGL.
Hint: It is old-style OpenGL if you see statements like glBegin or glColor4f.
No GLU anymore
OpenGL 3.0 deprecated the entire Graphics Library Utilities (GLU) of “legacy” OpenGL 1.x/2.x, and it was removed in OpenGL 3.1. This means that GLU will fail to work in OpenGL 3.x/4.x contexts.
Similarly, GLUT commands like glutSolidSphere() no longer work.
OpenGL Tutorials
Sample Tutorials
https://open.gl/:
Requires a GPU compatible with OpenGL 3.2 and CMake; uses GLFW for context and window creation and GLEW for access to newer OpenGL functions.
http://www.opengl-tutorial.org:
Requires a GPU compatible with OpenGL 3.3. Similar to https://open.gl/.
LEARN OPENGL, https://learnopengl.com/:
Similar to http://www.opengl-tutorial.org.
http://ogldev.org/:
Tutorials that require a GPU compatible with OpenGL 3.3.
OpenGL Libraries
OpenGL proper does not provide any windowing functionality! That is, it does not support opening a window or getting input from the mouse or a keyboard.
Quote taken from the OpenGL 3.1 Specification (chapter 2, first paragraph):
OpenGL is concerned only with rendering into a frame buffer (and reading values stored in that frame buffer). There is no support for other peripherals sometimes associated with graphics hardware, such as mice and keyboards. Programmers must rely on other mechanisms to obtain user input.
Thus, a link to the underlying windowing system (GLX for X Windows, WGL for Windows, AGL for Macintosh) and an add-on library (e.g., GLFW) is required!
OpenGL Libraries: GLFW
GLFW is a light-weight multi-platform library for OpenGL.
It supports Windows (XP and later), OS X (10.7 Lion and later) and Unix-like operating systems that run the X Window System.
Its commands start with the prefix glfw. E.g., glfwInit().
It can create and manage windows as well as handle standard input (via keyboard, mouse or joystick).
It can control multiple monitors and enumerate video modes.
In addition to portability, its single biggest advantage is its simplicity.
Its biggest disadvantage is its lack of menus and buttons.
[Figure: library stack — the application program sits atop GLFW and GLEW, which sit atop OpenGL/GLX and Xlib/Xtk, which drive the frame buffer.]
OpenGL Libraries: GLEW
The OpenGL Extension Wrangler Library (GLEW) is a cross-platform library that provides efficient run-time mechanisms for determining which OpenGL extensions are supported on the target platform.
That is, it makes it easy to access OpenGL extensions that are available on aparticular system.
GLEW commands start with the prefix glew.
Easy to use: Include glew.h and run glewInit().
Compiling and Linking an OpenGL Program
The source code for an OpenGL program has to contain the following directives for including OpenGL header files:
If GLEW is used:
#include <GL/glew.h>
If GLFW is used:
#include <GLFW/glfw3.h>
Note: When compiling and linking an OpenGL program, the OpenGL header files and libraries have to be available. This means, e.g., using the -lGL linker flag and possibly -L flags for the X libraries.
Better alternative: Resort to cmake! See the sample files in my WWW graphics resources.
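A minimal CMakeLists.txt for an OpenGL/GLFW/GLEW program could look as follows; the target name ogl_demo and the source file main.c are assumptions for illustration, not the names used in the actual sample files.

```cmake
# Minimal CMake setup for an OpenGL program using GLFW and GLEW.
cmake_minimum_required(VERSION 3.10)
project(ogl_demo C)

# Locate the OpenGL, GLFW, and GLEW development packages.
find_package(OpenGL REQUIRED)
find_package(glfw3 REQUIRED)
find_package(GLEW REQUIRED)

add_executable(ogl_demo main.c)
target_link_libraries(ogl_demo OpenGL::GL glfw GLEW::GLEW)
```

Running cmake followed by make then takes care of the include paths and linker flags automatically.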
Basic OpenGL Program Structure
#include <HeadersOpenGL>

int main()
{
   CreateWindow(title, width, height);
   CreateOpenGLContext(settings);
   while (windowIsOpen) {             /* event processing and drawing */
      while (event = GetNextEvent())
         HandleEvent(event);          /* e.g., handle mouse clicks */
      UpdateScene();                  /* e.g., move objects */
      RenderScene();                  /* generate next image */
      DisplayGraphics();              /* e.g., swap buffers */
   }
}
Every real-time graphics application will have a program flow that boils down to this structure, no matter whether it uses OpenGL or some other library.
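Stripped of all graphics calls, the flow of such a loop can be simulated with a plain event queue. The names below (Scene, frame, the "quit" event) are illustrative stand-ins for the pseudocode above, not OpenGL or GLFW functions:

```cpp
#include <queue>
#include <string>

// Illustrative stand-in for the per-frame state of an application.
struct Scene { int updates = 0; int frames = 0; };

// One iteration of the loop: drain pending events, update, render.
// Returns false once a "quit" event (think: escape key) was handled.
bool frame(Scene& scene, std::queue<std::string>& events) {
    bool stayOpen = true;
    while (!events.empty()) {            // GetNextEvent()/HandleEvent()
        if (events.front() == "quit")
            stayOpen = false;
        events.pop();
    }
    scene.updates += 1;                  // UpdateScene()
    scene.frames  += 1;                  // RenderScene() + DisplayGraphics()
    return stayOpen;
}
```

A real application would call frame() in a while loop until it returns false.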
Creating an OpenGL Window and Context
We use GLFW to create an OpenGL display window. It comes as no surprise that you need to load the header file and initialize GLFW.
#include <GLFW/glfw3.h>

/* initialization of GLFW */
glfwSetErrorCallback(errorCallback);
if (glfwInit() != GLFW_TRUE) {
   fprintf(stderr, "Cannot initialize GLFW\n");
   exit(EXIT_FAILURE);
}
...
/* termination of GLFW */
glfwTerminate();
GLFW error callback function:
static void errorCallback(int error, const char* logText)
{
   fprintf(stderr, "GLFW error %d: %s\n", error, logText);
}
Creating an OpenGL Window and Context
The glfwWindowHint() function is used to set some GLFW options.
glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3);
glfwWindowHint(GLFW_OPENGL_FORWARD_COMPAT, GL_TRUE);
glfwWindowHint(GLFW_RESIZABLE, GL_FALSE);
Window creation:
const GLuint WIDTH = 800, HEIGHT = 600;
GLFWwindow* myWindow = glfwCreateWindow(WIDTH, HEIGHT,
                                        "OGL Demo", NULL, NULL);
if (myWindow == NULL) {
   fprintf(stderr, "Cannot open GLFW window\n");
   exit(EXIT_FAILURE);
}
The first two parameters specify the width and height of the drawing area. The fourth parameter tells GLFW to use the monitor in windowed mode, and the last parameter would allow sharing resources with an existing OpenGL context.
Roughly, a context stores all of the state data associated with an instance of OpenGL. A process can create multiple OpenGL contexts, and each context can represent a separate drawing area, e.g., a window in a graphics application.
Creating an OpenGL Window and Context
For fullscreen mode:
GLFWwindow* myWindow = glfwCreateWindow(WIDTH, HEIGHT, "OGL Demo",
                                        glfwGetPrimaryMonitor(), NULL);
Making an OpenGL context active:
glfwMakeContextCurrent(myWindow);
Creating an OpenGL Window and Context: Event-Handling Loop
This should be enough to get an OpenGL window mapped to your screen:
/* event-handling and rendering loop */
while (!glfwWindowShouldClose(myWindow)) {
   /* poll events */
   glfwPollEvents();
   /* swap buffers */
   glfwSwapBuffers(myWindow);
   /* close window upon hitting the escape key */
   if (glfwGetKey(myWindow, GLFW_KEY_ESCAPE) == GLFW_PRESS)
      glfwSetWindowShouldClose(myWindow, GL_TRUE);
}
The event-handling loop always needs to call glfwSwapBuffers() andglfwPollEvents().
You can ignore events that you do not want to handle. (We'll learn more on event handling later ...)
Do not forget to handle the escape key (or some other key) to return to the desktop if using the fullscreen mode.
Creating an OpenGL Window and Context
One technical issue remains: At runtime a graphics application needs to check which functionality is supported by a GPU, as specified in the driver provided by the vendor of the GPU, and needs to link to the corresponding functions. This is tedious and is best handled by resorting to GLEW:
#include <GL/glew.h>

/* initialization of GLEW */
glewExperimental = GL_TRUE;
GLenum glewStatus = glewInit();
if (glewStatus != GLEW_OK) {
   fprintf(stderr, "Error: %s\n", glewGetErrorString(glewStatus));
   exit(EXIT_FAILURE);
}
Make sure to include glew.h prior to other OpenGL-related headers!
Setting glewExperimental forces GLEW to use a "modern" OpenGL method for checking whether a function is available.
See window.cc in my WWW graphics resources.
Vertex Buffer Object
Classical bottleneck in pre-OpenGL 3.1: Whenever a vertex is specified in a pre-OpenGL 3.1 application, by means of glVertex, its coordinates need to be sent to the GPU.
Goal: Increase performance by using the GPU rather than the CPU and bydecreasing the amount of data that is exchanged between CPU and GPU.
Basic idea:
   We pack the vertex and attribute data into arrays.
   A vertex array is transferred to the GPU and stored in the GPU memory.
   Array data that is in the GPU memory can be rendered via a single call to a drawing function: glDrawArrays().
This leads to vertex array objects and vertex buffer objects.
No object-oriented "object"
OpenGL is fairly liberal in its use of the word "object"! That is, an OpenGL "object" is not to be understood as an object in the object-oriented programming sense. Rather, OpenGL objects tend to be simple arrays of data for which we get a handle (i.e., an identifier) to interact with.
Vertex Buffer Object
OpenGL expects vertices to be stored in arrays.
float vtx[] = {
    0.0f,  0.0f, /* x- and y-coords of 1st vertex */
    0.5f,  0.5f, /* x- and y-coords of 2nd vertex */
    0.5f, -0.5f  /* x- and y-coords of 3rd vertex */
};
A Vertex Buffer Object (VBO) is an array of data, typically floats.
E.g., it may hold data like world coordinates, color, texture coordinates and, possibly, application-specific data.
It will reside in the high-speed memory of the GPU.
GLuint myVBO;
glGenBuffers(1, &myVBO);
glBindBuffer(GL_ARRAY_BUFFER, myVBO);
Since GPU memory is managed by OpenGL, you get a positive number as a reference to it.
The glBindBuffer() function turns a VBO into the active buffer.
Vertex Buffer Object
Once a VBO is active, we can copy the vertex data to it.
glBufferData(GL_ARRAY_BUFFER, sizeof(vtx), vtx, GL_STATIC_DRAW);
Depending on the intended type of use, the last argument of glBufferData() determines the kind of GPU memory (relative to writing and drawing speed) in which the data is stored:
   GL_STATIC_DRAW: Generated once, no changes, drawn many times.
   GL_DYNAMIC_DRAW: Changed a few times, drawn many times.
   GL_STREAM_DRAW: Changed and drawn many times.
We can store more than just the 2D or 3D coordinates of points in a VBO. E.g., store 2D texture coordinates or 3D normals.
Hence, it is likely that most of the VBOs will consist of arrays of two and three floats.
Vertex Array Object
A Vertex Array Object (VAO) is used to tell OpenGL how the VBO is arranged. E.g., it might be divided into variables of two floats each.
That is, a VAO is not the actual object storing the data, but a descriptor of the data.
GLuint myVAO;
glGenVertexArrays(1, &myVAO);
glBindVertexArray(myVAO);
Once a VAO has been bound, every call to glVertexAttribPointer() will cause the attributes associated with a VBO to be stored in that VAO.
Warning
Only attribute bindings performed after binding a VAO refer to it! Thus, make sure to bind a VAO at the start of your code!
Shader Programs
GPU-based rendering is invoked through so-called shader programs. An application sends data to the GPU, and the GPU does all the rendering. Starting with OpenGL 3.1, OpenGL is entirely shader-based:
   The state model is replaced by a data-flow model.
   Several pre-OpenGL 3.1 functions are deprecated, and backwards compatibility is not required. (At least not in Core Mode.)
   No default shaders: each application has to provide at least a vertex and a fragment shader.
Vertex Shader:
   Processes input vertices (e.g., of triangles) individually.
   Influences the attributes of a vertex, e.g., position, color, and texture coordinates.
   Performs the perspective transformation.
Fragment Shader:
   Calculates individual fragment colors.
   E.g., it might sample a texture or simply output a color.
   It may also be used for lighting and for creating advanced effects like bump-mapping effects.
More shaders (e.g., Tessellation and Geometry Shaders) added with OpenGL 4.1.
Shader Programs: Execution Pipeline
[Pipeline diagram: vertex data (arrays) → vertex shader → tessellation shaders → geometry shader → shape assembly → culling, clipping → rasterization → fragment shader → tests and blending]
The geometry shader is optional. It can discard, modify or pass through primitives, or even generate new ones. E.g., it could generate squares out of input vertex data.
In the shape assembly, the GPU forms primitives (e.g., triangles, line segments) out of the vertices. Up to this stage, all operations are carried out on the vertices.
Rasterization converts the primitives into pixel-sized fragments.
OpenGL Shading Language
Shaders are written in the OpenGL Shading Language (GLSL).
The GLSL is C/C++-like with overloaded operators.
New data types (e.g., matrices, vectors) and C++-like constructors. E.g., vec3 myVec = vec3(1.0, 0.0, 1.0).
Similar in use to NVIDIA’s Cg and Microsoft’s HLSL.
GLSL code is sent to the shaders as source code.
New OpenGL functions added to compile and link that code and to exchange information with the shaders.
Shaders can be written inside the C/C++ code, or can be stored in files and loaded.
We will discuss vertex shaders and fragment shaders only very briefly.
OpenGL Shading Language: Sample Vertex Shader
Since the triangle in our sample is already given by 2D vertices, a vertex shader can be fairly simple.
/* define the vertex shader */
const char* vertexShaderSource = GLSL(
   in vec2 position;
   void main()
   {
      gl_Position = vec4(position, 0.0f, 1.0f);
   }
);
/* compile the vertex shader */
GLuint vertexShader = glCreateShader(GL_VERTEX_SHADER);
glShaderSource(vertexShader, 1, &vertexShaderSource, NULL);
glCompileShader(vertexShader);
Since we deal with homogeneous coordinates, the last component of gl_Position will, in general, be 1.0 ...
OpenGL Shading Language: Sample Vertex Shader
Warning
No error will be reported by glGetError() if a shader fails to compile!
Hence, make sure to check explicitly!
/* check whether the vertex shader has compiled */
GLint status;
glGetShaderiv(vertexShader, GL_COMPILE_STATUS, &status);
if (status != GL_TRUE) {
   fprintf(stderr, "Vertex shader did not compile\n");
   char vertexCompilerLog[512];
   glGetShaderInfoLog(vertexShader, 512, NULL, vertexCompilerLog);
   fprintf(stderr, "%s", vertexCompilerLog);
   exit(EXIT_FAILURE);
}
OpenGL Shading Language: Sample Fragment Shader
For simplicity, we'll draw the triangle entirely red and get the following simple fragment shader.
/* define and compile the fragment shader: */
/* we'll get a red triangle */
const char* fragmentShaderSource = GLSL(
   out vec4 outColor;
   void main()
   {
      outColor = vec4(1.0f, 0.0f, 0.0f, 1.0f);
   }
);

GLuint fragmentShader = glCreateShader(GL_FRAGMENT_SHADER);
glShaderSource(fragmentShader, 1, &fragmentShaderSource, NULL);
glCompileShader(fragmentShader);
Creating a Shader Program
We form a shader program by linking the vertex and fragment shader into one unit.
GLuint shaderProgram = glCreateProgram();
glAttachShader(shaderProgram, vertexShader);
glAttachShader(shaderProgram, fragmentShader);
glBindFragDataLocation(shaderProgram, 0, "outColor");
glLinkProgram(shaderProgram);
To make the shader program active, we use the following statement:
glUseProgram(shaderProgram);
Creating a Shader Program
Again, no error checking is done by OpenGL! Hence, make sure to check whether linking the shader program worked.
bool checkShaderProgramLinkStatus(GLuint programID)
{
   GLint status;
   glGetProgramiv(programID, GL_LINK_STATUS, &status);
   if (status == GL_FALSE) {
      GLint length;
      glGetProgramiv(programID, GL_INFO_LOG_LENGTH, &length);
      GLchar* log = new char[length + 1];
      glGetProgramInfoLog(programID, length, &length, &log[0]);
      fprintf(stderr, "%s", log);
      delete [] log; /* avoid leaking the log buffer */
      return false;
   }
   return true;
}
Specifying the Vertex Attributes
We obtain a reference to the position input in the vertex shader — in our example this will be 0 — and then use glVertexAttribPointer() to specify how the input data is organized:
const char* attrName = "position";
GLint posAttrib = glGetAttribLocation(shaderProgram, attrName);
if (posAttrib == -1) {
   fprintf(stderr, "Error for attribute %s\n", attrName);
   exit(EXIT_FAILURE);
}
glEnableVertexAttribArray(posAttrib);
glVertexAttribPointer(posAttrib, 2, GL_FLOAT, GL_FALSE,
                      2*sizeof(float), 0);
The arguments of glVertexAttribPointer() are as follows:
   1 Reference to the input.
   2 Number of values for that input, i.e., components of the vec.
   3 Type of each component.
   4 Normalization to [-1.0, 1.0] requested?
   5 Stride: How many bytes are between every position attribute in the array?
   6 Offset: Byte offset for the first component of the first attribute.
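To make the stride and offset arguments concrete, here is how the byte layout of an interleaved VBO with two position floats and three color floats per vertex (the layout used for the colored triangle later on) works out; this is plain arithmetic, not OpenGL code:

```cpp
#include <cstddef>

// Interleaved layout: x, y, r, g, b per vertex (5 floats).
constexpr std::size_t floatsPerVertex = 2 + 3;
constexpr std::size_t stride    = floatsPerVertex * sizeof(float); // 5th argument
constexpr std::size_t posOffset = 0;                   // 6th argument, "position"
constexpr std::size_t colOffset = 2 * sizeof(float);   // 6th argument, color

// Byte address of an attribute of vertex i within the array:
constexpr std::size_t attribByte(std::size_t i, std::size_t offset) {
    return i * stride + offset;
}
```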
Drawing the Sample Triangle
Finally, we set the background to black and draw the triangle.
/* set the window background to black */
glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT);

/* draw the triangle */
glDrawArrays(GL_TRIANGLES, 0, 3);
The call glClearColor(0.0, 0.0, 1.0, 0.0) would set the background color to "no red, no green, maximum blue". (The fourth parameter pertains to blending, and can be ignored for the moment.)
Each argument is a floating-point value in the range [0, 1], specifying the amountof red, green and blue.
The arguments of glDrawArrays() are as follows:
   1 Type of primitives to be rendered.
   2 How many vertices shall be skipped at the beginning.
   3 Number of vertices. (Not the number of primitives!)
Cleaning Up in the End
Do not forget to release all resources in the end:
glDeleteProgram(shaderProgram);
glDeleteShader(fragmentShader);
glDeleteShader(vertexShader);
glDeleteBuffers(1, &myVBO);
glDeleteVertexArrays(1, &myVAO);
See drawing.cc in my WWW graphics resources.
Drawing a Colorful Triangle
We will now modify the sample code to draw a colorful triangle, by assigning the RGB values for red, green and blue to the vertices; see colored_tri.cc in my WWW graphics resources:
float vtx[] = {
    0.0f,  0.0f, 1.0f, 0.0f, 0.0f, /* coords, red */
    0.5f,  0.5f, 0.0f, 1.0f, 0.0f, /* coords, green */
    0.5f, -0.5f, 0.0f, 0.0f, 1.0f  /* coords, blue */
};
Modified vertex shader:
const char* vertexShaderSource = GLSL(
   in vec2 position;
   in vec3 colorVtxIn;
   out vec3 colorVtxOut;
   void main()
   {
      colorVtxOut = colorVtxIn;
      gl_Position = vec4(position, 0.0, 1.0);
   }
);
Drawing a Colorful Triangle
Modified fragment shader:
const char* fragmentShaderSource = GLSL(
   in vec3 colorVtxOut;
   out vec4 outColor;
   void main()
   {
      outColor = vec4(colorVtxOut, 1.0f);
   }
);
Modified bindings:
GLint posAttrib = glGetAttribLocation(shaderProgram, "position");
glEnableVertexAttribArray(posAttrib);
glVertexAttribPointer(posAttrib, 2, GL_FLOAT, GL_FALSE,
                      5*sizeof(float), 0);
GLint colAttrib = glGetAttribLocation(shaderProgram, "colorVtxIn");
glEnableVertexAttribArray(colAttrib);
glVertexAttribPointer(colAttrib, 3, GL_FLOAT, GL_FALSE,
                      5*sizeof(float), (void*)(2*sizeof(float)));
Index Buffer Object
Typically, geometric objects will reuse vertices. E.g., the two triangles of a rectangle share two vertices, and it is a waste of precious GPU memory to store them twice.
Index Buffer Objects (IBOs) are applied for re-using vertex data:
We model a rectangle formed by five vertices and four triangles:
float vtx[] = {
   -0.5f, -0.5f, 1.0f, 0.0f, 0.0f, /* lower-left corner */
    0.5f, -0.5f, 0.0f, 1.0f, 0.0f, /* lower-right corner */
    0.5f,  0.5f, 0.0f, 0.0f, 1.0f, /* upper-right corner */
   -0.5f,  0.5f, 1.0f, 1.0f, 1.0f, /* upper-left corner */
    0.0f,  0.0f, 0.0f, 0.0f, 0.0f  /* center */
};
GLuint idx[] = {
   0, 1, 4,
   1, 2, 4,
   2, 3, 4,
   3, 0, 4
};
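A quick back-of-the-envelope computation shows why indexing pays off: with 5 floats (x, y, r, g, b) per vertex, the indexed representation above needs 5 vertices plus 12 indices, whereas storing the 4 triangles without sharing would replicate 12 full vertices. (The concrete byte counts assume the usual 4-byte float and index types.)

```cpp
#include <cstddef>

constexpr std::size_t floatsPerVtx = 5;   // x, y, r, g, b
constexpr std::size_t numTriangles = 4;

// 5 shared vertices + 12 indices:
constexpr std::size_t bytesIndexed = 5 * floatsPerVtx * sizeof(float)
                                   + 3 * numTriangles * sizeof(unsigned int);
// 12 replicated vertices, no index buffer:
constexpr std::size_t bytesFlat    = 3 * numTriangles * floatsPerVtx
                                   * sizeof(float);
```

The savings grow with mesh size: in a large triangle mesh each vertex is shared by roughly six triangles.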
Index Buffer Object
Creation of an IBO:
/* generate one Index Buffer Object */
GLuint myIBO;
glGenBuffers(1, &myIBO);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, myIBO);
/* copy the element data to it */
glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(idx), idx,
             GL_STATIC_DRAW);
For drawing we use glDrawElements() rather than glDrawArrays() in the rendering loop:
glDrawElements(GL_TRIANGLES, 12, GL_UNSIGNED_INT, 0);
The arguments of glDrawElements() are as follows:
   1 Type of primitives to be rendered.
   2 Number of element indices. (Not the number of triangles!)
   3 Type of element indices.
   4 Offset.
See colored_quad.cc in my WWW graphics resources.
Note that the square appears as a non-square quad in the graphics window: distortion due to mismatch of aspect ratios!
OpenGL Naming Conventions
All OpenGL functions have the prefix gl, followed by one or more capitalized words to denote the function. E.g., glBindBuffer().
GLEW and GLFW use the same scheme for naming their functions.
Recall that OpenGL is C-based and, thus, does not have function overloading.
As a consequence, suffixes after the main part of a function name are used for providing information on the specific number and type of arguments that a function accepts. E.g.:
glUniform2f() indicates that this function takes two parameters (in addition to its standard arguments) which are of type GLfloat.
glUniform2fv() indicates that these two floats are passed as a one-dimensional array rather than as two individual parameters.
All OpenGL constants begin with GL and use underscores to separate words.E.g., GL_STATIC_DRAW.
OpenGL Data Types
Data Type  Min. Prec.  Description                                 Suffix
GLbyte     8 bits      signed integer                              b
GLubyte    8 bits      unsigned integer                            ub
GLshort    16 bits     signed integer                              s
GLushort   16 bits     unsigned integer                            us
GLsizei    32 bits     integer size                                i
GLint      32 bits     signed integer                              i
GLuint     32 bits     unsigned integer                            ui
GLenum     32 bits     enumeration type                            ui
GLfloat    32 bits     floating-point value                        f
GLclampf   32 bits     floating-point value clamped to [0.0, 1.0]  f
GLdouble   64 bits     floating-point value                        d
GLclampd   64 bits     floating-point value clamped to [0.0, 1.0]  d
An OpenGL implementation must use at least the minimum number of bits specified. It may use more bits than the minimum number required to represent a GL type.
An OpenGL data type may but need not match the "corresponding" C data type in a specific implementation.
Thus, use the OpenGL types to assure portability!
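The clamped types are the only semantically special entries in the table: a GLclampf/GLclampd value is restricted to [0.0, 1.0]. The helper below sketches that clamping behavior (clampf is our own illustrative function, not part of OpenGL):

```cpp
// Sketch of the clamping applied to GLclampf-style values: any input
// outside [0.0, 1.0] is snapped to the nearest bound.
float clampf(float v) {
    if (v < 0.0f) return 0.0f;
    if (v > 1.0f) return 1.0f;
    return v;
}
```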
OpenGL Graphical Primitives
OpenGL Primitive    Min. #(vertices)
GL_POINTS           1
GL_LINES            2
GL_LINE_STRIP       2
GL_LINE_LOOP        2
GL_TRIANGLES        3
GL_TRIANGLE_STRIP   3
GL_TRIANGLE_FAN     3

(The description column of the original table showed a small sample drawing of each primitive.)
Make sure to pay close attention to how vertices are grouped for the strips and fans.
[Figures: vertices labeled 0-5 illustrating the vertex order for a triangle strip and a triangle fan]
OpenGL Coordinate Systems
The units of the coordinates of a vertex depend on the application; those coordinates are called world coordinates.
A standard right-handed coordinate system is assumed for the world coordinates: the positive x-axis is to your right, the positive y-axis is up and the positive z-axis points out of the screen towards you.
Internally, OpenGL will convert to camera coordinates and later to window coordinates.
OpenGL's camera is placed at the origin pointing in the negative z-direction of the world coordinate system.
The camera cannot be moved. Rather, one has to apply an inverse transformation to the scene to be rendered.
OpenGL supports the definition of a viewing volume: Only (those portions of) objects that are inside this 3D region will be drawn ("clipping").
OpenGL Coordinate Transformation Pipeline
[Pipeline diagram: OCS → Modeling Transformation → WCS → Viewing Transformation → VCS → Projection Transformation → CCS → Perspective Division → NDCS → Viewport Transformation → DCS]
The coordinates of a 3D point p undergo several transformations until eventually a pixel on the screen corresponding to its 2D equivalent p′ is set.
This sequence of transformations is encoded in the OpenGL transformation pipeline: from object coordinate system (OCS) to world coordinate system (WCS), viewing coordinate system (VCS), clipping coordinate system (CCS), normalized device coordinate system (NDCS), and finally to device coordinate system (DCS).
p′ = Mproj ·Mview ·Mmodel · p
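The matrix chain can be spelled out with a minimal hand-rolled mat4 (row-major, for illustration only; real code would use GLM). With the viewing and projection matrices set to the identity and a translating model matrix, p′ is simply p shifted:

```cpp
#include <array>

using Vec4 = std::array<float, 4>;
using Mat4 = std::array<std::array<float, 4>, 4>;  // row-major 4x4 matrix

// Matrix-vector product for homogeneous points.
Vec4 mul(const Mat4& m, const Vec4& p) {
    Vec4 r{0.0f, 0.0f, 0.0f, 0.0f};
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j)
            r[i] += m[i][j] * p[j];
    return r;
}

Mat4 identity() {
    Mat4 m{};
    for (int i = 0; i < 4; ++i) m[i][i] = 1.0f;
    return m;
}

// Translation matrix: last column carries the offset.
Mat4 translate(float tx, float ty, float tz) {
    Mat4 m = identity();
    m[0][3] = tx; m[1][3] = ty; m[2][3] = tz;
    return m;
}
```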
OpenGL Coordinate Transformations: Modeling Transformation
The modeling transformation places an object somewhere in the world. Typically, this re-positioning of an object is carried out by
   1 scaling it,
   2 rotating it, and
   3 translating it.
pWCS = Mmodel · pOCS , with Mmodel := T · R · S.
It can be as simple as an identity transformation, though.
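The order Mmodel := T · R · S matters: the rightmost transformation acts first. The sketch below uses an exact 90-degree rotation about the z-axis on a 2D point to show that scale-rotate-translate and scale-translate-rotate give different results (all helper functions are illustrative, not OpenGL):

```cpp
struct P { float x, y; };

P rot90(P p)           { return { -p.y, p.x }; }        // R: 90 degrees about z
P trans(P p, float tx) { return { p.x + tx, p.y }; }    // T: shift along x
P scale(P p, float s)  { return { s * p.x, s * p.y }; } // S: uniform scale

// Mmodel := T * R * S applied to p: scale first, then rotate, then translate.
P modelTRS(P p, float s, float tx) { return trans(rot90(scale(p, s)), tx); }
```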
OpenGL Coordinate Transformations: Viewing Transformation
Suppose that position and orientation of a camera are specified in world coordinates as a frame [p, < a, b, v >], where p is the position and < a, b, v > form a coordinate system such that a points right and b points up in a plane parallel to the image plane, and v is the direction of viewing.
The viewing transformation Mview transforms the world coordinate system such that p becomes the origin, b points into the y-axis and v points into the negative z-axis. (Recall that the actual OpenGL camera is not moved!)
OpenGL Coordinate Transformations: Model View Transformation
Modeling transformation and viewing transformation combined are called model view transformation.
Older versions of OpenGL forced the user to resort to model view transformations.
OpenGL Coordinate Transformations: Projection Transformation
The projection transformation transforms the world such that the viewing volume specified by the camera is mapped into an axis-aligned box as a canonical viewing volume.
This is the earliest time that clipping can be implemented (in hardware) in a camera-independent manner.
OpenGL Coordinate Transformations: Perspective Division
The perspective division maps an axis-aligned viewing box to the axis-aligned cube [−1, 1]³.
The projection transformation and the perspective division are carried out as one functional unit by OpenGL, depending on the type of projection.
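The division itself is straightforward: each clip-coordinate component is divided by the w component, and points inside the viewing volume end up in [−1, 1]³. A sketch (our own helpers, not OpenGL calls; the GPU performs this step automatically):

```cpp
#include <array>

using Clip4 = std::array<float, 4>;  // clip coordinates (x, y, z, w)
using Ndc3  = std::array<float, 3>;  // normalized device coordinates

Ndc3 perspectiveDivide(const Clip4& clip) {
    return { clip[0] / clip[3], clip[1] / clip[3], clip[2] / clip[3] };
}

// A point survives clipping iff its NDC lie in the cube [-1, 1]^3.
bool insideNDC(const Ndc3& p) {
    for (float c : p)
        if (c < -1.0f || c > 1.0f) return false;
    return true;
}
```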
OpenGL Coordinate Transformations: Viewport Transformation
In the final viewport transformation, OpenGL uses information obtained from the graphics window or the parameters of
glViewport(x, y, width, height);
where
   (x, y) is the location of the lower-left corner of the viewport, and
   width and height are its dimensions,
to map NDC to screen coordinates (DC). All arguments of glViewport() are specified in pixels.
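The mapping performed by the viewport transformation can be written down directly: an NDC coordinate in [−1, 1] is scaled into the pixel rectangle given to glViewport(). This sketch ignores half-pixel conventions and the depth range:

```cpp
struct Pixel { float x, y; };

// Map NDC in [-1,1] x [-1,1] to window coordinates for a viewport with
// lower-left corner (x, y) and dimensions width x height.
Pixel ndcToWindow(float ndcX, float ndcY,
                  float x, float y, float width, float height) {
    return { x + (ndcX + 1.0f) * 0.5f * width,
             y + (ndcY + 1.0f) * 0.5f * height };
}
```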
Handling Transformations within OpenGL: GLM
OpenGL (internally) uses homogeneous coordinates and 4×4 matrices to carry out transformations.
There are several ways to handle the math related to transformations ...
I find it easiest to employ GLM, the OpenGL Math library.
It is a header-only library and provides vector and matrix classes for handling the math of (likely) all the transformations that you will need, including support for quaternions.
Since it is based on the GLSL specifications, it ties into GLSL neatly.
Usage:
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <glm/gtc/type_ptr.hpp>
The second header file provides functions that make the computation of transformation matrices easy.
The third file is used for converting a GLM matrix into an array of floats (for usage by OpenGL proper).
Make sure to get a recent version of GLM in order to avoid tons of warnings about deprecated functions.
Handling Transformations within OpenGL: GLM Constructors
If a single scalar parameter is given to a vector constructor, it is used to initialize all components of the vector to that value:
glm::vec4 Position = glm::vec4(glm::vec3(0.1), 1.0);
If a single scalar parameter is given to a matrix constructor, then all diagonal elements will be set to that value, and all other elements will be set to 0.0f:
glm::mat4 Model = glm::mat4(1.0);
Sample Model Transformation
We rotate our colored square by 45° (around the z-axis):
/* define a model-view transformation */
glm::mat4 model = glm::mat4(1.0);
model = glm::rotate(model, glm::radians(45.0f),
                    glm::vec3(0.0f, 0.0f, 1.0f));
The first command gives us a 4×4 unit matrix, and the second command multiplies this matrix by a rotation around the z-axis.
Degree vs. radians
Some versions of GLM take the angle in degrees instead of radians. We force radians by means of #define GLM_FORCE_RADIANS and use glm::radians(degrees).
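glm::radians(degrees) is just the usual conversion degrees · π/180, which is easy to replicate (radians below is our own helper, shown only to make the formula explicit):

```cpp
#include <cmath>

constexpr double kPi = 3.14159265358979323846;

// What glm::radians() computes: degrees * pi / 180.
constexpr double radians(double degrees) { return degrees * kPi / 180.0; }
```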
Sample Model Transformation
The next step is to instruct the shader program to apply this model transformation to every vertex:
const char* uniformName;
uniformName = "model";
/* pass the model matrix to the shader program */
GLint uniformModel = glGetUniformLocation(shaderProgram,
                                          uniformName);
if (uniformModel == -1) {
   fprintf(stderr, "Error: could not bind uniform %s\n",
           uniformName);
   exit(EXIT_FAILURE);
}
glUniformMatrix4fv(uniformModel, 1, GL_FALSE,
                   glm::value_ptr(model));
The first parameter of glUniformMatrix4fv() is the handle of the matrix, the second parameter specifies the number of matrices, the third parameter concerns transposing of the matrix prior to its use, and the glm::value_ptr() function in the fourth parameter converts the matrix into 16 floats.
Sample Model Transformation
We also have to modify the code for the vertex shader:
/* vertex shader with model-view matrix added */
const char* vertexShaderSource = GLSL(
   in vec2 position;
   in vec3 colorVtxIn;
   out vec3 colorVtxOut;
   uniform mat4 model;
   void main()
   {
      colorVtxOut = colorVtxIn;
      gl_Position = model * vec4(position.x, position.y,
                                 0.0, 1.0);
   }
);
See transformed_quad.cc in my WWW graphics resources.
Sample Model Transformation
We now make the quad spin around the origin in a continuous fashion by adding an animation matrix anim in the vertex shader and in the event-handling loop:
const char* vertexShaderSource = GLSL(
   in vec3 position;
   in vec3 colorVtxIn;
   uniform mat4 anim;
   uniform mat4 model;
   out vec3 colorVtxOut;
   void main()
   {
      colorVtxOut = colorVtxIn;
      gl_Position = anim * model * vec4(position, 1.0);
   }
);
Sample Model Transformation
Definition and binding of animation matrix:
/* define a transformation matrix for the animation */
glm::mat4 anim = glm::mat4(1.0f);
uniformName = "anim";
GLint uniformAnim = glGetUniformLocation(shaderProgram,
                                         uniformName);
glUniformMatrix4fv(uniformAnim, 1, GL_FALSE,
                   glm::value_ptr(anim));
Animation matrix in the event-handling loop, prior to the actual draw command:
/* make the quad spin around */
anim = glm::rotate(anim, glm::radians(0.1f),
                   glm::vec3(0.0f, 0.0f, 1.0f));
glUniformMatrix4fv(uniformAnim, 1, GL_FALSE,
                   glm::value_ptr(anim));
Of course, a suitable value for the angular increment depends on the speed of your GPU.
And, of course, this is a brute-force way to keep an object spinning ...
See spinning_quad.cc in my WWW graphics resources.
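A less brute-force alternative is to make the rotation frame-rate independent: scale an angular speed by the time elapsed since the previous frame. In GLFW code the elapsed time would come from glfwGetTime(); the helper below isolates just the arithmetic:

```cpp
// Advance an angle (in degrees) by degPerSecond * dtSeconds and keep it
// within [0, 360) so that it does not grow without bound.
float advanceAngle(float angleDeg, float degPerSecond, float dtSeconds) {
    angleDeg += degPerSecond * dtSeconds;
    while (angleDeg >= 360.0f) angleDeg -= 360.0f;
    return angleDeg;
}
```

With this scheme the quad spins at the same visible speed on a slow and on a fast GPU.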
Camera and View Transformation
We now add matrices for the view and projection transformations.
It is easiest to use GLM’s glm::lookAt() function to position the camera:
/* define a view transformation */
glm::mat4 view = glm::lookAt(glm::vec3(0.0f, 0.0f, 2.0f),
                             glm::vec3(0.0f, 0.0f, 0.0f),
                             glm::vec3(0.0f, 1.0f, 0.0f));
where
the first parameter specifies the position of the camera,
the second parameter specifies a point towards which the camera is aiming, and
the third parameter specifies a vector that is pointing up.
In our case, this is a trivial view onto the xy -plane.
Caveat
Re-positioning the camera causes the appropriate reverse transformation to be applied to the model. (The OpenGL-internal camera always stays at the origin!) This transformation can cause parts or all of the objects to become invisible if the viewport is not changed appropriately.
Camera and View Transformation
Of course, we do have to pass the view and projection matrices to the shader:
/* pass the view matrix to the shader program */
GLint uniformView = glGetUniformLocation(shaderProgram, "view");
glUniformMatrix4fv(uniformView, 1, GL_FALSE, glm::value_ptr(view));

/* define a currently trivial projection transformation */
glm::mat4 proj = glm::mat4(1.0f);

/* pass the projection matrix to the shader program */
GLint uniformProj = glGetUniformLocation(shaderProgram, "proj");
glUniformMatrix4fv(uniformProj, 1, GL_FALSE, glm::value_ptr(proj));
Moving to 3D
We now use the following sample setting, see mvp_quad.cc in my WWW graphics resources:
/* quad consisting of four triangles in the plane z = 1 */
float vtx[] = {
    -0.5f, -0.5f, 1.0f,   1.0f, 0.0f, 0.0f,
     0.5f, -0.5f, 1.0f,   0.0f, 1.0f, 0.0f,
     0.5f,  0.5f, 1.0f,   0.0f, 0.0f, 1.0f,
    -0.5f,  0.5f, 1.0f,   1.0f, 1.0f, 1.0f,
     0.0f,  0.0f, 1.0f,   0.0f, 0.0f, 0.0f
};

/* vertex shader */
const char* vertexShaderSource = GLSL(
    in vec3 position; /* quad is in 3D! */
    uniform mat4 model; uniform mat4 view; uniform mat4 proj;
    in vec3 colorVtxIn; out vec3 colorVtxOut;
    void main() {
        colorVtxOut = colorVtxIn;
        gl_Position = proj * view * model * vec4(position, 1.0);
    }
);
Moving to 3D
Of course, with a modified definition of vtx[], we also have to adapt glVertexAttribPointer() accordingly:

glVertexAttribPointer(posAttrib, 3, GL_FLOAT, GL_FALSE,
                      6 * sizeof(float), 0);
glVertexAttribPointer(colAttrib, 3, GL_FLOAT, GL_FALSE,
                      6 * sizeof(float), (void*)(3 * sizeof(float)));
We put the camera at (0, 0, 2):
glm::mat4 view = glm::lookAt(glm::vec3(0.0f, 0.0f, 2.0f),
                             glm::vec3(0.0f, 0.0f, 0.0f),
                             glm::vec3(0.0f, 1.0f, 0.0f));
We will continue with discussing different ways to project our 3D scene to 2D, again using functions provided by GLM.
Multiplication order
Matrix multiplication is not commutative. Watch the order of your matrices!
Orthographic Projection
The viewing volume of an orthographic projection is set up by the following GLM command:
glm::mat4 proj = glm::ortho(-2.0f, 2.0f,  /* left / right */
                            -1.5f, 1.5f,  /* bottom / top */
                             0.5f, 1.5f); /* near / far */
Camera’s View!
Note that near and far are specified as seen from the camera!
(Figure: orthographic viewing volume, a box with corners (left, bottom, −near), (right, top, −near), and (right, top, −far).)
For a camera positioned at the origin, a point with world coordinates (x, y, z) is rendered if and only if
left ≤ x ≤ right,
bottom ≤ y ≤ top,
−far ≤ z ≤ −near.
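The rendering condition can be captured in a tiny predicate (an illustrative sketch, not OpenGL code; the sample bounds below match the glm::ortho() call above):

```cpp
#include <cassert>

/* world-space visibility test for an orthographic camera at the origin;
   near_ and far_ are the (positive) distances along the -z axis */
bool visibleOrtho(double x, double y, double z,
                  double left, double right, double bottom, double top,
                  double near_, double far_) {
    return left <= x && x <= right &&
           bottom <= y && y <= top &&
           -far_ <= z && z <= -near_;
}
```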
Perspective Projection: Frustum
The viewing volume of a perspective projection can be set up by the following GLM command:
glm::mat4 proj = glm::frustum(-2.0f, 2.0f,  /* left / right */
                              -1.5f, 1.5f,  /* bottom / top */
                               0.9f, 1.1f); /* near / far */
Camera’s View!
Note: near and far are positive and specified as seen from the camera!
(Figure: perspective viewing frustum with apex at the camera origin O, bounded by left/right, bottom/top, and near/far.)
Perspective Projection: Field-of-View
The viewing volume of a perspective projection can also be specified in a more intuitive manner (with an angle of 1.3 radians being roughly 75°):

glm::mat4 proj = glm::perspective(1.3f,      /* angle  */
                                  4.0f/3.0f, /* aspect */
                                  0.9f,      /* near   */
                                  1.1f);     /* far    */
with height = 2 · near · tan(angle/2), and aspect = width/height.
Again, near and far both are positive terms that indicate the distance from thecamera!
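The relation between angle, aspect, and the rectangle cut out of the near plane is easy to check numerically (an illustrative sketch; nearHeight/nearWidth are hypothetical helper names):

```cpp
#include <cassert>
#include <cmath>

/* height of the viewing rectangle on the near plane:
   height = 2 * near * tan(angle / 2) */
double nearHeight(double angle, double near_) {
    return 2.0 * near_ * std::tan(angle / 2.0);
}

/* width follows from aspect = width / height */
double nearWidth(double angle, double aspect, double near_) {
    return aspect * nearHeight(angle, near_);
}
```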
Perspective Projection: Field-of-View
The angle specifies the field-of-view angle in the y-direction.
The availability of the aspect ratio in glm::perspective() makes it easier to respond adequately to a reshaping of the graphics window.
Decreasing angle without moving the objects or changing the camera position corresponds to switching from a wide-angle lens to a telephoto lens, i.e., to zooming in.
Increasing angle corresponds to zooming out.
Re-positioning the camera causes the appropriate reverse transformation to be applied to the objects. This transformation can cause parts or all of the objects to become invisible if near and far are not also changed appropriately.
Recall that distortion may occur if the aspect ratios do not match!
Event Handling: GLFW Callback Functions
Like virtually all other graphics APIs, OpenGL/GLFW handles events (such as the pressing of a mouse button or a keystroke) by means of callback functions.
Roughly, an OpenGL program runs in a loop, polling the hardware for new events and calling the callback functions for those events for which callback functions were declared. (All other events are ignored!)
Each callback function has to be registered. The command
glfwSetXXXCallback(myWindow, YYY);
tells OpenGL to use the function YYY() as callback function for events related to XXX.
GLFW provides two ways to check for events:
glfwPollEvents() continually checks for events and processes events upon receipt.
glfwWaitEvents() puts the thread that runs the OpenGL program to sleep until at least one event has been received.
A decent OpenGL program will always offer a way to terminate it gently, e.g., by reacting appropriately if "Q(uit)" or "ESC" was pressed by a user.
See GLFW’s documentation of input handling for more details.
Event Handling: Keyboard Input
GLFW recognizes two types of events related to keys — key events and character events (related to Unicode code points) — but we will focus only on key events.
The following callback function instructs OpenGL to close the window if the user has pressed "Q", "q" or "ESC":
static void keyCallback(GLFWwindow* myWindow, int key,
                        int scanCode, int action, int mod)
{
    if (((key == GLFW_KEY_ESCAPE) || (key == GLFW_KEY_Q)) &&
        (action == GLFW_PRESS)) {
        /* close window upon hitting the escape key or Q/q */
        glfwSetWindowShouldClose(myWindow, GL_TRUE);
    }
}
The scancode is system-specific, and mod is a bit field describing which modifier keys were held down, e.g., GLFW_MOD_SHIFT, GLFW_MOD_CONTROL.
We register this callback function by using the following command:
glfwSetKeyCallback(myWindow, keyCallback);
Key and button actions are GLFW_RELEASE, GLFW_PRESS and GLFW_REPEAT.(The last action means that a key was held pressed until it repeated.)
Event Handling: Mouse Input
Whenever the mouse cursor is moved, a callback is triggered and the current position is passed to the callback function (if registered):

static void cursorPosCallback(GLFWwindow* myWindow,
                              double x_pos, double y_pos)
{
    printf("Mouse is at (%6.1f,%6.1f)\n", x_pos, y_pos);
}

glfwSetCursorPosCallback(myWindow, cursorPosCallback);
The coordinates can be converted to integers with the floor function.
One can also query the cursor coordinates directly:
double x_pos, y_pos;
glfwGetCursorPos(myWindow, &x_pos, &y_pos);
Mouse coordinates . . .
. . . have their origin at the upper-left corner of the window!
Event Handling: Mouse Input
An enter/leave callback provides notification when the mouse cursor enters or leaves a window:

static void cursorEnterCallback(GLFWwindow* myWindow, int entered)
{
    if (entered) printf("Cursor entered window!\n");
    else         printf("Cursor left window!\n");
}

glfwSetCursorEnterCallback(myWindow, cursorEnterCallback);
A scroll callback notifies about scrolling:
static void scrollCallback(GLFWwindow* myWindow,
                           double x_off, double y_off)
{
    printf("Scrolled by (%6.1f,%6.1f)\n", x_off, y_off);
}

glfwSetScrollCallback(myWindow, scrollCallback);
Event Handling: Mouse Input
A mouse button callback provides information on button presses and releases:
static void mouseButtonCallback(GLFWwindow* myWindow,
                                int button, int action, int mods)
{
    if ((button == GLFW_MOUSE_BUTTON_LEFT) &&
        (action == GLFW_PRESS)) {
        double x_pos, y_pos;
        glfwGetCursorPos(myWindow, &x_pos, &y_pos);
        printf("Left mouse button pressed at (%6.1f,%6.1f)\n",
               x_pos, y_pos);
    }
}

glfwSetMouseButtonCallback(myWindow, mouseButtonCallback);
See callbacks_quad.cc in my WWW graphics resources.
Loading a Texture Image
OpenGL does not directly support the loading of textures. Rather, one has to resort to third-party libraries.
Up to version 2.0, GLFW allowed loading some types of texture files. However, this feature has been removed from GLFW 3.0.
We resort to SOIL, the Simple OpenGL Image Library, for loading images:
/* load texture image */
GLint texWidth, texHeight;
GLint channels;
unsigned char* texImage = SOIL_load_image("katze.png", &texWidth, &texHeight,
                                          &channels, SOIL_LOAD_RGB);
if (texImage == NULL) {
    fprintf(stderr, "Image file could not be loaded\n");
    exit(EXIT_FAILURE);
}
SOIL also offers SOIL_load_OGL_texture, but this function dates back to 2008 and uses features that are not supported by modern OpenGL.
Generating a Texture
Once the image has been loaded, we can generate the texture:
GLuint textureID;
glActiveTexture(GL_TEXTURE0); /* texture unit 0 */
glGenTextures(1, &textureID);
glBindTexture(GL_TEXTURE_2D, textureID);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, texWidth, texHeight,
             0, GL_RGB, GL_UNSIGNED_BYTE, texImage);
SOIL_free_image_data(texImage);
The function glActiveTexture() specifies which texture unit a texture object is bound to when glBindTexture() is called. (Unit 0 is the default.)
Parameters of glTexImage2D():
1. Texture target.
2. Level-of-detail, with 0 being the base image. Can be used for mipmaps.
3. Internal pixel format to be used by the GPU.
4. Width of the texture image.
5. Height of the texture image.
6. According to the specification, it should always be 0 . . .
7. Format of the pixels in the image array.
8. Data type of the pixels in the image array.
9. Image array.
Using a Texture
Texture coordinates — denoted by s and t for a 2D texture — are, by default,normalized and range in the interval [0.0, 1.0].
By convention, (0.0, 0.0) corresponds to the lower-left corner of the texture space, and (1.0, 1.0) corresponds to the upper-right corner.
The simplest approach to supplying texture coordinates is to specify them on a per-vertex basis, as we did in the case of color for our colored quad (colored_quad.cc):
float vtx[] = {
    /* vertex coords      texture     */
    -0.5f, -0.5f, 1.0f,   0.0f, 0.0f, /* lower-left  */
     0.5f, -0.5f, 1.0f,   1.0f, 0.0f, /* lower-right */
     0.5f,  0.5f, 1.0f,   1.0f, 1.0f, /* upper-right */
    -0.5f,  0.5f, 1.0f,   0.0f, 1.0f, /* upper-left  */
     0.0f,  0.0f, 1.0f,   0.5f, 0.5f  /* center      */
};
Note that sharing vertices within different faces becomes problematic once different texture coordinates are supposed to be used.
Using a Texture
Wrapping is needed for texture coordinates that are outside of the unit square.
GL_CLAMP_TO_BORDER: Specified color is used outside of the border.
GL_CLAMP_TO_EDGE: Texture values at the border are extended.
GL_REPEAT: Texture image is repeated.
GL_MIRRORED_REPEAT: Texture image is repeated in mirrored fashion.
GL_MIRROR_CLAMP_TO_EDGE: One repetition, then clamp to edge.
Texture options/parameters are set by means of glTexParameter*():

glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_BORDER);
GLfloat bdColor[] = { 0.0f, 1.0f, 0.0f, 1.0f };
glTexParameterfv(GL_TEXTURE_2D, GL_TEXTURE_BORDER_COLOR, bdColor);
Using a Texture
It is unlikely that the resolution of the texture image will match the resolution required during rendering. Filtering:
GL_NEAREST: Take color information from the texel closest to the query point.
GL_LINEAR: Interpolate the colors of the four neighboring texels.
Filters can be specified both for magnification and minification, i.e., for the cases that a pixel maps to an area smaller (greater, resp.) than one texel:

glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
Alternatively, OpenGL can be instructed to use mipmaps. E.g.,
glGenerateMipmap(GL_TEXTURE_2D);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER,
                GL_LINEAR_MIPMAP_LINEAR);
Adapting the Shaders for a Texture
We need to adapt our shaders to deal with texture coordinates. Vertex shader:
const char* vertexShaderSource = GLSL(
    in vec3 position;
    in vec2 textureCoordIn;
    uniform mat4 mvp;
    out vec2 textureCoordOut;
    void main() {
        textureCoordOut = vec2(textureCoordIn.x,
                               1.0 - textureCoordIn.y);
        gl_Position = mvp * vec4(position, 1.0);
    }
);
The inversion of the y-coordinates of the texture image is a technical twist (or hack) to deal with the problem that images tend to have their coordinate origin in the upper-left corner . . .
Adapting the Shaders for a Texture
Modified fragment shader:
const char* fragmentShaderSource = GLSL(
    in vec2 textureCoordOut;
    out vec4 outColor;
    uniform sampler2D textureData;
    void main() {
        outColor = texture(textureData, textureCoordOut);
    }
);
Texture uniform to be passed to shader:
uniformName = "textureData";
GLint uniformTex = glGetUniformLocation(shaderProgram, uniformName);
if (uniformTex == -1) {
    fprintf(stderr, "Error: could not bind uniform %s\n", uniformName);
    exit(EXIT_FAILURE);
}
glUniform1i(uniformTex, 0);
Loading 3D Models
While loading 3D models is not exactly an OpenGL task, you are likely to run into it as soon as you try to build more complex scenes by resorting to models built by others.
The Open Asset Import Library ("Assimp"), http://www.assimp.org/, is a portable Open Source library to import various 3D model formats in a uniform manner.
More recent versions of Assimp can also export 3D models, thus turning Assimp into a general-purpose 3D model converter.
3 Representation and Modeling
Primitive Objects
Regions in 2D
Curved Surfaces in 3D
Solids in 3D
Miscellaneous Modeling Schemes
Line Segments
A line segment can be specified by its endpoints.
A polygonal curve (or polygonal chain) is a sequence of finitely many vertices connected by line segments such that each segment (except for the first) starts at the end of the previous segment.
Polygons
A polygon is a closed polygonal curve where every vertex belongs to exactly two segments. (We will always assume that all vertices of a polygon lie in one plane.)
(Figures: no polygon; simple polygon; not a simple polygon; star-shaped polygon; convex polygon; x-monotone polygon.)
Circular Arcs
Circular arcs can be represented in several ways. (And none of them is universally good!)
Center C, start point S, end point E, orientation (CW or CCW): redundancy problem!
Center C, radius r, start angle α, end angle β, orientation: start and end point unknown, potential numerical problems!
Circular Arcs
Start point S, end point E, and a signed value a (with a > 0 for CCW and a < 0 for CW): suggested by Sabin; no redundancy and numerically reliable, but center and radius unknown (and difficult to compute)! A line segment can be treated as a degenerate arc, but a full circle cannot be represented.
Known as bulge factor in the DXF file format.
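Assuming the DXF convention a = tan(θ/4), with θ being the included angle of the arc, the radius follows from the sagitta relation of a circle — an illustrative sketch (the exact scaling of a may differ in other formats):

```cpp
#include <cassert>
#include <cmath>

/* Radius of an arc given its chord length and a DXF-style bulge
   a = tan(theta/4): the sagitta is h = |a| * chord / 2, and the
   radius follows from the sagitta relation r = ((c/2)^2 + h^2) / (2h). */
double arcRadius(double chord, double bulge) {
    double h = std::fabs(bulge) * chord / 2.0;  /* sagitta */
    double half = chord / 2.0;
    return (half * half + h * h) / (2.0 * h);
}
```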
Wide-spread practical problem if start point and end point (nearly) coincide: fullcircle or degenerate circle?
General Curves
In addition to lines, circular arcs, and quadrics, more general types of curves areused in CAD systems.
Well-known representatives of so-called free-form curves include
Bézier curves,
B-splines,
NURBS.
(Figure: uniform clamped cubic B-spline.)
We will not discuss free-form curves in this lecture — see my lecture on geometric modelling.
Half-Space
The closed half-space defined by a point p in 3D and a unit vector n is given by

H(p, n) := {u ∈ R³ : 〈n, u〉 − 〈n, p〉 ≤ 0}.

If 〈n, u〉 − 〈n, p〉 = 0, then u lies on the plane ε(p, n) := {u ∈ R³ : 〈n, u〉 − 〈n, p〉 = 0};
if 〈n, u〉 − 〈n, p〉 > 0, then u lies in the half-space into which n points;
if 〈n, u〉 − 〈n, p〉 < 0, then u lies in the half-space into which n does not point.
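The classification boils down to one dot-product sign test — a minimal sketch in plain C++ (all names are illustrative):

```cpp
#include <cassert>

struct Vec3 { double x, y, z; };

double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

/* sign of <n,u> - <n,p>: 0 on the plane, +1 on the side n points to,
   -1 in the interior of the closed half-space H(p, n) */
int classify(Vec3 p, Vec3 n, Vec3 u) {
    double s = dot(n, u) - dot(n, p);
    return (s > 0.0) - (s < 0.0);
}
```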
Cylinder
An upright circular cylinder is specified by:
two points (xc, yc, zmin) and (xc, yc, zmax) on the axis of rotation (parallel to the z-axis),
radius r.
Implicit representation:
(x − xc)² + (y − yc)² = r², with zmin ≤ z ≤ zmax.
Parametrization:
(xc + r cosϕ, yc + r sinϕ, z), with ϕ ∈ [0, 2π[ and zmin ≤ z ≤ zmax.
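A quick numerical cross-check: every parametrized point must satisfy the implicit equation (an illustrative sketch, helper names are hypothetical):

```cpp
#include <cassert>
#include <cmath>

struct P3 { double x, y, z; };

/* point on the cylinder for parameters (phi, z) */
P3 cylinderPoint(double xc, double yc, double r, double phi, double z) {
    return { xc + r * std::cos(phi), yc + r * std::sin(phi), z };
}

/* residual of the implicit equation (x-xc)^2 + (y-yc)^2 - r^2;
   it vanishes exactly for points on the cylinder */
double cylinderResidual(double xc, double yc, double r, P3 p) {
    double dx = p.x - xc, dy = p.y - yc;
    return dx * dx + dy * dy - r * r;
}
```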
Cone
An upright frustum of a cone is specified by:
two points (xc, yc, zmin) and (xc, yc, zmax) on the axis of rotation (parallel to the z-axis),
radii r1 and r2.
Parametrization:
(xc + (r1 + λ(r2 − r1)) cosϕ, yc + (r1 + λ(r2 − r1)) sinϕ, zmin + λ(zmax − zmin)),
where ϕ ∈ [0, 2π[ and 0 ≤ λ ≤ 1.
Sphere
A sphere is specified by:
center (xc, yc, zc),
radius r.
Implicit representation:
(x − xc)² + (y − yc)² + (z − zc)² = r².
Parametrization:
(xc + r cos δ cosϕ, yc + r cos δ sinϕ, zc + r sin δ),
with ϕ ∈ [0, 2π[ and δ ∈ [−π/2, π/2].
Torus
A torus is specified by:
a center point (xc, yc, zc) on the axis of rotation (parallel to the z-axis),
radii R and r.
Implicit representation:
(√((x − xc)² + (y − yc)²) − R)² + (z − zc)² = r².
Surface blending may generate portions of the surface of a torus.
Parametrization of a Torus
Parametrization:
((R + r cos δ) cosϕ, (R + r cos δ) sinϕ, r sin δ),
with ϕ ∈ [0, 2π[, δ ∈ [0, 2π[.
Parametrization of a Torus
m = (R cosϕ, R sinϕ, 0)

p = m + r cos δ · (cosϕ, sinϕ, 0)
  = (mx + r cos δ cosϕ, my + r cos δ sinϕ, 0)

q = (px, py, r sin δ)
  = (R cosϕ + r cos δ cosϕ, R sinϕ + r cos δ sinϕ, r sin δ)
  = ((R + r cos δ) cosϕ, (R + r cos δ) sinϕ, r sin δ)
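The derivation can be cross-checked numerically: the parametrized point q must satisfy the implicit torus equation (here with the center at the origin, matching the parametrization; names are illustrative):

```cpp
#include <cassert>
#include <cmath>

struct P3 { double x, y, z; };

/* torus point for angles (phi, delta), center at the origin */
P3 torusPoint(double R, double r, double phi, double delta) {
    return { (R + r * std::cos(delta)) * std::cos(phi),
             (R + r * std::cos(delta)) * std::sin(phi),
             r * std::sin(delta) };
}

/* residual of (sqrt(x^2 + y^2) - R)^2 + z^2 - r^2 */
double torusResidual(double R, double r, P3 p) {
    double d = std::sqrt(p.x * p.x + p.y * p.y) - R;
    return d * d + p.z * p.z - r * r;
}
```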
Quadrics: Ellipsoid
General implicit representation of a quadric:
a11x² + a22y² + a33z² + a12xy + a13xz + a23yz + a1x + a2y + a3z + b = 0.
Implicit representation after coordinate transformation:
a11x² + a22y² + a33z² + b = 0.
Implicit representation of an ellipsoid:
x²/a² + y²/b² + z²/c² = 1.
If a = b or a = c or b = c: ellipsoid of revolution.
If a = b = c: sphere.
Quadrics: Hyperboloid
One-sheet hyperboloid:
Implicit representation:

x²/a² + y²/b² − z²/c² − 1 = 0.
a = b: hyperboloid of revolution.
Two-sheet hyperboloid:
Implicit representation:

x²/a² + y²/b² − z²/c² + 1 = 0.

If a = b: hyperboloid of revolution.
Quadrics: Elliptical Paraboloid
Implicit representation:
x²/a² + y²/b² − 2z = 0.
a = b: paraboloid of revolution.
Superellipsoids
Implicit representation (dependent on the shape parameters e1, e2):
(|x/a|^(2/e1) + |y/b|^(2/e1))^(e1/e2) + |z/c|^(2/e2) = 1.
Parametrization (based on the convention s^t := sign(s) · |s|^t for s, t ∈ R):

(a cos^(e2)δ cos^(e1)ϕ, b cos^(e2)δ sin^(e1)ϕ, c sin^(e2)δ),

with δ ∈ [−π/2, π/2] and ϕ ∈ [0, 2π[.
If e1 ≤ 2 and e2 ≤ 2: convex solid.
If e1 = e2 = 2: polyhedron.
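Implicit representation and parametrization can be cross-checked against each other: plugging a parametrized point into the left-hand side of the equation must yield 1. An illustrative sketch using the signed-power convention s^t = sign(s) · |s|^t (all names are hypothetical):

```cpp
#include <cassert>
#include <cmath>

/* signed power: s^t := sign(s) * |s|^t */
double spow(double s, double t) {
    return (s < 0.0 ? -1.0 : 1.0) * std::pow(std::fabs(s), t);
}

struct P3 { double x, y, z; };

/* point on the superellipsoid for parameters (phi, delta) */
P3 superPoint(double a, double b, double c, double e1, double e2,
              double phi, double delta) {
    return { a * spow(std::cos(delta), e2) * spow(std::cos(phi), e1),
             b * spow(std::cos(delta), e2) * spow(std::sin(phi), e1),
             c * spow(std::sin(delta), e2) };
}

/* left-hand side of the implicit equation; equals 1 on the surface */
double implicitLHS(double a, double b, double c, double e1, double e2, P3 p) {
    double xy = std::pow(std::fabs(p.x / a), 2.0 / e1)
              + std::pow(std::fabs(p.y / b), 2.0 / e1);
    return std::pow(xy, e1 / e2) + std::pow(std::fabs(p.z / c), 2.0 / e2);
}
```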
Sample Superellipsoids
(Sample superellipsoids:
a = 1.0, b = 1.0, c = 1.0, e1 = 0.5, e2 = 0.5;
a = 1.0, b = 1.0, c = 1.5, e1 = 2.0, e2 = 1.0;
a = 1.0, b = 1.0, c = 1.5, e1 = 3.0, e2 = 3.0;
a = 1.0, b = 1.0, c = 1.0, e1 = 3.0, e2 = 2.0.)
Exact Representation of the Boundary
When representing a planar region by its boundary curves, the key issue is to be able to extract its interior unambiguously.
Warning
Different interpretations of “interior” are in practical use for the regions depicted above!
Exact Representation of the Boundary
The boundary curves of a planar region R should meet the following conditions:
all curves are simple and closed,
one of them is the "outer" boundary,
all other boundary curves ("islands" or "holes") lie strictly in the interior region of the outer boundary,
the island curves (and their interior regions) are pairwise disjoint,
all curves are oriented such that R lies on the same side of every curve.
In mathematical terms, a collection of curves that meets these conditions bounds a multiply-connected region.
Bridge Edges
We may find it convenient to transform a multiply-connected region into a simply-connected region by means of zero-width bridges.
Note that the resulting curve is not a simple polygon in the strict meaning of our original definition!
Cell Decomposition in 2D
A cell decomposition provides an approximate representation of a region R.
A user-defined subset of the plane ("workspace") is overlaid with a regular grid.
Every cell is classified as full, empty, or partially full, depending on whether it lies completely in the interior or exterior of R, or whether it intersects the border of R.
The region is modeled as the union of those cells that are classified as full.
Whether or not the cells that are partially full are added to the approximate representation depends on the application.
Cell Decomposition in 2D
There is an obvious trade-off between modeling accuracy and memory consumption.
The high memory consumption tends to be a serious problem unless a very coarse approximation suffices: for an n × n grid, the number of cells increases by a factor of four if the resolution of the grid is doubled!
Quadtree
Goal: Model the boundary of a region R with sufficient detail, but use larger cells within the interior of R.
We subdivide the rectangular workspace recursively into four sub-rectangles by bisecting it in both x and y.
Again, a cell is classified as full, empty, or partially full.
The recursive subdivision of cells that are partially full continues until a minimum resolution or maximum depth of the quadtree is reached.
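The recursive classification can be sketched as follows, with a disk as toy region and a maximum depth as stopping criterion (all names are illustrative):

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

/* classification of an axis-aligned cell against the disk x^2 + y^2 <= r^2 */
enum Cell { EMPTY, FULL, PARTIAL };

Cell classifyCell(double x0, double y0, double x1, double y1, double r) {
    /* farthest corner inside -> FULL; nearest point outside -> EMPTY */
    double fx = std::max(std::fabs(x0), std::fabs(x1));
    double fy = std::max(std::fabs(y0), std::fabs(y1));
    if (fx * fx + fy * fy <= r * r) return FULL;
    double nx = (x0 > 0.0) ? x0 : (x1 < 0.0 ? x1 : 0.0);
    double ny = (y0 > 0.0) ? y0 : (y1 < 0.0 ? y1 : 0.0);
    if (nx * nx + ny * ny > r * r) return EMPTY;
    return PARTIAL;
}

/* count the FULL cells found while subdividing down to a maximum depth */
int countFull(double x0, double y0, double x1, double y1,
              double r, int depth) {
    Cell c = classifyCell(x0, y0, x1, y1, r);
    if (c == FULL) return 1;
    if (c == EMPTY || depth == 0) return 0;
    double xm = (x0 + x1) / 2.0, ym = (y0 + y1) / 2.0;
    return countFull(x0, y0, xm, ym, r, depth - 1)
         + countFull(xm, y0, x1, ym, r, depth - 1)
         + countFull(x0, ym, xm, y1, r, depth - 1)
         + countFull(xm, ym, x1, y1, r, depth - 1);
}
```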
Quadtree: Construction
(Figure: input region and its level-1 decomposition, with cells classified as full, empty, or partially full.)
Quadtree: Construction
(Figure: the level-1+2 and level-1+2+3 decompositions, with cells numbered 1–25 and classified as full, empty, or partially full.)
Quadtree: Tree Structure
It is natural to store a quadtree as a tree.
(Figure: the quadtree of the sample region stored as a tree, with interior tree nodes and empty/full leaves for the numbered cells.)
Quadtree: Boolean Operations
(Figure: quadtrees of two regions S and T, and the quadtrees of Union(S,T) and Int(S,T).)
Quadtree for Curved Data
It is natural to extend quadtree representations to regions with curved boundaries.
Warning
A recursive quadtree decomposition will, in general, never terminate unless a minimum cell size is specified.
Quadtree: Pros and Cons
Pros: Standard advantages of hierarchical modeling, such as a fast test for disjointness.
Boolean operations are easy to compute (provided that the quadtrees are aligned).
Point-in-region test is straightforward.
Cons: The representation is coordinate-dependent and not invariant under transformations!
The representation is only approximate, and memory may become an issue.
A suitable approximation accuracy may be hard to predict.
Graphical "zooming in" is only supported until the representation accuracy is reached.
Neighbor finding is tricky.
Ruled Surface
Consider two curves C1, C2 : [α, β] → R³, for α, β ∈ R with α ≤ β.

The ruled surface (Dt.: Regelfläche) S : [α, β] × [0, 1] → R³ defined by C1, C2 is given by the linear interpolation of C1 and C2:

S(s, t) := (1 − t) C1(s) + t C2(s), with s ∈ [α, β] and t ∈ [0, 1].
Note that S may be curved even if C1, C2 are line segments!
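Evaluating S(s, t) amounts to two nested linear interpolations — an illustrative sketch for the special case that C1 and C2 are line segments (names are hypothetical):

```cpp
#include <cassert>
#include <cmath>

struct P3 { double x, y, z; };

/* linear interpolation between two points */
P3 lerp(P3 a, P3 b, double t) {
    return { (1.0 - t) * a.x + t * b.x,
             (1.0 - t) * a.y + t * b.y,
             (1.0 - t) * a.z + t * b.z };
}

/* ruled surface over two line segments C1(s) = lerp(a0, a1, s) and
   C2(s) = lerp(b0, b1, s):  S(s, t) = (1 - t) C1(s) + t C2(s) */
P3 ruled(P3 a0, P3 a1, P3 b0, P3 b1, double s, double t) {
    return lerp(lerp(a0, a1, s), lerp(b0, b1, s), t);
}
```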
Ruled Surface
Example:

C1(s) := (0, 0, 1) + s (1, 0, 0),  C2(s) := (0, 1, 1) + s (1, 0, 0),  with s ∈ [0, 1].

We get the ruled surface S : [0, 1] × [0, 1] → R³

S(s, t) = (1 − t) ((0, 0, 1) + s (1, 0, 0)) + t ((0, 1, 1) + s (1, 0, 0))
        = (0, 0, 1) + s (1, 0, 0) + t (0, 1, 0) = (s, t, 1),

i.e., the "top" of the unit cube.
Surface of Revolution
Consider the 3D curve C(s) := (C1(s), 0, C2(s)), parameterized by C1, C2 : [α, β] → R.

Obviously, C is constrained to the xz-plane of R³.

A rotation of C about the z-axis yields the surface of revolution (Dt.: Rotationsfläche)

S(s, ϕ) := (C1(s) · cosϕ, C1(s) · sinϕ, C2(s)), where s ∈ [α, β], ϕ ∈ [0, 2π[.
Properties:
Every point of C which does not lie on the z-axis creates a circle in a plane parallel to the xy-plane;
a line segment which is parallel to the xy-plane creates a disk or circular annulus;
a line segment which is parallel to the z-axis creates a cylinder;
any other line segment creates a cone;
a circular arc that is part of C creates a portion of a torus.
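The construction is easy to evaluate numerically — an illustrative sketch with the profile functions C1, C2 passed as callables (names are hypothetical):

```cpp
#include <cassert>
#include <cmath>
#include <functional>

struct P3 { double x, y, z; };

/* surface of revolution of the profile curve (C1(s), 0, C2(s)),
   rotated about the z-axis */
P3 revolve(std::function<double(double)> C1,
           std::function<double(double)> C2,
           double s, double phi) {
    return { C1(s) * std::cos(phi), C1(s) * std::sin(phi), C2(s) };
}
```

Choosing C1 ≡ 1 and C2(s) = s yields a cylinder; C1(s) = cos s, C2(s) = sin s yields the unit sphere.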
Surface of Revolution
For s ∈ [−π/2, π/2], let C1(s) := cos s and C2(s) := sin s.

This yields

S(s, ϕ) = (cos s · cosϕ, cos s · sinϕ, sin s), with ϕ ∈ [0, 2π[ and s ∈ [−π/2, π/2],

as surface of revolution, i.e., the surface of the unit sphere.
General Surfaces
In addition to quadratic surfaces, ruled surfaces, or surfaces of revolution, more general types of surfaces are used in CAD systems.
Well-known representatives of so-called free-form surfaces include
Bézier surfaces,
B-splines and T-splines,
NURBS.
We will not discuss free-form surfaces in this lecture — see my lecture on geometric modeling.
Wire-frame Model
Wire-frame models “represent” solids by specifying the set of edges of the solid.
Outdated nowadays — mentioned for historical reasons only!
Spatial Decomposition
Divide the space into cells, aka voxels.
Often, the collection of cells forms a regular grid.
Represent all cells lying in the object.
Popular representation in volume rendering.
High storage requirement.
Similar pros and cons as in 2D.
Octree
Hierarchical representation.
Requires much less space than a standard spatial decomposition.
Extension of 2D quadtree.
Each cube is divided into eight octants.
[Figure: recursive subdivision of a cube into eight numbered octants, with x-, y-, z-axes and the corresponding octree.]
Useful for many operations, e.g., collision detection, ray tracing.
Similar pros and cons as quadtrees.
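A minimal sketch of an octree over a point set (my own illustration; the class name, the leaf capacity, and the child ordering are assumptions, not from the lecture):

```python
class Octree:
    """Minimal octree over points in an axis-aligned cube (a sketch)."""
    MAX_POINTS = 4          # leaf capacity before subdividing (assumed)

    def __init__(self, center, half_size):
        self.center, self.half = center, half_size
        self.points, self.children = [], None

    def insert(self, p):
        if self.children is None:
            self.points.append(p)
            if len(self.points) > self.MAX_POINTS:
                self._subdivide()
            return
        self._child_for(p).insert(p)

    def _subdivide(self):
        cx, cy, cz = self.center
        h = self.half / 2.0
        # Eight octants: one child cube per sign combination.
        self.children = [Octree((cx + dx * h, cy + dy * h, cz + dz * h), h)
                         for dx in (-1, 1) for dy in (-1, 1) for dz in (-1, 1)]
        pts, self.points = self.points, []
        for q in pts:
            self._child_for(q).insert(q)

    def _child_for(self, p):
        cx, cy, cz = self.center
        idx = ((4 if p[0] >= cx else 0) + (2 if p[1] >= cy else 0)
               + (1 if p[2] >= cz else 0))
        return self.children[idx]
```

Queries such as collision or ray tests then descend only into those octants actually intersected by the query object.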
CSG
Constructive Solid Geometry (CSG) combines simple solids — so-called primitives — by using Boolean operations.
[Figure: two solids combined by ∪, ∩, and \, assembled into a CSG object.]
CSG Tree
CSG models are commonly used to describe man-made shapes.
Sample primitives include half-space, sphere, cylinder, cone, pyramid, cube, box, ellipsoid.
A CSG object is stored as a tree with operators at interior nodes, and the primitives at the leaves.
Every interior node stores the position and orientation of its children, and the Boolean operation to be applied to them.
Edges of the tree are ordered.
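Such a tree directly supports point classification: a point lies inside the CSG object iff recursive evaluation of the Boolean operators says so. A sketch (the classes and the sphere primitive are my own illustration, not from the lecture):

```python
from dataclasses import dataclass

# Leaves are primitives, interior nodes are Boolean operators; membership of
# a point is decided by recursing through the tree (point classification).

@dataclass
class Sphere:
    center: tuple
    radius: float
    def contains(self, p):
        return sum((a - b) ** 2 for a, b in zip(p, self.center)) <= self.radius ** 2

@dataclass
class Node:
    op: str            # 'union', 'intersection' or 'difference'
    left: object
    right: object
    def contains(self, p):
        a, b = self.left.contains(p), self.right.contains(p)
        if self.op == 'union':        return a or b
        if self.op == 'intersection': return a and b
        if self.op == 'difference':   return a and not b
        raise ValueError(self.op)

# A ball with a smaller ball subtracted from its right half.
solid = Node('difference',
             Sphere((0, 0, 0), 1.0),
             Sphere((1, 0, 0), 0.5))
```

Note that the ordering of the children matters for the difference operator, which is why the edges of the tree are ordered.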
CSG and Boolean Set Operations
CSG combines solid objects by using three or sometimes four different Boolean operations:
Union: Create a new solid that is the union of two solids; denoted by ∪ or +.
Intersection: Create a new solid that is the intersection of two solids; denoted by ∩ or ∗.
Difference: Create a solid by subtracting one solid from another solid; denoted by \ or −.
Complement: Create a new solid by subtracting a solid from the universe.
In theory, the set-theoretic difference can be replaced by a complement and intersection operation.
In practice, the difference is often more intuitive as it corresponds to removing a solid volume.
CSG Representation Caveats
Not commutative
Boolean operations are not commutative in general: A \ B ≠ B \ A!
Not unique
A CSG representation is not unique.
Problems of Standard Boolean Operations
Boolean operations may create dangling faces or edges, or result in lower-dimensional “solids”.
Possible types of intersection of two solids:
[Figure: two solids may intersect in a solid, a plane, a line, a point, or the empty set.]
Regularized Boolean Operations
To eliminate those lower-dimensional sets, the Boolean set operations are regularized:
1. Compute the interior of the solids. This yields objects without their boundaries.
2. Apply the standard Boolean set operation.
3. Compute the closure of the resulting object. This will add back the boundary.
More formally, let ∪, ∩, \ be the standard Boolean operations, and let int and cl denote interior and closure. We define the regularized counterparts ∪∗, ∩∗, \∗ as follows:
A ∪∗ B := cl(int(A) ∪ int(B)),
A ∩∗ B := cl(int(A) ∩ int(B)),
A \∗ B := cl(int(A) \ int(B)).
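The effect of regularization can be seen already in 1D (a toy example of mine, not from the slides): the standard intersection of the closed intervals [0, 1] and [1, 2] is the lower-dimensional set {1}, whereas the regularized intersection is empty.

```python
def regularized_intersection(i1, i2):
    """cl(int(i1) ∩ int(i2)) for closed 1D intervals given as (lo, hi) pairs."""
    lo = max(i1[0], i2[0])
    hi = min(i1[1], i2[1])
    # The interiors overlap only if lo < hi strictly; a single shared
    # endpoint (lo == hi) is a lower-dimensional set and is dropped.
    return (lo, hi) if lo < hi else None
```

E.g., regularized_intersection((0, 1), (1, 2)) yields None, while regularized_intersection((0, 1), (0.5, 2)) yields (0.5, 1).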
Pros and Cons of CSG
Pros: A CSG tree mimics the design and construction process.
Boolean operations are trivial for CSG objects.
Cons: The surface of a CSG object is not readily available.
Rendering a CSG object is difficult unless (massively parallel) ray tracing is used.
CSG trees are not unique: same-object detection and null-object detection are difficult.
Support for free-form surfaces requires complicated mathematics.
Boundary Representation
Describes a solid in a graph-like structure in terms of its surface boundaries: vertices, edges, faces.
Common abbreviation: b-rep.
It is imperative to model the full set of topological and numerical properties.
[Figure: a solid and its b-rep graph: faces f1–f4, edges e1–e6, and vertices v1–v4 with coordinates (xi, yi, zi).]
Boundary Representation
If curved faces are involved then the supporting surfaces of the faces have to be stored, too. Similarly for the edges of a b-rep model.
Most b-rep modelers support only solids whose boundaries are 2-manifolds.
So-called Euler operators can be used to guarantee that b-rep modeling produces 2-manifolds.
Boolean operations require sophisticated mathematical tools in order to represent the resulting object as a (valid) b-rep model.
In practice, dual and hybrid representation schemes are often used in order to be able to benefit from the advantages of the individual schemes.
Polyhedra
A polyhedron is bounded by a set of polygonal faces, where each edge is adjacent to an even number of faces.
Faces do not intersect except in common edges.
In order to guarantee a 2-manifold surface, every edge has to be shared by exactly two faces.
All faces are required to be planar.
Deficiencies of polyhedral models
Be warned that (freely available) polyhedral models that are used purely for rendering purposes tend to be of extremely low quality, and may violate our rules drastically!
Polyhedra of Genus 0 and Euler’s Formula
Polyhedron of genus 0: Can be deformed to a ball; no holes.
Examples: Cube, tetrahedron, pyramid.
A torus is not a genus-0 polyhedron.
Euler’s formula for genus-0 polyhedra:
V − E + F = 2,
where
V: #(vertices),
E: #(edges),
F: #(simply-connected 2D faces).
[Figure: three genus-0 polyhedra with (V, E, F) = (8, 12, 6), (5, 8, 5), and (6, 12, 8).]
The validity of Euler’s formula is necessary but not sufficient for a polyhedron to be of genus 0.
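The counts from the figure can be verified mechanically; a small sketch:

```python
def euler_characteristic(v, e, f):
    """V - E + F; equals 2 for every polyhedron of genus 0."""
    return v - e + f

# (V, E, F) for the three figure examples: cube, pyramid, octahedron.
examples = [(8, 12, 6), (5, 8, 5), (6, 12, 8)]
```

All three examples evaluate to 2, as Euler's formula demands.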
Polyhedra of Higher Genus
Euler’s formula generalizes to polyhedra of higher genus with 2-manifold boundaries:
V − E + F − H = 2(C − G),
where
H: #(holes in 2D faces),
G: #(holes passing through the polyhedron),
C: #(connected components).
[Figure: a polyhedron with a through-hole: V = 24, E = 36, F = 15, H = 3, C = 1, G = 1, satisfying V − E + F − H = 2(C − G).]
Practical consequence of Euler’s formula
A polyhedron with n vertices can be stored in O(n) memory units.
Winged-Edge Representation
Common way to represent 2-manifold polyhedra of genus 0.
Each edge e stores
two faces f1, f2 adjacent to e,
two endpoints v1, v2 of e,
two edges incident to v1 immediately before and after e in clockwise direction,
two edges incident to v2 immediately before and after e in clockwise direction.
[Figure: edge e1 with endpoints v1, v2, adjacent faces f1, f2, and wing edges e2, e3, e4, e5.]
Each vertex v stores a pointer to one of the edges incident to v.
Each face f stores a pointer to one of the edges bounding f.
Other common alternatives: Doubly-connected edge list (DCEL), half-edge data structure.
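The edge record described above translates directly into a data structure; a sketch (field names are mine, with integer indices used as pointers):

```python
from dataclasses import dataclass

@dataclass
class WingedEdge:
    """One edge record of a winged-edge structure."""
    v1: int                 # first endpoint
    v2: int                 # second endpoint
    f1: int                 # face on one side
    f2: int                 # face on the other side
    v1_prev: int = -1       # edges around v1 before/after this edge (clockwise)
    v1_next: int = -1
    v2_prev: int = -1       # edges around v2 before/after this edge (clockwise)
    v2_next: int = -1

@dataclass
class Vertex:
    edge: int = -1          # index of any one incident edge

@dataclass
class Face:
    edge: int = -1          # index of any one bounding edge
```

With edges stored in an array, adjacency queries (e.g., walking around a face or a vertex) follow the wing links in constant time per step.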
Particle Systems
Points (aka “particles”) that follow laws of physics are used to model an object.
Sample phenomena generated by particle systems:
Smoke, fire, fog;
Deformable objects: clothes, elastic objects, rope;
Waves, turbulent air flow, storm.
Independent particles: Position of a particle does not depend on others, e.g., particles under gravity. A time step for an n-particle simulation requires Θ(n) time.
Interacting particles: Position of a particle depends on the others; particles are “linked”. Each time step requires Θ(n²) time.
In practice the dynamics of a particle often depend on its neighbors, e.g., clothing simulation, ropes, stars.
Spring forces can be used to model the interaction of adjacent particles.
[Figure: particle p(i,j) connected by springs to its grid neighbors p(i±1,j) and p(i,j±1).]
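A spring-coupled particle step can be sketched as follows (Hooke's law plus a semi-implicit Euler update; the function names and constants are illustrative, not from the lecture):

```python
import math

def spring_force(p, q, rest_len, k):
    """Force on the particle at p exerted by a spring of stiffness k to q."""
    d = [qi - pi for pi, qi in zip(p, q)]
    dist = math.sqrt(sum(c * c for c in d)) or 1e-12   # avoid division by zero
    mag = k * (dist - rest_len)        # stretched spring pulls p towards q
    return [mag * c / dist for c in d]

def euler_step(pos, vel, force, mass, dt):
    """One semi-implicit Euler step: update velocity first, then position."""
    vel = [v + dt * f / mass for v, f in zip(vel, force)]
    pos = [x + dt * v for x, v in zip(pos, vel)]
    return pos, vel
```

In a cloth simulation, each time step would sum the spring forces from all grid neighbors of a particle (plus gravity) before integrating.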
Fractal Models
The term “fractal” is used for self-similar objects.
Sample fractals: Mandelbrot and Julia sets.
When constructed by an algorithm, we repeat the same construction scheme recursively.
E.g., Koch Curve.
Fractal Models
Fractals are used to model mountains, rocks, trees, coast-lines, etc.
Fractal dimension: We use the formula
d = log n / log s
to calculate the fractal dimension d, where n is the number of small pieces that go into the larger one, and s is the scale at which the smaller pieces compare to the larger one.
A line in the Koch Curve breaks up into four smaller pieces, which are three times as short as the original. This yields d = 1.2619. . . as the fractal dimension of the Koch Curve.
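The computation for the Koch Curve is a one-liner:

```python
import math

def fractal_dimension(n, s):
    """d = log n / log s for n self-similar pieces, each scaled down by s."""
    return math.log(n) / math.log(s)

# Koch Curve: each segment breaks into n = 4 pieces at 1/3 the length (s = 3).
d_koch = fractal_dimension(4, 3)      # 1.2618...
```

An ordinary line segment (n = 2 pieces at half length, s = 2) gives d = 1, as expected for a 1D object.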
Fractals: Modeling Peaks
Fournier, Fussell and Carpenter [1982] first used recursive subdivision to model peaks.
[Figure: (a)–(c) recursive subdivision of a terrain profile over [0, 1] in three stages.]
Grammar Models
Generalization of fractals.
Sample grammar with alphabet A, B, [, ], (, ), where A is a vertical segment, B is a horizontal segment, (, ) models a right branch, [, ] models a left branch, and rules
A → AA,
B → A[B]AA[B],
B → A[B]AA(B).
Sample grammar model relative to those rules, where only the left branch is used:
1. B,
2. A[B]AA[B],
3. AA[A[B]AA[B]]AAAA[A[B]AA[B]].
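The derivation above is plain parallel string rewriting; a sketch applying only the left-branch rule for B:

```python
def rewrite(word, rules):
    """One L-system step: replace every symbol by its right-hand side."""
    return "".join(rules.get(ch, ch) for ch in word)

rules = {"A": "AA", "B": "A[B]AA[B]"}
word = "B"
for _ in range(2):
    word = rewrite(word, rules)
# word is now step 3 of the derivation: AA[A[B]AA[B]]AAAA[A[B]AA[B]]
```

Rendering then walks the final string, drawing a segment for A or B and pushing/popping the drawing state at the branch brackets.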
[Figure: the branching structures generated by derivation steps 1–3.]
Grammar Models
Another sample grammar model: Both left and right branches, where B models a diagonal segment.
1. B,
2. A[B]AA(B),
3. AA[A[B]AA(B)]AAAA(A[B]AA(B)).
[Figure: the corresponding branching structures with both left and right branches.]
4 Raster Graphics
Light and Color
Color Models
Scan Conversion
The Physical Nature of Light
Caveat
The following slides present a simplified view of the physical nature of light. Consult a physics textbook for an in-depth coverage of these topics!
Light is electromagnetic energy that is visible to humans.
According to physicists, light exhibits a wave-particle duality.
The wave model makes an analogy comparing light to water waves.
The particle model: light is made up of many little particles.
Neither model is really complete or correct.
Under some circumstances light behaves like a wave.
Under other circumstances light behaves like a particle.
The particle model alone can go a long way towards understanding and explaining light, though.
The basic particle of light is called a photon: We can see it as an object that moves along a straight line and vibrates during its move.
This vibration is a kind of mathematical abstraction.
It is useful since much of the mathematics that describes vibrations seems to work in describing the behavior of light.
The Physical Nature of Light
With every photon we can associate a particular frequency f of vibration.
The frequency is measured in Hertz (Hz).
Visible frequencies range from about 4.3 × 10^14 Hz to 7.5 × 10^14 Hz.
An alternate way to characterize the vibration of a photon is to consider its wavelength λ.
The wavelength is measured in meters, or nanometers (with 1 nm = 10^−9 m).
Visible wavelengths lie in the 400 nm to 740 nm range, or, at best, within 380 nm to 780 nm.
Long wavelengths are perceived as reds and short wavelengths are perceived as blues.
[Figure: the visible spectrum, 380–780 nm.]
The Physical Nature of Light
Wavelength and frequency are closely related:
λ · f = c,
where c := 299 792 458 m/s is the speed of light traveling in vacuum.
The energy, E , of a photon is directly related to its frequency (Planck-Einsteinrelation):
E = h · f ,
where h := 6.626 070 15 × 10^−34 J s is Planck’s constant (Dt.: Plancksches Wirkungsquantum); energy is measured in Joule (J).
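Both relations are easy to evaluate numerically; a small sketch using the constants above, for green light at 550 nm:

```python
C = 299_792_458          # speed of light in m/s
H = 6.626_070_15e-34     # Planck's constant in J s

def frequency(wavelength_m):
    """f = c / lambda."""
    return C / wavelength_m

def photon_energy(wavelength_m):
    """Planck-Einstein relation E = h * f."""
    return H * frequency(wavelength_m)

f_green = frequency(550e-9)   # green light at 550 nm, about 5.45e14 Hz
```

The resulting frequency lies inside the visible band of 4.3 × 10^14 Hz to 7.5 × 10^14 Hz quoted earlier.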
Color and Spectra
Have you ever seen a white band in a rainbow?
Likely not. White is not a pure spectral color: No single photon can give us the impression of white light.
A prism can be used to show that white light is really a mixture of different spectral colors.
[Figure: a glass prism splitting sunlight into red, orange, yellow, green, blue, indigo, violet.]
Wavelength is important!
Reflection and refraction of light depend on the wavelength!
White light: photons of many different spectral colors strike the same region of our eye nearly simultaneously.
Color and Spectra
How can we characterize different amounts of photons at different wavelengths?
We could
set up a measuring instrument,
count the average number of photons,
at each visible wavelength,
over some period of time,
and then plot the results.
Such an intensity versus wavelength plot is often called a frequency spectrum plot, which is often abbreviated simply as spectrum.
Hence, one way to describe color is to attach a spectrum to each light ray, describing the light traveling along that ray.
Except for a few cases, such as fluorescent light that has spikes, the spectrum (regarded as a function of wavelength) tends to be smooth.
Color and the Eye: Rods and Cones
Human vision relies on light-sensitive cells (“sensors”) in the retina of the eye: rods and cones.
Rods (Dt.: Stäbchen) are cells which can work at low intensity, but cannot handle color.
Cones (Dt.: Zäpfchen) are cells which can handle color, but require brighter light to function.
Cones are concentrated near the fovea (Dt.: Sehgrube) of the retina (Dt.: Netzhaut).
Many more rods (≈ 120M) than cones (≈ 6M).
Color and the Eye: Trichromatic Theory
First proposed by Thomas Young (1801), refined by Hermann von Helmholtz(1861).
The standard assumption is that the retina has three different types of cones, with peak sensitivity to
yellow – the peak response is around 580 nm, most commonly but not correctly also referenced as red;
green – the peak response is around 545 nm; and
blue – the peak response is around 440 nm.
Every single wavelength triggers all three kinds of cones, but by different amounts.
The trichromatic theory helps to explain color blindness:
Protanope (red blindness),
Deuteranope (green blindness),
Tritanope (blue blindness, very rare).
Color and the Eye
Left: Spectral-response functions of each of the three types of cones on the human retina, describing the fraction of light absorbed by each cone with respect to wavelength.
Right: Luminous-efficiency function for the human eye, describing the relative sensitivity of the eye with respect to wavelength.
Color and the Eye
We have a peak sensitivity to yellow-green light of wavelengths around 550 nm:
About two thirds of the cones are sensitive to yellow.
Almost one third is sensitive to green.
Only a few percent have a sensitivity to blue.
Thus, the eye’s response to blue light is much weaker than its response to yellow or green.
Human Color Perception
The human eye can distinguish about 350 000 different colors in color space:
about 128 different hues,
for each hue around 20–30 different saturations, depending on the hue,
60–100 different brightness levels.
When colors differ only in hue, the difference in wavelength between just noticeably different colors varies from more than 10 nm at the extremes of the spectrum to less than 2 nm around 480 nm (blue) and 580 nm (yellow).
[Figure: just-noticeable wavelength difference ∆(λ) as a function of wavelength.]
Optical Illusions
The human visual system is very susceptible to optical illusions!
Humans are not particularly good at judging absolute quantities, such as angles or areas. E.g., we tend to overestimate small angles and underestimate large angles.
Kanizsa triangles: Our visual system fills in the missing portions of the edges in order to allow us to see triangles (“subjective contours”).
Ebbinghaus illusion: The two inner circles are of the same size! Colors, shapes, and relative sizes can fool humans easily . . .
Optical Illusions
Geometrical-optical illusions are characterized by an incorrect perception of size, length or curvature.
Müller-Lyer illusion: The horizontal line segments are of the same length!
Café wall illusion: The gray horizontal lines between staggered rows of alternating black and white squares are parallel.
Optical Illusions
Simultaneous contrast illusion: The luminosity of an object perceived by a human depends on its surrounding background.
The two inner squares are colored equally!
The background is a color gradient that progresses from black to white. The horizontal bar is shaded uniformly!
Optical Illusions
Checker shadow illusion (Adelson 1995): The two cells labelled A and B are of exactly the same color. (E.g., a color picker returns the hex value 787878, or 47% each of red, green and blue.) See https://www.youtube.com/watch?v=z9Sen1HTu5o for a video.
[Image credit: Wikipedia]
Optical Illusions
Ambiguous garage roof illusion (Sugihara 2015): The garage roof is neither round nor corrugated. The illusion exploits the fact that a single image does not convey depth information, and that the human brain prefers to take the silhouette curve of the roof as the intersection of the roof with a plane normal to the obvious axis of the roof.
[Image credit: K. Sugihara]
More information (plus several videos) on http://www.isc.meiji.ac.jp/~kokichis/Welcomee.html.
See https://www.youtube.com/watch?v=xYe4-7I5ot0 for a video on similar tricks.
Light and Color: Recommendations
As blue gives only poor contrast, it is not appropriate for text, fine lines, or small objects.
Neighboring colors should vary in more than their value of blue.
Red or green should be avoided on the boundary of large displays.
Associated objects should be shown on the same background.
Similar colors should signal similar meaning: consistency is vital!
Keep in mind that the meaning of a color may be dependent on the cultural ornational background of the user.
Too many colors with different meanings demand too much from the human perception. The magic number of objects recognized at a single glance: 5 ± 2.
KISS – keep the color scheme simple! The magic number for short-term memory of colors is 7 ± 2.
Unfortunately . . .
. . . it is more difficult to use color effectively than to use it ineffectively.
Primary Colors
It is natural to attempt to model colors as a “mixture” of a small number of primary colors.
There are two basic ways of mixing color: one is additive, by combining emitted light of different colors, while the other is subtractive, by preventing certain portions of white light from being reflected.
Additive representation: Starting from black, create colors by adding different amounts of the primaries. E.g., adding red and blue generates magenta.
Subtractive representation: Starting from white, create colors by subtracting portions of white. E.g., magenta dye blocks green from being reflected.
Perceptual Color Matching: CIE-XYZ
Long before the advent of computers, the need for a succinct specification of primaries became apparent.
In 1931, CIE (Commission Internationale de l’Eclairage) defined a “standard observer”:
Roughly, a standard observer is a small group of 15–20 individuals. It is supposed to be representative of normal human color vision.
The observer viewed a split screen with (close to) 100% reflectance.
On one half, a test lamp casts a pure spectral color on the screen.
On the other half, three lamps emitting varying amounts of red, green, and blue light attempted to match the spectral light of the test lamp.
The observer determined when the two halves of the split screen were identical, thus defining the tristimulus values for each distinct spectral color.
It was realized that a linear combination (with non-negative coefficients) of the red, green and blue primary lamps could not reproduce all spectral light.
Since negative coefficients were considered inadequate, CIE defined three (artificial) additive primaries and a corresponding color model, CIE-XYZ, in 1931.
The CIE color model was developed to be completely independent of any means of emission or reproduction and is based as closely as possible on how humans perceive color.
In 1960, the CIE-XYZ model was modified, and again revised in 1976 to become CIE-1976-L*a*b and CIE-1976-L*u*v, in an attempt to linearize the perceptibility of color differences. (The basic principles remained the same, though.)
CIE Chromaticity Diagram
Every visible color (and some invisible ones, too) can be expressed as a combination of the CIE primaries: α · X + β · Y + γ · Z.
This defines a 3D linear color space with respect to X, Y and Z.
It is common to project this space onto the plane X + Y + Z = 1.
The coordinates of this projected 2D plane are usually called x and y, where
x = X / (X + Y + Z) and y = Y / (X + Y + Z) and z = 1 − x − y = Z / (X + Y + Z).
The resulting diagram is known as chromaticity diagram.
Note that this diagram captures only color but not luminance of the light source.
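The projection onto the chromaticity plane is a two-line computation; the sample XYZ values below are chosen by me so that (x, y) lands near (0.313, 0.329), the CIE D6500 white point:

```python
def chromaticity(X, Y, Z):
    """Project CIE-XYZ onto the plane X + Y + Z = 1, returning (x, y)."""
    total = X + Y + Z
    return X / total, Y / total

x, y = chromaticity(0.3127, 0.3290, 0.3583)   # (x, y) ≈ (0.313, 0.329)
```

Since z = 1 − x − y, the pair (x, y) fully determines the chromaticity; the lost third degree of freedom is the luminance.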
CIE Chromaticity Diagram
Spectral colors are on the boundary of the “horseshoe”.
The line joining violet and red, the purple line, is not part of the spectrum.
White is near the middle. (CIE D6500 is at position (0.313, 0.329).)
Any color that results from additive mixing of two colors will lie on the line segment joining these two colors.
Any color that results from additive mixing of three colors will lie in the triangle connecting these three colors.
[Figure: CIE chromaticity diagram, with spectral wavelengths from 380 nm to 780 nm along the horseshoe boundary and named color regions (white, green, cyan, yellow, blue, violet, purple, pink, orange, red).]
RGB Color Model
The RGB color space is given by the unit cube, where the primaries red (R), green (G), and blue (B) correspond to the coordinate axes.
In this system, (0, 0, 0) corresponds to black and (1, 1, 1) is white.
[Figure: RGB cube with corners Black = (0,0,0), Red = (1,0,0), Green = (0,1,0), Blue = (0,0,1), Yellow = (1,1,0), Cyan = (0,1,1), Magenta = (1,0,1), White = (1,1,1).]
RGB Color Model
The RGB model is the most widely used color model for specifying the color of a pixel on a monitor.
Its practical importance is derived from the fact that triads of three phosphor dots or LCD/LED cells – with colors red, green, and blue – are used to produce a color in an additive way on a standard monitor.
Although the arithmetic interpolation between two RGB triples is geometrically linear, such an interpolation need not be linear perceptually: An incremental change of an RGB triple may produce no perceivable difference in one part of the RGB cube, while it may create visually different colors in some other part of the cube.
Always bear in mind that the class of all colors that can be displayed on a monitor is a subset of the colors perceivable by humans.
Recall that an RGB image may look different on different monitors.
CMYK for Color Printing
When a white surface is coated with cyan ink, no red light is reflected: Cyan subtracts red from the reflected white light.
CMY color model: The inks used in color printing are cyan (light blue), magenta (purple), and yellow.
To maintain black color purity, and to speed up the drying process, a separate black ink is used rather than relying on cyan, magenta, and yellow to generate black: CMYK.

Dye         | Absorbs Color | Reflects Colors
Cyan (C)    | Red           | Blue and Green
Magenta (M) | Green         | Blue and Red
Yellow (Y)  | Blue          | Green and Red
Black (K)   | All           | None

As with the RGB model, the CMY model can be regarded as a unit cube, where (0, 0, 0) corresponds to white and (1, 1, 1) is black.
RGB-to-CMYK Conversion
Given intensity values R, G, B, where each value is between 0 and 1, we can convert to CMY using the following masking equations:
C = 1 − R and M = 1 − G and Y = 1 − B.
This is approximate: It assumes that the printed cyan is equal to white minus the red of the monitor, and this is rarely the case.
Adding black (K) as an additional color further complicates the matter.
Typically, a color printer cannot print all colors a computer monitor can display, and a computer monitor cannot display all colors a color printer can!
E.g., pure green or pure blue is outside of the gamut of printers.
Consequently, the same image displayed on a computer monitor may not match that printed in a publication.
Color shifts may occur when the RGB-to-CMYK conversion takes place.
Nevertheless, this “four-color process” or “full-color” printing generates the vast majority of magazines and marketing publications.
High-fidelity conversions from RGB to CMYK currently require careful tweaking to compress and stretch the RGB gamut of a particular image so that it fits into the available CMYK gamut.
This is an area of active research!
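A naive conversion that combines the masking equations with maximal black generation can be sketched as follows; this is the simplistic textbook formula, not the careful gamut mapping just described:

```python
def rgb_to_cmyk(r, g, b):
    """Masking equations C=1-R, M=1-G, Y=1-B, plus maximal K extraction."""
    c, m, y = 1.0 - r, 1.0 - g, 1.0 - b
    k = min(c, m, y)              # grey component is moved into black ink
    if k == 1.0:                  # pure black: avoid division by zero
        return 0.0, 0.0, 0.0, 1.0
    scale = 1.0 - k
    return (c - k) / scale, (m - k) / scale, (y - k) / scale, k
```

E.g., pure red (1, 0, 0) maps to (0, 1, 1, 0): magenta plus yellow ink, no black.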
Color Gamut
The color gamuts of films, monitors and color printers form (fairly small) subsets of the chromaticity diagram: gamut mapping may be required! (Note that hardware vendors sometimes prefer to claim larger/different gamuts!)
[Figure: chromaticity diagram showing the gamuts of Adobe-RGB (1998), sRGB, and CMYK (DIN 16539).]
Other Color Models
The CIE color diagram is a scientific formalism, but it does not provide a natural user interface for specifying colors.
RGB and CMY(K) are great from a technical point of view, but both are equally bad from an artist’s perspective.
Several other color models have been developed:
HSV: Hue, Saturation, Value.
HLS: Hue, Lightness, Saturation.
Hue, Saturation, Value
The HSV model dates back to a color notation proposed by Munsell in 1905:
Hue: “It is that quality by which we distinguish one color from another, as red from yellow.” It is given by the dominant wavelength of the light in that color.
Saturation: “The degree of departure of a color sensation from that of white or gray.” It models the purity of the color.
Value: “It is that quality by which we distinguish a light color from a dark one.” It models the brightness (i.e., amount of energy) of the light.
[Figure: HSV hexcone with hue angles Red = 0°, Green = 120°, Blue = 240°, saturation S measured radially, value V vertically, and black at the apex.]
HSV Pyramid
Hue is measured from 0 to 360 degrees counter-clockwise, with red at 0.
Saturation is the distance away from the center line; decreasing S corresponds to adding white.
Value is the vertical distance above black; decreasing V corresponds to adding black.
The spectral colors are given by V = S = 1 and arbitrary H.
HSV-to-RGB Conversion
A hexagonal cross-section of the HSV pyramid can be regarded as a sub-cube of the RGB cube projected onto a plane that is normal to its main diagonal.
This establishes a one-to-one mapping between RGB and HSV.
Thus, the arithmetic interpolation between two HSV triples is neither geometrically linear nor perceptually linear.
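The standard hexcone conversion from HSV to RGB can be sketched as follows (the usual piecewise formula, not taken from the slides):

```python
def hsv_to_rgb(h, s, v):
    """Convert h in [0, 360[, s and v in [0, 1] to an (r, g, b) triple."""
    c = v * s                           # chroma
    hp = (h % 360.0) / 60.0             # hexagon sector in [0, 6[
    x = c * (1.0 - abs(hp % 2.0 - 1.0))
    r1, g1, b1 = [(c, x, 0.0), (x, c, 0.0), (0.0, c, x),
                  (0.0, x, c), (x, 0.0, c), (c, 0.0, x)][int(hp)]
    m = v - c                           # shift so that max(r, g, b) == v
    return r1 + m, g1 + m, b1 + m
```

E.g., hsv_to_rgb(0, 1, 1) yields pure red (1, 0, 0), and hsv_to_rgb(0, 0, 1) yields white (1, 1, 1).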
[Figure: hexagonal cross-section with corners Red, Yellow, Green, Cyan, Blue, Magenta.]
Hue, Lightness, Saturation
Developed at Tektronix, the HLS model is very similar to HSV.
It accounts for the fact that as light gets too bright or too dark, the range of perceivable colors narrows to only white or only black.
[Figure: HLS double cone with Hue as angle (Red at 0°, Green at 120°, Blue at 240°), Saturation as distance from the axis, Lightness as height, and Black at the bottom apex.]
Raster Devices
The most common graphics output device is the raster display.
An image is generated by a 2D array of small dots or squares: pixels (shorthand for "picture elements").
Every pixel can be set individually; a typical (API) command might be SetPixel(x, y, color)
where x and y are pixel coordinates.
Depending on the number of different color and intensity values of every pixel, we distinguish among the following displays.
Monochrome display: each pixel can either be on or off; only one color can be displayed. Typical device: laser printer.
Grey-scale display: each pixel can have one of n possible brightness values ("intensities").
Color display: each pixel can have one of 2^k possible colors (if k bits per pixel are used); each pixel is composed of a cluster of single-color sub-pixels that fool the eye. Typical device: color monitor.
Different Device Coordinate Systems
Unfortunately not all systems adopt the same pixel addressing conventions: some systems have the origin at the upper-left corner, some have it at the lower-left corner.
Warning
X11 has its coordinate origin in the upper-left corner, while OpenGL coordinates have their origin in the lower-left corner!
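Converting between the two conventions is a one-liner; the helper below (a hypothetical name, not an X11 or OpenGL call) flips a pixel row for a window of the given height:

```python
def flip_y(y, height):
    # Map row y from an upper-left origin (X11 convention) to a
    # lower-left origin (OpenGL convention) for a window that is
    # `height` pixels tall; the same formula also maps back.
    return height - 1 - y
```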
Drawing a Line
We will always assume that the end-points of a straight-line segment L are given in integer coordinates relative to the resolution of the output device.
Which pixels should be turned on?
Pixels should be as close to L as possible!
Specifying the Desired Output
The general goal is
to minimize the stair-case effect ("jaggies") due to the replacement of a continuum of width zero by a discrete set of pixels of non-zero area,
to have a uniform line density,
to make sure that a line drawn is independent of whether we draw it from P1 to P2 or from P2 to P1,
to cast the conversion into an algorithm that is fast.
The simple fact that a continuum is replaced by a discrete set of pixels is the source of many serious problems in graphics that are known as aliasing problems!
The following two specifications are widely used:
The set of pixels whose centers are covered by a parallelogram of width 1 centered on the line.
The shortest sequence of eight-connected pixels that most closely approximates the line.
Specifying the Desired Output
Parallelogram Coverage: Select pixels within strip of width 1.
Eight-Connectedness: Used by Bresenham’s algorithm.
Brute-Force Scan Conversion
Consider a line segment between (x1, y1) and (x2, y2), where x1, y1, x2, y2 ∈ N, and x1 ≤ x2 and y1 ≤ y2.
The equation of the line is given by

y = s · x + c,

where

s = (y2 − y1)/(x2 − x1)   and   c = (y1 · x2 − y2 · x1)/(x2 − x1).
We get a simple scan-conversion algorithm by incrementing x, computing the corresponding y, and rounding it to the nearest integer value.
Brute-Force Scan Conversion
Algorithm Brute-Force Scan Conversion
Input: P1, P2: point   (∗ P1 = (x1, y1), P2 = (x2, y2) ∗)
1.  var s, c: real;
2.  var x, y: integer;
3.  s ← (y2 − y1)/(x2 − x1);
4.  c ← (y1 · x2 − y2 · x1)/(x2 − x1);
5.  for x ← x1 to x2 do
6.      y ← ⌊s · x + c⌋;
7.      SetPixel(x, y);
The rounding operation ⌊·⌋ is expensive.
The algorithm involves floating-point arithmetic.
In any case, this simple algorithm will not work particularly well . . .
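As a concrete illustration, the brute-force conversion fits into a few lines of Python (a sketch; SetPixel is replaced by collecting the pixels in a list, and ⌊·⌋ is taken literally, as in the pseudocode):

```python
def brute_force_line(x1, y1, x2, y2):
    # Slope and intercept as real numbers: the source of the
    # floating-point arithmetic criticized above.
    s = (y2 - y1) / (x2 - x1)
    c = (y1 * x2 - y2 * x1) / (x2 - x1)
    # For every integer x, compute y and truncate; for the
    # non-negative coordinates assumed here int() equals floor.
    return [(x, int(s * x + c)) for x in range(x1, x2 + 1)]
```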
Handling Different Inclinations
We need a different algorithm depending on whether the change in y is bigger than the change in x.
Bresenham’s Algorithm for Drawing a Line
Developed in the early 1960s to control the pen movements of plotters.
Be warned that several improvements to Bresenham's original algorithm have been proposed since its invention. So, by now, dozens of slightly different scan-conversion algorithms are denoted as "Bresenham's Algorithm".
The following description is limited to line segments that lie in the first octant, i.e., y = s · x + c with 0 ≤ s ≤ 1.
Let ∆x := x2 − x1 and ∆y := y2 − y1, where x1, x2, y1, y2 ∈ N, and x1 ≤ x2 and y1 ≤ y2. Furthermore, ∆y ≤ ∆x.
We will draw the segment from left to right.
Assume that pixel (xi, yi) has been set. Which pixel is next? The pixel (xi + 1, yi) or the pixel (xi + 1, yi + 1)?
Remember that we seek an eight-connected set of pixels. Thus, (xi, yi + 1) is no option once (xi, yi) was drawn.
Basic Idea of a Midpoint Algorithm
[Figure: from the current pixel, the midpoint M between the candidate pixels E (east) and NE (north-east) decides which pixel to set next.]
The Mathematics of Bresenham’s Line Algorithm
In implicit form we get

F(x, y) := x ∆y − y ∆x + c ∆x

for the equation of the line through (x1, y1) and (x2, y2), where

F(x, y) < 0 ⇔ (x, y) above the line,
F(x, y) = 0 ⇔ (x, y) on the line,
F(x, y) > 0 ⇔ (x, y) below the line.

Bresenham's Algorithm always increments x. Whether or not y is incremented depends on the position of the midpoint relative to the line.

ei := F(xi + 1, yi + 1/2)

ei < 0: xi+1 := xi + 1, yi+1 := yi (E),
ei ≥ 0: xi+1 := xi + 1, yi+1 := yi + 1 (NE).
The Mathematics of Bresenham’s Line Algorithm
Goal: derive the error variable ei+1 directly from the last error variable ei .
Case (E), with xi + 2 = xi+1 + 1 and yi + 1/2 = yi+1 + 1/2:

ei+1 = F(xi + 2, yi + 1/2) = xi ∆y + ∆y + ∆y − yi ∆x − (1/2) ∆x + c ∆x
     = F(xi + 1, yi + 1/2) + ∆y = ei + ∆y.

Case (NE), with xi + 2 = xi+1 + 1 and yi + 3/2 = yi+1 + 1/2:

ei+1 = F(xi + 2, yi + 3/2) = xi ∆y + ∆y + ∆y − yi ∆x − (1/2) ∆x − ∆x + c ∆x
     = F(xi + 1, yi + 1/2) + ∆y − ∆x = ei + ∆y − ∆x.
The Mathematics of Bresenham’s Line Algorithm
The first error variable e1 is initialized as

e1 := F(x1 + 1, y1 + 1/2) = x1 ∆y + ∆y − y1 ∆x − (1/2) ∆x + c ∆x
    = F(x1, y1) + ∆y − (1/2) ∆x = ∆y − (1/2) ∆x,

since F(x1, y1) = 0.

For the purposes of Bresenham's algorithm we may replace F(x, y) by 2F(x, y), thus eliminating the division by 2 in e1:

e1 = 2∆y − ∆x,

ei+1 := ei + 2∆y         if (E),
        ei + 2∆y − 2∆x   if (NE).
Bresenham’s Line Algorithm
Algorithm Bresenham
Input: P1, P2: point   (∗ P1 = (x1, y1), P2 = (x2, y2) ∗)
1.  var x, y, ∆x, ∆y, error, c1, c2: integer;
2.  ∆x ← x2 − x1; ∆y ← y2 − y1;
3.  x ← x1; y ← y1;
4.  c1 ← 2 · ∆y; error ← c1 − ∆x; c2 ← error − ∆x;
5.  repeat
6.      SetPixel(x, y);
7.      inc(x);
8.      if error < 0 then   (∗ (E) ∗)
9.          error ← error + c1;
10.     else   (∗ (NE) ∗)
11.         inc(y);
12.         error ← error + c2;
13. until x > x2;
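A direct transliteration of this pseudocode into Python (first octant only; pixels are collected in a list instead of calling SetPixel):

```python
def bresenham_line(x1, y1, x2, y2):
    # Integer-only midpoint scan conversion for 0 <= dy <= dx.
    dx, dy = x2 - x1, y2 - y1
    x, y = x1, y1
    c1 = 2 * dy                 # error increment for case (E)
    error = c1 - dx             # e1 = 2*dy - dx
    c2 = error - dx             # increment for case (NE): 2*dy - 2*dx
    pixels = []
    while x <= x2:
        pixels.append((x, y))
        x += 1
        if error < 0:           # midpoint below the line: go east
            error += c1
        else:                   # midpoint on/above the line: go north-east
            error += c2
            y += 1
    return pixels
```

Note that the loop body uses only integer additions and comparisons; no floating-point arithmetic and no rounding remain.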
Drawing a Circle
The standard parameterization of a circle with radius r centered at the origin is given by

x(ϕ) = r cos ϕ   and   y(ϕ) = r sin ϕ,

with 0 ≤ ϕ < 2π.
Discretization based on ϕi := i · (2π/n) for 0 ≤ i ≤ n − 1 yields

xi+1 := x(ϕi+1) = x(ϕi + δ) = r(cos ϕi cos δ − sin ϕi sin δ) = xi cos δ − yi sin δ,
yi+1 := y(ϕi+1) = y(ϕi + δ) = r(sin ϕi cos δ + cos ϕi sin δ) = xi sin δ + yi cos δ,

where δ := 2π/n, x0 := r and y0 := 0.

A brute-force scan-conversion algorithm for circular arcs and ellipses is easily derived from these equations.
Bresenham’s Algorithm for Drawing a Circle
We consider the second octant of circles with integer radius centered at the origin.
The circular arc is drawn from 90° to 45°!
Use symmetry to draw the other portions of the circle.
[Figure: the eight symmetric points (x, y), (−x, y), (−x, −y), (x, −y), (y, x), (y, −x), (−y, x), (−y, −x); only the octant from 90° down to 45° is scan-converted directly.]
The Mathematics of Bresenham’s Circle Algorithm
We have

F(x, y) := x² + y² − r²,

with

F(x, y) > 0 ⇔ (x, y) outside the circle,
F(x, y) = 0 ⇔ (x, y) on the circle,
F(x, y) < 0 ⇔ (x, y) inside the circle.
Once again, we use the idea of a midpoint algorithm: if the midpoint lies inside the circle then (E) else (SE).
[Figure: the midpoint M between the candidate pixels E (east) and SE (south-east) for the next pixel.]
The Mathematics of Bresenham’s Circle Algorithm
The error variable is defined as

ei := F(xi + 1, yi − 1/2) = (xi + 1)² + (yi − 1/2)² − r².

Furthermore,

ei < 0: xi+1 := xi + 1, yi+1 := yi (E),
ei ≥ 0: xi+1 := xi + 1, yi+1 := yi − 1 (SE).

We get

(E):  ei+1 = F(xi + 2, yi − 1/2) = (xi + 2)² + (yi − 1/2)² − r²
           = F(xi + 1, yi − 1/2) + 3 + 2xi = ei + 2xi + 3
           = ei + 2xi+1 + 1.

(SE): ei+1 = F(xi + 2, yi − 3/2) = … = ei + 2xi − 2yi + 5
           = ei + 2xi+1 − 2yi+1 + 1.
The Mathematics of Bresenham’s Circle Algorithm
The first error variable is initialized as e1 := F(1, r − 1/2) = 5/4 − r.

Once again, we substitute F(x, y) by 2F(x, y). Also, we add 1/2 to e1. Thus, we end up with

ei+1 = ei + 4xi+1 + 2           (E),
       ei + 4xi+1 − 4yi+1 + 2   (SE),

and

e1 := 3 − 2r.
Bresenham’s Circle Algorithm
Algorithm Bresenham
Input: rad: integer
1.  var x, y, error: integer;
2.  x ← 0; y ← rad;
3.  error ← 3 − 2 · rad;
4.  while x ≤ y do
5.      SetPixel(x, y);
6.      inc(x);
7.      if error ≥ 0 then
8.          dec(y);
9.          error ← error − 4 · y;
10.     error ← error + 4 · x + 2;
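Transliterated into Python (second octant only; again collecting pixels instead of calling SetPixel):

```python
def bresenham_circle_octant(rad):
    # Second octant of a circle with integer radius `rad` centered at
    # the origin, from (0, rad) down to the 45-degree point.
    x, y = 0, rad
    error = 3 - 2 * rad
    pixels = []
    while x <= y:
        pixels.append((x, y))
        x += 1
        if error >= 0:          # midpoint outside the circle: step south-east
            y -= 1
            error -= 4 * y
        error += 4 * x + 2      # common east part of the update
    return pixels
```

The remaining seven octants follow by mirroring each pixel (x, y) to (±x, ±y) and (±y, ±x).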
5 Basic Rendering Techniques
Clipping
Hidden-Surface Removal
Illumination Model
Incremental Shading Techniques
Aliasing and Anti-Aliasing
Texture Mapping
View Volume and Clipping
The portion of the view plane (or image plane) that we are interested in is defined by the view window.
Together with the view point, the view window defines a pyramid-shaped portion of the space: the so-called view volume (or view frustum).
Typically, a near plane (or front plane) and a far plane (or back plane) are added in order to exclude objects from being projected that are very far from or very close to the viewer.
The process of restricting an object to the view volume/window is called clipping.
[Figure: view point, view plane with view window, and near and far planes bounding the view volume.]
View Volume and Clipping
The view volume is a frustum of a pyramid in the case of perspective projection, and a parallelepiped in the case of parallel projection.
Since clipping objects to a box is much simpler than clipping to a genuine pyramid, it is common to apply a perspective normalization in order to convert a perspective projection into an orthographic projection.
Clipping can be performed in 3D prior to projecting the objects onto the view plane, or in 2D after projecting the objects onto the view plane.
Clipping in 2D
The task of clipping is to replace line segments with shorter segments that fit neatly into the view/clip window.
Clipping can be performed in object space or in image space:
Object-space clipping: compute intersections analytically, and scan-convert only clipped segments.
Image-space clipping: scan-convert full segments and perform point clipping afterwards.
We assume that the window is given by the axis-parallel rectangle W := [xmin, xmax] × [ymin, ymax].

Point clipping: xmin ≤ x ≤ xmax and ymin ≤ y ≤ ymax.
Line Clipping
Let W := [xmin, xmax] × [ymin, ymax] be the clip window, and consider the clipping of a line segment ℓ := AB.
Cases:
A ∈ W, B ∈ W: accept ℓ.
A ∈ W, B ∉ W: compute P = ℓ ∩ ∂W, accept AP.
A ∉ W, B ∈ W: compute P = ℓ ∩ ∂W, accept BP.
A ∉ W, B ∉ W: if A, B lie outside the same boundary line of W then reject ℓ; otherwise, a more complicated test is needed ...
Cohen-Sutherland Algorithm
We classify the position of every point P with respect to the supporting lines of W by assigning a 4-bit out code, OC(P), as follows:
0001: P to the left of x = xmin.
0010: P to the right of x = xmax.
0100: P below y = ymin.
1000: P above y = ymax.
[Figure: the nine out-code regions around the clip window: 0000 inside, 0001 left, 0010 right, 0100 below, 1000 above, and the corner combinations 1001, 1010, 0101, 0110.]
Cohen-Sutherland Algorithm
If OC(A) | OC(B) = 0000 then accept ℓ, where | is the bitwise OR operator.
If OC(A) & OC(B) ≠ 0000 then reject ℓ, where & is the bitwise AND operator.
Otherwise:
If OC(A) = 0000 then swap A and B.
Find the rightmost bit i such that OCi(A) = 1.
Compute the intersection P of ℓ with the supporting line of W which defines bit i.
Let A := P, and update OC(A).
Repeat the above steps until ℓ is accepted or rejected.
Easily generalized to more general convex clip windows.
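A Python sketch of this loop (the out-code bits and the boundary intersections follow the slides; the function interface and names are our own):

```python
LEFT, RIGHT, BOTTOM, TOP = 1, 2, 4, 8   # out-code bits, as on the slides

def outcode(x, y, xmin, xmax, ymin, ymax):
    # Classify a point against the four supporting lines of the window.
    code = 0
    if x < xmin:
        code |= LEFT
    elif x > xmax:
        code |= RIGHT
    if y < ymin:
        code |= BOTTOM
    elif y > ymax:
        code |= TOP
    return code

def cohen_sutherland(x1, y1, x2, y2, xmin, xmax, ymin, ymax):
    # Returns the clipped segment (possibly with swapped end-points),
    # or None if the segment lies entirely outside the window.
    while True:
        oc1 = outcode(x1, y1, xmin, xmax, ymin, ymax)
        oc2 = outcode(x2, y2, xmin, xmax, ymin, ymax)
        if oc1 | oc2 == 0:
            return (x1, y1, x2, y2)          # trivial accept
        if oc1 & oc2 != 0:
            return None                      # trivial reject
        if oc1 == 0:                         # make (x1, y1) the outside point
            x1, y1, x2, y2 = x2, y2, x1, y1
            oc1 = outcode(x1, y1, xmin, xmax, ymin, ymax)
        # Intersect with the supporting line of the rightmost set bit.
        if oc1 & LEFT:
            y1 = y1 + (y2 - y1) * (xmin - x1) / (x2 - x1); x1 = xmin
        elif oc1 & RIGHT:
            y1 = y1 + (y2 - y1) * (xmax - x1) / (x2 - x1); x1 = xmax
        elif oc1 & BOTTOM:
            x1 = x1 + (x2 - x1) * (ymin - y1) / (y2 - y1); y1 = ymin
        else:
            x1 = x1 + (x2 - x1) * (ymax - y1) / (y2 - y1); y1 = ymax
```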
Cyrus-Beck-Liang-Barsky Algorithm
We consider the standard parameterization p(t) := (1 − t)A + tB, with t ∈ [0, 1].

Goal: Compute tE, tL ∈ [0, 1] such that W ∩ ℓ = p(tE)p(tL).
We pick an arbitrary point Wi on clip edge Ei, and denote the outwards normal vector of Ei by ni.
Where does p(t) lie relative to Ei?
⟨ni, p(t) − wi⟩ = 0 ⇔ p(t) on Ei,
⟨ni, p(t) − wi⟩ > 0 ⇔ p(t) outside,
⟨ni, p(t) − wi⟩ < 0 ⇔ p(t) inside.
[Figure: clip edge Ei through Wi with outward normal ni; the sign of ⟨ni, p(t) − wi⟩ separates the outside from the possibly inside region as ℓ runs from A to B.]
Cyrus-Beck-Liang-Barsky Algorithm
Thus, the equation for the intersection of ℓ with Ei is

⟨ni, p(ti) − wi⟩ = 0.

Suppose that d := b − a ≠ 0. We get

⟨ni, a − wi⟩ + ti ⟨ni, b − a⟩ = 0,

that is,

ti = ⟨ni, a − wi⟩ / (−⟨ni, d⟩)   if ⟨ni, d⟩ ≠ 0.

If ⟨ni, d⟩ = 0 then Ei ∥ ℓ.
Cyrus-Beck-Liang-Barsky Algorithm
We define the predicates "potentially entering" (PE) and "potentially leaving" (PL) for each intersection:

(PL)i ⇔ ⟨ni, d⟩ > 0,
(PE)i ⇔ ⟨ni, d⟩ < 0.
[Figure: four sample segments from A to B relative to the clip window, with each intersection with a window edge classified as PE or PL.]
Cyrus-Beck-Liang-Barsky Algorithm
This yields the parameters tE, tL as

tE := max ({ti : (PE)i} ∪ {0}),
tL := min ({ti : (PL)i} ∪ {1}),

where

ti := ⟨ni, a − wi⟩ / (−⟨ni, d⟩).

If the clip edge Ei is parallel to a coordinate axis then the term for ti is much simpler. E.g., for Ei ≡ (x = xmin) we get

⟨ni, d⟩ = ax − bx   and   ti = (ax − xmin)/(ax − bx).
Like the Cohen-Sutherland algorithm, the CBLB algorithm can easily be generalized to general convex clip windows.
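Specialized to the axis-parallel window W, the whole computation fits into a few lines (a sketch; each (p, q) pair encodes p = ⟨ni, d⟩ and q = ⟨ni, wi − a⟩ for one window edge, so that ti = q/p):

```python
def liang_barsky(a, b, xmin, xmax, ymin, ymax):
    # Clip the segment from a to b (2-tuples) against the window;
    # returns (tE, tL) with tE <= tL, or None if the segment is rejected.
    (ax, ay), (bx, by) = a, b
    dx, dy = bx - ax, by - ay
    tE, tL = 0.0, 1.0
    for p, q in ((-dx, ax - xmin), (dx, xmax - ax),
                 (-dy, ay - ymin), (dy, ymax - ay)):
        if p == 0:                 # segment parallel to this edge
            if q < 0:
                return None        # and outside of it: reject
        else:
            t = q / p
            if p < 0:              # potentially entering
                tE = max(tE, t)
            else:                  # potentially leaving
                tL = min(tL, t)
    return (tE, tL) if tE <= tL else None
```

The clipped segment is then p(tE)p(tL); a result of (0.0, 1.0) means that the segment is accepted unchanged.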
Polygon Clipping
Clipping the edges of a polygon individually is not good enough.
Note that the clipped polygon may be disconnected!
Polygon Clipping
Two main options, and none of them is ideal . . .
Sutherland-Hodgeman Algorithm
A conventional line-clipping algorithm would clip one edge of the polygon with respect to the entire clip window, and would loop over all edges of the polygon.
The Sutherland-Hodgeman Algorithm clips the entire polygon with respect to one clip edge of the clip window, and loops over all clip edges.
Sutherland-Hodgeman Algorithm
We distinguish four cases, depending on how the start-point A and end-point B of an edge E are located relative to the clip edge.
[Figure: the four cases for an edge AB relative to the clip edge, with B′ denoting the intersection of AB with the clip edge.]
Sutherland-Hodgeman Algorithm
Initialize the output list L for the clipped polygon: L := ∅.

For all clip edges of the clip window:
1. Pick an edge E of the polygon, and let E0 := E.
2. Let A be its start-point and B be its end-point.
3. Distinguish the following four cases:
   If both A and B are on the interior side of the clip edge, then add B to L.
   If A is on the interior side and B is on the exterior side of the clip edge, then add the intersection B′ of AB with the clip edge to L.
   If A is on the exterior side and B is on the interior side of the clip edge, then add both the intersection B′ and the point B to L.
   If both A and B are on the exterior side of the clip edge, then no output is generated.
4. Let E be the polygon edge starting in B.
5. If E0 ≠ E then goto Step 2.
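One pass of this scheme, clipping against a single clip edge, can be sketched as follows (our own interface: inside(P) tests the interior side of the current clip edge, and intersect(A, B) returns B′):

```python
def clip_one_edge(vertices, inside, intersect):
    # One Sutherland-Hodgeman pass: clip the polygon (a list of
    # vertices in order) against a single clip edge; loop this over
    # all clip edges of a convex window.
    out = []
    for i, b in enumerate(vertices):
        a = vertices[i - 1]               # polygon edge from a to b
        if inside(b):
            if not inside(a):             # entering: crossing point first
                out.append(intersect(a, b))
            out.append(b)
        elif inside(a):                   # leaving: only the crossing point
            out.append(intersect(a, b))
    return out
```

For example, clipping the square (0,0), (4,0), (4,4), (0,4) against the halfplane x ≤ 2 yields (0,0), (2,0), (2,4), (0,4).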
Clipping in 3D
The Cohen-Sutherland Algorithm and the Cyrus-Beck-Liang-Barsky Algorithm are straightforward to extend to 3D.
E.g., for the Cohen-Sutherland Algorithm it suffices to maintain a six-bit out code,where Bit 5 is set if pz < zmin, and Bit 6 is set if pz > zmax .
Invisible Polygons
A point P1 is occluded by a point P2 if
P1 and P2 project onto the same point in the view plane, and
P2 is closer to the viewer than P1.

Why can a polygon be invisible?
The polygon is outside of the view frustum.
The polygon is on the back side (with respect to the viewer) of an opaque object.
The polygon is occluded by some other polygon(s) that are closer to the viewer.
For the sake of efficiency, we want to know whether a polygon is outside of the view frustum: view frustum culling!
Also for the sake of efficiency, we want to know whether a polygon is occluded by some other polygon(s): occlusion culling!
For correctness reasons, we need to know when a polygon is invisible: hidden-surface removal (HSR) and visible-surface determination!
Invisible Polygons
In the subsequent slides on hidden-surface removal, we will always assume that the canonical orthographic projection from z = −∞ onto the xy-plane is used.
Recall that a coordinate transformation and a perspective normalization suffice to transform any projection to this canonical set-up.
Thus, P1 occludes P2 exactly if x1 = x2 and y1 = y2 and z1 < z2.
For simple rendering of complex scenes, hidden-surface removal accounts for a substantial portion of the total rendering time. Thus, efficiency is a key issue!
A major step on the path to efficiency is to avoid sending many polygons to the HSR algorithm that are "obviously" not visible, and this is true even for GPU-based HSR!
Back-Face Culling
On a closed manifold surface, polygons whose exterior normal vectors point away from the viewer are always invisible.
The process of removing all back-facing polygons is called back-face culling or back-face removal.
Note that back-face culling does, in general, not solve the HSR problem for a non-convex object!
Also, note that back-face culling may not be applied if the surface is not a closed manifold!
Back-Face Culling
Consider the supporting plane of a polygon. To check whether the polygon is back-facing under a parallel projection, it suffices to check whether the view point is in the "inside half-space".
For a general parallel projection, this test is quickly performed by computing the sign of the dot product between the normal vector n and the viewing direction.
For the canonical orthographic projection, this test boils down to checking whether nz > 0.
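In code, the test for a parallel projection is a single dot product (a sketch; vectors as 3-tuples, with view_dir pointing from the viewer into the scene):

```python
def is_back_facing(normal, view_dir):
    # Back-facing iff the exterior normal points away from the viewer,
    # i.e., iff it has a positive dot product with the view direction.
    return sum(n * d for n, d in zip(normal, view_dir)) > 0
```

For the canonical orthographic projection view_dir = (0, 0, 1), and the test reduces to nz > 0.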
Object Space versus Image Space
Object-Space HSR Algorithm: For each polygon σ in the scene:
1. Compute the visible portion σ′ of σ analytically.
2. Project σ′ onto the image plane and scan-convert the projection.
3. Use σ to assign the appropriate color to each pixel corresponding to σ′.

Image-Space HSR Algorithm: For each pixel π in the image plane:
1. Find the polygon σ closest to the view point that is pierced by the projector through π.
2. Use σ to assign the appropriate color to this pixel.
Several algorithms employ a hybrid strategy: the polygons are re-arranged and subdivided in object space, until drawing them in proper order suffices to solve the visibility problem in image space. E.g., Binary Space Partitioning.
Even in the presence of hardware support (mostly for image-space algorithms), there still is a need for object-space algorithms!
Object Space Versus Image Space
A lower bound on the worst-case complexity of object-space algorithms is Ω(n²) for a scene consisting of n polygons.
A lower bound on the worst-case complexity of image-space algorithms is Ω(n · p), where p denotes the resolution of the image. (Typically, p ≈ 10⁶.)
Note that both Ω-terms are not very realistic for practical applications, and that they hide multiplicative constants!
Painter’s Algorithm
For objects that can be placed in an occlusion-compatible order, the painter's algorithm combined with depth sorting is sufficient:
1. Sort all of the potentially visible polygons by decreasing z-coordinates.
2. Draw them in this order.
3. Polygons in front of other polygons will be drawn later, so they will be visible, and they will occlude the polygons behind them.
The painter's algorithm fails for polygons that interpenetrate each other, or in the case of cyclic overlaps.
Hidden-Surface Removal for Octrees
Recursively visit and draw the cells of an octree in the proper order.
[Figure: recursive back-to-front traversal order of the cells of a quadtree (2D) and an octree (3D) for a sample viewing direction.]
Bounding-Box Test
The (axis-aligned) bounding box (AABB) of an object is the smallest axis-aligned rectangle (in 2D) or box (in 3D) that encloses the object.
If the bounding boxes do not intersect then the objects do not intersect.
If two objects intersect then their bounding boxes intersect.
No conclusion is possible if the bounding boxes intersect each other.
Obvious problem: a lot of bounding boxes might intersect even if their objects do not intersect.
This idea can be generalized to other bounding volumes, like oriented bounding boxes (OBB), discrete-orientation polytopes (k-dop), spheres, etc.
Binary Space Partitioning
A binary space partition tree (BSP tree) subdivides 3D space along the supporting planes of particular polygons [Fuchs&Kedem&Naylor 1980].
[Figure: three polygons A, B, C and the corresponding BSP tree with A at the root and B, C as children.]
Appropriately traversing this tree enumerates the polygons from back to front.
Analogously for 2D and edges of polygons.
Constructing BSP Trees
BSP trees are constructed much like Quicksort works.
Suppose that all polygons are triangular.
We use the supporting plane of a (randomly selected) triangle to split the space into triangles on one side and triangles on the other side.
Triangles must be split (and re-triangulated) if they cross the plane.
Constructing BSP Trees
This process continues recursively until every cell contains at most one triangle, or until the depth of the tree exceeds a threshold.
Care has to be taken to keep the vertices of new triangles in consistent order.
Ideally, a BSP tree should be small in size and balanced!
Note that naive splitting of n input triangles (that do not intersect) may cause O(n³) output cells in the worst case.
Paterson & Yao (1990): O(n²) size can be achieved.
Practical experience: in 2D and in 3D, the space complexity tends to be in o(n log n), based on a randomized approach.
HSR based on BSP trees is far too slow when compared to GPU-based approaches.
But BSP trees are a very versatile structure with their merits!
Sample BSP Tree in 2D
Left cells/children are "left of" the splitting line segment; right cells/children are "right of" the splitting line segment.
All resulting cells I, II, ..., VII are convex.
BSP tree: each node stores all line segments that are collinear with the splitting line segment.
[Figure: six line segments A–F splitting the plane into convex cells I–VII, together with the corresponding BSP tree.]
Sample BSP Tree in 2D: Tree Traversal
Locate the viewpoint in the BSP subdivision.
At each splitting plane, first draw the stuff on the farther side, then the polygon that defines the splitting plane, and finally the stuff on the nearer side.
Moving the viewpoint does not render the BSP tree invalid, but merely changes the traversal order.
[Figure: the BSP subdivision and tree from the previous slide; for a sample viewpoint the traversal visits VI, E, V, F, I, B, III, D, II, A, VII, C, IV in back-to-front order.]
Depth-Buffer Algorithm
First described by Strasser and, independently, by Catmull in 1974.
Idea: Determine visibility independently for each pixel.
The depth-buffer algorithm, aka z-buffer algorithm, makes use of two buffers:
Frame/color buffer: F[i, j] contains the color of pixel (i, j).
Z buffer: Z[i, j] contains the z-coordinate of the object visible at pixel (i, j).
Algorithm Depth-Buffer
1.  ∀i, j: Z[i, j] ← +∞.   (∗ initialization of z buffer ∗)
2.  ∀i, j: F[i, j] ← background color.   (∗ initialization of frame buffer ∗)
3.  for each polygon π do
4.      for each pixel (i, j) in projection of π do
5.          z ← z-coordinate of point P of π that projects onto pixel (i, j);
6.          if z ≤ Z[i, j] then
7.              Z[i, j] ← z;
8.              compute color at P and assign it to F[i, j];
Interpolation can be used to speed up the computation of the z-values.
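The algorithm is easy to mirror in Python. In the sketch below a "polygon" is simply a list of covered pixels together with a depth function and a color, a made-up scene format chosen for illustration only:

```python
import math

def depth_buffer(width, height, polygons, background="bg"):
    # polygons: iterable of (pixels, depth, color), where pixels is a
    # list of (i, j) tuples and depth(i, j) yields the z-value there.
    Z = [[math.inf] * width for _ in range(height)]    # z buffer
    F = [[background] * width for _ in range(height)]  # frame buffer
    for pixels, depth, color in polygons:
        for (i, j) in pixels:
            z = depth(i, j)
            if z <= Z[j][i]:      # at least as close as what is stored
                Z[j][i] = z
                F[j][i] = color
    return F
```

Note that the result does not depend on the order in which the polygons are processed.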
Depth-Buffer Algorithm: Pros and Cons
Pros:
Simple, easy to implement, and ideally suited for hardware implementation.
Primitives can be processed in arbitrary order.
Interpenetration of primitives poses no problem.

Cons:
The basic z-buffer algorithm does not handle translucency.
Most GPUs offer only 32-bit floating-point precision; 64-bit precision is not yet mandatory!
A perspective-to-orthogonal transformation tends to reduce z precision.
Aliasing problems arise if different polygons share the same pixel.
Co-planar primitives are handled unpredictably ("z-buffer fighting").
Depth-Buffer Algorithm: z-Buffer Fighting
[Figure: rendering artifacts caused by z-buffer fighting between co-planar primitives.]
Depth-Buffer Algorithm: Handling Translucent Objects
Translucent objects require a modification of the standard z-buffer:
1. Draw all the opaque objects first, using the standard z-buffer.
2. Draw translucent objects with blending:
   Translucent objects behind an opaque object do not have any effect.
   Translucent objects in front of all opaque objects do not change the z-value.
   Blend colors appropriately.
E.g., "alpha blending" as utilized by OpenGL: (R, G, B) → (R, G, B, A), where smaller values of A denote higher translucency. A := 0 means transparent and A := 1 means opaque.
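The blending step for one pixel can be sketched as follows (the common "over" blend corresponding to OpenGL's GL_SRC_ALPHA/GL_ONE_MINUS_SRC_ALPHA factors; a simplified model that assumes an opaque destination):

```python
def blend_over(src_rgba, dst_rgb):
    # Composite a translucent source color over an opaque destination:
    # out = A * src + (1 - A) * dst, applied per channel.
    r, g, b, a = src_rgba
    return tuple(a * s + (1.0 - a) * d
                 for s, d in zip((r, g, b), dst_rgb))
```

With A = 1 the source replaces the destination, with A = 0 the destination shines through unchanged.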
Modern GPUs use several other specialized buffers to simulate other special effects, e.g., an accumulation buffer for simulating multiple exposures, motion blur or depth of field.
Light Interacting with an Object
[Figure: light interacting with an object: diffuse and specular reflection at the surface, absorption, internal reflection, transmitted light, and scattering and emission (fluorescence).]
Surface Characteristics and Types of Reflection
[Figure: incident light and reflected directions for a perfectly matt surface (diffuse reflection), a slightly specular (shiny) surface, a highly specular (shiny) surface, and a perfect mirror (specular reflection).]
Perfectly diffuse or perfectly specular surfaces hardly occur in the real world.
Modeling Reflection
A reflectance spectrum indicates, for a particular angle of incidence, the percentage of the incoming light reflected at each wavelength by the surface.
To find the color of the light leaving the surface, we multiply the amount ofincoming light by the percentage reflectance of the surface at each wavelength.Generalization: Bidirectional reflectance distribution function (BRDF).
[Figure: two reflectance spectra over 400–800 nm, showing percentage reflectance (0–100%) of a “greenish” surface and of copper at normal incidence.]
Basics of Illumination Models
An illumination model defines the nature of the light emanating from a point, e.g.,the geometry of its intensity distribution, etc.
An illumination model can be expressed by an illumination equation in variablesassociated with the point on the object being shaded.
An illumination equation can be interpreted as an equation for intensities – e.g.,in a grey-scale image – as well as an equation for colors, for example RGB.
This makes the illumination equation a vector equation, which must be evaluatedfor the red, green, and blue component separately.
Obvious trade-off between the accuracy and complexity of a physics-basedmodel and the convenience and speed of a purely heuristic model.
Phong’s Illumination Model
The standard illumination model in computer graphics that compromises betweenacceptable results and processing cost is the Phong model [Phong 1975].
This model handles reflected light in terms of a diffuse and specular componenttogether with an ambient term:
I = Ia + Id + Is
I ... intensity at a point,
Ia ... ambient part of I,
Id ... diffuse part of I,
Is ... specular part of I.
Diffuse Reflection
Diffuse reflection: Light is scattered uniformly in all directions from a point on thesurface of the object.
The amount of reflected light seen by the viewer does not depend on the viewer’sposition. Such surfaces are dull.
Diffuse Reflection
The intensity of diffusely reflected light is given by Lambert’s Cosine Law:
Id = Il · kd · cos θ   (0 ≤ θ ≤ π/2)
where
Il ... intensity of the light source,
θ ... angle between surface normal N and vector L (pointing to the light),
kd ... diffuse-reflection coefficient, ranging between 0 and 1.
The value kd depends on the material and the wavelength of the incident light.
Diffuse Reflection
For simplicity, all vectors are normalized!
Since cos θ = 〈L,N〉, the illumination equation Id = Il · kd · cos θ can be rewrittenusing the dot product:
Id = Il · kd · 〈L,N〉
If a point light is sufficiently distant from the objects:
- It makes essentially the same angle with all surfaces sharing the same surface normal.
- In this case, L is a constant for the light source.
If L does not vary then two parallel surfaces with identical surface normal will beshaded the same, no matter how different their distances from the light source orviewer are.
This effect can be mitigated by using a light-source attenuation factor.(Problematic in practice, though.)
Perfect Lambertian diffusers do not exist in nature.
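Lambert's law is easy to put into code. A minimal Python sketch for unit vectors N and L (the clamping of negative dot products to zero, i.e., light arriving from behind the surface, is an implementation detail not spelled out on the slides):

```python
def diffuse_intensity(Il, kd, N, L):
    """Lambert's cosine law: Id = Il * kd * cos(theta), with
    cos(theta) = <L, N> for unit vectors; the clamp handles
    theta > pi/2 (light from behind the surface)."""
    cos_theta = sum(n * l for n, l in zip(N, L))
    return Il * kd * max(0.0, cos_theta)

# Light shining straight down onto a horizontal surface (theta = 0):
print(diffuse_intensity(1.0, 0.8, (0, 0, 1), (0, 0, 1)))  # 0.8
# Grazing incidence (theta = 90 degrees) yields no diffuse light:
print(diffuse_intensity(1.0, 0.8, (0, 0, 1), (1, 0, 0)))  # 0.0
```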
Ambient Light
Using the diffuse illumination model, any surface visible to the viewer but not directly lit by the light (since 〈N, L〉 = 0) is ... black!
This does not match reality!
Ambient Light
Ambient light is the result of multiple reflections from walls and objects, and it isincident on a surface from all directions.
It is modeled as a constant term Ial for a particular object using a constantambient-reflection coefficient ka ranging between 0 and 1:
Ia = Ial · ka
The ambient-reflection coefficient is a material property.
Caveat
The ambient-light term is an empirical convenience and does not correspond directlyto any physical property of real materials.
Specular Reflection
The law of reflection tells us that
- the reflected ray remains within the plane of incidence, and the angle of reflection equals the angle of incidence (the plane of incidence contains the incident ray and the surface normal at the point of incidence);
- the reflected ray leaves a glossy surface at angle θ, where θ is the angle of incidence.
Specular Reflection
However, most surfaces – and their reflectance properties – are somewhere inbetween a perfect diffuser and perfect glossiness, i.e., a perfect mirror.
In practice, specular reflection is not perfect and reflected light can be seen forviewing directions close to the direction of the reflected beam.
Thus, the degree of specular reflection seen by a viewer depends on the viewpoint.
The area over which specular reflection is seen for a given viewpoint is commonly referred to as a highlight.
The color of the specularly reflected light is different from the color of the diffuselyreflected light.
In simple models the specular component is assumed to be the color of the lightsource.
Specular Reflection
For a perfect mirror, a highlight is only visible if φ – the angle between the viewingdirection V and the reflection vector R – is zero.
In practice, however, specular reflection is seen over a range of φ that dependson the glossiness of the surface.
Coefficient of Glossiness
Phong modeled this behavior empirically by a term cos^n φ:

Is = Il · ks · cos^n φ   (0 ≤ ks ≤ 1; 0 ≤ n ≤ ∞)
where ks is the specular reflection coefficient, usually taken to be amaterial-dependent constant.
Note that large values of n are required for a tight highlight to be obtained: Formetals values between 100 and 200 are common, while values between 5 and10 will result in a plastic appearance.
For a perfect mirror surface, this coefficient of glossiness is infinite.
Specular Reflection
The expense of the specular illumination equation can be reduced considerablyby making some geometric approximations.
Since the vector R is expensive to compute, in 1977 Blinn suggested to use thevector H instead: Blinn-Phong reflection model.
H is the mean vector of L and V . The specular term then becomes a function of〈N,H〉 rather than 〈R,V 〉.
Specular Reflection
Thus,
Is = Il · ks · (〈N,H〉)^n.
As the angle between R and V is twice the angle between N and H, the use of Nand H spreads the highlight over a greater area.
Like the diffuse term this simple model of specular reflection is a local model.
Light reflected onto the surface that originates from specular reflections in otherobjects is not considered!
Specular Reflection
Summarizing, for colored objects the easiest way to model the specular highlights is to use the color of the light source, and to control the color of the objects by setting the diffuse reflection coefficients appropriately:

Ir = Ia · kar + Ii [kdr〈L,N〉 + ks(〈N,H〉)^n]
Ig = Ia · kag + Ii [kdg〈L,N〉 + ks(〈N,H〉)^n]
Ib = Ia · kab + Ii [kdb〈L,N〉 + ks(〈N,H〉)^n]

Combining these three equations in a single vector expression yields:

I(r,g,b) = Ia · ka(r,g,b) + Ii [kd(r,g,b)〈L,N〉 + ks(〈N,H〉)^n]
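Putting the ambient, diffuse, and specular terms together, the per-channel evaluation can be sketched in Python as follows. This is a minimal sketch of the Blinn-Phong variant, not the authoritative implementation; the clamping of negative dot products to zero is an assumption not spelled out on the slides:

```python
def normalize(v):
    m = sum(c * c for c in v) ** 0.5
    return tuple(c / m for c in v)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def blinn_phong(Ia, Ii, ka, kd, ks, n, N, L, V):
    """Evaluate I = Ia*ka + Ii*(kd*<L,N> + ks*<N,H>^n) per color channel.
    ka and kd are RGB triples; ks and the glossiness n are shared by
    all channels, so the highlight takes the color of the light."""
    N, L, V = normalize(N), normalize(L), normalize(V)
    H = normalize(tuple(l + v for l, v in zip(L, V)))  # halfway vector
    diff = max(0.0, dot(L, N))
    spec = max(0.0, dot(N, H)) ** n
    return tuple(Ia * ka_c + Ii * (kd_c * diff + ks * spec)
                 for ka_c, kd_c in zip(ka, kd))

# Light and viewer directly above the surface: maximal diffuse + specular.
print(blinn_phong(1.0, 1.0, (0.1,) * 3, (0.5,) * 3, 0.4, 10,
                  (0, 0, 1), (0, 0, 1), (0, 0, 1)))
```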
Discussion of Phong’s Illumination Model
- Light sources are assumed to be point sources. Any intensity distribution of the light source is ignored.
- All geometry except the surface normal is ignored and no distance information is considered. That is, light source(s) and viewer are (assumed to be) located at infinity.
- The diffuse and specular terms are modeled as local components.
- No reflections of other objects in the surface of the object being rendered are considered.
- Phong’s model lacks shadows! Lack of shadows means not only that objects do not cast a shadow on other objects, but also that self-shadowing within an object is omitted: concavities in an object that are hidden from the light source are erroneously shaded simply on the basis of their surface normal.
- An empirical model is used to simulate the decrease of the specular term around the reflection vector, modeling the glossiness of the surface.
- The color of the specular reflection is assumed to be that of the light source. That is, for white light, highlights are rendered white regardless of the material.
- Phong’s illumination model (or some variant thereof) is in wide-spread use due to its apparent main advantage: its simplicity is unrivaled, and it produces acceptable results for many applications.
Flat Shading
The simplest shading model for a polygon is flat shading, also known as facetedshading or constant shading.
This approach applies an illumination model once for a polygon to determine asingle intensity value (or color value) that is then used to shade an entire polygon.
Basically, the illumination equation is sampled once for each polygon, and thisvalue is used across the polygon to reconstruct the polygon’s shade.
This approach is only valid if the following assumptions are true:
- The light source is at infinity, so 〈N, L〉 is constant across the polygon’s face.
- The viewer is at infinity, so 〈N, V〉 is constant across the polygon’s face.
- The polygon represents the actual surface being modeled, and is not an approximation to a curved surface.
Otherwise, constant shading produces a “faceted” appearance.
The simple solution of using a finer surface mesh turns out to be surprisinglyineffective: The perceived difference in shading between adjacent facets isaccentuated by the Mach band effect (Mach ≈ 1860).
Flat Shading: Mach Band Effect
Mach banding is caused by lateral inhibition of the receptors in the eye: Themore light a receptor receives, the more that receptor blocks the response of thereceptors adjacent to it.
That is, the human visual systemperforms some form of edgeenhancement by exaggerating theintensity change at any edge wherethere is a discontinuity of intensity.
As a result, at the border between two facets the dark facet appears even darker and the light facet appears even lighter.
Gouraud Shading
Gouraud shading [Gouraud 1971], also called intensity interpolation shading,extends the concept of interpolated shading applied to individual polygons byinterpolating polygon vertex illumination values that take into account the surfacebeing approximated.
The technique first calculates the intensity at each vertex of the polygon byapplying an illumination model.
These vertex intensities are afterwards interpolated over the polygon.
The normal vector N used in these equations is the so-called vertex normal: It iscalculated as the average of the normals of the polygons that share the vertex.
This is an important feature of the method, since the vertex normal is an approximation to the true normal of the surface (which the polygon represents) at that point.
Gouraud shading eliminates intensity discontinuities to some extent.
[Figure: the vertex normals are obtained by averaging the face normals of the polygons that approximate the original surface.]
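The vertex-normal computation can be sketched as follows; this is a minimal Python sketch using an unweighted average of the unit face normals (real systems may instead weight by face area or incident angle):

```python
def vertex_normal(face_normals):
    """Vertex normal = normalized average of the (unit) normals of all
    faces sharing the vertex."""
    s = [sum(c) for c in zip(*face_normals)]
    m = sum(c * c for c in s) ** 0.5
    return tuple(c / m for c in s)

# Two faces meeting at a right angle yield a 45-degree vertex normal:
print(vertex_normal([(0, 0, 1), (1, 0, 0)]))  # ≈ (0.707, 0.0, 0.707)
```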
Gouraud Shading
The interpolation process that calculates the intensity over a polygonal surfacecan then be integrated with a scan-conversion algorithm that evaluates thescreen position of the edges of a polygon from the vertex intensities, and theintensities along a scan line from these.
This yields the following bilinear interpolation scheme:
Ia = 1/(y1 − y2) · [I1(ys − y2) + I2(y1 − ys)]
Ib = 1/(y1 − y4) · [I1(ys − y4) + I4(y1 − ys)]
Is = 1/(xb − xa) · [Ia(xb − xs) + Ib(xs − xa)]
These equations can be implemented as incremental calculations.
Gouraud shading handles specular reflection correctly only if the highlight occurs in one of the polygon vertices.
[Figure: polygon with vertex intensities I1(x1,y1), I2(x2,y2), I3(x3,y3), I4(x4,y4); the scan line at ys intersects two edges at Ia(xa,ys) and Ib(xb,ys), with Is(xs,ys) in between.]
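The three interpolation equations translate directly into code. This is a sketch of the direct (non-incremental) form, using the symbols I1, I2, I4, y1, y2, y4, ys, xa, xb, xs from the equations:

```python
def gouraud_scanline(I1, y1, I2, y2, I4, y4, ys, xa, xb, xs):
    """Intensity Is at (xs, ys): interpolate along edge 1-2 to get Ia,
    along edge 1-4 to get Ib, then between (xa, Ia) and (xb, Ib)."""
    Ia = (I1 * (ys - y2) + I2 * (y1 - ys)) / (y1 - y2)
    Ib = (I1 * (ys - y4) + I4 * (y1 - ys)) / (y1 - y4)
    return (Ia * (xb - xs) + Ib * (xs - xa)) / (xb - xa)

# Halfway down both edges, the interior intensity is the mean of
# the top intensity (1.0) and the bottom intensities (0.0):
print(gouraud_scanline(1.0, 10.0, 0.0, 0.0, 0.0, 0.0, 5.0, 0.0, 10.0, 5.0))  # 0.5
```

A real scan-converter would turn the divisions into per-scan-line increments, as noted above.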
Phong Shading
Phong shading [B.T. Phong 1975], which is also known as normal-vectorinterpolation shading, makes use of the vertex normal vectors for bilinearinterpolation in the following steps:
1. Interpolation of the normal vectors along the edges between the vertices.
2. By sliding a horizontal scan line from, say, bottom to top, the normal vectors of the surface enclosed by the edges are interpolated.
3. Thus there exists a normal vector for each point of the polygon surface. This normal vector can then in turn be used for evaluating the illumination equation.
The interpolation of the normal vectors tends to “restore” curvature.
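A single interpolation step can be sketched as follows; the re-normalization is the part that is easy to forget, since the linear blend of two unit vectors is shorter than unit length (a minimal Python sketch, not from the slides):

```python
def lerp_normal(Na, Nb, t):
    """Linearly interpolate between unit normals Na and Nb at
    parameter t in [0, 1], then re-normalize the result before
    it is fed into the illumination equation."""
    n = tuple((1 - t) * a + t * b for a, b in zip(Na, Nb))
    m = sum(c * c for c in n) ** 0.5
    return tuple(c / m for c in n)

# Halfway between two perpendicular normals, the raw blend (0.5, 0, 0.5)
# has length ~0.707; after normalization it is a unit vector again:
print(lerp_normal((0, 0, 1), (1, 0, 0), 0.5))  # ≈ (0.707, 0.0, 0.707)
```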
[Figure: normals interpolated between the vertex normals approximate the true normals of the original surface.]
Bilinear Interpolation of the Vertex Normals
With Phong shading, normal vectors are interpolated rather than the vertexintensities.
[Figure: vertex normals N1, N2, N3, N4; the current scan line yields interpolated normals Na and Nb along the edges and Ns in between.]
Characteristics of Phong Shading
With Phong shading, vector interpolation replaces intensity interpolation.
The normal vectors are interpolated rather than the vertex intensities.
Normal vectors of interior points are calculated incrementally by using bilinearinterpolation.
An individual intensity is evaluated for every pixel from the interpolated normals.
Specular reflection is handled by Phong shading.
Results of normal vector interpolation are in general superior to intensityinterpolation, because an approximation to the normal is used at each point.(This is true even without taking into account specular reflection.)
Phong shading tends to be much more expensive than Gouraud shading. In particular, the illumination equation has to be evaluated at every pixel.
To avoid the costs of normalizing the interpolated normals, approximationtechniques (such as Taylor series expansion [Bishop&Weimer 1986]) may beused.
Sample Shading: Ellipsoid
The images show, from left to right, line drawing, flat shading, Gouraud shading,and Phong shading.
Sample Shading: Torus
Problems of Interpolated Shading
Silhouette edges: No matter how good a polygonal approximation an interpolatedshading model offers to the actual shading of a curved surface, the silhouetteedges of the mesh are still clearly polygonal.
Orientation Dependencies: The results of interpolated shading are notindependent of the projected polygon’s orientation.
This problem can be mitigated by using a larger number of smaller polygons, orby decomposing the polygon into triangles and using barycentric coordinates forthe interpolation.
[Figure: the same polygon with vertex intensities 0 and 1 is shaded differently depending on its orientation relative to the scan lines: one orientation yields a dark midpoint (≈ 0), the other a light midpoint (≈ 1).]
Problems with Interpolated Shading
Perspective distortion: Anomalies can appear in animated sequences becausethe intensity/normal-vector interpolation is carried out in screen coordinates fromvertex-normals as calculated in world coordinates. This is not invariant withrespect to transformations, and may cause frame-to-frame disturbances inanimations.
Problems with shared vertices: Shading discontinuities can occur when twoadjacent polygons fail to share a vertex that lies along their common edge.
Thus, a vertex has to be shared by all adjacent areas. As a further improvement, such a vertex is connected to other vertices of the adjacent polygon.
[Figure: two adjacent polygons with vertex intensities 0 and 1 that do not share a vertex on their common edge: interpolation yields ≈ 0 on one side and ≈ 1 on the other side of the same point.]
Problems with Interpolated Shading
Unrepresentative surface normals: The process of averaging surface normals to provide vertex normals for the intensity calculation can cause errors that smooth out corrugations, resulting in a visually flat surface.
Aliasing and Anti-Aliasing
Aliasing is the collective term for any form of visual artifact caused by mapping acontinuum to a discrete set of samples.
The aliasing problem thoroughly permeates computer graphics. Its most familiarmanifestations are
- spatial aliasing,
- temporal aliasing.
Sample spatial aliasing: In the figure, the first letter suffers from aliasing andlooks coarse compared to the second letter.
Anti-aliasing is one of the most important classes of techniques for makinggraphics visually pleasing and text easy to read.
It is a way of fooling the eye into seeing straight lines, smooth curves, andsmooth motions where there are none.
Spatial Aliasing
Spatial aliasing is due to the discrete nature of pixels on a monitor, and results insilhouette edges that do not look smooth: “jaggies” and “stair-casing”.
A silhouette edge is the boundary of a polygon, or of any surface unit that exhibits a high contrast over its background. (In general, contrast means light and dark areas of the same color.)
A long thin object may break up depending on its position with respect to thepixel array.
Another aliasing artifact occurs when small objects, whose spatial extent is lessthan the area of a pixel, are rendered or not depending on whether they areintersected by a sample point.
Temporal Aliasing
New problems may occur when still images are shown in an animated sequence:
- “Crawling” edges (i.e., moving jaggies);
- “Scintillating” objects (i.e., objects (dis-)appearing during a move).
A slight change in the position of a line in world coordinates can cause a huge“jump” in the position of the digitized line in screen coordinates, i.e., a “crawling”of the pixels that represent the line.
Such changes can be very distracting and are intolerable in some applications(e.g., flight simulators).
Temporal Aliasing: Scintillating and Jumping
[Figure: a polygon moving across the screen in successive frames (Time = 0, 1, 2); a row of pixels suddenly “pops” on when the moving edge covers the pixel centers.]
Temporal Aliasing: Spinning Wheel
In the third row, the wheel is spinning at 5 revolutions per second, but appears tobe spinning backwards at 1 revolution per second. Thus, the fast speed isaliasing as a slower speed after sampling.
Time t:             0    1/6    2/6    3/6    4/6    5/6    1
No. rev. (1 rev/s): 0    1/6    2/6    3/6    4/6    5/6    1
No. rev. (3 rev/s): 0    1/2    1      1 1/2  2      2 1/2  3
No. rev. (5 rev/s): 0    5/6    1 4/6  2 3/6  3 2/6  4 1/6  5
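The apparent rotation rate can be computed by folding the true rate into the band around the nearest multiple of the sampling rate; this small Python sketch is not from the slides, but reproduces the example above:

```python
def apparent_rate(rev_per_s, frames_per_s):
    """Rotation rate perceived after sampling at frames_per_s:
    the true rate folded into the range (-fs/2, fs/2] around the
    nearest multiple of the sampling rate fs."""
    fs = frames_per_s
    return rev_per_s - fs * round(rev_per_s / fs)

# Sampled at 6 frames/s, a wheel at 5 rev/s seems to spin backwards:
print(apparent_rate(5, 6))  # -1
# A wheel at 1 rev/s is below the Nyquist rate and is seen correctly:
print(apparent_rate(1, 6))  # 1
```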
Nyquist-Shannon Sampling Theorem
Nyquist-Shannon sampling theorem (Dt.: Abtasttheorem)
The Nyquist-Shannon sampling theorem tells us that a periodic signal can be properly reconstructed from equally-spaced samples if the sampling rate is greater than twice the frequency of the highest-frequency component in its spectrum. This lower bound on the sampling rate is known as the Nyquist rate.
[Figure: a function f(x) to be sampled at equally-spaced sample points; if the sampling interval is too coarse, reconstruction yields an aliased sine wave of lower frequency.]
The phenomenon of high frequencies masquerading as low frequencies in thereconstructed signal is aliasing.
Anti-Aliasing: Area Sampling
Area sampling (aka prefiltering), which is one of the more prominent anti-aliasingmethods, attempts to assign an intensity to a pixel that depends on thepercentages of the areas that are covered by the objects.
The actual pixel intensity is obtained as a weighted average of the objectintensities, by using the overlap areas as weights.
[Figure: a primitive overlapping a pixel grid, with per-pixel coverage:]

pixel  coverage (intensity weight)
(1,0)  10%
(1,1)  20%
(2,1)  80%
(3,1)  40%
Despite several advances, prefiltering is rarely used since it tends to becomputationally expensive.
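The weighted average itself is straightforward; a minimal Python sketch for grayscale intensities (hypothetical coverage values, assuming the coverage fractions of the objects in a pixel sum to at most 1):

```python
def area_sample(coverages, background):
    """Pixel intensity = sum of object intensities weighted by the
    fraction of the pixel area each object covers; the uncovered
    remainder keeps the background intensity."""
    covered = sum(frac for frac, _ in coverages)
    value = sum(frac * inten for frac, inten in coverages)
    return value + (1.0 - covered) * background

# A pixel 80% covered by a white primitive over a black background:
print(area_sample([(0.8, 1.0)], 0.0))  # 0.8
```

The expensive part in practice is not this average but computing the exact coverage fractions, which is why prefiltering is rarely used.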
Anti-Aliasing: Supersampling
In supersampling (aka postfiltering) more than one sample per pixel is evaluated.
In practice, supersampled images are computed by applying standard image-generation techniques at an n-times increased resolution, and by obtaining the value of an actual pixel as the (weighted) average of its corresponding n^2 supersampled pixels.
Rather than combining samples with an unweighted average one might use a weighted filter: (a) box filter (unweighted average), (b) Bartlett filter (linearly weighted average), (c) Gaussian filter.
Anti-Aliasing: Supersampling
Note that n × n supersampling increases the number of samples and the image-generation time by a factor of n^2!
Supersampling works well with most computer graphics images and is easilyintegrated into a depth-buffer algorithm.
Main drawbacks:
- Non-adaptive supersampling does not work with images whose spectral energy does not fall off with increasing frequency. (Texture rendered in perspective is a common example of an image that does not exhibit a falling spectrum with increasing spatial frequency.)
- Blurring effects occur because information is integrated from a number of neighboring samples.
- The fixed, regular grid used for supersampling may create a variety of new aliasing artifacts in an image: humans tend to recognize regular patterns!
Variants: Adaptive supersampling and stochastic/jittered supersampling.
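The box-filter variant can be sketched in a few lines of Python; this is a minimal sketch assuming a grayscale image stored as a list of rows whose dimensions are multiples of n:

```python
def downsample(img, n):
    """Average each n-by-n block of the supersampled image
    (unweighted box filter) to produce one pixel of the final image."""
    h, w = len(img), len(img[0])
    return [[sum(img[y * n + dy][x * n + dx]
                 for dy in range(n) for dx in range(n)) / (n * n)
             for x in range(w // n)]
            for y in range(h // n)]

# A 2x supersampled pixel that is half covered averages to 0.5:
print(downsample([[1, 0], [1, 0]], 2))  # [[0.5]]
```

A Bartlett or Gaussian filter would replace the uniform 1/n^2 weights with position-dependent ones.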
Anti-Aliasing: Sample Images
[Image credit: SIGGRAPH Educator’s Slide Sets.]
Adding Surface Details
All the shading techniques dealt with so far produce uniform and smooth surfaces— in sharp contrast to real-world surfaces!
The simplest approach to add gross detail is to use so-called surface-detailpolygons.
Every surface-detail polygon is coplanar with its base polygon, and is marked inorder to exclude it from hidden-surface removal.
As details become finer and more intricate, explicit modeling with polygons orother geometric primitives becomes less feasible . . .
Catmull (1974) suggested, as an alternative, to map an image, either digitized or synthesized, onto a surface.
This approach is known as texture mapping (or pattern mapping).
Ed Catmull
Catmull is the recipient of four Academy Awards (1993, 1996, 2001, 2008); he is aformer president of Pixar and Walt Disney Animation Studios.
Adding Surface Details
Even rudimentary textures make an image much more pleasing and convey additional information! (Both images shown below are based on the same number of polygons.)
[Image credit: SIGGRAPH Educator’s Slide Sets.]
Dimensionality of Texture Space
The process of mapping a texture onto an object is called texture mapping.
The texture space can be one-dimensional, two-dimensional, orthree-dimensional.
[Figure: 1D, 2D, and 3D texture spaces T(u), T(u,v), T(u,v,w) are mapped to object space (x,y,z) and then to screen space (x,y).]
For the sequel, except for the slides on “solid texturing”, we will focus on a 2Dtexture space.
Texture Map
The 2D image that is mapped onto an object is called a texture, and its individualelements are often referred to as texels.
A one-to-one mapping between pixels and texels need not exist!
[Figure: a block of texels in the texture domain mapping onto a single pixel in the screen domain.]
Texture Mapping Caveats
A simple-minded approach to texture mapping assigns texture coordinates to vertex coordinates and uses a sweep line (scan line) for the interpolation of the texture coordinates.
This approach may result in horrible anomalies, e.g., the “bent” checkerboard.
An improved approach performs the interpolation in texture and screen space inparallel.
This approach avoids bent lines, but still gives way to incorrectly spaced lines,i.e., to incorrect perspective views.
Texture Mapping Caveats
The bent checkerboard is caused by the scan-line interpolation (in image space)of the texture coordinates assigned to the vertices of the quadrilateral (in objectspace).
[Figure: lines of constant u and lines of constant v (u, v ∈ {0, 1} at the corners) on the quadrilateral, interpolated along the direction of the scan line.]
Correct Texture Mapping
The only applicable remedy is to determine the texture coordinates explicitly for every pixel via transformations from screen space to object space, and from object space to texture space.
[Figure: the four corners of a pixel on screen are mapped to the surface of the object (x, y) and then to the texture map (u, v).]
Texture Mapping Problems: Inverse Mapping
In general, the inverse mapping from the screen to the object surface and on to the texture space is highly non-trivial. This is particularly true for curved objects. Two approaches are commonly used:
- Unfolding the polygon mesh: The dimensionality is reduced from 3D to 2D by “unfolding” adjacent polygons, thus generating a flat polygonal mesh which is easier to project into the texture space.
- Two-part mapping: The 3D object surface is mapped to an intermediate surface (such as a cylinder), and then into the texture space.
For texturing a parametric surface its parametric representations can beemployed. Note, though, that parametric mapping does not take care ofperspective foreshortening!
[Image credit: SIGGRAPH Educator’s Slide Sets.]
Texture Mapping Problems: Aliasing
Even if the inverse mapping is performed accurately, serious aliasing errors mayoccur, and, in the worst case, may ruin the visual appearance of a textured object.
[Image credit: SIGGRAPH Educator’s Slide Sets.]
Texture Mapping Problems: Aliasing
Aliasing is due to the point sampling problem in texel space and due toperspective foreshortening.
Typically, all texels that correspond to the area of a pixel are summed by applying a weighting process (such as a box filter).
However, neighboring pixels need not map to neighboring texels, and information of the texture map may be lost.
This is particularly problematic if the texture contains thin curves or small details.
Also, simple box filtering is not sufficient in the case of perspective foreshortening because texels in the back have the same weight as texels in the front.
Such aliasing artifacts are particularly noticeable for textures that exhibit coherence or regularity (e.g., a checkerboard).
Correct filtering of non-linearly mapped areas would require space-variant filters, i.e., filters whose shape and area change as they move across the texture domain.
However, prefiltering, supersampling, or mipmapping go a long way towards reducing aliasing artifacts.
Texture Mapping Problems: Aliasing
Prefiltering and supersampling to cope with the point sampling problem in texture space.
[Image credit: SIGGRAPH Educator’s Slide Sets.]
Texture Mapping Problems: Fixed Resolution
A major problem of texturing is that the texture space has a fixed resolution!
Too small a resolution causes zooming to result in poor-quality images.
Too high a resolution causes blurring and other aliasing problems, and is asource for computational inefficiency.
What is a good resolution?
Even if the resolution of the texture space has been chosen judiciously, an extremely large number of texels may have to be weighted and summed just to texture a single pixel.
This phenomenon may arise when a large number of texels maps onto a surface, but the projection of the surface in screen space is small, either because of its depth or because of its orientation with respect to the viewing direction.
Mipmapping
Mipmapping computes and stores textures at diverse resolutions.
[Image credit: SIGGRAPH Educator’s Slide Sets.]
Mipmapping
To compute the color of a pixel, we determine the number of texels that correspond to the pixel in the original texture map, find the two texture maps closest in size, and average the pixel colors obtained from those two texture maps.
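A minimal sketch of this level selection, assuming a hypothetical mipmap pyramid in which level 0 is the full-resolution map and every further level halves the resolution:

```python
import math

def mip_levels(texels_per_pixel):
    """Map the texel footprint of a pixel to the two bracketing
    mipmap levels and the blend weight between them."""
    d = max(texels_per_pixel, 1.0)
    level = math.log2(d)             # level 0: full resolution
    lo = math.floor(level)
    return lo, lo + 1, level - lo    # lower level, higher level, weight

def trilinear(color_lo, color_hi, w):
    """Average the colors fetched from the two adjacent levels."""
    return tuple((1.0 - w) * a + w * b for a, b in zip(color_lo, color_hi))
```

A footprint of 6 texels per pixel, for instance, falls between levels 2 and 3 and is blended accordingly ("trilinear" mipmapping).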
[Image credit: SIGGRAPH Educator’s Slide Sets.]
Solid Texturing
For wood, granite, marble, and other natural materials, simply pasting a 2D texture onto the exterior of the object does not give the desired result: the texture does not appear seamless!
2D Textures Versus Solid Textures
3D (“solid”) textures allow the user to “carve” an object out of a solid block.
[Image credit: SIGGRAPH Educator’s Slide Sets.]
Solid Texturing
In order to avoid gaps and inconsistencies, and also to circumvent the mapping problem, a 3D texture space can be employed.
Imagine that a texture value exists for every point in 3D object space.
We may then assume that the texture coordinates of a point on a 3D surface are given by the identity mapping.
The color of the object is determined by the intersection of its surface with the predefined 3D texture field of the block.
This is equivalent to sculpting or carving an object out of a solid block of material.
A major advantage of the elimination of the mapping problem is that objects of arbitrary complexity can receive a texture on their surface in a "coherent" fashion.
In an animated sequence, the texture space would have to be transformed in the same way as the object space. (The "incorrect" approach of moving the object through the texture space can produce unique visual effects, though.)
A digitizing approach is not applicable to generate solid textures due to memory constraints.
Typically, 3D textures are generated procedurally.
A 3D texture can also be generated by sweeping a 2D texture through 3D space.
Sample Solid Texture: Marble
Marble textures are typically generated as procedural textures, based on noise functions, e.g., Perlin noise [1982] or Perlin simplex noise [2001].
[Image credit: SIGGRAPH Educator’s Slide Sets.]
Sample Solid Texture: Perlin Noise
- Developed by Ken Perlin for "TRON", with work started in 1981.
- Published in 1985 as a SIGGRAPH paper on "An Image Synthesizer".
- Computational costs: O(2^d) for the interpolation of the 2^d corners of a cell in R^d.
- Perlin noise is coherent, i.e., the noise function changes smoothly as one moves across the texture space.
Standard and fractional (aka hierarchical) Perlin noise. [Image credit: Wikipedia.]
Simplex noise:
- Replaces a d-dimensional cube by a simplex, i.e., by a d-dimensional "triangle".
- The complexity can be brought down to O(d^2).
- It has no directional artifacts. (At least none that are easily visible.)
Perlin was awarded an Academy Award for Technical Achievement in 1997, andreceived a patent for the use of implementations of simplex noise.
Sample Solid Texture: Perlin Noise
Initialization:
- Allocate a regular 3D grid within [0, 1]^3 or within [−1, 1]^3.
- Assign a random unit vector ("gradient vector") to each node of the grid.
Noise function:
- Determine the (cubic) grid cell that contains a texture point p := (u, v, w).
- For each node of that cell:
  1. Compute a direction vector from the node to p (aka "distance vector").
  2. Compute the dot product between this vector and the corresponding gradient vector.
- Compute an interpolation of the values obtained; the interpolation function has zero first derivative at the nodes of the grid.
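The steps above can be sketched in 2D (the 3D version is analogous); hashing a gradient per lattice node via a seeded RNG is an illustrative choice, not Perlin's original permutation table:

```python
import math
import random

def gradient(ix, iy, seed=0):
    """Pseudo-random unit gradient vector for lattice node (ix, iy)."""
    rng = random.Random((ix * 73856093) ^ (iy * 19349663) ^ seed)
    a = rng.uniform(0.0, 2.0 * math.pi)
    return math.cos(a), math.sin(a)

def fade(t):
    """Interpolation weight with zero first derivative at t = 0 and t = 1."""
    return t * t * (3.0 - 2.0 * t)

def noise2(x, y, seed=0):
    """Gradient noise at (x, y): per-corner dot products of gradient and
    distance vector, blended with the fade function."""
    ix, iy = math.floor(x), math.floor(y)
    fx, fy = x - ix, y - iy
    dot = {}
    for cx in (0, 1):
        for cy in (0, 1):
            gx, gy = gradient(ix + cx, iy + cy, seed)
            dot[(cx, cy)] = gx * (fx - cx) + gy * (fy - cy)
    u, v = fade(fx), fade(fy)
    bottom = dot[(0, 0)] + u * (dot[(1, 0)] - dot[(0, 0)])
    top = dot[(0, 1)] + u * (dot[(1, 1)] - dot[(0, 1)])
    return bottom + v * (top - bottom)
```

Note that the noise value vanishes at every lattice node, since the distance vector to the containing corner is zero there.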
[Image credit: https://en.wikipedia.org/wiki/Perlin_noise (CC BY-SA)]
Sample Solid Texture: Black&White
- Again assign random numbers to the vertices of a regular (coarse) grid.
- Obtain intermediate texture values by (bilinear) interpolation.
- If a texture value is above a threshold then paint the pixel white, else black.
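A sketch of this thresholded interpolation, in 2D for brevity:

```python
import random

def make_grid(nx, ny, seed=42):
    """Random values at the vertices of a coarse nx-by-ny grid."""
    rng = random.Random(seed)
    return [[rng.random() for _ in range(nx)] for _ in range(ny)]

def bilinear(grid, u, v):
    """Bilinearly interpolate the grid at coordinates (u, v) in [0, 1]."""
    ny, nx = len(grid), len(grid[0])
    x, y = u * (nx - 1), v * (ny - 1)
    ix, iy = min(int(x), nx - 2), min(int(y), ny - 2)
    fx, fy = x - ix, y - iy
    a = grid[iy][ix] * (1 - fx) + grid[iy][ix + 1] * fx
    b = grid[iy + 1][ix] * (1 - fx) + grid[iy + 1][ix + 1] * fx
    return a * (1 - fy) + b * fy

def black_white(grid, u, v, threshold=0.5):
    """Threshold the interpolated texture value into white or black."""
    return "white" if bilinear(grid, u, v) > threshold else "black"
```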
[Image credit: SIGGRAPH Educator’s Slide Sets.]
Sample Solid Texture: Wood Grain
Wood grain can be simulated by a set of concentric cylinders, whose reference axis is, in general, tilted with respect to a reference axis of the object.
The texture field is given by a modular function of the radius, returning a color for texture space coordinates (u, v, w).
r := √(u² + v²)
α := arctan(u/v)
r := r + 2 sin(20α + w/150)
c := ⌊r⌋ mod 60
Assign a dark brownish color if c > 40, and a light color, otherwise.
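A direct transcription of these formulas; the two RGB triples are illustrative placeholder colors:

```python
import math

def wood_grain(u, v, w):
    """Evaluate the wood-grain texture field at (u, v, w)."""
    r = math.sqrt(u * u + v * v)
    alpha = math.atan2(u, v)                 # arctan(u/v), quadrant-safe
    r = r + 2.0 * math.sin(20.0 * alpha + w / 150.0)
    c = math.floor(r) % 60
    if c > 40:
        return (0.35, 0.20, 0.05)            # dark brownish
    return (0.80, 0.60, 0.30)                # light
```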
Example: Wood Grain
[Image credit: SIGGRAPH Educator’s Slide Sets.]
Bump Mapping
Texture mapping affects the shading of a surface, but the surface will still look smooth: It changes the color, diffuse, and specular reflection properties, but it does not change the surface normal.
How can we make a surface look rough?
If a photograph of a rough surface is used as a texture map then the shaded surface will not look quite right because the direction to the light source used to create the texture map is likely different from the direction to the light source illuminating the surface.
Blinn's bump mapping (1978) is a way to create the appearance of a rough surface geometry that avoids explicit geometric modeling.
It involves perturbing a surface normal before it is used in the illumination model, just as a slight roughness in a surface would perturb the surface normal in the real world.
A bump map is an array of displacements, each of which can be used to simulate displacing a point on a surface a little above or below that point's actual position.
Bump Mapping
[Figure: original surface; a bump map; the simulated displacement of the surface; normal vectors to the 'new' surface.]
Bump Mapping for a Parametric Surface
Let P(s, t) = [x(s, t), y(s, t), z(s, t)] be a parametric surface.
To compute its surface normal at point P(s, t), we compute
Ps(s, t) = ∂P(s, t)/∂s,
Pt(s, t) = ∂P(s, t)/∂t,
N(s, t) = Pt(s, t) × Ps(s, t),
n(s, t) = N(s, t)/||N(s, t)||.
[Figure: tangent vectors Ps(s, t) and Pt(s, t) in the parameter directions s and t, and the normal N at a point of the surface.]
Let B(s, t) be the bump map value that will be applied at P(s, t). (For simplicity, we assume that the bump map is also parameterized over s, t.)
We add this amount in the direction normal to P(s, t):
P ′(s, t) = P(s, t) + B(s, t) · n(s, t).
Bump Mapping for a Parametric Surface
Compute the partial derivatives of the altered surface P′(s, t) = P(s, t) + B(s, t) · n(s, t):
P′s(s, t) := ∂P′(s, t)/∂s = Ps(s, t) + Bs(s, t) · n(s, t) + ns(s, t) · B(s, t),
P′t(s, t) := ∂P′(s, t)/∂t = Pt(s, t) + Bt(s, t) · n(s, t) + nt(s, t) · B(s, t).
Blinn showed that a good approximation to the new (unnormalized) normal N′ is obtained by ignoring the last term in each partial derivative and by taking their cross-product. (Recall that (A + B) × (C + D) = A × C + A × D + B × C + B × D.)
N′(s, t) := P′t(s, t) × P′s(s, t)
≈ [Pt(s, t) + Bt(s, t) · n(s, t)] × [Ps(s, t) + Bs(s, t) · n(s, t)]
= Pt(s, t) × Ps(s, t) + Bs(s, t) · (Pt(s, t) × n(s, t)) + Bt(s, t) · (n(s, t) × Ps(s, t)) + Bs(s, t) · Bt(s, t) · (n(s, t) × n(s, t))
= N(s, t) + Bs(s, t) · (Pt(s, t) × n(s, t)) − Bt(s, t) · (Ps(s, t) × n(s, t))
Computing the Altered Surface Normal
By dropping the parameters, we obtain the more concise formula
N′ = N + (Bs · (Pt × N) − Bt · (Ps × N)) / ||N||.
[Figure: original normal N and altered normal N′ at a surface point, together with the vectors Pt × n and Ps × n.]
N′ is then normalized and substituted for the true surface normal in the illumination equation.
Note that we do not actually compute the altered surface – it suffices to compute only (an approximation of) the altered normal!
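A small numerical sketch of this computation, with plain tuples standing in for vectors:

```python
import math

def cross(a, b):
    """Cross product of two 3D vectors."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def perturbed_normal(Ps, Pt, Bs, Bt):
    """Blinn's approximation N' = N + (Bs (Pt x N) - Bt (Ps x N)) / ||N||,
    returned as a unit vector."""
    N = cross(Pt, Ps)
    nlen = math.sqrt(sum(c * c for c in N))
    d1, d2 = cross(Pt, N), cross(Ps, N)
    Np = tuple(N[i] + (Bs * d1[i] - Bt * d2[i]) / nlen for i in range(3))
    l = math.sqrt(sum(c * c for c in Np))
    return tuple(c / l for c in Np)
```

For Bs = Bt = 0 the unperturbed (normalized) normal is recovered.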
Sample Bump Map
Bump Mapping: Embossing
Embossing, a technique borrowed from 2D image processing, can be used to achieve a chiseled look on 3D surfaces.
[Image credit: SIGGRAPH Educator’s Slide Sets.]
Varying the surface normal on a per-pixel basis tends to be too time-consuming for real-time imaging.
Real-time embossing avoids the modification of surface normals by using, e.g., the so-called dot-product method.
Displacement Mapping
A method similar to bump mapping for adding wrinkles to a surface is displacement mapping.
Displacement mapping is applied to a surface by first dividing the surface up into a mesh of planar polygons.
The vertices of these polygons are then perturbed according to the displacement map.
The resulting model is then rendered with any standard polygon renderer.
Displacement mapping can be used to convert the visual appearance of a cylinder into a screw.
However, to achieve a fine resolution in the texture of the wrinkles, the additional polygons would get ever smaller and more numerous, placing a tremendous burden on the renderer.
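A minimal sketch of the vertex perturbation, assuming one displacement value and one unit normal per vertex:

```python
def displace(vertices, normals, heights, scale=1.0):
    """Displacement mapping on a polygon mesh: each vertex is moved
    along its (unit) normal by the sampled displacement-map height."""
    return [tuple(v[i] + scale * h * n[i] for i in range(3))
            for v, n, h in zip(vertices, normals, heights)]
```

Unlike bump mapping, the result is new geometry that is handed to a standard polygon renderer.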
Displacement Mapping
Displacement mapping does alter the object’s geometry!
[Image credit: SIGGRAPH Educator’s Slide Sets.]
Reflection Mapping
Reflection mapping (aka "environment mapping") refers to the process of reflecting the surrounding environment on a shiny, reflective object without resorting to ray tracing (or similar means).
Let V′ be the reflection vector of a viewer direction V for a particular point on the surface of the reflective object.
The intersection of V′ with a surface, such as the interior of a sphere that contains an image of the environment to be reflected in the object, gives the shading attributes for the point P on the object surface.
[Figure: viewer direction V from the eye E and its reflection V′ about the normal N at point P on the reflective object; V′ is intersected with the surrounding environment sphere.]
Reflection Mapping
In practice, four rays through the corners of the pixel define a reflection "cone" with a quadrilateral cross-section. The region subtended in the environment map is then filtered to give a single shading attribute for the pixel.
[Figure: four rays from the view point through the pixel are reflected at the surface and subtend an area in the environment map.]
Reflection Mapping: Cube Mapping
Nowadays, the environment is mostly mapped into a cube ("cube mapping") or some other polyhedral object.
E.g., “sky box”.
[Image credit: Wikipedia.]
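The per-direction face lookup for cube mapping can be sketched as follows; the face names and (u, v) orientations follow one common (OpenGL-style) convention, and other APIs differ in the details:

```python
def cube_face(d):
    """Select the cube-map face hit by direction d and the (u, v)
    coordinates in [0, 1]^2 on that face."""
    x, y, z = d
    ax, ay, az = abs(x), abs(y), abs(z)
    if ax >= ay and ax >= az:                # dominant x axis
        face, ma = ("+x" if x > 0 else "-x"), ax
        sc, tc = (-z, -y) if x > 0 else (z, -y)
    elif ay >= az:                           # dominant y axis
        face, ma = ("+y" if y > 0 else "-y"), ay
        sc, tc = (x, z) if y > 0 else (x, -z)
    else:                                    # dominant z axis
        face, ma = ("+z" if z > 0 else "-z"), az
        sc, tc = (x, -y) if z > 0 else (-x, -y)
    return face, 0.5 * (sc / ma + 1.0), 0.5 * (tc / ma + 1.0)
```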
Reflection Mapping: Sample Cube Maps
[Image credit: Wikipedia.]
Problems with Reflection Mapping
A practical difficulty lies in the production of the environment map. Predistorting a (photographed) environment to fit the interior surface of a sphere is difficult, and storing a reflection on the six sides of a cube requires tricks to produce seamless reflections.
A general disadvantage is that reflection mapping is geometrically accurate only for (rather) small reflective objects located at the center of the surrounding environment sphere/cube.
As the object size becomes large with respect to the environment sphere/cube, reflections tend to appear in the wrong place on the reflected object.
If the reflective object is positioned away from the center of the environment then the geometric distortion increases.
Also, reflection mapping works well only if the reflective object is mostly convex. (Self-reflections of a non-convex object do not show up in the reflection!)
6 Ray Tracing
- Basics of Ray Tracing
- Ray Combination
- Recursive Ray Tracing
- Ray Tracing CSG Objects
- Efficiency Considerations and Acceleration Techniques
- Extending Conventional Ray Tracing
- Refined Monte Carlo Methods
Ray Tracing for Image Synthesis
Real images are formed when light rays exit a light source, bounce off various objects in the scene, and are reflected into the eye.
The more accurately we simulate the physics of light transfer, the more realistic our images will be.
Ray tracing can correctly model shadows and multiple specular reflections.
It can handle refraction and transparent objects.
It can deal with CSG objects, i.e., with Boolean combinations of objects.
It can handle non-polygonal objects provided that normal vectors can be computed.
It tends to be rather time-consuming.
[Image credit: Wikipedia.]
Ray Tracing for Image Synthesis
Ray tracing supports the generation of fairly realistic images.
[Image credit: Wikipedia.]
Pinhole Camera Model and Screen Pixels
Every pixel on the screen in the computer graphics camera model corresponds directly to a region of film in the pinhole camera.
For convenience in programming, the classic computer-graphics version of the pinhole camera moves the plane of the film out in front of the pinhole and renames the pinhole as the eye.
Light-Based and Eye-Based Ray Tracing
Consider a particular pixel in the image plane.
Photons in a three-dimensional scene originate at light sources.
Photons leave a light source and bounce around the scene.
Usually, light gets a little dimmer on each bounce.
Only photons that eventually hit the screen and then pass into the eye (when they are still bright enough) actually contribute to the image.
Light-based ray tracing (aka "forward ray tracing") means tracing the path of photons from the light sources via reflections at objects to the eye.
Efficiency problem: Most of the photons emitted by a light source will never pass into our eye . . .
This problem has been partially overcome by clever algorithms, e.g., GPU-based implementations of Metropolis light transport. Still . . .
Light-Based and Eye-Based Ray Tracing
The key insight for computational efficiency is to reverse the algorithm by tracing photons backward instead of forward: We would like to trace only those photons which certainly contribute to the image.
The relevant photons are the photons that actually strike the image plane and then pass into the eye.
Finding the path taken by a photon is easy:
- We follow rays from the eye to objects to the light sources.
- If we extend the ray taken by the photon into the world, we can look for the nearest object along the path of the ray.
- The photon must have come from this object.
Thus, we follow a ray not forward, from the light source to the eye, but backward, from the eye to the objects to the light sources.
Ambiguous Terminology
Note that there is some controversy about the terminology and, in particular, the meaning of backward ray tracing, since early ray tracing was always done from the eye. Hence, it seems best to talk about eye-based versus light-based ray tracing.
Ray Casting
Ray casting: Every ray is stopped at the first object intersected.
Ray casting solves the hidden-surface problem: The object first encountered by a ray is the visible object.
The scene consists of a checkerboard and of a reflective sphere. All eye rays were stopped at their first intersection with the scene.
[Image credit: SIGGRAPH Educator’s Slide Set (Slide #10).]
Light and Ray Combination
There are three components that contribute to the color of light at a point on the surface of an object:
1. Reflected contribution: Light originating at some (light) source that is reflected into the eye, based on the physical laws of specular reflection.
2. Transmitted contribution: Light originating at some (light) source that is refracted through the object into the eye.
3. Local contribution: Light resulting from direct exposure to light sources.
Ambient light can be used as a simple alternative for multiply reflected light that finds its way into the eye.
Hence, we divide the rays into four classes:
1. Reflection rays, which carry light reflected by an object surface;
2. Transparency rays, which carry light passing through an object;
3. Illumination rays (or shadow rays), which carry light from a light source directly to an object surface; and
4. Pixel rays (or eye rays), which carry light directly to the eye through a pixel on the screen.
Illumination Rays and Shadow Feelers
Imagine yourself at the point P on the surface of an object. Question: Is any light coming to you from the light sources?
To determine the illumination at P, we ask whether photons could possibly travel from each light source to P.
Shadow rays are like any other ray, except that we use them to "feel around" for light. That is why they are often called shadow feelers.
Illumination ray: When a shadow ray is able to reach a light source without interruption, then we stop thinking of it as a "shadow feeler" and prefer to think of it as an illumination ray which carries light to us from the light source.
Point lights
Note that standard ray tracing deals only with point light sources!
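A minimal shadow feeler against a single sphere; the occlusion check is a standard ray/sphere intersection restricted to the segment between P and the point light:

```python
import math

def sphere_blocks(p, light, center, radius):
    """Does the given sphere block the shadow ray from p to the light?"""
    d = tuple(l - a for l, a in zip(light, p))
    dist = math.sqrt(sum(c * c for c in d))
    d = tuple(c / dist for c in d)           # unit direction towards light
    # quadratic t^2 + b t + c = 0 for ray origin p, offset oc to center
    oc = tuple(a - c for a, c in zip(p, center))
    b = 2.0 * sum(dc * occ for dc, occ in zip(d, oc))
    c = sum(x * x for x in oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0.0:
        return False                         # ray misses the sphere
    sq = math.sqrt(disc)
    eps = 1e-9
    # blocked only if a hit lies strictly between p and the light source
    return any(eps < t < dist - eps for t in ((-b - sq) / 2.0,
                                              (-b + sq) / 2.0))
```

A full shadow feeler would run such a test against every object in the scene.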
Illumination Rays and Shadow Feelers
Which light sources are visible from P in the example setting below?
We answer this by sending the shadow ray LA towards light source A. It arrives at A, so LA is actually an illumination ray from P to A.
On the other hand, ray LB is blocked from light source B by sphere S. Thus, no light arrives at P from B.
[Figure: shadow rays LA and LB emanating from P; LA reaches light source A, whereas LB is blocked by sphere S before reaching light source B.]
Illumination Rays and Shadow Feelers
If the ray reaches the light source, then we apply a suitable illumination model for computing the contribution of the light source to the light reflected in the direction of the eye.
E.g., we may use Phong's rule to compute a (partial) specular highlight for specular surfaces, or use a diffuse reflection model for dull surfaces.
[Figure: light direction L, surface normal N, reflected direction R, and viewing direction V from P towards the viewpoint E.]
Illumination Rays and Shadow Feelers: Sample
The scene consists of a checkerboard and of a reflective sphere. All eye rays were stopped at their first intersection with the scene, and illumination rays were considered.
[Image credit: SIGGRAPH Educator’s Slide Set (Slide #16).]
Reflection Rays
When determining the illumination at a point, recall that we originally found that point by following a ray to the object: incident ray.
Our goal is to find the color of the light leaving the object in the direction opposite to the incident ray: reflection ray.
Note that perfect specular reflection is assumed.
Recall that the angle of reflection is equal to the angle of incidence for a perfect mirror surface.
Q = (L · N) · N   where ||N|| = 1
R = L + 2(Q − L)
  = L + 2((L · N) · N − L)
  = 2(L · N) · N − L
[Figure: incoming direction L and reflected direction R make equal angles α with the surface normal N; Q is the projection of L onto N.]
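The formula R = 2(L · N) · N − L translates directly into code:

```python
def reflect(L, N):
    """Mirror L about the unit normal N: R = 2 (L . N) N - L."""
    d = sum(l * n for l, n in zip(L, N))
    return tuple(2.0 * d * n - l for l, n in zip(L, N))
```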
Reflection Rays
At point P we want to know the color of the light coming in on ray A, since thatlight is then bounced into the eye. Other rays passing through P, such as B, donot have any impact on what is reflected towards the eye.
[Figure: ray A arrives at P after being reflected once, twice, and thrice on its way from the light source to the image plane; ray B also passes through P but does not contribute to the light reflected towards the eye.]
For finding the color of a reflection ray, we follow it backwards to determine the object where it originated.
The color of the light leaving that object along the line of the reflected ray is the color of the reflected ray.
When we know the reflected ray's color, we add it to any other light leaving the original surface struck by the incident ray.
Reflection Rays: Sample
The scene consists of a dull checkerboard and of a reflective sphere. Illumination rays and first-level reflection rays were considered.
[Image credit: SIGGRAPH Educator’s Slide Set (Slide #16).]
Reflection Rays and Back-face Culling
It is important to observe that "back" surfaces of an object may be visible in a ray-traced scene.
Thus, back-face culling is not applicable when ray tracing is applied!
Transparency Rays
There is a single direction from which light can be perfectly transmitted into the direction of the incident ray.
The ray created to determine the color of this light is called the transmitted ray, or transparency ray.
One has to pay attention to the refraction of light as it passes from one medium to another: Each material has an index of refraction, η, given by the ratio of the speed of light in vacuum and the speed of light in the material.
In general, the index of refraction depends on the wavelength of the light: this causes dispersion in a prism!
Snell's law tells us that the transmitted ray remains within the plane of incidence, and that the sine of the angle of refraction is directly proportional to the sine of the angle of incidence:
sin α / sin β = ηb / ηa.
[Figure: a light ray L hits the boundary between materials a and b at angle of incidence α relative to the normal N and is refracted into direction R at angle β.]
Transparency Rays: Sample
[Image credit: SIGGRAPH Educator’s Slide Set (Slide #16).]
Transparency Rays: Math
[Figure: unit vectors L and N, the auxiliary vectors A, B, C, D, M, and the refracted direction R, with angles α and β.]
||L|| = ||N|| = 1
A = N · cos α = N · (L · N)
D = A − L
M = D / sin α   (and we have ||M|| = 1)
B = M · sin β   with sin β = (ηa/ηb) · sin α
C = −N · cos β   with cos β = √(1 − sin²β)
R = B + C
Summarizing,
R = N · ((ηa/ηb) · (L · N) − √(1 − (ηa/ηb)² · (1 − (L · N)²))) − (ηa/ηb) · L.
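A sketch of the refraction computation; it returns None when the square root does not exist, which is exactly the case of total internal reflection discussed below:

```python
import math

def refract(L, N, eta_a, eta_b):
    """Refracted direction for unit vectors L and N, with L pointing
    away from the surface on the incoming side."""
    ratio = eta_a / eta_b
    cos_a = sum(l * n for l, n in zip(L, N))
    # argument of the square root in the formula for R
    k = 1.0 - ratio * ratio * (1.0 - cos_a * cos_a)
    if k < 0.0:
        return None                  # total internal reflection
    f = ratio * cos_a - math.sqrt(k)
    return tuple(f * n - ratio * l for l, n in zip(L, N))
```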
Transparency Rays: Total Internal Reflection
When a light ray passes from a denser medium to a less dense medium, the square root in the formula for R may not exist.
If this happens, the light ray is reflected internally, and we compute reflection instead of refraction.
[Figure: total internal reflection of a ray at the boundary between materials a and b.]
Recursive Ray Tracing
The colors of the reflected and the transmitted light were found by finding the objects from which they originated.
What was the color of light leaving those objects? It was a combination of the light rays reaching them, which can be found with the same analysis.
This suggests a recursive algorithm, and we get (recursive) ray tracing, as opposed to mere ray casting:
- First we send an eye ray through every pixel of the screen.
- This ray is stopped at the first intersection with any object.
- From this intersection point we send shadow feelers to the light sources of the scene.
- In addition, we send a reflection ray and a transparency ray.
- For objects hit by these two rays we apply this scheme recursively.
- No recursive rays are spawned if a dull surface is hit.
- With every reflection, the brightness of light is reduced, and after a number of reflections the contribution to our top object's brightness and color is not significant anymore.
- Thus, we may terminate the recursion after a certain number of recursive calls, e.g., 10–15.
Recursive Ray Tracing
An eye ray E propagated through a scene. Many of the intersections spawn reflected, transmitted, and shadow rays.
[Figure: eye ray E propagated through a scene with objects O1–O4 and light sources L1, L2, spawning reflection rays R1–R3, transparency rays T1, T2, and shadow rays S1–S6.]
Recursive Ray Tracing: Ray Tree
Recursive ray tracing generates a ray tree.
[Figure: the ray tree rooted at the eye ray, with reflection rays R1–R3, transparency rays T1, T2, and shadow rays S1–S6 attached to the nodes for objects 1, 2, and 4.]
Recursive Ray Tracing: Samples
[Image credit: SIGGRAPH Educator’s Slide Set (Slides #24–26).]
Recursive Ray Tracing: Samples
The Sphereflake consists of 7381 spheres; the floor's texture was modeled by a procedural function.
[Image credit: Eric Haines]
Ray Tracing CSG Objects
Basic idea:
- Shoot a ray toward each of the primitives and compute hit lists of linked lists where the ray enters and exits each primitive.
- Use the hit lists to compute where the ray enters and exits the combined solid and adjust the surface normals properly.
- At this point a shading calculation must be performed, and, if necessary, secondary rays have to be generated.
- The hit lists for the left and right children of a node are combined by using the so-called Roth diagram.
CSG trees can be pruned during ray tracing:
- If the left or right subtree of an intersection operation returns an empty list, then the other subtree need not be processed.
- If the left subtree of a minus operation returns an empty list, then the right subtree need not be processed.
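The combination of two hit lists can be illustrated on 1D parameter intervals; classifying the midpoint of every elementary interval is a simple stand-in for the Roth-diagram merge:

```python
def inside(intervals, t):
    """Is parameter value t inside one of the (enter, exit) intervals?"""
    return any(a <= t <= b for a, b in intervals)

def combine(left, right, op):
    """Combine two hit lists, given as sorted lists of (t_enter, t_exit)
    pairs, with a CSG boolean operation."""
    ops = {"union": lambda a, b: a or b,
           "intersection": lambda a, b: a and b,
           "minus": lambda a, b: a and not b}
    pts = sorted({t for iv in left + right for t in iv})
    out = []
    for a, b in zip(pts, pts[1:]):
        mid = 0.5 * (a + b)
        if ops[op](inside(left, mid), inside(right, mid)):
            if out and out[-1][1] == a:      # merge adjacent spans
                out[-1] = (out[-1][0], b)
            else:
                out.append((a, b))
    return out
```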
Ray Tracing CSG Objects: Hit Lists and Roth Diagram
[Figure: Roth diagram combining the hit lists of the left and right children of a CSG node.]
Efficiency Considerations
One of the greatest challenges of ray tracing is efficient execution. Efficiency has therefore been the focus of much research from the early days on.
For accelerating the process of ray tracing, there are three very distinct strategies to consider:
1. Reducing the average cost of intersecting a ray with the environment: Faster intersection tests and fewer intersection tests.
2. Reducing the total number of rays intersected with the environment: Fewer rays.
3. Replacing individual rays with a more general entity: Generalized rays. This includes approaches like pencil tracing, cone tracing (with both circular and polygonal cross-sections), and beam tracing. The main idea of all these approaches is to trace many rays simultaneously.
Fewer Rays Due to Intensity Attenuation
In order to end the recursion, in a naive ray tracer it is necessary to define a maximum depth (e.g., 10 to 15) to which rays are traced recursively.
For a particular scene this maximum depth is preset to a value that depends on the nature of the scene. (Highly reflective surfaces and transparent objects need a greater maximum depth than scenes with lots of opaque objects.)
Hall and Greenberg (1983) pointed out that the percentage of a scene that consists of highly transparent or reflective surfaces is, in general, small, and that it is thus inefficient to trace every ray to the maximum depth.
In particular, light is attenuated in various ways as it passes through a scene. E.g., a ray that is reflected at a surface is attenuated by the global specular reflection coefficient of that surface.
A ray that is examined as a result of ray tracing will make a contribution to the eye ray that is attenuated by several of these coefficients.
If the product of these coefficients falls below some threshold value, then there is no point in tracing further than the current ray: the recursion can be stopped.
Limiting the recursion based on the accumulated attenuation factors is called adaptive tree-depth control.
Hall and Greenberg report that adaptive tree-depth control results in an average depth of about two even for a highly reflective scene.
However, there are theoretical arguments (and practical examples) which show that adaptive tree-depth control can be arbitrarily wrong.
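Adaptive tree-depth control can be sketched as follows. This is a toy model of my own: the chain of specular reflection coefficients along one reflection path is given as a list, and the recursion stops once the accumulated product drops below a threshold.

```python
# Sketch of adaptive tree-depth control (my own toy model): recursion along
# a chain of reflections stops once the accumulated product of specular
# reflection coefficients drops below a threshold.

THRESHOLD = 0.01

def trace_depth(coefficients, attenuation=1.0, depth=0):
    """Count how deep the reflection recursion actually goes."""
    if depth == len(coefficients):
        return depth
    attenuation *= coefficients[depth]
    if attenuation < THRESHOLD:       # further contributions are negligible
        return depth
    return trace_depth(coefficients, attenuation, depth + 1)

# A mostly matte scene (low coefficients) terminates almost immediately,
# while a mirror hall (high coefficients) is traced much deeper:
print(trace_depth([0.05] * 15))  # 1
print(trace_depth([0.9] * 15))   # 15
```

This also illustrates the caveat on the slide: the stopping rule only looks at the accumulated product, so a path whose later coefficients would have mattered can be cut off.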
Faster Intersection Tests
The main goal is to reduce the average cost of computing an intersection.
We distinguish between the following two sub-goals:
Faster tests for ray/object intersections: The number of intersection tests is not reduced, but conservative pre-tests (e.g., by means of bounding volumes) are employed in order to reduce the average cost of an intersection test. Note that such an approach will not be of much help if the sheer number of intersection tests constitutes a problem: in terms of the O-notation, we would only change the multiplicative constants but could not decrease the order!
Fewer tests for ray/object intersections: Bounding-volume trees and similar concepts are employed in order to reduce the average number of intersection tests.
Faster Intersection Tests: Bounding Volumes
The most fundamental and ubiquitous tool for ray-tracing acceleration is the bounding volume: A bounding volume is a volume which contains a given object and permits a simpler ray intersection check than the object.
Common bounding volumes are spheres, axis-aligned bounding boxes (AABBs),oriented bounding boxes (OBBs), discrete-orientation polytopes (k -dops,plane-sets), convex hulls, . . .
Only if a ray intersects the bounding volume does the object itself need to be checked for intersection.
Normally, the use of bounding volumes results in a significant net gain in speed.
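The pre-test pattern can be sketched with a bounding sphere, the cheapest common case. The structure below (field names, the `exact_intersection` callback) is my own assumption, not an API from the lecture:

```python
# Sketch (assumed structure, not from the slides): a bounding-sphere pre-test.
# The cheap sphere check runs first; only on a hit is the expensive exact
# ray/object intersection attempted.

import math

def ray_hits_sphere(origin, direction, center, radius):
    """Conservative test: does the ray o + t*d (t >= 0) hit the sphere?"""
    oc = [o - c for o, c in zip(origin, center)]
    a = sum(d * d for d in direction)
    b = 2.0 * sum(o * d for o, d in zip(oc, direction))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4.0 * a * c
    if disc < 0.0:
        return False                            # ray misses the sphere entirely
    t1 = (-b - math.sqrt(disc)) / (2.0 * a)     # nearer root
    t2 = (-b + math.sqrt(disc)) / (2.0 * a)     # farther root
    return t1 >= 0.0 or t2 >= 0.0               # at least one hit in front

def intersect(ray_origin, ray_dir, obj):
    if not ray_hits_sphere(ray_origin, ray_dir, obj["bsphere_center"],
                           obj["bsphere_radius"]):
        return None                             # cheap rejection, no exact test
    return obj["exact_intersection"](ray_origin, ray_dir)  # expensive test

print(ray_hits_sphere((0, 0, 0), (0, 0, 1), (0, 0, 5), 1.0))  # True
print(ray_hits_sphere((0, 0, 0), (0, 1, 0), (0, 0, 5), 1.0))  # False
```

Note that this only changes the constant per test, exactly as the previous slide warns: the number of tests is untouched.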
Faster Intersection Tests: Bounding Volumes
Virtually all bounding volumes form convex objects: This is desirable because convexity guarantees that any ray will intersect the bounding volume at most twice.
Intersections (and unions) of multiple bounding volumes can be used to obtain a better fit.
Each approach requires a different ray-intersection algorithm for bestperformance.
Faster Intersection Tests: Bounding Volumes
A plane-set normal defines a family of parallel planes orthogonal to it. Two values associated with a plane-set normal select two of these planes and define a slab.
The intersection of several such slabs forms a parallelepiped bounding volume.
For normals chosen among a given set of k/2 normals, the resulting bounding volume is also known as a discrete-orientation polytope, or k-dop for short.
[Figure: a slab bounded by the min and max planes for one plane-set normal.]
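The ray test against a set of slabs is the classic slab method; the sketch below (my own names and layout) clips the ray's parameter interval against each slab in turn, and works for arbitrary plane-set normals, i.e., for k-dops as well as AABBs.

```python
# Sketch of the slab method generalized to arbitrary plane-set normals
# (the k-dop case): the ray's parameter interval is clipped against each slab.

def ray_vs_slabs(origin, direction, slabs, t_max=1e30):
    """slabs: list of (normal, min_val, max_val). Returns True on a hit."""
    t_enter, t_exit = 0.0, t_max
    for normal, lo, hi in slabs:
        no = sum(n * o for n, o in zip(normal, origin))
        nd = sum(n * d for n, d in zip(normal, direction))
        if abs(nd) < 1e-12:
            if no < lo or no > hi:     # ray parallel to slab and outside it
                return False
            continue
        t0, t1 = (lo - no) / nd, (hi - no) / nd
        if t0 > t1:
            t0, t1 = t1, t0
        t_enter, t_exit = max(t_enter, t0), min(t_exit, t1)
        if t_enter > t_exit:           # interval became empty: no hit
            return False
    return True

# An axis-aligned box is just the special case of three axis-aligned slabs:
box = [((1, 0, 0), -1, 1), ((0, 1, 0), -1, 1), ((0, 0, 1), -1, 1)]
print(ray_vs_slabs((0, 0, -5), (0, 0, 1), box))  # True
print(ray_vs_slabs((0, 3, -5), (0, 0, 1), box))  # False
```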
Faster Intersection Tests: Bounding Volume Trees
By enclosing a number of bounding volumes within a larger bounding volume it may be possible to eliminate many objects from consideration with a single intersection check: If a ray does not intersect the parent volume, there is no need to test it against the bounding volumes or objects contained within.
A hierarchy is formed by repeated application of this principle.
Since a hierarchy of bounding volumes forms a tree, the resulting structures arecommonly called bounding-volume trees (BVTs).
BVTs are also widely employed in other applications, e.g., for collision detection.
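The pruning principle can be sketched as a recursive traversal. The node layout and the tiny 1D "volumes" below are my own toy construction, not the lecture's data structure:

```python
# Sketch of recursive BVT traversal (structure assumed, not from the slides):
# a node holds a bounding-volume test plus either children or objects.

def traverse(node, ray, hits):
    """Collect all objects whose leaf volumes the ray reaches."""
    if not node["volume_hit"](ray):      # one test prunes the whole subtree
        return
    if "objects" in node:                # leaf: report candidate objects
        hits.extend(node["objects"])
    else:
        for child in node["children"]:
            traverse(child, ray, hits)

# Tiny hierarchy over the x-axis: volumes are intervals, "rays" are points.
def interval(lo, hi):
    return lambda x: lo <= x <= hi

tree = {"volume_hit": interval(0, 10), "children": [
    {"volume_hit": interval(0, 4), "objects": ["A", "B"]},
    {"volume_hit": interval(6, 10), "objects": ["C"]},
]}

hits = []
traverse(tree, 7, hits)
print(hits)  # ['C'] -- the left subtree was pruned with a single test
```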
[Figure: example of a bounding-volume tree.]
Blurring Effects: Motivation
In the real world many things are not as discrete or sharp as we have assumed so far:
light sources are not points,
times of exposure are not zero,
the aperture and focal length of an optical system are badly modelled by a camera "hole" that is infinitely small,
mirrors are not perfect, and
mirrors are not even.
These issues cause blurring effects, i.e., some amount of fuzziness that makes photographs look natural in detail.
Since our visual system is accustomed to looking for these visual cues, we tend to perceive pictures as unreal if most or even all of these blurring effects are missing.
Blurring Effects: Penumbra and Soft Shadows
Real light sources are never pure point lights. They do not produce sharp shadows; instead, a penumbra region occurs:
That part of a light source's shadow that is totally blocked from the light source is the shadow's umbra (Dt.: Kernschatten).
That part of the shadow that is only partially shielded from the source is the shadow's penumbra (Dt.: Halbschatten).
[Figure: umbra and penumbra cast by an area light source.]
Blurring Effects: Penumbra and Soft Shadows
Real light sources can be handled by tracing several rays to points on the light source, and by averaging over all these rays.
In order to avoid regular anomalies in the darkness of the shadow, the points on the light source are usually randomly distributed.
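The averaging can be sketched in a toy 1D model of my own: the light source is an interval, the occluder is an interval, and the penumbra intensity is the fraction of random shadow rays that reach the light.

```python
# Sketch of area-light shadow sampling (toy 1D model, my own construction):
# several shadow rays go to random points on the light; the visible fraction
# of these rays gives the penumbra intensity.

import random

def shadow_factor(light_span, occluder_span, n_samples, rng):
    """Fraction of an interval light source visible past an interval occluder."""
    lo, hi = light_span
    visible = 0
    for _ in range(n_samples):
        sample = rng.uniform(lo, hi)          # random point on the light
        # Toy visibility rule: the shadow ray is blocked iff the sampled
        # light point lies over the occluder interval.
        if not (occluder_span[0] <= sample <= occluder_span[1]):
            visible += 1
    return visible / n_samples

rng = random.Random(42)
# Light spans [0, 4]; the occluder blocks [0, 2]: about half is visible.
f = shadow_factor((0.0, 4.0), (0.0, 2.0), 10_000, rng)
print(f)  # close to 0.5
```

Using random (rather than regularly spaced) sample points is exactly what avoids the banding anomalies mentioned above.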
Blurring Effects: Diffuse Reflection and Transparency
The mirrored view of an object will always exhibit some diffusion.
The diffusion caused by an uneven mirror can be simulated by tracing rays from the surface in the mirror direction, where each ray is slightly perturbed.
The distribution is weighted according to the same function that determines highlights.
The problem of translucency is similar to the problem of diffuse reflection.
Translucency is calculated by distributing the secondary rays about the main direction of the transmitted light.
The distribution of the transmitted rays is defined by a specular transmittance function.
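The perturbation of reflected rays can be sketched as follows. This is my own simplified construction (an additive perturbation with renormalization), not the highlight-weighted distribution of any particular shading model:

```python
# Sketch of fuzzy (glossy) reflection (my own simplification): each secondary
# ray is the ideal mirror direction plus a small random offset, renormalized;
# the offset magnitude controls how diffuse the mirror looks.

import math
import random

def perturbed_mirror_dir(mirror, fuzz, rng):
    """mirror: unit vector; fuzz: perturbation magnitude in [0, 1)."""
    d = [m + fuzz * rng.uniform(-1.0, 1.0) for m in mirror]
    norm = math.sqrt(sum(c * c for c in d))
    return [c / norm for c in d]

rng = random.Random(7)
mirror = (0.0, 0.0, 1.0)
rays = [perturbed_mirror_dir(mirror, 0.2, rng) for _ in range(100)]
# All perturbed rays stay close to the ideal mirror direction:
print(min(r[2] for r in rays))  # cosine of the largest deviation, near 1
```

The same mechanism, applied about the transmitted direction, gives the translucency effect described above.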
Blurring Effects: Depth of Field
A camera never produces a sharp image of all objects in a scene.
Rather, only those objects that are located within a range determined by the focal length and the aperture of the camera's optical system will appear sharp: depth of field.
Depth of field can be generated by distributing several rays about the main ray direction, using a weighted distribution.
That is, more rays are cast with a small variation and fewer rays are cast with a larger variation.
This corresponds to several rays entering a real-world camera, since the aperture is always larger than a single point.
The wider the rays are distributed, the fewer objects will appear sharp.
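A toy 1D lens model of my own making illustrates why sampling the aperture produces depth of field: rays from different lens points all pass through the in-focus point, so hits on the focal plane coincide while hits on other planes spread out.

```python
# Sketch of lens sampling for depth of field (my own 1D toy model): rays start
# at random points on the lens aperture and are focused through the point
# (0, focus_depth); the hit spread grows with distance from the focal plane.

import random

def circle_of_confusion(obj_depth, focus_depth, aperture, n, rng):
    """Average lateral spread of ray hits on a plane at obj_depth."""
    spread = 0.0
    for _ in range(n):
        # Random point on a 1D "lens" of the given aperture, at depth 0.
        lens_x = rng.uniform(-aperture / 2, aperture / 2)
        # Ray from the lens point through the in-focus point (0, focus_depth):
        slope = (0.0 - lens_x) / focus_depth
        hit_x = lens_x + slope * obj_depth
        spread += abs(hit_x)
    return spread / n

rng = random.Random(1)
in_focus = circle_of_confusion(10.0, 10.0, 2.0, 1000, rng)
out_of_focus = circle_of_confusion(20.0, 10.0, 2.0, 1000, rng)
print(in_focus, out_of_focus)  # objects off the focal plane blur more
```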
Blurring Effects: Motion Blur
The image of any moving object will always be blurred to some extent as soon as the time of exposure is an interval of nonzero length.
This effect is easy to simulate, too: Instead of calculating only one picture for one distinct moment, several pictures are calculated for different moments within the exposure time.
The mean of these calculations will yield a picture with motion blur.
Motion blur is particularly important when generating animations. Without motion blur, figures in animated frames move cartoon-like, i.e., they do not move smoothly.
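The averaging step can be sketched directly. The 1D "image" below is my own toy setup: a single bright object moves across a row of pixels, and the frame is the mean of renders at sample times inside the exposure interval.

```python
# Sketch of motion blur by temporal supersampling (toy 1D image, my own setup):
# the frame is the mean of renders taken at several times within the exposure.

def render_at(t, width=8):
    """A 1-pixel-wide bright object moving right at 1 pixel per time unit."""
    row = [0.0] * width
    row[int(t) % width] = 1.0
    return row

def motion_blurred(times, width=8):
    frames = [render_at(t, width) for t in times]
    return [sum(col) / len(times) for col in zip(*frames)]

# Exposure covers t in [0, 4): the object smears over four pixels.
blurred = motion_blurred([0.5, 1.5, 2.5, 3.5])
print(blurred)  # [0.25, 0.25, 0.25, 0.25, 0.0, 0.0, 0.0, 0.0]
```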
Distributed Ray Tracing
Rendering pictures with all the effects described on the previous slides is very costly, if not impossible, if they are calculated in a straightforward way.
For example, if one uses
10 points of time per pixel for motion blur,
10 reflectance directions per ray for gloss,
10 shadow feelers per intersection point, and
10 points on the lens for the depth-of-field effect,
then one will have to cope with 10 000 rays per pixel, and with some 10 billion rays for only the very top part of the ray tree when ray-tracing a scene on a 1000 × 1000 pixel display.
The number of rays needed sky-rockets once truly recursive ray tracing is applied.
Obviously, such a simple-minded approach is hardly feasible even on state-of-the-art rendering platforms . . .
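The count on this slide is a straightforward product and can be verified directly:

```python
# The slide's arithmetic: four effects with ten samples each multiply to
# 10^4 rays per pixel; a 1000 x 1000 display adds a factor of 10^6.

samples = {"motion blur": 10, "gloss": 10, "shadows": 10, "depth of field": 10}

rays_per_pixel = 1
for n in samples.values():
    rays_per_pixel *= n

total = rays_per_pixel * 1000 * 1000
print(rays_per_pixel)  # 10000
print(total)           # 10000000000, i.e., 10 billion rays
```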
Distributed Ray Tracing
Distributed ray tracing utilizes the following theoretical result: The true value of a pixel can be written as a multi-dimensional integral
over time,
over the lens aperture,
over the light sources,
over the pixel area (for anti-aliasing), and
over all reflectance (and transparency) directions.
If r reflectance/transparency directions are considered, we get a (7 + 2r)-dimensional integral.
This integral can be estimated as a whole with the Monte Carlo method:
For every sample during this integration, only one discrete value in each dimension is needed.
This one value can be a random value from the relevant interval in the respective dimension.
For the example of the previous slide, the effort is reduced from billions of rays to millions, which can be calculated quite easily.
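The key Monte Carlo idea can be sketched on a stand-in integral. For r = 1 the slide's integral is 9-dimensional, so the toy below integrates a simple known function over the unit cube [0, 1]^9 with one random value per dimension per sample; the integrand is my own choice for testability, not the rendering integrand.

```python
# Sketch of the Monte Carlo idea used here: instead of sampling every dimension
# on a grid (cost 10^dimensions), each sample draws one random value per
# dimension, so the cost grows with the sample count alone.

import math
import random

def mc_estimate(f, dims, n, rng):
    """Estimate the integral of f over the unit cube [0, 1]^dims."""
    total = 0.0
    for _ in range(n):
        x = [rng.random() for _ in range(dims)]  # one value per dimension
        total += f(x)
    return total / n

rng = random.Random(0)
# The integral of x1*x2*...*x9 over [0, 1]^9 is (1/2)^9, about 0.00195.
est = mc_estimate(math.prod, 9, 50_000, rng)
print(est)  # close to (1/2)^9
```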
Distributed Ray Tracing
1 Determine the spatial location of the starting point of the ray randomly within the pixel.
2 Determine the time for the ray randomly within the relevant time interval, and set up the scene and camera at that time.
3 Randomly determine a point on the lens for the ray to go through and calculate the refraction of the ray.
4 Intersect this ray with the scene and evaluate the intersection point closest to the lens, using standard ray tracing.
5 For each light source determine a random point on it and trace a ray to this point. If no object is intersected, include the influence of the light source in the color of the ray.
6 Determine a reflection ray randomly, evaluate the reflection ray with distributed ray tracing, and include the influence of this ray in the color.
7 Determine a transparency ray randomly, evaluate it with distributed ray tracing, and include the influence of this ray in the color.
8 Repeat Steps 1–7 until the mean of the obtained colors fulfills some quality criterion.
Distributed Ray Tracing
[Figure: distributed ray tracing: a sample point on the film plane, the lens, the surface hit by the ray, a reflected ray, a transmitted ray, and the light source.]
The random selections in Steps 1–3 and 5–7 depend on different distribution functions for different dimensions, of course:
for Steps 1 and 2, the Gaussian turns out to be well suited,
for Steps 3 and 5, a uniform distribution is the obvious choice, while
for Steps 6 and 7, the distributions correspond to the specular reflection term of the underlying shading model.
Refined Monte Carlo Methods
Distributed ray tracing is a fine example of a Monte Carlo method used to approximate the solution to the rendering equation (Kajiya 1986).
Some more recent approaches that also make use of Monte Carlo techniques:
Bidirectional path tracing (Lafortune 1996),
Metropolis light transport (MLT, Veach&Guibas 1997),
Photon mapping (Jensen 1996),
Progressive photon mapping (PPM, Hachisuka&Ogaki&Jensen 2008).
Both bidirectional path tracing and Metropolis light transport are unbiased rendering techniques: They do not introduce a systematic error into the (approximate) solution.
Note, though, that a biased rendering technique need not necessarily be wrong and, thus, generate "worse" results.
E.g., photon mapping (and its more recent variants) is biased but makes it possible to reproduce caustics. And since it is consistent, any desired accuracy can be achieved by increasing the number of photons.
Refined Monte Carlo Methods: Caustics Generated by Photon Mapping
[Image credits: Wikipedia.]
The End!
I hope that you enjoyed this course, and I wish you all the best for your future studies.