From saponara@tcgould.TN.CORNELL.EDU Thu Sep  8 15:51:09 1988
Return-Path: <saponara@tcgould.TN.CORNELL.EDU>
Received: from tcgould.TN.CORNELL.EDU by turing.cs.rpi.edu (4.0/1.2-RPI-CS-Dept)
	id AA05229; Thu, 8 Sep 88 15:51:02 EDT
Date: Thu, 8 Sep 88 15:49:44 EDT
From: saponara@tcgould.tn.cornell.edu (John Saponara)
Received: by tcgould.TN.CORNELL.EDU (5.54/1.2-Cornell-Theory-Center)
	id AA25808; Thu, 8 Sep 88 15:49:44 EDT
Message-Id: <8809081949.AA25808@tcgould.TN.CORNELL.EDU>
To: kyriazis@turing.cs.rpi.edu
Subject: latest RT News
Status: RO


George,
	Excuse me if you've already received this - my lists are hazy about it.

Eric Haines


_ __                 ______                         _ __
' )  )                  /                           ' )  )
 /--' __.  __  ,     --/ __  __.  _. o ____  _,      /  / _  , , , _
/  \_(_/|_/ (_/_    (_/ / (_(_/|_(__<_/ / <_(_)_    /  (_</_(_(_/_/_)_
             /                               /|
            '                               |/

			"Light Makes Right"

			 September 1, 1988

Compiled by Eric Haines, 3D/Eye Inc, ...!hplabs!hpfcla!hpfcrs!eye!erich
All contents are US copyright (c) 1988 by the individual authors

Contents:
    Intro, by Eric Haines
    Capsule Autobiographies, by lots of new people
    SIGGRAPH '88 RT Roundtable Summary, by Paul Strauss and Jeff Goldsmith
    Commercial Ray Tracing Software & SIGGRAPH, by Eric Haines & others
    A Letter, Jeff Goldsmith
    Best of USENET

-----------------------------------------------------------------------------

Intro

    Well, I'm back from my honeymoon, I've just moved into a new house, and
    I have returned to work and am confronted with a heavy-duty schedule.
    However, the workstation's on the fritz for a few hours so here's my
    chance to compile an issue of the RT News.

    Other major changes around here are that Michael Cohen (of radiosity
    fame, and a subscriber) has begun his cross-country drive from Ithaca
    to Salt Lake City.  He'll be continuing his education at the University
    of Utah's computer graphics facility.  Roy Hall is taking over Michael's
    place at the Cornell Program of Computer Graphics, and should be added to
    this mailing list soon.  Andrew Glassner is now with Xerox PARC, and his
    new address should be available by next issue, too.

    I plan to get another issue out within a week, as there is also an
    interesting research idea that I need to talk about with a few people
    before presenting it here.  Stay tuned...and please feel free to submit to
    the next issue.

-----------------------------------------------------------------------------

Capsule Biographies

    What follows are short self-descriptions by various new people at the
    roundtable at SIGGRAPH.  These are simply in the order I received them.
    The latest address list (email and physical) is attached at the end of
    this issue.

    By the way, if you never submitted an autobiographical sketch (or if you
    are doing something different from your last sketch), please feel
    free to send one on to me for the next issue.


# Russ Tuck - SIMD parallel algorithms and architectures for ray tracing
# Computer Science Dept, Univ. of North Carolina

My dissertation research is on parallel methods of programming SIMD parallel
computers.  I have developed a measure of program portability, and a system
which lets the programmer specify in advance how portable a program must be.
The compiler can then provide (and enforce) this amount of portability.  I
call this an "optimally portable" programming system.  I'll be presenting my
work at "Frontiers 88: The 2nd Symp. on the Frontiers of Massively Parallel
Computation" in October.  Thinking about ways to do ray tracing (and radiosity)
on massively parallel SIMD machines is a fun sidelight.  (Some of my ideas
were in the June '88 hardcopy Ray Tracing News.)


# Greg Ward - accuracy and realism, daylight, efficient light sources.
# Lawrence Berkeley Laboratory

The purpose of my work is the development of more accurate calculations
for lighting design and engineering.  Electric lighting contributes
directly or indirectly to about 20% of the nation's energy
consumption.  Although significant progress has been made in the area
of energy-efficient light sources, wasteful installations frequently
result from poor building design rather than low fixture efficiency.
It is hoped that with better computational tools, architects and
engineers will produce more energy-efficient designs.  I am currently
working on a validated reflection model and method for measuring
surface properties using a digitizing camera.  By streamlining the
measurement process, it should be possible to obtain accurate
simulations of designs using a realistic variety of construction
materials.  I am also comparing the accuracy and applicability of
different classes of lighting calculation, and exploring the use of
animation in design visualization.


# Pete Litwinowicz -- realistic image synthesis, etc.
# Apple Computer, Inc.

Working on 3D animation, rendering and modelling at Apple.


# Tom Malley - parallelism, complex scenes, space subdiv., shading methods
# Evans & Sutherland Computer Corporation

My interests are primarily in how to make ray traced images that are
significantly more realistic than the computer generated images we
often see now. The complexity of the models must increase for realism.
If we want to ray trace large databases on many processors, we must
determine efficient ways to make pieces of the data set available
dynamically to processors as they perform intersection searches.
Additionally, I'm interested in analysis of reflection models to see
where the typical simplifying assumptions steer us wrong, and what the
alternatives may be.  I'd also like to see more closed loop
comparisons of computer generated images with real scenes.


# Chuck Grant - tracing fewer rays, coherency, antialiasing
# Lawrence Livermore National Laboratory
                                                                               
I am interested in algorithms and data structures for most areas of image
synthesis: visible surface algorithms, volumetric visualization, antialiasing,
texturing, shading, etc., etc... Ray tracing is a big part of my interests but
not all. I am currently working on a comparison of all existing visible surface
algorithms, and have invented several new ones as a result. I am not interested
in "Scientific Data Visualization" but the demands of my job make me spend a
lot of time doing just that. I am finishing my PhD. in computer graphics at the
University of California at Davis. 


# Steve Stepoway -- efficient intersection calc., parallelism

My interests in ray tracing these days are centering around finding more
efficient ray-object intersection calculations, as well as issues of
parallelism in rendering. I am working with Sam Uselton (U. Tulsa) and
Mark Lee (AMOCO) on developing some new space sub-division techniques
(next year's SIGGRAPH paper).


# Daniel Bass - Realism, diffraction & wavelength effects
# Apollo Computer Inc.

[no autobiographical note]


# Ken Joy - intersection tests, efficiency, coherency
# Computer Graphics Research Lab, Computer Science Division,
# University of California, Davis

[no autobiographical note]


# Fred Fisher - realism, radiosity enhancements, ray tracing manufacturing
#   databases

Hello. I'm now working at DEC doing electronics design for a
geometry pipeline. My interest in ray tracing has involved rendering
polygonal databases (presumably ones similar to those used in
a quick-Gouraud-render CAD environment).  I've been writing a lot
of software on my own (ray-tracing and otherwise) to understand the
image synthesis process better, with the intent of designing hardware
to help the rendering phase.
 

# Pierre Poulin - illumination models, natural phenomena
# University of Toronto
# Department of Computer Science

I am a graduate student working right now on a model for anisotropic reflection
that would save the world and computing time.  This should be my introduction to
the fascinating science of simulating natural phenomena (or how to deceive the
eye with fewer than 1 million polygons...).


# Andrew C. Woo - intersection culling algorithms and complexity
# University of Toronto
# Department of Computer Science

I am currently a first year (fast approaching second year) master's student
at the University of Toronto.  My current thesis work is on a new approach
to shadow acceleration (beyond trans-warp drive, of course) and comparing
it to existing shadow accelerators (e.g., the impulse-power "light buffer"
--> just kidding, Eric).


# Chuan Chee  -  natural phenomena, ray tracing, animation
# University of Toronto
# Department of Computer Science

I am finishing my first year as a master's student at the University of
Toronto.  I currently do not have a thesis topic.


# Robert E. Webber, Prof. - ray tracing large databases of volume density data.
# Computer Science Department
# Rutgers University, Busch Campus

Am currently tuning a ray tracer that handles hierarchically arranged
voxel data.  The basic unit of organization is a 16 by 16 by 16
``voxel'' block where each voxel contains 32 bits of information.
This information is interpreted as either a density description of the
contents of that voxel or a pointer to another voxel block
representing a recursive decomposition of the current block.  16 by 16
by 16 times 4 bytes means that my basic block is 16k, which is a size
that fits our local disk system rather well from the point of view of
i/o overhead.  The program is organized to allow the database to be
spread over a number of files (max 255) thus avoiding the 4 gigabyte unix file
size problem (since the pointers only have to be able to address the
start of each 16k block, 32 bit pointers can address a terabyte of
data -- should we ever be lucky enough to have it) as well as the
problem of finding a lot of space inside a single disk partition.
Rutgers currently organizes its computing around a cluster of a
hundred networked Sun workstations including 2's, 3's, and 4's.
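
The block layout described above can be sketched in a few lines of C.  The
high-bit tag used to tell densities from pointers is my own assumed
convention (the note above doesn't say how the two are distinguished), and
all names are illustrative:

```c
#include <assert.h>
#include <stdint.h>

/* Sketch of the hierarchical voxel layout described above: a 16x16x16
 * array of 32-bit voxels (16 KB per block), where each voxel holds
 * either a density value or a pointer (block index) to another block
 * that recursively refines this one.  The high-bit tag is an assumed
 * convention, not taken from the description above. */
#define BLOCK_DIM   16
#define POINTER_TAG 0x80000000u

typedef struct {
    uint32_t voxel[BLOCK_DIM][BLOCK_DIM][BLOCK_DIM];  /* 4096 voxels */
} VoxelBlock;

static int      is_pointer(uint32_t v)  { return (v & POINTER_TAG) != 0; }
static uint32_t child_block(uint32_t v) { return v & ~POINTER_TAG; }
```

sizeof(VoxelBlock) comes out to exactly 16384 bytes, the 16K unit that maps
well onto disk I/O; and since a pointer selects a whole 16K block rather
than a byte, even 31 usable index bits reach well past a single 4 GB file.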

The current status of the system is that 5 megabyte scenes (roughly
400 blocks) representing densities at the 256 by 256 by 256 level have
been ray traced.  Surface interpolation permitted images demonstrating
shadows, shading, and inter-reflection to be generated.
Various techniques for determining the surface represented by a local
cluster of voxel densities are being experimented with.  Will soon be
running the ray tracer in parallel (don't expect much bus contention
as the system is currently extremely cpu bound).  We are also
installing some WORM drives locally and so soon expect to have yet
another layer of memory hierarchy to fiddle with.


# Michael Natkin
# Brown University

No paragraph for now, as I'm rewriting our whole environment
at the moment; no time for ray tracing.

---------

That's all for now, with about 6 people who are new but haven't contacted
me yet.

-----------------------------------------------------------------------------

SIGGRAPH '88 RT Roundtable Summary, by Paul Strauss and Jeff Goldsmith

0.  Women don't talk about ray tracing.
1.  It would be nice to have something that does both ray tracing and
    radiosity.
2.  It would be nice to have caustics.
3.  It would be nice to do analytical anti-aliasing.
4.  Motion blur is a Good Thing.
5.  Adaptive subsampling don't work good.
6.  There are many ways to minimize ray-object intersection tests.
7.  Hardware would be nice.  Maybe later.
8.  We should use ray tracing for real applications.

-------------------------------------------------------------------------------

Commercial Ray Tracing Software and SIGGRAPH, by Eric Haines and others

    Last issue I listed all the commercially available ray tracers I knew as of
SIGGRAPH.  There were a few new entries I saw this year:

	- ElectroGIG, from England, shown at the Meiko booth, is the first
	  commercial ray tracer based on using constructive solid geometry
	  that I've seen.

	- SoftImage, (the people with the nice rocks image) from Montreal,
	  had a nice looking Wavefront/Alias/etc-looking package with a ray
	  tracer.  They claim it was the fastest ray tracer on the floor (it
	  runs on a number of machines) and seemed to use octree subdivision.
	  I never got to see the demo, but people tell me that it was fast,
	  and that the programmer is a very careful and efficient coder.

	- Shima-Seiki, from Japan.  They claim to have a ray-tracer with
	  hardware assistance (whatever that means), though they're not
	  marketing it yet and couldn't tell us anything about it.  The
	  realtime texturing system from Mars, with lots of cool buttons and
	  knobs, was pretty impressive though.  System cost is $500K, $300K,
	  or "we haven't decided", depending on when you talked with them.

	- Ray Tracing Corp is now Ray Tracing Research Corp, and they released
	  a ray tracer for the Mac II for $895.

	- Apollo: not a product, but their ray tracer was fun to watch as a
	  demo of both the speed of their new DN10000 (not sure about the
	  number of zeros in that one) super-mini-super-workstation and of
	  Apollo's networking abilities.  Darn fast.

	- Silicon Graphics: also not a product, they had a very nice demo
	  (which, sadly, a number of their marketing people didn't know about)
	  showing a scene from a god's-eye point of view, with rays shooting
	  through the image plane and bouncing around.  The image would build
	  up scan line by scan line on both the image plane in the scene and
	  in a separate window.  You could also change the god's-eye view
	  interactively as the scene was being ray-traced.  Nice.


Other things of interest:  AT&T Pixel Machines' ray tracer is now even faster,
purely due to software tuning.  They plan to cut even more time by further
changes.  "Sphereflake", which ran at about 30 seconds last year, now runs
at around 16 seconds.  There is a public domain modeller which should be
out in mid-October which will run on the Iris 3130 and on the Pixel Machine.
It was developed by NASA (demoed at AT&T), and since they're governmental they
can't make it a for-profit product.  In a month and a half contact their
distributor, "COSMIC" (which stands for something), at 404-542-3265.  They
won't really know much about it until then.

SGI showed radiosity images on the floor, though there is no announced product
yet.  They seemed to have developed their own adaptive refinement technique.
HP showed ray tracing film loops and stills, plus radiosity images and on-the-fly
adaptive refinement (which will be a product sometime this winter).  The new
radiosity technique introduced by Cornell this year has the promise of being
"radiosity for the rest of us".

Those were my major impressions (other than the usual "the art show was better
this year", "course lunches were worse", etc) - anyone else care to add their
two cents?


Jeff Goldsmith points out:

    Cray Research has "Oasis," Gray Lorig's ray tracer, available,
I think, for free.  Of course, you have to buy a Cray at $20 million
first....

    I don't get it.  Why doesn't every CG software vendor supply a
ray tracer?  It's definitely the easiest renderer to write.  Yes,
they are slo-o-o-o-o-o-w, but they sound glitzy and (I bet) would
stimulate sales, even if buyers never used them.


Gray Lorig responded to my query for info on Oasis:

  The current status of the Oasis package, which is basically a
raytracing animation package, is that it's an unsupported product.
What that really means is that we will give it to Cray customers
for free, give them updates every once in a while, and provide
support only when I feel like it.


John Peterson says:

  I would add RenderMan to your list of commercial ray tracers - I think you
can explicitly implement classical ray tracing by writing some code in their
magic shading language.

Also, the ray tracer I wrote at Utah is (or at least may be) commercially
available.  It specializes in ray tracing NURB (B-spline) surfaces, and 
supports the usual suspects of features (refract/flection, shadows (with
penumbra), texture mapping, anti-aliasing (with stochastic sampling) and
motion blur).  It comes bundled with a geometric modelling system called
Alpha_1, distributed by Engineering Geometry Systems in Salt Lake City.

The contact is:
	Engineering Geometry Systems
	275 East South Temple, Suite 305
	Salt Lake City, UT, 84111

I think the price is something like $10K for commercial sites and < $1K for
government or university sites.  The ray tracer isn't available separately,
but the modelling package is so full of goodies it's worth looking at in its
own right.

-------------------------------------------------------------------------------

A Letter [my response is in brackets]

   1) A student of mine found a typographical error in
your "Intro to Ray Tracing" notes.  (I gave him some
pages from same to teach him about texture mapping; good
job!)  In Equation set E11, the third line should
read: Qvy = -Nb/Dv2 rather than Nc.  I noted that you didn't
catch it in this year's issue.  (Better typesetting, though.)
Thanks for publishing that.

[Thanks!  If anyone else notices typos, omissions, or anything else that bugs
them in the "Intro" SIGGRAPH notes, please contact Andrew Glassner or me
or the individual author ASAP.]


    2) Does anyone know where I can obtain 2D wood texture maps?
FTP, email, tape, disc, whatever, is fine.

[Any leads, anyone?  I'm interested, too.]

-- Jeff Goldsmith

-------------------------------------------------------------------------------

Best of USENET
---- -- ------

[What follows are things posted to USENET that might be of interest here.
Many readers either don't receive USENET or haven't the time to perform chaff
separation.  What follow are my own winnowings. - EAH]

-------------------------

Thomas Palmer writes:

Has anyone done any work on vectorizing ray-object intersection
calculations?  The vectorizing C compiler on a Cray X/MP will not
(automatically) vectorize loops smaller than about 5 or 6 iterations
(nor should it - the vector register load time outweighs the gain).
Typical ray tracing vector operations involve vectors of length 3 or 4.

  -Tom

 Thomas C. Palmer	NCI Supercomputer Facility
 c/o PRI, Inc.		Phone: (301) 698-5797
 PO Box B, Bldg. 430	Uucp: ...!uunet!ncifcrf.gov!palmer
 Frederick, MD 21701	Arpanet: palmer@ncifcrf.gov

-----

>From: spencer@tut.cis.ohio-state.edu (Stephen Spencer)
Subject: Re: Vectorizing ray-object intersection calculations
Organization: Ohio State Computer & Info Science

There was an article in IEEE CG&A about four years ago about vectorizing
ray-sphere intersections on a Cyber which included an algorithmic workup 
of how it worked.

As far as ray-polygon intersection goes, the article in SIGGRAPH '87 dealing
with tessellation of polygonal objects (it had the pictures of grass and the
Statue of Liberty in it) included the algorithm for fast ray-polygon 
intersection, and I did implement it and it did vectorize quite nicely. 
I unfortunately no longer seem to have that piece of code, but it wasn't 
difficult.  Of course you still have to do the sorting of the intersection
points but the heavy intersection calculations can be done quickly.

-----------------

>From: u-jmolse%sunset.utah.edu@utah-cs.UUCP (John M. Olsen)
Newsgroups: comp.graphics
Subject: Polygon teapot ftp'able
Summary: Come 'n get it.
Organization: University of Utah, Computer Science Dept.

One of the kind and generous People In Charge compressed the Utah Teapot
and has made it available (for a week or so) as ~ftp/pub/teapot.Z so those
who want the polygon version of it can get it now.  It compressed to about
270K from 990K, so it's not too bad to transfer.  The machine name is
cs.utah.edu.  Have fun with it, and let me know if you have any problems
getting it transferred.

Just so I don't get hate mail, and you don't think your software is broken:
the Utah teapot was designed with a hole in the bottom.  I had forgotten about
this, but was reminded by someone (Jean Favre?) who rendered it.

-----------------

>From: markv@uoregon.uoregon.edu (Mark VandeWettering)
Newsgroups: comp.graphics
Subject: Ray Tracer: Part 01/01
Organization: University of Oregon, Computer Science, Eugene OR
Lines: 2353

Here is the first release of my totally whiz-bang ray tracer.  Okay, so
it isn't TOTALLY whiz bang, but it does have some nice features.  It
uses bounding volumes, has cones, spheres, polygons and cylinders as
primitives, reads Eric Haines' NFF file format, and more....

Use it, abuse it, sell it, but be sure to have fun with it.  I had a lot
of fun hacking on it.   If any of you find bugs, or better yet fix bugs
and add features, send them to me, and I will try to get them worked
into a future release which I hope might make it into comp.sources.unix.

[ code not included here:  either get it from USENET, or you can write Mark
or myself for a copy.  The source weighs in at about 55K bytes.]
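
[For anyone rolling their own reader: NFF is line-oriented ASCII, simple
enough that a few lines of C will pick out the geometry.  A minimal sketch
that reads only the sphere records ("s x y z radius"); the helper name and
buffer size are my own, and the other record types are simply skipped. - EAH]

```c
#include <assert.h>
#include <stdio.h>

/* Pull the sphere records out of an NFF scene file.  A sphere line is
 * "s cx cy cz radius"; viewpoint ("v"/"from"/"at"...), light ("l"),
 * and fill-color ("f") records fail the literal 's' match in sscanf
 * and are skipped. */
typedef struct { double x, y, z, r; } Sphere;

int read_spheres(FILE *fp, Sphere *out, int max)
{
    char line[256];
    int n = 0;
    while (n < max && fgets(line, sizeof line, fp) != NULL)
        if (sscanf(line, " s %lf %lf %lf %lf",
                   &out[n].x, &out[n].y, &out[n].z, &out[n].r) == 4)
            n++;
    return n;
}
```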

-----------------

Reply-To: kjohn@richp1.UUCP (John K. Counsulatant)
Organization: RICH Inc. , Franklin Park,IL

    If anyone is interested in *other* ray tracers, I
have source to DBW Render (an excellent ray tracer ported from the VAX to
the Amiga) and Tracer (a crude spheres only (but a good starter :-) tracer
for the Amiga).  I am in the process of obtaining QRT (quick ray tracer (if
there is such a thingee ;-)) source (also for the Amiga).

I can be reached at:

RealTime (preferred):  1(312)418-1236  (6pm to 10:30pm central time)

USmail:               John Kjellman             E-Mail: kjohn@richp1.UUCP
                      17302 Park Ave.
                      Lansing IL 60438

-----------------

Reply-To: sean@ms.uky.edu (Sean Casey)
Organization: The Leaning Tower of Patterson Office @ The Univ. of KY

In article <2660@uoregon.uoregon.edu> markv@drizzle.UUCP (Mark VandeWettering) writes:
>	Isn't DBW Render copyrighted?  I believe the source code may not
>	be redistributed, I tried to obtain the sources, but aborted
>	because of the restriction on use/redistribution.

The first release of DBW Render was freely redistributable. I believe it
even found its way to a Fish disk. The story I hear is of an evil employer
seeing $$$ and telling Dave that since he wrote it on the company computer
it belongs to the company and that no further releases may be given away.

The current release uses "Look Up A Word In The Manual" type copy
protection, the most annoying kind I have experienced to date.

If someone really wants to pay for an Amiga ray tracer, I'd send him in the
direction of one of Turbo Silver, Videoscape 3D, or Sculpt 3D, all excellent
products. These products are in fierce competition with each other, and get
better all the time. I saw a pamphlet in Turbo Silver that had an ad for a
fascinating terrain generator module.  Just think, combine a high quality
terrain generator and the logistics of a board wargame...

Oh yeah, I hear that some of the commercial Amiga ray tracing software is
being ported to the Mac II. These products have been around for a while, so
it's a good chance for Mac users to get their hands on some already-evolved
ray-tracing software.

Sean
-- 
***  Sean Casey                        sean@ms.uky.edu,  sean@ukma.bitnet
***  (Looking for his towel)           {backbone|rutgers|uunet}!ukma!sean
***  U of K, Lexington Kentucky, USA   Internet site? "talk sean@g.ms.uky.edu"
***  ``With a name like Renderman, you know it's good jam.''

-----------------

Reply-To: kyriazis@turing.cs.rpi.edu (George Kyriazis)
Organization: RPI CS Dept.

        Hello world!  I have been using ray tracers for quite some time now,
and I have made many changes to some of them, so I thought it was time for
me to write a ray tracer.  So there it is!  It is not supposed to be fast
or anything, but I think it is well commented and easy to understand.
It is very simple also.
        I am willing to give it to anyone that wants it.  Unfortunately,
I don't think I can put it in a place where people can ftp to, so if you
want it, please send me mail.  I'm sure I can put it in some public place
later.
        The ray tracer currently supports only spheres, with ambient,
diffuse, specular lighting, together with reflections and refractions.
I don't use any in-line code for the vector routines, but subroutines, for
readability.
        Hope someone will want to play around with it.

  George Kyriazis

------------------------------

Postscript Ray Tracer, John Hartman and Paul Heckbert

[This was published in the last hardcopy RT News.  Here it is again, for those
not inclined to type a lot]
Reply-To: jhartman@miro.Berkeley.EDU (John H. Hartman)
Organization: University of California, Berkeley

  Ever worry about all those cycles that are going to waste every night when
you shut off your laserwriter? Well, now you can put them to good use.
Introducing the world's first PostScript ray tracer. Just start it up, wait
a (long) while, and out comes an image that any true graphics type would
die laughing over. As it is currently set up it will trace a scene with
three spheres and two lights. The image resolution is 16x16 pixels.
  Warning: the code is a real kludge. I'm not sure there is much you can
change without breaking it, but you're welcome to try. If, by chance, you
are able to improve the running time please send us the improved version.
  psray.ps is the ray tracer. result.ps is what a 16x16 image should look
like.
  Have fun.

-----------------------------------------------------------------------
John H. Hartman 		jhartman@ernie.berkeley.edu
UCB/CSD				ucbvax!ernie!jhartman



# to unpack, cut here and run the following shell archive through sh
# contents: psray.ps result.ps
#
echo extracting psray.ps
sed 's/^X//' <<'EOF10807' >psray.ps
X%!
X% Copyright (c) 1988  John H. Hartman and Paul Heckbert
X%
X% This source may be copied, distributed, altered or used, but not sold for 
X% profit or incorporated into a product except under licence from the authors.
X% This notice should remain in the source unaltered.
X%   John H. Hartman jhartman@ernie.Berkeley.EDU
X%   Paul Heckbert   ph@miro.Berkeley.EDU
X
X%    This is a PostScript ray tracer. It is not a toy - don't let the kids 
X%  play with it. Features include: shadows, recursive specular reflection
X%  and refraction, and colored surfaces and lights (bet you can't tell!).
X%  To use this thing just send it to your favorite Postscript printer.
X%  Then take a long nap/walk/coffee break/vacation. Running time for
X%  a recursive depth of 3 and a size of 16x16 is about 1000 seconds
X%  (roughly 20 minutes) or 4 seconds/pixel. 
X%    There are a few parameters at the beginning of the file that you can
X%  change.  The rest of the code is pretty indecipherable. It is translated
X%  from a C program written by Paul Heckbert, Darwyn Peachey, and Joe Cychosz
X%  for the Minimal Ray Tracing Programming Contest in comp.graphics in
X%  May 1987.  Some changes have been made to improve the running time.
X%  Don't even bother trying to figure out how this works if you are looking
X%  for a good example of a ray tracer.
X%
X/starttime usertime def
X/DEPTH 3 def   % recursion depth
X/SIZE 16 def   % image resolution
X/TIMEOUT SIZE dup mul 10 mul cvi 120 add def  % approximately 10 sec/pixel
X/NUM_SPHERES 5 def
X/AOV 25.0 def    % angle of view
X/AMB [0.02 0.02 0.02] def  % ambient light
X% list of spheres/lights in scene
X%            x    y    z     r   g   b   rad  kd   ks  kt   kl  ir
X/SPHERES [[[ 0.0  6.0  0.5] [1.0 1.0 1.0] 0.9 0.05 0.2 0.85 0.0  1.7]
X          [[-1.0 8.0 -0.5] [1.0 0.5 0.2] 1.0 0.7  0.3 0.0  0.05  1.2]
X          [[ 1.0 8.0 -0.5] [0.1 0.8 0.8] 1.0 0.3  0.7 0.0  0.0   1.2]
X	  [[ 3.0 -6.0 15.0] [1.0 0.8 1.0] 7.0 0.0  0.0 0.0  0.6  1.5]
X	  [[-3.0 -3.0 12.0] [0.8 1.0 1.0] 5.0 0.0  0.0 0.0  0.5  1.5]
X	 ] def
X
Xstatusdict begin
XTIMEOUT setjobtimeout
X/waittimeout TIMEOUT def
Xend
X/initpage {
X   /Courier findfont
X   10 scalefont setfont
X} def
X
X/X 0 def
X/Y 1 def
X/Z 2 def
X/TOL 5e-4 def
X/BLACK [0.0 0.0 0.0] def
X/WHITE [1.0 1.0 1.0] def
X/U 0.0 def
X/B 0.0 def
X% index of fields in sphere array
X/cen 0 def
X/col 1 def
X/rad 2 def
X/kd 3 def
X/ks 4 def
X/kt 5 def
X/kl 6 def
X/ir 7 def
X/NEG_SIZE SIZE neg def
X/MATRIX [SIZE 0 0 NEG_SIZE 0 SIZE] def
X/vec {3 array} def
X/VU vec def
X/vunit_a 0.0 def
X
X% dot product, two arrays of three reals
X/vdot {
X   aload pop 
X   4 -1 roll
X   aload pop 
X   4 -1 roll mul
X   2 -1 roll 4 -1 roll mul add
X   3 -2 roll mul add
X} def
X
X% vcomb, sa, a, sb, b  returns new array of sa*a + sb*b
X
X/vcomb {  
X   aload pop
X   4 -1 roll dup dup
X   5 1 roll 3 1 roll mul
X   5 1 roll mul
X   4 1 roll mul 3 1 roll 
X   5 -2 roll aload pop
X   4 -1 roll dup dup
X   5 1 roll 3 1 roll mul
X   5 1 roll mul
X   4 1 roll mul 3 1 roll 
X   4 -1 roll add 5 1 roll 
X   3 -1 roll add 4 1 roll
X   add 3 1 roll
X   vec astore 
X} def
X
X/vsub {
X   aload pop
X   4 -1 roll aload pop
X   4 -1 roll sub 5 1 roll 
X   3 -1 roll sub 4 1 roll
X   exch sub 3 1 roll
X   vec astore 
X} def
X
X/smul {
X   aload pop
X   4 -1 roll dup dup
X   5 1 roll 3 1 roll mul
X   5 1 roll mul
X   4 1 roll mul 3 1 roll 
X   vec astore
X} def
X
X/vunit {
X   /vunit_a exch store
X   1.0 vunit_a dup vdot sqrt div vunit_a smul
X} def
X
X/grayscale {
X   % convert to ntsc, then to grayscale
X   0.11 mul exch
X   0.59 mul add exch
X   0.30 mul add
X   255.0 mul
X   cvi
X} def
X
X
X/intersect { % returns best, tmin, rootp
X   7 dict begin
X   /d exch def
X   /p exch def
X   /best -1 def
X   /tmin 1e30 def
X   /rootp 0 def
X   0 1 NUM_SPHERES 1 sub {
X      /i exch def
X      /sphere SPHERES i get def
X      /VU sphere cen get p vsub store
X      /B d VU vdot store
X      /U B dup mul VU dup vdot sub sphere rad get dup mul add store
X      U 0.0 gt
X      {
X	 /U B U sqrt sub store
X	 U TOL lt 
X	 {
X	    /U 2.0 B mul U sub store
X	    /B 1.0 store
X	 }
X	 { /B -1.0 store } 
X	 ifelse
X	 U TOL ge 
X	 U tmin lt and
X	 {
X	    /best i store
X	    /tmin U store
X	    /rootp B store
X	 }
X	 if
X      }
X      if
X   } for
X   best tmin rootp
X   end
X} def
X
X/trace {
X   13 dict begin
X   /d exch def
X   /p exch def
X   /level exch def
X   /saveobj save def
X   /color AMB vec copy def
X   /level level 1 sub store
X   p d intersect
X   /root exch def
X   /v exch def
X   /s exch def
X   -1 s ne
X   {
X      /sphere SPHERES s get def
X      /p 1.0 p v d vcomb store
X      /n
X      sphere cen get p 
X      root 0.0 lt { exch } if 
X      vsub vunit def
X      sphere kd get 0.0 gt
X      {
X	 0 1 NUM_SPHERES 1 sub
X	 {
X	    /i exch def
X	    /light SPHERES i get def
X	    light kl get 0.0 gt
X	    {
X	       /VU light cen get p vsub vunit store
X	       /v light kl get 
X	       n VU vdot 
X	       mul store
X	       v 0.0 gt
X	       p VU intersect
X	       /B exch store
X	       /nd exch def
X	       i eq
X	       and
X		  { /color 1.0 color v light col get vcomb def } 
X	       if
X	    } if
X	 } for
X      } if
X      color aload pop
X      sphere col get aload vec copy /VU exch store 
X      4 -1 roll mul
X      5 1 roll
X      3 -1 roll mul
X      4 1 roll
X      mul
X      3 1 roll
X      color astore pop
X      /nd d n vdot neg store
X      /color 
X      sphere ks get
X      sphere kd get color sphere kl get VU vcomb
X      0 level eq
X	 { BLACK vec copy}
X	 { level p 1.0 d 2 nd mul n vcomb trace vec astore }
X      ifelse
X      1.0 3 -1 roll vcomb store
X      root 0.0 gt
X	 { /v sphere ir get store }
X	 { /v 1.0 sphere ir get div store }
X      ifelse
X      /U 1 v dup mul 1 nd dup mul sub mul sub store
X      U 0.0 gt
X      {
X	 /color 
X	 1.0 color sphere kt get
X	 0 level eq
X	    { BLACK vec copy}
X	    { level p v d v nd mul U sqrt sub n vcomb trace vec astore }
X	 ifelse
X	 vcomb store
X      } if
X   } if
X   color aload pop
X   saveobj restore
X   end
X} def
X
X/main {
X   initpage
X   /data SIZE dup mul string def
X   /half SIZE 0.5 mul def
X   /i 0 def
X   /dy half AOV cvr 0.5 mul dup cos exch sin div mul def
X   /temp vec def
X   0 1 SIZE 1 sub
X   {
X      /y exch def
X      0 1 SIZE 1 sub
X      {
X	 /x exch def
X         data i
X	 /saveobj save def
X	 VU X x cvr half sub put 
X	 VU Y  dy put
X	 VU Z half y cvr sub put
X	 DEPTH BLACK VU vunit trace 
X         grayscale 
X	 saveobj restore
X      	 put
X     /i i 1 add store
X      } for
X   } for
X   gsave
X   100 300 translate 400 400 scale SIZE SIZE 8 MATRIX {data} image
X   grestore
X   100 200 moveto
X   (Statistics: ) show
X   100 190 moveto
X   (Size: ) show SIZE 10 string cvs show
X   100 180 moveto
X   (Depth: ) show DEPTH 3 string cvs show
X   100 170 moveto
X   (Running time: ) show usertime starttime sub 1000 div 20 string cvs show
X   showpage 
X} def
X/main load bind
Xmain
Xstop
EOF10807
echo extracting result.ps
sed 's/^X//' <<'EOF10808' >result.ps
X%!
X/picstr 16 string def
X100 300 translate
X400 400 scale
X16 16 8 [16 0 0 -16 0 16]
X{currentfile picstr readhexstring pop} image
X0505050505050d0e1114140505050505
X050505050518231c1136472c05050505
X0505050528231b262729364b58050505
X05050525251c0e2528280e3a52550505
X050505241b0c0d0d0d0d0d0c3c540505
X050505080a0b0b0c0c0b0b0b0a080505
X0505620608090a0a0a0a090908072805
X056873170607070808080707060c2e2a
X57676e94050505060606060505752d29
X525e6456310505050505050514722825
X45512f2e2b0a06050505050111141420
X35402726240b0b0b0509040410111119
X1e2c1e1d1b0b0b0b050904040c0d0d0c
X0b121312100b0b0b0504040407080807
X050b0b0b0b0b0b050505040404040404
X05050505050505050505050505050505
Xshowpage
EOF10808
exit

-------------------------------------------------------------------------------

END OF RTNEWS

# Ray-Tracers address list. 9/8/88
# All addresses are with respect to my address:
#
# Eric Haines - efficient intersection calc., coherency, spline intersection
# 3D/Eye Inc
# 410 E. Upland Rd
# Ithaca, NY  14850
# (607)-257-1381
# alias	eric_haines	hpfcla!hpfcrs!eye!erich@hplabs.HP.COM
alias	eric_haines	saponara@tcgould.tn.cornell.edu
#
alias	no_answer	john_chapman
# There are two distribution lists, ray_tracers1 and ray_tracers2, because
# my mailer dumps core if the list gets too long.
alias	ray_tracers1	jim_arvo \
			al_barr \
			brian_barsky \
			daniel_bass \
			rod_bogart \
			wim_bronsvoort \
			at_campbell \
			john_chapman \
			chuan_chee \
			michael_cohen \
			jim_ferwerda \
			fred_fisher \
			john_francis \
			phil_getto \
			andrew_glassner \
			jeff_goldsmith \
			chuck_grant \
			paul_haeberli \
			eric_haines \
			pat_hanrahan \
			paul_heckbert \
			michael_hohmeyer \
			jeff_hultquist \
			erik_jansen \
			ken_joy \
			mike_kaplan \
			tim_kay \
			dave_kirk \
			roman_kuchkuda \
			george_kyriazis \
			david_lister \
			pete_litwinowicz \
			gray_lorig \
			wayne_lytle
#
alias	ray_tracers2	tom_malley \
			don_marsh \
			michael_natkin \
			masataka_ohta \
			tom_palmer \
			darwyn_peachey \
			john_peterson \
			frits_post \
			pierre_poulin \
			thierry_priol \
			panu_rekola \
			linda_roy \
			cary_scofield \
			pete_segal \
			scott_senften \
			cliff_shaffer \
			susan_spach \
			stephen_spencer \
			rick_speer \
			steve_stepoway \
			mike_stevens \
			paul_strauss \
			kr_subramanian \
			russ_tuck \
			greg_turk \
			ben_trumbore \
			mark_vandewettering \
			greg_ward \
			bob_webber \
			lee_westover \
			andrew_woo
#
# Jim Arvo - 5D, caustics, efficiency
# CHF-O2-RD
# Apollo Computer
# 330 Billerica Road
# Chelmsford, Mass. 01824
# (617) 256-6600 X-7766
alias	jim_arvo	apollo!arvo@eddie.mit.edu
#
# Al Barr - realistic modeling and rendering
# Caltech
alias	al_barr		barr@csvax.caltech.edu
#
# Brian Barsky - splines
# Computer Science Division
# University of California,
# Berkeley, CA  94720
alias	brian_barsky		barsky@miro.berkeley.edu
#
# Daniel Bass - Realism, diffraction & wavelength effects
# Apollo Computer Inc.
# 300 Apollo Drive
# Chelmsford, MA 01824
# (508)-256-6600
alias	daniel_bass	daniel@apollo.com
#
# Rod Bogart - blending RT and images
# University of Utah
# 3190 MEB
# Salt Lake City, UT 84112
alias	rod_bogart	bogart%gr@cs.utah.edu
#
# Wim Bronsvoort
# Faculty of Mathematics & Informatics
# Delft University of Technology
# Julianalaan 132
# 2628 BL Delft
# The Netherlands
alias	wim_bronsvoort	dutrun!wim@mcvax.cwi.nl
#
# * A.T. Campbell, III
# The University of Texas at Austin
# Computer Sciences
# Austin, TX  78712
alias	at_campbell	atc@cs.utexas.edu
#
# John Chapman - spatio-temporal coherence, parallelism
# Simon Fraser University
alias	john_chapman	fornax!sfu-cmpt!chapman@cornell.uucp
alias	john_chapman_quick	fornax!bby-bc!john@cornell.uucp
#
# Chuan Chee  -  natural phenomena, ray tracing, animation
# University of Toronto
# Department of Computer Science
# 10 King's College Road
# Toronto, Ontario, Canada	
# M5S 1A4
# (416) 978-8725,
# (416) 978-6619
alias	chuan_chee	ckchee@dgp.toronto.edu
#
# Michael Cohen - radiosity
# Cornell Program of Computer Graphics
# 120 Rand Hall
# Ithaca, NY 14850
# (607)-255-4880
alias	michael_cohen	cohen@squid.tn.cornell.edu
#
# Jim Ferwerda
# Cornell Program of Computer Graphics
# 120 Rand Hall
# Ithaca, NY 14850
# (607)-255-4880
alias	jim_ferwerda	jaf@squid.tn.cornell.edu
#
# Fred Fisher - realism, radiosity enhancements, ray tracing manufacturing
#	databases
# 169 Summer St
# Maynard, MA  01754
#	home - (508)897-5937
#	work - (508)480-5352
alias	fred_fisher	FISHER%3D.dec@decwrl.dec.com
#
# John Francis - Intersection calculations, anti-aliasing, procedural
#   primitives, spline surfaces, integration of ray-tracing & other renderers.
# Apollo Computer
# 270 Billerica Rd.,
# Chelmsford, Mass. 01824
# (617)-256-6600 Ext. 7777
alias	john_francis	apollo!johnf@eddie.mit.edu
#
# Phil Getto
# Organization: RPI CS Dept.
alias	phil_getto	phil@yy.cicg.rpi.edu
#
# Andrew Glassner - octree subdivision, anti-aliasing
# Xerox PARC
alias	andrew_glassner	glassner@cs.unc.edu
#
# Jeff Goldsmith - hypercube ray-tracing, automatic bounding volume hierarchy
# JPL, Pasadena
alias	jeff_goldsmith	jeff@hamlet.caltech.edu
#
# Chuck Grant - tracing fewer rays, coherency, antialiasing
# Lawrence Livermore National Laboratory
# P. O. Box 5504 L-156
# Livermore California 94550
# (415) 422-7278
alias	chuck_grant	grant@icdc.llnl.gov
#
# Paul Haeberli
# Silicon Graphics
alias	paul_haeberli	sgi!paul@pyramid.pyramid.com
#
# Pat Hanrahan
# PIXAR
alias	pat_hanrahan	pixar!pat@ucbvax.berkeley.edu
#
# Paul Heckbert - beam tracing, efficiency, radiosity, Jell-O
# CS grad student		415-642-9716
# 508-7 Evans Hall, UC Berkeley		UUCP: ucbvax!miro.berkeley.edu!ph
# Berkeley, CA 94720			ARPA: ph@miro.berkeley.edu
alias	paul_heckbert	ph@miro.berkeley.edu
alias	paul_heckbert_summer	heckbert.pa@xerox.com
#
# Michael Hohmeyer
# Computer Science Division
# University of California,
# Berkeley, CA  94720
#	school:	508-7 Evans Hall, UC Berkeley	415-642-9716
alias	michael_hohmeyer	hohmeyer@miro.berkeley.edu
#
# Jeff Hultquist
# MS 233-14
# NASA
# Moffett Field, CA  94043
alias	jeff_hultquist	hultquis@prandtl.nas.nasa.gov
#
# Erik Jansen
# Dept of Industrial Design
# Delft University of Technology
# Jaffalaan 9
# 2628 BX Delft
# The Netherlands
alias	erik_jansen	dutio!fwj@mcvax.cwi.nl
#
# Ken Joy - intersection tests, efficiency, coherency
# Computer Graphics Research Lab
# Computer Science Division
# University of California
# Davis, CA   95616
# (916) 752-1077
alias	ken_joy		joy@ucdavis.edu
#
# Mike Kaplan - octree subdivision
# Ardent, SF area, CA
# (408)-732-0400
#	hplabs!dana!mrk
alias	mike_kaplan	dana!mrk@hplabs.hp.com
#
# Tim Kay - efficient intersection calculations
# Caltech
alias	tim_kay		tim@csvax.caltech.edu
#
# Dave Kirk - 5D ray-tracing, efficiency
# California Institute of Technology
# 256-80 Computer Science Department
# Pasadena, California  91125
# Office: 66 Jorgensen, (818)-356-6237
# (818) 356-6841  department secretary
alias	dave_kirk	dk@csvax.caltech.edu
#
# Roman Kuchkuda
# Megatek
alias	roman_kuchkuda	megatek!kuchkuda@ucsd.ucsd.edu
#
# George Kyriazis - parallel ray-tracing
# ECSE Dept., JEC,
# R.P.I.,
# Troy, NY 12180
alias	george_kyriazis	kyriazis@turing.cs.rpi.edu
#
# Olin Lathrop - hardware, octree, efficiency
# 55 Sunset Rd
# Groton, MA  01450
# was: alias	olin_lathrop	apollo!olin@eddie.mit.edu
#
# David Lister - efficiency, natural phenomena, optics, parallelism
# Data General Corp.
# 62 T. W. Alexander Drive
# Research Triangle Park, NC   27709
# (919) 248-6223
alias	david_lister	lister@dg-rtp.dg.com
#
# Pete Litwinowicz -- realistic image synthesis, etc.
# Apple Computer, INC
# 20525 Mariani Av,  MS 22-Y
# Cupertino, CA 95014
alias	pete_litwinowicz	litwinow@apple.com
#
# Gray Lorig - volumetric data rendering, sci-vi, and parallel & vector
#	architectures.
# Cray Research, Inc.
# 1333 Northland Drive
# Mendota Heights, MN  55120
# (612)-681-3645
alias	gray_lorig	gray%rhea.CRAY.COM@uc.msc.umn.edu
#
# Wayne Lytle
# Cornell Program of Computer Graphics
# 120 Rand Hall
# Ithaca, NY 14850
# (607)-255-4880
alias	wayne_lytle	wtl@cockle.tn.cornell.edu
#
# Tom Malley - parallelism, complex scenes, space subdiv., shading methods
# Evans & Sutherland Computer Corporation
# P.O. Box 8050
# Salt Lake City, Utah  84108
# (801)582-5847
# alias tom_malley malley@cs.utah.edu
# alias tom_malley esunix!tmalley@esunix.es.com
# alias tom_malley esunix!bambam!tmalley@cs.utah.edu
alias	tom_malley	esunix!tmalley@cs.utah.edu
#
# Don Marsh
# 801 Waverly, #4
# Palo Alto, CA 94301
alias	don_marsh	dmarsh@apple.apple.com
#
# Leonard McMillan
# Radiant Graphics
# 229 River Rd
# Red Bank, NJ 07701
# was: alias	leonard_mcmillan	lm%pixels@research.att.com
#
# Michael Natkin
# Box 1910
# Computer Science
# Brown University
# Providence, RI  02912
# (401) 863-7693
alias	michael_natkin	mjn@cs.brown.edu
#
# Masataka Ohta - efficiency, ray coherence
# Computer Center,
# Tokyo Institute of Technology,
# 2-12-1, O-okayama, Meguro-ku,
# Tokyo 113, JAPAN
# NOTE: msgs sent to/from Japan now cost $1.13 US per kilobyte, so no flaming!
alias	masataka_ohta	mohta%titcce.cc.titech.junet%utokyo-relay.csnet@RELAY.CS.NET
#
# Tom Palmer - applied ray tracing: realism & modeling for molecular graphics
# National Cancer Institute
# P.O. Box B  Bldg 430
# Frederick, MD 21701
# (301) 698-5797
alias	tom_palmer	palmer@ncifcrf.gov
#
# Darwyn Peachey
# Pixar
alias	darwyn_peachey	pixar!peachey@ucbvax.berkeley.edu
#
# John Peterson - bicubic splines, texturing
# Apple Computer
# MS 32E
# 20525 Mariani Ave
# Cupertino, CA 95014
alias	john_peterson	jp@apple.apple.com
#
# Frits Post
# Faculty of Mathematics & Informatics
# Delft University of Technology
# Julianalaan 132
# 2628 BL Delft
# The Netherlands
alias	frits_post	dutrun!frits@mcvax.cwi.nl
#
# Pierre Poulin - illumination models, natural phenomena
# University of Toronto
# Department of Computer Science
# 10 King's College Road
# Toronto, Ontario, Canada
# M5S 1A4
# (416) 978-6619
alias	pierre_poulin	poulin@dgp.toronto.edu
#
# Thierry Priol - Parallel space tracing - hypercube
# IRISA
# Campus de Beaulieu
# Av du general LECLERC
# 35042 RENNES CEDEX
# FRANCE
alias	thierry_priol	inria!irisa!priol@mcvax.cwi.nl
#
# Rich Redner
# University of Tulsa
# Tulsa, OK  74104
#
# Panu Rekola - spline intersection, illumination models, textures
# Helsinki University of Technology
# Laboratory of Information Processing Science
# Room Y229A
# SF-02150 Espoo
# Finland
#	pre@cs.hut.fi
alias	panu_rekola	pre@cs.hut.fi
#
# Linda Roy
# Silicon Graphics
# M-S B-170
# 2011 Shoreline Blvd
# Mountain View, CA
alias	linda_roy	lroy@sgi.com
#
# Cary Scofield
# Apollo Computer Inc.
# 270 Billerica Rd.
# Chelmsford MA 01824
alias	cary_scofield	apollo!scofield@eddie.mit.edu
#
# Pete Segal
# AT&T Pixel Machines
# Room 4K-208
# Crawfords Corner Road
# Holmdel, NJ  07733
# (201)-949-1244
alias	pete_segal	pls%pixels@research.att.com
alias	old_pete_segal	ihnp4!homxc!pixels!pls@cornell.uucp
#
# * Scott Senften
# University of Tulsa
# 600 S. College
# Tulsa, OK 74104
alias	scott_senften	apctrc!bigmac!senften@cornell.uucp
#
# Cliff Shaffer - hierarchical data structures, algorithm analysis
# Department of Computer Science
# Virginia Tech
# Blacksburg VA 24061
# (703)-961-6931
alias	cliff_shaffer	shaffer@vtopus.cs.vt.edu
#
# Susan Spach
# HP Labs
# 1501 Page Mill Rd
# Reference: Bldg 29A Upper
# Palo Alto, CA  94303
alias	susan_spach	spach@hplabs.hp.com
#
# Rick Speer
alias	rick_speer	speer@ucbvax.berkeley.edu
#
# Stephen Spencer - accurate diffuse light calculation, antialiasing
# The Ohio State University Advanced Computing Center for the Arts and Design
# 1224 Kinnear Road
# Columbus Ohio  43212
# (614) 292-3416
alias	stephen_spencer	spencer@tut.cis.ohio-state.edu
#
# Steve Stepoway -- efficient intersection calc., parallelism
# 713 Ranier Circle
# Garland, Texas  75041
# (214) 278-4949
alias	steve_stepoway	stepoway@smu.edu
#
# * Mike Stevens
# University of Tulsa
# 1620 S. 94th E. Ave
# Tulsa, OK 74112
alias	mike_stevens	apctrc!zfms0a@cornell.uucp
#
# Paul Strauss - CSG
# Brown University
alias	paul_strauss	pss@cs.brown.edu
#
# * K.R. Subramanian
# The University of Texas at Austin
# Computer Sciences
# Taylor 2-126
# Austin, TX  78712
alias	kr_subramanian	subramn@cs.utexas.edu
#
# Daniel Toth
# Ford Motor Company
# Rm 2228 Bldg 3
# PO Box 2053
# Dearborn, MI 48121
#
# Russ Tuck - SIMD parallel algorithms and architectures for ray tracing
# Computer Science Dept, CB 3175
# Univ. of North Carolina
# Chapel Hill, NC 27599
# (919) 962-1755 (or ...-1932)
alias	russ_tuck	tuck@cs.unc.edu
#
# Greg Turk - rendering equation & tracing from lights
# UNC at Chapel Hill
# P.O. Box 26
# Chapel Hill, NC 27514
# (919) 962-1918
alias	greg_turk	turk@cs.unc.edu
#
# Ben Trumbore
# Cornell Program of Computer Graphics
# 120 Rand Hall
# Ithaca, NY 14850
# (607)-255-4880
alias	ben_trumbore	wbt@cockle.tn.cornell.edu
#
# Mark VandeWettering
alias	mark_vandewettering	markv@cs.uoregon.edu
#
# Greg Ward - accuracy and realism, daylight, efficient light sources.
# Lawrence Berkeley Laboratory
# 1 Cyclotron Rd., 90-3111
# Berkeley, CA  94720
# (415) 486-4757
# alias greg_ward	greg@lbl-csam.arpa
alias	greg_ward	gjward@lbl.gov
#
# Robert E. Webber, Prof. - Ray tracing large databases of volume density data
# Computer Science Department
# Rutgers University, Busch Campus
# New Brunswick, New Jersey   08903
# (webber@athos.rutgers.edu ; rutgers!athos.rutgers.edu!webber)
alias	bob_webber	webber@aramis.rutgers.edu
#
# Lee Westover (also Doug Turner - same address)
# 207 Sitterson Hall
# Dept of Computer Science
# UNC-Chapel Hill
# Chapel Hill, NC 27514
alias	lee_westover	westover@cs.unc.edu
#
# Andrew C. Woo - intersection culling algorithms and complexity
# University of Toronto
# Department of Computer Science
# 10 King's College Road
# Toronto, Ontario, Canada	
# M5S 1A4
# (416) 978-6986, 
# (416) 978-6619
alias	andrew_woo 	andreww@dgp.toronto.edu
  

From saponara@tcgould.TN.CORNELL.EDU Thu Sep  8 15:55:07 1988
Return-Path: <saponara@tcgould.TN.CORNELL.EDU>
Received: from tcgould.TN.CORNELL.EDU by turing.cs.rpi.edu (4.0/1.2-RPI-CS-Dept)
	id AA05308; Thu, 8 Sep 88 15:54:57 EDT
Date: Thu, 8 Sep 88 15:53:51 EDT
From: saponara@tcgould.tn.cornell.edu (John Saponara)
Received: by tcgould.TN.CORNELL.EDU (5.54/1.2-Cornell-Theory-Center)
	id AA25908; Thu, 8 Sep 88 15:53:51 EDT
Message-Id: <8809081953.AA25908@tcgould.TN.CORNELL.EDU>
To: kyriazis@turing.cs.rpi.edu
Subject: back issues, part 1 of 3
Status: RO

George,
	Here are the back issues.  Could you forward these and the latest
copy of the News and the mailing list (they're all coming today) to Phil
Getto for me?  Thanks.  Also, tell Phil he must send me a note if he wants
to be on the distribution list.

Eric H.

----------------------------------------------------

	I'm presently keeping the list up-to-date.  As far as adding new people
to this mailing list goes, I'd personally like to see the list not grow without
bounds.  Given that the Intro to Ray Tracing course had the highest attendance,
there are obviously a lot of people interested in ray-tracing.  The group
presently consists of researchers and people building ray-tracing systems, and
it'd be nice to keep it at this level (and keep all of our long-distance e-mail
costs down).

	First off, a quick announcement:  if you didn't get a copy of the
"Intro. to Ray Tracing" course notes at SIGGRAPH 87 and would like a copy
(they sold out twice), send me $20 and I'll xerox them.  There are only three
articles which are reprints - the rest is new stuff and pretty worthwhile.


[Skip the next offering if you read it in The Ray Tracing News]

SPD & NETLIB:

	My news for the day is that netlib is now carrying my "standard"
procedural database generators (written in C).  If you don't know about netlib,
here's the two minute explanation:

-------------------------------------------------------------------------------

	Netlib has two addresses.  One is:

	...!hplabs!hpfcla!research!netlib

(There may be other quicker routes to netlib - you'll have to research that
yourself).  The second one may be more convenient, as it is an arpa connection:

	netlib@anl-mcs.arpa

So after you do "mail [...!]hplabs!hpfcla!research!netlib", the next step is to
request what you want, one line per request.  For example, to get my databases,
simply type on a single line (and don't have anything else on the line):

	send haines from graphics

and end the message.  Netlib is computerized, and will automatically parse
your message and send you the 82K bytes of dox & code for my databases.

	The best way to find out about netlib and what it has to offer is to
send it a request:

	send index

and you'll get sent a listing of all the libraries available.  It's mostly
numerical analysissy stuff (lots'o'matrix solvers), but there are some things
of interest.  One particularly yummy database is the "polyhedron"
contributions.  There are some 142 polyhedra descriptions (vertices, faces, &
edge descriptions and more).  Some of these descriptions are buggy, but most
are good (as netlib says, "Anything free comes with no guarantee").

-----------------------------------------------------------------------------

	As far as the question "what do the images look like?" goes, the
images will be published in IEEE CG&A in November.


SPLINE SURFACES:

	A question which I want to answer is "which is the best way in
software to ray-trace bicubic spline patches?"  In particular, I want to
intersect bicubics (also quadrics and linears, and any mix of the three, e.g.
bicubic u, quadric v) that can also be rational, and also have non-uniform
knots.  As an added bonus, I'd like to do trimming curves.  I am interested in
anyone's feelings or findings on this subject, especially any experiences with
actual implementation they may have had.

To kick off discussion of this problem, John Peterson, who is researching this
question at the University of Utah, was kind enough to spend some time on a
response to me.  His reply follows (printed with his permission):

------------------------------------------------------------------------------

RE ray tracing splines..  I've sent a paper on ray tracing splines via
polygons to TOG, but since that's going to be stuck in the review process
for a while, here's an overview:

Most of the recently published stuff on this has been approaches using
numerical methods, which I would avoid like the plague.  I recently
discovered that Whitted mentions surface subdivision very briefly in his
classic paper (CACM '80) and also in Rubin & Whitted (SIGGRAPH '80).
The technique they use was to subdivide the surface "on the fly", i.e.,
an area of surface is split only when rays come near it.  Whitted
doesn't go into any detail in these papers though, just a couple of
paragraphs each.

However, Whitted's motivation for "subdivision on the fly" was lack of
memory on his PDP-11 - nowadays I think you're better off just to do all
the subdivision initially, then throw the surface away - much less
bookkeeping.  The polygon/bounding volume database isn't that huge if you
use adaptive surface subdivision (more on that in a moment).

In terms of references, I'd highly recommend the "Killer B's" - "An 
Introduction to the Use of Splines in Computer Graphics" by Bartels,
Beatty and Barsky.  It appeared as SIGGRAPH tutorial notes in '85 and
'86, and I think a book version is coming out from Kaufmann(sp?) in
September.  Another good reference is, "A Survey of Curve and Surface
Methods in CAGD", by Bohm, Farin and Kahmann, in "Computer Aided Geometric
Design", July 1984.  Both of these give excellent background on all the
math you'll need for dealing with splines.  If you need to know about
rationals, see Tiller's paper "Rational B-Splines for Curve and Surface
Representations" in CG&A, September '83.

The subdivision algorithm I used is based on the Oslo Algorithm (Cohen,
Lyche & Riesenfeld, Computer Graphics & Image Proc., Oct. 1980).  This
is a little slower than some of the other subdivision algorithms, but
the win is that you're not restricted to specific orders or knot
vectors.  Since the subdivision time is generally small compared to the
ray tracing time (like < 10%) I find it's worth the generality.  (By
the way, the Killer B's description of the Oslo algorithm is much easier
reading than the CG&IP article.  Sweeney's paper in the Feb. '86 CG&A
also has a good description of it).  Other subdivision classics are Ed
Catmull's PhD thesis (U. Utah, '75) and Lane, Carpenter, Whitted &
Blinn's article in the Jan. '80 CACM.

A couple of tricks are noteworthy.  First, if you're doing adaptive surface
subdivision, you'll need a "flatness criterion" (to determine when to
quit splitting the surface).  I've found a good approximation is to take
the bounding box of the control mesh and find the point in the middle
of it.  Then project the size of a pixel onto this point, and use this
distance as the flatness threshold.

Other trick: Crack prevention.  If you split a surface into two parts,
one part may get subdivided more than the other.  If this happens, along
the seam between the two halves you need to force the polygon vertices in the
side with more divisions to lie on the edge of the surface with fewer
subdivisions.
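
A sketch of that forcing step in C (illustrative names, not from any
particular implementation): when the finer side introduces a seam vertex its
coarser neighbor doesn't have, project that vertex onto the neighbor's
straight edge so both meshes agree along the seam.

```c
typedef struct { double x, y, z; } Vec3;

/* Snap a seam vertex from the more finely subdivided side onto the
   straight polygon edge (a-b) of the coarser neighbor, closing the
   crack between the two meshes. */
Vec3 snap_to_coarse_edge(Vec3 v, Vec3 a, Vec3 b)
{
    Vec3 ab = { b.x - a.x, b.y - a.y, b.z - a.z };
    Vec3 av = { v.x - a.x, v.y - a.y, v.z - a.z };
    double len2 = ab.x*ab.x + ab.y*ab.y + ab.z*ab.z;
    double t = (len2 > 0.0)
             ? (av.x*ab.x + av.y*ab.y + av.z*ab.z) / len2
             : 0.0;
    if (t < 0.0) t = 0.0;      /* clamp to the edge's endpoints */
    if (t > 1.0) t = 1.0;
    Vec3 p = { a.x + t*ab.x, a.y + t*ab.y, a.z + t*ab.z };
    return p;
}
```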


My reply to John follows:

	Thanks much for taking the time to tell me about splines and your
findings.  You leave me in a quandary, though.  I'm interested in the
numerical techniques for bicubics, but I also want to take your warnings to
heart.

	I have to admit, I have a great fear of throwing away the nice
compact spline description and blowing it up into tons of polygons.  From your
comments, you say that using adaptive techniques can help avoid this problem.
You seem to divide to the pixel level as your criterion - hmmm, I'll have to
think about that.  Avoiding cracks sounds like a headache.  Also, it seems to
me that you'll have problems when you generate reflection rays, since for these
rays the "flatness criterion" is not necessarily valid.  Have you ever noticed
practical problems with this?  (One pathological example I can think of is a
lens in front of a spline patch: the lens magnifies the pixel-sized patches
into much larger entities.  However, almost everything has pathological
problems of some sort.)  Have you run into any problems with your method on a
practical level?

	I may try subdividing on the fly to avoid all this.  To this end, have
you looked at Ron Pulleyblank's routine for calculating bicubic patch
intersections (IEEE CG&A March 1987)?  What do you think of his "on the fly"
subdivision algorithm?

	Articles: thanks for the references.  Have you seen the article by
Levner, Tassinari, and Marini, "A Simple Method for Ray Tracing Bicubic
Surfaces," in Computer Graphics 1987, T.L. Kunii editor, Springer-Verlag, Tokyo?
Sounded intriguing - someone's hopefully going to get me a copy of it soon
if you don't have it and would like a copy.  If you do have a copy, is it any
good?

----------------------------------------------------------------------------

Now, here's John's response:

RE: Numerical techniques.  I guess grim memories of round-off errors
and consistently inconsistent results may bias my opinion, but there
are some fundamental reasons for the problems with numerical methods.
Finding roots of systems of two equations is inherently an unstable
process (for a good description of this, see section 9.6 in "Numerical
Recipes" by William Press, et al.).  Another way to think about
iterative approximations in two variables is the chaotic patterns
you see in Mandelbrot sets.  It's (very roughly) the same idea.  Chaos
and ray tracing don't mix...

Your comments about the flatness criterion are true, though in practice
I've only found one "practical" instance where it really poses a
problem.  This is when a light source is very close to an object, and
casts a shadow on a wall some distance away.  The shadow projection
magnifies the surface's silhouette onto the wall, and in some cases
you see some faceting in the shadow's edge.  The work-around is to
have a per-surface "resolution factor" attribute.  The flatness
criterion found by projecting the pixel is multiplied by this factor,
so a surface with a "res factor" of 0.5 may generate up to twice as
many polygons as it normally would (extra polygons are generated only
where the surface is really curved, though).

In order to get a feel for just how much data subdivision generates, I
tried the following experiment.  I took the code for balls.c (from
the procedural database package you posted) and modified it to
generate a rational quadratic Bezier surface for each sphere
(as well as bounding volumes around each "group" of spheres).  I
didn't do the formal benchmark case (too lazy), but just chose a view
where all the spheres (level 2 == 91 of them) just filled the screen.
The results look like this:

	Image Size  Triangles
	(pixels)    generated
	128x128	     7800
	512x512     30400

The original spline surface definition wasn't small; each sphere has
45 rational (homogeneous) control points + knot vectors.  My
philosophy at the moment is that the algorithms for handling lots of
trivial primitives win over those for handling a few complex ones.
Right now the "lots of little primitives" camp has a lot of strong
members (Octrees/Voxels, Kay/Kajiya/Caltech, Arvo & Co, etc).  If you
just want to maximize speed, I think these are difficult to beat, but
they do eat lots of memory.

I'd be very interested in seeing a software implementation of Pulleyblank's
method.  The method seemed very clever, but it was very hardware oriented
(lots of integer arithmetic, etc).  I guess the question is whether or not
their subdivision algorithm is faster than just a database traversal.  
Something like Kay/Kajiya or Arvo's traversal methods would probably scream
if you could get them to run in strictly integer arithmetic (not to mention
putting them in hardware...)

Cheers,
jp

----------------------------------------------------------------------------

Anyway, that's the discussion as far as it's gone.  We can continue it in one
of two ways:  (1) everyone writes to everyone on the subject (this is quick,
but can get confusing if there are a lot of answers), (2) send replies to me,
which I'll then send to all.  I opt for (1) for now:  if things get confusing
we can always shift to (2).

[Actually, we're shifting to (2) around now, though it seems worthwhile to
pass on your comments to all, if you're set up to do it.  A summary of the
comments will (eventually, probably) get put in Andrew's ray-tracing
newsletter.]


More responses so far:

>From Jeff Goldsmith:

Re: flatness criterion

The definition that JP gave seems to be based on pixel geometry.
That doesn't seem right.  Why not subdivide until you reach 
subpatches that have preset limits in the change in their 
tangent vector (bounded curvature)?  Al Barr and Brian Von
Herzen have done some formal studies of that in a paper given
this year.  (It wasn't applied to ray tracing, but it doesn't
matter.)  I used that technique for creating polygonal representations
of superquadrics with fair to good success.  The geometric 
criterion makes sure that not much happens to the surface
within a patch, which is what you really want, anyway. 

I, too, by the way, believe in the gobs of polygons vs. one 
complicated object tradeoff.  The two seem to be close in
speed, but polygons save big time in that you never need
new code for your renderer.  I hate writing (debugging) 
renderer code because it takes so long.  Modeling code
is much faster.
				--Jeff
 

>From Tim Kay:

Subject: ray tracing bicubic patches

The discussion about subdivision was interesting.  I just want to point
out that a paper in this year's proceedings (Snyder and Barr) did just
what the discussion suggested.  The teapot was modeled with patches,
and John hacked them up into very small polygons.  He also talked about
some of the problems that you run into.

Tim

------------------

>From Brian Barsky:

What numerical techniques is John referring to?  He doesn't mean the 
resolvent work, does he?

----------------------------

Response from John Peterson:

I was using a modified version of Sweeney's method.  It was extended in
two ways: first, a more effective means was used to generate the bounding
volumes around the mesh; second, it was able to handle surfaces with arbitrary
orders and knot vectors.  I wrote up the results in a paper that appeared
in a very obscure proceedings (ACM Rocky Mtn. Regional Conference,
Santa Fe, NM, Nov. 1986).

------------------------------------------------------


ABNORMAL NORMALS:

>From Eric Haines:

My contribution for the week is an in-house memo I just wrote on transforming
normals, which is harder than it sounds.  Some of you have undoubtedly dealt
with this long ago, but hopefully I'll notify some poor soul that all is not
simple with normal transforms.  Pat Hanrahan mentioned this problem in his talk
at the SIGGRAPH '87 Intro to Ray Tracing course, but I didn't really understand
why he was saying "don't use the modeling matrix to transform normals!"  Now
I do, and I thought I'd explain it in a fairly informal way.  Any comments
and corrections are appreciated!
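
For the impatient, the punch line can be sketched in a few lines of C (my own
illustration, not code from the memo): transform normals by the cofactor
matrix of the upper-left 3x3 of the modeling matrix, which points the same
way as the inverse-transpose since cof(M) = det(M) * inverse-transpose(M),
but skips the divide by the determinant (normals get renormalized anyway).

```c
/* Transform a normal by the cofactor matrix of the 3x3 part of an
   affine modeling matrix.  Since only the normal's direction matters,
   this gives the same answer as the inverse-transpose without
   inverting the matrix.  Purely illustrative; names are mine. */
void normal_xform(const double m[3][3], const double n[3], double out[3])
{
    double c[3][3];
    c[0][0] = m[1][1]*m[2][2] - m[1][2]*m[2][1];
    c[0][1] = m[1][2]*m[2][0] - m[1][0]*m[2][2];
    c[0][2] = m[1][0]*m[2][1] - m[1][1]*m[2][0];
    c[1][0] = m[2][1]*m[0][2] - m[2][2]*m[0][1];
    c[1][1] = m[2][2]*m[0][0] - m[2][0]*m[0][2];
    c[1][2] = m[2][0]*m[0][1] - m[2][1]*m[0][0];
    c[2][0] = m[0][1]*m[1][2] - m[0][2]*m[1][1];
    c[2][1] = m[0][2]*m[1][0] - m[0][0]*m[1][2];
    c[2][2] = m[0][0]*m[1][1] - m[0][1]*m[1][0];
    for (int i = 0; i < 3; i++)
        out[i] = c[i][0]*n[0] + c[i][1]*n[1] + c[i][2]*n[2];
}
```

Stretching the memo's 45-degree plane (normal [1 -1 0]) by 2 along x this way
yields [1 -2 0], which really is perpendicular to the stretched plane x = 2y,
whereas transforming by the plain modeling matrix would have given [2 -1 0].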

The file's in troff format, run by:

	pic thisfile | troff -mm

(The document was written partly so that I could learn about troff and pic,
so it's pretty primitive).


All for now.  The file follows, indented by 4 spaces so that no "."'s would
be in column one (which some e-mailers evidently don't like).

----------------------cut here---------------------
    .tr ~
    .ds HF 3 3 2 2 2 2 2
    .nr Hi 0
    .HM I A 1 a
    .nr Ht 1
    .nr Hb 6
    .nr Hs 6
    .nr Cl 7
    .na
    .fi
    .ad l
    \f(CS
    .ce
    Abnormal Normals
    .ce
    Eric Haines, 3D/Eye Inc.

	The problem:  given a polygon and its normal(s) and a modeling matrix, how
    do we correctly transform the polygon from model to world space?  We assume
    that the modeling matrix is affine (i.e. no perspective transformation is going
    on).

	This question turns out to be fraught with peril.  The right answer is
    to transform the vertices using the modeling matrix, and to transform the
    normals using the transpose of the inverse (also known as the adjoint) of the
    modeling matrix.  However, no one believes this on first glance.  Why do all
    that extra work of taking the inverse and transposing it?  So, we'll present
    the wrong answers (which are commonly used in the graphics community
    nonetheless, sometimes with good reason), then talk about why the right answer
    is right.

	Wrong answer #1:  Transform the normals using the modeling matrix.  What
    this means is multiplying the normal [~x~y~z~0~] by the modeling matrix.  This
    actually works just fine if the modeling matrix is formed from translation
    matrices (which won't affect the normal transformation, since translations
    multiply the 'w' component of the vector, which is 0 for normals) and rotation
    matrices.  Scaling matrices are also legal, as long as the x, y, and z
    components are the same (i.e. no "stretching" occurs).  Reflection matrices
    (where the object is flipped through a mirror plane - more about these later)
    are also legal, as long as there is no stretching.  Note that scaling will
    change the overall length of the vector, but not the direction.

	So what's wrong?  Well, scaling matrices which stretch the object (i.e.
    whose scaling factors are not all the same for x, y, and z) ruin this scheme.
    Imagine you have a plane at a 45 degree tilt, formed by the equation
    x~=~y (more formally, x~-~y~=~0).  Looking down upon the x-y plane from the z
    axis, the plane would appear as a line x~=~y.  The plane normal is [~1~-1~0~]
    (for simplicity don't worry about normalizing the vector), which would appear
    to be a ray where x~=~-y, x~>~0.  Now, say we scale the plane by stretching
    it along the x axis by 2, i.e. the matrix:

	[~2~0~0~0~]
    .br
	[~0~1~0~0~]
    .br
	[~0~0~1~0~]
    .br
	[~0~0~0~1~]

    This would form a plane in world space where x~=~2y.  Using the method of
    multiplying the normal by this modeling matrix gives us a ray where x~=~-2y,
    x~>~0.  The problem with this ray is that it is not perpendicular to our
    plane.  In fact, the normal is now 2x~=~-y, x~>~0.  Therefore, using the
    modeling matrix to transform normals is wrong for the
    stretching case.


    .DS
    .PS 6.3

    # x-y grid
    LX: arrow up up
    "+y" at LX.end above
    move to LX.end
    move left 0.25
    "before" "transform"
    move to LX.start
    LY: arrow right right
    "+x" at LY.end ljust
    move to LY.start
    line left ; move to LY.start
    line down ; move to LY.start

    # plane
    M: line up right up right
    "plane" at M.end + (-0.05,0.0) rjust
    move to M.start
    line down left
    move to M.start

    N: arrow down right dashed
    "normal" at N.end + (0.05,0.0) ljust
    move to N.start

    ##############
    move right 2.0
    # x-y grid
    LX: arrow up up
    "+y" at LX.end above
    move to LX.end
    move left 0.25
    "after" "transform"
    move to LX.start
    LY: arrow right right
    "+x" at LY.end ljust
    move to LY.start
    line left ; move to LY.start
    line down ; move to LY.start

    # plane
    M: line up right right
    "plane" at M.end + (-0.05,0.0) rjust
    move to M.start
    line down 0.25 left
    move to M.start

    N: arrow down right right dashed
    box invisible height 0.25 "bad" "normal" with .n at N.end
    move to N.start

    N: arrow down right 0.25 dotted
    box invisible height 0.25 "correct" "normal" with .n at N.end
    move to N.start
    .PE


    .ce
    Figure 1 (a) & (b) - Stretching Transformation
    .DE
    .na
    .fi
    .ad l


	Wrong answer #2:  Transform the vertices, then calculate the normal.  This
    is a limited response to the wrongness of method #1, solving the stretching
    problem.  It's limited because this method assumes the normal is calculated
    from the vertices.  This is not necessarily the case.  The normals could be
    supplied by the user, given as a normal for the polygon, or on a normal per
    vertex basis, or both.  However, even if the system only allowed normals which
    were computed from the vertices, there would still be a direction problem.

	Say the method used to calculate the normal is to take the cross product of
    the first two edges of the polygon (this is by far the most common method;
    most other methods based on the local geometry of the polygon will suffer
    from the same problem, or else from the problem in method #1).  Say the
    vertices are
    [~1~0~0~], [~0~0~0~], and [~0~-1~0~].  The edge vectors (i.e. the vector formed
    from subtracting one vertex on the edge from the other vertex forming that edge)
    are [~1~0~0~] and [~0~1~0~], in other words the two edge vectors are parallel
    to the +x and +y axes.  The normal is then [~0~0~1~], calculated from the cross
    product of these vectors.
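In code, this common calculation is just the following (a quick sketch in Python, purely for illustration; the function name is mine):

```python
def poly_normal(verts):
    # normal from the cross product of the first two edge vectors,
    # each formed by subtracting one edge vertex from the other
    v0, v1, v2 = verts[0], verts[1], verts[2]
    e1 = [v0[i] - v1[i] for i in range(3)]   # first edge vector
    e2 = [v1[i] - v2[i] for i in range(3)]   # second edge vector
    return [e1[1]*e2[2] - e1[2]*e2[1],
            e1[2]*e2[0] - e1[0]*e2[2],
            e1[0]*e2[1] - e1[1]*e2[0]]

# the vertices from the text give the +z normal:
# poly_normal([[1,0,0],[0,0,0],[0,-1,0]]) -> [0, 0, 1]
```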

	If we transform the points by the reflection matrix:

	[~1~0~~0~0~]
    .br
	[~0~1~~0~0~]
    .br
	[~0~0~-1~0~]
    .br
	[~0~0~~0~1~]

    the result is the same: none of the edges actually moved.  However, when we
    use a reflection matrix as a transform it is assumed that we want to reverse the
    object's appearance.  With the above transform the expected result is that
    the normal will be reversed, thereby reversing which side is thought of as
    the front face.  Our method fails on these reflection transforms because it
    does not reverse the normal:  no points changed location, so the normal will
    be calculated as staying in the same direction.

	The right (?) answer:  What (usually) works is to transform the normals
    with the transpose of the inverse of the modeling matrix.  Rather than trying
    to give a full proof, I'll talk about the three types of matrices which are
    relevant:  rotation, reflection, and scaling (stretching).  Translation was
    already seen to have no effect on normals, so we can ignore it.  Other more
    obscure affine transformations (e.g. shearing) are avoided in the discussion,
    though the method should also hold for them.

	In the case of rotation matrices and reflection matrices, the transpose and
    the inverse of these transforms are identical.  So, the transpose of the
    inverse is simply the original modeling matrix in this case.  As we saw, using
    the modeling matrix worked fine for these matrices in method #1.  The problems
    occurred with stretching matrices.  For these, the inverse is not just a
    transpose of the matrix, so the transpose of the inverse gives a different
    kind of matrix.  This matrix solves our problems.  For example, with the bad
    stretching case of method #1, the transpose of the inverse of the stretch
    matrix is simply:

	[~0.5~0~0~0~]
    .br
	[~~0~~1~0~0~]
    .br
	[~~0~~0~1~0~]
    .br
	[~~0~~0~0~1~]

    (note that the transpose operation is not actually needed in this particular
    case).  Multiplying our normal [~1~-1~0~] by this matrix yields [~0.5~-1~0~],
    or the equation 2x~=~-y, x~>~0, which is the correct answer.
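The arithmetic above can be checked mechanically.  A minimal sketch (Python, purely for illustration; row vectors multiply on the left, matching the text's convention, and all function names are mine):

```python
def det3(m):
    # determinant of a 3x3 matrix
    return (m[0][0]*(m[1][1]*m[2][2] - m[1][2]*m[2][1])
          - m[0][1]*(m[1][0]*m[2][2] - m[1][2]*m[2][0])
          + m[0][2]*(m[1][0]*m[2][1] - m[1][1]*m[2][0]))

def inverse_transpose3(m):
    # transpose of the inverse, built from the cofactor matrix:
    # (M^-1)^T = cofactor(M) / det(M)
    d = det3(m)
    return [[(m[(i+1)%3][(j+1)%3]*m[(i+2)%3][(j+2)%3]
            - m[(i+1)%3][(j+2)%3]*m[(i+2)%3][(j+1)%3]) / d
             for j in range(3)] for i in range(3)]

def xform(v, m):
    # row vector times 3x3 matrix
    return [sum(v[k]*m[k][j] for k in range(3)) for j in range(3)]

stretch = [[2,0,0],[0,1,0],[0,0,1]]
# xform([1,-1,0], stretch) is the bad normal [2,-1,0];
# the inverse transpose gives the correct [0.5,-1,0]
good = xform([1,-1,0], inverse_transpose3(stretch))
```

As a check, the edge direction [1,1,0] in the plane x=y maps to [2,1,0] under the stretch, and its dot product with [0.5,-1,0] is zero, while against the bad normal [2,-1,0] it is 3.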

	The determinant:  One problem with taking the inverse is that sometimes
    it isn't defined for various transforms.  For example, casting an object onto
    a 2D x-y plane:

	[~1~0~0~0~]
    .br
	[~0~1~0~0~]
    .br
	[~0~0~0~0~]
    .br
	[~0~0~0~1~]

    does not have an inverse:  there's no way to know what the z component should
    turn back into, given that the above transform matrix will always set the z
    component to 0.  Essentially, information has been irretrievably destroyed
    by this transform.  The determinant of the upper-left 3x3 matrix (the only
    part of the matrix we really need to invert for the normal transform) is 0,
    which means that this matrix is not invertible.

	An interesting property of the determinant is that it, coupled with method
    #2, can make that method work.  If the determinant of the 3x3 is positive, we
    have not shifted into the mirror world.  If it is negative, then we should
    reverse the sign of the calculated normal, as we have entered the mirror
    universe.
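A sketch of that patched method #2 (Python, for illustration only; the names are mine):

```python
def det3(m):
    # determinant of the upper-left 3x3 of the modeling matrix
    return (m[0][0]*(m[1][1]*m[2][2] - m[1][2]*m[2][1])
          - m[0][1]*(m[1][0]*m[2][2] - m[1][2]*m[2][0])
          + m[0][2]*(m[1][0]*m[2][1] - m[1][1]*m[2][0]))

def corrected_normal(normal, m):
    # normal was computed from the transformed vertices; a negative
    # determinant means the transform is a reflection, so reverse it
    if det3(m) < 0:
        return [-c for c in normal]
    return normal

z_reflect = [[1,0,0],[0,1,0],[0,0,-1]]
# the z-reflection from the text leaves the vertices alone, so the
# recomputed normal is still [0,0,1]; the determinant test flips it
```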

	It would be nice to get a normal for polygons which have gone through this
    transform.  All bets are off, but some interesting observations can be made.
    The normal must be either [~0~0~1~] or its negative [~0~0~-1~] for this
    transformation (or undefined, if all vertices are now on a single line).
    Choosing which normal is a bit tricky.  One OK method is to check the normal
    before transform against [~0~0~1~]: if the dot product of the two is negative,
    then reverse the normal so that it will point towards the original direction.
    However, suppose our points first went through the z-reflection matrix we
    used earlier and then through the transform above: the normals were
    reversed, then the object was cast onto the x-y plane.  In this case we
    would want the reverse of the normal calculated from the edges.  However,
    this reversal has been lost by our casting transform:  concatenating the
    reflection matrix with the casting matrix yields the same casting matrix.
    One tricky way of preserving it is to allow 0 and -0 to be separate
    entities, with the sign of zero telling us whether to reverse the normal
    or not.  This trick is rather bizarre, though - it's probably easier to
    just do it the simple way and warn whoever's using the system to avoid
    non-invertible transformations.

THE END:

	Well, that's all for now.  Do you have any comments? questions?
interesting offerings for the group?  Either send your bit to everyone on the
list, or send a copy on to me and I'll post it to all.  I realize this is
quite a deluge of info for one message, but all of this has accumulated over a
few months.  The traffic so far has been quite mild: don't worry about future
flooding.

	All for now,

	Eric Haines

----------------------------------------

 _ __                 ______                         _ __
' )  )                  /                           ' )  )
 /--' __.  __  ,     --/ __  __.  _. o ____  _,      /  / _  , , , _
/  \_(_/|_/ (_/_    (_/ / (_(_/|_(__<_/ / <_(_)_    /  (_</_(_(_/_/_)_
             /                               /|
            '                               |/

Ray Tracing News, e-mail edition, 1/15/88

concatenated by Eric Haines, hpfcla!hpfcrs!eye!erich@hplabs.HP.COM

Well, we've all been massively inactive as far as ray tracing news, what with
SIGGRAPH and the holidays.  Now that the rush is over, I thought I'd pass on
some additional comments on spline surfaces and how to ray-trace them, a
polemic against octree subdivision, and end with a quick list of recommended
books.  Finally, the updated mailing list (note that Andrew Glassner moved).

Speaking of whom, Andrew Glassner would like contributions to "The Ray Tracing
News", hardcopy edition.  He hopes to publish another one soon, but says it may
be the last if no one sends him any more material.  So, if you have an
interesting technical memo or other short (less than 5 pages) piece you'd like
to share with the rest of us, please write him (see the mailing list).

	All for now,

	Eric

------------------------------------------------------------------------------


>From: hpfcla!hpda!uunet!mcvax!dutio!fwj (erik jansen)
Subject: subdivision and CSG.

I went briefly through the discussion.  I have been working on most of these
items for the last five years.  Some of the results are described in 'Solid
modelling with faceted primitives', my PhD thesis 'book'.  It is printed (108
pages) with a colour cover.  People who are interested in a free copy can mail me.

Here is the abstract:


     Solid modelling with faceted primitives

     F.W. Jansen


     Computer Aided Design and Computer Graphics techniques are valuable
     tools in industrial design for the design and visualisation of
     objects. For the internal representation of the geometry of objects,
     several geometric modelling schemes are used. One approach,
     Constructive Solid Geometry (CSG), models objects as combinations of
     primitive solids.  The subject of research in this project is a CSG
     representation where the surfaces of the primitives are approximated
     with flat surface elements (facets).  Techniques to improve the
     efficiency of the display of models with a large number of these
     surface elements have been developed.

          Two approaches have been taken.  The first approach is based on
     the use of additional data structures to enhance the processing,
     sorting and search of these surface elements.  Further, a method is
     presented to store intermediate results of the logical computations
     needed for the processing of CSG representations.  These methods are
     applied to a CSG polygon modelling system.

          The second approach aims at the development of algorithms for
     multi-processor systems and VLSI-based display systems.  The central
     method is a CSG depth-buffer algorithm.  A tree traversal method is
     introduced that combines several techniques to reduce the processing
     and memory use.  The methods have been applied to a CSG halfspace
     modelling system.


          Keywords: computer graphics, geometric modelling, solid 
     modelling, Constructive Solid Geometry (CSG), ray tracing algorithm,
     depth-buffer algorithm, z-buffer algorithm, list-priority algorithm,
     depth-priority algorithm, spatial subdivision, CSG classification, 
     CSG coherence.

The following subjects are also included: adaptive subdivision, crack removal.

You can send this information to all. I will read the discussion more carefully
and will comment on it later.
 
Erik Jansen

-------------------------------------------------------------------------------

>From: Masataka Ohta <hpfcda!mohta%titcce.cc.titech.junet%utokyo-relay.csnet@RELAY.CS.NET>
Subject: Bounded ray tracing

Dear Sir,

The discussion so far is very interesting and I have
several comments.

As I am charged for foreign mail (about $1 per 1K bytes, both
incoming and outgoing), it costs considerably to mail everyone
on the list separately.  So, I would like you to re-distribute
my transpacific mail to everyone else.

					Masataka Ohta

My comment on the flatness criteria with reflections follows:
-----------------------------

Though I don't like subdividing patches into polygons for ray
tracing (it's incoherent and, for example, CSG objects are
difficult to render), good "flatness criteria" even with
reflection, refraction or shadowing can be given using ray
bound tracing.

The basic idea is simple. Ray bound is a combination of two
bounds: a bound of ray origins and a bound of ray directions.
A efficient bound can be formed by using a sphere for bounding
ray origins and using a circle (on a unit sphere, i.e. using
spherical geometry) for ray directions.

To begin with, bound the set of all rays which originate from
each pixel.  Flatness of a patch for the first generation ray
should be computed against this ray bound, which is equivalent
to measuring flatness under the perspective transformation, because
rays are bounded by a pixel-sized cone.

As for the second generation rays, they can be bounded by a
certain ray bound which can be calculated from the first
generation ray bound.  And those ray bounds should be used
for the flatness check.

For those further interested in ray bound tracing, I will
physically mail my paper titled "Bounded ray tracing for
perfect and efficient anti-aliasing".

-------------------------------------------------------------------------------

>From: Eric Haines
Subject: Spline surface rendering, and what's wrong with octrees

Well, after all the discussion of spline surfaces, I finally went with turning
the spline surface into patches, putting an octree around these, and then doing
Glassner/Kay/Fujimoto/etc octree ray-tracing (in reality I found Glassner's
article the most useful, though I didn't use his hashing scheme due to (a)
being pressed for time and (b) being pressed for memory space).  This seems to
work fairly well, but I noticed some interesting problems with octrees that
I thought I'd pass on.

----------

[Note: this first problem is kinda boring if you've never implemented an octree
subdivision scheme before.  Skip on to problem # 2, which I think is more
important].

The first problem is: how do I cleverly choose octree bounds?  This problem was
first mentioned to me by Mike Kaplan, and I did not think about it much until I
suddenly noticed that all available memory was getting gobbled by certain
polygonalized splines.  The problem is that there are two parameters which are
commonly used to end the further subdivision of an octree cube into its eight
component "cubies".

One is a maximum number of primitives per octree cube.  To make the octree in
the first place we have a bounding cube which contains the environment.  If
the cube has more than a certain number of primitives in it, then octree
subdivision takes place.  The octree cubies formed are then each treated in a
like fashion, subdividing until all leaf cubies contain less than or equal to
the number of primitives.  The second parameter is the maximum tree depth,
which is the number of levels beyond which we will not subdivide cubes.  This
parameter generally has precedence over the first parameter, i.e. if the
maximum level has been reached but the maximum number of primitives is still
exceeded, subdivision will nonetheless halt.
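The interplay of the two stopping parameters fits in a few lines (a sketch; the function name is mine and the defaults are the values discussed below):

```python
def halt_subdivision(n_prims, depth, max_prims=6, max_depth=8):
    # the depth limit takes precedence: halt there even if the cubie
    # still holds more than the maximum number of primitives
    if depth >= max_depth:
        return True
    return n_prims <= max_prims
```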

The trick is that you have to pay close attention to both parameters.
Originally I set these parameters to some reasonable numbers: 6 primitives and
8 levels being my maximums.  What I found is that some objects would have
very deep octrees, all the way down to level 8, even though their number of
primitives was low.  For example, an object with 64 patches would still have
some leaf nodes down at level 8 which had 7+ primitives in them.  I was
pretty surprised by this problem.

My solution for spline surfaces was to keep the maximum number of primitives
at 6 and use another parameter to determine the maximum level.  I use the
formula:

	max level = round_up [ ln( primitives / K ) / ln( S ) ]

where K is the maximum number of primitives (i.e. 6) and S was a prediction of
how much an octree subdivision would cut down the number of primitives in an
octree.  For example, in an environment consisting of a set of randomly
distributed points, one would expect that when the octree cube containing these
points was subdivided into eight octree cubies, each octree cubie would have
about 1/8th of the points inside it.  For a spline surface I reasoned that
about four of the octree cubies might have some part of the surface in them,
which would give an S=4 (note that the largest, original octree must have at
least four cubies filled by the surface; however, this is not necessarily true
for successively smaller cubies).  Another factor which had to be taken into
account was that there would also be some overlap: some primitives would appear
in two or more cubies.  So, as a final reasonable guess I chose S=3.5.  This
seems to work fairly well in practice, though further testing would be very
worthwhile.
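In code the formula is simply the following (a sketch; the function name is mine, and the defaults are the K and S values above):

```python
import math

def max_octree_level(primitives, k=6, s=3.5):
    # predicted depth at which each leaf holds <= k primitives, if
    # every subdivision cuts the count by a factor of about s
    if primitives <= k:
        return 0
    return math.ceil(math.log(primitives / k) / math.log(s))

# e.g. 64 patches need only 2 levels, 5000 need 6
```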

Coming up with some optimal way to choose a maximum octree depth still seems to
be an open question.  Further study on how various environments actually fill
space would be worthwhile:  how many octree nodes really are filled on the
average for each subdivision?  More pragmatically, how do we determine the
best maximum depth for ray-tracing an environment?  The problem with not
limiting the maximum level is primarily one of memory.  If the octree grows
without reasonable bounds a simple scene could use all available memory.  Also,
a large number of unnecessary octree nodes results in additional access time,
either through having to search through the octree or through having extraneous
objects in the hashing table.

A more intelligent approach might be to do adaptive subdivision: subdivide an
octree cube as usual, then see how many fewer primitives there are in each
cubie.  If some cubie still has more than some percentage of the parent's
primitives in it, the subdivision could be deemed useless and so subdivision
would end at this point.
If anyone knows a master's candidate looking for a project, this whole question
of when it is profitable to subdivide might make a worthwhile topic.  Judging
from the interest in octrees by ray tracing researchers at last year's
roundtable, I think this will become more and more important as time goes on.

-------------

The second problem with octrees:  I decided to go with octrees for spline
surfaces only because these objects would have fairly localized and even
distribution of primitives (i.e. quadrilateral patches).  I feel that octree
efficiency techniques are probably horrible for ray tracing in general.

For example, imagine you have a football stadium made of, say, 5K primitives.
Sitting on a goal line is a shiny polygonalized teapot of 5K quadrilaterals
(note that the teapot is teapot sized compared to the stadium).  You fill the
frame with the teapot for a ray trace, hoping to get some nice reflections of
the stadium on its surface.

If you use an octree for this scene, you'll run into an interesting problem.
The teapot is, say, a foot long.  The stadium is 200 yards long.  So, the
teapot is going to be only 1/600th the size of the stadium.  Each octree
subdivision creates 8 cubies which are each half the length of the parent
cube.  You could well subdivide down to 9 levels (with that 9th level cubie
having a length of 1/512th of the stadium length: about 14 inches) of octrees
and still have the whole teapot inside one octree cube, still undivided.  If
you stopped at this 9th level of subdivision, your ray trace would take
forever.  Why?  Because whenever a ray would enter the octree cubie containing
the teapot (which most of the rays from your eye would do, along with all those
reflection and shadow rays), the cubie would contain a list of the 5K teapot
polygons.  Each of these polygons would have to be tested against the ray,
since there is no additional efficiency structure to help you out.  In this
case the octree has been a total failure.

Now, you may be in a position where you know that your environments will be
well behaved: you're ray tracing some specific object and the surrounding
environment is limited in size.  However, the designer who is attempting to
create a system which can respond to any user's modeling requests is still
confronted by this problem.  Further subdivision beyond level nine down to
level eighteen may solve the problem in this case.  But I can always come up
with a worse pathological case.  Some realistic examples are an animation
zooming in on a texture mapped earth into the Caltech campus:  when you're
on the campus the sphere which represents the earth would create a huge
octree node, and the campus would easily fall within one octree cubie.  Or
a user simply wants to have a realistic sun, and places a spherical light
source 93 million miles away from the scene being rendered.  Ridiculous?  Well,
many times I find that I will place positional light sources quite some
distance away from a scene, since I don't really care how far the light is,
but just the direction the light is coming from.  If a primitive is associated
with that light source, the octree suddenly gets huge.

Solutions?  Mine is simply to avoid the octree altogether and use Goldsmith's
automatic bounding volume generation algorithm (IEEE CG&A, May 1987).  However,
I hate to give up all that power of the octree so easily.  So, my question:
has anyone found a good way around this problem?  One method might be to do
octree subdivision down to a certain level, then consider all leaf cubies that
have more than the specified number of primitives in their lists as "problem
cubies".  For this list of primitives we perform Goldsmith's algorithm to get
a nice bounding volume hierarchy.  This method reminds me of the SIGGRAPH 87
paper by John Snyder and Alan Barr, "Ray Tracing Complex Models Containing
Surface Tessellations".  Their paper uses SEADS on the tessellated primitives
and hierarchy on these instanced SEADS boxes to get around memory constraints,
while my idea is to use the octree for the total environment so that the
quick cutoff feature of the octree can be used (i.e. if any primitive in an
octree cubie is intersected, then ray trace testing is done, versus having to
test the whole environment's hierarchy against the ray).  Using bounding
volume hierarchy locally then gets rid of the pathological cases for the octree.

However, I tend to think the above just is not worthwhile.  It solves the
pathological cases, but I think that automatic bounding volume hierarchy (let's
call it ABVH) methods will be found to be comparable in speed to octrees in
many cases.  I think I can justify that assertion, but first I would like to
get your opinions about this problem.

-------------------------------------------------------------------------------

Top Ten Hit Parade of Computer Graphics Books
    by Eric Haines

One of the most important resources I have as a computer graphics programmer
is a good set of books, both for education and for reference.  However, there
are a lot of wonderful books that I learn about years after I could have first
used them.  Alternatively, I will find that books I consider classics are unknown
by others.  So, I would like to collect a list of recommended reading and
reference from you all, to be published later in the year.  I would especially
like a recommendation for good books on filtering and on analytic geometry.
Right now I am reading _Digital Image Processing_ by Gonzalez and Wintz and have
_A Programmer's Geometry_ on order, but am not sure these fit the bill.
_An Introduction to Splines for use in Computer Graphics and Geometric
Modeling_ by Bartels/Beatty/Barsky looks like a great resource on splines,
but I have read only four chapters so far so am leaving it off the list for
now.

Without further ado, here are my top ten book recommendations.  Most should be
well known to you all, and so are listed mostly as a kernel of core books I
consider useful.  I look forward to your additions!

    _The Elements of Programming Style, 2nd Edition_, Brian W. Kernighan,
	P.J. Plauger, 168 pages, Bell Telephone Laboratories Inc, 1978.

	All programmers should read this book.  It is truly an "Elements of
	Style" for programmers.  Examples of bad coding style are taken from
	other textbooks, corrected, and discussed.  Wonderful and pithy.

    _Fundamentals of Interactive Computer Graphics_, James D. Foley, A. Van
	Dam, 664 pages, Addison-Wesley Inc, 1982.

	A classic, covering just about everything once over lightly.

    _Principles of Interactive Computer Graphics, 2nd Edition_,  William M.
	Newman, R.F. Sproull, 541 pages, McGraw-Hill Inc, 1979.

	The other classic.  It's older (e.g. ray-tracing did not exist at this
	point), but gives another perspective on various algorithms.

    _Mathematical Elements for Computer Graphics_, David F. Rogers, J.A. Adams,
	239 pages, McGraw-Hill Inc, 1976.

	An oldie but goodie, its major thrust is a thorough coverage of 2D and
	3D transformations, along with some basics on spline curves and
	surfaces.

    _Procedural Elements for Computer Graphics_, David F. Rogers, 433 pages,
	McGraw-Hill Inc, 1985.

	For information on how to actually implement a wide variety of
	graphics algorithms, from Bresenham's line drawer on up through
	ray-tracing, this is the best book I know.  However, for complicated
	algorithms I would recommend also reading the original papers.
	
    _Numerical Recipes_, William H. Press, B.P. Flannery, S.A. Teukolsky,
	W.T. Vetterling, 818 pages, Cambridge University Press, 1986.

	Chock-full of information on numerical algorithms, including code
	in FORTRAN and PASCAL (no "C", unfortunately).  The best part of
	this book is that they give good advice on what methods are appropriate
	for different types of problems.

    _A First Course in Numerical Analysis, 2nd Edition_, Anthony Ralston,
	P. Rabinowitz, 556 pages, McGraw-Hill Inc, 1978.

	Tom Duff's recommendation says it best: "This book is SO GOOD [<-these
	words should be printed in italics] that some colleges refuse to use
	it as a text because of the difficulty of finding exam questions that
	are not answered in the book".  It covers material in depth which
	_Numerical Recipes_ glosses over.

    _C: A Reference Manual_, Samuel P. Harbison, G.L. Steele Jr., 352 pages,
	Prentice-Hall Inc, 1984.

	A comprehensive and comprehensible manual on "C".

    _The Mythical Man-Month_, Frederick P. Brooks Jr, 195 pages, Addison-Wesley
	Inc, 1982.

	A classic on the pitfalls of managing software projects, especially
	large ones.  A great book for beginning to learn how to schedule
	resources and make good predictions of when software really is going
	to be finished.

    _Programming Pearls_, Jon Bentley, 195 pages, Bell Telephone Laboratories
	Inc, 1986.

	Though directed more towards systems and business programmers, there
	are a lot of clever coding techniques to be learnt from this book.
	Also, it's just plain fun reading.

As an added bonus, here's one more that I could not resist:

    _Patterns in Nature_, Peter S. Stevens, 240 pages, Little, Brown and Co.
	Inc, 1974.

	The thesis is that simple patterns recur again and again in nature and
	for good reasons.  A quick read with wonderful photographs (my favorite
	is the comparison of a turtle shell with a collection of bubbles
	forming a similar shape).  Quite a few graphics researchers have used
	this book for inspiration in simulating natural processes.

---------------------------------

>From Olin Lathrop:

Here goes another attempt to reach more people.  I will now spare you all
a paragraph of griping about the e-mail system.

About the normal vector transform:

Eric, you are absolutely right.  I also ran into this when some of my squashed
objects just didn't look right, about 4 years ago.  I would just like to offer
a slightly different way of looking at the same thing.  I find I have difficulty
with mathematical concepts unless I can attach some sort of physical significance
to them.  (I think of a 3x4 transformation matrix as three basis
vectors and a displacement vector instead of an amorphous pile of 12 numbers.)

My first attack at finding a transformed normal was to find two non-parallel
surface vectors at the point in question.  These could be transformed regularly
and the transformed normal would be their cross product.  This certainly 
works, but is computationally slow.  It seemed clear that there should exist
some 3x3 matrix that was the total transform the normal vector really went thru.
To simplify the thought experiment, what if the original normal vector was exactly
along the X axis?  Well, the unit surface vectors would be the Y and Z axis
vectors.  When these are sent thru the regular 3x3 transformation matrix, 
they become the Y and Z basis vectors of that matrix.  The final resulting
normal vector is therefore the cross product of the Y and Z basis vectors of the
regular matrix.  This is then what the X basis vector of the normal vector
transformation matrix should be.  In general, a basis vector in the normal
vector transformation matrix is the cross product of the other two basis
vectors of the regular transformation matrix.  It wasn't until about a year
later that I realized that this resulting matrix was the inverse transpose
of the regular one.

This derivation results in exactly the same matrix that Eric was talking about,
but leaves me with more physical understanding of what it represents.
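Olin's construction can be written out directly; for the stretch example of the earlier article it comes out as det(M) times the inverse transpose, so the direction - all that matters once the normal is re-unitized - is the same (a Python sketch for illustration; rows are the basis vectors, and the names are mine):

```python
def cross(a, b):
    # standard 3D cross product
    return [a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0]]

def normal_matrix(m):
    # each basis vector is the cross product of the other two basis
    # vectors of the regular 3x3 transformation matrix
    return [cross(m[1], m[2]),
            cross(m[2], m[0]),
            cross(m[0], m[1])]

# for the stretch diag(2,1,1) this yields diag(1,2,2), which is
# det(M) = 2 times the inverse transpose diag(0.5,1,1)
```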

Now for a question:  It has always bothered me that this matrix trashes the
vector magnitude.  This usually implies re-unitizing the transformed normal
vector in practice.  Does anyone avoid this step?  I don't want to do any
more SQRTs than necessary.  You can assume that the original normal vector
was of unit length, but that the result also needs to be.


About octrees:

1)  I don't use Andrew's hashing scheme either.  I transform the ray so that
  my octree always lives in the (0,0,0) to (1,1,1) cube.  To find the voxel
  containing any one point, I first convert the coordinates to 24 bit integers.
  The octree now sits in the 0 to 2**24 cube.  Picking off the most significant
  address bit for each coordinate yields a 3 bit number.  This is used to select
  one of 8 voxels at the top level.  Now pick off the next address bit down
  and choose the next level of subordinate voxel, etc, until you hit a leaf node.
  This process is O(log n), and is very quick in practice.  Finding a leaf voxel
  given an integer coordinate seems to consume about 2.5% of the time for most
  images.  I store direct pointers to subordinate voxels directly in the parent
  voxel data block.  In fact, this is the ONLY way I have of finding all but the
  top voxel.
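
The descent is easy to sketch (a minimal Python illustration of mine; the
data layout and names are invented):

```python
BITS = 24   # coordinates are first converted to 24 bit integers

def make_leaf(objects):
    return {"children": None, "objects": objects}

def make_interior(children):            # eight subordinate voxels
    return {"children": children, "objects": None}

def find_leaf(root, ix, iy, iz):
    """Descend from the top voxel, peeling off one address bit per axis."""
    node = root
    for bit in range(BITS - 1, -1, -1):
        if node["children"] is None:    # hit a leaf node
            break
        octant = (((ix >> bit) & 1) << 2) | \
                 (((iy >> bit) & 1) << 1) | \
                 ((iz >> bit) & 1)
        node = node["children"][octant]
    return node
```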

2)  Choosing subdivision criteria:  First, the biggest win is to subdivide on
  the fly.  Never subdivide anything until you find there is a demand for it.
  My current subdivision criteria in order of precedence (#1 overrides #2) are:

  1)  Do not subdivide if hit subdivision generation limit.  This is the same
    as what Eric talked about.  I think everyone does this.

  2)  Do not subdivide if voxel is empty.

  3)  Subdivide if voxel contains more than one object.

  4)  Do not subdivide if less than N rays passed thru this voxel, but did
    not hit anything.  Currently, N is set to 4.

  5)  Subdivide if M*K < N, where M is the number of rays that passed thru this
    voxel that DID hit something, and K is a parameter you choose.  Currently,
    K is set to 2, but I suspect it should be higher.  This step seeks to avoid
    subdividing a voxel that may be large, but has a good history of producing
    real intersections anyway.  Keep in mind that for every ray that did hit
    something, there are probably light source rays that did not hit anything.
    (The shader avoids launching light rays if the surface is facing away from
    the light source.)  This can distort the statistics, and make a voxel appear
    less "tight" than it really is, hence the need for larger values of K.

  6)  Subdivide.

Again, the most important point is lazy evaluation of the octree.  The above rules
are only applied when a ray passes thru a leaf node voxel.  Before any rays are
cast, my octree is exactly one leaf node containing all the objects.
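
The rules above boil down to a small predicate, evaluated lazily each time a
ray passes through a leaf.  A sketch of mine (the constants for rules 4 and 5
are the ones quoted above; the generation limit and all names are illustrative):

```python
MAX_GENERATION = 24   # rule 1 limit (the number is illustrative)
MIN_MISSES     = 4    # "N" in rule 4
K              = 2    # rule 5 parameter

def should_subdivide(n_objects, hits, misses, generation):
    """Apply rules 1-6 to a leaf voxel a ray has just passed through."""
    if generation >= MAX_GENERATION:   # 1) generation limit reached
        return False
    if n_objects == 0:                 # 2) voxel is empty
        return False
    if n_objects > 1:                  # 3) more than one object
        return True
    if misses < MIN_MISSES:            # 4) too few missing rays seen yet
        return False
    if hits * K >= misses:             # 5) good history of real hits
        return False
    return True                        # 6) otherwise subdivide
```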

3) First solution to teapot in stadium:  This really cries out for nested objects.
  Jim Arvo, Dave Kirk, and I submitted a paper last year on "The Ray Tracing Kernel"
  which discussed applying object oriented programming to designing a ray tracer.
  Jim just told me he is going to reply about this in detail so I will make this
  real quick.  Basically, objects are only defined implicitly by the results of
  various standard operations they must be able to perform, like "intersect
  yourself with this ray".  The caller has no information HOW this is done.  An
  object can therefore be an "aggregate" object which really returns the result of
  intersecting the ray with all its subordinate objects.  This allows for easily
  and elegantly mixing storage techniques (octrees, linear space, 5D structures,
  etc.) in the same scene.  More on this from JIM.

4) Second solution to teapot in stadium:  I didn't understand why an octree 
  wouldn't work well here anyway.  Suppose the teapot is completely enclosed
  in a level 8 voxel.  That would only "waste" 8x8=64 voxels in getting down
  to the space you would have chosen for just the teapot alone.  Reflection
  rays actually hitting the rest of the stadium would be very sparse, so go
  ahead and crank up the max subdivision limit.  Am I missing something?

-----------------------------------------------

(This is a reply to Olin Lathrop.  Summary: "well, maybe the octree is not so
bad after all...").

From: Eric Haines


Olin Lathrop writes:
> To simplify the thought experiment, what if the original normal vector was exactly
> along the X axis?  Well, the unit surface vectors would be the Y and Z axis
> vectors.  When these are sent thru the regular 3x3 transformation matrix, 
> they become the Y and Z basis vectors of that matrix.  The final resulting
> normal vector is therefore the cross product of the Y and Z basis vectors of the
> regular matrix.  This is then what the X basis vector of the normal vector
> transformation matrix should be.  In general, a basis vector in the normal
> vector transformation matrix is the cross product of the other two basis
> vectors of the regular transformation matrix.  It wasn't until about a year
> later that I realized that this resulting matrix was the inverse transpose
> of the regular one.  

The problem is that the sign of the basis vector is left ambiguous by this method.
I tried this approach, but it fails on mirror matrices.  Suppose your
transformation matrix is:
[ -1 0 0 0 ]
[  0 1 0 0 ]
[  0 0 1 0 ]
[  0 0 0 1 ]

This matrix definitely affects the surface normal in X, but your two vectors
in Y and Z are unaffected.  This problem never occurs in the "real" world
because such a matrix is equivalent to twisting an object through 4D space
and making it go "through the looking glass".  However, it happens in computer
graphics a lot:  e.g. I model half a car body, then mirror reflect to get the
other half.  If you have a two-sided polygon lying in the YZ plane, with one
side red & the other blue, and apply the above transformation, no vertices
(and no tangent vectors) have any non-zero X components, and so will not change.
But the normal does reverse, and the sides switch colors.  My conclusion was
that you have to use the transpose of the inverse to avoid this problem, since
surface normals fail for this case. (p.s. did you get a copy of Glassner's
latest (2nd edition) memo on this problem?  He does a good job explaining the
math).
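
Eric's mirror example is easy to check numerically.  A sketch (mine, not
Eric's code):

```python
def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

M = [[-1, 0, 0],          # mirror across the YZ plane
     [ 0, 1, 0],
     [ 0, 0, 1]]
# Basis (column) vectors of M:
x, y, z = [tuple(M[r][c] for r in range(3)) for c in range(3)]

normal = (1, 0, 0)        # normal of a polygon lying in the YZ plane

# Cross-product construction: the X basis of the normal matrix is y cross z,
# which leaves the normal unchanged -- the wrong answer.
by_cross = cross(y, z)

# Inverse transpose (M happens to be its own inverse transpose) correctly
# reverses the normal.
by_inv_transpose = tuple(sum(M[i][j] * normal[j] for j in range(3))
                         for i in range(3))
```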

> About octrees:
>
> 1)  I don't use Andrew's hashing scheme either.  I transform the ray so that
>   my octree always lives in the (0,0,0) to (1,1,1) cube...

Actually, this is the exact approach I finally took, also.  I had rejected
the hashing scheme earlier, and forgotten why (and misremembered that it was
because of memory costs) - the correct reason for not hashing is that it's
faster to just zip through the octree by the above method; no hashing is
needed.  It's pretty durn fast to find the right voxel, I agree.

Have you experimented with trying to walk up and down the octree, that is, when
you are leaving an octree voxel you go up to the parent and see if the address
is inside the parent?  If not, you go to its parent and check the address, etc,
until you find that you can go back down.  Should be faster than the straight
downwards traversal when the octree is deep: the neighboring voxels of the
parent of the voxel you're presently in account for 3 of the 6 directions the
ray can go, after all.  You have 1/2 a chance of descending the octree if you
check the parent, 3/4ths if you go up two parents, etc.  (Where did I read of
this idea, anyway?  Fujimoto?  Kaplan?  Whatever the case, it's not original
with me).
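
The up-then-down walk might look like this (my own sketch, with an invented
node layout; a real implementation would derive the exit point from the ray):

```python
def contains(node, p):
    return all(node["lo"][i] <= p[i] < node["hi"][i] for i in range(3))

def next_leaf(leaf, exit_point):
    node = leaf
    while node["parent"] is not None and not contains(node, exit_point):
        node = node["parent"]                 # walk up toward the root
    while node["children"] is not None:       # then walk back down
        node = next(c for c in node["children"] if contains(c, exit_point))
    return node
```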

Another idea that should be mentioned is one I first heard from Andrew
Glassner:  putting quadtree-like structures on the cube faces of the octree
cubes.  It's additional memory, but knowing which octree cube is the next would
be a faster process.  Hopefully Andrew will write this up sometime.

The subdivision criteria ideas are great - makes me want to go and try them
out!  When are you going to write it up and get it published somewhere? Lazy
subdivision sounds worthwhile: it definitely takes a while for the octrees to
get set up under my present "do it all at the beginning" approach (not to
mention the memory costs).  That was something I loved about the Arvo/Kirk
paper - without it the 5D scheme would appear to be a serious memory hog.

> 4) Second solution to teapot in stadium:  I didn't understand why an octree 
>   wouldn't work well here anyway.  Suppose the teapot is completely enclosed
>   in a level 8 voxel.  That would only "waste" 8x8=64 voxels in getting down
>   to the space you would have chosen for just the teapot alone.  Reflection
>   rays actually hitting the rest of the stadium would be very sparse, so go
>   ahead and crank up the max subdivision limit.  Am I missing something?

There are two things which disturbed me about the use of the octree for this
problem.  One was that if the maximum subdivision level was reached prematurely
then the octree falls apart.  I mentioned that you could indeed subdivide down
another 9 levels and have an 18 level octree that would work.  However, the
problem with this is knowing when to stop - why not go on to 24 levels?  For
me it boils down to "when do I subdivide?".  I suspect that your additional
criteria might solve a lot of the pathological cases, which is why I want
to test them out.  Also note that there are built in maximum subdivision levels
in octree schemes which could be reached and still not be sufficient (though
admittedly your 24 levels of depth are probably enough.  Of course, I once
thought 16 bits was enough for a z-buffer - now I'm not so sure.  Say you have
a satellite which is 5 feet tall in an image, with the earth in the background.
We're now talking 23 levels of subdivision before you get within the realm
of subdividing the satellite.  With 24 levels of depth being your absolute
maximum you've hit the wall, with only one subdivision level helping you out
on the satellite itself).

Good point that as far as memory goes it's really just 8x8 more voxels "wasted".
One problem is: say I'm 8 feet in each direction from the teapot, with me and
the teapot in diagonally opposite corners of a cube which is then made into an
octree.  The only way to get through the 8 cubes in the containing box is to
travel through 4 of them (i.e. if I'm in Xhi, Yhi, Zhi and the teapot is in
Xlo, Ylo, Zlo, then I have to intersect my own box and then three other boxes
to move me through in each "lo" direction).  In this case there are only 3
levels of octree cubes I have to go through before getting to the 1 foot cube
voxel which contains the teapot.  The drawback of the octree is that I have to
then do 3x4=12 box intersections which must be done for each ray and which are
useless.  Minor, but now think of reflection rays from the teapot which try to
hit the stadium: each could go through up to 8 levels x 4 voxels per level =
32 voxels just to escape the stadium without hitting anything (not including
all the voxels needed to be traversed from the teapot to the one foot cube).
Seems like a lot of intersection and finding the next octree address and tree
traversal for hitting the background.  I suspect fewer bounding volumes would be
hit using hierarchy, and the tests would be simpler (many of them being just
a quick "is the ray origin inside this box?": if so, check inside the box).

I guess it just feels cleaner to have to intersect only bounding volumes
which are needed, which is the feel which automatic bounding volume hierarchy
has to it.  Boxes can be of any size, so that if someone adds a huge earth
behind a satellite all that is added is a box that contains both.  With
hierarchy you can do some simple tricks to cut down on the number of
bounding volumes intersected.  For example, by recording that the ray fired
at the last pixel hit such and so object, you can test this object first for
intersection.  This quickly gets you a maximum depth that you need not go
beyond: if a bounding volume is intersected beyond this distance you don't have
to worry about intersecting its contents.  This trick seems to gain you about
90% of the speed-up of the octree (i.e. not having to intersect any more
voxels once an intersection is found), while also allowing you the speed up
of avoiding needless octree voxel intersections.  I call this the "ray
coherency" speedup - it can be used for all types of rays (and if you hit
when the ray is a shadow ray, you can immediately stop testing - this trick
will work for the octree, too!  Simply save a pointer to the object which
blocked a particular shadow ray for a particular light last pixel and try it
again for the next shadow ray).
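
Both tricks can be sketched in a few lines (my own illustration with stand-in
objects; a real tracer would feed the depth cutoff into its hierarchy
traversal rather than a flat loop):

```python
class FixedObject:
    """Stand-in primitive: always reports the same hit distance (or None)."""
    def __init__(self, t):
        self.t = t
    def intersect(self, ray):
        return self.t

class CoherentTracer:
    def __init__(self, objects, num_lights):
        self.objects = objects
        self.last_hit = None                      # object hit at last pixel
        self.shadow_cache = [None] * num_lights   # last occluder, per light

    def closest_hit(self, ray):
        best_t, best_obj = float("inf"), None
        if self.last_hit is not None:             # seed the depth cutoff
            t = self.last_hit.intersect(ray)
            if t is not None:
                best_t, best_obj = t, self.last_hit
        for obj in self.objects:
            if obj is best_obj:
                continue
            t = obj.intersect(ray)
            if t is not None and t < best_t:      # beyond cutoff: ignored
                best_t, best_obj = t, obj
        self.last_hit = best_obj
        return best_obj, best_t

    def in_shadow(self, shadow_ray, light):
        cached = self.shadow_cache[light]
        if cached is not None and cached.intersect(shadow_ray) is not None:
            return True                           # same blocker as last time
        for obj in self.objects:
            if obj is not cached and obj.intersect(shadow_ray) is not None:
                self.shadow_cache[light] = obj    # remember the occluder
                return True
        return False
```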

I still have doubts about the octree.  However, with lazy evaluation I think
you get rid of one of my major concerns: subdividing too deep makes for massive
octrees which soak up tons of memory.  Have you had to deal with this problem,
i.e. has the octree ever gotten too big, and do you have some way to free up
memory (some "least recently used" kind of thing)?

An interesting comment that I read by John Peterson on USENET news some months
ago was:

>> [John Watson @ Ames:]
>> Anyway, I know there have been a few variations of the constant-time
>> algorithms around, and what I need to know is, what is the _best_, 
>> i.e. simplest, most effiecent, etc, ... version to implement.
>> 
>> Could some of you wonderful people comment on these techniques in general, 
>> and maybe give me some pointers on recent research, implementions, etc. 
>
> This is an interesting question.  Here at Utah, myself and Tom Malley
> implemented three different schemes in the same ray tracer; Whitted/Rubin,
> Kay/Kajiya, and an octree scheme (similar to the Glassner/Kaplan, camp, I
> think).  The result?  All three schemes were within 10-20% of each other
> speedwise.  Now, we haven't tested these times extensively; I'm sure you could
> find wider variances for pathological cases.  But on the few generic test
> cases we measured, there just wasn't much difference.  (If we get the time,
> we plan on benchmarking the three algorithms more closely).

I suspect that this is probably the case, with octree working best when the
scene depth (i.e. the number of objects which are intersected by each ray,
regardless of distance) is high, the "ray coherency" method outlined above for
hierarchy fails, and so cutting off early is a big benefit. Automatic hierarchy
probably wins when there are large irregularities in the density of the
number of objects in space.  (Of course, the SEADS method (equal sized voxels
and 3DDDA) is ridiculous for solving the "teapot in a stadium" kind of
problems, but it's probably great for machines with lots of memory ray tracing
scenes with a localized set of objects.)

By the way, I found Whitted/Rubin vs. Kay/Kajiya to be about the same:  Kay had
fewer intersections, but the sorting killed any time gained.  I find the
coherency ray technique mostly does what Kay/Kajiya does: quickly gets you a
maximum intersection depth for cutoff.

Without the memory constraints limiting the effectiveness of the octree I can
believe it could well be the way of the future:  it is ideal for a hardware
solution (so those extra voxel intersection and traversal tests don't bother me
if they're real fast), sort of like how the z-buffer is the present winner in
hidden surface algorithms because of its simplicity.

So, how's that for a turnabout on my polemical anti-octree position?
Nonetheless, I'm not planning to change my hierarchy code in the near future -
not until the subdivision and memory management problems are more fully
understood.

All for now,

Eric Haines

    
--------------------------------------------------

  SUBSPACES AND SIMULATED ANNEALING

  I  started  out  intending  to  write a very short reply to Eric Haines's
  "teapot in  a football stadium" example, but it turned out to  be  rather
  long.   At  any  rate,  most of what's described here (except for some of
  the very speculative stuff near  the bottom) is a result  of  joint  work
  with Dave Kirk, Olin Lathrop, and John Francis. 

  One  way  that  we've  dealt  with  situations  similar  to Eric's teapot
  example is to use a  combination  of  spatial  subdivision  and  bounding
  volume  techniques.   For  instance,  we commonly mix two or three of the
  following techniques into a  "meta" hierarchy for ray  tracing  a  single
  environment:

      1) Linear list 
      
      2) Bounding box hierarchy  
      
      3) Octrees  (including BSP trees) 
      
      4) Linear grid subdivision 
      
      5) Ray Classification      
      
  We  commonly  refer  to  these  as  "subspaces".   For us this means some
  (convex) volume of  space, a  collection  of  objects  in  it,  and  some
  technique  for  intersecting a ray with those objects.  This technique is
  part of an "aggregate  object", and all the objects it  manages  are  the
  "children".   Any  aggregate  object  can  be  the  child  of  any  other
  aggregate  object,  and   appears  simply  as  a  bounding   volume   and
  intersection  technique  to  its parent.  In other words, it behaves just
  like a primitive object. 

  Encapsulating a subspace as just another  "object"  is  very  convenient.
  This  is something which Dave and Olin and I agreed upon in order to make
  it possible to "mix  and  match"  our  favorite  acceleration  techniques
  within  the  same  ray  tracer for testing, benchmarking, and development
  purposes. 
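  As a sketch of the protocol (mine, not the actual kernel code):

```python
class Primitive:
    def __init__(self, t):
        self.t = t                      # stand-in for real geometry
    def intersect(self, ray):
        return self.t                   # distance along the ray, or None

class Aggregate:
    """Intersects a ray with all its children; to its parent it looks
    exactly like a primitive object."""
    def __init__(self, children):
        self.children = children        # primitives or other aggregates
    def intersect(self, ray):
        hits = [c.intersect(ray) for c in self.children]
        hits = [t for t in hits if t is not None]
        return min(hits) if hits else None
```

  An octree or linear-grid subspace would simply be another class with its
  own traversal inside intersect; the caller has no information HOW it is
  done, so the techniques nest freely.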

  As an example of how we've used this  to  ray  trace  moderately  complex
  scenes  I'll  describe  the amusement park scene which we animated.  This
  consisted of a number of rides spread throughout a park, each  containing
  quite  a  bit  of  detail.   We  often  showed  closeups of objects which
  reflected the rest of the park (a somewhat scaled  down  version  of  the
  teapot  reflecting   the  stadium).   There  were somewhere around 10,000
  primitive objects  (not   including  fractal  mountains),  which  doesn't
  sound  like  much  anymore,  but  I   think  it still represents a fairly
  challenging scene to ray trace --  particularly for animating. 

  The organization of the scene suggested  three  very  natural  levels  of
  detail.  A typical example of this is

      I) Entire park ( a collection of rides, trees, and mountains )
  
          II) Triple decker Merry-go-round ( one of the rides )
  
              III) A character riding a horse ( a "detail" of a ride )

  Clearly  a single linear grid would not do well here because of the scale
  involved.  Very  significant  collections  of  primitives  would  end  up
  clumped  into  single  voxels.  Octrees, on the other hand, can deal with
  this problem  but don't enjoy quite the  "voxel  walking"  speed  of  the
  linear grid.  This suggests a compromise. 

  What  we did initially was to place a coarse linear grid around the whole
  park, then  another  linear  grid  (or  octree)  around  each  ride,  and
  frequently  a  bounding box hierarchy around small clusters of primitives
  which would fall entirely within a voxel of even the second-level  (usually
  16x16x16) linear grid. 

  Later,  we  began to use ray classification at the top level because, for
  one thing, it did  some  optimizations  on  first-generation  rays.   The
  other   levels  of  the  hierarchy  were  kept in place for the most part
  (simplified a bit)  in order to run well on machines  with  <  16  MB  of
  physical  memory.   This  effectively  gave  the  RC (ray classification)
  aggregate object a "coarser" world to  deal  with,  and  drastically  cut
  down  the  size  of the candidate sets it built.  Of course, it also "put
  blinders" on it by not allowing it to distinguish between objects  inside
  these   "black  boxes"  it  was  handed.   It's  obviously  a  space-time
  trade-off.  Being able to nest the subspaces  provides  a  good  deal  of
  flexibility for making trade-offs like this. 

  A  small  but  sort  of interesting additional benefit which falls out of
  nesting  subspaces is that it's possible  to  take  better  advantage  of
  "sparse"  transformations.   Obviously the same trick of transforming the
  rays into a canonical object space  before doing  the  intersection  test
  (and  transform  the  normal  on  the  way  out) also works for aggregate
  objects.  Though this means doing possibly several transforms  of  a  ray
  before  it  even  gets  to a primitive object, quite often the transforms
  which are lower  in  the  hierarchy  are  very  simple  (e.g.  scale  and
  translate).   So,  there  are  cases  when  a  "dense"  (i.e.  expensive)
  transform gets you into  a  subspace  where  most  of  the  objects  have
  "sparse"  (i.e.  cheap)  transforms.  [I'll  gladly  describe how we take
  advantage of matrix sparsity structures if anybody  is  interested.]   If
  you  end   up  testing N objects before finding the closest intersection,
  this means that (occasionally)  you  can  do  the  job  with   one  dense
  transform  and  N  sparse  ones, instead of N dense transforms.   This is
  particularly appropriate when you build a  fairly  complex  object   from
  many  scaled  and  translated primitives, then rotate the whole mess into
  some strange final orientation.  Unfortunately, even in  this  case  it's
  not   necessarily   always  a  win.   Often  just  pre-concatenating  the
  transforms and tossing the autonomous objects (dense transforms and  all)
  into  the  parent  octree  (or  whatever) is the better thing to do.  The
  jury is still out on this one. 
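  Jim doesn't spell out the sparsity structures here, but the flavor of the
  saving can be illustrated (my own sketch):

```python
def dense_xform(m, t, p):
    # full 3x3 matrix plus translation: 9 multiplies, 9 adds per point
    return tuple(sum(m[i][j] * p[j] for j in range(3)) + t[i]
                 for i in range(3))

def sparse_xform(s, t, p):
    # scale-and-translate only: 3 multiplies, 3 adds per point
    return tuple(s[i] * p[i] + t[i] for i in range(3))
```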

  Currently, all of the "high level" decisions  about  which  subspaces  to
  place  where  are  all  made  manually  and  specified  in  the  modeling
  language.  This is much harder to do well  than  we  imagined  initially.
  The  tradeoffs  are  very  tricky  and  sometimes  counter-intuitive.   A
  general rule of thumb which seems to be of value is to put an  "adaptive"
  subspace  (e.g.  an  octree, RC) at the top level if the scene  has tight
  clusters of geometry, and  a  Linear  grid  if  the  geometry  is  fairly
  uniform.    Judicious  placement  of  bounding  box hierarchies within an
  adaptive hierarchy is a real art.  On the one hand,  you  don't  want  to
  hinder  the  effectiveness   of  the  adaptive subspace by creating large
  clumps of geometry that it  can't   partition.   On  the  other  hand,  a
  little  a  priori  knowledge  about  what's  important and where bounding
  boxes will do a good job can often make a big   difference  in  terms  of
  both time and space (the space part goes quintuple for RC). 

  Now,   the   obvious   question   to   ask  is  "How  can  this  be  done
  automatically?"  Something  akin  to  Goldsmith  and  Salmon's  automatic
  bounding  volume generation  algorithm may be appropriate.  Naturally, in
  this context, we're talking about a heterogeneous mixture  of  "volumes,"
  not  only  differing  in shape and surface area, but also in "cost," both
  in terms of space and time.  Think of each subspace as being  a  function
  which  allows you to intersect a ray with a set of objects with a certain
  expected (i.e. average) cost.  This  cost  is  very  dependent  upon  the
  spatial  arrangement  and  characteristics of the objects in the set, and
  each type of  subspace  provides  different   trade-offs.   Producing  an
  optimal  (or  at  least  good)  organization of  subspaces is then a very
  nasty combinatorial optimization problem.  

  An idea that I've been toying with for quite some  time  now  is  to  use
  "simulated  annealing"  to  find a near-optimal subspace hierarchy, where
  "optimality" can be phrased in terms of any  desired  objective  function
  (taking  into  account,  e.g., both space and time).  Simulated annealing
  is a technique for  probabilistically exploring the vast  solution  space
  (typically)   of   a  combinatorial  optimization  problem,  looking  for
  incremental improvements WITHOUT getting   stuck  too  soon  in  a  local
  minimum.   It's  very closely linked to some ideas in thermodynamics, and
  was  originally  motivated  by  nature's  ability  to  find  near-optimal
  solutions  to  mind-bogglingly  complex  optimization  problems  --  like
  getting all the water molecules in  a  lake  into  a  near-minimum-energy
  configuration  as  the temperature gradually reaches freezing.  It's been
  fairly successful at "solving" NP-hard problems such  as  the  traveling
  salesman and chip placement (which are practically the same thing). 
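  The basic Metropolis-style annealing loop is simple to state.  This is a
  generic sketch of mine, shown on a toy cost function; for subspace
  hierarchies the cost would be the estimated ray-tracing expense and the
  neighbor move would mutate the hierarchy:

```python
import math, random

def anneal(state, cost, neighbor, t0=1.0, cooling=0.95, steps=2000, seed=1):
    rng = random.Random(seed)
    t = t0
    cur, cur_c = state, cost(state)
    best, best_c = cur, cur_c
    for _ in range(steps):
        cand = neighbor(cur, rng)
        c = cost(cand)
        # Accept improvements always; accept uphill moves with probability
        # exp(-delta/T), so we don't freeze too soon in a local minimum.
        if c < cur_c or rng.random() < math.exp((cur_c - c) / max(t, 1e-12)):
            cur, cur_c = cand, c
            if c < best_c:
                best, best_c = cand, c
        t *= cooling                 # the "annealing schedule"
    return best, best_c
```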

  This  part  about  simulated  annealing  and  subspace hierarchies is all
  very  speculative, mind you.  It may not  be  practical  at  all.    It's
  easy  to  imagine  the  "annealing"  taking three  CPU-years to produce a
  data structure which takes an hour to  ray  trace  (if  it's  done  as  a
  preprocessing  step  --  not  lazily).   There  are  many details which I
  haven't discussed here -- largely because  I  haven't  figured  them  out
  yet.   For example, one needs to get a handle on the distribution of rays
  which will be intersected with the environment in order to  estimate  the
  efficiency  of the various subspaces.  Assuming a uniform distribution is
  probably a good first approximation,  but there's got to be a better  way
  --  perhaps  through  incremental improvements as the scene is ray traced
  and, in particular, between successive frames of an animation. 

  If this has any chance of working it's going to  require  an  interesting
  mix  of  science and "art".  The science is in efficiently estimating the
  effectiveness of a subspace (i.e. predicting the relevant costs) given  a
  collection  of  objects   and  a  probability  density  function  of rays
  (probably uniform).  The art is in  selecting  an   "annealing  schedule"
  which  will  let  the  various  combinations of hierarchies percolate and
  gradually  "freeze"  into  a  near-optimal  configuration.   Doing   this
  incrementally  for an animation is a further twist for which I've seen no
  analogies in the simulated annealing literature. 

  If you've never heard of simulated annealing  and  you're  interested  in
  reading  about  it,  there's  a  very  short  description  in  "Numerical
  Recipes."   The best  paper that I've found, though, is "Optimization  by
  Simulated  Annealing,"   by  S. Kirkpatrick, C. D. Gelatt, Jr., and M. P.
  Vecchi, in the May 13, 1983 issue  of Science. 

  Does this sound at all interesting to anybody?  Is anyone  else  thinking
  along  these or similar lines?  
  
                                                              -- Jim Arvo
 

From saponara@tcgould.TN.CORNELL.EDU Thu Sep  8 15:55:40 1988
From: saponara@tcgould.tn.cornell.edu (John Saponara)
Date: Thu, 8 Sep 88 15:54:21 EDT
To: kyriazis@turing.cs.rpi.edu
Subject: back issues, part 2 of 3


 _ __                 ______                         _ __
' )  )                  /                           ' )  )
 /--' __.  __  ,     --/ __  __.  _. o ____  _,      /  / _  , , , _
/  \_(_/|_/ (_/_    (_/ / (_(_/|_(__<_/ / <_(_)_    /  (_</_(_(_/_/_)_
             /                               /|
            '                               |/

Ray Tracing News, e-mail edition, 2/15/88

concatenated by Eric Haines, hpfcla!hpfcrs!eye!erich@hplabs.HP.COM

So, now that the SIGGRAPH paper submission rush is over, the SIGGRAPH paper
review process begins.  Fortunately, it's generally easier to comment on someone
else's deathless prose than write it yourself.  It's also time to start
procrastinating on writing up SIGGRAPH tutorial notes.  So, all in all it's
not been too busy, except for all the "real" work we've all (hopefully) been
doing.


Dore'
-----

The only new news I've got is on the new product by Ardent, called Dore'
(rhymes with "moray" - there should be an up-accent over that "e" in Dore).
Ardent is the new name for Dana Computer Inc (i.e. the "single-user
supercomputer/supergraphics" people.  Their "Titan" minisupercomputer is due
out realsoonnow).  Dore' stands for "Dynamic Object-Rendering Environment".

The places I've seen articles so far are "Electronics", February 4, 1988, on
pages 69-70, and "Mini-Micro Systems", February 1988, pages 22-23.  The
first article offers more detail.  I don't really want to rehash either
article in full.  The salient points (to me) about Dore' are:

	(1) Toolkit approach.
	(2) Can render using vectors, hidden surface, or ray tracing.
	(3) Hierarchical, object oriented system.
	(4) Five object classes:
	    (a) primitives (including points, curves, polygons, meshes, cubic
		solids (?!), and NURBS (non-uniform rational B-splines)),
	    (b) appearance attributes (material properties, inc. solid texture
		maps and environmental reflection maps),
	    (c) geometric attributes (modeling matrices),
	    (d) studio objects (camera, lights) (I like this term!),
	    (e) organizational objects (hierarchy, and evidently the ability
		to define function calls inside the environment which call
		routines in the application program.  No idea how this works).
	(5) Quoted times: 0.1 second for vector, 10 seconds for hidden
	    surface, 100 seconds ray-traced (I assume on the Titan.  No
	    idea what kind of scene complexity or resolution).
	(6) Written in C.
	(7) "Open" system - source code sold in hopes of selling Dore' on other
	    systems.

The best part (for universities and research labs) is the price: $250 for
a source code license - not sure what the cost is for source code maintenance
(vs. $15000 for commercial users plus $5000/year after the first year).  Per
copy binary license is $200.

I am teaching the ray-tracing section of "A Consumer's and Developer's Guide
to Image Synthesis" at SIGGRAPH this year, so definitely want to know more.
I would also like more information just out of curiosity.  So, you university
people, please go out there and get one - seems like a real bargain.  The
contact info for Ardent is:

	Ardent Computer Corp
	550 Del Rey Ave
	Sunnyvale, CA  94086
	408-732-2806


That's all, folks,

Eric

   

 _ __                 ______                         _ __
' )  )                  /                           ' )  )
 /--' __.  __  ,     --/ __  __.  _. o ____  _,      /  / _  , , , _
/  \_(_/|_/ (_/_    (_/ / (_(_/|_(__<_/ / <_(_)_    /  (_</_(_(_/_/_)_
             /                               /|
            '                               |/

			"Light Makes Right"

			 March 1, 1988


I just received Andrew's note that the hardcopy RTN is in the mail, which
inspired me to flush the buffer and send on the latest offerings.  Special
thanks to Jeff Goldsmith for submissions.

    - Eric

--------------------------------------------------

Mailing list updates
--------------------

First, an address change.  John Peterson is now with Apple, who writes:

    'I'm currently hanging out at Apple thinking about "3D graphics for the
    rest of us" and how to keep the jaggies away from personal computers.
    (But there is this Cray sitting about 50 feet away.  Hmmm...)'

#
# John Peterson - bicubic splines, texturing
# Apple Computer (graduated University of Utah, 1988)
alias	john_peterson	hpfcrs!hpfcla!jp%apple.apple.com@RELAY.CS.NET

I asked him for ray tracers at the University of Utah.  So, Tom Malley and
Rod Bogart (whose initials are 'RGB') are now subscribers.

From Tom:
    My thesis research was similar to what John Wallace described,
    being a two pass approach to radiosity to include specular reflection
    and transparency.  Form factors were all calculated via ray tracing,
    however.  I did some brief examination of different ray intersection
    methods along the way (Rubin-Whitted, Kay-Kajiya, and Glassner).

#
# Tom Malley - blending ray tracing and radiosity
# Evans & Sutherland (graduated University of Utah, 1988)
# (malley@cs.utah.edu, cs.utah.edu!esunix!tmalley)
alias	tom_malley	hpfcrs!hpfcla!hplabs!malley@cs.utah.edu


To quote John Peterson about Rod Bogart:
    Rod developed a really neat method for using ray tracing to integrate
    computer generated pictures with real world images (coming soon to a
    SIGGRAPH near you...).

#
# Rod Bogart - blending ray tracing and images
# University of Utah
alias	rod_bogart	hpfcrs!hpfcla!hplabs!bogart%gr@cs.utah.edu

-----------------------------------

Another Dore' Article

In case you have not been able to track down the two articles previously
mentioned about Dore', Ardent's new rendering system, there's now a third
(that I know of):  it's in "Computer Design", Feb 15, 1988, pages 30-31.
Pretty much like the other articles (i.e. cast from the same press release).

-----------------------------------

Responses to the "teapot in a football stadium" problem:


From: Andrew Glassner

 Just a quick response to your football stadium/teapot example.  When you
subdivide a node, look at its children.  If only one child is non-empty,
replace the original node with its non-null child.  Do this recursively 
until the subdivision criterion is satisfied.  I do this in my spacetime
ray tracer, and the results can be big.  The ray propagation can get just
a bit more complex, but there are clever ways to keep it simple (see
John Amanatides' article in Eurographics '87, plus I have a scheme that
I hope to write up soon...).
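Andrew's collapsing rule can be sketched in a few lines of C (the Node layout
here is my own invention, not his code; empty children are assumed to be null
pointers already):

```c
/* Hypothetical octree node: 8 children, with 0 (null) where a child
   subtree is empty, and a primitive count at leaves. */
typedef struct Node {
    struct Node *child[8];
    int nprims;
} Node;

/* If exactly one child of n is non-empty, return that child so the
   caller splices out the intermediate node; chains of single-child
   nodes thus vanish bottom-up.  Otherwise return n unchanged. */
Node *collapse(Node *n)
{
    int i, count = 0;
    Node *only = 0;

    if (n == 0)
        return 0;
    for (i = 0; i < 8; i++) {
        n->child[i] = collapse(n->child[i]);   /* fix children first */
        if (n->child[i]) {
            count++;
            only = n->child[i];
        }
    }
    if (count == 1)
        return only;          /* replace node with its lone child */
    return n;
}
```

For the teapot-in-a-stadium case this shortcuts the long chain of nearly
empty cubes between the stadium's root cube and the teapot's tiny one.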

  Better yet, go with a hybrid space subdivision/bounding volume scheme,
such as the one described in my spacetime paper (poorly described in the
Intro to RT notes, but better described in the version slated for the
March issue of CG&A; I'd be happy to mail you a preprint).  I think this
hybrid scheme gives the best of both worlds, and you can use whatever
space subdivision and bounding volume techniques that you like in the
two distinct phases of the algorithm.  I use adaptive space subdivision
and Kay's bounding slabs, and that combination seems to work pretty well.

  And now I have to get back to moving into my office!

------------------------------------------

Comments on Jim Arvo's Efficiency Article


From: Eric Haines (with a few more comments than my original letter to Jim)

	Your article on efficiency is fascinating.  I hope to read it more
carefully tonight and (eventually--we just came under a crunch of work)
comment on it.  Sounds like you've done a lot of serious thought and
speculation on the possibilities.  I agree with the philosophy of objects
each having their own private hierarchies, and having the ability to hook
these hierarchies up however you want.  We've done that on a small scale
with our tessellated spline surfaces:  automatic hierarchy a la Goldsmith &
Salmon (IEEE CG&A, May 1987) for everything, but then octrees for the spline
surfaces themselves.  A nice feature of Goldsmith is that you can weight the
cost of each primitive into the algorithm: multiply its area by some
intersection cost (which you'll probably have to figure out through
experimentation) to give it a weighting.  In this way a torus surface which has
the same size bounding volume as a quadrilateral can be given a higher
weighting factor.  A higher cost has the effect of making the hierarchical tree
less horizontal near the complicated object, i.e. there are more bounding
volumes overall, with a few complicated objects in each.  This is what you
want, since you'd rather spend a little extra time on intersecting bounding
volumes than wasting a lot of time intersecting the empty space around costly
objects.
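The weighting idea above can be sketched like so (the cost constants are my
own placeholders, not measured numbers; in practice you would time your
intersectors):

```c
/* Per-type intersection cost estimates -- invented for illustration.
   A torus needs quartic root finding; a quadrilateral needs only a
   plane hit plus a containment test. */
#define COST_QUADRILATERAL  1.0
#define COST_TORUS          8.0

/* Weighted cost of testing a child volume, given a ray already hit the
   parent: the chance of hitting the child (surface-area ratio, for
   convex bounding volumes) times how dear the child is to intersect.
   A torus with the same bounding volume as a quadrilateral thus gets
   an 8x higher weight, steering the builder toward wrapping it in
   extra bounding volumes. */
double child_cost(double parent_area, double child_area, double isect_cost)
{
    return (child_area / parent_area) * isect_cost;
}
```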


Response from: Jim Arvo

    I'm glad you found my article interesting.  All your interesting
    mail finally motivated me to contribute to the discussion.  I
    thought I would toss out a pet idea of mine and see if it sparked
    any debate.  It turns out that Jeff Goldsmith also looked at
    simulated annealing for bounding box hierarchies.  One day one 
    of us will get some results.  Hopefully not negative results!

    With all the talk about octrees and such, it's clear that there 
    are a number of potential papers "waiting in the wings".  I've
    been thinking that by getting the right collaborations going,
    we (the ray tracing group) could easily "hand" IEEE several
    related papers, effectively defining a theme issue.  What do
    you think?


My reply:

	The efficiency article collection sounds possible.  Another idea which
someone (Mike Kaplan, maybe?  I forget) mentioned at last SIGGRAPH was "A
Characterization of Ten Ray Tracing Efficiency Algorithms".  If well done, this
would be a classic.  There are probably entirely new schemes still to be found,
and certainly trying to optimize and figure out good hybrid methods is an
area ripe for development.  But right now many of the structures and algorithms
are in place, and still have not been fully compared.  Timings are
unconvincing, and statistics are worthwhile but don't tell the whole story.
An in-depth comparison of the major algorithms and techniques to improve these
would be wonderful.  Someday, someday ... well, my hope is that a few of us
could do some writing along these lines, even if it's just brainstorming on
how to compare particular algorithms in a rigorous fashion  (e.g. How can we
simulate a scene mathematically?  OK, idealize each object as a box or sphere
for simplicity.  Now, how do we distribute the points to get realistic
clustering? Once we have a "scene generator" which could create various typical
distributions of objects in a scene, then we have to analyze how this generator
would interact with each algorithm, and be able to predict how each efficiency
scheme deals with the scenes generated.  Or there might be simpler ways to
isolate and analyze each factor which affects the efficiency of a scheme.
Anyway, whatever, but this stuff looks fun!).  Understanding the strengths of
the various techniques seems vital to being able to do any kind of "annealing"
process for optimization.

------------------------------------------------------------

Efficiency Tricks
-----------------

From Jeff Goldsmith:

    Here's a good hack for Ray Tracing News:
When using Tim Kay's heapsort on bounding volumes in order
to get the closest, don't bother to do that for illumination
rays.  I know it seems obvious, but I never thought to do it.

The obvious corollary to that idea has a little more reach to
it.  Since illumination rays form the bulk of the rays we 
trace, getting the nearest intersection is of limited value.
In addition, if CSG is used, there are even more cases where the nearest
intersection is of less value.  This seems to indicate that 
space tracing techniques are doing some amount of needless work.
Since it doesn't really cost that much, perhaps it is not a flaw,
but maybe space tracers should consider approaches that don't
worry about where along the path we are and optimize that problem
instead.

---------------------------------------------

More Book Recommendations
-------------------------

From: Jeff Goldsmith

    I agree completely with your comment about libraries.
Mine is a crucial resource for me.  Here are some of my
favorite books that are in my office:

	Geometry:
	
	    Computational Geometry for Design and Manufacture
	        Faux & Pratt
	 	--an early CAD text.  It has lots of good stuff
		on splines and 3D math.

	    Differential Geometry of Curves and Surfaces
		do Carmo
		--A super text on classical differential geometry.
		(Not quite the same as analytic geometry.)

	    CRC Standard Math Tables
		--This has an awesome section on analytic geometry.
		Calculus, too.  Can't live without it.  It is not
		the same as the first part of the Chemistry and 
		Physics one.

	    Analytic Geometry
		Steen and Ballou
		--Once was the standard college text on the subject.
		That was a long time ago, but it is very easy to
		read and it covers the fundamentals.

	Computing:

	    Data Structures and Algorithms
		--Aho, Hopcroft and Ullman
		Read anything by these guys.

	    Data Structure Techniques
		--Standish
		More How-to than AHU's tome.

	    Numerical Analysis
		--Burden, Faires, and Reynolds
		I have the other two, as well.  This is the
		least complete of the three, but the algorithms
		inside are childishly easy to implement.  They
		always seem to work, too.  Best of all, for many
		cases, they have test data and solutions.

	    Software Tools
		--Kernighan and Plauger
		How to write command line interpreters, editors,
		macro expanders, the works.  Great reading.

	    Fundamentals of Computer Algorithms
		--Horowitz and Sahni
		Less technical than AHU, but pretty technical.
		Thicker.  It may very well answer the problem
		you can't figure out straight off.

	    The Art of Computer Programming
		--Knuth
		The "Encyclopedia"

	Physics:  (Seem awfully useful sometimes)

	    Gravitation
		--Misner, Thorne, and Wheeler
		The thickest book on my shelf.  It's a paperback, too.
		(It's bent three bookends permanently.  Cheap JPL ones.)
		Truly a tome on modern physics.
		
	    Modern Physics
		--Tipler
		Much easier to read than MTW.  Has lots of good appendices.

	    University Astronomy
		--Pasachoff and Kutner
		I read this book for fun.  I wonder why I didn't read it
		while I was taking Kutner's course?

	    The Feynman Lectures on Physics
		Awesome first course.  Most of my needs are problems in
		the text.

	Graphics, etc:

	    Raster Graphics Handbook
		--Conrac
		All about fundamentals of the craft.

	    Light and Color in Nature and Art
		--Williamson and Cummins
		Much easier to read than Hall's thesis, but less 
		technical as well. 

	Etc, Etc:

	    The Random House Dictionary of the English Language, 
	    College Edition
		The best collegiate sized dictionary around. 
		By far.

	    The Chicago Manual of Style
		Has most of the answers. Did you know that
		to recreate is to have fun, but to 
		re-create is computer graphics?

	    The Elements of Style
		The one that came before computers.

-----------------------------------------------

Bug for the Day						by Eric Haines
---------------

{This will be pretty unexciting for those who never intend to implement an
octree subdivision scheme.  For future implementers, I hope you find it of
use:  it took me quite a few hours to track this one down, so I think it is
worth going into.}

	This bug was one I had when implementing octree subdivision for ray
tracing.  The basic algorithm used was Glassner's:  once you intersect the
octree structure, move the intersection point in one half of the smallest
cube's dimension in the direction normal to the wall hit.  In other words,
find out what cube is the next cube by finding a point that should be well
inside of it, then translating this point into integer octree coordinates
and traversing the octree downwards until a leaf node is found.

	However, there are some subtle errors that can occur with moving to the
next octree cube.  My favorite is almost hitting the edge of a cube, moving
into the next cube, then getting caught moving to the cube diagonal to this
cube, i.e. moving from cube 1 to 2 to 3 ...

	X-->
	+---+---+
      ^ | 2 | $ |	Numbers are the order of cubes moved through.
      | +---#---+
      Y | 1 | 3 |
	+---+---+
	  ^________ray started here, and hit almost at the "#".
		   (ray is in +X, +Y direction)

This went into an infinite loop, going between 2 and 3 forever.  The reason
was that when I hit the boundary 1&2 I would add a Y increment (half minimum
box size) to the intersection point, then convert this to find that I was
now in box 2.  I would then shoot the ray again and it would hit the
wall at 2&$.  To this intersection point I would add an X increment.  However,
what would happen is that the Y intersection point would actually be ever so
slightly low - earlier when I hit the 1&2 wall adding the increment pushed us
into box 2.  But now when the Y intersection point was converted it would
put us in the 1+3 boxes, and X would then put us in box 3.  Basically, the
precision of the machine made the mapping between world space and octree
space be ever so slightly off.

	The infinite loop occurred when we shot the ray again at box 3.  It
would hit the 3&$ wall, get Y incremented, and because X was ever so slightly
less than what was truly needed to put the intersection point in the 3+$
boxes, we would go back to box 2, ad infinitum.  Another way to look at this
is that when we would intersect the ray against any of the walls near the
"#" point, the intersection point (due to roundoff) was always mapping to
box 1 if not incremented.  Incrementing in Y would move it to box 2, and in
X would move it to box 3, but then the next intersection test would yield
another point that would be in box 1.  Since we couldn't increment in
both directions at once, we could never get past 2 and 3 simultaneously.

	This bug occurs very rarely because of this: the intersection points
all have to be such that they are very near a corner, and the mapping of the
points must all land within box 1.  This problem occurred for me once in a
few million rays, which of course made it all that much more fun to search
for it.

	My solution was to check the distance of the intersections generated
each time: if the closest intersection was a smaller distance from the origin
than the closest distance for the previous cube move, then this intersection
point would not be used, but rather the next higher would be.  In this way
forward progress along the ray would always be made.
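The distance-checking fix might look like this in C (a sketch; the array of
candidate wall distances, assumed sorted ascending, and the names are mine):

```c
/* Guarantee forward progress when stepping to the next octree cube.
   wall_t[] holds the parametric distances at which the ray crosses
   each candidate cube wall, sorted ascending; prev_t is the distance
   accepted for the previous cube move.  By refusing any crossing not
   strictly beyond prev_t, roundoff near a corner can never bounce the
   traversal back and forth between two cubes forever. */
double next_wall_t(const double wall_t[3], int nwalls, double prev_t)
{
    int i;
    for (i = 0; i < nwalls; i++)
        if (wall_t[i] > prev_t)
            return wall_t[i];      /* first crossing that moves forward */
    return -1.0;                   /* none: the ray leaves the octree */
}
```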

	By the way, I found that it was worthwhile to always use the original
ray origin for testing ray/cube intersections - doing this avoids any
cumulative precision errors which could occur by successively starting from
each new intersection point.  To simulate the origin starting within the cube
I would simply test only the 3 cube faces which faced away from the ray
direction (this was also faster to test).

	Anyway, hope this made sense - has anyone else run into this bug? Any
other solutions?

---------------------------------------------

A Pet Peeve (by Jeff Goldsmith)
-----------

Don't ever refer to pixels as rows and columns.  It makes it
hard to get the order (row,column)? (column,row)? right.  Refer
to pixels as (x,y) coordinates.  Not only is that the natural
system to do math on them, but it is much easier to visualize
in a debugging environment, as well as running the thing.  I
use the -x and -y npix switches on the tracer command line to
override any settings and have found them to be much easier to
deal with than the -r and -c that seem to be everywhere.  Note
that C's normal array order is (I think.  I always get these 
things wrong.) (y,x).

	[I agree: my problem now is that Y=0 is the bottom edge of the screen
	when dealing with the graphics package (HP's Starbase), and Y=0 is the
	top when directly accessing the frame buffer (HP's SRX). -- EAH]
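Jeff's recollection about C array order can be checked with a throwaway
offset calculation (XRES/YRES are invented): an image stored scanline-major
is declared `img[YRES][XRES]` and indexed `img[y][x]`, with x varying
fastest in memory.

```c
#define XRES 4
#define YRES 2

/* Byte offset of pixel (x,y) in a scanline-major C image array:
   confirms that C's "normal" order is indeed [y][x]. */
int offset_of(int x, int y)
{
    unsigned char img[YRES][XRES];
    return (int)(&img[y][x] - &img[0][0]);
}
```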

---------------------------------------------

	Next "RT News" issue I'll include a write-up of Goldsmith/Salmon which
should hopefully make the algorithm clearer, plus some little additions I've
made.  I've found Goldsmith/Salmon to be a worthwhile, robust efficiency scheme
which hasn't received much attention.  It embodies an odd way of thinking
(I have to reread my notes about it when I want to change the code), as there
are a number of costs which must be taken into account and inherited.  It's
not immediately intuitive, but has a certain sense to it once all the pieces
are in place.  Hopefully I'll be able to shed some more light on it.

All for now,

Eric

 _ __                 ______                         _ __
' )  )                  /                           ' )  )
 /--' __.  __  ,     --/ __  __.  _. o ____  _,      /  / _  , , , _
/  \_(_/|_/ (_/_    (_/ / (_(_/|_(__<_/ / <_(_)_    /  (_</_(_(_/_/_)_
             /                               /|
            '                               |/

			"Light Makes Right"

			 March 8, 1988

-------------------------------------------------

Surface Acne
------------

From Eric Haines:

	A problem which just about every ray tracer has run into, and which
has rarely appeared in the literature (and even more rarely been solved in any
way) is what I call "surface acne".

	An easy way to explain this problem is with an example.  Say you are
looking at a double sided (i.e. no culling) cylinder primitive.  You shoot an
eye ray, hitting the outside.  Now you look at a light.  As it turns out, the
intersection point truly is bathed by the light, and so should see it.  What
actually may happen is that the shadow test ray hits the cylinder.  In images
this will show up as black dots or other anomalous shadings - "surface acne".
I've seen this left in some images to give an interesting textured effect, but
normally it's a real problem.

	How did this happen?  Well, theoretically it can't.  However, due to
precision error the following happens.  When you hit the cylinder and
calculated the intersection point in world space, the point computed was
actually ever so slightly inside the cylinder.  Now, when the shadow ray
is sent out, it is tested against the cylinder's surface, and an intersection
is found at some tiny distance from the origin.

	A common solution is to just assign an epsilon to each intersector and
cross your fingers.  In other words, what you really do is move the ray origin
ever so slightly along the shadow (or reflection or refraction) ray direction
and hope this was far enough that the new origin is 'outside' of the object
(in actuality, what you want is for the new origin to be on the same side of
the object as the parent ray, except for refraction rays, which want to start
on the opposite side).  This works fairly well for test systems, but is pretty
scary stuff for software used by anyone who didn't design it (e.g. some user
decides to input his molecular database in meters, causing all his data to be
much smaller in radius than my fudge factor.  When I add my fudge factor
distance to the ray, I find that my new ray origin is way outside the scene).
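The epsilon fudge just described, as a minimal C sketch (the constant and the
function name are mine; the absolute EPSILON is precisely the scale hazard
the meters-vs-millimeters example warns about):

```c
/* Nudge a secondary ray's origin a tiny distance along its direction
   so it starts "outside" the surface just hit, suppressing false
   self-intersections ("surface acne").  EPSILON is absolute, so a
   scene modeled at a wildly different scale defeats it -- cross your
   fingers, as the text says. */
#define EPSILON 1e-4

void offset_origin(const double hit[3], const double dir[3], double org[3])
{
    org[0] = hit[0] + EPSILON * dir[0];
    org[1] = hit[1] + EPSILON * dir[1];
    org[2] = hit[2] + EPSILON * dir[2];
}
```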

	Another solution is to not test the item intersected if it is not
self-shadowing.  For example, a polygon cannot cast a shadow on itself, so
should not be tested for intersection when a ray originates on its surface.
This works fine for some primitives, but falls apart when self-shadowing
objects (cylinders, tori, spline surfaces, etc) are used.

	I have also experimented with some root polishing techniques, which
help to solve some problems, but I'll leave it at this for now.  Has anyone
any better solutions for surface acne (ideally foolproof ones)?  I suspect
that the best solution is a combination of the above techniques, but hopefully
I'm missing some concept that might make this problem easy to solve.  Hope to
hear from you all on this!

-------------

Addenda from Jeff Goldsmith:

    Al [Barr] and I have used a technical term for "surface acne,"
too.  We called it "black dots" or more often "black shit."
(Zbuffers have similar problems.  The results are called
"zbuffer shit" or "zippers".  Mostly the cruder term is
used since the artifacts are not particularly desirable.)

--------------------------------------------------

Goldsmith/Salmon Hierarchy Building
-----------------------------------

Well, I was going to write up some info on the Goldsmith/Salmon hierarchy
building algorithm, but the RT News buffer was filled almost immediately and 
I haven't done it yet.  However, there was this from Jeff Goldsmith, about
his earlier paper (IEEE CG&A, May 1987):

       If you are going to spend some time and effort on automatic
    tree generation stuff (Note: paper 2 is almost done--mostly
    talks about parallelism and hypercubes, but some stuff on trees
    as well--mostly work heuristics that include primitives
    and so on) I'd like to hear some thinking about the evaluation
    function.  Firstly, it's optimized for primary rays.  That turns
    out to be an unfortunate choice, since most rays are secondary
    rays.  We've come up with a second order correction that is 
    good for evaluating trees, but turns the generation algorithm
    into O(n log^2 n).  We've not played around with it enough to
    tell whether it works.  If you have some thoughts/solutions,
    that would be nice.  Another finding on the same vein that is
    much more important is: the mean (see next note) seems to be 
    reasonably close, but sigma is very high for the predictions
    vs. actual tries.  This wasn't important (actually, wasn't
    detected) on a sequential machine, but became crucial on a
    parallel machine.  Some of the variation is due to our assumption/
    attempt at view direction independence.  (Clearly, stuff in back
    is not checked for intersection much.)  I don't know whether
    that is all of it--we get bizarre plots of this data.  If you
    have any thoughts on how to make a better or more precise
    evaluation function, I'd really like to hear the reasoning and
    perhaps steal and use the results.  
    Oh, the promised note: The mean is only correct if the highest
    level bounding volume (root node) is contained completely within
    the view volume.  If it isn't, the actual results end up proportional
    to the predicted ones, but I haven't worked out the constant.  
    (It shows up on our graphs pretty clearly, though.)

	The second part of the algorithm is the builder.  I'm not
    convinced that it is a very good method at all, but it met the
    criteria I set up when trying to decompose trees--O(below n^2)
    and reasonably local (I was trying to use simulated annealing
    at the time.)  Some other features were environmental; some were
    because I couldn't think of a better way.  In no sense am I convinced
    that the incremental approach or the specific one chosen is best.
    I'd like to hear about that, too.

	The only part I really like about the whole thing is the 
    general approach of using heuristics to guess at some value
    (rated in flops eventually) and then trying to optimize that
    value.  Beyond that, I think there is a whole realm of computational
    techniques waiting to be used to approximately solve optimization
    problems.  I'm really interested in other work done in that 
    direction and especially results regarding graphics.

	Thanks for the good words; I seem to have been mentioned
    in most of the last issue.  I bet that has something to do with
    my having acquired a network terminal on my desk less than a 
    month ago (yay!).  

-----------------------------------------------------

Efficiency Tricks followup
--------------------------

These are comments generated by Jeff Goldsmith's note that Kay/Kajiya sorting is
not needed for shadow rays.

-----------

Comments from Masataka Ohta:

In the latest ray tracing news, you write:

>Efficiency Tricks

>Since illumination rays form the bulk of the rays we 
>trace.

If so, instead of space tracing, you should use ray coherence
at least for the illumination rays.

The ray coherent approaches are found in CG&A vol. 6, no. 9 "The
Light Buffer: A Shadow-Testing Accelerator" and in my paper "ray
coherence theorem and constant time ray tracing algorithm" in
proceedings of CG International '87.

>In addition, if CSG is used, more times occur when the nearest
>intersection is of less value.  This seems to indicate that 
>space tracing techniques are doing some amount of needless work.

How about tracing illumination rays from light sources, instead
of from object surface? It will be faster for your CSG case,
if the surface point lies in the shadow, though if the surface
point is illuminated, there will be no speed improvement.

The problem is interesting to me because my research on coherent
ray tracer also suggests that it is much better to trace illumination
rays from the light source.

Do you have any other reasons to determine from where illumination rays
are fired?

----------------------------------------------------------------------

Jeff Goldsmith's reply:

Actually, I believe you, though I won't say with certainty 
that we know the best way to do shadow testing.  However,
I'm interested in fundamentally understanding the ray tracing
algorithm and determining what computation MUST be done, so
the realization that space tracing illumination rays still
seems meaningful.  In fact, it is my opinion that space tracing
is not the right way to go and "backwards" (classical) ray
tracing will eventually be closer to what will be used 30
years from now.  I won't even try to defend that position;
no one knows the answers.  What we are trying to do is
shed a little "light" on the subject.  Thanks for your
comments.

-----------------

From Eric Haines:

	I just got from Ohta the same note Ohta sent to you, plus your reply.
Your reply is so short that I've lost the sense of it.  So, if you don't mind,
a quick explanation would be useful.

> However,
> I'm interested in fundamentally understanding the ray tracing
> algorithm and determining what computation MUST be done, so
> the realization that space tracing illumination rays still
> seems meaningful.

What is "the realization that space tracing illumination rays"?  I'm missing
something here - which realization?

> In fact, it is my opinion that space tracing
> is not the right way to go and "backwards" (classical) ray
> tracing will eventually be closer to what will be used 30
> years from now.

Do you mean by "space tracing" Ohta's method?

Basically, it looks like I should reread Ohta's article, but I thought I'd
check first.

--------------

Further explanation from Jeff Goldsmith:

I think that a word got dropped from the sentence, either when I
typed it in or later.  (Who knows--I do that about as often as
computers do.)

I meant:  Since distance order is not needed for illumination
rays, space tracing methods in general (not Ohta's in particular)
do extra work.  It's not always clear that extra information costs
extra computation, but they usually go hand in hand.  (It was just
a rehash of the original message.)  Anyway, if extra computation is
being done, perhaps then there is an algorithm that does not do 
this computation, yet does all the others (or some others...)
that is of lower asymptotic time complexity.  

Basically, this all boils down to my response to various claims
that people have "constant time" ray tracers. It is just not 
true.  It can't be true if they are using a method that will yield
the first intersection along a path since we know that that 
computation cannot be done in less than O(n log n) without a
discretized distance measurement.  I don't think that space
tracers discretize distance in the sense of a bucket sort, but
I could be convinced, I suppose.  Anyway, that's what the ramblings
are all about.  If you have some insights, I'd like to start an
argument (sorry, discussion) on the net about the topic.  What
do you think?

------------------------------------------------------------

Extracts from USENET news
-------------------------

There was recently some interesting interchange about octree building on USENET.
Some people don't read or don't receive comp.graphics, so the rest of this
issue consists of these messages.

----------------

From Ruud Waij (who is not on the RT News e-mail mailing list):

In article <198@dutrun.UUCP> winffhp@dutrun.UUCP (ruud waij) writes:
My ray tracing program, which can display the 
primitives block, sphere cone and cylinder, 
uses spatial enumeration of the object space 
(subdivision in regularly located cubical cells 
(voxels)) to speed up computation.

The voxels each have a list of primitives.
If the surface of a primitive is inside a voxel,
this primitive will be put in the list of the voxel.

I am currently using bounding boxes around the 
primitives: if part of the bounding box is 
inside the voxel, the surface of the primitive 
is said to be inside the voxel.
This is a very easy method but also very s-l-o-w.

I am trying to find a better way of determining 
whether the surface of a primitive is in a voxel 
or not, but I am not very successful.
Does anyone out there have any suggestions ?

---------------

Response from Paul Heckbert:
  
Yes, interesting problem!  Fitting a bounding box around the object and listing
that object in all voxels intersected by the bounding box will be inefficient as
it can list the object in many voxels not intersected by the object itself.
Imagine a long, thin cylinder at an angle to the voxel grid.

I've never implemented this, but I think it would solve your
problem for general quadrics:

    find zmin and zmax for the object.
    loop over z from zmin to zmax, stepping from grid plane to grid plane.
	find the conic curve of the intersection of the quadric with the plane.
	this will be a second degree equation in x and y (an ellipse,
	    parabola, hyperbola, or line).
	note that you'll have to deal with the end caps of your cylinders
	    and similar details.
	find ymin and ymax for the conic curve.
	loop over y from ymin to ymax,
	    stepping from grid line to grid line within the current z-plane
	    find the intersection points of the current y line with the conic.
	    this will be zero, one, or two points.
	    find xmin and xmax of these points.
	    loop over x from xmin to xmax.
		the voxel at (x, y, z) intersects the object

Perhaps others out there have actually implemented stuff like this and will
enlighten us with their experience.

-----------------

Response from Andrew Glassner:

  Ruud and I have discussed this in person, but I thought I'd respond
anyway - both to summarize our discussions and offer some comments
on the technique.

  The central question of the posting was how to assign the surfaces
of various objects to volume cells, in order to use some form of spatial
subdivision to accelerate ray tracing.  Notice that there are at
least two assumptions underlying this method.  The first assumes that 
the interior of each object is homogeneous in all respects, and thus
uninteresting from a ray-tracing point of view.  As a counterexample,
if we have smoke swirling around inside a crystal ball, then this
"homogeneous-contents" assumption breaks down fast.  
 
  To compensate, we must either include the volume inside each object 
in each cell's object list (and support a more complex object description
encompassing both the surface and the contents), or include as new objects 
the stuff within the original.  

  The other assumption is that objects have hard edges; otherwise we have 
to revise our definition of "surface" in this context.  This can begin
to be a problem with implicit surfaces, though I haven't seen this really
discussed yet in print.  But as long as we're using hard-edged objects 
with homogeneous interiors, the "surface in a cell" approach is still 
attractive.  From here on I'll assume that cells are rectangular boxes.

  So to which cells do we attach a particular surface?  Ruud's current 
technique (gathered from his posting) finds the bounding box of the surface 
and marks every cell that is even partly within the bounding volume.  Sure,
this marks a lot of cells that need not be marked.  One way to reduce the
marked cell count is to notice that if the object is convex, we can unmark 
any cell that is completely within the object; we test the 8 corners with 
an inside/outside test (fast and simple for quadrics; only slightly slower 
and harder for polyhedra).  If all 8 corners are "inside", unmark the cell.  
Of course, this assumes convex cells - like boxes.  Note that some quadrics
are not convex (e.g. hyperboloid of one sheet) so you must be at least
a little careful here.
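The 8-corner unmarking test is short; here it is for a sphere, the simplest
convex quadric (a sketch, with my own names):

```c
typedef struct { double cx, cy, cz, r; } Sphere;

/* If every corner of the box cell [lo,hi] is inside the convex object,
   the cell contains none of its surface and may be unmarked.  (The
   converse does NOT hold -- an all-outside cell may still contain
   surface, as in the ice-cream-bar example.) */
int cell_inside_sphere(const Sphere *s,
                       const double lo[3], const double hi[3])
{
    int i;
    for (i = 0; i < 8; i++) {
        double x = (i & 1) ? hi[0] : lo[0];
        double y = (i & 2) ? hi[1] : lo[1];
        double z = (i & 4) ? hi[2] : lo[2];
        double dx = x - s->cx, dy = y - s->cy, dz = z - s->cz;
        if (dx*dx + dy*dy + dz*dz >= s->r * s->r)
            return 0;           /* a corner is outside: keep the cell */
    }
    return 1;                   /* all 8 corners inside: unmark */
}
```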

  The opposite doesn't hold - just because all 8 corners are outside
does NOT mean a cell may be unmarked.  Consider the end of a cylinder
poking into one side of a box, like an ice-cream bar on a stick,
where the ice-cream bar itself is our cell.  The stick is within the 
ice cream, but all the corners of the ice cream bar are outside the stick.  
Since this box contains some of the stick's surface, the box should still 
be marked.  So our final cells have either some inside and some outside 
corners, or all outside corners.

  What do we lose by having lots of extra cells marked?  Probably not
much.  By storing the ray intersection parameter with each object after
an intersection has been computed, we don't ever need to actually
repeat an intersection.  If the ray id# that is being traced matches
the ray id# for which the object holds the intersection parameter, we
simply return the intersection value.  This requires getting access to
the object's description and then a comparison - probably the object
access is the most expensive step.  But most objects are locally
coherent (if you hit a cell containing object A, the next time you need
object A again will probably be pretty soon).  So "false positives" -
cells that claim to contain an object they really don't - aren't so bad,
since the pages containing an object will probably still be resident
when we need it again.
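
  This ray-id caching (sometimes called "mailboxing") can be sketched as
follows; the wrapper class and names are my own invention, not those of any
particular ray tracer:

```python
class CachedObject:
    """Wraps an intersectable object so that each (object, ray) pair is
    intersected at most once, no matter how many cells list the object."""
    def __init__(self, intersect_fn):
        self.intersect_fn = intersect_fn   # (origin, dir) -> t or None
        self.last_ray_id = None
        self.last_t = None

    def intersect(self, ray_id, origin, direction):
        # If this ray already tested this object (perhaps from an earlier
        # cell along its path), just return the stored result.
        if ray_id == self.last_ray_id:
            return self.last_t
        self.last_ray_id = ray_id
        self.last_t = self.intersect_fn(origin, direction)
        return self.last_t
```

Each new ray gets a fresh id, so no per-ray clearing of the caches is needed.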

  We do need to protect ourselves, though, against a little gotcha that
I neglected to discuss in my '84 CG&A paper.  If you enter a cell and
find that you hit an object it claims to contain, you must check that
the intersection you computed actually resides within that cell.  It's
possible that the cell is a false positive, so the object itself isn't
even in the cell.  It's also possible that the object is something like
a boomerang, where it really is within the current cell but the actual
intersection is in another cell.  The loss comes in when the intersection
is actually in the next cell, but another surface in the next cell (but
not in this one) is actually in front.  Even worse, if you're doing CSG, 
that phony intersection can distort your entire precious CSG status tree!
The moral is not to be fooled just because you hit an object in a cell; 
check to be sure that the intersection itself is also within the cell.  
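
  A minimal sketch of that validation step (hypothetical names; the epsilon
guard is my own addition, to absorb floating-point error for hits landing
right on a cell wall):

```python
EPS = 1e-9

def point_in_cell(p, lo, hi):
    # Loose containment test so that hits exactly on a cell boundary
    # are not rejected by floating-point noise.
    return all(lo[i] - EPS <= p[i] <= hi[i] + EPS for i in range(3))

def accept_hit(origin, direction, t, lo, hi):
    # Accept an intersection found via this cell's candidate list only
    # if the hit point actually lies inside the cell; otherwise it will
    # be rediscovered (correctly ordered) in a later cell.
    p = tuple(origin[i] + t * direction[i] for i in range(3))
    return point_in_cell(p, lo, hi)
```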

  How to find the bounding box of a quadric?  A really simple way is
to find the bounding box of the quadric in its canonical space, and
then transform the box into the object's position.  Fit a new bounding
box around the eight transformed corners of the original bounding box.
This will not make a very tight volume at all (imagine a slanted,
tilted cylinder and its bounding box), but it's quick and dirty, and
I use it for getting code debugged and at least running.
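
  A sketch of this quick-and-dirty approach, assuming the transform is a
3x4 affine matrix stored row-major (the layout and names are my
assumptions):

```python
def transform_point(m, p):
    # Apply a 3x4 affine transform (3 rows of [a b c tx]) to a point.
    return tuple(m[r][0] * p[0] + m[r][1] * p[1] + m[r][2] * p[2] + m[r][3]
                 for r in range(3))

def loose_bbox(canon_lo, canon_hi, m):
    # Transform the 8 corners of the canonical-space box and fit a new
    # axis-aligned box around the results.  Not tight, but fast.
    corners = [(x, y, z) for x in (canon_lo[0], canon_hi[0])
                         for y in (canon_lo[1], canon_hi[1])
                         for z in (canon_lo[2], canon_hi[2])]
    pts = [transform_point(m, c) for c in corners]
    lo = tuple(min(p[i] for p in pts) for i in range(3))
    hi = tuple(max(p[i] for p in pts) for i in range(3))
    return lo, hi
```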

  If you have a convex hull program, you can compute the hull for
concave polyhedra and use its bounding box; obviously you needn't
bother for convex polyhedra.  For parametric curved surfaces you can
try to find a polyhedral shell that is guaranteed to enclose the
surface; again you can find the shell's convex hull and then find
the extreme values along each co-ordinate.

  If your boxes don't have to be axis-aligned, then the problem changes
significantly.  Consider a sphere: an infinite number of equally-sized
boxes at different orientations will enclose the sphere minimally.  More
complicated shapes appear more formidable.  An O(n^3) algorithm for
non-aligned bounding boxes can be found in "Finding Minimal Enclosing
Boxes" by O'Rourke (International Journal of Computer and Information
Sciences, Vol 14, No 3, 1985, pp. 183-199).  

  Other approaches include traditional 3-d scan conversion, which I think
should be easily convertible into an adaptive octree environment.  Or you
can grab the bull by the horns and go for raw octree encoding, approximating
the surface with lots of little sugar cubes.  Then mark any cell in your
space subdivision tree that encloses (some or all of) any of these cubes.




 _ __                 ______                         _ __
' )  )                  /                           ' )  )
 /--' __.  __  ,     --/ __  __.  _. o ____  _,      /  / _  , , , _
/  \_(_/|_/ (_/_    (_/ / (_(_/|_(__<_/ / <_(_)_    /  (_</_(_(_/_/_)_
             /                               /|
            '                               |/

			"Light Makes Right"

			 March 26, 1988



Table of Contents:
	Intro, Eric Haines
	Mailing list changes and additions: Kuchkuda, Lorig, Rekola
	More on shadow testing, efficiency, etc., Jeff Goldsmith
	More comments on tight fitting octrees for quadrics, Jeff Goldsmith
	LINEAR-TIME VOXEL WALKING FOR OCTREES, Jim Arvo
	Efficiency Tricks, Eric Haines
	A Rendering Trick and a Puzzle, Eric Haines
	PECG correction, David Rogers

---------------------------------------------------------------

Well, NCGA was pretty uninspiring, as it rapidly becomes more and more PC
oriented. It was great to see people, though, and nice to escape the Ithaca
snow and rain.

As far as ray tracing goes, a few companies announced products.  The AT&T
Pixel Machine now has two rendering packages, PICLIB and RAYLIB (these may
be merged into one package someday - I would guess that separate development
efforts caused the split [any comments, Leonard?]).  With the addition of some
sort of VM capability, this machine becomes pretty astounding in its ray
tracing performance (in case you didn't get to SIGGRAPH last year, they had
a demo of moving by mouse a shiny ball on top of the mandrill texture map:
about a frame per second ray trace on a small part of the screen).  HP
announced its new graphics accelerator, the TurboSRX, and with it the eventual
availability of a ray tracing and (the first!) radiosity package as an extension
to their Starbase graphics library.  Ardent and their Dore' package were sadly
missing.  Apollo was also noticeable for their non-appearance.  Sun TAAC was
there, showing off some ray traced pictures but seemingly not planning to
release a ray tracer (the salesman claiming that whoever bought a TAAC would
simply write their own).  Stellar was there with their supercomputer
workstation - interesting, but no ray-tracing in sight.  Anyone else note
anything of ray-tracing (or other) interest?

-------------------------------------------------------------------------

Some mailing list changes and additions

Changed address:

# Roman Kuchkuda
# Megatek
alias	roman_kuchkuda \
	hpfcrs!hpfcla!hplabs!ucbvax!ucsd!megatek!kuchkuda@rutgers.edu


New people:

I saw Gray at NCGA and got his email address.  He worked on ray tracing at
RPI and is at Cray:

# Gray Lorig - volumetric data rendering, sci-vi, and parallel & vector
#	architectures.
# Cray Research, Inc.
# 1333 Northland Drive
# Mendota Heights, MN  55120
# (612)-681-3645
alias	gray_lorig	hpfcrs!hpfcla!hplabs!gray%rhea.CRAY.COM@uc.msc.umn.edu


By way of introduction, this from Erik Jansen:

    I visited the Helsinki University of Technology last week and found 
    there a lot of ray tracing activity going on. They are even reviving 
    their old EXCELL system.  (Markku Tamminen did PhD work on a spatial 
    index based on an adaptive binary space subdivision in '81-'82; I met 
    him in '81 and '82 and we talked on those occasions about ray tracing 
    and spatial subdivision. In his PhD thesis (1982) there is a ray tracing 
    algorithm given for the EXCELL method (EXtendible CELL method).
    I decided to implement the algorithm for ray tracing polygon models.
    That implementation failed because I could only use our PDP-11 at the 
    time, which could hold only about ten cells in internal memory - too few
    for effective coherence. The program spent 95% of its time on swapping.
    So much for the history.)
    I told them about the RT News and they are very interested in receiving
    it. I will mail them (Charles Woodward, Panu Rekola, et al.) your address
    so that they can introduce themselves to the others.

#
# Panu Rekola - spline intersection, illumination models, textures
# Helsinki University of Technology
# Laboratory of Information Processing Science
# Room Y229A
# SF-02150 Espoo
# Finland
#	pre@kiuas.hut.fi	(or pre@hutcs.hut.fi)
alias	panu_rekola	hpfcrs!hpfcla!hpda!uunet!mcvax!hutcs!pre

Panu Rekola writes:

    I just received a message from Erik Jansen (Delft) in which he told
    me that you take care of a mailing list called "Ray Tracing News".
    (I already sent a message to Andrew Glassner on the same topic because
    Erik told me to contact him when he visited us some weeks ago.)
    Now, I would like to join the discussion; I promise to ask no stupid
    questions. I have previously worked here in a CAD project (where I wrote
    my MSc thesis on FEM elements) and for about a year I have been 
    responsible for our graphics. Even though my experience in the field is
    quite short, I suppose I have learned a lot: since everyone wants to see
    their models in color and with glass etc., visualization has been the
    bottleneck in our CAD projects.

    As an introduction to the critical members of the mailing list you
    could tell that I am a filter who reads non-standard input from models
    created by other people, manipulates the data with the normal graphics
    methods, and outputs images. The special features of our ray tracer are
    the EXCELL spatial directory (which has been optimized for ray tracing
    during the last few weeks), a new B-spline (and Bezier) algorithm, and
    methods to display BR models with curved surfaces (even blend surfaces,
    although this part is as yet unfinished). The system will soon be used
    semi-commercially in a couple of companies (e.g. in the car and
    tableware industries).

------------------------------------------------------------------------

More on shadow testing, efficiency, etc. (followup to Ohta/Goldsmith
correspondence):

>From Jeff Goldsmith:

    Sorry I haven't responded sooner, but movie-making has taken
    up all my time recently.  

    With respect to pre-sorting, etc.  It is important to note
    that the preprocessing time must be MUCH smaller than the 
    typical rendering time.  So far this has been true, and 
    even more so for animation.

    O(n) in my writing (explicitly in print) means linear in the
    number of objects in the scene.  Obviously, it is quite likely
    that the asymptotic time complexity (a.t.c.) of any ray tracing algorithm
    will be different for the number of rays.  Excluding ray coherence
    methods and hybrid z-buffer/ray tracing methods, the current
    a.t.c. is O(n) in the number of rays for ray tracing.  Actually, I
    think it is the same for these other methods because the hybrid
    methods just eliminate some of the rays from consideration and
    leave the rest to be just as hard and the coherence methods don't
    eliminate more than, say, 1/2 of the rays, I think.  In any event,
    for the a.t.c. to become sub-linear, there can be no such fraction,
    right?

    About space tracing: I think that I said that finding the 
    closest intersection is an O(log n) problem.  I agree, though,
    that that statement is not completely correct.  Bucket sort
    methods, for example, can reduce the a.t.c. below log n.  Also,
    global sort time (preprocessing) can distribute some of the
    computation across all rays, which can reduce the time complexity.
    What about the subdivide-on-the-fly methods (e.g. Arvo and Kirk)?
    How do they fit in the scheme of things?
    I think your evaluation of the space tracing methods is correct,
    though the space complexity becomes important here, too.  Also,
    given a "full" space (like Fujimoto's demos), the time complexity
    is smaller.  That leads to the question, "What if the time complexity
    of an algorithm depends on its input data?" Standard procedure is
    to evaluate "worst case" complexity, but we are probably interested
    in "average case" or more likely, "typical case."  Also, it would
    be worthwhile and interesting to understand which algorithms do
    better with which type of data.  We need to quantify this answer
    when trying to find good hybrid schemes.  (The next generation?)

    At SIGGRAPH '87 we had a round table and each answered the question,
    "what would you like to see happen to ray tracing in the next year."
    My choice was to see something proven about ray tracing.  It sounds
    like you are interested in that too.  Any takers?

--------------------------------------------------------------------

More comments on tight fitting octrees for quadrics
{followup to Ruud Waij's question last issue}

>From Jeff Goldsmith:

	With respect to the conversation about octree testing, I've only
    done one try at that.  I just tested 9 points against the implicit
    representation of the surface.  (8 corners and the middle.)  I didn't
    use it for ray tracing (I even forget what for) but I suspect that 
    antialiasing will hide most errors generated that way.

	Jim Blinn came up with a clever way to do edges and minima/
    maxima of quadric surfaces using (surprise) homogeneous coordinates.
    I don't think there ever was a real paper out of it, but he published
    a tutorial paper in the Siggraph '84 tutorial notes on "The Mathematics
    of Computer Graphics."  That technique works for any quadric surface
    (cylinders aren't bounded, though) under any homogeneous transform
    (including perspective!)  He also talks about how to render these
    things using his method.  I tried it; it works great and is 
    incredibly fast.  I didn't implement many of his optimizations and
    can draw a texture mapped cylinder (no end caps) that fills the
    screen (512x512) on a VAX 780 in under a minute.

	As to how this applies to ray tracing, he gives a method for
    finding the silhouette of a quadric as well as minima and maxima.
    It allows for easy use of forward differencing, so should be fast
    enough to "render" quadrics into an octree.

	Bob Conley did a volume-oriented ray tracer for his thesis.
    I don't remember the details, but there'll be a long note about
    it that I'll pass on.  He mentions that his code can do index
    of refraction varying over position.  He uses a grid technique
    similar to Fujimoto's.   

---------------------------------------------------------------

>From Jim Arvo:

    Just when you thought we had moved from octrees on to other things...
This just occurred to me yesterday. (Actually, that was several days ago.
This mail got bounced back to me twice now.  More e-mail gremlins I guess.)


LINEAR-TIME VOXEL WALKING FOR OCTREES
-------------------------------------

Here is a new way to attack the problem of "voxel walking" in octrees (at 
least I think it's new).  By voxel walking I mean identifying the successive
voxels along the path of a ray.  This is more for theoretical interest than
anything else, though the algorithm described below may actually be 
practical in some situations.  I make no claims about the practicality, 
however, and stick to theoretical time complexity for the most part.

For this discussion assume that we have recursively subdivided a cubical
volume of space into a collection of equal-sized voxels using a BSP tree
 -- i.e. each level imposes a single axis-orthogonal partitioning plane.  
The algorithm is much easier to describe using BSP trees, and from the point
of view of computational complexity, there is basically no difference 
between BSP trees and octrees.  Also, assuming that the subdivision has been
carried out to uniform depth throughout simplifies the discussion, but is by
no means a prerequisite.  Requiring uniform depth would defeat the whole
purpose, because we all know how to efficiently walk the voxels along a ray
in the context of uniform subdivision -- i.e. use a 3DDDA.

Assuming that the leaf nodes form an NxNxN array of voxels, any given ray 
will pierce at most O(N) voxels.  The actual bound is something like 3N,
but the point is that it's linear in N.  Now, suppose that we use a 
"re-traversal" technique to move from one voxel to the next along the ray.
That is, we create a point that is guaranteed to lie within the next voxel
and then traverse the hierarchy from the root node until we find the leaf
node, or voxel, containing this point.  This requires O( log N ) operations.
In real life this is quite insignificant, but here we are talking about the
actual time complexity.  Therefore, in the worst case situation of following
a ray through O( N ) voxels, the "re-traversal" scheme requires O( N log N )
operations just to do the "voxel walking."  Assuming that there is an upper
bound on the number of objects in any voxel (i.e. independent of N), this 
is also the worst case time complexity for intersecting a ray with the
environment.

In this note I propose a new "voxel walking" algorithm for octrees (call it
the "partitioning" algorithm for now) which has a worst case time complexity
of O( N ) under the conditions outlined above.  In the best case of finding
a hit "right away" (after O(1) voxels), both "re-traversal" and 
"partitioning" have a time complexity of O( log N ).  That is:

                    BEST CASE: O(1) voxels    WORST CASE: O(N) voxels
                    searched before a hit.    searched before a hit.
                  +---------------------------------------------------+
                  |                                                   |
  Re-traversal    |     O( log N )               O( N log N )         |
                  |                                                   |
  Partitioning    |     O( log N )               O( N )               |
                  |                                                   |
                  +---------------------------------------------------+

The new algorithm proceeds by recursively partitioning the ray into little
line segments which intersect the leaf voxels.  The top-down nature of the 
recursive search ensures that partition nodes are only considered ONCE PER 
RAY.  In addition, the voxels will be visited in the correct order, thereby 
retaining the O( log N ) best case behavior.

Below is a pseudo code description of the "partitioning" algorithm.  It is
the routine for intersecting a ray with an environment which has been 
subdivided using a BSP tree.  Little things like checking to make sure
the intersection is within the appropriate interval have been omitted.
The input arguments to this routine are:

    Node : A BSP tree node which comes in two flavors -- a partition node 
           or a leaf node.  A partition node defines a plane and points to
           two child nodes which further partition the "positive" and 
           "negative" half-spaces.  A leaf node points to a list of 
           candidate objects.

    P    : The ray origin.  Actually, think of this as an endpoint of a 3D 
           line segment, since we are constraining the "ray" to be of finite
           length.

    D    : A unit vector indicating the ray direction.

    len  : The length of the "ray" -- or, more appropriately, the line 
           segment.  This is measured from the origin, P, along the 
           direction vector, D.

The function "Intersect" is initially passed the root node of the BSP tree,
the origin and direction of the ray, and a length, "len", indicating the 
maximum distance to intersections which are to be considered.  This starts
out being the distance to the far end of the original bounding cube.

============================================================================

FUNCTION Intersect( Node, P, D, len ) RETURNING "results of intersection"

    IF Node is NIL THEN RETURN( "no intersection" )

    IF Node is a leaf THEN BEGIN

        intersect ray (P,D) with objects in the candidate list

        RETURN( "the closest resulting intersection" )

        END IF

    dist := signed distance along ray (P,D) to plane defined by Node

    near := child of Node in half-space which contains P

    IF 0 < dist < len THEN BEGIN  /* the interval intersects the plane */

        hit_data := Intersect( near, P, D, dist )

        IF hit_data <> "no intersection" THEN RETURN( hit_data )

        Q := P + dist * D   /* 3D coords of point of intersection */

        far := child of Node in half-space which does NOT contain P

        RETURN( Intersect( far, Q, D, len - dist ) )

        END IF

    ELSE RETURN( Intersect( near, P, D, len ) ) 

    END

============================================================================

As the BSP tree is traversed, the line segments are chopped up by the
partitioning nodes.  The "shrinking" of the line segments is critical to 
ensure that only relevant branches of the tree will be traversed.

The actual encodings of the intersection data, the partitioning planes, and
the nodes of the tree are all irrelevant to this discussion.  These are 
"constant time" details.  Granted, they become exceedingly important when
considering whether the algorithm is really practical.  Let's save this
for later.

A naive (and incorrect) proof of the claim that the time complexity of this
algorithm is O(N) would go something like this:

    The voxel walking that we perform on behalf of a single ray is really
    just a search of a binary tree with voxels at the leaves.  Since each 
    node is only processed once, and since a binary tree with k leaves has
    k - 1 internal nodes, the total number of nodes which are processed in 
    the entire operation must be of the same order as the number of leaves.
    We know that there are O( N ) leaves.  Therefore, the time complexity 
    is O( N ).

But wait!  The tree that we search is not truly binary since many of the 
internal nodes have one NIL branch.  This happens when we discover that the 
entire current line segment is on one side of a partitioning plane and we
prune off the branch on the other side.  This is essential because there
are really N**3 leaves and we need to discard branches leading to all but 
O( N ) of them.  Thus, k leaves does not imply that there are only k - 1 
internal nodes.  The question is, "Can there be more than O( k ) internal 
nodes?".

Suppose we were to pick N random voxels from the N**3 possible choices, then 
walk up the BSP tree marking all the nodes in the tree which eventually lead
to these N leaves.  Let's call this the subtree "generated" by the original
N voxels.  Clearly this is a tree and it's uniquely determined by the leaves.
A very simple argument shows that the generated subtree can have as many as 
2 * ( N - 1 ) * log N nodes.  This puts us right back where we started from,
with a time complexity of O( N log N ), even if we visit these nodes only 
once.  This makes sense, because the "re-traversal" method, which is also 
O( N log N ), treats the nodes as though they were unrelated.  That is, it 
does not take advantage of the fact that paths leading to neighboring 
voxels are likely to be almost identical, diverging only very near the 
leaves.  Therefore, if the "partitioning" scheme really does visit only 
O( N ) nodes, it does so because the voxels along a ray are far from random.
It must implicitly take advantage of the fact that the voxels are much more
likely to be brothers than distant cousins.

This is in fact the case.  To prove it I found that all I needed to assume 
about the voxels was connectedness -- provided I made some assumptions
about the "niceness" of the BSP tree.  To give a careful proof of this is
very tedious, so I'll just outline the strategy (which I *think* is 
correct).  But first let's define a couple of convenient terms:

1)  Two voxels are "connected" (actually "26-connected") if they meet at a 
    face, an edge, or a corner.  We will say that a collection of voxels is
    connected if there is a path of connected voxels between any two of them.

2)  A "regular" BSP tree is one in which each axis-orthogonal partition 
    divides the parent volume in half, and the partitions cycle: X, Y, Z, X,
    Y, Z, etc. (Actually, we can weaken both of these requirements 
    considerably and still make the proof work.  If we're dealing with 
    "standard" octrees, the regularity is automatic.)

Here is a sequence of little theorems which leads to the main result:

    THEOREM 1:  A ray pierces O(N) voxels.

    THEOREM 2:  The voxels pierced by a ray form a connected set.

    THEOREM 3:  Given a collection of voxels defined by a "regular" BSP 
                tree, any connected subset of K voxels generates a unique 
                subtree with O( K ) nodes.

    THEOREM 4:  The "partitioning" algorithm visits exactly the nodes of 
                the subtree generated by the voxels pierced by a ray.  
                Furthermore, each of these nodes is visited exactly once 
                per ray.

    THEOREM 5:  The "partitioning" algorithm has a worst case complexity 
                of O( N ) for walking the voxels pierced by a ray.

Theorems 1 and 2 are trivial.  With the exception of the "uniqueness" part, 
theorem 3 is a little tricky to prove.  I found that if I completely removed
either of the "regularity" properties of the BSP tree (as opposed to just
weakening them), I could construct a counterexample.  I think that 
theorem 3 is true as stated, but I don't like my "proof" yet.  I'm looking 
for an easy and intuitive proof.  Theorem 4 is not hard to prove at all.  
All the facts become fairly clear if you see what the algorithm is doing.  
Finally, theorem 5, the main result, follows immediately from theorems 1 
through 4.


SOME PRACTICAL MATTERS:

Since log N is typically going to be very small -- bounded by 10, say -- 
this whole discussion may be purely academic.  However, just for the heck 
of it, I'll mention some things which could make this a (maybe) 
competitive algorithm for real-life situations (in as much as ray tracing 
can ever be considered to be "real life").

First of all, it would probably be advisable to avoid recursive procedure
calls in the "inner loop" of a voxel walker.  This means maintaining an 
explicit stack.  At the very least one should "longjmp" out of the 
recursion once an intersection is found.

The calculation of "dist" is very simple for axis-orthogonal planes, 
consisting of a subtract and a multiply (assuming that the reciprocals of 
the direction components are computed once up front, before the recursion 
begins).

A nice thing which falls out for free is that arbitrary partitioning 
planes can be used if desired.  The only penalty is a more costly distance
calculation.  The rest of the algorithm works without modification.  There 
may be some situations in which this extra cost is justified.

Sigh.  This turned out to be much longer than I had planned...

>>>>>> A followup message:

Here is a slightly improved version of the algorithm in my previous mail.
It turns out that you never need to explicitly compute the points of 
intersection with the partitioning planes.  This makes it a little more
attractive. 

                                                                 -- Jim


FUNCTION BSP_Intersect( Ray, Node, min, max ) RETURNING "intersection results"

BEGIN

    IF Node is NIL THEN RETURN( "no intersection" )

    IF Node is a leaf THEN BEGIN  /* Do the real intersection checking */

        intersect Ray with each object in the candidate 
        list discarding those farther away than "max." 

        RETURN( "the closest resulting intersection" )

        END IF

    dist := signed distance along Ray to plane defined by Node

    near := child of Node for half-space containing the origin of Ray

    far  := the "other" child of Node -- i.e. not equal to near.

    IF dist > max OR dist < 0 THEN  /* Whole interval is on near side. */

        RETURN( BSP_Intersect( Ray, near, min, max ) )

    ELSE IF dist < min THEN  /* Whole interval is on far side. */

        RETURN( BSP_Intersect( Ray, far , min, max ) )

    ELSE BEGIN  /* the interval intersects the plane */

        hit_data := BSP_Intersect( Ray, near, min, dist ) /* Test near side */

        IF hit_data indicates that there was a hit THEN RETURN( hit_data )

        RETURN( BSP_Intersect( Ray, far, dist, max ) )  /* Test far side. */

        END IF

    END
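
For the curious, here is a direct transcription of the above pseudo code
into Python.  The Leaf/Partition classes are my own minimal stand-ins (not
from Jim's code), and "objects" are just callables returning a hit distance
or None:

```python
class Leaf:
    def __init__(self, objects):
        self.objects = objects    # list of callables: (origin, dir) -> t or None

class Partition:
    def __init__(self, axis, position, left, right):
        self.axis = axis          # 0, 1, or 2
        self.position = position  # coordinate of the axis-orthogonal plane
        self.left = left          # child on the low side of the plane
        self.right = right        # child on the high side of the plane

def bsp_intersect(node, origin, direction, tmin, tmax):
    if node is None:
        return None
    if isinstance(node, Leaf):
        # Real intersection checking, discarding hits outside [tmin, tmax].
        hits = [t for obj in node.objects
                  for t in [obj(origin, direction)]
                  if t is not None and tmin <= t <= tmax]
        return min(hits) if hits else None
    # Signed distance along the ray to the partitioning plane: for an
    # axis-orthogonal plane, one subtract and one divide (or a multiply
    # by a reciprocal precomputed once per ray).
    d = direction[node.axis]
    dist = (node.position - origin[node.axis]) / d if d != 0.0 else float('inf')
    if origin[node.axis] <= node.position:
        near, far = node.left, node.right
    else:
        near, far = node.right, node.left
    if dist > tmax or dist < 0:    # whole interval is on the near side
        return bsp_intersect(near, origin, direction, tmin, tmax)
    if dist < tmin:                # whole interval is on the far side
        return bsp_intersect(far, origin, direction, tmin, tmax)
    # The interval straddles the plane: test near side first.
    hit = bsp_intersect(near, origin, direction, tmin, dist)
    if hit is not None:
        return hit
    return bsp_intersect(far, origin, direction, dist, tmax)
```

Note how a "false" hit beyond the plane is rejected by the [tmin, tmax]
clamp in the leaf and then correctly rediscovered from the far child.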


------------------------------------------------------------------------

Some people turn out to be on the e-mail mailing list but not the hardcopy
list for the RT News.  In case you don't get the RT News in hardcopy form, I'm
including the Efficiency Tricks article & the puzzle from it in this issue.


Efficiency Tricks, by Eric Haines
---------------------------------

Given a ray-tracer which has some basic efficiency scheme in use, how can we
make it faster?  Some of my tricks are below - what are yours?

[HBV stands for Hierarchical Bounding Volumes]

Speed-up #1:  [HBV and probably Octree] Keep track of the closest intersection
distance.  Whenever a primitive (i.e. something that exists - not a bounding
volume) is hit, keep its distance as the maximum distance to search.  During
further intersection testing use this distance to cut short the intersection
calculations.

Speed-up #2:  [HBV and possibly Octree] When building the ray tree, keep the
ray-tree around which was previously built.  For each ray-tree node, intersect
the object in the old ray tree, then proceed to intersect the new ray tree.
By intersecting the old object first you can usually obtain a maximum distance
immediately, which can then be used to aid Speed-up #1.

Speed-up #3:  When shadow testing, keep the opaque object (if any) which
shadowed each light for each ray-tree node.  Try these objects immediately
during the next shadow testing at that ray-tree node.  Odds are that whatever
shadowed your last intersection point will shadow again.  If the object is hit
you can immediately stop testing because the light is not seen.

Speed-up #4:  When shadow testing, save transparent objects for later
intersection.  Only if no opaque object is hit should the transparent objects
be tested.

Speed-up #5:  Don't calculate the normal for each intersection.  Get the
normal only after all intersection calculations are done and the closest object
for each node is known: after all, each ray can have only one intersection point
and one normal.  (Saving intermediate results is recommended for some
intersection calculations.)

Speed-up #6:  [HBV only] When shooting rays from a surface (e.g. reflection,
refraction, or shadow rays), get the initial list of objects to intersect
from the bounding volume hierarchy.  For example, a ray beginning on a sphere
must hit the sphere's bounding volume, so include all other objects in this
bounding volume in the immediate test list.  The bounding volume which
is the father of the sphere's bounding volume must also automatically be hit,
and its other sons should automatically be added to the test list, and so on
up the object tree.  Note also that this list can be calculated once for any
object, and so could be created and kept around under a least-recently-used
storage scheme.
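
Speed-up #3 might be sketched like this (the cache keying and the little
Wall test object are hypothetical, just to show the shape of the idea):

```python
class Wall:
    # A stand-in occluder: blocks() would really be a ray-object test.
    def __init__(self, blocking):
        self.blocking = blocking
    def blocks(self, shadow_ray):
        return self.blocking

class ShadowCache:
    """Per-(ray-tree-node, light) cache of the last opaque occluder.
    Trying it first often proves the point is in shadow with one test."""
    def __init__(self):
        self.last_blocker = {}   # (node_id, light_id) -> object

    def in_shadow(self, node_id, light_id, shadow_ray, occluders):
        cached = self.last_blocker.get((node_id, light_id))
        if cached is not None and cached.blocks(shadow_ray):
            return True          # the previous blocker still shadows us
        for obj in occluders:
            if obj is not cached and obj.blocks(shadow_ray):
                self.last_blocker[(node_id, light_id)] = obj
                return True
        self.last_blocker.pop((node_id, light_id), None)
        return False
```

Since shadow rays only need *any* opaque hit, not the closest one, the
cached object can be tested first and the search stopped immediately.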

------------------------------------------

A Rendering Trick and a Puzzle, by Eric Haines
----------------------------------------------

One common trick is to put a light at the eye to do better ambient lighting.
Normally if a surface is lit by only ambient light, its shading is pretty
crummy.  For example, a non-reflective cube totally in shadow will have all of
its faces shaded the exact same shade - very unrealistic.  The light at the eye
gives the cube definition.  Note that a light at the eye does not need shadow
testing - wherever the eye can see, the light can see, and vice versa.

The puzzle:  Actually, I lied.  This technique can cause a subtle error.  Do you
know what shading error the above technique would cause? [hint: assume the Hall
model is used for shading].

---------------------------------------------------------------------------

USENET roundup:

Other than a hilarious set of messages begun when Paul Heckbert's Jell-O (TM)
article was posted to USENET, and the perennial question "How do I find if a
point is inside a polygon?", not much of interest.  However, I did get a copy
of the errata in _Procedural Elements for Computer Graphics_ from David Rogers.

I updated my edition (the Second) with these corrections, which was generally
a time drain: my advice is to keep the errata sheets in this edition, checking
them only if you are planning to use an algorithm.  However, the third edition
corrections are mercifully short.


>From: "David F. Rogers" <rochester!harvard!USNA.MIL!dfr@cornell.UUCP>
>From:     David F. Rogers  <dfr@USNA.MIL>
Subject:  PECG correction
Date:     Thu, 10 Mar 88 13:21:11 EST

Correction list for PECG  2/26/86
David F. Rogers

There have been 3 printings of this book to date.
The 3rd printing occurred in approximately March 85.

To see if you have the 3rd printing look on page 386,
3rd line down and see if the word magenta is spelled
correctly.  If it is, you have the 3rd printing. If not, then
you have the 2nd or 1st printing.

To see if you have the 2nd printing look on page 90.  If
the 15th printed line in the algorithm is

  while Pixel(x,y) <> Boundary value

you have the 2nd printing.  If not you have the 1st printing.

Please send any additional corrections to me at

Professor David F. Rogers
Aerospace Engineering Department
United States Naval Academy
Annapolis, Maryland 21402

uucp:decvax!brl-bmd!usna!dfr
arpa:dfr@usna
_____________________________________________________________

Known corrections to the third printing:

Page	Para./Eq.	Line	Was		Should be

72	2		11	(5,5)		(5,1)
82	1 example	4	(8,5)		delete
100	5th equation		upper limit on integral should be 2
				vice 1
143	Fig. 3-14		yes branch of t < 0 and t > 1 decision blocks
				should extend down to Exit-line invisible
144	Cyrus-Beck
	algorithm	7	then 3		then 4
			11	then 3		then 4

145	Table 3-7	1	value for w
				[2 1]		[-2 1]

147	1st eq. 	23	V sub e sub x j V sub e sub y j
______________________________________________________________

Known corrections to the second printing:  (above plus)

text:

19	2		5	Britian 	Britain
36	Eq. 3		10	replace 2nd + with =
47	4		6	delta' > 0      delta'< = 0
82	1		6	set		complement
99	1		6	multipled	multiplied
100	1		6	Fig. 2-50a	Fig. 2-57a
100	1		8	Fig. 2-50b	Fig. 2-57b
122	write for new page
186	2		6	Fig. 3-37a	Fig. 3-38a
186	2		9	Fig. 3-38	Fig. 3-38b
187	Ref. 3-5		to appear	Vol. 3, pp. 1-23, 1984
194	Eq. 1			xn +		xn -
224	14 lines from bottom	t = 1/4 	t = 3/4
329	last eq.		-0.04		-0.13
	next to last eq.	-0.04 twice	-0.13 twice
	3rd from bottom 	0.14		-0.14
330	1st eq. 		-0.09		-0.14
	2nd eq. 		-0.09		-0.14
	3rd eq. 		-0.17		-0.27
	4th eq. 		0.36		0.30
				5.25		4.65
	last eq.		5.25		4.65
332			4	beta <		beta >
			6	beta <		beta >
355	2nd eq. 		w = s(u,w)	w = s(theta,phi)
385	2		5	magneta 	magenta
386			3	magneta 	magenta

algorithms: (send self-addressed long stamped envelope for xeroxed
	     corrections)

97	Bresenham	1	insert words  first quadrant  after modified
			10	remove ()
			12	1/2		I/2
			14	delta x 	x sub 2

117	Explicit	18	Icount = 0	delete
	clipping
			18	insert		m = Large
120			9	P'2             P'1
			12	insert after	Icount = 0
				end if
			13	insert after	1 if Icount <> 0 then
				neither end	   P' = P0
			14			removed statement label 1
			15	>=		>
			17			delete
			18			delete
			43	y>		yT>

122-124 Sutherland-	write for new pages
	Cohen

128	midpoint	4	insert after	initialize i
						i = 1
129			6	i = 1		delete
			6	insert		save original p1
						Temp = P1
			8	i = 2		i > 2
			11,12	save original.. delete
				Temp = P1
			14	add statement label 2
130			19-22	delete
			24	i = 2		i = i + 1
			29	<>		<> 0
			33	P1		P

143			3	wdotn		Wdotn
144			20	>=		>

176	Sutherland-	1	then 5		then 4
	Hodgman
177			9	4 x 4		2 x 2

198	floating	21,22	x,y		Xprev,Yprev
	horizon
199			4	Lower		Upper
200			11-19	rewrite as
				if y < Upper(x) and y > Lower(x) then Cflag = 0
				if y> = Upper(x) then Cflag = 1
				if y< = Lower(x) then Cflag = -1
			29	delete
			31	Xinc		(x2-x1)
			36	step Xinc	step 1
201			4	delete
			6	Xinc = 0       (x2-x1) = 0
			12	Y1 -		Y1 + Slope -
			12	insert after	Csign = Ysign
			13	Yi = Y1 	Yi = Y1 + Slope
			13	insert after	Xi = X1 + 1
			14-end	rewrite as	while(Csign = Ysign)
						   Yi = Yi + Slope
						    Xi = Xi + 1
						    Csign = Sign(Yi - Array(Xi))
						end while
						select nearest integer value
						if |Yi -Slope -Array(Xi - 1)| <=
						  |Yi - Array(Xi)| then
						    Yi = Yi - Slope
						    Xi = Xi -1
						end if
					     end if
					 return

258	subroutine Compute	N		i

402	HSV to Rgb	12	insert after	end if
			25	end if		delete

404	HLS to RGB	2	M1 = L*(1 - S)	M2 = L*(1 + S)
			4	M1		M2
			6	M2 = 2*L - M1	M1 = 2*L - M2
			10-12	=1		=L
			18	H		H + 120
			19	Value + 120	Value
			22	H		H - 120
			23	Value - 120	Value

405	RGB to HLS	22	M1 + M2 	M1 - M2

figures:

77	Fig. 2-39a		interchange Edge labels for scanlines 5 & 6
	Fig. 2-39b		interchange information for lists 1 & 3, 2 & 4

96	Fig. 2-57a,b		y sub i + 1	y sub(i+1)

99	Fig. 2-59		abscissa of lowest plot should be xi vice x

118	Fig. 3-4		first initialization block - add m = Large
				add F entry point just above IFLAG = -1
				decision block
119				to both IFLAG=-1 blocks add exit points to F

125	Fig. 3-5		line f - interchange Pm1 & Pm2

128	Fig. 3-6a		add initialization block immediately after Start
				initialize i, i=1

				immediately below new initialization block add
				entry point C

				in Look for the farthest visible point from P1
				block - delete i=1

				in decision block i = 2 - change to i > 2

129	Fig. 3-6b		move return to below Save P1 , T = P1 block

				remove Switch end point codes block

				in Reset counter block replace i=2 with i=i + 1

180	Fig. 3-34b		Reverse direction of arrows of box surrounding
				word Start.

330	Fig. 5-16a		add P where rays meet surface

374	Fig. 5-42		delete unlabelled third exit from decision
				box r ray?

377	Fig. 5-44		in lowest box I=I+I sub(l (sub j)) replace
				S with S sub(j)
_________________________________________________________________________

Known corrections to the first printing:

90,91	scan line seed		write for xeroxed corrections
	fill algorithm
________________________________________________________________________
END OF RTNEWS
  

From saponara@tcgould.TN.CORNELL.EDU Thu Sep  8 15:56:08 1988
Return-Path: <saponara@tcgould.TN.CORNELL.EDU>
Received: from tcgould.TN.CORNELL.EDU by turing.cs.rpi.edu (4.0/1.2-RPI-CS-Dept)
	id AA05342; Thu, 8 Sep 88 15:55:59 EDT
Date: Thu, 8 Sep 88 15:54:47 EDT
From: saponara@tcgould.tn.cornell.edu (John Saponara)
Received: by tcgould.TN.CORNELL.EDU (5.54/1.2-Cornell-Theory-Center)
	id AA25920; Thu, 8 Sep 88 15:54:47 EDT
Message-Id: <8809081954.AA25920@tcgould.TN.CORNELL.EDU>
To: kyriazis@turing.cs.rpi.edu
Subject: part 3 of 3
Status: RO


 _ __                 ______                         _ __
' )  )                  /                           ' )  )
 /--' __.  __  ,     --/ __  __.  _. o ____  _,      /  / _  , , , _
/  \_(_/|_/ (_/_    (_/ / (_(_/|_(__<_/ / <_(_)_    /  (_</_(_(_/_/_)_
             /                               /|
            '                               |/

			"Light Makes Right"

			 April 6, 1988

Compiled by Eric Haines, 3D/Eye Inc, ...!hplabs!hpfcla!hpfcrs!eye!erich
All contents are US copyright (c) 1988 by the individual authors


New Distributor Address
-----------------------

I have been trying to switch over to something faster, cheaper, and more
reliable than uucp mail.  As you (hopefully) know, I have sent you test
messages, to which about 2/3rds of the mailing list has replied.  If you receive
this issue of the RT News with a return address of "...!eye!erich", then I have
NOT received your reply.  Write to:

	saponara@tcgould.tn.cornell.edu

Those who have successfully replied will get it with the "saponara" return
address.  Please consider "...!eye!erich" to still be my mailing address - the
other account is "on loan" and may disappear someday.  By the way, the new
mailing list is attached at the end of this issue.

-- Eric

-----

Contents:
    RT News, hardcopy form (Andrew Glassner)
    node changes (Paul Heckbert & Brian Barsky, Darwyn Peachey)
    new members (Cary Scofield, Michael Cohen, John Chapman)
    question for the day (Rod Bogart)
    Re: Linear-time voxel walking for BSP (Erik Jansen)
    some thoughts on the theory of RT efficiency (Jim Arvo)
    Automatic Creation of Object Hierarchies for Ray Tracing (Eric Haines)
    Best of USENET (Tait Cyrus, Todd Elvins)

-----------------------------------------------------------------------------

from: Andrew Glassner
subject: RT News, hardcopy form

I was surprised that there are still some folks on the softcopy list
not on the hardcopy list.  On the next mailing, please ask anyone on
the electronic list who isn't getting the hardcopy (but wants to) to
drop me a line, electronically (glassner@cs.unc.edu) or physically
at the Department of Computer Science, UNC-CH, Chapel Hill, NC 27599-3175.

-Andrew

-----------------------------------------------------------------------------

subject: node changes

[the gist:  change the net addresses of Brian Barsky, Paul Heckbert, Don Marsh,
and Michael Hohmeyer's "degas" node to "miro".  Change Darwyn Peachey to Pixar]


from: Paul Heckbert

My net address is changing from ph@degas.berkeley.edu to ph@miro.berkeley.edu
(we're switching from an impressionist to a more modern artist)

Although the machine "degas" is being phased out, mail to my old address
should continue to reach me, but the new address is faster and more reliable.


from: Brian Barsky

Degas is going away.  Mail should get to me as "barsky@berkeley.edu" on
the ARPAnet and as "...ucbvax!barsky" on Usenet.  My machine is "beta",
but that shouldn't be necessary in the address.


from: Darwyn Peachey

Change of address:

    pixar!peachey@ucbvax.Berkeley.EDU
	    - or -
    {sun,ucbvax}!pixar!peachey

-----------------------------------------------------------------------------

subject: new people

from: Cary Scofield

    Please add my name to the Ray Tracing News mailing list. You can
use the same address(es) you use for Jim Arvo or Olin Lathrop or the
one I give you below.  I really enjoy reading RTN -- very enlightening
stuff!

                                    Thanks, 

                                        Cary Scofield
                                        Apollo Computer Inc.
                                        270 Billerica Rd.
                                        Chelmsford MA 01824

                                        apollo!scofield@eddie.mit.edu

> Could you please write up a paragraph about yourself which I
> could include next issue as an introduction?

I don't know that there is much about myself worth talking about, but here
goes ...

I've been working off-and-on with 3D graphics applications and systems
for about 9 years ... started at Apollo 4 1/2 years ago ... been
working on  Apollo's 3dGMR and Raytracer packages ... presently in
part-time pursuit of an MS in Systems Engineering at Boston Univ. ...
my prevailing interests nowadays are in finding and developing a
suitable software architecture for an image synthesis system that is
network-distributable and user-extensible.

                                        - Cary Scofield

-------

from: John Chapman

[John is presently doing grad. work in computer graphics]

Yes I'd like to be on the list - getting the back issues would be great,
thanks!  The most stable email address for me is:
 ...!ubc-cs!fornax!sfu_cmpt!chapman
although this may change shortly, nothing sent to it will be lost.
Fastest response address for short notes is still ...fornax!bby-bc!john


--------

from: Eric Haines

Michael Cohen has also been added to the list.  He should be well known to
anyone who has read anything about radiosity, and has also done work in
ray tracing (his latest efforts with John Wallace resulting in an algorithm
for combining radiosity and ray tracing [Siggraph '87, last article]).  In
1983 we both entered the Program of Computer Graphics at Cornell, and after
graduation he stayed on as a professor.  Embarrassingly enough, I thought
that he was on the RT News mailing list (most of the PCG staff are) - anyway,
he's been added:

	cohen@squid.tn.cornell.edu

-----------------------------------------------------------------------------

subject: question for the day
from: Rod Bogart

I actually do have a ray tracing question.  There has been mucho
noise on the net about "point in poly".  Around here, we raytrace
triangles.  So, what is the fastest reliable way to intersect a
ray with a triangle and also get back barycentric coordinates?

We subdivide our B-spline patches into a hierarchy of bounding
volumes with triangles at the leaves.  We preprocess the triangles
by calculating a matrix to aid the intersection process.  The problem
is, the matrix must be inverted, and doesn't always cooperate (this
usually fails for triangles nearly aligned with a coordinate plane).
Is there a favorite way of storing a triangle (instead of our failing
matrix) so that the intersection returns barycentric coordinates
(r,s,t) all 0 to 1?

You don't need to bother the rest of the RT gang if you have a solution
of which you are confident.

Thanks,
RGB
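[One way to sidestep the precomputed (and occasionally singular) matrix is to
solve the small ray/edge linear system with cross products at intersection
time; the determinant test catches exactly the degenerate triangles that break
the inversion.  A sketch in Python -- the function name and conventions are
mine, with u and v the barycentric weights of the second and third vertices
(the first vertex gets 1-u-v):]

```python
def ray_triangle(orig, direc, v0, v1, v2, eps=1e-9):
    """Intersect a ray (orig + t*direc) with triangle (v0, v1, v2).
    Returns (t, u, v) with 0 <= u, v and u + v <= 1, or None on a miss."""
    def sub(a, b):   return (a[0]-b[0], a[1]-b[1], a[2]-b[2])
    def cross(a, b): return (a[1]*b[2]-a[2]*b[1],
                             a[2]*b[0]-a[0]*b[2],
                             a[0]*b[1]-a[1]*b[0])
    def dot(a, b):   return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]

    e1, e2 = sub(v1, v0), sub(v2, v0)
    p = cross(direc, e2)
    det = dot(e1, p)
    if abs(det) < eps:            # ray (nearly) parallel to triangle plane
        return None
    inv = 1.0 / det
    s = sub(orig, v0)
    u = dot(s, p) * inv           # barycentric weight of v1
    if u < 0.0 or u > 1.0:
        return None
    q = cross(s, e1)
    v = dot(direc, q) * inv       # barycentric weight of v2
    if v < 0.0 or u + v > 1.0:
        return None
    t = dot(e2, q) * inv          # ray parameter of the hit point
    return (t, u, v)
```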

-----------------------------------------------------------------------------

>From: erik jansen
Subject: Re: Linear-time voxel walking for BSP

I have some experience with the 'recursive' ray traversal (as I called it)
described by Jim Arvo.  The first implementation dates from fall '83 (as
mentioned in the previous RT news).  I presented the idea during the
workshop 'Data structures for raster graphics' (June 1985), where I compared
it with the 'sequential' traversal and suggested that both methods can also
be applied to a hierarchical box structure.  See:
 
  F.W. Jansen, 'Data structures for ray tracing', 
  In: 'Data structures for raster graphics', Proceedings workshop,
  Kessener, L.R.A, Peters, F.J., Lierop, M.L.P. van, eds., 57-73,
  Eurographics Seminars, Springer Verlag, 1986.

In my thesis, 'Solid modeling with faceted primitives' (Sept. 1987), it is
discussed on pages 63-66.
I agree that these sources are rather obscure and not well known to people
outside (let us say) the Netherlands.  Here follows a summary of those pages
of my thesis:

The (sequential) ray tracing algorithm for the (BSP) cell structure
proceeds by local access. First the intersection of the ray with the total 
structure is calculated, then the cells are stepwise traversed by calculating 
the intersection of the ray with the cell planes and determining which cell 
is traversed next. The spatial index (directory) is used to access the data
in the cell. (..).
The index in the directory is found by an address computation on the 
coordinates of a query point (for all points within a cell, the address 
computation function gives the same index value) (This is the EXCELL 
directory method see Tamminen Siggraph'83). A similar method is described 
in [Fujimoto et al., 1986] for traversing an octree space subdivision: a 
neighbour finding technique on the basis of octree indexes. Both methods 
are better than the method described in [Glassner, 1984], which classifies 
each point by descending the complete octree. 
(..)
Another method is feasible: a tree traversal method for the cell structure.
This method uses a cell structure that is created by a binary subdivision.
This recursive ray traversal method intersects the cell structure and 
recursively subdivides the ray and proceeds with the partitioning (collection 
of cells) first intersected. If the ray interval only intersects a single 
cell then the data in that cell is tested for intersection with the ray.
(..)
Both algorithms have been implemented, but no clear differences in
performance could be found: the average number of cells traversed for
each ray is not much more than two or three, which favors the
sequential traversal, and it is not large enough to justify the initial
cost of the recursive ray traversal. (..).
(..)
The total processing time for the ray tracing breaks down into the following
components: cell traversal, intersection and shading.
Their contributions are depicted in fig.(..), which shows that the intersection
is still the major time-consuming activity.
The constant time performance is confirmed by the results of fig. (..).
Unfortunately, the constant time is constantly very high. (sigh)

So far my thesis and exactly what was predicted by Jim. In some test examples
(for instance a simple set up with five elementary volumes (a few thousand
polygons)) the average number of cells traversed was about 5.

The original idea was not to eliminate the number of internal nodes that
are traversed (because I use the EXCELL indexing method) but to avoid the
cell plane intersections.  First I calculate the intersection of the ray with
the six bounding planes of the world box.  I put xmin, xmax, ymin, ymax, zmin
and zmax on a stack and halve the ray segment each time for x, y and z
alternately.  Further, while subdividing the ray I also halve the directory
index range of the EXCELL directory (this requires a different filling of the
directory than in the original method), and when the range contains only
one index the leaf level has been reached and the ray intersection
with the cell data can be done.  (If this is unclear I can give a more
elaborate explanation next time.)
It will be clear that my method cannot use arbitrary partitioning
planes.
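[The recursive idea can be sketched as follows (in Python, with names of my
own invention; binary axis-aligned splits only, matching the restriction
just mentioned): subdivide the ray's parameter interval at each splitting
plane and recurse into the nearer half first. - ed.]

```python
# A sketch of recursive ray traversal of a binary space subdivision; the
# node layout is illustrative, not Jansen's actual EXCELL implementation.
class Node:
    def __init__(self, axis=None, split=None, low=None, high=None, objects=None):
        self.axis, self.split = axis, split   # internal node: splitting plane
        self.low, self.high = low, high       # children on each side of the plane
        self.objects = objects                # leaf cell: its object list

def traverse(node, orig, direc, t0, t1, hit_test):
    """Visit leaf cells along the ray interval [t0, t1] in near-to-far
    order; return the first hit reported by hit_test, or None."""
    if node.objects is not None:              # leaf: test the cell's contents
        return hit_test(node.objects, t0, t1)
    d = direc[node.axis]
    if abs(d) < 1e-12:                        # ray parallel to the plane
        child = node.low if orig[node.axis] <= node.split else node.high
        return traverse(child, orig, direc, t0, t1, hit_test)
    t = (node.split - orig[node.axis]) / d    # ray parameter at the plane
    near, far = ((node.low, node.high) if orig[node.axis] <= node.split
                 else (node.high, node.low))
    if t >= t1:                               # interval lies in the near child
        return traverse(near, orig, direc, t0, t1, hit_test)
    if t <= t0:                               # interval lies in the far child
        return traverse(far, orig, direc, t0, t1, hit_test)
    hit = traverse(near, orig, direc, t0, t, hit_test)  # halve the ray segment
    if hit is not None:
        return hit
    return traverse(far, orig, direc, t, t1, hit_test)
```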

An important point is the actual coding. My implementation was in Pascal
and I can imagine much better programs than I can write. 

-----------------------------------------------------------------------------

from: Jim Arvo
subject: some thoughts on the theory of RT efficiency

[Jim is writing the course notes on efficiency for the "Introduction to
Ray Tracing" course at this year's Siggraph.  He sent this on to me
and mentioned I could pass them on to all.  Enjoy! - EAH]

    Yes, I did receive the latest RT news with my octree stuff in it.  By
    the way, it occurred to me that you mentioned something about walking up 
    an octree to find the nearest common ancestor between a voxel and its 
    neighbor.  Olin and I have been discussing this, and I'm quite sure that
    it's equivalent to the algorithm I described (but it's NOT immediately
    obvious).  The important fact is that neighboring voxels in an octree
    have a common ancestor which is two steps up (on average) regardless of
    the depth of the octree.  This is what gets rid of the log N factor.

    With respect to "A Characterization of Ten Ray Tracing Efficiency 
    Algorithms", I'm hoping that my section of the course notes will
    take a good first step toward filling this need.  There's obviously an
    infinite amount of work to be done here, especially considering that
    ray tracing is on very shaky theoretical grounds as it stands.
    Not that we have any doubts that it works!  We just can't say 
    anything very convincing about how good our optimizations are at 
    this point.  We clearly have to do some of the things you suggested
    in your last message, or possibly go back to square one and try to
    create a formal "ray tracing calculus" which we can actually prove
    things about.  And, of course, I agree totally that we need to get a 
    good handle on the strengths of the various techniques in order for 
    something like annealing to have any chance at all.

    By the way, while writing the course notes I realized that there is a 
    nice connection between the light buffer, the "ray coherence theorem" 
    [Ohta/Maekawa 87], and ray classification.  If you start with a 
    "direction cube" (like the hemicube of radiosity) and cross it with 
    (i.e. form the Cartesian product with) different "sets", you get the 
    central abstractions which these algorithms are based upon.  Like so:


    light buffer  -----  ray coherence theorem  -----  ray classification

     directions              directions                    directions
         x                        x                            x
       points                 "objects"                     "space"  


    This is a very simple observation, but I think it makes the different
    uses of direction easier to visualize, and certainly easier to explain.
    All three use sorted object lists associated (directly or indirectly) 
    with "cells" of a subdivided direction cube.  I think these three 
    algorithms form a nice well-rounded section on "directional techniques."

    Well, back to the salt mine...

                                                                   -- Jim

--------more

    I want to get your opinion concerning the correspondence between 
    the light buffer, the "ray coherence" algorithm, and ray classification
    which I mentioned previously.  In particular, I have the following
    chart which I'd like to include in my section of the course notes and,
    since the light buffer is your baby, I'd like you to tell me

        1) If I've represented it fairly

        2) If there are other criteria which you think might 
           be helpful to add to the chart

    Here is the chart (which is actually just a figure in the context of
    a more detailed discussion):


                    Light Buffer     Ray Coherence      Ray Classification
                    ------------     -------------      ------------------

    Directions      Points            Objects           Space
    crossed with    (representing     (including        (bounding the
    ------------    light sources)    light sources)    environment)


    Applied to      shadowing rays    all rays          all rays
    ----------

    Type of         polygonal         unrestricted      unrestricted
    environment   (can be extended)
    -----------

    When data
    structure       preprocessing     preprocessing     lazily during
    is built                                            ray tracing
    ---------

    Direction       uniform           uniform           adaptive
    subdivision                                         via quadtree
    -----------

    Construction    modified          "coherence        "object class-
    of item list    polygon ras-      theorem" applied  ification" using
    ------------    terization        to all pairs of   hierarchy of
                                      objects           beams

    Parameters      number of         number of         max tree depth,
    ----------      direction         direction         max item list size,
                    cells             cells             truncation size, etc.

    Item list       constant time     constant time     Traversal of 2D or
    Lookup                                              5D hierarchy and   
    ---------                                           caching.


    Also, have you given any thought to adaptive subdivision or lazy
    evaluation?  Are there other interesting extensions that you'd care
    to mention?

    [I still haven't replied - soon! (tonight, if I can help it)]

-----------------------------------------------------------------------------

from: Eric Haines

Automatic Creation of Object Hierarchies for Ray Tracing
--------------------------------------------------------

This document is an explanation of the ideas presented by Goldsmith and Salmon
in their May '87 paper in IEEE CG&A.  Some changes to their algorithm have been
made for additional efficiency during ray tracing (though at additional cost
during preprocessing).

The problem is that you are given a list of objects and wish to create a near
optimal hierarchy of bounding volumes to contain these objects.  The basic idea
is to create the tree as we go, adding each node at whatever location seems
best.  To avoid searching through the whole tree, we work from the top
down and figure out which child branch is most worth looking at.


The Algorithm
-------------

As G&S note, the form of the tree created is order dependent.  One good
method of ensuring a better tree is to shuffle the order of the list.  This can
be done by:

	given a list of objects OBJ[1..N],
	for I = 1 to N {
	    R = random integer from I to N
	    swap OBJ[I] and OBJ[R]
	}
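In Python the shuffle looks like this (drawing the swap index from the
current position onward -- the Fisher-Yates form, which makes every
permutation equally likely):

```python
import random

def shuffle_objects(obj):
    """Shuffle the object list in place by swapping each element with a
    randomly chosen element at or after its own position."""
    n = len(obj)
    for i in range(n - 1):
        r = random.randint(i, n - 1)      # random index from i to n-1
        obj[i], obj[r] = obj[r], obj[i]
    return obj

objects = shuffle_objects(list(range(10)))   # some permutation of 0..9
```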

Given the list of N objects, form bounding boxes for each and calculate the
area.  With X, Y and Z the edge lengths of a box, the area of the bounding
box enclosing an object is proportional to:

	Probability of hit == P = X*(Y+Z)+Y*Z

and this is all we need to calculate, since we're using these areas relative
to one another.
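Since only the ratios between boxes matter, half the surface area serves as
the cost measure; a one-line sketch (the function name is mine):

```python
def relative_area(x, y, z):
    """Half the surface area of a box with edge lengths x, y, z:
    (2xy + 2xz + 2yz) / 2 = x*(y+z) + y*z.  Proportional to the chance
    that a random ray passing through the scene also hits this box."""
    return x * (y + z) + y * z

print(relative_area(1.0, 1.0, 1.0))   # unit cube -> 3.0 (half of 6)
```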

The first object is considered as a root node of a tree structure, simply:

	{OBJ[1]}		Tree #1 - the initial tree

and the other nodes are added to this tree one by one in optimal places.


Node Placement
--------------

Start with the second node and look at the root of the tree.  The root will
be either an object or a bounding volume (for the second node it will be an
object; after that it will be a bounding volume.  We present the algorithm
this way because this section will be applied recursively to the root's
children).

Root node is an object: In this case, the new object (call it OBJ[2]) will form
a binary tree with the root node, as follows.

		{BV[1]}		Tree #2 - adding an object to tree #1
		   |			  (First Method)
	+----------+----------+
	|                     |
    {OBJ[1]}		  {OBJ[2]}

A bounding volume was created which contains both objects, and the bounding
volume became the root node.

Root node is a bounding volume: In this case, three different possible paths
can be taken when adding a new object to the tree. 

The first is simply to add a binary tree structure as above, with one child
being the old root and the other child being the new object.  For example,
adding a new object (OBJ[3]) in this fashion to tree #2 above yields:

			   {BV[2]}	Tree #3 - Adding object #3 to tree #2
			      |			  (First Method)
		   +----------+----------+
		   |			 |
		{BV[1]}		     {OBJ[3]}
		   |
	+----------+----------+
	|                     |
    {OBJ[1]}		  {OBJ[2]}

Again a new bounding volume has been created and made the root node.

The second method for adding to a bounding volume root node is to simply
add the new object as a new child (leaf) node:

		{BV[1']}	Tree #4 - adding an object to tree #1
		   |			  (Second Method)
	+----------+----------+---------------------+
	|                     |			    |
    {OBJ[1]}		  {OBJ[2]}		{OBJ[3]}

Note that the bounding volume root node may increase in size.  The bounding
volume is written BV[1'] to indicate this possible change.

The third method is to not add the new object as a sibling or a child of
the root, but rather to add it as a grandchild, great-grandchild or lower.
In this case, some child is picked as the most efficient to accommodate the new
object.  This child then acts like the root node above and the new object is
added in the same fashion, by choosing one of the three methods (or, if it is
a leaf node, automatically selecting the first method).  The next section deals
with deciding which method is the most efficient.


Selection Process
-----------------

The goal behind creating the tree is to minimize the average number of
intersection tests which must be performed.  By looking at the extra costs
added by adding an object at various points we can select the least expensive
option.

The costs are based on the area of the objects, which represents the
probability of the objects being hit by a ray.  The scheme is to test what
the additional cost is for method one and for method two and record which is
better.  This cost is termed the "local best cost".  For the root node this
cost is also saved as the "best cost", and the method and item are also saved.
Then method three is used, selecting the child thought to be most efficient for
adding the new object.  An "inheritance cost" is calculated for method three,
which is passed on to the child.  This cost is essentially the expense to the
parents of adding the object to the tree.  It occurs only when the parent
bounding volume must increase in size to accommodate the new object.  The term
"inheritance cost" will mean the cost addition calculated at the level, which
is the sum of the "parent inheritance cost", inherited from above, and the
"local inheritance cost", the extra cost due to this level.

The child selected is then checked for its cost using methods one and two.  If
the local best cost plus the parent inheritance cost is better than the best
cost of earlier parents, the best cost, method, and location are updated.  We
now check method three to look for an efficient child of this child.  If the
inheritance cost (which will be the sum of the local inheritance cost found at
this level plus the parent inheritance cost from earlier) is greater than the
best cost, method three does not have to be pursued further on down the tree.
This halting is valid because no matter where the node would be added further
down the tree, the cost will never be better than the best cost.

Otherwise, this whole procedure continues recursively on down the tree until
method three can no longer be performed, i.e. the inheritance cost is higher
than the best cost or the selected testing node is an object (leaf) node.  At
this point, whichever node has been chosen as the best node is modified by
method one or two.  Note that method three is not a true modification, but
rather is just a recursive mechanism, pushing the testing by methods one and
two down the tree.  After performing the changes to the tree structure, the
parent boxes are increased in volume (if need be) to accommodate the new
object.


Efficiency Criteria
-------------------

Well, we've been talking in a vacuum up to now about how to calculate the
average number of ray intersections performed for a tree.  We perform the same
calculations as G&S to determine the average--see that article for a
good explanation.  We also take their advice and do not worry about the
probability of hitting the root node, and so multiply our probability equation
by the proportional area of the root node.  The practical effect is that this avoids
a number of pointless divisions, since we just want to calculate relative
costs of adding the object to the structure and don't really want the actual
average ray/tree intersection value.

The cost for method one: what happens here is that a bounding volume is
created and its two children are the original test node and the new object.
We will use trees #1 and #2 to derive the cost calculations.  The initial
probability cost of hitting OBJ[1] is simply:

	old cost = 1

This is because the topmost node is always tested.

The new probability cost is the cost of hitting the bounding volume and
intersecting the two objects inside the bounding volume.  Essentially:

	new cost = 1 + 2 * P(BV[1])

This formula can be interpreted as follows.  One ray intersection is done to
intersect the bounding volume.  If this volume is hit, two more ray intersection
tests must be done to check the two objects contained within.  From these two
equations we can calculate the difference, which is:

	cost difference = 2 * P(BV[1])


The cost for method two: what happens here is that the new object is added to
the existing bounding volume.  Using trees #2 and #4 for demonstration, the old
cost is:

	old cost = 1 + k * P(BV[1])

where k is the number of children (in our example, it's 2).  The new cost is:

	new cost = 1 + (k+1) * P(BV[1'])

with P(BV[1']) being the new area of the bounding volume.  The difference is:

	cost difference = ( P(BV[1']) - P(BV[1]) ) * k + P(BV[1'])


The inheritance cost for method three: the cost incurred is the difference
between the number of children times the old parent's area and the number of
children times the new parent's area.  The old cost is:

	old cost = 1 + k * P(BV[1])

The number of children is the same for the new cost (i.e. the new object will
be added on down the line by method one or two, which always ends up with one
root node), so the only thing that can change is the area:

	new cost = 1 + k * P(BV[1'])

The difference:

	cost difference = ( P(BV[1']) - P(BV[1]) ) * k

With these criteria in hand, we can now actually try the method out.
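The three cost differences can be collected in one place (a sketch; the
P(...) areas are passed in as plain numbers, and the names are mine):

```python
def cost_method_one(p_new_bv):
    """Wrap the test node and the new object in a fresh bounding volume:
    two extra intersection tests whenever the new BV is hit."""
    return 2 * p_new_bv

def cost_method_two(p_old_bv, p_new_bv, k):
    """Add the object as the (k+1)-th child of an existing BV that had
    k children and may grow from area p_old_bv to p_new_bv."""
    return (p_new_bv - p_old_bv) * k + p_new_bv

def inheritance_cost(p_old_bv, p_new_bv, k):
    """Method three: the cost charged to a parent with k children whose
    BV must grow; it is passed down while descending the tree."""
    return (p_new_bv - p_old_bv) * k
```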


Example
-------

There are four tiles of a checkerboard to be ray traced, which, after
shuffling, are as shown:

	+---+---+
	| 1 | 3 |
	+---+---+
	| 4 |
	+---+
	| 2 |
	+---+

We start with a tree consisting of tile #1, with area 1.

Adding tile #2, we automatically come up with a tree:

		BV* (area 3)
		 |
	+--------+----------+
	|		    |
	1 (area 1)	    2 (area 1)

Note: BV* means the new BV being added, BV' means an old BV possibly increasing
in size.

Looking at tile #3, we try out method one:

			   BV* (area 6)
			    |
		 +----------+----------+
		 |		       |
		BV (area 3)	       3 (area 1)
		 |
	+--------+----------+
	|		    |
	1 (area 1)	    2 (area 1)

The cost difference is simply 2 times the new BV area, i.e. 12.

We also look at the cost of method 2:

			   BV' (area 6)
		 	    |
	+-------------------+---------------------+
	|		    |			  |
	1 (area 1)	    2 (area 1)		  3 (area 1)

The cost is (new area - old area) * 2 children + new area = (6-3)*2 + 6 = 12.
Since this is the same as method one, either method may be selected, with the
best cost being 12.

Method 3 is now applied, looking at each child and using methods one and two
on them.  The inheritance cost is (new area - old area) * 2 children =
(6-3)*2 = 6.  Each child is an object (leaf) node, so only method one can be
used.  Trying this on object 1:

			   BV' (area 6)
			    |			inheritance = 6
		 +----------+----------+
		 |		       |
		BV* (area 2)	       2 (area 1)
		 |
	+--------+----------+
	|		    |
	1 (area 1)	    3 (area 1)

The local cost is 2 * new area, i.e. 4.  The cost including the inheritance
is 4 + 6 = 10, which is lower than our earlier best cost of 12, so performing
method one on object 1 is now the best solution.

Trying method 1 on object 2 we get:

			   BV' (area 6)
			    |			inheritance = 6
		 +----------+----------+
		 |		       |
		 1 (area 1)	      BV* (area 6)
		  		       |
	                    +----------+----------+
			    |			  |
			    2 (area 1)		  3 (area 1)

The increase in cost is 2 * new area, i.e. 12.  This plus the inheritance of
6 is 18, which is certainly horrible (as we would expect).  So, we end up
with the best solution being to replace the node for object 1 with a BV which
contains objects 1 and 3, as shown earlier.

For object 4, we again test method one:

				      BV* (area 6)
				       |
			    +----------+----------+
			    |			  |
			   BV (area 6)	          4 (area 1)
			    |
		 +----------+----------+
		 |		       |
		BV (area 2)	       2 (area 1)
		 |
	+--------+----------+
	|		    |
	1 (area 1)	    3 (area 1)

The local cost is 2 * new area, i.e. 12.  Trying out method two:

			   	      BV (area 6)
			    	       |
		 +---------------------+---------------------+
		 |		       |		     |
		BV (area 2)	       2 (area 1)	     4 (area 1)
		 |
	+--------+----------+
	|		    |
	1 (area 1)	    3 (area 1)

The cost is (new area - old area) * 2 children + new area = (6-6)*2 + 6 = 6.
This is the better of the two methods, so method two applied to the root is
noted as the best cost of 6.

Method 3 is now applied, looking at each child and using methods one and two
on them.  The inheritance cost is (new area - old area) * 2 children =
(6-6)*2 = 0.  The first child is the BV containing objects 1 and 3, so we try
method 1:

				      BV (area 6)
				       |		inheritance = 0
			    +----------+----------+
			    |			  |
			   BV* (area 4)		  2 (area 1)
			    |
		 +----------+----------+
		 |		       |
		BV (area 2)	       4 (area 1)
		 |
	+--------+----------+
	|		    |
	1 (area 1)	    3 (area 1)

The cost is 2 * new area + inheritance = 8, which is not better than our
previous best cost of 6.  Method two yields:

				        BV (area 6)
				         |		inheritance = 0
			      +----------+----------+
			      |			    |
			     BV' (area 4)	    2 (area 1)
			      |
	+---------------------+---------------------+
	|		      |			    |
	1 (area 1)	      3 (area 1)	    4 (area 1)

The cost is (new area - old area) * 2 children + new area = (4-2)*2 + 4 = 8,
which, plus the inheritance cost of 0, is not better than our previous best
cost of 6.

Now we try method one on the second node of the tree, i.e. object 2:

			   	     BV (area 6)
			    	      |
		 +--------------------+--------------------+
		 |		       			   |
		BV (area 2)	       			  BV* (area 2)
		 |					   |
	+--------+----------+			+----------+----------+
	|		    |			|		      |
	1 (area 1)	    3 (area 1)		2 (area 1)	      4 (area 1)

Again, the cost is 2 * new area + inheritance = 2 * 2 + 0 = 4.  This beats
our best cost of 6, so this is the optimal insertion for object 4 so far.
Since this cost is also better than the best cost of 8 for the first child,
this branch is the best child to pursue further on down (though we can't go
further).  This means that we do not have to try method one for object 4 on
the first child's children (objects 1 and 3).  Confused?  Well, that's life.
Try examples of your own to see how the algorithm works.
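As a sanity check, the arithmetic for inserting tile #4 can be replayed
mechanically.  The function names here are just shorthand for the formulas
given earlier, not G&S's code:

```python
# Replaying the tile #4 costs from the example above.

def m1(new_area):                    # method one: new BV around node + object
    return 2 * new_area

def m2(old_area, new_area, k):       # method two: object joins k children
    return (new_area - old_area) * k + new_area

def inherit(old_area, new_area, k):  # method three inheritance term
    return (new_area - old_area) * k

# At the root: a new area-6 BV over the old root costs 12 ...
assert m1(6) == 12
# ... while adding tile 4 as a third child leaves the root at area 6: cost 6.
assert m2(6, 6, 2) == 6
# The root does not grow, so recursing into a child inherits cost 0.
assert inherit(6, 6, 2) == 0
# Child BV(1,3): method one gives 2*4 = 8, method two (4-2)*2 + 4 = 8.
assert m1(4) == 8 and m2(2, 4, 2) == 8
# Pairing tile 4 with object 2 in a new area-2 BV wins: 2*2 + 0 = 4.
assert m1(2) + inherit(6, 6, 2) == 4
```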


Improvements
------------

G&S say they choose which child node to test further by finding the one with
the smallest increase in area when the new object is added to it.  In case of
ties (which occur mostly when the area doesn't increase at all), they give a
few possible sub-criteria for choosing.  However, by the logic of bounding
volumes, the real test that should be done is to pick the bounding volume which
gives the best result when methods one and two are applied to it.

To prove this by example, imagine a bounding volume containing two other
bounding volumes, one huge (say it has 50% of the area of its parent) and one
tiny (say it's 1%).  Let's say an object is inserted which would not increase
the size of the 50% BV but would triple the size of the 1% BV to 3%.  By
Goldsmith's criterion we must pick the 50% BV to insert the object into.  Now
if we intersect the root node we will hit the 50% BV half the time, and so must
then do the other object intersection half the time.  If instead we had picked
the 3% BV, this would be hit 3% of the time, meaning that we would have to do
the object test only 3% of the time.  In both cases we had to test each BV, but
it was smarter of us to put the object in the smaller BV in order to avoid
testing.  This case will be caught by testing the increases for methods one and
two for the BV's.  Method one for the larger would yield 50%*2 and method two
50%*1, giving 50% as the best cost.  For the tiny BV, method one yields
3%*2 = 6% and method two (3%-1%)*2 + 3% = 7%, giving 6% as the best cost,
obviously much better.  Even though we have increased the chance of having to
open the smaller box, it's still cheaper to do this than to add the object to
a box which is much more likely to be hit.
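Running the 50%-versus-3% argument through the same formulas (areas expressed
as fractions of the parent's area; a sketch, not G&S's code):

```python
# Best insertion cost for each candidate child, per the argument above.

def m1(new_area):               # method one
    return 2 * new_area

def m2(old_area, new_area, k):  # method two, k existing children
    return (new_area - old_area) * k + new_area

# Huge child: the object fits without growing it (area stays 0.50).
huge = min(m1(0.50), m2(0.50, 0.50, 2))   # min(1.00, 0.50)
# Tiny child: its area triples, 0.01 -> 0.03.
tiny = min(m1(0.03), m2(0.01, 0.03, 2))   # min(0.06, ~0.07)
assert tiny < huge   # the tiny BV is still the cheaper home
```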

-----------------------------------------------------------------------------

Best of USENET:

>From: cyrus@hi.unm.edu (Tait Cyrus)
Newsgroups: comp.graphics
Subject: images
Organization: U. of New Mexico, Albuquerque

It seems that once a month someone asks where they can get
ahold of some images to play with.  Well, for those of you
looking for images to "play" with, I have some images
available that you can have.  They are available via
anonymous ftp on hc.dspo.gov (26.1.0.90).  The images are
in pub/images.  There are two directories 'old' and 'new'.
Both contain images as well as a README.  The README's
describe the sizes of the images and basically what the
images are of.  Because of disk space constraints, all of the
images are compressed.  All of the images but one are
grey scale.  The other is an image of a mandrill in color.
Three files make up the mandrill, one each for red, green,
and blue.

Currently there are 20 images.  As time goes on this number
will be increasing (I hope), so if you are interested in
images, you might want to check once a month or so to see
if there are any new ones.

If you have any images which you would like to make available,
please let me know.  I would like to put them with the ones
I currently have (kind of like a repository).

More:

Several people have asked me for some more details concerning the images
I have made available on hc.dspo.gov (26.1.0.90).  Again, the images
are available via anonymous ftp and are in /pub/images.  There are 2
directories, 'new' and 'old' which contain the 20 or so images.
Both have README's which describe the images and the sizes of the images.

The images are compressed, the README's are NOT.  If your system does
not have uncompress (you might check with your system manager first
to make sure) I have put the source for compress/uncompress in /pub
of the same machine.

The images are all grey scale, except for mandrill.  The grey
scale images are such that each pixel is represented by one
byte (value of 0 = black, 255 = white).

Hope this helps any of you who were confused or were having troubles.


>From: todd@utah-cs.UUCP (Todd Elvins)
Newsgroups: comp.graphics
Subject: New model of an espresso maker
Organization: University of Utah CS Dept

I have installed a file called 'espresso.polys' on the ftp directory
at cs.utah.edu

It is a model of one of those aluminum italian made espresso makers.

T. Todd Elvins  IFSR
({ucbvax,ihnp4,decvax}!utah-cs!todd, todd@cs.utah.edu)
"Eat, drink, and be merry, for tomorrow you might be in Utah."

END OF RTNEWS
 _ __                 ______                         _ __
' )  )                  /                           ' )  )
 /--' __.  __  ,     --/ __  __.  _. o ____  _,      /  / _  , , , _
/  \_(_/|_/ (_/_    (_/ / (_(_/|_(__<_/ / <_(_)_    /  (_</_(_(_/_/_)_
             /                               /|
            '                               |/

			"Light Makes Right"

			 June 20, 1988

Compiled by Eric Haines, 3D/Eye Inc, ...!hplabs!hpfcla!hpfcrs!eye!erich
All contents are US copyright (c) 1988 by the individual authors

Contents:
    New people and addresses (David Lister, Pete Segal, Paul Heckbert)
    Non-article on RenderMan and Dore' (Eric Haines)
    Ray Tracing Bibliography Update (Paul Heckbert & Eric Haines)
    Commercial Ray Tracing Software (Eric Haines)
    Benchmarks (Jeff Goldsmith)

So, I've regained the desire to type again a mere month after the SIGGRAPH
course notes deadlines.  Not much in the queue, but I thought I'd pass on what
little there is.  The most interesting developments which affect ray tracing
are the release of the RenderMan (tm) interface by Pixar and the publication
of Ardent's Dore' (tm) Programmer's Guide and Reference Manual.

Andrew Glassner's hardcopy "RT News" is coming out soon - if you're not on the
mailing list, write him.

SIGGRAPH Ray Tracers Roundtable:  in case you don't know, this is an informal
get-together during SIGGRAPH where we talk about ray tracing techniques and
trends.  Personally, I'd like to aim for about the same time as last year:
Thursday, sometime after 5 pm (with people then hitting the technical reception
for food afterwards).  Same as last year, we'll leave notes at the message
center to all subscribers as to exactly where and when.  If you're not planning
to attend SIGGRAPH, please save me some note-writing effort by telling me.
Otherwise, see you there.

-----------------------------------------------------------------------------

New people and addresses
------------------------

Paul Heckbert's summer address (at Xerox PARC):

	heckbert.pa@xerox.com

Mail will still (slowly but surely) reach Paul at his old Berkeley address.


# David Lister
# Data General Corp.
# 62 T. W. Alexander Drive
# Research Triangle Park, NC   27709
# (919) 248-6223
alias	david_lister	hpfcrs!hpfcla!hplabs!lister@dg-rtp.dg.com

>From David:

I have been working on Ray Tracing since 1984 while I was a graduate
student at The Ohio State University.  I am still working on my thesis
for my MSEE on parallelism and algorithm improvements for Ray Tracing.
My areas of Ray Tracing interest include the following:

1). algorithm efficiency
2). simulation of natural phenomena (optical characteristics of materials)

and,

3). parallelism.

David Lister


# Pete Segal
# AT&T Pixel Machines
# Room 4K-208
# Crawfords Corner Road
# Holmdel, NJ  07733
# (201)-949-1244
alias	pete_segal	hpfcrs!hpfcla!ihnp4!homxc!pixels!pls

>From Pete:

   I'm not real big on introductions, but here goes...

   I was born the son of a poor black sharecropper... wait, wrong story....
   
   I started in the Computer graphics lab at RPI in 1979, got my Master's
in '82 and came to Bell Labs doing graphics for CAD systems.  I moved to 
Pixel Machines in early 1987 and have been working there since.  I am 
interested in 3d rendering and animation, and I am currently working on
the Ray tracing library for the Pixel Machine, RAYlib.

-----------------------------------------------------------------------------

Non-article on RenderMan and Dore' (by Eric Haines)

What follows is a non-review of RenderMan and Dore'.  If you've been looking
for a chance to contribute to the RT News, here's your chance - I'm not
planning on reviewing either of these in depth, but would love to hear others'
opinions (even anonymous ones) of these packages.

I just received a copy of Pixar's "RenderMan" interface.  To quote the
preface:

	The RenderMan interface is designed so that the information needed to
	specify a photorealistic image can be passed to different rendering
	programs compactly and efficiently. ... In order to achieve this,
	the interface does not specify how a picture is rendered, but instead
	what picture is desired. ... The RenderMan interface is a collection
	of procedures to transfer the description of a scene to the rendering
	program.

The interface for the ray tracer is wonderfully short, as it is simply another
command in Pixar's shading language.  I include it here in full:

18.4.5 RAY TRACING

color trace( point R )

	"trace" returns the incident light falling on a surface in a given
	direction R.  If a particular implementation does not support ray
	tracing, and cannot compute the incident light arriving from an
	arbitrary direction, trace will return 0 (black).

That's it.  I don't see any way this command can support shadowing or
filtering, but I haven't read through the document carefully (Pixar seems
to want to go with their "Shadow Depth Maps" instead - SIGGRAPH 87 paper).

Anyway, this interface is "endorsed" by Apollo, DEC, Stellar, Ardent, and
Sun, so it seems to be a happening something.  I'm not sure what "endorsement"
means, exactly, but one thing it means is that you should probably check it out.


Dore':  well, I received the two preliminary manuals yesterday.  As such,
about all I can commit myself to is: "nice packaging - very artsy binders".
Reviews, please?

-----------------------------------------------------------------------------

Ray Tracing Bibliography Update (Paul Heckbert & Eric Haines)

Paul Heckbert's "Ray Tracing Bibliography" has been updated for May, 1988.
It will appear in the next issue of the hardcopy "Ray Tracing News" and in the
SIGGRAPH '88 "Introduction to Ray Tracing" course notes.  If you would like to
see it before then, or would just like to have an electronic version on hand,
please write Eric Haines and I'll mail you the latest version.  Also, if you
do get a copy, please tell us of any errata or addenda you may have for it.

-----------------------------------------------------------------------------

Commercial Ray Tracing Software, by Eric Haines
-------------------------------

I'm collecting information for the "Consumer's and Developer's Guide to Image
Synthesis" course at SIGGRAPH this year - namely, companies selling ray tracers.
The bare bones info is at the end of this issue.  I'm in the midst of ordering
manuals, but that's about as far as I'm going.  So, I would like to hear from
anyone who knows anything about these packages.  I will keep all reviews
confidential, unless you explicitly state otherwise.

Below is a partial list of groups (in alphabetical order) offering ray tracing
packages.  If you know of any others, please clue me in (I'm particularly
ignorant when it comes to software for animators & advertising, such as
Wavefront.  For now, I have left them off the list below).  The "Contact"
section first lists the person who I have dealt with, followed by the official
contact address and/or phone number.  I believe the only company that won't
have representation on the floor at SIGGRAPH is Ray Tracing Corp (though UCS
probably will, in some form).  Oh, just to dispel rumors, "Numerical Designs
Ltd", Turner Whitted's company, is not planning on announcing a ray tracer
by SIGGRAPH 88.  Instead, they are marketing a package based on using pipes
and filters for rendering (beyond this, I do not know...).



AT&T, Pixel Machines "RAYlib".  Available only on the Pixel Machine.

	Cost: ???, manuals available realsoonnow.

	Contact: Ken Krause, 201-563-2274.
		 AT&T Pixel Machines
		 1-800-544-0097


Ardent, "Dore'".  Available on Ardent's Titan superworkstation.  Source
	available in C, portable to other machines.

	Cost: $250 for universities, research sites.  $15000/$5000 per year
	for commercial sites.  Binary license $200 per copy.  Programmer's
	Guide and Reference Manual preliminary versions are $25 each.

	Contact: Ardent Computer Corporation,
		 880 West Maude Avenue,
		 Sunnyvale, CA  94086
		 408-732-0400


Hewlett Packard.  Add-on to their existing "Starbase" graphics package,
	which will also include radiosity software.  Announced at NCGA 1988.
	Available with the TurboSRX come the end of the year (?).

	Cost: ???

	Contact: Hewlett Packard Co.
		 1-800-752-0900, Ext. 782A


Integra (via Mitsubishi, via Enimax), "Turbo Beam Tracing".  Available on IBM
	PC/AT or compatible.

	Cost: ??? (will be shown at SIGGRAPH)

	Contact: Gregory Szewczyk
		 Enimax International Inc.
		 113 Martin Grove Road
		 Etobicoke, Ontario M9B 4K7
		 CANADA
		 416-234-9120


Ray Tracing Corp, "TRACER" and "TRACER PC".  Source available in FORTRAN 77.
	"TRACER" runs on Cray, VAX, and many Unix-based systems.  "TRACER PC"
	is for the IBM PC, 640K memory, 20 Mb hard disk, math co-processor,
	and Targa or Number Nine card.  "TRACER PC" includes a modeler.

	Cost: $3000 for source and first year of updates.  Manual for $25.

	Contact: Mark Franklin
		 Ray Tracing Corporation
		 2516 Via Tejon, Suite 316
		 Palos Verdes Estates, CA  90274
		 213-373-0520


United Computer Systems, Inc, "Ray Tracer".  Available on Apollo, IBM, and
	Mac.  Evidently selling well on Mac and IBM: four times the Apollo
	sales since its introduction post-SIGGRAPH '87.  It turns out that this product
	is actually made by Ray Tracing Corp.

	Cost: ???, manual available for $25.

	Contact: Alan Brown (at TMAC (sp?)), 213-475-1067
		 United Computer Systems, Inc
		 Graphics Development Group
		 10564 Progress Way
		 Cypress, CA  90630
		 714-220-2931


Wavefront Technologies, Inc, "3-D Dynamic Imaging System".  Their system has
	"production speed ray tracing" as a feature.

	Cost: ???

	Contact: Wavefront Technologies
		 530 East Montecito Street
		 Santa Barbara, CA  93103
		 (805)-962-8117

-----------------------------------------------------------------------------

Benchmarks, by Jeff Goldsmith

[This recently appeared in the email newsletter on scientific visualization
that Jeff Goldsmith has been editing.  If you're interested in subscribing
and contributing to the SciVi discussion group, contact him at:
jeff@hamlet.caltech.edu .  I thought it of interest because of the bounding
box intersector statistics. - EAH]

I did an interesting micro-project this winter/spring.  We were
interested in porting some programs to PCs and/or Workstations, so
I compared the performance of a few chunks of graphics code on a bunch
of machines.  The code was supposed to represent some of the typical
cpu and I/O intensive things that graphics programs tend to do.

The benchmarks are:
	Faults:	randomly accesses a huge array (>16 Megabytes)
		(I know--not huge to you Cray users, but it is
		to PCs)  Obviously, tests virtual memory usage.
	Shade: typical shading routine.  Tests floating point
		very strongly.
	Bbox: highly optimized bounding box intersection routine
		for a ray tracer.  (actually, this one has become
		somewhat unoptimized for the testing, but you get
		the idea)  Was intended to test floating point
		performance and if-then-else branching.  Actually,
		tests floating point and caching.
	Clear: sequentially clears a large memory array.  Tests
		compiler, MIPS, and memory usage.
	Sub: calls an empty subroutine repeatedly.  Tests MIPS.
	Fread: formatted reads and writes.  Tests I/O speed, floating
		point performance and MIPS.
	Uread: binary reads and writes.  Tests raw I/O speed.
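As a rough illustration (in Python, which is of course not what the originals
were written in), the "Clear" and "Sub" kinds might look like this.  The sizes
and counts are placeholders, not the figures behind the table:

```python
# Toy versions of two of the benchmark kinds described above.
import time

def clear_benchmark(n=1_000_000):
    # "Clear": sequentially clear a large array; stresses loop
    # overhead and sequential memory writes.
    buf = bytearray(n)
    t0 = time.perf_counter()
    for i in range(n):
        buf[i] = 0
    return time.perf_counter() - t0

def empty():
    pass

def sub_benchmark(calls=1_000_000):
    # "Sub": call an empty subroutine repeatedly; isolates call overhead.
    t0 = time.perf_counter()
    for _ in range(calls):
        empty()
    return time.perf_counter() - t0
```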


Machine	faults	shade	bbox	clear	sub	fread	uread
------- ------  -----	----	-----	---	-----	-----
VAX780	12.5	16.9	12.3	11.8	19.3	17.4	23.2
uVAX II 22.2	18.5	15.6	12.9	18.9	24.0	29.1
HP320	*	10	45	4	8	5	6
HP350	4	5	28	3	4	3	4
Sun 3/1	*	9	34	4	6	5	5
Sun 3/2 *	9	48	3	3	2	4
Sun 386 *	9.5	16	2	5	3	1
Mac II	* 	31.3	44.3	*	4.3	13	5.3
PC/AT	* 	70	180	*	39	17	6
PS 2/80 *	9.4	49.7	*	11.1	11.4	7.3
Compaq	~	~	~	~	19.0	13.1	5.5
Iris	*	30	59	4	5	15	3
IrisFPA *	11	13	4	5	6	3

All entries are in seconds to run specified benchmark.
* indicates that the operating system was not set up to allow
   the program to run.
~ indicates benchmark not attempted.

The machines tested:
	VAX780:	Vax 11/780 with FPA and 32 Mbytes memory
	uVAX II:  Microvax II, VMS, 16 Mbytes memory
	HP320: Unix
	HP350: SRX, Unix, 32 Mbytes memory
	Sun 3/1: Sun 3/160, 68881 co-processor
	Sun 3/2: Sun 3/260, 68881 co-processor
	Sun 386: Sun 386i/250, 80386/80387, Unix
	Mac II: MPW, 68881
	PC/AT: Xenix, 80287
	PS 2/80: OS2, 80387 co-processor
	Compaq: Compaq 386, Unix, no coprocessor
	Iris: SGI IRIS 3030 without FPA
	IrisFPA: SGI IRIS 3030 with FPA
	(Iris 3130 performance was identical to 3030.)


What did I learn from all this?  One thing is obvious: if an
FPA is available for your machine and you can afford it (they
are usually cheap), buy it even if you don't do a lot of floating
point.  It affects all sorts of performance characteristics.
Another important point: it is not possible to evaluate computer
performance as one number.  Different computers do different things
well.

-----------------------------------------------------------------------------
END OF RTNEWS
   

From saponara@tcgould.TN.CORNELL.EDU Sun Sep 11 16:40:12 1988
Date: Sun, 11 Sep 88 16:21:33 EDT
From: saponara@tcgould.tn.cornell.edu (John Saponara)
To: FISHER%3D.dec@decwrl.dec.com, apollo!arvo@eddie.mit.edu,
        apollo!johnf@eddie.mit.edu, atc@cs.utexas.edu, barr@csvax.caltech.edu,
        barsky@miro.berkeley.edu, bogart%gr@cs.utah.edu,
        ckchee@dgp.toronto.edu, cohen@squid.tn.cornell.edu,
        dana!mrk@hplabs.hp.com, daniel@apollo.com, dk@csvax.caltech.edu,
        dutio!fwj@mcvax.cwi.nl, dutrun!wim@mcvax.cwi.nl,
        batcomputer!cornell.uucp!fornax!sfu-cmpt!chapman, glassner@cs.unc.edu,
        grant@icdc.llnl.gov, gray@rhea.cray.com, hohmeyer@miro.berkeley.edu,
        hultquis@prandtl.nas.nasa.gov, jaf@squid.tn.cornell.edu,
        jeff@hamlet.caltech.edu, joy@ucdavis.edu, kyriazis@turing.cs.rpi.edu,
        lister@dg-rtp.dg.com, litwinow@apple.com, megatek!kuchkuda@ucsd.edu,
        ph@miro.berkeley.edu, phil@yy.cicg.rpi.edu,
        pixar!pat@ucbvax.berkeley.edu, roy@wisdom.tn.cornell.edu,
        saponara@tcgould.TN.CORNELL.EDU, sgi!paul@pyramid.pyramid.com,
        tim@csvax.caltech.edu, wtl@cockle.tn.cornell.edu
Subject: RT News, September 11, 988
Status: R

[whoops, bad title line: it's really 1988 AD, for those who are wondering]


 _ __                 ______                         _ __
' )  )                  /                           ' )  )
 /--' __.  __  ,     --/ __  __.  _. o ____  _,      /  / _  , , , _
/  \_(_/|_/ (_/_    (_/ / (_(_/|_(__<_/ / <_(_)_    /  (_</_(_(_/_/_)_
             /                               /|
            '                               |/

			"Light Makes Right"

			 September 11, 1988

Compiled by Eric Haines, 3D/Eye Inc, ...!hplabs!hpfcla!hpfcrs!eye!erich
All contents are US copyright (c) 1988 by the individual authors

Contents:
    Intro
    Capsule Autobiographies, by more new people
    The Continuing Saga of MTV's Raytracer, by Mark VandeWettering
    Public Domain Ray Tracer Q & A, by Mark VandeWettering
    Public Domain Ray Tracer Utilities, by Tom Vijlbrief
    Sorting Unnecessary on Shadow Rays for Kay/Kajiya?
	by Eric Haines and Mark VandeWettering
    Summary of Replies to Vectorizing Ray-Object Intersection Query,
	by Tom Palmer
    The Ray Tracer I Wrote, by George Kyriazis
    New Bitmaps Library Available, Jef Poskanzer

-----------------------------------------------------------------------------

Intro

    Let's see, there's Whitted's, VandeWettering's, Kyriazis', Ohta's,
Hartman/Heckbert's, "Pearl" on the Amiga & Atari, and a whole slew of them by
Heckbert and friends (the Minimal Ray Tracer contest on USENET) - we're now at
the point where the number of public domain ray tracers cannot be counted on
one hand (and that's even if you count all the minimal RTs from Paul Heckbert's
contest as worth one).  If nothing else, this proves ray tracing's a pretty
trivial algorithm in its simplest form.

    I had hoped to talk with Tim Kay about a project he proposed at the ray
tracing roundtable at SIGGRAPH this year, but he's in New York until the 20th.
The kernel of the idea that he proposed was setting up ftp space on one of the
Caltech computers and letting people check in their test ray-tracers there.

    In the meantime this issue quickly filled up with biographical sketches
and the winnowings of USENET, especially Mark VandeWettering's efforts.  His
ray tracer looks fairly nice, doing much more than spheres - check it out!

-- Eric

-----------------------------------------------------------------------------

Biographical Sketches, old and new


# Ben Trumbore - photorealism, efficiency, interactive ray tracing
# Cornell University Program of Computer Graphics staff

Greetings!  I am a candidate from the Reformed Radiosity party.  My campaign
pledge:  I will not be satisfied until computer generated images are
indistinguishable from photographs.  Like all right-thinking Americans, I
believe ray tracing is the means to this end.  Better images at better
runtimes!  And I strenuously deny allegations that I spend too much time
working with stochastic textures - every modern life needs balance.  I foresee
a world where interaction and ray tracing live in harmony, and I want you to
be a part of that world.  Thank you!

--------------

A few more new people have joined our ranks since last issue.

#
# Tom Palmer - applied ray tracing: realism & modeling for molecular graphics
# National Cancer Institute
# P.O. Box B  Bldg 430
# Frederick, MD 21701
# (301) 698-5797
alias tom_palmer palmer@ncifcrf.gov

I'm currently interested in vectorizing ray-object intersection calculations.
However, my primary interest is in applied ray tracing.  The wide variety of
primitives and the realistic rendering make ray tracing an ideal (if slow)
method for creating images of extremely complex molecular models.  I'm
currently working with a (chemist) collaborator experimenting with visualizing
electron density, electrostatic potential, molecular orbitals, etc. via ray
tracing primitives.


A note from Tom Palmer:

Doug Turner has left UNC and is now with Apple in Cupertino doing ray casting
and textures on their Cray.  I would guess his email address to be
turner@apple.com, but don't hold me to it.

--------------

#
# Phillip Getto - Real-time radiant energy simulation :-), sampling,
#                 object-oriented rendering, efficient intersections calcs.
# CII 7309
# Center for Interactive Computer Graphics
# Rensselaer Polytechnic Institute
# 110 8th St.
# Troy, NY 12180-3590
# (518) 276-6491
alias	phil_getto	phil@yy.cicg.rpi.edu

--------------

#
# George Kyriazis - parallel ray-tracing
# ECSE Dept., JEC,
# R.P.I.,
# Troy, NY 12180
# e-mail: kyriazis@turing.cs.rpi.edu
#	kyriazis@yy.cicg.rpi.edu
alias	george_kyriazis	kyriazis@yy.cicg.rpi.edu

I will (pretty soon) be parallelizing a ray tracer to work with an AT&T Pixel
Machine.  One of the problems there will be sharing pixels when doing
anti-aliasing.  Stochastic sampling may be considered.  Also implementing
algorithms for high hit/miss ratio in intersection calculations.

--------------

#
# Stephen Spencer - accurate diffuse light calculation, antialiasing
# The Ohio State University Advanced Computing Center for the Arts and Design
# 1224 Kinnear Road
# Columbus Ohio  43212
# (614) 292-3416
alias	stephen_spencer	spencer@tut.cis.ohio-state.edu

Currently employed by The Ohio State University Advanced Computing Center for
the Arts and Design as a Graphics Research Specialist I designing graphics
software for research and instructional use by students and staff working in
computer animation and industrial design.  Graphics interests include ray-
tracing (somewhat obviously) and radiosity, and improving the realism of
computer-generated images in general.

--------------

# Greg Turk - rendering equation & tracing from lights
# UNC at Chapel Hill
# P.O. Box 26
# Chapel Hill, NC 27514
# (919) 962-1918
alias greg_turk turk@unc.cs.edu

I'm currently looking at ways to speed up collision detection for complex
objects.  I'm betting that collision detection can be made fast enough to
be useful for interactive graphics applications, and I plan to see how far
I can get on Pixel-planes, my favorite one-of-a-kind graphics engine.

--------------

#
# Roy Hall
# Program of Computer Graphics
# 120 Rand Hall
# Cornell University
# Ithaca, NY 14853
# (607)255-6711
alias	roy_hall	roy@wisdom.tn.cornell.edu

	I've been writing renderers commercially for some time and have
	been concentrating on efficiency and ease of use.  Recently I returned
	to academics as faculty at Cornell.  I expect to be pursuing research
	for a variety of rendering techniques, ray tracing being high on the
	list.  Just finished a book "Illumination and Color in Computer
	Generated Imagery" - pub. by Springer-Verlag - should be out Sept
	or Oct.  In addition to graphics research I'm teaching architecture
	classes in lighting and acoustics, and computer applications to 
	architecture.

--------------

#
# Mark VandeWettering
# Master's Student
# University of Oregon Computer And Information Sciences
# markv@cs.uoregon.edu
alias	mark_vandewettering	markv@cs.uoregon.edu

I am currently interested in most aspects of raytracing, radiosity and
realistic image synthesis.  I am the author of a public domain raytracer
that has been distributed via USENET and is available from me via e-mail
and via anonymous ftp.  I am interested in developing a "library" of
public domain code for doing image synthesis, and learning as much as I
can in the mean time.

To download my raytracer, ftp to drizzle.cs.uoregon.edu (128.223.4.1)
and login as ftp, with your name as a password.  The README file in the
pub directory can guide you further.


[by way of introduction, attached is Mark's reply to some of my comments about
his ray tracer]

I would be pleased to see my raytracer compared to other raytracers.  I am not
a "serious" graphics person, my Master's thesis work is in functional
programming languages and parallelism, but I do seem to spend alot of my off
hours trying to hack graphics stuff.

I chose Kajiya/Kay bounding volumes because they seemed much simpler than
octree methods, and still offered good speed.

As for all your suggestions:

	- The `t' option is not mentioned in the `?' options list. [This option
	prints out a `.' after each scan-line is computed]

	Thanks, will include it.  I just hacked -t in to get some idea
	of how fast the raytracer was...

	- how does your implementation of Kay/Kajiya work?  That is, what
	  sorting algorithm is going on with insert/delete that ensures you
	  of getting the lowest key at the beginning of the list?  It's been
	  but 10 years since I last studied sorting algorithms, so I am
	  not up-to-date on what your scheme is all about (the powers of
	  two comparisons and swaps).
	
	I use a simple heap implementation and use it as a priority
	queue.  I forget which data structures book I hauled it out of,
	but it isn't the most efficient in the world.  But then again,
	when I profiled it, it wasn't in the "Top Ten Most Deadly" list
	either, so I guess it's okay for now.  The next release will try
	to document it better.
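
For readers unfamiliar with the structure Mark mentions, here is a minimal
sketch of a binary min-heap used as a priority queue of candidates keyed by
distance.  The names (Heap, heap_push, heap_pop) are illustrative, not taken
from his ray tracer:

```c
#include <stdlib.h>

/* a candidate keyed by distance, e.g. the distance to a bounding volume */
typedef struct {
    float key;
    void *obj;
} HeapEntry;

typedef struct {
    HeapEntry *e;
    int n, cap;
} Heap;

void heap_push(Heap *h, float key, void *obj)
{
    int i;
    if (h->n == h->cap) {
        h->cap = h->cap ? h->cap * 2 : 64;
        h->e = realloc(h->e, h->cap * sizeof(HeapEntry));
    }
    /* sift up: halve the index until the parent's key is no larger */
    i = h->n++;
    while (i > 0 && h->e[(i - 1) / 2].key > key) {
        h->e[i] = h->e[(i - 1) / 2];
        i = (i - 1) / 2;
    }
    h->e[i].key = key;
    h->e[i].obj = obj;
}

HeapEntry heap_pop(Heap *h)   /* remove and return the smallest key */
{
    HeapEntry top = h->e[0], last = h->e[--h->n];
    int i = 0, c;
    /* sift the former last entry down from the root */
    while ((c = 2 * i + 1) < h->n) {
        if (c + 1 < h->n && h->e[c + 1].key < h->e[c].key)
            c++;
        if (last.key <= h->e[c].key)
            break;
        h->e[i] = h->e[c];
        i = c;
    }
    h->e[i] = last;
    return top;
}
```

Both operations are O(log n), which is why, as Mark says, the heap rarely
shows up near the top of a profile.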

	- More comments throughout the code would be a plus.  If you spend an
	  hour or two now, you'll save us all a lot of time.  Most of the stuff
	  looks fairly straightforward, but stuff like the Kay queue sort
	  could use at least a reference as to where to go next.
	
	Guilty as charged.  That is why I didn't try to post it to
	comp.sources.unix.  The next release will be cleaned up,
	commented, and have a better Makefile.

Your comments on the lighting model are good; the lighting model needs to be
reworked at least slightly, and probably a lot more than slightly.  I haven't
got to it yet, but it is coming.

Compilation and link problems: I have a much more generic Makefile for it, but
I accidentally distributed my "totally hacked" one.  The next release will
probably be configurable to a wider variety of systems.

I have already received mail from a person at Tek Research Labs who is possibly
thinking of adapting my raytracer to their new Motorola 88000 based workstation
as a demo.  I said he could send me one if it sold any. :-)  I am glad to see
my effort so warmly received, and treated with some level of enthusiasm.
Nobody has come back and said "Gosh, how stupid," so I will probably extend
some further effort to keep it up.

-----------------------------------------------------------------------------

The Continuing Saga of MTV's Raytracer, by Mark VandeWettering

I thought I would take the time to present a list of the software
that I am making available via anonymous ftp.  All these things have
been distributed via netnews over the past few years, so I dusted them
off and made them available.

I encourage anyone who has some interesting programs to contribute to
send me mail.  Unfortunately, I cannot allow people to upload directly,
because our space is VERY tight here (the anonymous ftp login puts you
in a subdirectory of mine, and I have a puny quota), but I would like to
maintain a library of freely distributable source code for computer
graphic applications.  I hope to have viewers for sun and X11 soon, as
well as an imagen printer dump.  Anyone working on such things, write me
if you would like to have your work distributed.

Anyway, here are the current contents of the directory ~/pub on
drizzle.cs.uoregon.edu (128.223.4.1):

-cut-cut-cut-cut-cut-cut-cut-cut-cut-cut-cut-cut-cut-cut-cut-cut-cut-cut



This directory contains:

	raytrace.shar		my raytracer as it was posted to usenet's
				comp.graphics group

	rpi-tracer.shar		A raytracer written by George Kyriazis,
				posted to comp.graphics.
				
	teapot.nff.Z		compressed nff file of the famous
				"teapot", as patches for the above
				raytracer.

	mini-ray.shar		The winner of Paul Heckbert's minimal
				raytracing contest.  Wow!  Fits on a
				screen, once it has been unpacked.

	ohta-tracer.shar	A REALLY FAST spheres only raytracer.
				Was originally posted to comp.graphics.

	haines.shar		A slightly out of date version of Eric
				Haines' NFF Standard Procedural Database
				stuff.  I am working on getting the
				latest and greatest from netlib, but
				they seem to be slow.

Any questions or bugs with this software can be sent to markv@cs.uoregon.edu.

Thank you,

Mark

-----------------------------------------------------------------------------

Public Domain Ray Tracer Q & A, by Mark VandeWettering

Reply-To: markv@drizzle.UUCP (Mark VandeWettering)
Organization: University of Oregon, Computer Science, Eugene OR

[Mark's public domain RT was posted to USENET a week or two ago.  This is a
note he posted to USENET just recently.]


First of all thanks to everyone who has expressed an interest in my raytracer.
I wish to address some questions globally rather than to each individual.

Q:	The raytracer seems to work, but how do I display it on my XXX
	brand workstation?

A:	Well, there are many answers to this.  We only have black and
	white sun workstations here, so I have to display my images on
	an ancient and probably quite rare Tek4115 graphics terminal
	which has only 8 bitplanes.  

	What I do is take the output of this raytracer, and pipe it
	through a program to convert it to the Utah Raster Toolkit
	format that we use here at the U of O.  We have several programs
	to display these files on a wide variety of devices.  The Utah
	Raster Toolkit is available via anonymous FTP from cis.utah.edu,
	or you can send them some amount of money, and they can make you
	a tape (I don't have exact details here).

	The only problem with this is that the guts of my conversion program
	are not original; they consist of some code that Eugene Miya
	posted to this group awhile ago (never write what you can beg,
	borrow or steal!), and I would have to contact him before I
	distributed said program.  Also you would have to get the Utah
	Raster stuff as well.

	If I get enough mail to warrant posting this stuff, and if I can
	verify it with Eugene Miya, I will post my programs.

	My advice:  Don't bother hacking pic.c too much.  Write a
	program to display general raster images of an rgb bitmap to
	whatever device you have on hand.  Use dithering if you want
	black and white.  And then POST such programs, so that others
	can use/improve them.
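
As a concrete example of the dithering suggested above, here is a small
ordered-dither sketch using the standard 4x4 Bayer matrix; the function name
is illustrative, not from any of the programs mentioned:

```c
/* dispersed-dot dither matrix, values 0..15 */
static const int bayer4[4][4] = {
    {  0,  8,  2, 10 },
    { 12,  4, 14,  6 },
    {  3, 11,  1,  9 },
    { 15,  7, 13,  5 }
};

/* return 1 (white) or 0 (black) for pixel (x,y) with grey in 0..255 */
int dither_pixel(int x, int y, int grey)
{
    /* scale the matrix entry up to the 0..255 range of the input */
    int threshold = (bayer4[y & 3][x & 3] * 255 + 8) / 16;
    return grey > threshold;
}
```

Run over a whole image, this turns smooth grey ramps into patterns of black
and white dots whose density approximates the grey level.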

Q:	Where can I get Eric Haines' NFF package?
	
A:	Simple:  I will be reposting it right after this message.

Q:	Does anyone have any nifty NFF file objects to trace?

A:	Well, I just converted the teapot to NFF file format (using
	faceted polygons) but it is pretty big.  If someone can suggest
	an anonymous FTP site where I could put some of these, as well
	as other revisions to my programs, I would make them available
	from there.

Q:	What else am I working on?
	
	Well, I would like to add motion blur, and statistically
	optimized anti-aliasing.  I added code so that you can specify
	colors by name rather than by guessing colors.  Parametric
	patches, tori, and surfaces of revolution would be nice to add
	too. As soon as I feel the raytracer has been significantly 
	extended, I will repost.

Finally, a bug fix (courtesy of Cameron Elliot):

Your program will crash on some machines unless you do the following...
(The buf array was too small.)

Modify screen.c:

/* Was before cam...
	[Ed:  Gadzooks, Mark can sure be stupid sometimes....]
        curbuf = (Pixel *) malloc (xres * sizeof (Pixel)) + 1 ;
        buf = (Pixel *) malloc (xres * sizeof (Pixel)) + 1;
*/
        curbuf = (Pixel *) malloc ((xres+1) * sizeof (Pixel)) ;
        buf = (Pixel *) malloc ((xres+1) * sizeof (Pixel));

        ---
        Cameron Elliott		Portable Cellular Communications
        Path: ...!uw-beaver!tikal!ptisea!cam

Thanks again for the bug fix, and the nice comments!

Keep the mail coming!  

Mark VandeWettering

-----------------------------------------------------------------------------

Public Domain Ray Tracer Utilities, by Tom Vijlbrief

>From: tom@tnosoes.UUCP (Tom Vijlbrief)
Newsgroups: comp.graphics
Organization: TNO Institute for Perception, Soesterberg, The Netherlands

In article <2683@uoregon.uoregon.edu> markv@drizzle.UUCP (Mark VandeWettering) writes:

>	My advice:  Don't bother hacking pic.c too much.  Write a
>	program to display general raster images of an rgb bitmap to
>	whatever device you have on hand.  Use dithering if you want
>	black and white.  And then POST such programs, so that others
>	can use/improve them.

This is a posting of two programs which convert the output
of the raytracing program to Sun rasterfile format.

Ray2sun maps to greyscale.

Cray2sun maps to color (3 bits red, 3 bits green, and 2 bits blue).

The program Ray2sun works fine, but Cray2sun should be much smarter
to produce better quality pictures.  (E.g. use dithering...)

Tom

Tom Vijlbrief
TNO Institute for Perception
P.O. Box 23				Phone: +31 34 63 62 77
3769 DE  Soesterberg			E-mail: tnosoes!tom@mcvax.cwi.nl
The Netherlands				    or:	uunet!mcvax!tnosoes!tom

[Code is 200+ lines, so is left out.  Either check USENET or write Tom or
me for the code. - Eric]
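
The 3-3-2 mapping Tom describes amounts to keeping the top three bits of the
red and green channels and the top two bits of blue, packed into one byte.
A minimal sketch (the function name is illustrative, not from Tom's code):

```c
/* pack 8-bit r, g, b into one byte: rrrgggbb */
unsigned char pack332(unsigned char r, unsigned char g, unsigned char b)
{
    return (unsigned char)((r & 0xE0) | ((g & 0xE0) >> 3) | (b >> 6));
}
```

As Tom notes, straight truncation like this bands badly on smooth shaded
surfaces; dithering the low-order bits before truncating would help.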

------------------------------------------------------------------------------

Sorting Unnecessary on Shadow Rays for Kay/Kajiya?
	by Eric Haines and Mark VandeWettering

[What follows is a discussion (still ongoing - please do add your two cents)
between Mark and I about the Kay/Kajiya efficiency scheme.  Mark uses
Kay/Kajiya in his ray tracer for shadow rays, which I feel is superfluous.
It's a minor point, as the sorting, as Mark points out, is usually not a
killer as far as where the time is spent.  However, it was worth it to me
as I found I misunderstood a part of the algorithm and found a better way
(I think) to implement Kay/Kajiya than was originally presented.  Tim Kay:
care to comment?]

Eric's first note:

- Note that Kay/Kajiya sorting is unnecessary for shadow rays.  This
  is because you don't care about the closest object, but rather
  whether any object blocks the light.  As soon as you get a shadow
  test hit, you can skip the rest of the intersection test - you're
  done.

----------------

Mark's reply:

    Is this totally correct?  What we want to know is if there is a
    shadow between the light source and the point we hit.  If we
    sort the list, we can stop when we find the first item, but we
    can also stop when the bounding volumes or the intersection are
    greater than the distance to the light (the point isn't
    shadowed).  Because the heap insertion/deletion routines take so
    little time, it would seem that this would be a good
    optimization.  But yes, I agree that some optimization of shadow
    rays could be made.

----------------

Eric's response:

The way a sorted list is created is to intersect a BV, get its distance, and
put its children on the list using this distance as a key.  For shadowing, the
distance to the light acts as the maximum cutoff from the start.  So, when a BV
is intersected it is either beyond the light or not.  If beyond, its children
are not put on the candidate list.  If not beyond, the children can be placed
anywhere within the candidate list (since we want any intersection).
Essentially, there should never be anything on the candidate list which is
keyed as having a distance beyond the light.  Only when you actually test the
candidate can you tell if it is beyond, at which point you throw it out.  So,
there should never be a point where you can say "all these candidates are
beyond the light", as such candidates should not be on the test list, no matter
whether the list is sorted or not.  QED, the sorting is unneeded.  I should
mention that Jeff Goldsmith also figured this out independently - has anyone
else noticed that sorting is unnecessary for shadowing?

Kay/Kajiya is something of a Catch-22 (but a great method, nonetheless), since
the sort key is the candidate's parent's distance, and what we really would
like is to have the sort key be the candidate's distance.  To get this distance
we intersect the candidate.  Now we have the distance (if any) we would have
liked to use to sort the candidate, but it's too late: we've intersected the
candidate and so taken it off the candidate list (possibly replacing it with
its children).

However, there might actually be a slight gain in practice if the list is
sorted by distance for shadow rays.  Say you have two bounding volumes, each
with N objects.  If both BV's are intersected, and the further bounding volume
contains the light, then it's probably worth intersecting the objects in the
closer BV first.  This is because the further bounding volume probably contains
objects which are beyond the light, while the closer one has objects more or
less between the light and the test point.  I believe this gain is negated by
the following counter-argument: what if the closer BV contains the intersection
point, and the further does not contain the light or the intersection point?
By the previous logic, we should intersect the further BV's children first.
"Scene dependence" seems to be the key phrase here: are your shadowing objects
closer to the intersection point (e.g. a board with a bunch of nails pounded
into it seems worth testing the closer nails first), or closer to the light
source (e.g. the light source has a lampshade).

In light (ho ho) of this, I maintain that Kay & Kajiya sorting is unnecessary
for shadow rays.  However, there are certainly other interesting sorting
strategies worth considering for shadow rays.  Some possibilities:

  - Object complexity (i.e. if you have a choice, try intersecting the
    sphere before trying the spline surface).
  - Object size (big objects cast big shadows.  This idea actually
    helps enhance "shadow caching", where the last object to shadow a
    light is then tested first for the next shadow ray for that light.
    A big object will have more shadow coherence and so will result
    in less having to find another shadowing object.  Say a light is
    blocked by both a sphere and a fine mesh of polygons.  The sphere
    will have much more shadow coherence than each polygon, resulting
    in many fewer misses).
  - Function sorting.  I haven't thought about this much, but one might be
    able to come up with a function which simulates the probability
    that a given object will be intersected.  The function could be based
    on the closest intersection distance, or the farthest, or some of each.
    One possible theory:  BV's which neither overlap the light nor the
    intersection point have a higher probability of containing a shadowing
    object, for the reasons given earlier.
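
The "shadow caching" idea mentioned above can be made concrete with a toy,
self-contained illustration: remember, per light, the last object that
blocked it, and test that one first on the next shadow ray.  The Object type
and the blocks() routine here are stand-ins for a real intersection test;
all the names are illustrative:

```c
#include <stddef.h>

#define MAX_LIGHTS 8

typedef struct {
    int opaque_here;    /* stand-in: does this object block the ray? */
    int tests;          /* intersection-test count, to show the gain */
} Object;

static Object *shadow_cache[MAX_LIGHTS];

static int blocks(Object *o)        /* toy "intersection" test */
{
    o->tests++;
    return o->opaque_here;
}

int in_shadow(int light, Object *objs[], int nobjs)
{
    int i;
    /* shadow coherence: the last blocker very likely blocks again */
    if (shadow_cache[light] && blocks(shadow_cache[light]))
        return 1;
    for (i = 0; i < nobjs; i++) {
        if (objs[i] == shadow_cache[light])
            continue;                       /* already tested above */
        if (blocks(objs[i])) {
            shadow_cache[light] = objs[i];  /* remember for next ray */
            return 1;
        }
    }
    shadow_cache[light] = NULL;             /* light was visible */
    return 0;
}
```

With a big blocker (a sphere rather than one polygon of a fine mesh), the
cached object keeps answering shadow queries in a single test.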

Anyway, just some thoughts. - Eric Haines

----------------

Mark's response:

	> The way a sorted list is created is to intersect a BV, 
	> get its distance, and put its children on the list using 
	> this distance as a key.  

This isn't the way it is currently implemented in my raytracer, and
glancing back at Kay/Kajiya, it seems contrary to their intent as well.
If I intersect the parent volume, I intersect with the bounding volume
of each of the children, and use THAT distance as the key for insertion.
This does seem to require a lot more bounding box intersections, but
still has exactly the same number of ray/object intersections.  

> Kay/Kajiya is something of a Catch-22 (but a great method,
> nonetheless), since the sort key is the candidate's parent's distance,
> and what we really would like is to have the sort key be the
> candidate's distance.  To get this distance we intersect the
> candidate.  Now we have the distance (if any) we would have liked to
> used to sort the candidate, but it's too late: we've intersected the
> candidate and so taken it off the candidate list (possibly replacing it
> with its children).

I think that this argument falls in light of my correction above.  Each
time an object is queued, it is keyed by the actual distance to its own
bounding volume, not the distance to the parent.

My feeling is that if you add the proper cutoffs, Kajiya/Kay
testing for shadows is still just about the same cost as not bothering
to sort.   I actually implemented early cutoffs within the framework of
Kajiya/Kay BVs, and it only gave an improvement of 2% on average for the
Standard Procedural Database objects.  This DOESN'T mean that it is
correct, because I am uncertain just how "typical" the SPD stuff is, but
it would seem that for certain objects, the gain is small.

----------------

Eric's response:  Well, I made a semantic error.  Kay/Kajiya calls a
"candidate" an object (real or BV) whose bounding volume has been hit and
whose children have not been intersected.  So, you're right in that the
algorithm does put an intersected BV on the list as a candidate, and not
its children.  In practical terms this will mean less sorting: you only
have to insert the intersected BV into the list, and not all its children
(all of which have the same key).  My confusion arose from the fact that
Kay/Kajiya puts a bounding volume around every object, which I don't (a simple
triangle or a sphere costs about the same to intersect as a bounding box, and
a BV around a sphere (which is itself a BV, after all) seems excessive).

    Interestingly enough, looking over the original paper, I now have to agree
with you:  sorting on shadow rays using their original algorithm makes sense.
However, I have found that there is an inefficiency in the Kay/Kajiya algorithm
as presented at SIGGRAPH (which I never noticed before):  in their figure 7,
where they outline the algorithm, (page 275 of the SIGGRAPH '86 proceedings)
it is stated, "if the ray hits the BV, insert the child into the heap".  Then
when they get to the "while" loop they state that "while heap is non-empty and
distance to top node < t" the loop should be performed.  Seems to me that they
are inserting children which could be immediately culled.  I believe a faster
algorithm results from:

	If the ray hits the bounding volume and distance < t
	    Insert the child into the heap

In other words, if the distance to the BV is greater than the present t (which
is the closest intersection distance of a real object), its child can
immediately be discarded (instead of inserting it into the heap).

    Using this slight modification of Kay/Kajiya now means that sorting is
unnecessary.  In the original, it was worth checking the distance because
objects which were beyond the present t distance (for shadowing, the distance to
the light) were actually inserted on the heap.  By realizing that these
children have no business on the heap (they're beyond the light, right?) and
not inserting them in the first place, there is then no reason to sort what
is then actually put on the heap.
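
A toy sketch of that last point for shadow rays: once the cutoff "only add a
candidate whose BV distance is below the light distance" is applied at
insertion time, the candidate list need not be sorted at all, because any
single blocker ends the test.  Here each candidate is flattened to (distance
to its BV, whether its object blocks the ray); a real tracer would of course
walk a BV hierarchy.  Names are illustrative:

```c
typedef struct {
    double bv_dist;   /* distance at which the shadow ray enters the BV */
    int blocks;       /* does the contained object block the ray? */
} Candidate;

int shadow_hit(const Candidate *c, int n, double dist_to_light)
{
    const Candidate *list[64];   /* unsorted candidate list */
    int top = 0, i;

    for (i = 0; i < n; i++)
        if (c[i].bv_dist < dist_to_light)   /* the cutoff: cull now */
            list[top++] = &c[i];

    /* order doesn't matter: any hit will do, the nearest isn't needed */
    while (top > 0)
        if (list[--top]->blocks)
            return 1;
    return 0;
}
```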

[This is where things stand for now.  My original mistake of misreading Kay
and Kajiya was partly due to my using a more efficient algorithm:  not adding to
the heap if the child cannot possibly be hit.  I had never noticed that this
was not how they wrote it up, as I assumed they would compare all intersections
against the closest real intersection distance. - Eric]

------------------------------------------------------------------------------

Summary of Replies to Vectorizing Ray-Object Intersection Query, by Tom Palmer

>From: palmer@ncifcrf.gov (Thomas Palmer)
Newsgroups: comp.graphics


This is a summary of the replies I received to my query regarding
vectorizing ray-object intersection calculations.

----------------

>From: stan!dodge!dave@boulder.Colorado.EDU (Dave Plunkett)

   You might try any of:

   "The Vectorization of a Ray Tracing Program for Image Generation", 
   with J.M. Cychosz and M.J. Bailey.
   Cyber 200 Applications Seminar, NASA Goddard, October 1983.
   
   "Ray Tracing on a Cyber 205",
   Conference Proceedings VIM-40, 
   San Francisco, CA, April 1984.
   
   "A Vectorized Ray Tracing Algorithm", 
   Masters Thesis,
   Purdue University, West Lafayette, IN.  August 1984.
   
   "The Vectorization of a Ray Tracing Algorithm for Improved Execution Speed",
   with M.J. Bailey.  IEEE Computer Graphics and Applications,
   Vol. 5, No. 8, August 1985.

   The last two of these are more readily accessible.  These papers describe my
Master's research, a vectorized ray tracing algorithm for use on csg objects
that was written using explicit vector syntax on the 205.  If you have any
specific questions, I'd be glad to answer any that I can.

Dave Plunkett
Solbourne Computer, Inc.
Longmont, CO 80501
(303) 772-3400
...sun!stan!dave

----------------

>From: 3ksnn64@ee.ecn.purdue.edu (Joe Cychosz)

I have developed a vectorized ray tracing package for the CYBER 205.  Part of
the work is discussed in Plunkett's CG&A paper.  Nelson Max also had a paper on
vectorized intersections of height fields used to produce Carlos' Island.  Saul
Youseff at Florida State has also been doing some work using raytracing for
collector plate design.

Both Plunkett and I use a ray queue to collect rays, and then vectorize
such that several rays are intersected with an object.  This approach does
make it difficult to implement accelerated ray tracers such as Mike
Kaplan's "Constant Time Ray Tracing".

I have a variation of Kaplan's approach that sub-divides the object space and
uses bit operators to eliminate unnecessary intersection calculations.

Joe Cychosz

----------------

>From: mcvax!tokaido.irisa.fr!priol@uunet.UU.NET (Thierry Priol -- Equipe Hypercubes)

There are only a few works on vectorization of the ray-object intersection.
One of these was done by Plunkett on a CDC CYBER.  The reference is:

[reference same as in Plunkett's note]
						 
Personally, I work in ray-tracing on a parallel MIMD (hypercube iPSC/2)
computer.

Thierry PRIOL
Institut de Recherche en Informatique et Systemes Aleatoires
Campus de Beaulieu
35042 RENNES CEDEX
FRANCE
e-mail : mcvax!inria!irisa!priol

---------------------

>From: mbc@a.cs.okstate.edu (Michael Carter)

     The real problem in the inner ray tracing loop is not ray-object
intersections, but ray-bounding volume intersections.  If you refer to the
article by Kay and Kajiya from SIGGRAPH '86, they describe a method of breaking
down the OBJECT space into a hierarchical data structure, and intersecting rays
against simple bounding volumes constructed from sets of planes.  This method
queries the objects in the order that they occur along the ray; therefore, NO
SORTING IS NEEDED.  It takes at least three pairs of planes to completely
enclose an object, therefore this intersection calculation could be done
efficiently, in parallel (on perhaps more than one object at a time!)  on a
vector machine.  Most of the time is spent in this ray-bounding volume
intersection loop, and not in the ray-object intersection algorithms.
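
A sketch of the plane-pair ("slab") test referred to above, after Kay and
Kajiya: intersect the ray with each slab's two planes, keeping the running
maximum of the entry distances and minimum of the exit distances.  For
brevity this version uses only the three axis-aligned slabs of a box and
assumes no direction component is zero; Kay/Kajiya add oblique slab
directions and handle the parallel case.  Names are illustrative:

```c
/* ray origin o, direction d; slab bounds lo[i]..hi[i] per axis */
int hit_slabs(const double o[3], const double d[3],
              const double lo[3], const double hi[3],
              double *tnear, double *tfar)
{
    double tn = -1e30, tf = 1e30;
    int i;
    for (i = 0; i < 3; i++) {
        double t1 = (lo[i] - o[i]) / d[i];   /* the slab's two planes */
        double t2 = (hi[i] - o[i]) / d[i];
        if (t1 > t2) { double t = t1; t1 = t2; t2 = t; }
        if (t1 > tn) tn = t1;                /* latest entry so far */
        if (t2 < tf) tf = t2;                /* earliest exit so far */
    }
    *tnear = tn;
    *tfar = tf;
    return tn <= tf && tf > 0.0;             /* overlap => BV is hit */
}
```

The per-slab work is identical and independent, which is what makes it
attractive for a vector machine.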

     I realize that this is not something that the C compiler can do
on its own, but remember: no pain -- no gain.  (-:

-- 
Michael B. Carter
Department of Electrical and Computer Engineering, Oklahoma State University
UUCP:  {cbosgd, ea, ihnp4, isucs1, mcvax, pesnta, uokvax}!okstate!mbc
ARPA:  mbc%okstate.csnet@csnet-relay.arpa


[The statement "NO SORTING IS NEEDED" appears to be in error - sorting is an
integral part of the Kay & Kajiya algorithm. - Eric H.]

----------------

>From: spencer@tut.cis.ohio-state.edu (Stephen Spencer)

[included in last issue]

----------------------------------------------------------------------------

The Ray Tracer I Wrote, by George Kyriazis

>From: kyriazis@rpics (George Kyriazis)
Newsgroups: comp.graphics
Organization: RPI CS Dept.

	Since I had so many requests for my ray tracer, I am posting it
here [USENET].  I hope that will help people who can't get mail to me.  I have
finally put the source where people can read it, but it's not definite that
it'll stay where it is.  Right now it is on life.pawl.rpi.edu (128.113.10.2)
in pub/ray.  Please comment back on it.  Thanks.

So, here it is:

[Deleted here, again, as it's another 900+ lines of archive.  Check USENET,
write George or me, or ftp it as above to get a copy.  What follows are
excerpts from his README file.]

	Here is a simple ray tracing program developed here at RPI.  It
incorporates shadows, reflections and refraction together with one
non-directional light source.  The surfaces can reflect the light diffusely
and can also give specular reflections.  The illumination model used is
Phong's.  The only objects supported right now are spheres, but the data
structure can be easily expanded to incorporate more objects.

[...]

	The ray tracer is written so it can be easily understood (at least
that version), and it is fully commented.  Nevertheless, it probably won't
be understood by a newcomer.

[...]

	Please inform me of any bugs that the program might have,
or any features that you want the upcoming versions to have.  This software
was written by me, and subsequent versions will probably be produced
by other members of the RPI chapter of the ACM.
						Good luck!

	George Kyriazis
	kyriazis@turing.cs.rpi.edu

----------------------------------------------------------------------------

New Bitmaps Library Available, Jef Poskanzer

Reply-To: Jef Poskanzer <jef@rtsg.ee.lbl.gov>
Organization: Paratheo-Anametamystikhood Of Eris Esoteric, Ada Lovelace Cabal

The third release of the Portable Bitmap package is ready for FTPing
from expo.lcs.mit.edu:contrib/pbm.tar.Z.

Answers to some frequently asked questions:
 - The decimal address of expo is 18.30.0.212.
 - Please avoid ftp'ing from expo.lcs.mit.edu between the hours of
   9am and 6pm East Coast USA time.
 - There may be other places to FTP it from, but I don't know of them.
   In particular, you can't FTP it from lbl-rtsg.  Don't even try.
 - Don't forget to set binary mode when you do the FTP.
 - Pbmtorast and rasttopbm depend on Sun's pixrect library, and will
   compile only on suns.
 - Currently there is no way to get the package other than FTP.  However,
   if comp.sources.misc ever gets going again, perhaps PBM will get
   distributed there.  (I have sent mail to the moderator about it, and
   have received no reply.)

Appended is the README for PBM.  It includes a list of the major enhancements
in this version.
---
Jef

- - - - - - - - - -

                       Portable Bitmap Toolkit
                       Distribution of 31aug88
                    Previous distribution 04apr88


Included are a number of programs for converting various bitmap formats
to and from a portable format; plus some tools for manipulating the
portable bitmaps.

Major changes since the previous distribution:
    The pbm format now has a "magic number".
    New conversion filters brushtopbm, giftopbm, pbmtolj, pbmtomacp,
      pbmtoxwd, and pbmtox10wd.
    Icontopbm converter has a better parser -- it knows to skip over
      any extraneous comments at the beginning of the icon file.
    Pbmtops generates a different PostScript wrapper program -- it should
      handle huge bitmaps better.
    Xwdtopbm now handles byte-swapping correctly.
    Pbmmake takes a flag to specify the color of the new bitmap.
    Pbmpaste now implements 'or', 'and', and 'xor' operations as well
      as the default 'replace'.
Plus various minor bug fixes and enhancements.
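
As an aside, the plain "P1" flavor of the pbm format is simple enough to
write by hand: the magic number, the width and height, then one ASCII 0 or 1
per pixel.  The sketch below is illustrative only; libpbm has its own (and
more careful) writer:

```c
#include <stdio.h>

/* write w*h 1-bit pixels (one byte each in bits[]) as a plain pbm file */
void write_pbm(FILE *fp, const unsigned char *bits, int w, int h)
{
    int x, y;
    fprintf(fp, "P1\n%d %d\n", w, h);    /* "P1" is the magic number */
    for (y = 0; y < h; y++) {
        for (x = 0; x < w; x++)
            fprintf(fp, "%d ", bits[y * w + x] ? 1 : 0);
        fprintf(fp, "\n");
    }
}
```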


Files in this distribution:

    README		this
    FORMATS		descriptions of the various bitmap formats
    Makefile		guess

    brushtopbm.c	convert from Xerox doodle brushes to portable bitmap
    cbmtopbm.c		convert from compact bitmap to portable bitmap
    giftopbm.c		convert from GIF to portable bitmap
    icontopbm.c		convert from Sun icon to portable bitmap
    macptopbm.c		convert from MacPaint to portable bitmap
    rasttopbm.c		convert from Sun raster to portable bitmap
    xbmtopbm.c		convert from X10 or X11 bitmap to portable bitmap
    xwdtopbm.c		convert from X10 or X11 window dump to portable bitmap
    xxxtopbm.c		convert from UNKNOWN BITMAP to portable bitmap

    pbmtoascii.c	convert from portable bitmap to ASCII graphic form
    pbmtocbm.c		convert from portable bitmap to compact bitmap
    pbmtoicon.c		convert from portable bitmap to Sun icon
    pbmtolj.c		convert from portable bitmap to HP LaserJet
    pbmtomacp.c		convert from portable bitmap to MacPaint
    pbmtops.c		convert from portable bitmap to PostScript
    pbmtoptx.c		convert from portable bitmap to Printronix
    pbmtorast.c		convert from portable bitmap to Sun raster
    pbmtoxbm.c		convert from portable bitmap to X11 bitmap
    pbmtox10bm.c	convert from portable bitmap to X10 bitmap
    pbmtoxwd.c		convert from portable bitmap to X11 window dump
    pbmtox10wd.c	convert from portable bitmap to X10 window dump

    pbmcatlr.c		concatenate portable bitmaps left to right
    pbmcattb.c		concatenate portable bitmaps top to bottom
    pbmcrop.c		crop a portable bitmap
    pbmcut.c		cut a rectangle out of a portable bitmap
    pbmenlarge.c	enlarge a portable bitmap N times
    pbmfliplr.c		flip a portable bitmap left for right
    pbmfliptb.c		flip a portable bitmap top for bottom
    pbminvert.c		invert a portable bitmap
    pbmmake.c		create a blank bitmap of a specified size
    pbmpaste.c		paste a rectangle into a portable bitmap
    pbmtrnspos.c	transpose a portable bitmap x for y

    libpbm.c		a few utility routines
    pbm.h		header file for libpbm
    macp.h		definitions for MacPaint files
    x10wd.h		definitions for X10 window dumps
    x11wd.h		definitions for X11 window dumps
    bmaliases		csh script to make aliases for converting formats
    *.1			manual entries for all of the tools
    pbm.5		manual entry for the pbm format
    bitreverse.h	useful include file


Unpack the files, edit Makefile and change the options to suit,
make, and enjoy!  Note that if you're not on a Sun, you won't be
able to compile rasttopbm and pbmtorast.

I've tested this stuff under 4.2 BSD, 4.3 BSD, and System V rel 2,
and on both Suns and Vaxen.  Nevertheless, I'm sure bugs remain.
Feedback is welcome - send bug reports, enhancements, checks, money
orders, etc. to the addresses below.


    Jef Poskanzer
    jef@rtsg.ee.lbl.gov
    {ucbvax, lll-crg, sun!pacbell, apple, hplabs}!well!pokey

-----------------------------------------------------------------------------

END OF RTNEWS
 

From saponara@tcgould.TN.CORNELL.EDU Mon Oct  3 16:43:41 1988
Return-Path: <saponara@tcgould.TN.CORNELL.EDU>
Received: from tcgould.TN.CORNELL.EDU (batcomputer.ARPA) by turing.cs.rpi.edu (4.0/1.2-RPI-CS-Dept)
	id AA09470; Mon, 3 Oct 88 16:43:27 EDT
Date: Mon, 3 Oct 88 15:47:56 EDT
From: saponara@tcgould.tn.cornell.edu (John Saponara)
Received: by tcgould.TN.CORNELL.EDU (5.54/1.2-Cornell-Theory-Center)
	id AA05461; Mon, 3 Oct 88 15:47:56 EDT
Message-Id: <8810031947.AA05461@tcgould.TN.CORNELL.EDU>
To: FISHER%3D.dec@decwrl.dec.com, apollo!arvo@eddie.mit.edu,
        apollo!johnf@eddie.mit.edu, atc@cs.utexas.edu, barr@csvax.caltech.edu,
        barsky@miro.berkeley.edu, bogart%gr@cs.utah.edu,
        ckchee@dgp.toronto.edu, dana!mrk@hplabs.hp.com, daniel@apollo.com,
        dk@csvax.caltech.edu, dutio!fwj@mcvax.cwi.nl, dutrun!wim@mcvax.cwi.nl,
        batcomputer!cornell.uucp!fornax!sfu-cmpt!chapman, glassner@cs.unc.edu,
        grant@icdc.llnl.gov, gray@rhea.cray.com, hohmeyer@miro.berkeley.edu,
        hultquis@prandtl.nas.nasa.gov, jaf@squid.tn.cornell.edu,
        jeff@hamlet.caltech.edu, joy@ucdavis.edu, kyriazis@turing.cs.rpi.edu,
        lister@dg-rtp.dg.com, litwinow@apple.com, m-cohen@cs.utah.edu,
        megatek!kuchkuda@ucsd.edu, ph@miro.berkeley.edu, phil@yy.cicg.rpi.edu,
        pixar!pat@ucbvax.berkeley.edu, roy@wisdom.tn.cornell.edu,
        saponara@tcgould.TN.CORNELL.EDU, sgi!paul@pyramid.pyramid.com,
        tim@csvax.caltech.edu, wtl@cockle.tn.cornell.edu
Subject: RT News, 10/3/88
Status: R

 _ __                 ______                         _ __
' )  )                  /                           ' )  )
 /--' __.  __  ,     --/ __  __.  _. o ____  _,      /  / _  , , , _
/  \_(_/|_/ (_/_    (_/ / (_(_/|_(__<_/ / <_(_)_    /  (_</_(_(_/_/_)_
             /                               /|
            '                               |/

			"Light Makes Right"

			 October 3, 1988

Compiled by Eric Haines, 3D/Eye Inc, ...!hplabs!hpfcla!hpfcrs!eye!erich
All contents are US copyright (c) 1988 by the individual authors

Contents:
    Intro
    New Addresses and People
    Bitmap Stuff, Jeff Goldsmith
    More Comments on Kay/Kajiya
    Questions and Answers (for want of a better name)
    More on MTV's Public Domain Ray Tracer (features, bug fixes, etc)
    NFF File Format, by Eric Haines

-----------------------------------------------------------------------------

Intro

    IMPORTANT:  Around October 16th I'm losing the `saponara' account at
    Cornell.  So, in case you haven't heeded my earlier warnings, this is
    truly the one!  Please write to me at:

	hpfcla!hpfcrs!eye!erich@hplabs.hp.com

    If you have never tried to write me at this address, you should try now
    (submit something for the News while you're at it...).

    This issue is something of a queue clearer for me: a lot has been posted
    on USENET concerning Mark VandeWettering's public domain ray tracer.  I
    include all of this and more at the end.  Even if you're not interested, I
    hope you can wade through it all to the end, as I would appreciate comments
    on the "neutral file format" I use in the SPD package.

-------------------------------------------------------------------------------

New Addresses and People

    Remember that you can ask me any time for the latest version of the RT News
mailing list.

Andrew Glassner has settled down and bought some bookshelves, and is at:

# Andrew Glassner		Andrew Glassner
# Xerox PARC			690 Sharon Park Drive
# 3333 Coyote Hill Road		Apt. #17
# Palo Alto, CA  94304		Menlo Park, CA  94025
# (415) 494 - 4467		(415) 854 - 4285
alias	andrew_glassner	glassner@xerox.com

For those of you who receive only the email version of the Ray Tracing News:
you should contact Andrew, as he is the editor of the hardcopy version of
the RT News.  The hardcopy contains many articles which do not appear in the
email version, so be sure to get both.

--------

# K.R.Subramanian
# The University of Texas at Austin
# Dept. of Computer Sciences 
# Taylor Hall 2.124
# Austin, TX 78712

alias  krs  subramn@cs.utexas.edu (ARPA)
 or
alias  krs  {uunet...}!cs.utexas.edu!subramn (UUCP).

Interests in Ray Tracing:

	Use of hierarchical search structures for efficient ray tracing,
investigating better space partitioning techniques, trying to apply
ray tracing to practical applications.

	Currently a PhD student in Computer Sciences at The University of 
Texas at Austin.

One suggestion on the RT round table: we should set aside a portion of time
where we can talk to other RT people on a more personal basis.  At least,
I find it easier to talk to people that way.

On the RT news: I would like to see practical applications of ray tracing
described here.  What applications really require mirror reflections,
refraction, etc.?  I haven't seen applications where ray tracing was the
way to go.

--------

>From: mcvax!ecn-nlerf.com!jack@uunet.UU.NET (Jack van Wijk)

Via my old colleagues at Delft University of Technology I received
a copy of your Ray Tracing News. I am delighted by this initiative, since
it provides a fast, informal way to communicate with colleagues working
in this sensational area. 

At the moment I do not do research with respect to ray tracing, but
I expect that in the coming year the blood will creep again where it can't go
(old Dutch proverb). The institute where I work now is very interested
in high quality graphics, scientific data visualization and parallelism, 
so I expect that ray tracing can be made a topic here.

I would be very happy if you could put me on the mailing list. Here is
a short auto-biography:

# Jarke J. (Jack) van Wijk - Geometric modelling, intersection algorithms,
#                            parallel algorithms.
# Netherlands Energy Research Foundation, ECN
# P.O. Box 1, 1755 ZG  Petten (NH), The Netherlands
alias	jack_van_wijk	ecn!jack@mcvax.cwi.nl

I have done research on ray-tracing at Delft University of Technology
from 1982 to 1986 together with Wim Bronsvoort and Erik Jansen. 
My thesis is: "On new types of solid models and their visualization with 
ray-tracing", Delft University Press, 1986, which title summarizes my
main interests. I have developed intersection algorithms for sweep-defined 
objects (translational, rotational, sphere), and blending.  Research was also
done on curved surfaces, modelling languages, and on improving efficiency.
Currently I am interested in intersection algorithms, efficiency, and
parallel algorithms, and the use of ray tracing for Scientific Data 
Visualization.

--------

Linda Roy's mail address:

# Linda Roy - all aspects of ray tracing especially efficiency
# Silicon Graphics Inc.
# 2011 Shoreline Blvd.
# Mountain View, California 94039-7311
# 415-962-3684

--------

Mark VW's mail address:

# Mark VandeWettering
# c/o Computer and Information Sciences Dept.
# University of Oregon
# Eugene, OR 97403

-------------------------------------------------------------------------------

Bitmap Stuff, by Jeff Goldsmith

   [The following is for VMS people.  UNIX/C people should contact anyone at
the University of Utah for information on their "Utah RLE Toolkit", which has
all kinds of bitmap manipulation tools using pipes (in the style of Tom Duff).
It's a nice toolkit (and includes the famous mandrill picture), and can be
had by ftp from cs.utah.edu. - EAH]

   I have some bitmap utilities that I can put somewhere
if there's interest.  They aren't intended to be anywhere near as
portable as poskbitmaps, but they seem to have more tools.
I'm pretty curious what a good total set of tools would be; 
maybe this can spark such a list.  Mine work only under VMS
(does direct mapping to files--FAST) and use a bizarre format
that is really just 1024 bytes of header followed by pixels.
Here's a list of the tools:
	Cutout:		Cuts a rectangle out
	Dissolve:	Fades from one picture to another
	Gamma:		Channel-independent contrast change
	Filter:		2x2 boxfilter
	Lumin:		Color to Black and White via luminosity
	Pastein:	Pastes a rectangle into another picture
	Poke:		Mess with header data, e.g. offsets
	Resam:		Change from 1-1 to 5-4 aspect ratio fast
	Reverse:	Inverse video
	Switch:		Swap red, green, blue channels around
	Thresh:		Sets pixel < threshold to 0; ramps the rest
	Xzoom:		Horizontal stretch.  Floating point factor
	Zoom:		Floating point rescale.
None of these are super-robust, but they are pretty fast.  The
slowest is zoom and it runs in 1-2 minutes on a VAX 780.  On a
newer machine, they'd be ok-fast.  

By the way, I've used each of them in animations, so the transformations
are smooth.  Also, they are clearly useful.

-------------------------------------------------------------------------------

More Comments on Kay/Kajiya

>From Jeff Goldsmith:

I do a quick check on the children to determine the key for the sort.
I just use the largest component of the current ray as the direction
along which to check and then just use the minimum (or maximum) extent
of the bounding volume to generate a key.  Tim Kay says that that is
not what they meant in the paper, but it's close enough and seems to
work.  However, before the sorter ever gets to deal with a new bounding
volume, I check to see if the leading edge of the bounding volume is
beyond the current hit.  John Salmon added the trick that all illumination
rays get a pseudo-hit at the light source position, so that automatically
rejects all objects that cannot cast shadows.  (Of course, it deals with
objects on the other side of the ray origin, too.)  I also, of course,
don't sort the illumination rays' bounding volumes.

A further note: I did not find that the sorting cost was trivial; in 
fact, it made up for most of the time saved in avoiding bounding volume
checking.  It was more useful before we added all the other hacks to
avoid things, though. 

Good references for heap sort algorithms are:
	Standish, _Data Structure Techniques_     and
        Knuth, of course.

Heap sort is the right algorithm, I think, because a total order
is not needed on all the objects.  We need to pull off one object
(bounding volume) at a time from the head of the list, and once
we find a hit, we discard the rest of the list.  There's no point
in sorting stuff that we will never check.
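[Jeff's partial-order point can be made concrete with a small binary min-heap
keyed on the estimated ray distance to each bounding volume.  A hypothetical
sketch, not the code discussed above:]

```c
/* Minimal binary min-heap of (key, id) pairs, keyed on the estimated
   ray distance to a bounding volume.  Only the nearest candidate is
   popped at each step, so volumes beyond the first hit are never fully
   sorted -- the point of a heap instead of a complete sort. */
typedef struct { double key; int id; } HeapEntry;

static void heap_push(HeapEntry *h, int *n, double key, int id)
{
    int i = (*n)++;
    h[i].key = key; h[i].id = id;
    while (i > 0 && h[(i - 1) / 2].key > h[i].key) {   /* sift up */
        HeapEntry t = h[i]; h[i] = h[(i - 1) / 2]; h[(i - 1) / 2] = t;
        i = (i - 1) / 2;
    }
}

static HeapEntry heap_pop(HeapEntry *h, int *n)
{
    HeapEntry top = h[0];
    h[0] = h[--(*n)];
    for (int i = 0;;) {                                /* sift down */
        int l = 2 * i + 1, r = l + 1, m = i;
        if (l < *n && h[l].key < h[m].key) m = l;
        if (r < *n && h[r].key < h[m].key) m = r;
        if (m == i) break;
        HeapEntry t = h[i]; h[i] = h[m]; h[m] = t;
        i = m;
    }
    return top;
}
```

[Once a hit closer than the top of the heap is found, the remaining entries
can simply be discarded unsorted.]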

----

I ended up tossing the heap sort version completely, in order to
save memory space.  (Odd, it's been a long time since I've had to
worry about code size.)  I think that I could gain all of their
savings and then some by just postprocessing the tree so that the
left child is closer to the eye than the right child.  Most non-
illumination rays go in the general direction of "away from the eye,"
so that would help them.  I-rays don't need sorting anyway.  Alternatively,
as you suggested, putting the bigger boxes (whatever) on the left would
work, too, maybe.  If I ever have time to futz with it, I'd like to
try some of that.

----

My reply to Jeff:

	Sorting on distance to eye sounds good - in fact, I was going to
try it, but I use the item buffer and so the eye rays are mostly taken care
of.  If anything, sorting with objects farther away might help me:  the
reflection rays, etc. will probably be in a direction away from the eye
rays!  Oh, another good post-process might be to sort each list of sons on
the difficulty of intersection (or did I mention this already?) - try the
sphere before the spline.

-------------------------------------------------------------------------------

Questions and Answers (for want of a better name)

Wood Texture Request Filled:

Jeff Goldsmith's request for wood texture bitmaps was generously filled by
Rod Bogart, who made four bitmaps (wood.img[1-4]) available for ftp at
cs.utah.edu.  These are still there (I just grabbed them), though I don't
know how long they'll remain available.  These are scanned images from an
artist's book of textures.

--------

Efficiency Question

>From Mark VandeWettering:

How can we efficiently manage the intersect lists that get passed
between the various procedures?  Heckbert statically allocates arrays
within the stack frames of various procedures, which seems a little odd,
because you never really know how much space to allocate.  Also, merging
them using Roth's CSG scheme requires a lot of copying: can this be
avoided?

--------

>From Jack Ritter:

A simple method for fast ray tracing has occurred to me,
and I haven't seen it in the literature, particularly
Procedural Elements for Computer Graphics.
It is a way to trivially reject rays that don't
intersect with objects. It works for primary
rays only (from the eye).  It is:

Do once for each object: 
   compute its minimum 3D bounding box.  Project
   the box's 8 corners onto pixel space.  Surround the
   cluster of 8 pixel points with a minimum 2D bounding box
   (a tighter bounding volume could be used).

To test a ray against an object, check if the pixel
through which the ray goes is in the object's 2D box.
If not, reject it.

It sure beats line-sphere minimum distance calculation.

Surely this has been tried, hasn't it?
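[For reference, the test Jack describes might look like the following,
assuming the world-to-pixel projection of the 8 corners has been done
elsewhere; all names here are invented for illustration:]

```c
/* Screen-space trivial reject: the 8 corners of an object's 3D bounding
   box, already projected into pixel space, are bounded once by a 2D
   rectangle; any primary ray whose pixel falls outside it is rejected. */
typedef struct { double x, y; } Pt2;
typedef struct { double xmin, ymin, xmax, ymax; } Box2;

static Box2 bound_corners(const Pt2 p[8])
{
    Box2 b = { p[0].x, p[0].y, p[0].x, p[0].y };
    for (int i = 1; i < 8; i++) {
        if (p[i].x < b.xmin) b.xmin = p[i].x;
        if (p[i].x > b.xmax) b.xmax = p[i].x;
        if (p[i].y < b.ymin) b.ymin = p[i].y;
        if (p[i].y > b.ymax) b.ymax = p[i].y;
    }
    return b;
}

/* Trivial reject for a primary ray through pixel (px, py). */
static int pixel_may_hit(const Box2 *b, double px, double py)
{
    return px >= b->xmin && px <= b->xmax && py >= b->ymin && py <= b->ymax;
}
```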

----

An Answer, by Eric Haines:

It's true, this really hasn't appeared in the literature, per se.  However, it
has been done.

The idea of the item buffer has been presented by Hank Weghorst, Gary Hooper,
and Donald P. Greenberg in "Improved Computational Methods for Ray Tracing",
ACM TOG, Vol. 3, No. 1, January 1984, pages 52-69.  Here they cast polygons
onto a z-buffer, storing the ID of the closest item for each pixel.  During
ray tracing the z-buffer is then sampled for which items are probably hit
by the eye ray.  These are checked, and if one is hit you're done.  If none
are hit then a standard ray trace is performed.  Incidentally, this is the
method Wavefront uses for eye rays when they perform ray tracing.  It's
fairly useful, as Cornell's research found that there are usually more eye
rays than reflection and refraction rays combined.  There's still all those
shadow rays, which was why I created the light buffer (but that's another
story...see IEEE CG&A September 1986 if you're interested).

In the paper the authors do not describe how to insert non-polygonal objects
into the buffer.  In Weghorst's (and I assume Hooper's, too) thesis he
describes the process, which is essentially casting the bounding box onto
the screen and getting its x and y extents, then shooting rays within this
extent at the object as a pre-process.  This is the idea you outlined.
However, theirs avoids all testing of the extents by doing the work as
a per object (instead of per ray) preprocess.  A per object basis means they
don't have to test extents: all they do is loop through the extent itself and
shoot rays at the object for each pixel.

--------

Efficient Polygon Intersection Question, from Mark VandeWettering

Another problem I have been considering arose from a profile of my
raytracer when run on the "gears" database.  A large amount of time
(~40%) was spent in the polygon intersection code, a greater proportion
than in other scenes that used polygons.  The reason: the polygon
intersection routine which you described in the SIGGRAPH course notes is
linear in the number of sides of the polygon.  For the case of the gear,
the number of sides is 144, which is a very large number.

Perhaps a better way of trying to intersect polygons is to decompose the
complex polygons into triangles, and then arrange them in your favorite
hierarchy scheme.  The simplest way would be to subdivide prior to the
raytracing in a preprocessing step.  Several very quick algorithms exist
for intersection with triangles, and I think that this may be a better
way to implement polygon intersection. 

"Back of the envelope" calculations:

Haines' method of intersection:		O(n) to intersect polygon
Triangular decomposition:		O(1) to intersect triangle
					* number of triangles searched
					  inside your hierarchy scheme.

Assuming a good hierarchy, you can expect O(log n) triangles to be
searched.  The problem is finding the constants involved in this.  I do
suspect that this method may in fact be superior, because in the ground
case (intersect a triangle), the two methods are equivalent (actually
since the code may be streamlined for triangles, the second is probably
better), and I expect that as the number of sides grows, the second will
get better relative to the first.
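[The preprocessing step Mark proposes can be sketched as a simple fan
decomposition.  Note the fan is only valid for convex polygons; a concave
outline like a gear's needs a general triangulation first, which is not
shown.  Index triples only, a sketch rather than anyone's actual code:]

```c
/* Fan-decompose a convex n-gon into n-2 triangles as a preprocess, so
   the per-ray test is O(1) per triangle inside whatever hierarchy holds
   them.  Triangles are stored as vertex-index triples. */
typedef struct { int a, b, c; } Tri;

/* Writes the fan (0, i, i+1) into tris[]; returns the triangle count,
   which is always nverts - 2 for a valid polygon. */
static int fan_triangulate(int nverts, Tri *tris)
{
    int n = 0;
    for (int i = 1; i < nverts - 1; i++) {
        tris[n].a = 0; tris[n].b = i; tris[n].c = i + 1;
        n++;
    }
    return n;
}
```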

I am torn between trying to formally analyze the run-time, and just
going ahead and implementing the thing, and gaining performance
information from that.  Perhaps I will have some figures for you about
my experience soon.

I would like to hear from anyone on the RT-News who has
information on ray tracing superquadrics.  I am especially interested in
the numerical methods used to solve intersections, but any information
would be useful.  

[As I recall, Preparata discusses preprocessing polygons into trapezoids
in his book _Computational Geometry_, leading to many fewer edges which
need testing (each trapezoid has but two sides which can intersect, as the
test ray is parallel to the other two edges).  Any other solutions, anyone?
-- EAH]

--------

Bug in Paul Heckbert's Ray Tracer?

>From Mark VandeWettering:

As I might have mentioned before, I modelled my raytracer after the one
described in Heckbert's article "Writing a Ray Tracer".  I have noticed
some ambiguities/anomalies/bugs(?) that might be interesting to examine.

In Heckbert's code, there is some "weirdness" going on in the Shade
procedure.  The part of "Shade" which handles transparency has a comment
like:

/* hit[0].medium and hit[1].medium are entering and exiting media */

The transmission direction is then calculated using the index of
refraction of the two media.

But hit[0].medium should be the medium that the ray originates in, not
the medium of the object actually hit.  Therefore, the indices of
refraction are incorrect, and the transmission direction is also
incorrect.

Perhaps Paul could comment on this.  What seems correct is to keep
hit[0] reserved for the material that the ray originates in, and let
hit[1] be the first hit along the ray.  Was this what was intended?

--------

A Tidbit from USENET

>From: Ali T. Ozer

In article <10207@s.ms.uky.edu> sean@ms.uky.edu (Sean Casey) writes:
>Oh yeah, I hear that some of the commercial Amiga ray tracing software is
>being ported to the Mac II. These products have been around for a while, so
>it's a good chance for Mac users to get their hands on some already-evolved
>ray-tracing software.

For a lot higher price, though... I read that the Mac version of 
Byte by Byte's Sculpt 3D and Animate 3D packages will start from $500.

Ali Ozer, aozer@NeXT.com

-------------------------------------------------------------------------------

More on MTV's Public Domain Ray Tracer (features, bug fixes, etc)

--------

Raytrace to Impress/Postscript Converter, by David Koblas

Enclosed is a shar for converting MRGB pictures to either Impress or
PostScript, depending on your needs (black and white).

{I'm looking for versatec plotter routines, if you have some I'd be interested}

[Ed. note: there is also a patch for this program posted to USENET.]

[as usual, the code is deleted for space.  Check USENET or contact David for
the program. - EAH]

name : David Koblas         place: MIPS Computer Systems                
phone: 408-991-0287         uucp : {ames,decwrl,pyramid,wyse}!mips!koblas 

--------

Raytrace to X Image converter, by Paul Andrews

Here's a somewhat primitive program to display one of Mark's raytraced
pics on an X display.  There's no makefile, but then there's only one
source file.

paul@torch.UUCP (Paul Andrews)

[again, code deleted for space.  Check USENET or write Paul]

--------

Better Shading Model for Raytracer, by David Koblas

A better shading model for the MTV raytracer [I probably should have
posted this a while back, while I was sure it all worked].

The two big changes are a better shading model, including doing
something different with diffuse reflection.  You can specify the color of
a light, and surfaces have ambient and absorbance values [default:
no ambient and no absorption].  The "shine" value is now in the range
0.0 -> 1.0 instead of 0 -> infinity.  On balls I ran a sed script
like this: '/^f/s/ 35 / 0.2 /' and got close to the same results.
Also, all components of a surface can be specified with r,g,b values.

Give it a try, and if you have any bugs/problems/suggestions, let me
know and I'll give them a try/fix.

name : David Koblas         place: MIPS Computer Systems                
phone: 408-991-0287         uucp : {ames,decwrl,pyramid,wyse}!mips!koblas 

[code deleted for space: check USENET or write David for the new model]

--------

>From Irv Moy:

	I have Mark VandeWettering's raytracer running on a Sun 3/260
and Version 2.4 of Eric Haines' SPD (I took the SPD that Mark posted and
applied the patch that Eric posted to get Ver. 2.4).  I display the output
of the raytracer on a Targa 32; I had to add an extra byte in the output
file for the Targa's alpha channel.  The output of 'balls.c' looks great;
I now have my very own "sphereflake"!!!
	I tried 'gears' at a size factor of 4 and the resulting output is
quite dark.  The background is a nice UNC blue but the gear surfaces are
very dark and so is the reflecting polygon underneath the gears.
Has anyone else tried to raytrace 'gears' with Mark's program yet???
Enquiring minds want to know.....(BTW, if you look closely at 'sphereflake',
you can see Elvis (recursively, of course)).

				Irv Moy
				UUCP: ..!chinet!musashi
				Internet: musashi@chinet.uucp

--------

>From Ron Hitchens:

   This may have some bearing on the problem:

vixen% ray -i gears.nff -o gears.pic -t
ray: (9345 prims, 5 lights)
ray: inputfile = "gears.nff"
ray: resolution 512 512
ray: after adding bounding volumes, 10516 prims
				    ^^^^^

   From defs.h:

#define MAXPRIMS        (10000)
			 ^^^^^

   I ran gears.nff last night and got the same results.  I bumped MAXPRIMS to
11000 and ran it again, seemed to work fine.  I only ran a 128x128 version,
the resolution was so low that most of the gears looked like fuzzy blobs, but
it seemed to be properly lighted and plenty colorful.  I have a 512x512 run
going now, should be finished in about 12 hours (I love my Sun 3/60FC, but it
sure would be handy to have a Cray now and then).

> (BTW, if you look closely at 'sphereflake',
> you can see Elvis (recursively, of course)).

   Naw, that's the spirit of Tom Snyder, Elvis is way too busy channelling
through an unemployed truck driver in Muncie, Indiana.

   To Mark VandeWettering: Hey, thanks for the ray tracer.  I don't suppose
you could send me a disk drive to store all these picture files on could you?

Ron Hitchens		ronbo@vixen.uucp	hitchens@cs.utexas.edu

--------

>From: Steve Holzworth

There is a bug in the screen.c routine of Mark's raytracer.
Specifically, everywhere he does a malloc, the code is of the form:

foo = (Pixel *) malloc (xres * sizeof (Pixel)) + 1;

The actual intent is to allocate xres+1 Pixels, thusly:

foo = (Pixel *) malloc ((xres + 1) * sizeof (Pixel));

There are three occurrences of the former in the code; they should be changed
similarly to the latter.  (Note: I never ran into this bug until I tried
to render a 1024x1024 image.  It worked fine on 512x512 or smaller images.)

Other than that, it's a good raytracer.  Congrats, Mark!  I'm working on
a better lighting model and a better camera model.  I'll send them on 
when (if) I finish them.

						Steve Holzworth
						rti!tachyon!sch

--------

Teapot Database for Ray Tracing, by Ron Hitchens

Subject: Ray traced teapot

   Below is a modification of a program that Dean S. Jones posted a few weeks
ago that draws the well known teapot in wire frame using SunCore.  I changed
it so that it would use the same data to produce an NFF file that Mark
VandeWettering's ray tracer can use.  The result looks surprisingly good. 
Using the default step value of 6 is satisfactory, 12 looks very nice.

   I'd like to know what's causing the little specks on the spout and the
handle.  I don't know if it's a problem with how this guy generates the
NFF file, or some glitch in Mark's ray tracer.  I don't have the time to
investigate.

   The original program that Dean posted was Sun-specific, since it used
SunCore.  This one is not: all it does is some computation and spits out
some text data, so it should run most anywhere.  You'll probably
need to remove the -f68881 from the makefile spec if you compile it on a
non-Sun system though.

   Enjoy.

Ron Hitchens		ronbo@vixen.uucp	hitchens@cs.utexas.edu

[code deleted for space.  Check USENET or write Ron Hitchens for the code]

--------

>From Mark VandeWettering (to me):

Your final comments regarding Kay/Kajiya BVs were basically 
in line with the thinking that I have done, and with the current state
of my raytracer.  I now provide cutoffs for shadow testing, and cull 
objects immediately if they are beyond the maximum distance that we need
to look.

This also allows me to implement some of the "shadow caching" and other
optimizations suggested by you in the March 28, 1988 RT-News.   Most of
these were trivial to implement, and will be incorporated in a
better/stronger/faster version of my raytracer.  
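[The shadow-cache idea referred to can be sketched as follows.  The
occlusion test is faked with a flag array standing in for a real
ray/object intersection routine, and all names are invented:]

```c
/* Shadow cache sketch: for each light, remember the object that blocked
   the previous shadow ray and test it first, since adjacent pixels
   usually hit the same blocker.  blocked[] flags which objects occlude
   the current shadow ray. */
#define MAX_LIGHTS 8

static int shadow_cache[MAX_LIGHTS] = { -1, -1, -1, -1, -1, -1, -1, -1 };

/* Returns 1 if some object in objs[0..nobj) blocks the ray to light li. */
static int in_shadow(int li, const int *objs, int nobj, const int *blocked)
{
    int cached = shadow_cache[li];
    if (cached >= 0 && blocked[cached])
        return 1;                          /* cache hit: one test only */
    for (int i = 0; i < nobj; i++)
        if (objs[i] != cached && blocked[objs[i]]) {
            shadow_cache[li] = objs[i];    /* remember for the next ray */
            return 1;
        }
    shadow_cache[li] = -1;                 /* no occluder this time */
    return 0;
}
```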

--

Gosh, I just can't keep quiet can I?  I just wanted you to know that a 
new and improved version of my raytracer is available for anonymous ftp.
It employs some of the stuff regarding Kay/Kajiya bounding volumes, and
shadow caches for an improvement in speed as well (roughly a 30%
improvement).  I can now do the sphereflake in less than 5 hours on a
Sun 3 w/68881 coprocessor.

For the future, I am thinking of CSG, antialiasing, and Goldsmith and
Salmon style hierarchy generation.  Things that have been put off but
that I would like to include are more complex primitives - I just can't
deal with numerical analysis at the moment :-)

Soon it will be back to the world of functional programming and my
thesis, so I'd better get this all done.  *sigh*

--

New Ideas: an ObjectDesc -> NFF compiler

One possible project that I have thought of doing is an Object to NFF
compiler.  The compiler could be a procedural language which could be
used to define hierarchical objects, with facilities for rotation,
translation and scaling.  The output would be an NFF file for the scene.

For instance, we might have primitive object types  CUBE, SPHERE, POLYGON
and CONE.  Each of these might represent the canonical "unit" primitive.
We could then build new objects out of these primitives.

A hypothetical example program to create a checkerboard might be:

#
# checkerboard.obj
# 
define object check {
	polygon (0.0 0.0 0.0)
	        (1.0 0.0 0.0)
	        (1.0 1.0 0.0)
	        (0.0 1.0 0.0) ;
	}
#
# Check4 contains 4 squares...
#
define object check4 {
	check, color white ;
	check, translate(1.0, 0.0, 0.0), color black ;
	check, translate(0.0, 1.0, 0.0), color white ;
	check, translate(1.0, 1.0, 0.0), color black ;
	}
#
# Board 4 is 1/4 of a checkerboard...
#
define object board4 {
	check4 ;
	check4, translate(2.0, 0.0, 0.0) ;
	check4, translate(0.0, 2.0, 0.0) ;
	check4, translate(2.0, 2.0, 0.0) ;
	}

#
# Board is a full sized checkerboard...
#
define object board {
	board4 ;
	board4, translate(4.0, 0.0, 0.0) ;
	board4, translate(0.0, 4.0, 0.0) ;
	board4, translate(4.0, 4.0, 0.0) ;
	}

#
# the scene to be rendered...
#

define scene {
	board ;
	}

--

I would also like it to support CSG, and maybe even procedural features
(looping constructs).  I don't know if I will get up enough steam to
implement this, but it would make scenes easier to specify for the
average user.

Ideally, such a language would be interesting to use for specifying
motion as well, although I have no real ideas about the ideal way to
specify (or implement) this.

-------------------------------------------------------------------------------

Neutral File Format (NFF), by Eric Haines

[This is a description of the format used in the SPD package.  Any comments
on how to expand this format are appreciated.  Some extensions seem obvious
to me (e.g. adding directional lights, circles, and tori), but I want to take
my time, gather opinions, and get it more-or-less right the first time. -EAH]

Draft document #1, 10/3/88

The NFF (Neutral File Format) is designed as a minimal scene description
language.  The language was designed in order to test various rendering
algorithms and efficiency schemes.  It is meant to describe the geometry and
basic surface characteristics of objects, the placement of lights, and the
viewing frustum for the eye.  Some additional information is provided for
esthetic reasons (such as the color of the objects, which is not strictly
necessary for testing rendering algorithms).

Future enhancements include:  circle and torus objects, spline surfaces
with trimming curves, directional lights, characteristics for positional
lights, CSG descriptions, and probably more by the time you read this.
Comments, suggestions, and criticisms are all welcome.

At present the NFF file format is used in conjunction with the SPD (Standard
Procedural Database) software, a package designed to create a variety of
databases for testing rendering schemes.  The SPD package is available
from Netlib and via ftp from drizzle.cs.uoregon.edu.  For more information
about SPD see "A Proposal for Standard Graphics Environments," IEEE Computer
Graphics and Applications, vol. 7, no. 11, November 1987, pp. 3-5.

By providing a minimal interface, NFF is meant to act as a simple format to
allow the programmer to quickly write filters to move from NFF to the
local file format.  Presently the following entities are supported:
     A simple perspective frustum
     A positional (vs. directional) light source description
     A background color description
     A surface properties description
     Polygon, polygonal patch, cylinder/cone, and sphere descriptions

Files are output as lines of text.  For each entity, the first line
defines its type.  The rest of the first line and possibly other lines
contain further information about the entity.  Entities include:

"v"  - viewing vectors and angles
"l"  - positional light location
"b"  - background color
"f"  - object material properties
"c"  - cone or cylinder primitive
"s"  - sphere primitive
"p"  - polygon primitive
"pp" - polygonal patch primitive


These are explained in depth below:

Viewpoint location.  Description:
    "v"
    "from" Fx Fy Fz
    "at" Ax Ay Az
    "up" Ux Uy Uz
    "angle" angle
    "hither" hither
    "resolution" xres yres

Format:

    v
    from %g %g %g
    at %g %g %g
    up %g %g %g
    angle %g
    hither %g
    resolution %d %d

The parameters are:

    From:  the eye location in XYZ.
    At:    a position to be at the center of the image, in XYZ world
	   coordinates.  A.k.a. "lookat".
    Up:    a vector defining which direction is up, as an XYZ vector.
    Angle: in degrees, defined as from the center of top pixel row to
	   bottom pixel row and left column to right column.
    Resolution: in pixels, in x and in y.

  Note that no assumptions are made about normalizing the data (e.g. the
  from-at distance does not have to be 1).  Also, vectors are not
  required to be perpendicular to each other.

  For all databases some viewing parameters are always the same:
    Yon is "at infinity."
    Aspect ratio is 1.0.

  A view entity must be defined before any objects are defined (this
  requirement is so that NFF files can be used by hidden surface machines).

--------

Positional light.  A light is defined by XYZ position.  Description:
    "b" X Y Z

Format:
    l %g %g %g

    All light entities must be defined before any objects are defined (this
    requirement is so that NFF files can be used by hidden surface machines).
    Lights have a non-zero intensity of no particular value [this definition
    may change soon, with the addition of an intensity and/or color].

--------

Background color.  A color is simply RGB with values between 0 and 1:
    "b" R G B

Format:
    b %g %g %g

    If no background color is set, assume RGB = {0,0,0}.

--------

Fill color and shading parameters.  Description:
     "f" red green blue Kd Ks Shine T index_of_refraction

Format:
    f %g %g %g %g %g %g %g %g

    RGB is in terms of 0.0 to 1.0.

    Kd is the diffuse component, Ks the specular, Shine is the Phong cosine
    power for highlights, T is transmittance (fraction of light passed per
    unit).  Usually, 0 <= Kd <= 1 and 0 <= Ks <= 1, though it is not required
    that Kd + Ks == 1.  Note that transmitting objects ( T > 0 ) are considered
    to have two sides for algorithms that need these (normally objects have
    one side).
  
    The fill color is used to color the objects following it until a new color
    is assigned.

--------

Objects:  all objects are considered one-sided, unless the second side is
needed for transmittance calculations (e.g. you cannot throw out the second
intersection of a transparent sphere in ray tracing).

Cylinder or cone.  A cylinder is defined as having a radius and an axis
    defined by two points, which also define the top and bottom edge of the
    cylinder.  A cone is defined similarly, the difference being that the apex
    and base radii are different.  The apex radius is defined as being smaller
    than the base radius.  Note that the surface exists without endcaps.  The
    cone or cylinder description:

    "c"
    base.x base.y base.z base_radius
    apex.x apex.y apex.z apex_radius

Format:
    c
    %g %g %g %g
    %g %g %g %g

    A negative value for both radii means that only the inside of the object is
    visible (objects are normally considered one sided, with the outside
    visible).  Note that the base and apex cannot be coincident for a cylinder
    or cone.

--------

Sphere.  A sphere is defined by a radius and center position:
    "s" center.x center.y center.z radius

Format:
    s %g %g %g %g

    If the radius is negative, then only the sphere's inside is visible
    (objects are normally considered one sided, with the outside visible).

--------

Polygon.  A polygon is defined by a set of vertices.  With these databases,
    a polygon is defined to have all points coplanar.  A polygon has only
    one side, with the order of the vertices being counterclockwise as you
    face the polygon (right-handed coordinate system).  The first two edges
    must form a non-zero convex angle, so that the normal and side visibility
    can be determined.  Description:

    "p" total_vertices
    vert1.x vert1.y vert1.z
    [etc. for total_vertices vertices]

Format:
    p %d
    [ %g %g %g ] <-- for total_vertices vertices

--------

Polygonal patch.  A patch is defined by a set of vertices and their normals.
    With these databases, a patch is defined to have all points coplanar.
    A patch has only one side, with the order of the vertices being
    counterclockwise as you face the patch (right-handed coordinate system).
    The first two edges must form a non-zero convex angle, so that the normal
    and side visibility can be determined.  Description:

    "pp" total_vertices
    vert1.x vert1.y vert1.z norm1.x norm1.y norm1.z
    [etc. for total_vertices vertices]

Format:
    pp %d
    [ %g %g %g %g %g %g ] <-- for total_vertices vertices

--------

Comment.  Description:
    "#" [ string ]

Format:
    # [ string ]

    As soon as a "#" character is detected, the rest of the line is considered
    a comment.
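Taken together, the records above are simple enough to read with very little
code.  A minimal sketch in Python covering just the sphere, polygon, and
comment records (names are mine; this is not a full NFF reader):

```python
def read_nff(lines):
    # Minimal reader for a few NFF record types: "s", "p", and "#".
    objects = []
    it = iter(lines)
    for line in it:
        fields = line.split()
        if not fields or fields[0].startswith("#"):
            continue                      # blank line or comment
        if fields[0] == "s":
            # s center.x center.y center.z radius
            cx, cy, cz, radius = map(float, fields[1:5])
            objects.append(("sphere", (cx, cy, cz), radius))
        elif fields[0] == "p":
            # p total_vertices, then one "x y z" line per vertex
            total = int(fields[1])
            verts = [tuple(map(float, next(it).split())) for _ in range(total)]
            objects.append(("polygon", verts))
    return objects
```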

-------------------------------------------------------------------------------
END OF RTNEWS
 

From root%hpfcla@sde.hp.com Wed Nov  9 12:13:28 1988
Return-Path: <root%hpfcla@sde.hp.com>
Received: from sde.hp.com ([15.255.152.2]) by turing.cs.rpi.edu (4.0/1.2-RPI-CS-Dept)
	id AA26348; Wed, 9 Nov 88 12:11:44 EST
Received: from hpfcla.HP.COM by hp-sde ; Sat, 5 Nov 88 00:18:57 pst
Received: from hpbv.HP.COM by hpfcla.HP.COM; Sat, 5 Nov 88 01:05:40 mst
Received: by hpbv.HP.COM; Sat, 5 Nov 88 01:05:29 mst
Message-Id: <8811050805.AA22894@hpbv.HP.COM>
Received: from eye with uucp; Fri, 4 Nov 88 13:37:02
Received: from spruce (spruce) by eye; Fri, 4 Nov 88 13:37:02 est
Received: by spruce; Fri, 4 Nov 88 13:35:02 est
Date: Fri, 4 Nov 88 13:35:02 est
From: Eric Haines <eye!erich@spruce>
To: erich@spruce, hpbv!hpfcla!FISHER%3D.dec@decwrl.dec.com,
        hpbv!hpfcla!apollo!arvo@eddie.mit.edu,
        hpbv!hpfcla!apollo!johnf@eddie.mit.edu, hpbv!hpfcla!atc@cs.utexas.edu,
        hpbv!hpfcla!barr@csvax.caltech.edu,
        hpbv!hpfcla!barsky@miro.berkeley.edu,
        hpbv!hpfcla!bogart%gr@cs.utah.edu, hpbv!hpfcla!ckchee@dgp.toronto.edu,
        hpbv!hpfcla!dana!mrk@hplabs.hp.com, hpbv!hpfcla!daniel@apollo.com,
        hpbv!hpfcla!dk@csvax.caltech.edu, hpbv!hpfcla!dutio!fwj@mcvax.cwi.nl,
        hpbv!hpfcla!dutrun!wim@mcvax.cwi.nl,
        cornell!hpbv!hpfcla!fornax!sfu-cmpt!chapman,
        hpbv!hpfcla!glassner@xerox.com, hpbv!hpfcla!grant@icdc.llnl.gov,
        hpbv!hpfcla!gray%rhea.CRAY.COM@uc.msc.umn.edu,
        hpbv!hpfcla!hohmeyer@miro.berkeley.edu,
        hpbv!hpfcla!hultquis@prandtl.nas.nasa.gov,
        hpbv!hpfcla!jaf@squid.tn.cornell.edu,
        hpbv!hpfcla!jeff@hamlet.caltech.edu, hpbv!hpfcla!joy@ucdavis.edu,
        hpbv!hpfcla!kyriazis@turing.cs.rpi.edu,
        hpbv!hpfcla!lister@dg-rtp.dg.com, hpbv!hpfcla!litwinow@apple.com,
        hpbv!hpfcla!m-cohen@cs.utah.edu,
        hpbv!hpfcla!megatek!kuchkuda@ucsd.ucsd.edu,
        hpbv!hpfcla!ph@miro.berkeley.edu, hpbv!hpfcla!phil@yy.cicg.rpi.edu,
        hpbv!hpfcla!pixar!pat@ucbvax.berkeley.edu,
        hpbv!hpfcla!roy@wisdom.tn.cornell.edu,
        hpbv!hpfcla!sgi!paul@pyramid.pyramid.com,
        hpbv!hpfcla!tim@csvax.caltech.edu,
        hpbv!hpfcla!wtl@cockle.tn.cornell.edu,
        hpfcrs!hpfcla!m-cohen@cs.utah.edu
Subject: RT News, November 4th 1988
Status: R

 _ __                 ______                         _ __
' )  )                  /                           ' )  )
 /--' __.  __  ,     --/ __  __.  _. o ____  _,      /  / _  , , , _
/  \_(_/|_/ (_/_    (_/ / (_(_/|_(__<_/ / <_(_)_    /  (_</_(_(_/_/_)_
             /                               /|
            '                               |/

			"Light Makes Right"

			 November 4, 1988

Compiled by Eric Haines, 3D/Eye Inc, 410 E. Upland Rd, Ithaca, NY 14850
    607-257-1381, hpfcla!hpfcrs!eye!erich@hplabs.hp.com
All contents are US copyright (c) 1988 by the individual authors

Contents:
    Intro, by Eric Haines
    New People: David Rogers, Kelvin Thompson, A.T. Campbell III, Tim O'Connor
    Ray/Triangle Intersection with Barycentric Coordinates, by Rod Bogart,
	reply by Jeff Arenberg
    Letter and Replies
    Free On-Line Computer Graphics References, (Eugene Miya for) Baldev Singh
    Latest Mailing List, Short Form, by Eric Haines

-----------------------------------------------------------------------------

Intro
-----

    For a switch, there are no articles on MTV's ray tracer!  The major stuff
    this time is Rod Bogart's triangle intersector, and the announcement of
    Baldev Singh's computer graphics reference resource.  There are also many
    letters and short articles, along with the usual cullings of USENET.
    Enjoy.

-----------------------------------------------------------------------------

New People
----------

# Professor David F. Rogers
# Aerospace Engineering Department
# U.S. Naval Academy
# Annapolis, MD 21402
# USA
# Tel: 301-267-3283/4/5
# ARPANET: dfr@usna.mil
# UUCP:    ~uunet!usna!dfr
alias	david_rogers	dfr@cad.usna.mil



# Kelvin Thompson - hierarchy schemes, procedural objects, animation
# The University of Texas at Austin
# 4412 Ave A. #208
# Austin, TX  78751-3622
alias	kelvin_thompson	kelvin@cs.utexas.edu

I'm a PhD student in graphics at the University of Texas.  I received a BSEE
from Rice University in 1983, and a Master's in EE from UT in 1984.  My
doctoral project is on hierarchical, multi-scale databases for computer
graphics, and I'm building a ray-tracer as part of my work on that project.
I'm also interested in motion and animation.  I never plan on becoming
President of the United States of America.

-- Kelvin Thompson, Lone Rider of the Apocalypse
   kelvin@cs.utexas.edu  {...,uunet}!cs.utexas.edu!kelvin



# A. T. Campbell, III - shading models, animation
# Department of Computer Sciences
# University of Texas
# Austin, Texas 78712
# (512) 471-9708
alias	at_campbell 	atc@cs.utexas.EDU

I am in the PhD program in Computer Sciences at the University of Texas.  My
research area is developing a more sophisticated illumination model than those
currently in widespread use.  A modified form of distributed ray tracing is one
of the methods I am considering to evaluate my model.

Animation is another of my interests.  I am putting myself through school by
producing computer graphics animations for a small engineering company.
Sometimes I am called upon to create special effects such as motion blur and
atmospheric effects.  Based on what I heard at this year's ray tracing round
table at SIGGRAPH, it looks as if ray tracing can solve most of my problems.



# Tim O'Connor
# Staff, Cornell Program of Computer Graphics
# 120 Rand Hall
# Ithaca, NY 14853
alias	tim_oconnor	toc@wisdom.tn.cornell.edu

-------------------------------------------------------------------------------

Ray/Triangle Intersection with Barycentric Coordinates
------------------------------------------------------
[sent to RT News and USENET]

    articles by Rod Bogart, Jeff Arenberg


From: hpfcla!bogart%gr@cs.utah.edu (Rod G. Bogart)

A while back, there was a posting concerning ray/triangle intersection.  The
goal was to determine if a ray intersects a triangle, and if so, what are the
barycentric coordinates.  For the uninitiated, barycentric coordinates are
three values (r,s,t) all in the range zero to one.  Also, the sum of the values
is one.  These values can be used as interpolation parameters for data which is
known at the triangle vertices (i.e. normals, colors, uv).

The algorithm presented previously involved a matrix inversion.  The math went
something like this: Since (r,s,t) are interpolation values, then the
intersection point (P) must be a combination of the triangle vertices scaled by
(r,s,t).

                [ x1 y1 z1 ]
    [ r s t ] * [ x2 y2 z2 ] = [ Px Py Pz ]   so   [ r s t ] = [ Px Py Pz ] ~V
                [ x3 y3 z3 ]

So, by inverting the vertex matrix (V -> ~V), and given any point in the plane
of the triangle, we can determine (r,s,t).  If they are in the range zero to
one, the point is in the triangle.

The only problem with this method is numerical instability.  If one vertex is
the origin, the matrix won't invert.  If the triangle lies in a coordinate
plane, the matrix won't invert.  In fact, for any triangle which lies in a
plane through the origin, the matrix won't invert.  (The vertex vectors don't
span R3.)  The reason this method is so unstable is because it tries to solve a
2D problem in 3D.  Once the ray/plane intersection point is known, the
barycentric coordinates solution is a 2D issue.
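The instability is easy to demonstrate.  A quick sketch (plain Python, mine,
not code from the original posting): a perfectly ordinary triangle lying in
the z = 0 plane -- a plane through the origin -- has a singular vertex matrix,
so ~V does not exist.

```python
def det3(m):
    # determinant of a 3x3 matrix given as three row tuples
    (a, b, c), (d, e, f), (g, h, i) = m
    return a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)

# An ordinary triangle lying in the z = 0 plane, which passes through
# the origin.  The vertex matrix is singular, so the (r,s,t) solve fails.
tri = ((1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (-1.0, 0.0, 0.0))
print(det3(tri))   # 0.0: the matrix won't invert
```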

Another way to think of barycentric coordinates is by the relative areas of the
subtriangles defined by the intersection point and the triangle vertices.

        1         If the area of triangle 123 is A, then the area of 
       /|\        P23 is rA.  Area 12P is sA and area 1P3 is tA.
      / | \       With this image, it is obvious that r+s+t must equal
     /  |  \      one.  If r, s, or t go outside the range zero to one,
    / t | s \     P will be outside the triangle.
   /  _-P-_  \    
  / _-     -_ \   
 /_-    r    -_\  
3---------------2 

By using the above area relationships, the following equations define r, s,
and t.

	N = triangle normal = (vec(1 2) cross vec(1 3))
	    (vec(1 P) cross vec(1 3)) dot N
	s = -------------------------------
		   (length N)^2
	    (vec(1 2) cross vec(1 P)) dot N
	t = -------------------------------
		   (length N)^2
	r = 1 - (s + t)

In actual code, it is better to avoid the divide and the square root.  So, you
can set s equal to the numerator, and then test if s is less than zero or
greater than sqr(length N).  For added efficiency, preprocess the data and
store sqr( length N) in the triangle data structure.  Even for extremely long
thin triangles, this method is accurate and numerically stable.
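For concreteness, here is the area-based method in runnable form (a sketch;
the vector helpers and function name are mine, not Rod's code).  It performs
the divide by sqr(length N) directly, but the divide can be deferred exactly
as described in the paragraph above.

```python
def barycentric(p1, p2, p3, p):
    # Subtriangle-area method: returns (r, s, t) for point p in the
    # plane of triangle p1,p2,p3.  Sketch; helper names are mine.
    def sub(a, b):   return (a[0]-b[0], a[1]-b[1], a[2]-b[2])
    def dot(a, b):   return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]
    def cross(a, b): return (a[1]*b[2] - a[2]*b[1],
                             a[2]*b[0] - a[0]*b[2],
                             a[0]*b[1] - a[1]*b[0])
    e12, e13, e1p = sub(p2, p1), sub(p3, p1), sub(p, p1)
    n = cross(e12, e13)        # triangle normal; length N = 2 * area(123)
    n2 = dot(n, n)             # sqr(length N): precompute per triangle
    s = dot(cross(e1p, e13), n) / n2
    t = dot(cross(e12, e1p), n) / n2
    return (1.0 - s - t, s, t)
```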

RGB                         Living life in the fast lane, eight items or less.
({ihnp4,decvax}!utah-cs!bogart, bogart@cs.utah.edu)


--------

From: arenberg@trwrb.UUCP (Jeff Arenberg)
Subject: Re: Ray/Triangle Intersection with Barycentric Coordinates
[from USENET]

Ok, here is how I handle this calculation in my ray tracing program.  I think
it is quite efficient.

Let a triangle be represented in the following manner :

		   |\
		   |  \
		p1 |    \
		   |      \
  O ------------>  |________\
       p0              p2

where p0 is the vector from the origin to one vertex and p1, p2 are the vectors
from the first vertex to the other two vertices.

Let N =   p1 X p2  be the normal to the triangle. 
          -------
	| p1 X p2 |

Construct the matrices

    b =  |  p1  | ,  bb = inv(b) = | bb[0] |
	 |  p2  |                  | bb[1] |
	 |  N   |                  | bb[2] |

and store away bb.

Let the intersecting ray be parameterized as

    r = t * D + P

Now you can quickly intersect the ray with the triangle using the following
pseudo code. ( . means vector dot product)

    Den = D . bb[2]
    if (Den == 0) then ray parallel to triangle plane, so return
    
    Num = (p0 - P) . bb[2]

    t = Num / Den
    if (t <= 0) then on or behind triangle, so return
    
    p = t * D + P - p0

    a = p . bb[0]
    b = p . bb[1]
    
    if (a < 0.0 || b < 0.0 || a + b > 1.0) then not in triangle and return

    b1 = 1 - a - b     /* barycentric coordinates */
    b2 = a
    b3 = b


The idea here is that the matrix bb transforms to a coordinate frame where the
sides of the triangle form the X,Y axes and the normal the Z axis of the frame
and the sides have been scaled to unit length.  The variable Den represents the
dZ component of the ray in this frame.  If dZ is zero, then the ray must be
parallel to the X,Y plane.  Num is the Z location of the ray origin in the new
frame and t is simply the parameter in both frames required to intersect the
ray with the triangle's plane.  Once t is known, the intersection point is
found in the original frame, saved for later use, and the X,Y coordinates of
this point are found in the triangle's frame.  A simple comparison is then made
to determine if the point is inside the triangle.  The barycentric coordinates
are also easily found.
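For reference, a direct transcription of the pseudocode into runnable Python
(a sketch; the vector helpers, function names, and the explicit inverse via
cross products are my own, assuming the row convention b = [ p1 / p2 / N ]
with N unit length):

```python
import math

def sub(a, b):   return (a[0]-b[0], a[1]-b[1], a[2]-b[2])
def add(a, b):   return (a[0]+b[0], a[1]+b[1], a[2]+b[2])
def dot(a, b):   return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]
def scale(a, k): return (a[0]*k, a[1]*k, a[2]*k)
def cross(a, b): return (a[1]*b[2] - a[2]*b[1],
                         a[2]*b[0] - a[0]*b[2],
                         a[0]*b[1] - a[1]*b[0])

def triangle_setup(v0, v1, v2):
    # Precompute p0 and bb, where bb[k] is column k of inv(b) and b has
    # rows p1, p2, N.  Columns of the inverse are cross products of the
    # rows over the determinant, which here is just |p1 x p2|.
    p1, p2 = sub(v1, v0), sub(v2, v0)
    raw = cross(p1, p2)
    det = math.sqrt(dot(raw, raw))
    n = scale(raw, 1.0 / det)         # unit normal
    bb = (scale(cross(p2, n), 1.0 / det),
          scale(cross(n, p1), 1.0 / det),
          n)                          # (p1 x p2)/det is exactly N
    return v0, bb

def intersect(tri, P, D):
    # Arenberg's pseudocode; returns (t, b1, b2, b3) or None on a miss.
    p0, bb = tri
    den = dot(D, bb[2])
    if den == 0.0:
        return None                   # ray parallel to the triangle plane
    t = dot(sub(p0, P), bb[2]) / den
    if t <= 0.0:
        return None                   # plane is on or behind the ray origin
    p = sub(add(P, scale(D, t)), p0)
    a = dot(p, bb[0])
    b = dot(p, bb[1])
    if a < 0.0 or b < 0.0 or a + b > 1.0:
        return None                   # outside the triangle
    return (t, 1.0 - a - b, a, b)     # hit distance, barycentrics
```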

I haven't seen this algorithm in any of the literature, but then I haven't
really looked either.  If anyone knows if this approach has been published
before, I'd really like to know about it.

Jeff Arenberg
-------------------------------------------------------------
UUCP : ( ucbvax, ihnp4, uscvax ) !trwrb!csed-pyramid!arenberg
GEnie: shifty
-------------------------------------------------------------

-------------------------------------------------------------------------------

Letters and Replies
-------------------

From: David F. Rogers <hpfcla!dfr@USNA.MIL>
Subject:  Transforming normals

G'day Eric,

Was skimming the back issues of the RT News and your memo on transforming
normals caught my eye. Another way of looking at the problem is to recall
that a polygonal volume is made up of planes that divide space into two
parts. The columns of the volume matrix are formed from the coefficients of
the plane equations. Transforming a volume matrix requires that you
premultiply by the inverse of the manipulation matrix. The components of a
normal are the first three coefficients of the plane equation. Hence the
same idea should apply. (see PECG Sec. 4-3 on Robert's algorithm pp 211-213).
Surprising what you can learn from Roberts' algorithm yet most people
discount it.

Dave Rogers

-------------------------------------------------------------------------------

From: mcvax!ecn-nlerf.com!jack@uunet.UU.NET (Jack van Wijk)
Subject: 2D box-test

An answer to the question of Jack Ritter, RT-News October 3, 1988, 
    by Jack van Wijk

Jack Ritter proposes a method to improve the efficiency by testing the
ray-point against a 2-D box. This method has been published before:

Bronsvoort, W.F., J.J. van Wijk, and F.W. Jansen, "Two methods for Improving
the Efficiency of Ray Casting in Solid Modelling", Computer-Aided Design,
16(1), January 1984, pp. 51-55.

The method is used hierarchically here for CSG-defined models, in the spirit of
Roth.  The gain from the method is significant, but not dramatic.  Probably in
our system the cost of the floating point intersection calculations was much
greater than that of the box-test.

-------------------------------------------------------------------------------

From: Jeff Goldsmith
Subject: Neutral File Format

   Yuk.  I don't think that the world needs another ugly scene description
language unless it does something special.  I haven't seen renderman, but other
people seem to like it, so maybe that'll be better.  Yours looks a lot like
Wavefront's, with the disadvantage that it doesn't support a binary
representation.

    I hate to say it, but I use my own (less, I feel, but still ugly) text
format that does have a binary format as well as an ascii numerical format.
You are welcome to it if you want, but I would doubt it.  It's different in
that algebraic expressions are possible in place of any constant, plus it
includes flow control, tests, some computer algebra-type primitives and macros.
Plus, a history mechanism, command line editing, etc.  It looks a lot like an
interactive F77 interpreter with massive numbers of bizarre graphics commands.

    Perhaps you can instigate an effort to create a sensible object description
language and (maybe) supply an interpreter and some compiled formats.  It would
be worthwhile.  Perhaps just setting up an effort to spec one out would be good
enough.  Whatever.

--------

Reply From: Eric Haines

	I guess I didn't make it clear - NFF has been in use about a year now.
It's the format that the SPD benchmarking package uses.  I should have written
a better preface, obviously: I wanted to get the point across that this is
supposed to be absolutely minimal, and that no one should be using it for
modeling, but only for transferring the final database to a renderer.  There
could indeed be an NFF++ language which would not be user hostile, like NFF is.
Essentially, I see NFF as incredibly stupid and brain damaged.  This makes it
accessible to almost anyone who simply wants to read in a scene database
without too much hassle (even now, though, I'm getting questions like "what's
hither?" from people on USENET).

	Anyway, I like your ideas for algebraic expressions - I could use it
right now in my other language, which is a tad more user friendly and is what I
use when I want to munge around by hand.

--------

Reply From: Jeff Goldsmith

    Hmmm.  If you are trying to find an interface that can be used by
professionals, then it is probably not the same interface that might be used by
USENET-types.  Both problems might be worth addressing, but I'd say (from gross
personal bias) that the high-end problem is worth doing more.  Simply so that I
can trade databases more easily.  Simply so that code can be shared more
easily.  I'm really not all that concerned about getting computer graphics
capabilities out to high schoolers and other randoms quite yet.  In fact, I
doubt that graphics will have that sort of distribution in its current
"modeler-renderer" form.  I suspect that Mac interface and high-quality
user-interfaces will be the medium for that type of technology dissemination.
Eventually, we'll have programs that are called "Graphics Processors" or some
other nonsense and will be transmitting reasonably complex graphics
capabilities to anyone who wants to do it.  Artists will be the primary users,
though managers and engineers will use them in both technical and non-
technical efforts.  Joe six-pack just doesn't have that much
use/interest/capacity for generating pictures out of thin air.

    It would be really nice if there were a standardish graphics language
kernel.  Since just about everybody has their own interpreter that does just
about the same set of very basic things, plus, of course, their set of
enhancements, why not create a spec that would still allow all the
enhancements, but cover the basics thoroughly.  It might stifle creativity a
bit, but I doubt it.

    For transmission between modelers and renderers, why not use the same
language as input to modelers?  Remove some options (or don't) and keep the
files the same.  If you are worried about speed, then a binary compiled version
is necessary in any event.  (Case in point: my current project is Hubble Space
Telescope.  The uncompiled model takes 17! minutes to read in.  The compiled
one takes about 35 seconds.)  It might also be worth considering that some
people out there do use Fortran and that some things are hard to parse (NFF,
for example) in Fortran.  In fact, it's hard to parse anything that isn't fixed
field formatted in Fortran.  (I've got an ugly version like that, too.  Really
ugly.  .7 Fixmans maybe even.)

------------------------------------------------------------------------------

From: Cary Scofield
Subject: RT and applications

K.R.Subramanian (UTexas at Austin) asks:

> On the RT news: I would like to see practical applications of ray tracing
> described here. What applications really require mirror reflections,
> refraction etc. Haven't seen applications where ray tracing was the way
> to go.

Applications for ray tracing (besides "realistic" image synthesis):

    MCAD (3D solids modeling)

    Material property calculations (mass, center of gravity,
        moments of inertia, etc.)

    Lens design (geometric optics)

    Toolpath planning for numerical-controlled milling

    Weapons research (ballistics analysis)

    Vulnerability assessments (collision detection between a
        projectile and an object)

    Nuclear reactors (determination of neutron distributions
        in reactor cores)

    Astrophysics (e.g., diffusion of light through stellar
        atmospheres; penetration of light through planetary
        atmospheres)

IN SUMMARY: Just about anything that requires solving a linear (and non-linear
w/restrictions) particle transport problem is a candidate application for
ray-tracing/ray-casting algorithms.


Cary Scofield - Apollo Computer Inc. - Graphics Software R&D
UUCP: [decwrl!decvax,mit-eddie,attunix]!apollo!scofield
ARPA: scofield@apollo.com
USMAIL: 270 Billerica Rd., Chelmsford, MA 01824  PHONE: (508)256-6600 x7744

------------------------------------------------------------------------------

From: subramn@cs.utexas.edu (K.R.Subramanian.)
Subject: Re:  Goldsmith and eyes

on the automatic hierarchy scheme of Goldsmith and Salmon:

     Somewhere in the RT news you mentioned that the hierarchy is optimized
only for primary rays from the eye?  

	In their paper, they mention that the probability of hitting a bounding
volume is proportional to the solid angle of the bounding volume presented at
the eye and if the eye is sufficiently far away, then this can be approximated
by the surface area of the bounding volume of the object(s).

	Is this the reason that the hierarchy is not the best for secondary
rays?  If that is so, what if the eye is somewhere within the scene?  In this
case, the assumption is again violated.


K.R.Subramanian
Department of Computer Sciences
The University of Texas at Austin
Austin, Tx-78712.
subramn@cs.utexas.edu
{uunet}!cs.utexas.edu!subramn

--------

Reply From: Eric Haines

	Jeff Goldsmith and I were discussing in the latest RT News whether the
eye location might be used to help out the hierarchy made by the Goldsmith/
Salmon algorithm.  Essentially, Jeff finds that since so many of his rays are
eye rays, he might want to try to test intersection of the objects closer to
the eye first.  In other words, after the G-S hierarchy is created, go through
and sort the sons of each bounding volume by the additional criterion of
distance to the eye.  This is an added fillip to the G-S algorithm: normally
(i.e. in the original article) they do not pay attention to the order of the
sons of a bounding volume.  The idea is that if you test the closer object
first and hit it, you can often quickly reject the further object when it is
tested (since you now have a maximum bound on the distance the ray is shot).
For example, say you have a list: polygon, sphere.  The closest approach (or
the center, or whatever criterion you decide to use) of the sphere is closer
than that of the polygon, so you reorder the son list: sphere, polygon.  If you
now test a ray against this list you get four possibilities:

	1) Sphere missed, polygon missed - no savings is accrued by sorting.
	2) Sphere missed, polygon hit - no savings is accrued by sorting.
	3) Sphere hit, polygon missed - by hitting the sphere, we now have
	   a maximum bound on the ray's (really the line segment's) length.
	   Now when the polygon is tested it might be quickly rejected. Say
	   we hit the polygon plane beyond the maximum distance.  In this
	   case, we can stop testing the polygon without doing the inside-
	   outside testing.  If we had intersected in the order "polygon,
	   sphere", we would have had to do this inside-outside test, then
	   gone on to test the sphere - extra work we could have avoided.
	4) Sphere hit, polygon hit - Pretty much the same as case (3), except
	   even more so:  in this case time is saved by (a) not having to
	   do the inside-outside test, (b) not having to store information
	   about the intersected polygon, and (c) it is all the more likely
	   that a polygon beyond the sphere which is actually hit has the
	   intersection distance beyond the sphere's intersection distance
	   (vs. a missed polygon, where the intersection distance is somewhere
	   on an infinite plane which could easily be in front of the sphere).

	My idea for ordering the son lists was simply object size: within a son
list, sort from largest to smallest area, on the theory that larger objects
will tend to get hit more often and so get you an intersection point quickly.
The savings are based more on probability of hits, but the idea makes for G-S
hierarchy trees that are not eye-dependent (I use item buffering, so eye rays
are minimized).  Another idea is to order the lists by difficulty of
calculation: test spheres before splines, test triangles before 100-sided
polygons, etc.
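Either heuristic amounts to a one-time sort over each bounding volume's son
list.  A sketch combining the two criteria (the area and cost callbacks are
hypothetical stand-ins for whatever data the ray tracer keeps per object):

```python
def order_sons(sons, area, cost):
    # Sort a bounding volume's son list so that cheap intersection tests
    # and large (more often hit) objects come first.  "area" and "cost"
    # are hypothetical callbacks: surface area and intersection cost.
    return sorted(sons, key=lambda obj: (cost(obj), -area(obj)))
```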

	The idea of ordering lists by either size or difficulty is valid for
other efficiency schemes, too.  Octree lists and SEADS might benefit from
ordering the lists in a sensible fashion.  Has anyone else out there tried such
schemes?

--------

Reply From: subramn@cs.utexas.edu (K.R.Subramanian.)

Yes, I understood these discussions and they are all valid.  Somehow just
trying to optimize the eye rays doesn't impress me very much because you
yourself have mentioned the item buffer for eye rays and the light buffer for
doing shadow rays from the first level intersections.  It is not very clear to
me if the above schemes you mention will bear a great improvement.  Anyhow, I
am really interested in secondary rays since that's what ray tracing is all
about. In very complex scenes like the Cornell rings or Cornell mountain
databases (SPD data bases) it's the secondary rays that are dominant.

My real question was trying to figure out if Jeff's approximation in using the
surface area of the bounding volumes to figure out the conditional
probabilities was valid for all rays, primary and secondary.  There he said
something like 'if you are far away, you can approximate ........'. Does this
refer to the ray length ?


K.R.Subramanian
Department of Computer Sciences
The University of Texas at Austin
Austin, Tx-78712.
subramn@cs.utexas.edu
{uunet}!cs.utexas.edu!subramn

--------

Reply From: Eric Haines

	Indeed, Jeff's optimization for eye rays doesn't thrill me.  But how do
you feel about optimizing on size or on intersection complexity (or both)?
Seems like this has a good chance of validity for secondary rays, too.

	I will pass on your comments to Jeff and see how he responds.  You
might just want to write him directly at:

alias	jeff_goldsmith	jeff@hamlet.caltech.edu


	I should clear up an important point: the SPD databases are in no way
connected with Cornell.  I designed them in August 1987, more than a year and a
half after leaving Cornell.  I hope that nowhere in the document do I imply that
Cornell is associated with these.  Why the fuss?  Partly because Don Greenberg,
my president (at 3D/Eye Inc) is very firm about separating work done at 3D/Eye
and work done at the Cornell graphics lab (which he also runs).  Another reason
is that Cornell doesn't "endorse" these databases - Don would be pretty bugged
at me if it was said that they did.  So, please just refer to the SPD
databases, or the 3D/Eye SPD databases.  'Nuff said, and thanks.

--------

Reply From: KR Subramanian

>	      Indeed, Jeff's optimization for eye rays doesn't thrill me.  But 
>	how do you feel about optimizing on size or on intersection complexity 
>	(or both)? Seems like this has a good chance of validity for 
>	secondary rays, too.
	
	You are right. Using size or intersection cost in ordering your
intersections will do good, especially in shadow ray computation. As far as the
pixel or reflection rays are concerned, this depends on the method used. I have
a modified version of the BSP tree where the search goes very close to the path
of the ray, and only on collections of unordered objects can we take advantage of
the above 2 facts.

	Also, this is basically a hack (well, I wouldn't go quite that far).
But size as presented to a ray depends on the direction of the ray, since
projected area on to a ray varies.  A polygon could present its entire area to
a ray orthogonal to it or almost nothing if it's parallel to it.

        For shadow rays, if you have a mix of complex objects (patches, splines
etc.) and simple objects like polygons and spheres, you better do this in the order
of their complexity. That will definitely save a lot of work especially when
there are multiple light sources and lots of spawned rays.
	
--------

Reply From: Jeff Goldsmith

Ok.  Optimizations.

    1) The only reason that I suggest ordering from the eye is that there are
eye rays in all scenes.  Not true for secondary.  Besides, most secondary rays
get other kludges.  More importantly, they are somewhat random, so it's tough
to optimize for them.

    2) What he is confusing with the above is the heuristic for "probability"
determination.  That is not based on eye rays, but assumes a uniform
distribution of ray directions throughout the scene.  This is not the case, but
we haven't dealt with more complicated heuristics other than to decide that
they are a bit more tricky than they might seem.

    3) There is a factor in the tree combination heuristic (the one that adds
up the node costs into a tree cost) that is biased for primary rays.  I call
the tree cost the sum of the node costs.  This isn't strictly true for
secondary rays, because they emanate from a leaf node, thereby adding some
additional cost to the big nodes.  We tried accounting for this by using a
formulation that takes internal emanation costs into account.  Yes, it was more
accurate.  Not by enough to bother with.  I think the difference was on the
order of a few percent.  It was definitely well under the noise level.  We
don't use it anymore for no particular reason.  Don't bother to code it, except
as an intellectual exercise.  (Not a bad one at that.)

-------------------------------------------------------------------------------

From: hpfcla!bogart%gr@cs.utah.edu (Rod G. Bogart)
Subject: Wood Textures

As for the wood textures, there really isn't a lot to say.  They were scanned
with a Vicom frame grabber.  The data is 512x512 bytes.  The book they were
taken from is Textures by Brodatz.  We do not have permission from the author
or the publisher, so that's why we haven't made the whole set available.  Yes,
we did scan the whole book (over 100 images) but without permission, I dare not
let out more than a handful.  So, the images are on cs.utah.edu, and they are
wood[1234].img.  As for mailing them (UUCP), I'd rather not.  A quarter meg
uuencoded is a long mail message.  If you really really can't get them from a
friend with ftp access, then ask nice.

RGB

-------------------------------------------------------------------------------

From: stadnism@sun.soe.UUCP ( Steven Stadnicki,212 Reynolds,2684079,5186432664)
Subject: Shadows, mirrors, and "virtual lighting"
[from USENET]

     I am currently working on a simple raytracer (VERY simple--so far it only
models triangles) and have a major problem.  For shadow calculations I need to
know if there are any light sources which could shine on a point.  The problem?
With mirrors in the scene, it's possible to have reflected light illuminate
some section that would normally be in shadow, e. g.:

Light-> O                               |                               O'
         \------                        |                        ------/
                \------                 |                 ------/
                       \------          |          ------/
                              \------   | M ------/
             +-----+                 \-\| i
             |     |                 /-/| r
             |object          /------   | r
             |     |   /------          | o
             |     | SSSS               | r
----------------------------------------|

Then the area covered by the S's would not be in shadow, even though it isn't
directly illuminated by the light O.  I know how to solve the problem using
"virtual" lights; that is, a light that you would obtain if you reflected the
Light at O in the mirror above; it would appear at O'.  Multiple reflections
can be handled by re-reflecting virtual lights, etc.  So what's my problem?
Simple: if you have M mirrors and reflections can go up to depth K, you need
O(M^K) virtual lights for each "real" light.  Is there any way I might be able
to eliminate, for a given point, some combinations of reflections without
having to do much testing?

                                            Steven Stadnicki
                                            stadnism@clutx.clarkson.edu

P.S.  The "virtual lights" idea came from a wonderful book: "A Companion
to Concrete Analysis", by Melzak.
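The reflection step Steven describes can be sketched in a few lines.  Here is
an illustrative Python fragment (the function name and interface are mine, not
from any raytracer mentioned here): to build a virtual light, reflect the real
light's position across the mirror's plane.

```python
def reflect_point(light, plane_point, plane_normal):
    r"""Reflect a 3D point across a plane to get a 'virtual' light position.

    plane_normal is assumed to be unit length."""
    # Signed distance from the light to the mirror plane.
    d = sum((l - p) * n for l, p, n in zip(light, plane_point, plane_normal))
    # Move the light twice that distance along the normal, to the other side.
    return tuple(l - 2.0 * d * n for l, n in zip(light, plane_normal))

# A light at (0, 4, 0) reflected in the mirror plane x = 5:
virtual = reflect_point((0.0, 4.0, 0.0), (5.0, 0.0, 0.0), (1.0, 0.0, 0.0))
# virtual == (10.0, 4.0, 0.0), i.e. the position O' in the diagram
```

Re-reflecting virtual lights in further mirrors gives the depth-K combinations
the question worries about, which is where the O(M^K) growth comes from.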

-------------------------------------------------------------------------------

From: jevans@cpsc.ucalgary.ca (David Jevans)
Subject: Re: Basics of raytracing
[from USENET]

If anyone is looking for the analysis of a regular subdivision
ray tracing method, see:

 journal:  Visual Computer July 1988
 title:    Analysis of an Algorithm for Fast Ray Tracing Using
           Uniform Space Subdivision
 authors:  Cleary, J and Wyvill, G

The paper describes the voxel traversal algorithm that Cleary developed (and
that I use in my ray tracer), then presents a theoretical analysis.  It is a
convincing argument for using regular voxel subdivision (although my method -
submitted to CGI '89 in the UK - works better for scenes where polygons are
not evenly distributed).
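The incremental voxel-walking idea behind such traversals can be sketched
roughly as follows.  This is an illustrative Python sketch of a uniform-grid
3D-DDA in the spirit of that family of algorithms, not the code from the
paper; all names are mine.

```python
def grid_voxels(origin, direction, grid_min, voxel_size, grid_res):
    """Enumerate the voxels a ray passes through in a uniform grid,
    incrementally (3D-DDA).  Assumes the ray origin lies inside the grid."""
    # Indices of the voxel containing the ray origin.
    idx = [int((origin[a] - grid_min[a]) / voxel_size[a]) for a in range(3)]
    step = [0] * 3                       # +1 or -1 per axis
    t_max = [float('inf')] * 3           # t of next boundary crossing per axis
    t_delta = [float('inf')] * 3         # t between successive crossings
    for a in range(3):
        if direction[a] > 0:
            step[a] = 1
            nxt = grid_min[a] + (idx[a] + 1) * voxel_size[a]
        elif direction[a] < 0:
            step[a] = -1
            nxt = grid_min[a] + idx[a] * voxel_size[a]
        else:
            continue                     # ray parallel to this axis
        t_max[a] = (nxt - origin[a]) / direction[a]
        t_delta[a] = voxel_size[a] / abs(direction[a])
    while all(0 <= idx[a] < grid_res[a] for a in range(3)):
        yield tuple(idx)
        a = min(range(3), key=lambda k: t_max[k])   # nearest boundary
        idx[a] += step[a]
        t_max[a] += t_delta[a]

# A ray marching along +x through a 4x4x4 unit grid visits four voxels:
path = list(grid_voxels((0.5, 0.5, 0.5), (1.0, 0.0, 0.0),
                        (0.0, 0.0, 0.0), (1.0, 1.0, 1.0), (4, 4, 4)))
```

The appeal of the method is that each step is a comparison and two additions,
with no multiplications in the inner loop.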

Visual Computer is published by Springer Verlag.  Unfortunately it doesn't
enjoy the circulation of CG&A or TOG so it is pretty (outrageously?) expensive
(like $160 US for 6 issues!).

The design of our Mesh Machine is in:
  journal:  Proc CIPS Graphics Interface '83, 33-34, Edmonton, Alberta, May
  title:    Design and Analysis of a Parallel Ray Tracing Computer
  authors:  John Cleary and Brian Wyvill and Graham Birtwistle and Reddy Vatti

--------(second article appended)

In article <4589@polyslo.CalPoly.EDU>, sjankows@polyslo.CalPoly.EDU (Mr Booga (detonate)) writes:
>   I have a request similar to Randy Ray's (Raytracing introduction).  I am
> starting a project in parallel raytracing using a Sequent Balance 8000 and
> a couple of color Sun 3's running X.  I have virtually exhausted the 
> local resources on raytracing and am in need of basic ray tracing algorithms
> and simple optimization algorithms.

Our university (the U. of Calgary) has significant experience in parallel ray
tracing.  Professors Cleary and Wyvill developed a mesh machine for raytracing
several years ago.  Graduate student (now working at Alias) Andrew Pearce
implemented a parallel raytracer for the mesh machine that also ran on a
network of Corvus'.

I implemented a parallel ray tracing algorithm for polygons and implicit
surfaces earlier in the year on a BBN Butterfly.  I used regular spatial
subdivision combined with adaptive (octree) subdivision to converge on the
surfaces.  Up to 10 nodes I got almost linear speedup, and on a 70 node system
I was still getting 50% from each new node added.
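Figures like these are roughly what Amdahl's law predicts when the serial
fraction of the work is small.  A back-of-the-envelope sketch (the serial
fraction below is purely illustrative, not David's measurement):

```python
def amdahl_speedup(serial_fraction, nodes):
    """Amdahl's law: speedup when a fixed fraction of the work is serial
    and the rest parallelizes perfectly across the given number of nodes."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / nodes)

# With, say, a 1% serial fraction (illustrative only):
s10 = amdahl_speedup(0.01, 10)   # nearly linear at 10 nodes
s70 = amdahl_speedup(0.01, 70)   # each extra node still helps, but less
```

Even a tiny serial fraction bends the curve noticeably by 70 nodes, which is
consistent with the diminishing (but still positive) returns reported above.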

If anyone else is interested in references on parallel raytracing (the mesh
machine articles, Pearce's Master's thesis, or others such as Dippe in Siggraph
86, etc.) you can send me mail and I can send a list or copies of some of them.

"behind these eyes that say I still exist..."

David Jevans, U of Calgary Computer Science, Calgary AB  T2N 1N4  Canada
uucp: ...{ubc-cs,utai,alberta}!calgary!jevans

--------

From: sdg@helios.cs.duke.edu (Subrata Dasgupta)
Subject: Re: Basics of raytracing
[from USENET]

In article <77@cs-spool.calgary.UUCP> jevans@cpsc.ucalgary.ca (David Jevans) writes:
>
>Our university (the U. of Calgary) has significant experience in parallel
>ray tracing.  Professors Cleary and Wyvill developed a mesh machine for
>raytracing several years ago.  Graduate student (now working at Alias)
>Andrew Pearce implemented a parallel raytracer for the mesh machine
>that also ran on a network of Corvus'.

	This all sounds very interesting! A few articles back a person inquired
about a paper on an algorithm analysis by Profs. Wyvill and Cleary. I am trying
to track that paper down but the reason for sending you this letter is to
request some info on the mesh machine for ray tracing developed at your univ.
If you can refer me to a recent paper on this machine it will be great. At Duke
we are developing what has come to be known as the Raycasting machine which
computes intersection of an array of parallel rays with primitives and then
uses constructive solid geometry to compute the shape, volume and other
parameters of an arbitrary object. Thus I would be very much interested in any
work in this area.

>If anyone else is interested in references on parallel raytracing
>(the mesh machine articles, Pearces Masters thesis, or others
>such as Dippe in Siggraph 86 etc) you can send me mail and I can send
>a list or copies of some of them.

Any other info. in this area would be very much appreciated. Thanks!

Subrata Dasgupta

Department of Computer Science, Duke University, Durham, NC 27706
ARPA: 	sdg@cs.duke.edu
CSNET: 	sdg@duke 
UUCP: 	decvax!duke!sdg    

-------------------------------------------------------------------------------

From: upstill@pixar.UUCP (Steve Upstill)
Subject: Re: What is Renderman Standard?
Organization: Pixar -- Marin County, California
[from USENET]

    I'm writing the RenderMan book, so I guess I'm qualified to clear up 
    a couple of things from this posting:

In article <25225@tut.cis.ohio-state.edu> fish@shape.cis.ohio-state.edu (Keith Fish) writes:
>
>I'm sure PIXAR is more than willing to send you a spec of Renderman ...
>just ask them.  Also, there may be something available through the
>Siggraph 88 proceedings.

    There's nothing in the SIGGRAPH 88 proceedings about RenderMan.  You can
    get a copy of the spec by sending $15 (yes, I know it's a pain, but we've
    sent out ~1000 specs so far, and it got kind of expensive; this is the
    real cost) to
	Pixar
	3240 Kerner Blvd.
	San Rafael, Ca. 94901

>
><<< The following is MY understanding of Renderman >>>
>
>Renderman is an attempt by PIXAR to force a de facto standard interface in the
>Graphics Rendering/Imaging arena.  My understanding is that this interface
>is based on tools/routines that they have developed throughout the years for
>use on their hardware.  Because it was not designed for general/varying
>graphics architectures, many companies wonder if it will only work well on
>their systems -- hence, making their hardware also the de facto standard.

    RenderMan is based on about six years' research at Pixar and Lucasfilm on
    how to get quasi-photographic realism into computer graphics.  The effort
    has encompassed algorithms, software and hardware, and much of what is in
    the standard has been proven to work by actually implementing it;  so in 
    some sense the above is a correct statement.  However, there is an  
    implication here that RenderMan is some in-house methodology that Pixar is
    trying to foist off on the rest of the industry.  That is definitely 
    untrue.  Pat Hanrahan and Tom Porter spent about six months talking to 
    other companies in the industry, trying to establish a consensus and
    ensure that the standard is technically sound.  The best evidence I
    have of how much it changed as a result is the amount of work I had
    to put into changing my book between Versions 2 and 3 of the spec.

    As for the standard being specific to some particular hardware or 
    software configuration, you just have to look at the standard itself.
    From the geometric standpoint, it is a simple protocol for describing
    scenes, as generic as can be, and deliberately so.  It is essentially
    a superset of PHIGS+, with two differences: there is no provision for
    changing model descriptions once defined (you have to respecify scenes
    from one frame to another), and there are extensions for realism like
    the shading language, motion blur and depth-of-field.  The hardware-
    specificity is a canard, pure and simple.  The current (incomplete)
    version of our software runs on Sun, Silicon Graphics and '386-based
    Compaq machines, as well as on Transputer-based hardware accelerators
    in all three.

    More specifically, I can tell you that Pat went to a lot of trouble to
    make the interface standard independent of even the basic rendering 
    algorithm.  That is, RenderMan is consistent with scanline-based methods 
    as well as ray tracing; standard shading models as well as radiosity 
    techniques.  That wasn't easy.

>More importantly, PIXAR already has the software written for this "standard"
>so if this becomes a standard, any competitor of PIXAR would have to make the
>$$$ investment to write this software -- a good way to limit your competition.

    Sorry.  I'm working closely with the software group in trying to 
    generate pictures and example programs for my book, and I can testify 
    that the software, while quite far along, is not "already written",
    largely because of extensions to the standard that came out of 
    discussions with other companies.  True enough, we probably have a 
    head start on others, but the standard has been out there for five 
    months now, and will probably have been around for close to a year 
    before Pixar has its stuff on the market.

    Besides, the standard specifies nine capabilities which are optional for
    any particular implementation.  No renderer should have any trouble
    meeting the RenderMan standard if it supports PHIGS primitives and 
    performs such quality calculations as anti-aliasing and gamma correction.

>PIXAR made a big push for Renderman at Siggraph 88.  Although a few companies
>agreed to endorse this package (SUN, of course ... they'd endorse anything to
>get their name in lights ;-), many took a more intelligent approach and said
>that they would evaluate it.  PIXAR basically used a lot of marketing hype
>to get support initially and even listed supporters who, when you would
>walk up to their booth at Siggraph and ask them, said they did not support it.

    This comment is borderline offensive to me, partly because it is admittedly
    based on speculation and partly because I was around during the process
    I mentioned above, and I know what a painful and elaborate job Pat had
    to get the proposal into shape to win the support of the companies he did.
    There is a difference between endorsement and support.  Endorsement means
    "we have evaluated this; it is sound and we believe this is the way the
    industry should go".  Support means "we have hardware and/or software
    which implements this standard".  You would expect the latter to be a
    subset of the former.

    Nineteen companies endorsed the RenderMan standard at rollout.  The main 
    holdouts at this point are Silicon Graphics and Wavefront.  My personal
    suspicion (not to be taken as the views of Pixar) is that Wavefront 
    perceives RenderMan as a threat to their rendering market because 
    it supports features which would be difficult or impossible to 
    implement using their rendering algorithm.  And Wavefront software 
    runs on SGI machines.

>Many (most ?) of the companies who looked at Renderman have decided that it
>still needs a lot of work before it can be considered as even a base to start
>the development of a standard in the rendering/imaging arena.

    Who are these "many companies"??  What is this "lot of work"??  We would
    love to hear about it.  There is a RenderMan Advisory Council made up of
    industry representatives whose job it is to hear complaints like that.

    I don't expect to hear too many of them, however.  As I said before,
    RenderMan is basically a simple-minded extension of PHIGS (read: EXTENSION. 
    Meaning "If PHIGS is good enough for you, so should be RenderMan") 
    adding constructs for supporting realistic graphics.  It has gone 
    through the mill of two major rewrites as a result of consultations
    with "many companies".

>There are
>several problems in the area of getting Renderman to mesh with other current
>standard graphics environments (eg. phigs, cgi, ...) so that it becomes a
>natural extension to the less-interesting/fancy graphics people do today.

    What are these problems??  Can you be more specific??

>Even for the niche market of image-rendering, Renderman does not include
>many (any ?) ideas from the companies that have been in this business for
>years ...  Wavefront, Alias Research, Neo-Visuals, Disney, etc.

    What are these ideas??  Come to think of it, what ideas has Disney
    contributed to image rendering?

>Keith Fish
>
>PS. I'm not cutting down PIXAR -- I think that the work they do is
>    fantastic (literally)!  I just don't like marketing ploys to degrade
>    what should be good technology, and this is what the Renderman-hype
>    seems to be.

    I appreciate your appreciation, but I wish I knew where these 
    impressions of yours came from.

>    I think that the industry can develop a good imaging
>    interface standard if everyone (animation software companies,
>    universities, graphics hardware companies, etc.) gets to contribute.

    Again, I thought that's exactly what we did.

    If anyone on the net is interested in more information on RenderMan
    without investing $15 in a spec, the current issue of Unix Review
    includes an article I wrote discussing the major aspects of the 
    standard.  Also, the November issue of Dr. Dobb's Journal has a 
    cover story on the shading language, which is RenderMan's doorway
    for extensibility.

Steve Upstill

-------------------------------------------------------------------------------

Free On-Line Computer Graphics References
-----------------------------------------

[incidentally, the latest version of the Ray Tracing Bibliography by Paul
Heckbert (and updated by myself) is available from Mark Vandewettering's
anonymous ftp site, drizzle.cs.uoregon.edu. --EAH]

From: eugene@eos.UUCP (Eugene Miya)
Subject: A little announcement (part 1 of 3)
Organization: NASA Ames Research Center, Calif.
[from USENET]

For a long time now, a lot of people have been asking simple information
queries in places like comp.graphics.  This resulted in the inevitable
repeating of topics, a flood of inane news messages (many of which are wrong),
and a repeating cycle which brings disillusionment.

Computer graphics, unlike a lot of disciplines, has an overseer of the
literature.  If you open up an ACM/SIGGRAPH proceedings you will notice a
credit in the "References" to Baldev Singh (currently at MCC).  Baldev has
published significant references in the Computer Graphics Quarterly for a
couple of years (and is preparing another shortly).  These bibliographies:

%A Baldev Singh
%T Computer Graphics Literature for 1986: A Bibliography
%J Computer Graphics
%V 21
%N 3
%D June 1987
%P 189-208

and

%A Baldev Singh
%A Gunther Schrack
%T Computer Graphics Literature for 1985: A Bibliography
%J Computer Graphics
%V 20
%N 3
%D July 1986
%P 85-145

Coverage in the field (for graphics) is quite good.  I know, because I am
trying to maintain a comprehensive study of another field (see postings in
comp.arch or comp.parallel).  The problem is that searching for literature in
a paper database is difficult (I won't get into details, take my word).
Frequently entries are also wrong (not as bad as the net, however).

A machine-readable form, however, solves many of these problems.  You can
update a machine-readable form.  The problem then becomes one of distribution
and search: surprise! something computers are good for!  It is with this
background that we in the Bay Area Association for Computing Machinery's
Special Interest Group on Computer Graphics announce the availability of
Singh's ACM/SIGGRAPH bibliography in machine-readable form.

While Baldev will oversee the collection and quality of entries, we with a
generous donation of cycles and disk space from the Digital Equipment
Corporation (DEC) will help oversee the redistribution of the computer graphics
bibliography.

This first article will describe how hosts on the Internet can retrieve the
computer graphics bibliography.  Two other optional means for those not on the
Internet will be presented over the next two days (but clearly Internet is the
superior way to do this).

THERE ARE TWO DANGERS inherent in all of these means.  The bibliography is kind
of big.  It's not a megabyte, but it's getting there.  IF YOU ARE at an
Internet site with lots of users, it's kind of dumb if you ALL made personal
copies (n megabytes ;-).  So before you copy, agree who at your site will
oversee obtaining it.  One copy per site please.

The second danger is everybody copying at the same time.  The information which
follows will illustrate the problem.  The DEC host which you will be copying
from is DEC's gateway to the Internet.  It would be a tragedy if every site
tried to copy at once.  I know, we provide the 9600 baud IMP port to DEC.  So
let's not abuse this gateway; let's be patient and take our turns. 1) copy the
computer graphics bibliography only during the weekends or evenings Pacific
Daylight or Standard time. 2) copy on a randomly determined evening of the
week.  How?  Flip a coin 3 times (say HTH, make Head == 0, Tails == 1, this
translates to 010 binary or 2 base 10). Using Sunday as 1, make Monday 2, copy
Monday evening P[SD]T.  (HHH or 000, retry).  If this is confusing, wait for
the weekend. AGAIN copy only in the evenings.

Now for the questions you have all been patiently waiting for while I have
been rambling: where do I get it, and how do I get it?  The Internet host is
the machine gatekeeper.dec.com [128.45.9.52].  Please respect this machine
(hacker
ethic) for the assistance DEC is providing.  We don't wish to yank the
bibliography from this machine.  Don't try to break in, please.

Old time ARPAnet hackers will know where to go from here.  The "how" is a
process called anonymous FTP (File Transfer Protocol or Program; it hasn't
changed since 1973 ;-)).  Don't all do this at once.  Below is a sample
session with
annotation as to how this works.  Catch the names of the subdirectories and
files below.  A lot of people aren't familiar with distributed systems other
than Email, so we've made the language oversimplistic, if you have problems
consult your local network guru.

Note the bibliographies exist in a data compressed binary form.  Use the Unix
uncompress(1) command to decode them.  Not on a Unix system?  Tough for the
time being.  Try to find one.  The further format of individual entries is Unix
refer format (a sample, see the two references above).  This is how Singh has
them, and also how my bibliography is stored.  Refer has lots of advantages
over other systems: free-format, widely available on Unix systems, uses a
minimum of space, ASCII, fully machine and human readable (it separates the
binary data from the text), fairly easy to learn, easily converted to other
formats (like [bib]TeX, Scribe, etc.)

Start script
  eos % ftp gatekeeper.dec.com
        ^^^^^^^^^^^^^^^^^^^^^^ issue this command, after some time you get:
  Connected to gatekeeper.dec.com.
  220 gatekeeper.dec.com FTP server (Version 4.28
  Name (gatekeeper.dec.com:######): anonymous
                                    ^^^^^^^^^ use this name
  331 Guest login ok, send ident as password.
  Password:
           ^^^^^^^^ does not echo, I typed "guest," doesn't matter
  230 Guest login ok, access restrictions apply.
  ftp> cd pub/graf-bib
       ^^^^^^^^^^^^^^^ change directory to pub/graf-bib
  200 CWD command okay.
  ftp> binary
       ^^^^^^  very important, you are getting compressed binary files
  200 Type set to I.
  ftp> ls
       ^^ optional, just to show you what you are getting ('dir' is okay, too)
  200 PORT command okay.
  150 Opening data connection for /bin/ls (128.102.21.2,1118) (0 bytes).
  bib85.Z
  bib86.Z
  226 Transfer complete.
  ^^^^^^^ those two filenames are what you want!
  18 bytes received in 0.2 seconds (0.09 Kbytes/s)
  ftp> mget *
       ^^^^^^ asks for all (star) files
  mget bib85.Z?
  mget bib86.Z?
                ^ you type "y <cr>" or "n <cr>" if you want them.
                NOTE: THIS WILL TAKE SOME TIME.
  ftp> quit
       ^^^^  done
  221 Goodbye.
  eos % # Now you can uncompress bib85.Z, etc.
end script

If you don't have a network guru, send mail to siggraph, not the poster of this
note below.  (Illiterates will type "reply" or "follow-up" to news.  Sorry, I'm
very tired of this. That's why I'm doing this.)  Big thanks are due to Brian
Reid and Jamie Painter (at DEC for this work).  Rick Beach okay'ed ACM
copyrights.  This is not for profit.  Please ACK the above people and
organizations (in particular, Baldev) when citing.  As I hope you can tell, we
are really trying to advance the state of the art in computer graphics.  This
should benefit experts as well as students alike.  It also shows the use of
technologies other than graphics to our (graphics) benefit.

--------

Subject: A little announcement (part 2 of 3)

I described the advantages of searching and reformatting.  I described
anonymous FTP.  This is the way to go if you are a major Internet site like
most universities.  The problem is: what about more casual users, poor people
with small disks?  Well, the files reside on DEC's disk.  Just LEAVE THEM
THERE.  Let Bay Area ACM/SIGGRAPH and Singh maintain them.  Then how do you
access it?  By electronic mail.

A similar system exists at the Argonne National Labs (and AT&T Bell Labs):
netlib numerical software distribution [CACM ref. if you need it].  A similar
set up for benchmarks exists at the NBS (See latest IEEE Computer).  Why not do
this for graphics references?

With a generous donation of cycles and disk space from the Digital Equipment
Corporation (DEC) and some software from CSIL at Stanford we have done just
this.

THERE ARE TWO DANGERS inherent: The bibliography is kind of big.  The second
danger is everybody copying at the same time.

The DEC host which you will be copying from is DEC's gateway to the Internet.
It would be a tragedy to abuse this gateway by having every site copy at once.
So let's not abuse this, let's be patient and take our turns.

1) retrieve references only during the weekends or evenings Pacific Daylight or
Standard time.

2) copy on a randomly determined evening of the week.  How?  Flip a coin 3
times (say THT make Head == 0, Tails == 1, this translates to 101 binary or 5
base 10). Using Sunday as 1, make Thursday 5, copy Thursday evening P[SD]T.
(HHH or 000, retry).  If this is confusing, wait for the weekend. AGAIN copy
only in the evenings.
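The flipping procedure above is just a way to draw a uniform number from 1 to
7 (rejecting 0).  A small Python illustration of the scheme, as I read it (the
function is my own sketch, not anything distributed with the bibliography):

```python
import random

def pick_copy_day(flip=lambda: random.randint(0, 1)):
    """Turn three coin flips (Head == 0, Tail == 1) into a day 1..7,
    with Sunday as 1; on 000 (HHH), re-flip, as the article says."""
    while True:
        bits = [flip() for _ in range(3)]
        day = bits[0] * 4 + bits[1] * 2 + bits[2]   # binary -> 0..7
        if day != 0:                                 # HHH: retry
            return day

# THT -> 1,0,1 -> 101 binary -> 5 -> Thursday (counting Sunday as 1)
```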

Where, okay here goes the dangerous information:
	send mail to:
	graf-bib-server@decwrl.dec.com
This can also be
	{your favorite UUCP path}!decwrl!graf-bib-server
or if you work for DEC and have ENET access:
	DECWRL::graf-bib-server

Your mailer should ask for a "Subject:" field.  This is important, because you
place the keywords in that subject field.  If your mailer doesn't ask (and
lots don't), ask your system folk about your mailrc file or mh_profile, or how
to invoke this field.  One special keyword is "help"; it gets you a short
description.  Make the first keyword alphanumeric (don't give "years").
Additional keywords are conjunctive (ANDed), narrowing the search further and
further.  The contents aren't perfect, but give us time.

Your mail is answered by the server daemon.  It searches and tries to find
relevant cited keywords (only the first 6 characters of each are significant,
so choose carefully).  Don't ask for all references with "computer graphics";
I hope you understand why.  Just try "help" as your first keyword unless you
know what you are looking for.  The information comes back in the refer format
described yesterday.

If you don't have a network guru, send mail to siggraph, not the poster of this
note below.  (Illiterates will type "reply" or "follow-up" to news.  Sorry, I'm
very tired of this. That's why I'm doing this.)  Big thanks are due to Brian
Reid and Jamie Painter (at DEC for this work).  Rick Beach okay'ed ACM
copyrights.  This is not for profit.  Please ACK the above people and
organizations (in particular, Baldev) when citing.  As I hope you can tell, we
are really trying to advance the state of the art in computer graphics.  This
should benefit experts as well as students alike.  It also shows the use of
technologies other than graphics to our (graphics) benefit.

Our last note will concern one more way of getting references: just asking for
a floppy (low tech).  We in the Bay Area ACM/SIGGRAPH local group will be
adding to these.  Reference contributions and corrections are welcome.  It's
only possible if we work together to see this through.

--------

From: eugene@eos.UUCP (Eugene Miya)
Subject: Re: bib notation question

In article <3384@pt.cs.cmu.edu> pkh@vap.vi.ri.cmu.edu (Ping Kang Hsiung) writes:
>I got Eugene Miya's bib files over the weekend. There are some
>notations used in the files that I don't understand:
>
>1. Some \(em or (em  in the %J field. What these mean?
>(and why they don't have the closing ")".)
>
>2. In the key field, there are some numbers:
>	%K I3m educational computing
>	%K I3m mechanical engineering computing
>	%K I35 modeling systems
>How do I interpret/use these I3m, I35 numbers?
>
>3. Some acronyms: CGF, CAMP, ISATA. They are not defined in the files.

Oops!  Sorry, I got other mail on this and forgot all about them.  The
BACKSLASH macros are troff-isms.  There are tools like deroff to take them
out, or r2bib to convert things into bibTeX.  These macros are 4 characters in
size; \(em is a slightly longer (em) dash.  They aren't a significant problem:
just write a sed filter.
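Such a filter is a one-line substitution.  For instance, an illustrative
Python version of the suggested sed filter (the \(en en-dash macro is handled
too, on the assumption it may also appear):

```python
def strip_troff_dashes(line):
    r"""Replace the troff dash macros \(em and \(en with ASCII dashes."""
    line = line.replace(r'\(em', '--')   # em dash
    line = line.replace(r'\(en', '-')    # en dash
    return line
```

Run every line of the bib file through it before feeding the result to tools
that don't understand troff special characters.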

The I fields are ACM Classification codes.  You can either get them from ACM
Computing Reviews (blue and white things, that most don't get) or you can get
the hardcopy versions of these bibliographies (they have the CR classification
scheme for graphics).

The acronyms are unfortunately a long-term problem.  We can get a table and
use U. AZ's bib program to fill them out.

I hope you are all finding some use of this stuff.  We NEED people around the
country to help us update this.  There are earlier years.  Also new papers are
being written all the time.  They have to get entered (even finding them is
hard).  I don't deserve the credit, I'm only pissed off that I have to read
queries over and over.  The credit belongs to the crew of Bay Area ACM/SIGGRAPH
working on this project. (other volunteers are welcome: especially key entry
help)

Another gross generalization from

--eugene miya, NASA Ames Research Center, eugene@aurora.arc.nasa.gov
  ex-Lame-duck Prez. Bay Area ACM/SIGGRAPH
  resident cynic at the Rock of Ages Home for Retired Hackers:
  "Mailers?! HA!", "If my mail does not reach you, please accept my apology."
  {uunet,hplabs,ncar,decwrl,allegra,tektronix}!ames!aurora!eugene
  "Send mail, avoid follow-ups.  If enough, I'll summarize."

-----------------------------------------------------------------------------

Here is the short form of the present mailing list, showing just email paths
from an ARPA node.  If you want the full list, which includes additional info
and snail mail addresses, drop me a note - Eric Haines

alias	jim_arvo		apollo!arvo@eddie.mit.edu
alias	al_barr			barr@csvax.caltech.edu
alias	brian_barsky		barsky@miro.berkeley.edu
alias	daniel_bass		daniel@apollo.com
alias	rod_bogart		bogart%gr@cs.utah.edu
alias	wim_bronsvoort		dutrun!wim@mcvax.cwi.nl
alias	at_campbell 		atc@cs.utexas.EDU
alias	john_chapman		fornax!sfu-cmpt!chapman@cornell.uucp
alias	chuan_chee		ckchee@dgp.toronto.edu
alias	michael_cohen		m-cohen@cs.utah.edu
alias	jim_ferwerda		jaf@squid.tn.cornell.edu
alias	fred_fisher		FISHER%3D.dec@decwrl.dec.com
alias	john_francis		apollo!johnf@eddie.mit.edu
alias	phil_getto		phil@yy.cicg.rpi.edu
alias	andrew_glassner		glassner@xerox.com
alias	jeff_goldsmith		jeff@hamlet.caltech.edu
alias	chuck_grant		grant@icdc.llnl.gov
alias	paul_haeberli		sgi!paul@pyramid.pyramid.com
alias	eric_haines		hpfcla!hpfcrs!eye!erich@hplabs.HP.COM
alias	roy_hall		roy@wisdom.tn.cornell.edu
alias	pat_hanrahan		pixar!pat@ucbvax.berkeley.edu
alias	paul_heckbert		ph@miro.berkeley.edu
alias	michael_hohmeyer	hohmeyer@miro.berkeley.edu
alias	jeff_hultquist		hultquis@prandtl.nas.nasa.gov
alias	erik_jansen		dutio!fwj@mcvax.cwi.nl
alias	ken_joy			joy@ucdavis.edu
alias	mike_kaplan		dana!mrk@hplabs.hp.com
alias	tim_kay			tim@csvax.caltech.edu
alias	dave_kirk		dk@csvax.caltech.edu
alias	roman_kuchkuda		megatek!kuchkuda@ucsd.ucsd.edu
alias	george_kyriazis		kyriazis@turing.cs.rpi.edu
alias	david_lister		lister@dg-rtp.dg.com
alias	pete_litwinowicz	litwinow@apple.com
alias	gray_lorig		gray%rhea.CRAY.COM@uc.msc.umn.edu
alias	wayne_lytle		wtl@cockle.tn.cornell.edu
alias	tom_malley		esunix!tmalley@cs.utah.edu
alias	don_marsh		dmarsh@apple.apple.com
alias	michael_natkin		mjn@cs.brown.edu
alias	tim_oconnor		toc@wisdom.tn.cornell.edu
alias	masataka_ohta		mohta%titcce.cc.titech.junet%utokyo-relay.csnet@RELAY.CS.NET
alias	tom_palmer		palmer@ncifcrf.gov
alias	darwyn_peachey		pixar!peachey@ucbvax.berkeley.edu
alias	john_peterson		jp@apple.apple.com
alias	frits_post		dutrun!frits@mcvax.cwi.nl
alias	pierre_poulin		poulin@dgp.toronto.edu
alias	thierry_priol		inria!irisa!priol@mcvax.cwi.nl
alias	panu_rekola		pre@cs.hut.fi
alias	david_rogers		dfr@cad.usna.mil
alias	linda_roy		lroy@sgi.com
alias	cary_scofield		apollo!scofield@eddie.mit.edu
alias	pete_segal		pls%pixels@research.att.com
alias	scott_senften		apctrc!bigmac!senften@cornell.uucp
alias	cliff_shaffer		shaffer@vtopus.cs.vt.edu
alias	susan_spach		spach@hplabs.hp.com
alias	rick_speer		speer@ucbvax.berkeley.edu
alias	stephen_spencer		spencer@tut.cis.ohio-state.edu
alias	steve_stepoway		stepoway@smu.edu
alias	mike_stevens		apctrc!zfms0a@cornell.uucp
alias	paul_strauss		pss@cs.brown.edu
alias	kr_subramanian		subramn@cs.utexas.edu
alias	kelvin_thompson		kelvin@cs.utexas.edu
alias	russ_tuck		tuck@cs.unc.edu
alias	greg_turk		turk@cs.unc.edu
alias	ben_trumbore		wbt@cockle.tn.cornell.edu
alias	mark_vandewettering	markv@cs.uoregon.edu
alias	jack_van_wijk		ecn!jack@mcvax.cwi.nl
alias	greg_ward		gjward@lbl.gov
alias	bob_webber		webber@aramis.rutgers.edu
alias	lee_westover		westover@cs.unc.edu
alias	andrew_woo 		andreww@dgp.toronto.edu
-----------------------------------------------------------------------------
END OF RTNEWS
 


From m-cohen@cs.utah.edu Mon Jan  9 19:34:36 1989
Return-Path: <m-cohen@cs.utah.edu>
Received: from cs.utah.edu by turing.cs.rpi.edu (4.0/1.2-RPI-CS-Dept)
	id AA12875; Mon, 9 Jan 89 19:33:45 EST
Received: by cs.utah.edu (5.59/utah-2.1-cs)
	id AA12614; Mon, 9 Jan 89 16:30:17 MST
Date: Mon, 9 Jan 89 16:30:17 MST
From: m-cohen@cs.utah.edu (Michael F Cohen)
Message-Id: <8901092330.AA12614@cs.utah.edu>
To: FISHER%3D.dec@decwrl.dec.com, apollo!arvo@eddie.mit.edu,
        apollo!johnf@eddie.mit.edu, atc@cs.utexas.EDU, barr@csvax.caltech.edu,
        barsky@miro.berkeley.edu, bogart%gr@cs.utah.edu,
        ckchee@dgp.toronto.edu, dana!mrk@hplabs.hp.com, daniel@apollo.com,
        dk@csvax.caltech.edu, dutio!fwj@haring.cwi.NL,
        dutrun!wim@haring.cwi.NL, cornell!fornax!sfu-cmpt!chapman@cs.utah.edu,
        glassner@xerox.com, grant@icdc.llnl.gov,
        gray%rhea.CRAY.COM@uc.msc.umn.edu, hohmeyer@miro.berkeley.edu,
        hpfcla!hpfcrs!eye!erich@hplabs.HP.COM, hultquis@prandtl.nas.nasa.gov,
        jaf@squid.tn.cornell.edu, jeff@hamlet.caltech.edu,
        jevans@cpsc.ucalgary.ca, joy@ucdavis.edu, kyriazis@turing.cs.rpi.edu,
        lister@dg-rtp.dg.com, litwinow@apple.com, m-cohen@cs.utah.edu,
        megatek!kuchkuda@ucsd.edu, ph@miro.berkeley.edu, phil@yy.cicg.rpi.edu,
        pixar!pat@ucbvax.berkeley.edu, raycasting@duke.cs.duke.edu,
        roy@wisdom.tn.cornell.edu, sgi!paul@pyramid.pyramid.com,
        tim@csvax.caltech.edu, wtl@cockle.tn.cornell.edu
Subject: Ray Tracing News
Status: R

Maichael,
	OK, I'll give it a whirl.  Attached is the RT News.  In a separate
file I'll be sending on the .mailrc you'll need to use for distribution.

Many thanks,

Eric

 _ __                 ______                         _ __
' )  )                  /                           ' )  )
 /--' __.  __  ,     --/ __  __.  _. o ____  _,      /  / _  , , , _
/  \_(_/|_/ (_/_    (_/ / (_(_/|_(__<_/ / <_(_)_    /  (_</_(_(_/_/_)_
             /                               /|
            '                               |/

			"Light Makes Right"

			 January 6, 1989
		        Volume 2, Number 1

Compiled by Eric Haines, 3D/Eye Inc, 2359 Triphammer Rd, Ithaca, NY 14850
    607-257-1381, hpfcla!hpfcrs!eye!erich@hplabs.hp.com
All contents are US copyright (c) 1988,1989 by the individual authors

Contents:
    Introduction - Eric Haines
    New Members - David Jevans, Subrata Dasgupta, Darwin Thielman,
	Steven Stadnicki, Mark Reichert
    Multiprocessor Visualization of Parametric Surfaces - Markku Tamminen,
	comments from many others
    Miscellany - K.R.Subramanian, David F. Rogers, Steven Stadnicki,
	Joe Smith, Mark Reichert, Tracey Bernath
    Supersampling Discussion - David Jevans, Alan Paeth, Andrew Woo,
	Loren Carpenter
    Distributed Ray Tracer Available - George Kyriazis
    Ray Tracing Program for 3b1 - Sid Grange
    Map Archive - Gary L. Crum
    Index of Back Issues - Eric Haines

-----------------------------------------------------------------------------

Introduction
------------

	Well, this has been a busy time around here.  First of all, note my
change of address (it's in the header).  We've moved to a larger building with
much better environmental controls (i.e. we don't have to cool the machines in
the winter by opening the windows).  In the meantime I've been trying to
actually finish a product and maybe even make some money from it.  There's
also those SIGGRAPH deadlines on January 10th....  Busy times, so excuse the
long delay in getting this issue out.

	The other great struggle has been to try to get my new Cornell account
to talk with the outside world.  DEC's EUNICE operating system has foiled me so
far, so this issue is being distributed by Michael Cohen, who's now at the
University of Utah.  Many thanks, Michael.

	Due to the length between issues, my cullings from USENET have
accumulated into an enormous amount of material.  As such, the condensed
version of these will be split between this and the next issue.  This issue
contains what I felt was the best of the supersampling discussion.  If this
material is old hat, please write and tell us of your experiences with
supersampling: what algorithm do you use? are you satisfied with it? what kind
of filtering is used, and what are your subdivision criteria?

	This issue is the first one that has a "Volume X, Number Y" in the
header.  This has been added partly for ease of reference, but also (more
importantly) for avoiding dropouts.  If you get "Number 1", then "Number 3",
you know you've missed something (probably due to email failure).  At the end
of this issue is a list of all the past issues.  If you are missing any, please
write and I'll send them on.

-------------------------------------------------------------------------------

New Members
-----------


From: David Jevans <hpfcla!jevans@cpsc.UCalgary.CA>

I can be reached at the U of Calgary.  I work days at Jade Simulations
International, ph # 403-282-5711.

	My interests in ray tracing are in multi-process (networks of SUNS, BBN
Butterfly, and Transputers) ray tracing, space subdivision, and ray tracing
functionally defined iso-surfaces.

	I am working on optimistic multi-processor ray tracing and combining
adaptive and voxel spatial subdivision techniques.  I have implemented a
parallel ray tracer on the University of Calgary's BBN Butterfly.  My ray
tracers handle a variety of object types including polygons, spline surfaces,
and functionally defined iso-surfaces.  My latest projects are using TimeWarp
to speed up multi-processor ray tracing, adding a texture language, frame
coherence for ray tracing animation, and developing the ray tracing answer to
radiosity.

David Jevans, U of Calgary Computer Science, Calgary AB  T2N 1N4  Canada
uucp: ...{ubc-cs,utai,alberta}!calgary!jevans

--------

# Subrata Dasgupta - raycasting of free-form surfaces, surface representations
# Duke University
# Dept. of Computer Science
# Durham, NC 27706
# (919)-684-5110
alias	subrata_dasgupta	sdg@cs.duke.edu

I am relatively new to the field of ray tracing. I am involved in the design of
a raycasting system based on the original design by Gershon Kedem and John
Ellis [Proc. 1984 Int'l Conference on Computer Design]. The original design
uses Constructive Solid Geometry for building up a complex object out of simple
primitives like cones, cylinders, and spheres. The main drawback of such a
system is that representing an object with cubic or higher-order surfaces
requires numerous quadratic primitives, and even then the result is at best an
approximation to the original surface.

The raycasting machine uses an array of parallel rays and intersects them with
primitives. The applications of such a machine are potentially numerous:
modeling surfaces for NC machines, calculating volume and moment of inertia,
and finding fast surface intersections, to name just a few. At present there
are two working models of the raycasting machine, one of which is in the Dept.
of Mechanical Engineering at Cornell. The other is our experimental machine,
which is located at the U. of N. Carolina at Chapel Hill (the operative word in
this sentence is "located" :-) ). Although my input may not be very frequent at
the beginning, I will be an avid reader of the raycasting news. Thanks for
inviting me in.

--------

From: Darwin G. Thielman <hpfcla!thielman@cs.duke.edu>
Subject: Duke ray casting group

# Darwin Thielman - Writing software to control the raycasting machine at Duke
# Duke University
# Computer Science Dept.
# Durham, NC. 27706
# (919) 684-3048 x246
alias	darwin_thielman		thielman@duke.cs.duke.edu

At Duke we have designed a system that does ray casting on many primitives in
parallel.  This is achieved with 2 types of processors, a primitive classifier
(PC) and a combine classifier (CC).  The PC's solve systems of quadratic
equations and the CC's combine these results.

I am the system software person, writing microcode for an Adage 3000 machine.
The microcode is responsible for controlling the ray casting board and creating
images from the output of the board.



In addition the following people also work on the ray casting system at Duke.

Gershon Kedem 
John Ellis
Subrata Dasgupta
Jack Briner
Ricardo Pantazis
Sanjay Vishin

Also at UNC Chapel Hill there is an engineer, Tom Lyerly, who is working on the
hardware design of the system.


Since I do not want to attempt to explain what each one of these people is
doing (with the possibility of getting it wrong) I will not try.  All of the
above people will have access to the RTN, and if any of them are interested
they may respond.  Also, if anyone wants to get hold of any of them, just send
me a message and I will forward it to the proper person.



We also have one of our boards at Cornell, there is a group there that is
working on solid modeling, and hopes to use our hardware.  If you want you can
contact Rich Marisa (607) 255-7636 for more information on what they are doing.
His mail address is marisa@oak.cadif.cornell.edu.  Also I have talked to him
and if you want to see a demo of our system he would be glad to show it to you.

If you have any questions or comments please feel free to contact me.

					Darwin Thielman

--------

From: Steven Stadnicki <hpfcla!stadnism@clutx.clarkson.edu>

Steven Stadnicki - shadowing from reflected light sources, tracing atomic
		   orbitals, massively parallel ray tracing
212 Reynolds Dormitory
Clarkson University
Potsdam, NY 13676
(315) 268-4079
stadnism@clutx.clarkson.edu

Right now, I'm working on writing a simple ray tracer to implement my reflected
shadowing model (see the E-mail version of RTNews, Nov. 4), and then I'll be
trying a few texture models.  (Texture mapping on to atoms... the marble
P-orbital!)

--------

From: hpfcla!sunrock!kodak!supra!reichert@Sun.COM (Mark Reichert x25948)
Subject: RT News intro

#
# Mark Reichert	- diffuse interreflections
# Work:
#	Eastman Kodak Company
#	Advanced Systems Group
#	Building 69
#	Rochester, NY 14650
#	716-722-5948
#
# Home:
#	45 Clay Ave.
#	Rochester, NY 14613
#	716-647-6025
#
    I am currently interested in global illumination simulation using ray
tracing with auxiliary data structures for holding illuminance values.

    I am also interested in ray tracing from the lights into the environment -
maybe just a few bounces, then storing illuminance as above.

    What I would really like is a ray tracer (or whatever) that would do a nice
job of modeling triangular glass prisms.

alias	mark_reichert	hpfcrs!hpfcla!hplabs!sun!sunrock!kodak!supra!reichert

-------------------------------------------------------------------------------

Multiprocessor Visualization of Parametric Surfaces

From: Markku Tamminen <hpfcla!mit%hutcs.hut.fi@CUNYVM.CUNY.EDU>


I obtained the Ray-Tracing News from Panu Rekola, and sent the message below to
some people on your distribution list interested in related matters.

I have done research in geometric data structures and algorithms in 1981-84. A
practical result was the EXCELL spatial index (like an octree with binary
subdivision, together with a directory allowing access by address computation).

My present interests are described below.

Charles Woodward has developed a new and efficient method for ray-tracing
parametric surfaces using subdivision for finding the ray/patch intersection.

============================================================================

I am putting together a project proposal with the abstract below.  I would be
interested in obtaining references to any new work you have done in ray-tracing
and hardware, and later in an exchange of ideas.  I will send an
acknowledgement to any message I get, so if you don't see one something has
gone wrong.  (Email has often been very unreliable.)

Looking forward to hearing something from you,

        Markku Tamminen
        Helsinki University of Technology
        Laboratory of Information Processing Science
        02150 ESPOO 15, FINLAND
        Tel: 358-0-4513248 (messages: 4513229, home: 710317)
        Telex: 125161 HTKK SF
        Telefax:        358-0-465077
        ARPANET:        mit%hutcs.uucp%fingate.bitnet@cunyvm.cuny.edu
        INTERNET:       mit@hutcs.hut.fi
        BITNET:         mit%hutcs.uucp@fingate
        UUCP:           mcvax!hutcs!mit



       Multiprocessor visualization of parametric surfaces
                        Project proposal

               Markku Tamminen (mit@hutcs.hut.fi)
               Charles Woodward (cwd@hutcs.hut.fi)
                Helsinki University of Technology
          Laboratory of Information Processing Science
              Otakaari 1 A, SF-02150 Espoo, Finland


ABSTRACT

The proposed research aims at an efficient system architecture and improved
algorithms for realistic visualization of complex scenes described by
parametric surfaces.

The key components of such a system are a spatial index and a surface patch
intersector.  For both, very efficient uniprocessor solutions have been
developed by the authors at the Helsinki University of Technology.  However,
to obtain sufficient speed, at least the latter should be based on a
specialized architecture.

We propose obtaining a balanced complete system by gradually ascending what we
call a specialization hierarchy.  At its bottom are solutions based on
multiprocessors or networks of independent computing units (transputers).  In
this case an important research problem is how to avoid duplicating the
database in the processors.  At the top of the hierarchy are specialized
processors implemented in VLSI.

The research will produce general insight into the possibilities of utilizing
concurrency and specialized processors in geometric search and computation.

PREVIOUS WORK

M. Mantyla and M. Tamminen , ``Localized Set Operations for Solid
Modeling ,'' Computer Graphics , vol. 17, no. 3, pp. 279-289 ,
1983.

M. Tamminen , The EXCELL Method for Efficient Geometric Access to
Data , Acta Polytechnica Scandinavica, Ma 34 , 1981.

Markku Tamminen, Olli Karonen , and Martti Mantyla, ``Ray-
Casting and Block Model Conversion Using a Spatial Index,''
Computer Aided Design, vol. 16, pp. 203 - 208, 1984.

C. Woodward, ``Skinning Techniques for Interactive B-Spline
Surface Interpolation,'' Computer-Aided Design, vol. 20, no. 8,
pp. 441-451, 1988.

C. Woodward, ``Ray Tracing Parametric Surfaces By Subdivision in
Viewing Plane,'' to Appear in Proc. Theory and Practice of
Geometric Modelling, ed. W. Strasser, Springer-Verlag, 1989.

--------

[a later message from Markku Tamminen:]

I thought I'd send this summary of responses as feedback, because some answers
may not have found their way through the network. I think mit@hutcs.hut.fi
might be the safest address to use for me.

Our project has not yet started - I have just applied for funding with the
proposal whose abstract I sent to you. Also, I am a software person, but hope
to get somebody hardware-oriented involved in the project. We will be using
transputers to start with, but so far I have just made some small experiments
with them.

Our ray/patch intersection method is based on subdivision. It is a new method
developed by Charles Woodward, and quite a bit more efficient than Whitted's
original one. However, it takes 3 ms for a complete ray/patch intersection on a
SUN4. Thus, we'd like to develop a specialized processor for this task - the
algorithm is well suited for that.

Our spatial index is EXCELL, which I originally published in 1981, and whose 3D
application was described in SIGGRAPH'83 by Martti Mantyla and me. I have
lately tuned it quite a bit for ray-tracing, and we are very satisfied with its
performance. (EXCELL uses octree-like, but binary, subdivision. It has a
directory, which is an array providing direct access by address computation,
and a data part, which corresponds to the leaf cells of an octree. Binary
subdivision leads to fewer leaf-cells than 8-way. There is an overflow
criterion that decides when subdivision will be discontinued.)

We have obtained best results when we store in the spatial index for each patch
the bounding box of its control points, and further cut it with a "slab"
defined by two planes parallel to a "mean normal" of the patch. Using this
method we have to perform, on the average, less than 2 complete patch/ray
intersection tests.
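[The bounding-box-plus-slab pruning described above can be sketched in C.  The
slab clip below is the standard ray/interval clipping step, applied once for
the "mean normal" slab and once per bounding-box axis; a patch whose clipped
interval comes up empty is skipped.  The function name and calling convention
are illustrative, not the authors' code.]

```c
#include <math.h>

/* Clip the ray parameter interval [*tmin, *tmax] against the slab
   dlo <= n.p <= dhi, for the ray p(t) = org + t*dir.  Returns 0 when the
   interval becomes empty, meaning the patch can be skipped. */
int clip_to_slab(const double org[3], const double dir[3],
                 const double n[3], double dlo, double dhi,
                 double *tmin, double *tmax)
{
    double nd = n[0]*dir[0] + n[1]*dir[1] + n[2]*dir[2];
    double no = n[0]*org[0] + n[1]*org[1] + n[2]*org[2];
    if (fabs(nd) < 1e-12)               /* ray parallel to the slab */
        return no >= dlo && no <= dhi;
    {
        double t0 = (dlo - no) / nd, t1 = (dhi - no) / nd;
        if (t0 > t1) { double tmp = t0; t0 = t1; t1 = tmp; }
        if (t0 > *tmin) *tmin = t0;
        if (t1 < *tmax) *tmax = t1;
        return *tmin <= *tmax;
    }
}
```

The control-point bounding box is just three axis-aligned slabs, so the same
routine serves for both halves of the test.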

Our method has not been as efficient as that of subdividing the patches all
the way to triangles.  However, as much less storage is required, we consider
our technique more suited for distributed architectures.

In the proposed project we want to look both into developing a specialized
(co)processor for the ray/patch intersection task and into distributing the
whole computation on several processors. I think that the most difficult
research problem is the partitioning of the data base in a loosely coupled
system. In our case the ray/patch intersection task is so time consuming that
it would help (to begin with) to keep the ray/database traversal on the
workstation and just distribute the intersection task to other processors.

Some questions:

    Does anybody know of HW work in the ray/patch intersection area; e.g.,
    as a continuation of Pulleyblank & Kapenga's article in CG&A?

    Does anybody know of somebody working with transputers in ray-tracing? (We
    do know of the INMOS ray-tracing demo.)

    How enthusiastic are you about the approach of using digital signal
    processors? What other off-the-shelf processors would be specially suited
    as a basis for a ray-tracing coprocessor?

    What would be the specific computation to offload to a coprocessor?

I don't know what more to write now about our project. Below is a short summary
of the responses I got:

    From: kyriazis@yy.cicg.rpi.edu (George Kyriazis)

Works with the Pixel Machine, and will port RPI's ray-tracer to it.  Main
problem: duplication of code and data.

    From: jeff@CitIago.Bitnet (Jeff Goldsmith)

Has implemented ray-tracer with distributed database on hypercube.
Communication between parts of database by RPC. With 128 processors 50%
utilization. (Ref: 3rd Hypercube Concurrent Computation Proc.)  A proposal to
build custom silicon for intersection testing. In this case the rest of the
system could reside on an ordinary PC or workstation.

    From: gray@rhea.cray.com (Gray Lorig)

"Your abstract sounds interesting."

    From: Frederik Jansen <JANSEN@ibm.com>

Mike Henderson at Yorktown Heights is working on ray/patch intersection problem
(software approach).

    From: Mark VandeWettering <markv@drizzle.cs.uoregon.edu>

"As to HW implementation, my advice is NOT to take the path that I took, but to
implement one simple primitive: a bilinear interpolated triangular patch."
"Take a look at AT&T's 12 DSP chip raytracing board."  Would like to experiment
with implementing an accelerator based on Motorola DSP56001.

    From: tim%ducat.caltech.edu@Hamlet.Bitnet (Tim Kay)

"I am interested in how fast a *single* general purpose processor can raytrace
when it is assisted by a raytracing coprocessor of some sort.  There seems to
be tremendous potential for adding a small amount of raytracing HW to graphics
workstations."  "I am quite capable of ray tracing over our TCP/IP network."


    From: priol@tokaido.irisa.fr

"I work on ray-tracing on a MIMD computer like the Hypercube." Partitions scene
boundary as suggested by Cleary. To do it well sub-samples image before
parallel execution. With 30-40 processors 50% efficiency. Part of work
published in Eurographics'88.

    From: toc@wisdom.TN.CORNELL.EDU (Timothy F. O'Connor)

"Abstract sounds interesting. My main interest is in radiosity approach."

    From: Russ Tuck <tuck@cs.unc.edu>

"My work is summarized in hardcopy RTnews vol. 2, no. 2, June '88, 'Interactive
SIMD Ray Tracing.'"

That's it for now. I hope there has been some interest in this multi-feedback.
I'll get more meat of my own in the messages when our new work starts.

Thanks to you all! / Markku

-------------------------------------------------------------------------------

Miscellany
----------

From: hpfcla!subramn@cs.utexas.edu
Subject: Ray Tracing articles.

I would like you to bring this to the attention of the RT news group (if you
consider it appropriate).

There are lots of conference proceedings and journals other than SIGGRAPH &
CG&A which contain ray tracing articles.  Here at our university, at least, we
don't get all of those journals (for example, the Visual Computer) due to
budget constraints.  It would be nice for someone to post references to
relevant ray tracing articles so that all of us will be aware of the work going
on in ray tracing everywhere.  For instance, I have found several articles in
the Visual Computer that were relevant to what I was doing, but only after
someone else pointed me at them.  If these references were listed by someone
who gets these journals, it would make it easier to get access to the articles.

K.R.Subramanian
Dept. of Computer Sciences
The University of Texas at Austin
Austin, Tx-78712.
subramn@cs.utexas.edu
{uunet}!cs.utexas.edu!subramn

--------

[My two cents: we don't get "Visual Computer" around here, either.  I've been
helping to keep Paul Heckbert's "Ray Tracing Bibliography" up to date and would
like to obtain relevant references (preferably in REFER format) for next year's
copy for use in SIGGRAPH course notes. See SIGGRAPH '88 notes from the
"Introduction to Ray Tracing" course for the current version to see the state
of our reference list. - Eric]

----------------------------------------

From: "David F. Rogers" <hpfcla!dfr@USNA.MIL>
Subject:  Transforming normals

Was skimming the back issues of the RT News and your memo on transforming
normals caught my eye. Another way of looking at the problem is to recall that
a polygonal volume is made up of planes that divide space into two parts. The
columns of the volume matrix are formed from the coefficients of the plane
equations. Transforming a volume matrix requires that you premultiply by the
inverse of the manipulation matrix. The components of a normal are the first
three coefficients of the plane equation. Hence the same idea should apply.
(see PECG Sec. 4-3 on Roberts' algorithm, pp. 211-213).  Surprising what you
can learn from Roberts' algorithm, yet most people discount it.
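[Rogers' point, in code: normals, like the first three plane-equation
coefficients, transform by the inverse transpose of the modeling matrix rather
than by the matrix itself.  A sketch in C for the easy diagonal (nonuniform
scale) case, where the inverse transpose is just the reciprocal of each scale
factor; this example is mine, not from PECG.]

```c
/* Transform a surface normal under a nonuniform scale (sx, sy, sz).
   Points transform by the matrix itself; normals transform by its inverse
   transpose, which for a diagonal matrix is simply the reciprocal of each
   diagonal entry. */
void scale_normal(double sx, double sy, double sz,
                  const double n[3], double out[3])
{
    out[0] = n[0] / sx;
    out[1] = n[1] / sy;
    out[2] = n[2] / sz;    /* renormalize afterward if unit length matters */
}
```

For example, the plane x + y = 1 has normal (1,1,0); scaling x by 2 turns the
surface into x/2 + y = 1, whose normal is proportional to (1/2, 1, 0), exactly
what dividing by the scale factors produces.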

----------------------------------------

>From: stadnism@clutx.clarkson.edu (Steven Stadnicki,212 Reynolds,2684079,5186432664)
Subject: Some new thoughts on how to do caustics, mirrored reflection, etc.
[source: comp.graphics]

Here's a new idea I came up with to do caustics, etc. by ray tracing: from your
point on some surface, shoot out some number of rays (say ~100) in "random"
(I would probably use a jittered uniform distribution on the sphere).  For each
light source, keep track of all rays that come within some "distance" (some
angle) from the light source.  Then, for each of these rays, try getting closer
to the light source using some sort of Newton-type iteration method... for
example, to do mirrored reflection:

           \|/
           -o-   Light source
           /|\
                               |  
                +---+          | M
                | O |          | i
                | b |          | r
                | j |          | r
                | e |          | o
                | c |          | r
                | t |          |
----------------+---+--X-------+

>From point X, shoot out rays in the ~100 "random" directions mentioned above;
say one of them comes within 0.05 radians of the light source.  Do some sort of
update procedure on the ray to see if it keeps getting closer to the light
source; if it does, then you have a solution to the "mirrored reflection"
problem, and you can shade X properly.  This procedure will work for curved
mirrors as well as planar ones (unlike the previous idea I mentioned), and will
also handle caustics well.  It seems obvious to me that there will be bad cases
for the method, and it is certainly computationally expensive, but it looks
like a useful method.  Any comments?

                                         Steven Stadnicki
                                         Clarkson University
                                         stadnism@clutx.clarkson.edu
                                         stadnism@clutx.bitnet

----------------------------------------

>From: jms@antares.UUCP (joe smith)
Organization: Tymnet QSATS, San Jose CA
Subject: Re: Ray Tracing Novice Needs Your Help
[source: comp.graphics]

In article <2399@ssc-vax.UUCP> dmg@ssc-vax.UUCP (David Geary) writes:
>  What I'd really like is to have the source in C to a *simple*
>ray-tracer - one that I could port to my Amiga without too much 
>difficulty.
>~ David Geary, Boeing Aerospace,               ~ 

My standard answer to this question when it comes up is to locate the May/June
1987 issue of Amiga World.  It's the one that has the ray-traced robot juggler
on the cover.  The article "Graphic Scene Simulations" is a great overview of
the subject, and it includes the program listing in C.  (Well, most of the
program.  Details such as inputting the coordinates of all the objects are
omitted.)

----------------------------------------

From: hpfcla!sunrock!kodak!supra!reichert@Sun.COM (Mark Reichert x25948)
Subject: A call for vectors.....

Imagine a unit sphere centered at the origin.

Imagine a vector, the "reference vector", from the origin to any point on the
surface of this sphere.

I would like to create n vectors which will evenly sample the surface of our
sphere, within some given angle about that "reference vector".

I need to be able to jitter these vectors in such a way that no two vectors in
a given bunch could be the same.


This appears to be a job for spherical coordinates, but I can't seem to find a
formula that can treat the surface of a sphere as a "uniform" 2D surface (i.e.,
no bunching up at the poles).


I desire these vectors for generating soft shadows from spherical light
sources, and for diffuse illumination guessing.


I have something now which is empirical and slow - neither of which is a trait
I find very desirable.

I will have a need for these vectors often, and seldom will either the
angle or the number of vectors needed be the same across consecutive requests.


Can anyone help me?

----------------------------------------

>From: hoops@watsnew.waterloo.edu (HOOPS Workshop)
Subject: Needing Ray Tracing Research Topic
[source: comp.graphics]

As a System Design Engineeering undergrad at University of Waterloo, I am
responsible for preparing a 'workshop' paper each term.  I am fascinated with
ray tracing graphics, but what I really need is a good application that I can
work into a workable research topic that can be completed in 1 or 2 terms.

If anyone in netland can offer any information on an implementation of ray
tracing graphics for my workshop please email me.

Thanks in advance folks,

Tracey Bernath
System Design Engineering
University of Waterloo
Waterloo, Ontario, Canada 

              hoops@watsnew.uwaterloo.ca
 Bitnet:      hoops@water.bitnet                
 CSNet:       hoops@watsnew.waterloo.edu        
 uucp:        {utai,uunet}!watmath!watsnew!hoops

-------------------------------------------------------------------------------

Supersampling Discussion
------------- ----------

[A flurry of activity arose when someone asked about doing supersampling in a
ray tracer.  Below are some of the more interesting and useful replies. - Eric]

----------------------------------------

>From: jevans@cpsc.ucalgary.ca (David Jevans)
[source: comp.graphics]
Summary: blech

In article <5548@thorin.cs.unc.edu>, brown@tyler.cs.unc.edu (Lurch) writes:
> In article <5263@cbmvax.UUCP> steveb@cbmvax.UUCP (Steve Beats) writes:
> >In article <1351@umbc3.UMD.EDU> bodarky@umbc3.UMD.EDU (Scott Bodarky) writes:
> >If you sample the scene using one pixel per ray, you will get
> >pretty severe aliasing at high contrast boundaries.  One trick is to sample
> >at twice the vertical and horizontal resolution (yielding 4 rays per pixel)
> >and average the resultant intensities.  This is a pretty effective method
> >of anti-aliasing.
 
> From what I understand, the way to achieve 4 rays per pixel is to sample at
> vertical resolution +1, horizontal resolution +1, and treat each ray as a
> 'corner' of each pixel, and average those values.  This is super cheap compared
> to sampling at twice vertical and horizontal.

Blech!  Super-sampling, as suggested in the first article, works ok but is very
slow and 4 rays/pixel is not enough for high quality images.  Simply rendering
vres+1 by hres+1 doesn't gain you anything.  All you end up doing is blurring
the image.  This is VERY unpleasant and makes an image look out of focus.

Aliasing is an artifact of regular under-sampling.  Most people adaptively
super-sample in areas where it is needed (edges, textures, small objects).
Super-sampling in a regular pattern often requires more than 16 rays per
anti-aliased pixel to get acceptable results.  A great improvement comes from
filtering your rays instead of simply averaging them.  Even better is to fire
super-sample rays according to some distribution (eg. Poisson) and then filter
them.

Check SIGGRAPH proceedings from about 84 - 87 for relevant articles and
pointers to articles.  Changing a ray tracer from simple super-sampling to
adaptive super-sampling can be done in less time than it takes to render an
image, and will save you HUGE amounts of time in the future.  Filtering and
distributing rays takes more work, but the results are good.

David Jevans, U of Calgary Computer Science, Calgary AB  T2N 1N4  Canada
uucp: ...{ubc-cs,utai,alberta}!calgary!jevans

----------------------------------------

>From: awpaeth@watcgl.waterloo.edu (Alan Wm Paeth)
Subject: Re: raytracing in || (supersampling speedup)
[source: comp.graphics]

In article <5548@thorin.cs.unc.edu> brown@tyler.UUCP (Lurch) writes:
>
>From what I understand, the way to achieve 4 rays per pixel is to sample at
>vertical resolution +1, horizontal resolution +1, and treat each ray as a
>'corner' of each pixel, and average those values.  This is super cheap compared
>to sampling at twice vertical and horizontal.

This reuses rays, but since the number of parent rays and number of output
pixels match, this has to be the same as low-pass filtering the output produced
by a raytracer which casts the same number of rays (one per pixel).

The technique used by Sweeney in 1984 (while here at Waterloo) compares the
four pixel-corner rays and if they are not in close agreement subdivides the
pixel.  The recursion terminates either when the rays from the subpixel's
corners are in close agreement or when some max depth is reached. The subpixel
values are averaged to form the parent pixel intensity (though a more general
convolution could be used in gathering up the subpieces).

This approach means that the subpixel averaging takes place adaptively in
regions of pixel complexity, as opposed to globally filtering the entire output
raster (which the poster's approach does implicitly).

The addition can be quite useful. For instance, a scene of flat shaded polygons
renders in virtually the same time as a "one ray per pixel" implementation,
with some slight overhead well spent in properly anti-aliasing the polygon
edges -- no time is wasted on the solid areas.

   /Alan Paeth
   Computer Graphics Laboratory
   University of Waterloo
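[Sweeney's adaptive scheme, as Alan describes it, can be sketched in C for a
single channel.  trace() and the tolerance are stand-ins for the tracer's own
machinery; a production tracer would cache corner rays shared between pixels
and subpixels rather than re-shooting them as this toy does, and edge_scene is
only a hypothetical test scene with a vertical edge at x = 0.5.]

```c
/* Shoot the four corner rays of the region [x0,x1] x [y0,y1]; if they
   disagree by more than tol, recurse on the four quadrants (down to a max
   depth) and box-filter (average) the subpixel results into the parent. */
double sample_pixel(double (*trace)(double, double),
                    double x0, double y0, double x1, double y1,
                    double tol, int depth)
{
    double c00 = trace(x0, y0), c10 = trace(x1, y0);
    double c01 = trace(x0, y1), c11 = trace(x1, y1);
    double lo = c00, hi = c00;

    if (c10 < lo) lo = c10; if (c10 > hi) hi = c10;
    if (c01 < lo) lo = c01; if (c01 > hi) hi = c01;
    if (c11 < lo) lo = c11; if (c11 > hi) hi = c11;

    if (depth == 0 || hi - lo <= tol)
        return (c00 + c10 + c01 + c11) / 4.0;

    {
        double xm = (x0 + x1) / 2.0, ym = (y0 + y1) / 2.0;
        return (sample_pixel(trace, x0, y0, xm, ym, tol, depth - 1) +
                sample_pixel(trace, xm, y0, x1, ym, tol, depth - 1) +
                sample_pixel(trace, x0, ym, xm, y1, tol, depth - 1) +
                sample_pixel(trace, xm, ym, x1, y1, tol, depth - 1)) / 4.0;
    }
}

/* Hypothetical test scene: a vertical edge at x = 0.5. */
static double edge_scene(double x, double y)
{
    (void)y;
    return x < 0.5 ? 1.0 : 0.0;
}
```

On flat regions the corners agree immediately, so the cost matches one ray per
pixel corner; only the edge pays for the recursion.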

----------------------------------------

>From: andreww@dgp.toronto.edu (Andrew Chung How Woo)
Subject: anti-aliasing
[source: comp.graphics]

With all these discussions about anti-aliasing for ray tracing, I thought I
would get into the fun also.

As suggested by many people, adaptive sampling is a good way to start dealing
with anti-aliasing (suggested by Whitted).  For another quick hack on top of
adaptive sampling, you can add jitter (suggested by Cook).  The jitter factor
can be controlled by the recursive depth of the adaptive sampling.  This
combination tends to achieve decent quality.

Another method which nobody has mentioned is "stratified sampling".  This is
also a rather simple method.  Basically, the pixel is divided into an N by N
grid.  You use a random number generator to place a sample ray at cell (x,y) of
the grid, then shoot another ray, making sure that row x and column y are
discarded from further sampling, and so on.  Repeat this for N rays.  Note,
however, that no sharing of point sampling information is available here.

Andrew Woo

----------------------------------------

>From: loren@pixar.UUCP (Loren Carpenter)
Subject: Re: anti-aliasing
[source: comp.graphics]

[This is in response to Andrew Woo's article - Eric]

Rob Cook did this too.  He didn't call it "stratified sampling", though.  The
idea is suggested by the solutions to the "8 queens problem".  You want N
sample points, no 2 of which are in the same column, and no 2 of which are in
the same row.  Then you jitter on top of that....

p.s.  You better not use the same pattern for each pixel...


			Loren Carpenter
			...!{ucbvax,sun}!pixar!loren
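[The scheme Loren describes, sketched in C under illustrative names: shuffle a
column permutation so that no two samples share a row or a column, then jitter
each sample within its grid cell.  frand() stands in for whatever generator
the tracer uses, and, per the p.s., the shuffle should be redone per pixel.]

```c
#include <stdlib.h>

/* Uniform random number in [0, 1); a stand-in for the tracer's generator. */
static double frand(void) { return rand() / (RAND_MAX + 1.0); }

/* Fill xs[], ys[] with n jittered sample positions in [0,1)^2 such that
   each of the n rows and n columns of an n x n grid holds exactly one. */
void nrooks_samples(int n, double xs[], double ys[])
{
    int i, j, *col = malloc(n * sizeof *col);
    for (i = 0; i < n; i++) col[i] = i;
    for (i = n - 1; i > 0; i--) {        /* shuffle the column assignment */
        int k = (int)(frand() * (i + 1));
        j = col[i]; col[i] = col[k]; col[k] = j;
    }
    for (i = 0; i < n; i++) {
        xs[i] = (col[i] + frand()) / n;  /* column col[i], jittered */
        ys[i] = (i + frand()) / n;       /* row i, jittered */
    }
    free(col);
}
```

The shuffle is what the 8-queens analogy buys: N samples, no two in the same
row or column, with the jitter breaking up any residual regularity.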

----------------------------------------

[By the way, what kind of adaptive supersampling have people been using?  We've
had fairly good results with the simple "check four corners" algorithm and box
filtering the values generated.  We find that for complex scenes an adaptive
algorithm (which shoots 5 more rays if the subdivision criterion is met) shoots
about 25% more rays overall (a result that Linda Roy also obtained with her
ray-tracer).  What the subdivision criteria are affects this, of course.  We've
been using 0.1 as the maximum ratio of R,G, or B channels of the four pixel
corners.  How have you fared, and what kinds of filtering have you tried?
- Eric]
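[One plausible reading of that four-corner test, as a sketch (the 0.1 figure
and the exact form of the ratio test are my interpretation of the above, not
code from anyone's ray tracer): compare the corner colors channel by channel
and subdivide when the relative spread in any channel exceeds the threshold.

```c
typedef struct { double r, g, b; } Color;

static int spread_exceeds(double a, double b, double c, double d, double t)
{
    double lo = a, hi = a;
    if (b < lo) lo = b;  if (b > hi) hi = b;
    if (c < lo) lo = c;  if (c > hi) hi = c;
    if (d < lo) lo = d;  if (d > hi) hi = d;
    return hi > 0.0 && (hi - lo) / hi > t;   /* relative contrast test */
}

/* Return nonzero if the four corner samples of a (sub)pixel disagree
 * enough that five more rays (center and edge midpoints) should be shot. */
int needs_subdivision(Color c[4], double threshold)
{
    return spread_exceeds(c[0].r, c[1].r, c[2].r, c[3].r, threshold)
        || spread_exceeds(c[0].g, c[1].g, c[2].g, c[3].g, threshold)
        || spread_exceeds(c[0].b, c[1].b, c[2].b, c[3].b, threshold);
}
```
- Eric]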

-------------------------------------------------------------------------------

Distributed Ray Tracer Available

From: kyriazis@rpics (George Kyriazis)
[source: comp.graphics]

	During the last week I pulled myself together and managed to put out a
second version of my ray tracer (the old one was posted under the subject
"Another simple ray tracer available").  This one includes most of the options
described in R. Cook's paper on distributed ray tracing.

Capabilities of the ray tracer are:
	Gloss (blurred reflection)
	Translucency (blurred refraction)
	Penumbras (area light sources)
	Motion Blur
	Phong illumination model with one light source
	Spheres and squares
	Field of view and arbitrary position of the camera

The ray tracer has been tested on a SUN 3 and SUN 4.  I have it available under
anonymous ftp on life.pawl.rpi.edu (128.113.10.2) in the directory pub/ray
under the name ray.2.0.shar.  There are some older versions there if you want
to take a look at them.  If you can't ftp from there, send me mail and I'll
send you a
copy.  I also have a version for the AT&T Pixel Machine (for the few that have
access to one!).

No speed improvements have been made yet, but I hope I will have Goldsmith's
algorithm running on it soon.  I know that my file format is not standard, but
people can try to write their own routine to read the file.  It's pretty easy.

Hope you have fun!


  George Kyriazis
  kyriazis@turing.cs.rpi.edu
  kyriazis@ss0.cicg.rpi.edu

-------------------------------------------------------------------------------

Ray Tracing Program for 3b1

From: sid@chinet.UUCP (Sid Grange)
Subject: v05i046: ray tracing program for 3b1
[source: comp.sources.misc]
Posting-number: Volume 5, Issue 46
Archive-name: tracer

[This was posted to comp.sources.misc.  I include some of the documentation so
you can get a feel for it. Nothing special, but it might be fun if you have a
3b1 (it's supposed to work on other UNIX systems, too). - Eric]

NAME
     tracer- run a simple ray tracing procedure

DESCRIPTION
     Tracer is a program developed originally to study how
     ray tracing works; it was later modified to its present state
     to make it more suitable for animated film production.

     It is capable of depicting a number of balls (up to 150)
     and a plane that is covered with a tiling of any bitmapped picture.

PROGRAM NOTES
     This program generates a file containing a header with x and y sizes,
     followed by the data in 8-bit greyscale, one pixel to a character, in 
     scanlines.
     There are two necessary input files: ball data, and a pattern bitmap.
     The tiling bitmap can be digitized data; it must be in the form of
     scan lines no longer than 512 bytes, each followed by a newline.

-------------------------------------------------------------------------------

Map Archive

From: crum@lipari.usc.edu (Gary L. Crum)
Subject: DB:ADD SITE panarea.usc.edu (Internet archive for maps)
[source: comp.archive, and ftp]

An Internet archive for geographic-scale maps has been set up, starting
with data from the United States Geological Survey (USGS) National Cartographic
Information Center (NCIC), specifically a map of terrain in USGS Digital
Elevation Model (DEM) format.

The archive is on host panarea.usc.edu [128.125.3.54], in anonymous FTP
directory pub/map.  Gary Crum <crum@cse.usc.edu> is maintainer.

The pub/map directory is writable by anonymous ftp.  Approximately 50M bytes
are available for the map archive as of this writing.

NOTES:

* Files ending in the .Z extension have been compressed with the "compress"
program available on comp.sources archives such as j.cc.purdue.edu.  They
should be transferred using the "binary" mode of ftp to UNIX systems.  Send
mail to the maintainer to request format changes (e.g., to uuencoded form split
into small pieces).

* Some maps, e.g., DEM files from USGS, contain long lines which have been
observed to cause problems transferring with some FTP implementations.  In
particular, a version of the CMU TCP/IP package for VAX/VMS did not support
these long lines.

* Source code for UNIX tools that manipulate ANSI labeled tapes and VMS tapes
is available as pub/ansitape.tar.Z on the map archive host.

-----------------

Index for Map Archive on Internet Host panarea.usc.edu [128.125.3.54].
version of Mon Nov 14 09:41:10 PST 1988

NOTE:  This INDEX is descriptive to only the directory level in many cases.

-rw-r--r--  1 crum      1090600 May 26 09:33 dem/MAP.N0009338
-rw-r--r--  1 crum       278140 Nov 11 14:16 dem/MAP.N0009338.Z
	Digital Elevation Model 7.5 minute quad (see dem/README).
	Ft. Douglas quadrangle, including part of Salt Lake City, UT.
	Southeast corner has coordinates 40.75 N 111.75 W

drwxrwxrwx  2 crum          512 Nov  1 19:23 terrain-old-format/MAP.37N112W
drwxrwxrwx  2 crum          512 Nov  1 19:23 terrain-old-format/MAP.37N119W
drwxrwxrwx  2 crum          512 Nov  1 19:24 terrain-old-format/MAP.38N119W
	USGS NCIC terrain maps in "old" format before DEM was introduced.
	Files in these directories ending in extensions .[a-s] should be
	concatenated together after transfer.

-rw-rw-rw-  1 45         777251 Nov 11 11:10 world-digitized/world-digitized.tar.Z
drwxrwxr-x  7 crum          512 Nov 11 10:56 world-digitized/extracted
	The "extracted" directory is produced from the tar file.
	From world-digitized/expanded/doc/read.me :
		The World Digitized is a collection of more than 100,000
	points of latitude and longitude.  When connected together, these
	co-ordinates form outlines of the entire world's coastlands, islands,
	lakes, and national boundaries in surprising detail.

drwxrwxrwx  2 crum         1024 Nov 12 19:10 dlg/35N86W
	Digital Line Graph of area with top-left coordinates 35 N 86 W.
	See dlg/README.  From roskos@ida.org (Eric Roskos).

-------------------------------------------------------------------------------

Index of Back Issues, by Eric Haines


This is a list of all back issues of the RT News, email edition.  I'm
retroactively giving these issues numbers for quick reference purposes.
Topics are fully listed in the first issue in which they are discussed, and
follow-up material is listed in braces {}.  I've tried to summarize the
main topics covered, not the individual articles.


[Volume 0, August-December 1987] - Standard Procedural Databases, Spline
	Surface Intersection, Abnormal Normals

[Volume 1, Number 1,] 1/15/88 - Solid Modelling with Faceted Primitives,
	What's Wrong [and Right] with Octrees, Top Ten Hit Parade of Computer
	Graphics Books, Comparison of Efficiency Schemes, Subspaces and
	Simulated Annealing.

[Volume 1, Number 2,] 2/15/88 - Dore'

[Volume 1, Number 3,] 3/1/88 - {comments on Octrees, Simulated Annealing},
	Efficiency Tricks, More Book Recommendations, Octree Bug Alert

[Volume 1, Number 4,] 3/8/88 - Surface Acne, Goldsmith/Salmon Hierarchy
	Building, {more Efficiency Tricks}, Voxel/Quadric Primitive Overlap
	Determination

[Volume 1, Number 5,] 3/26/88 - {more on Efficiency, Voxel/Quadric}, Linear
	Time Voxel Walking for Octrees, more Efficiency Tricks, Puzzle,
	PECG Correction

[Volume 1, Number 6,] 4/6/88 - {more on Linear Time Voxel Walking}, Thoughts
	on the Theory of RT Efficiency, Automatic Creation of Object
	Hierarchies (Goldsmith/Salmon), Image Archive, Espresso Database
	Archive

[Volume 1, Number 7,] 6/20/88 - RenderMan & Dore', Commercial Ray Tracing
	Software, Benchmarks

[Volume 1, Number 8,] 9/5/88 - SIGGRAPH '88 RT Roundtable Summary, {more
	Commercial Software}, Typo in "Intro to RT" Course Notes, Vectorizing
	Ray-Object Intersections, Teapot Database Archive, Mark
	VandeWettering's Ray Tracer Archive, DBW Render, George Kyriazis
	Ray Tracer Archive, Hartman/Heckbert PostScript Ray Tracer (source),

[Volume 1, Number 9,] 9/11/88 - {much more on MTV's Ray Tracer and utilities},
	Archive for Ray Tracers and databases (SPD, teapot), Sorting on Shadow
	Rays for Kay/Kajiya, {more on Vectorizing Ray-Object Intersection},
	{more on George Kyriazis' Ray Tracer}, Bitmaps Library

[Volume 1, Number 10,] 10/3/88 - Bitmap Utilities, {more on Kay/Kajiya},
	Wood Texture Archive, Screen Bounding Boxes, Efficient Polygon
	Intersection, Bug in Heckbert Ray Tracer (?), {more on MTV's Ray
	Tracer and utilities}, Neutral File Format (NFF)

[Volume 1, Number 11,] 11/4/88 - Ray/Triangle Intersection with Barycentric
	Coordinates, Normal Transformation, {more on Screen Bounding Boxes},
	{comments on NFF}, Ray Tracing and Applications, {more on Kay/Kajiya
	and eye rays}, {more on Wood Textures}, Virtual Lighting, Parallel
	Ray Tracing, RenderMan, On-Line Computer Graphics References Archive,
	Current Mailing List

-----------------------------------------------------------------------------
END OF RTNEWS



From m-cohen@cs.utah.edu Tue Feb 21 19:21:56 1989
Return-Path: <m-cohen@cs.utah.edu>
Received: from cs.utah.edu by turing.cs.rpi.edu (4.0/1.2-RPI-CS-Dept)
	id AA14160; Tue, 21 Feb 89 19:21:46 EST
Received: by cs.utah.edu (5.61/utah-2.1-cs)
	id AA19818; Tue, 21 Feb 89 16:28:08 -0700
Date: Tue, 21 Feb 89 16:28:08 -0700
From: m-cohen@cs.utah.edu (Michael F Cohen)
Message-Id: <8902212328.AA19818@cs.utah.edu>
To: FISHER%3D.dec@decwrl.dec.com, andreww@dgp.toronto.edu,
        cornell!apctrc!bigmac!senften@cs.utah.edu,
        cornell!apctrc!zfms0a@cs.utah.edu, apollo!arvo@eddie.mit.edu,
        apollo!johnf@eddie.mit.edu, apollo!scofield@eddie.mit.edu,
        atc@cs.utexas.EDU, barr@csvax.caltech.edu, barsky@miro.berkeley.edu,
        bogart%gr@cs.utah.edu, ckchee@dgp.toronto.edu, dana!mrk@hplabs.hp.com,
        daniel@apollo.com, dfr@cad.usna.mil, dk@csvax.caltech.edu,
        dmarsh@apple.com, dutio!fwj@haring.cwi.NL, dutrun!frits@haring.cwi.NL,
        dutrun!wim@haring.cwi.NL, ecn!jack@haring.cwi.NL,
        esunix!tmalley@cs.utah.edu,
        cornell!fornax!sfu-cmpt!chapman@cs.utah.edu, gjward@lbl.gov,
        glassner@xerox.com, gould!rti!ndl!jtw@sun.com, grant@icdc.llnl.gov,
        gray%rhea.CRAY.COM@uc.msc.umn.edu, hohmeyer@miro.berkeley.edu,
        hpfcla!hpfcrs!eye!erich@hplabs.HP.COM, hultquis@prandtl.nas.nasa.gov,
        inria!irisa!priol@haring.cwi.NL, jaf@squid.tn.cornell.edu,
        jeff@hamlet.caltech.edu, jevans@cpsc.ucalgary.ca, joy@ucdavis.edu,
        jp@apple.com, kelvin@cs.utexas.edu, kyriazis@turing.cs.rpi.edu,
        lister@dg-rtp.dg.com, litwinow@apple.com, lroy@sgi.com,
        m-cohen@cs.utah.edu, markv@cs.uoregon.edu, megatek!kuchkuda@ucsd.edu,
        mike@brl.mil, mjn@cs.brown.edu,
        mohta%titcce.cc.titech.junet%utokyo-relay.csnet@RELAY.CS.NET,
        palmer@ncifcrf.gov, ph@miro.berkeley.edu, phil@zeno0.rdrc.rpi.edu,
        pixar!pat@ucbvax.berkeley.edu, pixar!peachey@ucbvax.berkeley.edu,
        pls%pixels@research.att.com, poulin@dgp.toronto.edu, pre@cs.hut.fi,
        pss@cs.brown.edu, raycasting@duke.cs.duke.edu,
        roy@wisdom.tn.cornell.edu, sgi!paul@pyramid.pyramid.com,
        shaffer@vtopus.cs.vt.edu, spach@hplabs.hp.com,
        speer@ucbvax.berkeley.edu, spencer@tut.cis.ohio-state.edu,
        stadnism@clutx.clarkson.edu, stepoway@smu.edu, subramn@cs.utexas.edu,
        supra!reichert@kodak.com, tim@csvax.caltech.edu,
        toc@wisdom.tn.cornell.edu, tuck@cs.unc.edu, turk@cs.unc.edu,
        wbt@cockle.tn.cornell.edu, webber@aramis.rutgers.edu,
        westover@cs.unc.edu, wtl@cockle.tn.cornell.edu
Subject: Ray Tracing News
Status: R

 _ __                 ______                         _ __
' )  )                  /                           ' )  )
 /--' __.  __  ,     --/ __  __.  _. o ____  _,      /  / _  , , , _
/  \_(_/|_/ (_/_    (_/ / (_(_/|_(__<_/ / <_(_)_    /  (_</_(_(_/_/_)_
             /                               /|
            '                               |/

			"Light Makes Right"

			 February 20, 1989
		         Volume 2, Number 2

Compiled by Eric Haines, 3D/Eye Inc, 2359 Triphammer Rd, Ithaca, NY 14850
    607-257-1381, hpfcla!hpfcrs!eye!erich@hplabs.hp.com
All contents are US copyright (c) 1989 by the individual authors

Contents:
    Introduction (Eric Haines)
    New Subscribers (Turner Whitted, Mike Muuss)
    The BRL CAD Package (Mike Muuss)
    New Book: _Illumination and Color in Computer Generated Imagery_, Roy Hall
	(Eric Haines)
    Uniform Distribution of Sample Points on a Surface
    Depth of Field Problem (Marinko Laban)
    Query on Frequency Dependent Reflectance (Mark Reichert)
    "Best of comp.graphics" ftp Site (Raymond Brand)
    Notes on Frequency Dependent Refraction [comp.graphics]
    Sound Tracing [comp.graphics]
    Laser Speckle [comp.graphics]

-----------------------------------------------------------------------------

Introduction, by Eric Haines

Whew, things have piled up!  I've culled my comp.graphics findings as best as
I can.  I've decided to delete everything on the question of what a stored
ray-trace image should be called ("image", "bytemap", "pixmap", and "bitmap"
were some of the candidates).  It's a good question, but the discussion just
got too long to recap.  Paul Heckbert's original posting advocated not using
"bitmap" for 24 bit images, since "bitmap" denotes an M x N x 1 bit deep image
in most settings.  It would be pleasant to get a consensus on acceptable usage,
but it's also interesting to me from a `word history' standpoint.  If you have
an opinion you'd like to share on this topic, pass it on to me and I'll
summarize them (if possible, a 25 word or less summation would be nice).  My
own is: "I'm a product of my environment.  Cornell used bitmap, Hewlett-Packard
uses bitmap, so I tend to use `bitmap', `24 bit deep bitmap', or `image'".

I've put all the comp.graphics postings at the end, and the good news is that
the queue is now empty.  The `Sound Tracing' postings to comp.graphics were many
and wordy.  I've tried to pare them down to references, interesting questions
that arose, and informed (or at least informed-sounding to my naive ears)
opinions.

-----------------------------------------------------------------------------

New Subscribers
---------------

# Turner Whitted
# Numerical Design Limited
# 133 1/2 E. Franklin Street
# P.O. Box 1316
# Chapel Hill, NC 27514
alias	turner_whitted	gould!rti!ndl!jtw@sun.com

[this mail path is just a good guess - does gould have an arpa connection?
The uucp path I use is:
	    turner_whitted	hpfcrs!hpfcla!hplabs!sun!gould!rti!ndl!jtw
]


# Michael John Muuss -- ray-tracing for predictive analysis of 3-D CSG models
# Leader, Advanced Computer Systems Team
# Ballistic Research Lab
# APG, MD  21005-5066
# USA
# ARPANET:  mike@BRL.MIL
# (301)-278-6678 		[telephone is discouraged, use E-mail instead]
alias	mike_muuss	mike@BRL.MIL

I lead BRL's Advanced Computer Systems Team (ACST) in research projects in
(a) CSG solid modeling, ray-tracing, and analysis, (b) advanced processor
architectures [mostly MIMD of late], (c) high-speed networking, and (d)
operating systems.  We are the developers of the BRL-CAD Package, which is a
sophisticated Combinatorial Solid Geometry (CSG) solid modeling system, with
ray-tracing library, several lighting models, a variety of non-optical
"lighting" models (eg, radar) [available on request], a device independent
framebuffer library, a collection of image-processing tools, etc.  This
software totals about 150,000 lines of C code, which we make available in
source form under the terms of a "limited distribution agreement" at no charge.

My personal interests wander all over the map, right now I'm fiddling with some
animation software, some D/A converters for digital music processing, and some
improvements to our network-distributed ray-tracer protocol.

Thanks for the invitation to join!

	Best,
	 -mike

-----------------------------------------------------------------------------

			The BRL CAD PACKAGE
			   Short Summary

In FY87 two major releases of the BRL CAD Package software were made
(Feb-87, July-87), along with two editions of the associated 400 page
manual.  The package includes a powerful solid modeling capability and a
network-distributed image-processing capability.  This software is now
running at over 300 sites.  It has been distributed to 42 academic
institutions in twenty states and four countries, including Yale,
Princeton, Stanford, MIT, USC, and UCLA.  The University of California -
San Diego is using the package for rendering brains in their Brain Mapping
Project at the Quantitative Morphology Laboratory.  75 different businesses
have requested and received the software, including 23 Fortune 500
companies, among them General Motors, AT&T, Chrysler Motors Corporation,
Boeing, McDonnell Douglas, Lockheed, General Dynamics, LTV Aerospace &
Defense Co., and Hewlett Packard.  16 government organizations representing
all three services, NSA, NASA, NBS and the Veterans Administration are
running the code.  Three of the four national laboratories have copies of
the BRL CAD package.  More than 500 copies of the manual have been
distributed.

BRL-CAD started in 1979 as a task to provide an interactive
graphics editor for the BRL target description data base.

Today it is > 100,000 lines of C source code:

	Solid geometric editor
	Ray tracing utilities
	Lighting model
	Many image-handling, data-comparison, and other
	supporting utilities

It runs under UNIX and is supported over more than a dozen product
lines from Sun Workstations to the Cray 2.

In terms of geometrical representation of data, BRL-CAD supports:

	the original Constructive Solid Geometry (CSG) BRL data
	base which has been used to model > 150 target descriptions,
	domestic and foreign

	extensions to include both a Naval Academy spline 
	(Uniform B-Spline Surface) as well as a U. of
	Utah spline (Non-Uniform Rational B-Spline [NURB] Surface)
	developed under NSF and DARPA sponsorship

	a faceted data representation (called PATCH),
	developed by the Falcon/Denver
	Research Institute and used by the Navy and Air Force for
	vulnerability and signature calculations (> 200 target
	descriptions, domestic and foreign)

It supports association of material (and other attribute properties)
with geometry which is critical to subsequent applications codes.

It supports a set of extensible interfaces by means of which geometry
(and attribute data) are passed to applications:

	Ray casting
	Topological representation
	3-D Surface Mesh Generation
	3-D Volume Mesh Generation
	Analytic (Homogeneous Spline) representation

Applications linked to BRL-CAD:

o Weights and Moments-of-Inertia
o An array of Vulnerability/Lethality Codes
o Neutron Transport Code
o Optical Image Generation (including specular/diffuse reflection,
	refraction, and multiple light sources, animation, interference)
o Bistatic laser target designation analysis
o A number of Infrared Signature Codes
o A number of Synthetic Aperture Radar Codes (including codes
	due to ERIM and Northrop)
o Acoustic model predictions
o High-Energy Laser Damage
o High-Power Microwave Damage
o Link to PATRAN [TM] and hence to ADINA, EPIC-2, NASTRAN, etc.
	for structural/stress analysis
o X-Ray calculation

BRL-CAD source code has been distributed to approximately 300
computer sites, several dozen outside the US.

----------

To obtain a copy of the BRL CAD Package distribution, you must send
enough magnetic tape for 20 Mbytes of data. Standard nine-track
half-inch magtape is the strongly preferred format, and can be written
at either 1600 or 6250 bpi, in TAR format with 10k byte records. For
sites with no half-inch tape drives, Silicon Graphics and SUN tape
cartridges can also be accommodated. With your tape, you must also
enclose a letter indicating

(a) who you are,
(b) what the BRL CAD package is to be used for,
(c) the equipment and operating system(s) you plan on using,
(d) that you agree to the conditions listed below.

This software is an unpublished work that is not generally available to
the public, except through the terms of this limited distribution.
The United States Department of the Army grants a royalty-free,
nonexclusive, nontransferable license and right to use, free of charge,
with the following terms and conditions:

1.  The BRL CAD package source files will not be disclosed to third
parties.  BRL needs to know who has what, and what it is being used for.

2.  BRL will be credited should the software be used in a product or written
about in any publication.  BRL will be referenced as the original
source in any advertisements.

3.  The software is provided "as is", without warranty by BRL.
In no event shall BRL be liable for any loss or for any indirect,
special, punitive, exemplary, incidental, or consequential damages
arising from use, possession, or performance of the software.

4.  When bugs or problems are found, you will make a reasonable effort
to report them to BRL.

5.  Before using the software at additional sites, or for permission to
use this work as part of a commercial package, you agree to first obtain
authorization from BRL.

6.  You will own full rights to any databases or images you create with this
package.

All requests from US citizens, or from US government agencies should be
sent to:

	Mike Muuss
	Ballistic Research Lab
	Attn: SLCBR-SECAD
	APG, MD  21005-5066

If you are not a US citizen (regardless of any affiliation with a
US industry), or if you represent a foreign-owned or foreign-controlled
industry, you must send your letter and tape through your Ambassador to
the United States in Washington DC. Have your Ambassador submit the
request to:

	Army Headquarters
	Attn: DAMI-FL
	Washington, DC  20310

Best Wishes,
 -Mike Muuss

Leader, Advanced Computer Systems Team
ArpaNet:  <Mike @ BRL.ARPA>

--------

p.s. from David Rogers:

If you have the _Techniques in Computer Graphics_ book from Springer-Verlag,
the frontispiece was done with RT, the BRL ray tracer.  It is also discussed
in a paper by Mike Muuss in that book.

--------

p.s. from Eric Haines:

Mike Muuss was kind enough to send me the documentation (some two inches
thick) for the BRL package.  I haven't used the BRL software (sadly, it does
not seem to run on my HP machine yet - I hope someone will do a conversion
someday...), but the package looks pretty impressive.  Also, such things as the
Utah RLE package and `Cake' (an advanced form of `make') come as part of the
distribution.  There are also interesting papers on the system, the design
philosophy, parallelism, and many other topics included in the documentation.

-----------------------------------------------------------------------------

_Illumination and Color in Computer Generated Imagery_
    by Roy Hall, Springer-Verlag, New York, 1989, 282 pages
    (article by Eric Haines)

Roy Hall's book is out, and all I'll say about it is that you should have one.
The text (what little I've delved into so far) is well written and complemented
with many explanatory figures and images.  There are also many appendices
(about 100 pages worth) filled with concise formulae and "C" code.  Below is
the top-level Table of Contents, to give you a sense of what the book
covers.

The "C" code will probably be available publicly somewhere sometime soon.  I'll
post the details here when it's ready for distribution.

    1.0 Introduction				 8 pages
    2.0 The Illumination Process		36 pages
    3.0 Perceptual Response			18 pages
    4.0 Illumination Models			52 pages
    5.0 Image Display				40 pages
    Appendix I - Terminology			 2 pages
    Appendix II - Controlling Appearance	10 pages
    Appendix III - Example Code			86 pages
    Appendix IV - Radiosity Algorithms		14 pages
    Appendix V - Equipment Sources		 4 pages
    References					 8 pages
    Index					 4 pages

-----------------------------------------------------------------------------

Uniform Distribution of Sample Points on a Surface

[Mark Reichert asked last issue how to get a random sampling of a sphere]

    How to generate a uniformly distributed set of rays over the unit sphere:
generate a point inside the bi-unit cube (three uniform random numbers in
[-1,1]).  Is that point inside the unit sphere (and not at the origin)?  If
not, toss it and generate another (this doesn't happen too often).  If so,
treat it as a vector and normalize it.  Poof, a vector on the unit sphere.
This won't guarantee an isotropic covering of the unit sphere, but it is
helpful for generating random samples.

--Jeff Goldsmith

--------

    One method is simply to do a longitude/latitude split-up of the sphere (and
randomly sample within each patch), but instead of making the latitude lines
at even degree intervals, put the latitude divisions at even intervals along
the sphere's axis (instead of even altitude [a.k.a. theta] angle intervals).
Equal axis divisions give us equal areas on the sphere's surface (amazingly
enough - I didn't believe it was this simple when I saw it in the Standard
Mathematics Tables book, so I rederived it just to be sure).

    For instance, let's say you'd like 32 samples on a unit sphere.  Say we
make 8 longitude lines, so that now we want to make 4 patches per slice, and so
wish to make 4 latitudinal bands of equal area.  Splitting up the vertical axis
of the sphere, we want divisions at -0.5, 0, and 0.5.  To change these
divisions into altitude angles, we simply take the arcsin of the axis values,
e.g. arcsin(0.5) is 30 degrees.  Putting latitude lines at the equator and at
30 and -30 degrees then gives us equal area patches on the sphere.  If we
wanted 5 patches per slice, we would divide the axis of the unit sphere (-1 to
1) into 5 pieces, and so get -0.6,-0.2,0.2,0.6 as inputs for arcsin().  This
gives latitude lines on both hemispheres at 36.87 and 11.537 degrees.

    The problem with the whole technique is deciding how many longitude vs.
latitude lines to make.  Too many longitude lines and you get narrow patches,
too many latitude and you get squat patches.  About 2 * long = lat seems pretty
good, but this is just a good guess and not tested.

    Another problem is getting an even jitter to each patch.  Azimuth is
obvious, but you have to jitter in the domain for the altitude.  For example,
in a patch with an altitude from 30 to 90 degrees, you cannot simply select a
random degree value between 30 and 90, but rather must get a random value
between 0.5 and 1 (the original axis domain) and take the arcsin of this to
find the degree value.  (If you didn't do it this way, the samples would tend
to cluster toward the poles instead of being evenly distributed.)

    Yet another problem with the above is that you get patches whose geometry
and topology can vary widely.  Patches at the pole are actually triangular, and
patches near the equator will be much more squat than those closer to the
poles.  If you would rather have patches with more of an equal extent than a
perfectly equal area, you could use a cube with a grid on each face cast upon
the sphere (radiosity uses half of this structure for hemi-cubes).  The areas
won't be equal, but they'll be pretty close and you can weight the samples
accordingly.  There are many other nice features to using this cast cube
configuration, like being able to use scan-line algorithms, being able to vary
grid size per face (or even use quadtrees), being able to access the structure
without having to perform trigonometry, etc.  I use it to tessellate spheres in
the SPD package so that I won't get those annoying clusterings at the poles of
the sphere, which can be particularly noticeable when using specular
highlighting.

--Eric Haines

-----------------------------------------------------------------------------

Depth of Field Problem

From: Marinko Laban via Frits Post	dutrun!frits@mcvax.cwi.nl


First an introduction. I'm a Computer Graphics student at the Technical
University of Delft, The Netherlands. My assignment was to do some research
about distributed ray tracing. I actually implemented a distributed ray tracer,
but during experiments a very strange problem came up. I implemented
depth-of-field exactly in the way R.L.  Cook described in his paper. I decided
to do some experiments with the shape of the f-stop of the simulated camera.
First I simulated a square-shaped f-stop.  Now I know this isn't the real thing
in an actual photo camera, but I just tried it.  I divided the square f-stop
into a regular raster of N x N sub-squares, just in the way you would subdivide a
pixel in subpixels. All the midpoints of the subsquares were jittered in the
usual way. Then I rendered a picture. Now here comes the strange thing. My
depth-of-field effect was pretty accurate, but on some locations some jaggies
were very distinct. There were about 20 pixels in the picture that showed very
clear aliasing of texture and object contours. The funny thing was that the
rest of the picture seemed alright. When I rendered the same picture with a
circle-shaped f-stop, the jaggies suddenly disappeared! I browsed through my
code of the square f-stop, but I couldn't find any bugs. I also couldn't find a
reasonable explanation of the appearance of the jaggies. I figure it might have
something to do with the square being not point-symmetric, but that's as far as
I can get.  I would like to know if someone has had experience with the same
problem, and whether somebody has a good explanation for it ...
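[For reference, the two lens-sampling schemes under discussion look roughly
like this; the routine names are mine, and the circular version below simply
rejects uniform square samples outside the disk rather than reproducing
Marinko's actual code. - Eric]

```c
#include <stdlib.h>

static double frand(void) { return rand() / (RAND_MAX + 1.0); }

/* Jittered point in cell (i,j) of an n x n grid over a square aperture
 * of half-width r, as in the square f-stop described above. */
void square_lens_point(int i, int j, int n, double r, double *lx, double *ly)
{
    *lx = r * (2.0 * (i + frand()) / n - 1.0);
    *ly = r * (2.0 * (j + frand()) / n - 1.0);
}

/* Circular aperture of radius r: draw uniform points in the bounding
 * square and keep the first one that falls inside the disk. */
void disk_lens_point(double r, double *lx, double *ly)
{
    do {
        *lx = r * (2.0 * frand() - 1.0);
        *ly = r * (2.0 * frand() - 1.0);
    } while (*lx * *lx + *ly * *ly > r * r);
}
```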

Many thanks in advance,
Marinko Laban

-----------------------------------------------------------------------------

Query on Frequency Dependent Reflectance

From: hpfcla!sunrock!kodak!supra!reichert@Sun.COM (Mark Reichert x25948)

Hello.

I'm adding fresnel reflectance to my shader.  I'm in need of data for
reflectance as a function of frequency for non-polarized light at normal
incidence.  I would like to build a stockpile of this data for a wide variety
of materials.  I currently have some graphs of this data, but would much prefer
the actual sample points in place of the curve-fitted stuff I have now. (not to
mention the typing that you might save me).

If you have stuff such as this, and can share it with me, I would be most
appreciative. Also, if there is some Internet place where I might look, that
would be fine too.

Thanks,

Mark

-----------------------------------------------------------------------------

"Best of comp.graphics" ftp Site, by Raymond Brand

A collection of the interesting/useful [in my opinion] articles from
comp.graphics over the last year and a half is available for anonymous ftp.

It contains answers to most of the "most asked" questions from that period
as well as most of the sources posted to comp.graphics.

Now that you know what is there, you can find it in directory pub/graphics
at albanycs.albany.edu.

If you have anything to add to the collection, wish to update something
in it, or have some ideas on how to organize it, please contact me at
one of the following.

[There's also a subdirectory called "ray-tracers" which has source code for
you-know-whats and other software--EAH]

--------
Raymond S. Brand                 rsbx@beowulf.uucp
3A Pinehurst Ave.                rsb584@leah.albany.edu
Albany NY  12203                 FidoNet 1:7729/255 (518-489-8968)
(518)-482-8798                   BBS: (518)-489-8986

-----------------------------------------------------------------------------

Notes on Frequency Dependent Refraction

Newsgroups: comp.graphics

In article <3324@uoregon.uoregon.edu>, markv@uoregon.uoregon.edu (Mark VandeWettering) writes:
> }>	Finally, has anyone come up with a raytracer whose refraction model
> }> takes into account the varying indices of refraction of different light
> }> frequencies?  In other words, can I find a raytracer that, when looking
> }> through a prism obliquely at a light source, will show me a rainbow?
> }
> }     This could be tough. The red, green, and blue components of monitors
> }only simulate the full color spectrum. On a computer, yellow is a mixture
> }of red and green. In real life, yellow is yellow. You'd have to cast a
> }large number of rays and use a large amount of computer time to simulate
> }a full color spectrum. (Ranjit pointed this out in his article and went
> }into much greater detail).
>
> Actually, this problem seems the easiest.  We merely have to trace rays
> of differing frequency (perhaps randomly sampled) and use Fresnel's
> equation to determine refraction characteristics.  If you are trying to
> model phase effects like diffraction, you will probably have a much more
> difficult time.

This has already been done by a number of people.  One paper by T. L. Kunii
describes a renderer called "Gemstone Fire" or something.  It models refraction
as you suggest to get realistic looking gems.  Sorry, but I can't recall where
(or if) it has been published.  I have also read several (as yet) unpublished
papers which do the same thing in pretty much the same way.

David Jevans, U of Calgary Computer Science, Calgary AB  T2N 1N4  Canada
uucp: ...{ubc-cs,utai,alberta}!calgary!jevans

--------

>From: coifman@yale.UUCP (Ronald Coifman)

>>     This could be tough. ...
>
>This is the easy part...
>You fire say 16 rays per pixel anyway to do
>antialiasing, and assign each one a color (frequency).  When the ray
>is refracted through an object, take into account the index of
>refraction and apply Snell's law.  A student here did that
>and it worked fine.  He simulated rainbows and diffraction effects
>through prisms.
>
>	(Spencer Thomas (U. Utah, or is it U. Mich. now?) also implemented 
>the same sort of thing at about the same time.)

  Yep, I got a Masters degree for doing that (I was the student Rob is
referring to).  The problem in modelling dispersion is to integrate the primary
sample over the visible frequencies of light.  Using the Monte Carlo
integration techniques of Cook on the visible spectrum yields a nice, fairly
simple solution, albeit at the cost of supersampling at ~10-20 rays per pixel
where dispersive sampling is required.
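The Monte Carlo spectral sampling described here can be sketched in a few
lines.  This is only an illustration, not Musgrave's code: the Cauchy formula
for the wavelength-dependent index, and its coefficients (roughly fused
silica), are assumptions I've filled in.

```python
import math
import random

def cauchy_index(wavelength_nm, A=1.458, B=3540.0):
    # Cauchy approximation n(lambda) = A + B / lambda^2.  The default
    # coefficients are rough values for fused silica; treat them as
    # placeholders for whatever material is being modelled.
    return A + B / (wavelength_nm ** 2)

def refract(incident_angle, n1, n2):
    # Snell's law: n1 sin(theta1) = n2 sin(theta2).
    # Returns the refracted angle, or None on total internal reflection.
    s = n1 * math.sin(incident_angle) / n2
    if abs(s) > 1.0:
        return None
    return math.asin(s)

def sample_dispersed_rays(incident_angle, n_samples=16):
    # Monte Carlo integration over the visible spectrum: each primary
    # sample carries one randomly chosen wavelength, and refracts by
    # that wavelength's index, so blue samples bend more than red ones.
    rays = []
    for _ in range(n_samples):
        wl = random.uniform(380.0, 700.0)   # visible range, in nm
        theta = refract(incident_angle, 1.0, cauchy_index(wl))
        rays.append((wl, theta))
    return rays
```

Averaging the shaded results of the ~16 samples per pixel then plays the same
role as the antialiasing supersampling mentioned above.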

  Thomas used a different approach.  He adaptively subdivided the spectrum
based on the angle of spread of the dispersed ray, given the range of frequen-
cies it represents.  This can be more efficient, but can also have unlimited 
growth in the number of samples.  Credit Spencer Thomas; he was first.

  As at least one person has pointed out, perhaps the most interesting aspect
of this problem is that of representing the spectrum on an RGB monitor.  That's
an open problem; I'd be really interested in hearing about any solutions that
people have come up with.  (No, the obvious CIE to RGB conversion doesn't work
worth a damn.)

  My solution(s) can be found in "A Realistic Model of Refraction for Computer
Graphics", F. Kenton Musgrave, Modelling and Simulation on Microcomputers 1988
conference proceedings, Soc. for Computer Simulation, Feb. 1988, in my UC Santa
Cruz Masters thesis of the same title, and (hopefully) in an upcoming paper
"Prisms and Rainbows: a Dispersion Model for Computer Graphics" at the Graphics
Interface conference this summer.  (I can e-mail troff sources for these papers
to interested parties, but you'll not get the neat-o pictures.)

  For a look at an image of a physical model of the rainbow, built on the 
dispersion model, see the upcoming Jan. IEEE CG&A "About the Cover" article.

					Ken Musgrave

Ken Musgrave			arpanet: musgrave@yale.edu
Yale U. Math Dept.		
Box 2155 Yale Station		Primary Operating Principle:
New Haven, CT 06520				Deus ex machina

-------------------------------------------------------------------------------

Sound Tracing

>From: ph@miro.Berkeley.EDU (Paul Heckbert)
Subject: Re: Sound tracing
[source: comp.graphics]

In article <239@raunvis.UUCP> kjartan@raunvis.UUCP
(Kjartan Pierre Emilsson Jardedlisfraedi) asks:
>  Has anyone had any experience with the application of ray-tracing techniques
> to simulate acoustics, i.e the formal equivalent of ray-tracing using sound
> instead of light? ...

Yes, John Walsh, Norm Dadoun, and others at the University of British Columbia
have used ray tracing-like techniques to simulate acoustics.  They called their
method of tracing polygonal cones through a scene "beam tracing" (even before
Pat Hanrahan and I independently coined the term for graphics applications).

Walsh et al simulated the reflection and diffraction of sound, and were able to
digitally process an audio recording to simulate room acoustics to aid in
concert hall design.  This is my (four year old) bibliography of their papers:

    %A Norm Dadoun
    %A David G. Kirkpatrick
    %A John P. Walsh
    %T Hierarchical Approaches to Hidden Surface Intersection Testing
    %J Proceedings of Graphics Interface '82
    %D May 1982
    %P 49-56
    %Z hierarchical convex hull or minimal bounding box to optimize intersection
    testing between beams and polyhedra, for graphics and acoustical analysis
    %K bounding volume, acoustics, intersection testing

    %A John P. Walsh
    %A Norm Dadoun
    %T The Design and Development of Godot:
    A System for Room Acoustics Modeling and Simulation
    %B 101st meeting of the Acoustical Society of America
    %C Ottawa
    %D May 1981

    %A John P. Walsh
    %A Norm Dadoun
    %T What Are We Waiting for?  The Development of Godot, II
    %B 103rd meeting of the Acoustical Society of America
    %C Chicago
    %D Apr. 1982
    %K beam tracing, acoustics

    %A John P. Walsh
    %T The Simulation of Directional Sound Sources
    in Rooms by Means of a Digital Computer
    %R M. Mus. Thesis
    %I U. of Western Ontario
    %C London, Canada
    %D Fall 1979
    %K acoustics

    %A John P. Walsh
    %T The Design of Godot:
    A System for Room Acoustics Modeling and Simulation, paper E15.3
    %B Proc. 10th International Congress on Acoustics
    %C Sydney
    %D July 1980

    %A John P. Walsh
    %A Marcel T. Rivard
    %T Signal Processing Aspects of Godot:
    A System for Computer-Aided Room Acoustics Modeling and Simulation
    %B 72nd Convention of the Audio Engineering Society
    %C Anaheim, CA
    %D Oct. 1982

Paul Heckbert, CS grad student
508-7 Evans Hall, UC Berkeley		UUCP: ucbvax!miro.berkeley.edu!ph
Berkeley, CA 94720			ARPA: ph@miro.berkeley.edu

--------

>From: jevans@cpsc.ucalgary.ca (David Jevans)
Subject: Re: Sound tracing
[source: comp.graphics]

Three of my friends did a sound tracer for an undergraduate project last year.
The system used directional sound sources and microphones and a
ray-tracing-like algorithm to trace the sound.  Sound sources were digitized
and stored in files.  Emitters used these sound files.  At the end of the 4
month project they could digitize something, like a person speaking, run it
through the system, then pump the results through a speaker.  An acoustic
environment was built (just like you build a model for graphics).  You could
get effects like echoes and such.  Unfortunately this was never published.  I
am trying to convince them to work on it next semester...

David Jevans, U of Calgary Computer Science, Calgary AB  T2N 1N4  Canada
uucp: ...{ubc-cs,utai,alberta}!calgary!jevans

--------

>From: eugene@eos.UUCP (Eugene Miya)

May I also add that you research all the work on acoustic lasers done at places
like the Applied Physics Lab.

--------

>From: riley@batcomputer.tn.cornell.edu (Daniel S. Riley)
Organization: Cornell Theory Center, Cornell University, Ithaca NY

In article <572@epicb.UUCP> david@epicb.UUCP (David P. Cook) writes:
>>In article <7488@watcgl.waterloo.edu> ksbooth@watcgl.waterloo.edu (Kelly Booth) writes:
>>>[...]  It is highly unlikely that a couple of hackers thinking about
>>>the problem for a few minutes will generate startling break throughs
>>>(possible, but not likely).

Ok, I think most of us can agree that this was a reprehensible attempt at
arbitrary censorship of an interesting discussion.  Even if some of the
discussion is amateurish and naive.

>  The statement made above
[...]
>  Is appalling!  Sound processing is CENTURIES behind image processing.
>		If we were to apply even a few of our common algorithms
>		to the audio spectrum, it would revolutionize the
>		synthesizer world.  These people are living in the stone
>		age (with the exception of a few such as Kuerdswell [sp]).

On the other hand, I think David is *seriously* underestimating the state of
the art in sound processing and generation.  Yes, Ray Kurzweil has done lots of
interesting work, but so have many other people.  Of the examples David gives,
most (xor'ing, contrast stretching, fuzzing, antialiasing and quantization) are
as elementary in sound processing as they are in image processing.  Sure, your
typical music store synthesizer/sampler doesn't offer these features (though
some come close--especially the E-mu's), but neither does your VCR.  And the
work Kurzweil Music and Kurzweil Applied Intelligence have done on instrument
modelling and speech recognition goes WAY beyond any of these elementary
techniques.

The one example I really don't know about is ray tracing.  Sound tracing is
certainly used in some aspects of reverb design, and perhaps other areas of
acoustics, but I don't know at what level diffraction is handled--and
diffraction is a big effect with sound propagation.  You also have to worry
about phases, interference, and lots of other fun effects that you can (to
first order) ignore in ray tracing.  References, anyone?  (Perhaps I should
resubscribe to comp.music, and try there...)

(off on a tangent: does any one know of work on ray tracers that will do things
like coherent light sources, interference, diffraction, etc?  In particular,
anyone have a ray tracer that will do laser speckling right?  I'm pretty naive
about the state of the art in image synthesis, so I have no idea if such beasts
exist.  It looks like a hard problem to me, but I'm just a physicist...)

>No, this is not a WELL RESEARCHED area as Kelly would have us believe.  The
>sound people are generally not attacking sound synthesis as we attack
>vision synthesis.  This is wonderful thinking, KEEP IT UP!

Much work in sound synthesis has been along lines similar to image synthesis.
Some of it is proprietary, and the rest I think just receives less attention,
since sound synthesis doesn't have quite the same level of perceived
usefulness, or the "sexiness", of image synthesis.  But it is there.
Regardless, I agree with David that this is an interesting discussion, and I
certainly don't mean to discourage any one from thinking or posting about it.

-Dan Riley (dsr@lns61.tn.cornell.edu, cornell!batcomputer!riley)
-Wilson Lab, Cornell U.

--------

>From: kjartan@raunvis.UUCP (Kjartan Pierre Emilsson Jardedlisfraedi)
Newsgroups: comp.graphics

  We would like to begin by thanking everybody for their good replies, which
will no doubt come in handy.  We intend to try to implement such a sound tracer
soon, and we had already made some sort of model for it, but we were checking
whether there was some info lying around about such tracers.  It seems that our
idea wasn't far from actual implementations, and that is reassuring.
  
  For the sake of Academical Curiosity and overall Renaissance-like
Enlightenment in the beginning of a new year we decided to submit our crude
model to the critics and attention of this newsgroup, hoping that it won't
interfere too much with the actual subject of the group, namely computer
graphics.

			The Model:

	We have some volume with an arbitrary geometry (usually simple such
	as a concert hall or something like that). Squares would work just
	fine as primitives.  Each primitive has definite reflection
	properties in addition to some absorption filter which possibly
	filters out some frequencies and attenuates the signal.
	  In this volume we put a sound emitter which has the following
	form:

		The sound emitter generates a sound sample in the form
		of a time series with a definite mean power P.  The emitter
		emits the sound with a given power density given as some
		spherical distribution. For simplicity we tessellate this
		distribution and assign to each patch the corresponding mean
		power.

	  At some other point we place the sound receptor which has the
	following form:

		We take a sphere and cut it in two equal halves, and then
		separate the two by some distance d.  We then tessellate the
		half-spheres (not including the cut).  We have then a crude
		model of ears.

	  Now for the actual sound tracing we do the following:

		For each patch of the two half-spheres, we cast a ray
		radially from the center, and calculate an intersection
		point with the enclosing volume.  From that point we
		determine which patch of the emitter this corresponds to,
		giving us the emitted power.  We then pass the corresponding
		time series through the filter appropriate to the given
		primitives, calculate the reflected fraction, attenuate the
		signal by the square of the distance, and eventually
		determine the delay of the signal.  

		When all patches have been traced, we sum up all the time
		series and output the whole lot through some stereo device.

	    A more sophisticated model would include secondary rays and
	    sound 'shadowing' (The shadowing being a little tricky as it is
	    frequency dependent)
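	    The delay-and-attenuate step of the model can be made concrete
	    with a minimal sketch (my own illustration, not Kjartan & Dagur's
	    implementation; the sample rate and speed of sound are assumed
	    values):

```python
# Accumulate one traced path into a receptor patch's output buffer:
# attenuate the emitted time series by 1/d^2, delay it by d / c.

SPEED_OF_SOUND = 343.0   # m/s, air at about 20 C
SAMPLE_RATE = 8000       # samples per second (an assumed rate)

def add_path(output, emitted, distance, reflectance=1.0):
    # Delay in whole samples; a real tracer would want fractional
    # (interpolated) delays to avoid quantizing arrival times.
    delay = int(round(distance / SPEED_OF_SOUND * SAMPLE_RATE))
    gain = reflectance / (distance * distance)
    for i, s in enumerate(emitted):
        j = i + delay
        if j < len(output):
            output[j] += gain * s
    return output
```

	    Summing over all patches is then just repeated calls to
	    add_path on the same output buffer, one per traced ray, with
	    reflectance set from the filtered surface properties.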


	pros & cons ?
				Happy New Year !!

					-Kjartan & Dagur


Kjartan Pierre Emilsson
Science Institute - University of Iceland
Dunhaga 3
107 Reykjavik
Iceland					Internet: kjartan@raunvis.hi.is

--------

>From: brent@itm.UUCP (Brent)
Organization: In Touch Ministries, Atlanta, GA

    Ok, here's some starting points: check out the work of M. Schroeder at
Gottingen.  (Barbarian keyboard has no umlauts!)  Also see the recent design work
on the Orange County Civic Auditorium and the concert hall in New Zealand.
These should get you going in the right direction.  Dr. Schroeder laid the
theoretical groundwork and others ran with it.  As far as sound ray tracing and
computer acoustics being centuries behind, I doubt it.  Dr. S. has done things
like record music in stereo in concert halls, digitized it, set up playback
equipment in an anechoic chamber (bldg 15 at Murray Hill), measured the path
from the right speaker to the left ear, and from the left speaker to the right
ear, digitized the music and did FFTs to take out the "crossover paths" he
measured.  Then the music played back sounded just like it did in the concert
hall.  All this was done over a decade ago.
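The "did FFTs to take out the crossover paths" step amounts to frequency-domain
deconvolution.  Here is a toy sketch of the idea for a single measured path
(tiny DFT, regularized division) -- not Schroeder's actual processing, which
has to handle both speaker-to-ear crossover paths together:

```python
import cmath

def dft(x):
    # Naive O(N^2) discrete Fourier transform, fine for a demo.
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N)
                for n in range(N)) for k in range(N)]

def idft(X):
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N)
                for k in range(N)).real / N for n in range(N)]

def deconvolve(recorded, path, eps=1e-6):
    # Divide out the measured path response in the frequency domain,
    # with a small regularizer so near-zero bins don't blow up.
    R, H = dft(recorded), dft(path)
    X = [r * h.conjugate() / (abs(h) ** 2 + eps) for r, h in zip(R, H)]
    return idft(X)
```

Given a recording made through a measured path, deconvolve recovers (to first
order) the signal as it was before the path colored it.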

    Also on acoustic ray tracing: sound is much "nastier" to figure than
pencil-rays of light.  One must also consider the phase of the sound, and the
specific acoustic impedance of the reflecting surfaces.  Thus each reflection
introduces a phase shift as well as direction and magnitude changes.  I haven't
seen too many optical ray-tracers worrying about interference and phase shift
due to reflecting surfaces.  Plus you have to enter the vast world of
psychoacoustics, or how the ear hears sound.  In designing auditoria one must
consider "binaural dissimilarity" (Orange County) and the much-debated
"auditory backward inhibition" (see the Lincoln Center re-designs).
Resonance??  How many optical chambers resonate (outside lasers)?  All in all,
modern acoustic simulations bear much more resemblance to quantum-mechanical
"particle in the concert hall" type calculations than to simple ray-traced
optics.

    Postscript: eye-to-source optical ray tracing is a restatement of
Rayleigh's "reciprocity principle of sound" of about a century ago.
Acousticians have been using it for at least that long.

        happy listening,

                brent laminack (gatech!itm!brent)

--------

Reply-To: trantow@csd4.milw.wisc.edu (Jerry J Trantow)
Subject: Geometric Acoustics (Sound Tracing)
Summary: Not so easy, but here are some papers
Organization: University of Wisconsin-Milwaukee

Some of the articles I have found include
 
Criteria for Quantitative Rating and Optimum Design on Concert Halls
 
Hulbert, G.M.  Baxa, D.E. Seireg, A.
University of Wisconsin - Madison
J Acoust Soc Am v 71 n 3 Mar 83 p 619-629
ISSN 0001-4966, Item Number: 061739
 
Design of room acoustics and a MCR reverberation system for Bjergsted
Concert hall in Stavanger
 
Strom, S.  Krokstad, A.  Sorsdal, S.  Stensby, S.
Appl Acoust v19 n6 1986 p 465-475
Norwegian Inst of Technology, Trondheim, Norw
ISSN 0003-682X, Item Number: 000913
 
 
I am also looking for an English translation of:
 
Ein Strahlverfolgungs-Verfahren zur Berechnung von Schallfeldern in Raeumen
[ Ray-Tracing Program for the calculation of sound fields of rooms ]
 
Vorlaender, M.
Acustica v65 n3 Feb 88 p 138-148
ISSN 0001-7884, Item Number: 063350
 
If anyone is interested in doing a translation I can send the German copy that
I have.  It doesn't do an ignorant fool like myself any good and I have a hard
time convincing my wife or friends who know Deutsch to do the translation.
 
A good literature search can discover plenty of articles, quite a few of which
are about architectural design of music halls.  With a large concert hall, the
calculations are easier because of the dimensions (the wavelength is small
compared to the dimensions of the hall).
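The wavelength comparison is quick to check (lambda = c/f, with c about
343 m/s in air):

```python
def wavelength(frequency_hz, c=343.0):
    # lambda = c / f; c is the speed of sound in air at roughly 20 C.
    return c / frequency_hz

# A 60 Hz tone is about 5.7 m long -- comparable to the dimensions of a
# small room, which is exactly why the geometric (ray) approximation
# gets shaky there, while at 1 kHz the wavelength is only ~0.34 m.
```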
 
The cases I am interested in are complicated by the fact that I want to work
with relatively small rooms, large sources, and to top it off low (60 Hz)
frequencies.  I vaguely remember seeing a blurb somewhere about a program done
by BOSE ( the speaker company) that calculated sound fields generated by
speakers in a room.  I would appreciate any information on such a beast.
 
The simple source for geometric acoustics is described in Beranek's Acoustics in
the chapter on Radiation of Sound.  To better appreciate the complexity from
diffraction, try the chapter on The Radiation and Scattering of Sound in Philip
Morse's Vibration and Sound ISBN 0-88318-287-4.

I am curious as to the commercial software that is available in this area.
Does anyone have any experience they could comment on???

------

>From: markv@uoregon.uoregon.edu (Mark VandeWettering)
Subject: More Sound Tracing
Organization: University of Oregon, Computer Science, Eugene OR

I would like to present some preliminary ideas about sound tracing, and
critique (hopefully profitably) the simple model presented by Kjartan Pierre
Emilsson Jardedlisfraedi.  (Whew! and I thought my name was bad, I will
abbreviate it to KPEJ)


CAVEAT READER: I have no expertise in acoustics or sound engineering.  Part of
the reason I am writing this is to test some basic assumptions that I have made
during the course of thinking about sound tracing.  I have done little/no
research, and these ideas are my own.

KPEJ had a model, related below:

>	We have some volume with an arbitrary geometry (usually simple such
>	as a concert hall or something like that). Squares would work just
>	fine as primitives.  Each primitive has definite reflection
>	properties in addition to some absorption filter which possibly
>	filters out some frequencies and attenuates the signal.

	One interesting form of sound reflector might be the totally
	diffuse reflector (Lambertian reflection).  It seems that if
	this is the assumption, then the appropriate algorithm to use
	might be radiosity, as opposed to raytracing.  Several problems
	immediately arise:

		1.	how to handle diffraction and interference?
		2.	how to handle "relativistic effects" (caused by
			the relatively slow speed of sound)
	
	The common solution to 1 in computer graphics is to ignore it.
	Is this satisfactory in the audio case?  Under what
	circumstances or applications is 1 okay?  

	Point 2 is not often considered in computer graphics, but in
	computerized sound generation, it seems critical to accurate
	formation of echo and reverberation effects.  To properly handle
	time delay in radiosity would seem to require a more difficult
	treatment, because the influx of "energy" at any given time
	from a given patch could depend on the outgoing energy at a
	number of previous times.  This seems pretty difficult, any
	immediate ideas?
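	One immediate (if brute-force) idea: keep a history of each
	patch's outgoing energy and index into it with the propagation
	delay.  A sketch of just that bookkeeping (illustrative only,
	nothing like a full radiosity solver):

```python
# With finite sound speed, patch j's incoming energy at step t comes
# from patch i's *outgoing* energy delay(i, j) steps earlier, so every
# patch must remember its output history.

def propagate(outgoing_history, form_factor, delay, t):
    # Energy arriving now left the source `delay` time steps ago.
    past = t - delay
    if past < 0:
        return 0.0
    return form_factor * outgoing_history[past]

# Example: a patch that emitted a unit pulse at step 0, seen across a
# 5-step propagation delay with form factor 0.25.
history = [1.0] + [0.0] * 9
```

	The storage cost is the obvious catch: every patch needs a
	buffer as long as the longest delay in the environment.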

>	  Now for the actual sound tracing we do the following:
>
>		For each patch of the two half-spheres, we cast a ray
>		radially from the center, and calculate an intersection
>		point with the enclosing volume.  From that point we
>		determine which patch of the emitter this corresponds to,
>		giving us the emitted power.  We then pass the corresponding
>		time series through the filter appropriate to the given
>		primitives, calculate the reflected fraction, attenuate the
>		signal by the square of the distance, and eventually
>		determine the delay of the signal.  
>
>		When all patches have been traced, we sum up all the time
>		series and output the whole lot through some stereo device.

	One open question: how much directional information is captured
	by your ears?  Since you can discern forward/backward sounds as
	well as left/right, it would seem that ordinary stereo
	headphones are incapable of reproducing sounds as complex as one
	would like.  Can the ears be fooled in clever ways?

	The only thing I think this model lacks is secondary "rays" or
	echo/reverb effects.  Depending on how important they are,
	radiosity algorithms may be more appropriate.

	Feel free to comment on any of this, it is an ongoing "thought
	experiment", and has made a couple of luncheon conversations
	quite interesting.
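	On the directional-information question, one well-known cue the
	ears do extract is the interaural time difference.  A sketch
	using Woodworth's spherical-head approximation (the head radius
	here is a nominal value, not a measured one):

```python
import math

def interaural_time_difference(azimuth_rad, head_radius=0.0875, c=343.0):
    # Woodworth's spherical-head formula: ITD = (r / c)(theta + sin theta),
    # with azimuth measured from straight ahead.  ~8.75 cm is a common
    # nominal head radius; c is the speed of sound in m/s.
    return head_radius / c * (azimuth_rad + math.sin(azimuth_rad))
```

	Note the front/back ambiguity: a source at azimuth theta and one
	at pi - theta give nearly the same ITD, which is part of why
	plain stereo cues alone can't fully localize a sound.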

Mark VandeWettering

--------

>From: ksbooth@watcgl.waterloo.edu (Kelly Booth)
Organization: U. of Waterloo, Ontario

In article <3458@uoregon.uoregon.edu> markv@drizzle.UUCP (Mark VandeWettering) writes:
>
>		1.	how to handle diffraction and interference?
>		2.	how to handle "relativistic effects" (caused by
>			the relatively slow speed of sound)
>	
>	The common solution to 1 in computer graphics is to ignore it.

Hans P. Moravec,
"3D Graphics and Wave Theory"
Computer Graphics 15:3 (August, 1981) pp. 289-296.
(SIGGRAPH '81 Proceedings)

[Trivia Question: Why does the index for the proceedings list this as starting
on page 269?]

Also, something akin to 2 has been tackled in some ray tracers where dispersion
is taken into account (this is caused by the refractive index depending on the
frequency, which is basically a differential speed of light).

-----------------------------------------------------------------------------

Laser Speckle

>From: jevans@cpsc.ucalgary.ca (David Jevans)

In article <11390016@hpldola.HP.COM>, paul@hpldola.HP.COM (Paul Bame) writes:
> A raytracer which did laser speckling right might also be able
> to display holograms.  

A grad student at the U of Calgary a couple of years ago did something like
this.  He was using holographic techniques for character recognition, and could
generate synthetic holograms.  Also, what about Pixar?  See IEEE CG&A 3 issues
ago.

David Jevans, U of Calgary Computer Science, Calgary AB  T2N 1N4  Canada
uucp: ...{ubc-cs,utai,alberta}!calgary!jevans

--------

>From: dave@onfcanim.UUCP (Dave Martindale)
Organization: National Film Board / Office national du film, Montreal

Laser speckle is a special case of interference, because it
happens in your eye, not on the surface that the laser is hitting.

A ray-tracing system that dealt with interference of light from different
sources would show the interference fringes that occur when a laser light
source is split into two beams and recombined, and the interference of acoustic
waves.  But to simulate laser speckle, you'd have to trace the light path all
the way back into the viewer's eye and calculate interference effects on the
retina itself.

If you don't believe me, try this: create a normal two-beam interference fringe
pattern.  As you move your eye closer, the fringes remain the same physical
distance apart, becoming wider apart in angular position as viewed by your eye.
The bars will remain in the same place as you move your head from side to side.

Now illuminate a target with a single clean beam of laser light.  You will see
a fine speckle pattern.  As you move your eye closer, the speckle pattern does
not seem to get any bigger - the spots remain the same angular size as seen by
your eye.  As you move your head from side to side, the speckle pattern moves.

As the laser light reflects from a matte surface, path length differences
scramble the phase of light traveling by slightly different paths.  When a
certain amount of this light is focused on a single photoreceptor in your eye
(or a camera), the light combines constructively or destructively, giving the
speckle pattern.  But the size of the "grains" in the pattern is basically the
same as the spacing of the photoreceptors in your eye - basically each cone in
your eye is receiving a random signal independent of each other cone.

The effect depends on the scattering surface being rougher than 1/4 wavelength
of light, and the scale of the roughness being smaller than the resolution
limit of the eye as seen from the viewing position.  This is true for almost
anything except a highly-polished surface, so most objects will produce
speckle.

Since the pattern is due to random variation in the diffusing surface, there is
little point in calculating randomness there, tracing rays back to the eye, and
seeing how they interfere - just add randomness directly to the final image
(although this won't correctly model how the speckle "moves" as you move your
head).
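That "add randomness directly" shortcut can be sketched as multiplicative
noise.  Fully developed speckle is commonly modelled with negative-exponential
intensity statistics (unit contrast), which is what this illustrative snippet
assumes:

```python
import random

def add_speckle(image, seed=None):
    # Multiply each pixel by an independent exponential variate of
    # mean 1 -- a cheap stand-in for fully developed speckle.  As noted
    # above, this will not reproduce how the real pattern shifts when
    # the viewer moves their head.
    rng = random.Random(seed)
    return [pixel * rng.expovariate(1.0) for pixel in image]
```

Applied per pixel at (or above) the eye's resolution limit, this gives the
grainy look without any interference calculation at all.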

However, to model speckle accurately, the pixel spacing in the image has to be
no larger than the resolution limit of the eye, about half an arc minute.  For
a CRT or photograph viewed from 15 inches away, that's 450 pixels/inch, far
higher than most graphics displays are capable of.  So, unless you have that
sort of system resolution, you can't show speckle at the correct size.

-----------------------------------------------------------------------------
END OF RTNEWS

From m-cohen@cs.utah.edu Mon May 15 14:27:17 1989
Return-Path: <m-cohen@cs.utah.edu>
Received: from cs.utah.edu by turing.cs.rpi.edu (4.0/1.2-RPI-CS-Dept)
	id AA15124; Mon, 15 May 89 14:27:11 EDT
Received: by cs.utah.edu (5.61/utah-2.1-cs)
	id AA05401; Mon, 15 May 89 12:14:30 -0600
Date: Mon, 15 May 89 12:14:30 -0600
From: m-cohen@cs.utah.edu (Michael F Cohen)
Message-Id: <8905151814.AA05401@cs.utah.edu>
To: FISHER%3D.dec@decwrl.dec.com, apollo!arvo@eddie.mit.edu,
        apollo!johnf@eddie.mit.edu, atc@cs.utexas.EDU, barr@csvax.caltech.edu,
        barsky@miro.berkeley.edu, bogart%gr@cs.utah.edu,
        carl@mssun1.msi.cornell.edu, ckchee@dgp.toronto.edu,
        dana!mrk@hplabs.hp.com, daniel@apollo.com, dk@csvax.caltech.edu,
        dutio!fwj@mcvax.cwi.nl, dutrun!wim@mcvax.cwi.nl,
        cornell!fornax!sfu-cmpt!chapman@cs.utah.edu, glassner@xerox.com,
        grant@icdc.llnl.gov, gray%rhea.CRAY.COM@uc.msc.umn.edu,
        hohmeyer@miro.berkeley.edu, hpfcla!hpfcrs!eye!erich@hplabs.HP.COM,
        hultquis@prandtl.nas.nasa.gov, jaf@squid.tn.cornell.edu,
        jeff@hamlet.caltech.edu, jevans@cpsc.ucalgary.ca, joy@ucdavis.edu,
        kyriazis@turing.cs.rpi.edu, lister@dg-rtp.dg.com, litwinow@apple.com,
        m-cohen@cs.utah.edu, megatek!kuchkuda@ucsd.edu, ph@miro.berkeley.edu,
        phil@zeno0.rdrc.rpi.edu, pixar!pat@ucbvax.berkeley.edu,
        raycasting@duke.cs.duke.edu, roy@wisdom.tn.cornell.edu,
        sgi!paul@pyramid.pyramid.com, tim@csvax.caltech.edu,
        wtl@cockle.tn.cornell.edu
Subject: Ray Tracing News
Status: R

 _ __                 ______                         _ __
' )  )                  /                           ' )  )
 /--' __.  __  ,     --/ __  __.  _. o ____  _,      /  / _  , , , _
/  \_(_/|_/ (_/_    (_/ / (_(_/|_(__<_/ / <_(_)_    /  (_</_(_(_/_/_)_
             /                               /|
            '                               |/

			"Light Makes Right"

			 May 12, 1989
		        Volume 2, Number 1

Compiled by Eric Haines, 3D/Eye Inc, 2359 Triphammer Rd, Ithaca, NY 14850
    607-257-1381, hpfcla!hpfcrs!eye!erich@hplabs.hp.com
All contents are US copyright (c) 1989 by the individual authors

Contents:
    Introduction
    New People (Carl Bass, Paul Wanuga)
    QRT Ray Tracer (and five other Amiga Ray Tracers) (Steve Koren)
    New Version of MTV Ray Tracer (Mark VandeWettering)
    Minimal Sphere Containing Three Points (Earl Culham)
    Noise and Turbulence Function Code, Pascal and C (Jon Buller,
	William Dirks)

-----------------------------------------------------------------------------

Introduction

Well, we're now at the point where there are six ray tracers for the Amiga.
Interestingly, none of them have implicit efficiency schemes (i.e. where the
user does not have to intervene and create the efficiency structure himself).
Admittedly, efficiency schemes are more code, but I've found that I was getting
a factor of three speed up for a simple scene (a ten ball sphereflake) by using
an efficiency scheme vs. not using one.  When your computer is the speed of an
Amiga, efficiency schemes become vital.
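The kind of saving an automatic efficiency scheme buys can be sketched with a
one-level bounding-sphere hierarchy (a toy illustration of the principle, not
code from any of the Amiga tracers):

```python
import math

def ray_hits_sphere(origin, direction, center, radius):
    # Standard ray/sphere test; direction is assumed normalized.
    oc = [o - c for o, c in zip(origin, center)]
    b = sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    # Hit if the discriminant is non-negative and the sphere is not
    # entirely behind the ray origin.
    return b * b - c >= 0.0 and (c <= 0.0 or b < 0.0)

def trace(ray_origin, ray_dir, clusters):
    # clusters: list of (bounding_sphere, [child_spheres]), each sphere
    # a (center, radius) pair.  One cheap bounding test rejects a whole
    # cluster instead of testing every child -- the structure a scheme
    # like this builds for the user automatically.
    tests = 0
    hits = []
    for (bc, br), children in clusters:
        tests += 1
        if not ray_hits_sphere(ray_origin, ray_dir, bc, br):
            continue
        for cc, cr in children:
            tests += 1
            if ray_hits_sphere(ray_origin, ray_dir, cc, cr):
                hits.append((cc, cr))
    return hits, tests
```

For a sphereflake-style scene most rays miss most clusters, so the test count
(and hence the running time) drops by a large constant factor, consistent with
the factor-of-three speedup mentioned above.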

Next time I'll include "Tracing Tricks", an article I "edited" for the latest
(and last) "Introduction to Ray Tracing" course notes.  The article is a "best
of the RT News" compendium of efficiency tricks.  By the way, the course notes
should be quite a bargain: they'll consist of the book of our notes by Academic
Press, plus some new tidbits and reprints of "classic" articles.

I would like to put the "Ray Tracing News" back issues somewhere that people
can FTP them.  Personally I don't have a computer that has an FTP site, so if
there are any volunteers that would like to store the back issues, please
contact me.  The entire archive is about 448K at this point (not including this
issue), broken into 5 parts.  Can you volunteer?

-----------------------------------------------------------------------------

# Carl Bass - hybrid shading models and quick(er) hidden surface methods
# Ithaca Software
# 902 West Seneca Street
# Ithaca, NY 14850
# 607-273-3690
alias	carl_bass	carl@mssun1.msi.cornell.edu

Carl is the co-founder of Ithaca Software Inc (once upon a time called "Flying
Moose Inc"), which sells the HOOPS package for all kinds of computers.  This is
an object-oriented system which I don't know much about, beyond the fact that
their debugger is called WHOOPS.

--------

#
# Paul Wanuga
# Cornell Program of Computer Graphics
# 120 Rand Hall
# Ithaca, NY 14853
# (607)-255-4880
alias	paul_wanuga	phw@love.tn.cornell.edu
Erich:
 
     Could you please include me in your list of wiz-bango ray tracers?  It
appears Don has me slated for research in ray-tracing complex, realistic,
non-procedural environments.

-----------------------------------------------------------------------------

QRT Ray Tracer (and five other Amiga Ray Tracers), Steve Koren

	This package appeared on comp.graphics a few months back.  I believe
the latest version of the package is available from Mark VandeWettering's FTP
site (see next article).  Write to Steve for more info:

	hpfela!koren@hplabs.hp.com

The software is written in C and worked just fine on my system.  Below is an
excerpt of the UserMan.doc file of the QRT system (which Steve extensively
documented).


QRT is a ray tracing  image  rendering  system  that runs under a
variety  of  operating  systems.   It  has  a  free  format input
language   with   extensive   error   detection   and   reporting
capabilities.
    
QRT was developed on the Amiga  personal  computer, so it will
be compared to  other  Amiga  ray  tracers.  There  are, to my
knowledge, five other  Amiga  ray  tracers,  each with its own
strengths  and  weaknesses.    I  will  describe  each  system
briefly, and compare it to QRT.  All the Amiga ray tracers can
operate in HAM (4096 color) mode.

  RT: RT was the first ray  tracer  written for the Amiga, by
      Eric    Graham.  It will model  a universe made of only
      spheres, a   sky, and a checkered  or solid ground.  It
      is  relatively  fast,  but  not  generally  useful  for
      realistic modeling   because  of the sphere limitation.
      The input language  is  cryptic,  although  some  error
      checking is done.  The  system  will  only generate low
      resolution images.

 SILVER: I have never seen SILVER, so I cannot say much about
      this system.

 SCULPT-4D: This package  incorporates  an interactive editor
      for  creating  objects,   and  is  capable  of  quickly
      generating a preliminary  image  of  the scene by using
      hidden surface techniques.  However, every primitive is
      made of polygons, and some  primitives  such as spheres
      require hundreds of  polygons  for a smooth texture, so
      the ray tracing is very slow.   Also, the package takes
      a large amount of memory to run, and is prone to system
      crashes.  Its chief  feature  is  the ability to create
      arbitrary shaped objects  using  a series of triangles.
      Mirrored, dull, or shiny objects are supported.

 CLIGHT: This ray tracer also has  an interactive editor, but
      produces very poor quality  images.   It does not support
      any patterning or reflection effects.

 DBW: This is possibly the most  complete  ray tracer for the
      Amiga.  It will support objects  with arbitrary degrees
      of reflection and gloss,  depth  of field effects, some
      texturing,   wavy   surfaces,   fractals,   transparent
      surfaces, diffuse  propagation  of light from object to
      object,  and  5  primitive   types  (sphere,  triangle,
      parallelogram, fractal, and ring).  The input language,
      however,   is    so    cryptic    as   to   be   nearly
      incomprehensible, and  if  there  is  any  error in the
      input file,  it  will  crash  the  system.   It is also
      painfully slow;  some  images  take  16  to 24 hours to
      complete.

QRT is meant to be a compromise between the fast, simple ray
tracers and the slow, powerful systems.  It compares favorably
in speed to RT, and in power to Sculpt-4D or DBW.  It has a
very friendly input language with extensive error checking.
Here are some features of QRT:

   o  Multiple primitive types, including user defined
      quadratic surfaces

   o  Arbitrary levels of diffuse reflection, specular
      reflection, transmission, ambient lighting, and gloss

   o  User defined pattern information for objects

   o  Bounding boxes for groups of objects

   o  Shadows

   o  Multiple light sources with different characteristics

   o  Arbitrary Phong specular reflection coefficients

   o  Color dithering to increase the apparent number of
      colors

   o  Easy to use, free format input language with error
      checking.  Parameters are by keyword and may appear in
      any order.

   o  Supports medium resolution (128k dots/screen)

-----------------------------------------------------------------------------

New Version of MTV Ray Tracer, Mark VandeWettering

Mark VandeWettering's nice little ray tracer (polygons, spheres, cones and
cylinders, Kay/Kajiya efficiency scheme, yacc/lex parser for NFF format,
otherwise written in C) was released on USENET in three parts on March 27.
Others have interesting features, but the MTV ray tracer's selection of
primitives and the speed of its code are big pluses.  It's currently my favorite
public domain ray tracer (the amazing BRL package I consider private).

The package is available at the usual comp.sources.unix archive sites or from
Mark via anonymous ftp at drizzle.cs.uoregon.edu.  Mark's address is:

    markv@drizzle.cs.uoregon.edu

-----------------------------------------------------------------------------

Subject: Re: Ray Traced Bounding Spheres
From: ECULHAM@UALTAVM.BITNET (Earl Culham)
Organization: University of Alberta VM/CMS

In article <17241@versatc.UUCP>, ritter@versatc.UUCP (Jack Ritter) writes:
 
>Given a cluster of points in 3 space, is there
>a good method for finding the minimum radius
>sphere which encloses all the points?  If not
>minimum, at least "small"?  Certainly it should
>be tighter than the sphere which encloses the
>minimum bounding box.
>
>I have a feeling the solution is iterative. If
>so, I could provide a good initial guess for the
>center & radius.
 
The solution is not iterative.  A simple two-step solution is available, and it
is characterized by whether three or only two of the points lie on the surface
of the optimal sphere.
 
Given the points A, B, and C, we search for the center P of the optimal
sphere with the smallest radius R.
 
Checking the midpoints.
 
set P = point halfway between A and B
set R = 1/2 of length from A to B
If length from C to P is less than or equal to R ---> done
Repeat with line A-C, and B-C if needed.
 
Extending the midpoints at right angles.
 
build the line which intersects A-B at a right angle through the midpoint
  of A-B, call the line D.
build the line which intersects A-C at a right angle through the midpoint
  of A-C, call it E.
P is the point where D and E intersect. (Solve the simultaneous equations;
  R is the length A-P)

-----------------------------------------------------------------------------

From: bullerj@handel.colostate.edu (Jon Buller)
Subject: Re: Pixar's noise function
Organization: Colorado State University, Ft. Collins CO 80523

In article <3599@pixar.UUCP> aaa@pixar.UUCP (Tony Apodaca) writes:
>In article <2553@ssc-vax.UUCP> coy@ssc-vax.UUCP (Stephen B Coy) writes:
>>          ...My question:  Does anyone out there know what this
>>noise function really is?
>
>	Three-dimensional simulated natural textures using pseudorandom
>functions were simultaneously and independently developed by Darwyn
>Peachey and Ken Perlin in 1984-5.  Both have papers in the 1985
>Siggraph Proceedings describing their systems.  Perlin, in particular,
>describes in detail how noise() is implemented and can be used creatively

   [A description of the properties of the noise function goes here...]

>	If you have ever been interested in realistic computer graphics, do
>whatever it takes to get a look at Perlin's paper.  In 1985, his pictures
>were absolutely astounding.  In 1989, they are STILL astounding.

	No kidding, some of those pictures are INCREDIBLE.

     Here is my code for a look-alike to the Pixar Noise function, and while I
can't say anything about exactly what Pixar's looks like, I think this is
probably close.  After reading the 1985 SIGGRAPH papers on 3d texturing, (and
seeing my prof's code to do a similar thing) I wrote this.  It uses a quadratic
B-Spline instead of the cubic Hermite interpolant implied in the paper.  Also
note that DNoise is just the x, y, and z derivatives of Noise (which are also
B-Splines).  The hashing functions are from Knuth's Art of Computer Programming
(I don't remember which volume though).

I know the code is Pascal, and all of you will hate it, but I believe I write
better Pascal than C...  One final note, this was Lightspeed Pascal 2.0 for the
Macintosh, but things have been reformatted slightly to get it on the net.  I
hope this is what you all wanted.

--------

More:

One other thing you might notice: Noise is C1 continuous, but DNoise is only
C0.  This means that DNoise will have creases in it (along the planes of the
random grid).  To see this, crank out a square: 0<X<5, 0<Y<5, Z=0.  You will
see smooth regions within each unit square, and creases between squares.  To
avoid this, use a cubic B-Spline, or a cubic Hermite (as hinted at in the
SIGGRAPH proceedings).  The problem there is that you either need more data
points (64 instead of 27) for the B-Spline, or derivative info at each point of
the grid (a normal plane, 4 floats instead of 1).  This would take too much
time for me to code up to be worth it, and would probably run much slower
(10 min for a 200x200 pixel picture now, ugh).  If somebody wants to give me a
Cray-3 to play with, I'll write more accurate (and slower) code; until
then... 8-)

Jon
bullerj@handel.cs.colostate.edu          ...!ncar!ccncsu!handel!bullerj
(These are my ideas (and code), nobody else SHOULD want these bugs)
I'm just trying to graduate.  Apple, Pixar, HP, etc. take note, I would like
your job offers, I have tired of the university life.


[NOTE: I have attached the Pascal code for the Turbulence functions.  The Noise
functions are in the next message in "C".  Sorry about the mixed languages, but
I haven't nicely translated these. -- EAH]

(* ---------- cut here ---------- cut here ---------- cut here ---------- *)

const
   MaxPts = 512;  { Must be 2^n}
   MPM1 = MaxPts - 1;
type
   PtsTyp = array[0..MaxPts] of Extended;
var
   Points: PtsTyp;


function Turb (Size: Integer;
               ScaleFactor: Extended;
               Loc: Vect;
               Pts: PtsTyp): Extended;
   var
      Scale, Result: Extended;
      Cur: Integer;
begin
   Result := Noise(Loc, Pts);
   Scale := 1.0;

   Cur := 1;
   while Cur < Size do begin
      Cur := BSL(Cur, 1);          {Cur := Cur * 2}
      Scale := Scale * ScaleFactor;
      Loc := Scale_Vect(2.0, Loc);
      Result := Result + Noise(Loc, Pts) * Scale;
   end;
   Turb := Result;
end;


function DTurb (Size: Integer;
                ScaleFactor: Extended;
                Loc: Vect;
                Pts: PtsTyp): Vect;
   var
      Result: Vect;
      Scale: Extended;
      Cur: Integer;
begin
   Result := DNoise(Loc, Pts);
   Scale := 1.0;

   Cur := 1;
   while Cur < Size do begin
      Cur := BSL(Cur, 1);
      Scale := Scale * ScaleFactor;
      Loc := Scale_Vect(2.0, Loc);
      Result := Add_Vect(Result, Scale_Vect(Scale, DNoise(Loc, Pts)));
   end;
   DTurb := Result;
end;

-----------------------------------------------------------------------------

And the C Version...

From: dirks@oak.cis.ohio-state.edu (william r dirks)
Subject: C Versions of Noise and DNoise Routines
Organization: Ohio State University Computer and Information Science


It was suggested to me that some of you would be interested in my 
translated-into-C-and-corrected versions of noise() and Dnoise().

So, here they are.  

Note that this is only noise and Dnoise.  The turbulence routines are
not included 'cause I haven't translated them yet.

Oh yeah, initnoise() fills the pts[] array with random numbers between
0 and 1.  Don't forget to initialize, or noise will always return 0.
(That's been experimentally verified! :-))

--Bill--

[Note that you should look over the rand() function if you use this stuff.
Some rand()'s need initialization (srand()), and some give numbers from 0
to 32767, and so should use this as a divisor. -- EAH]
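[On ANSI C systems the portability problem can be dodged entirely by dividing
by RAND_MAX from <stdlib.h>, which is correct whether rand() returns 15-bit or
31-bit values.  A sketch; the fixed seed is my own choice, for a repeatable
texture. -- EAH]

```c
#include <stdlib.h>

#define NUMPTS 512
static double pts[NUMPTS];

/* Portable replacement for the initnoise() below: RAND_MAX is the right
   divisor regardless of the range rand() happens to return. */
void initnoise(void)
{
    int i;

    srand(12345);                        /* fixed seed -> repeatable texture */
    for (i = 0; i < NUMPTS; ++i)
        pts[i] = rand() / (double)RAND_MAX;   /* uniform in [0,1] */
}
```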


---------cut-here------------------------------------cut-here--------
/*
**	Noise and Dnoise routines
**
**	Many thanks to Jon Buller of Colorado State University for these
**	routines.
*/

#include <math.h>	/* floor() */
#include <stdlib.h>	/* rand() */


typedef struct vector {
   double x, y, z;
} Vector;


#define NUMPTS	512
#define P1	173
#define P2	263
#define P3	337
#define phi	0.6180339


static double pts[NUMPTS];


void initnoise()
{
   int i;
   
   for (i = 0; i < NUMPTS; ++i)
      pts[i] = rand() / 2.147483e9;
}


double noise(p)
Vector *p;
{
   int xi, yi, zi;
   int xa, xb, xc, ya, yb, yc, za, zb, zc;
   double xf, yf, zf;
   double x2, x1, x0, y2, y1, y0, z2, z1, z0;
   double p000, p100, p200, p010, p110, p210, p020, p120, p220;
   double p001, p101, p201, p011, p111, p211, p021, p121, p221;
   double p002, p102, p202, p012, p112, p212, p022, p122, p222;

   xi = floor(p->x);
   xa = floor(P1 * (xi * phi - floor(xi * phi)));
   xb = floor(P1 * ((xi + 1) * phi - floor((xi + 1) * phi)));
   xc = floor(P1 * ((xi + 2) * phi - floor((xi + 2) * phi)));

   yi = floor(p->y);
   ya = floor(P2 * (yi * phi - floor(yi * phi)));
   yb = floor(P2 * ((yi + 1) * phi - floor((yi + 1) * phi)));
   yc = floor(P2 * ((yi + 2) * phi - floor((yi + 2) * phi)));

   zi = floor(p->z);
   za = floor(P3 * (zi * phi - floor(zi * phi)));
   zb = floor(P3 * ((zi + 1) * phi - floor((zi + 1) * phi)));
   zc = floor(P3 * ((zi + 2) * phi - floor((zi + 2) * phi)));

   p000 = pts[xa + ya + za & NUMPTS - 1];
   p100 = pts[xb + ya + za & NUMPTS - 1];
   p200 = pts[xc + ya + za & NUMPTS - 1];
   p010 = pts[xa + yb + za & NUMPTS - 1];
   p110 = pts[xb + yb + za & NUMPTS - 1];
   p210 = pts[xc + yb + za & NUMPTS - 1];
   p020 = pts[xa + yc + za & NUMPTS - 1];
   p120 = pts[xb + yc + za & NUMPTS - 1];
   p220 = pts[xc + yc + za & NUMPTS - 1];
   p001 = pts[xa + ya + zb & NUMPTS - 1];
   p101 = pts[xb + ya + zb & NUMPTS - 1];
   p201 = pts[xc + ya + zb & NUMPTS - 1];
   p011 = pts[xa + yb + zb & NUMPTS - 1];
   p111 = pts[xb + yb + zb & NUMPTS - 1];
   p211 = pts[xc + yb + zb & NUMPTS - 1];
   p021 = pts[xa + yc + zb & NUMPTS - 1];
   p121 = pts[xb + yc + zb & NUMPTS - 1];
   p221 = pts[xc + yc + zb & NUMPTS - 1];
   p002 = pts[xa + ya + zc & NUMPTS - 1];
   p102 = pts[xb + ya + zc & NUMPTS - 1];
   p202 = pts[xc + ya + zc & NUMPTS - 1];
   p012 = pts[xa + yb + zc & NUMPTS - 1];
   p112 = pts[xb + yb + zc & NUMPTS - 1];
   p212 = pts[xc + yb + zc & NUMPTS - 1];
   p022 = pts[xa + yc + zc & NUMPTS - 1];
   p122 = pts[xb + yc + zc & NUMPTS - 1];
   p222 = pts[xc + yc + zc & NUMPTS - 1];

   xf = p->x - xi;
   x1 = xf * xf;
   x2 = 0.5 * x1;
   x1 = 0.5 + xf - x1;
   x0 = 0.5 - xf + x2;

   yf = p->y - yi;
   y1 = yf * yf;
   y2 = 0.5 * y1;
   y1 = 0.5 + yf - y1;
   y0 = 0.5 - yf + y2;

   zf = p->z - zi;
   z1 = zf * zf;
   z2 = 0.5 * z1;
   z1 = 0.5 + zf - z1;
   z0 = 0.5 - zf + z2;

   return   z0 * (y0 * (x0 * p000 + x1 * p100 + x2 * p200) +
                  y1 * (x0 * p010 + x1 * p110 + x2 * p210) +
                  y2 * (x0 * p020 + x1 * p120 + x2 * p220)) +
            z1 * (y0 * (x0 * p001 + x1 * p101 + x2 * p201) +
                  y1 * (x0 * p011 + x1 * p111 + x2 * p211) +
                  y2 * (x0 * p021 + x1 * p121 + x2 * p221)) +
            z2 * (y0 * (x0 * p002 + x1 * p102 + x2 * p202) +
                  y1 * (x0 * p012 + x1 * p112 + x2 * p212) +
                  y2 * (x0 * p022 + x1 * p122 + x2 * p222));
}



Vector Dnoise(p)
Vector *p;
{
   Vector v;
   int xi, yi, zi;
   int xa, xb, xc, ya, yb, yc, za, zb, zc;
   double xf, yf, zf;
   double x2, x1, x0, y2, y1, y0, z2, z1, z0;
   double xd2, xd1, xd0, yd2, yd1, yd0, zd2, zd1, zd0;
   double p000, p100, p200, p010, p110, p210, p020, p120, p220;
   double p001, p101, p201, p011, p111, p211, p021, p121, p221;
   double p002, p102, p202, p012, p112, p212, p022, p122, p222;

   xi = floor(p->x);
   xa = floor(P1 * (xi * phi - floor(xi * phi)));
   xb = floor(P1 * ((xi + 1) * phi - floor((xi + 1) * phi)));
   xc = floor(P1 * ((xi + 2) * phi - floor((xi + 2) * phi)));

   yi = floor(p->y);
   ya = floor(P2 * (yi * phi - floor(yi * phi)));
   yb = floor(P2 * ((yi + 1) * phi - floor((yi + 1) * phi)));
   yc = floor(P2 * ((yi + 2) * phi - floor((yi + 2) * phi)));

   zi = floor(p->z);
   za = floor(P3 * (zi * phi - floor(zi * phi)));
   zb = floor(P3 * ((zi + 1) * phi - floor((zi + 1) * phi)));
   zc = floor(P3 * ((zi + 2) * phi - floor((zi + 2) * phi)));

   p000 = pts[xa + ya + za & NUMPTS - 1];
   p100 = pts[xb + ya + za & NUMPTS - 1];
   p200 = pts[xc + ya + za & NUMPTS - 1];
   p010 = pts[xa + yb + za & NUMPTS - 1];
   p110 = pts[xb + yb + za & NUMPTS - 1];
   p210 = pts[xc + yb + za & NUMPTS - 1];
   p020 = pts[xa + yc + za & NUMPTS - 1];
   p120 = pts[xb + yc + za & NUMPTS - 1];
   p220 = pts[xc + yc + za & NUMPTS - 1];
   p001 = pts[xa + ya + zb & NUMPTS - 1];
   p101 = pts[xb + ya + zb & NUMPTS - 1];
   p201 = pts[xc + ya + zb & NUMPTS - 1];
   p011 = pts[xa + yb + zb & NUMPTS - 1];
   p111 = pts[xb + yb + zb & NUMPTS - 1];
   p211 = pts[xc + yb + zb & NUMPTS - 1];
   p021 = pts[xa + yc + zb & NUMPTS - 1];
   p121 = pts[xb + yc + zb & NUMPTS - 1];
   p221 = pts[xc + yc + zb & NUMPTS - 1];
   p002 = pts[xa + ya + zc & NUMPTS - 1];
   p102 = pts[xb + ya + zc & NUMPTS - 1];
   p202 = pts[xc + ya + zc & NUMPTS - 1];
   p012 = pts[xa + yb + zc & NUMPTS - 1];
   p112 = pts[xb + yb + zc & NUMPTS - 1];
   p212 = pts[xc + yb + zc & NUMPTS - 1];
   p022 = pts[xa + yc + zc & NUMPTS - 1];
   p122 = pts[xb + yc + zc & NUMPTS - 1];
   p222 = pts[xc + yc + zc & NUMPTS - 1];

   xf = p->x - xi;
   x1 = xf * xf;
   x2 = 0.5 * x1;
   x1 = 0.5 + xf - x1;
   x0 = 0.5 - xf + x2;
   xd2 = xf;
   xd1 = 1.0 - xf - xf;
   xd0 = xf - 1.0;

   yf = p->y - yi;
   y1 = yf * yf;
   y2 = 0.5 * y1;
   y1 = 0.5 + yf - y1;
   y0 = 0.5 - yf + y2;
   yd2 = yf;
   yd1 = 1.0 - yf - yf;
   yd0 = yf - 1.0;

   zf = p->z - zi;
   z1 = zf * zf;
   z2 = 0.5 * z1;
   z1 = 0.5 + zf - z1;
   z0 = 0.5 - zf + z2;
   zd2 = zf;
   zd1 = 1.0 - zf - zf;
   zd0 = zf - 1.0;

   v.x        = z0 * (y0 * (xd0 * p000 + xd1 * p100 + xd2 * p200) +
                      y1 * (xd0 * p010 + xd1 * p110 + xd2 * p210) +
                      y2 * (xd0 * p020 + xd1 * p120 + xd2 * p220)) +
                z1 * (y0 * (xd0 * p001 + xd1 * p101 + xd2 * p201) +
                      y1 * (xd0 * p011 + xd1 * p111 + xd2 * p211) +
                      y2 * (xd0 * p021 + xd1 * p121 + xd2 * p221)) +
                z2 * (y0 * (xd0 * p002 + xd1 * p102 + xd2 * p202) +
                      y1 * (xd0 * p012 + xd1 * p112 + xd2 * p212) +
                      y2 * (xd0 * p022 + xd1 * p122 + xd2 * p222));
                                  
   v.y        = z0 * (yd0 * (x0 * p000 + x1 * p100 + x2 * p200) +
                      yd1 * (x0 * p010 + x1 * p110 + x2 * p210) +
                      yd2 * (x0 * p020 + x1 * p120 + x2 * p220)) +
                z1 * (yd0 * (x0 * p001 + x1 * p101 + x2 * p201) +
                      yd1 * (x0 * p011 + x1 * p111 + x2 * p211) +
                      yd2 * (x0 * p021 + x1 * p121 + x2 * p221)) +
                z2 * (yd0 * (x0 * p002 + x1 * p102 + x2 * p202) +
                      yd1 * (x0 * p012 + x1 * p112 + x2 * p212) +
                      yd2 * (x0 * p022 + x1 * p122 + x2 * p222));
                                  
   v.z        = zd0 * (y0 * (x0 * p000 + x1 * p100 + x2 * p200) +
                       y1 * (x0 * p010 + x1 * p110 + x2 * p210) +
                       y2 * (x0 * p020 + x1 * p120 + x2 * p220)) +
                zd1 * (y0 * (x0 * p001 + x1 * p101 + x2 * p201) +
                       y1 * (x0 * p011 + x1 * p111 + x2 * p211) +
                       y2 * (x0 * p021 + x1 * p121 + x2 * p221)) +
                zd2 * (y0 * (x0 * p002 + x1 * p102 + x2 * p202) +
                       y1 * (x0 * p012 + x1 * p112 + x2 * p212) +
                       y2 * (x0 * p022 + x1 * p122 + x2 * p222));
   return v;
}
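[A quick sanity check on the quadratic B-spline weights used above (x0, x1, x2
and their derivatives xd0, xd1, xd2): the weights should sum to 1, the
derivative weights to 0, and the latter should match a finite difference of the
former.  A small standalone sketch of mine, with the weight formulas lifted
straight from noise() and Dnoise(). -- EAH]

```c
#include <math.h>

/* Quadratic B-spline basis for a fractional offset f in [0,1),
   exactly as computed inline in noise() above. */
static void qspline(double f, double w[3])
{
    w[0] = 0.5 - f + 0.5 * f * f;
    w[1] = 0.5 + f - f * f;
    w[2] = 0.5 * f * f;
}

/* Analytic derivatives of the basis, as used in Dnoise(). */
static void dqspline(double f, double d[3])
{
    d[0] = f - 1.0;
    d[1] = 1.0 - f - f;
    d[2] = f;
}
```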

-----------------------------------------------------------------------------

END OF RTNEWS

From m-cohen@cs.utah.edu Wed Jun 21 18:47:02 1989
Return-Path: <m-cohen@cs.utah.edu>
Received: from cs.utah.edu by turing.cs.rpi.edu (4.0/1.2-RPI-CS-Dept)
	id AA16128; Wed, 21 Jun 89 18:46:44 EDT
Received: by cs.utah.edu (5.61/utah-2.1-cs)
	id AA01339; Wed, 21 Jun 89 15:45:04 -0600
Date: Wed, 21 Jun 89 15:45:04 -0600
From: m-cohen@cs.utah.edu (Michael F Cohen)
Message-Id: <8906212145.AA01339@cs.utah.edu>
To: FISHER%3D.dec@decwrl.dec.com, apollo!arvo@eddie.mit.edu,
        apollo!johnf@eddie.mit.edu, atc@cs.utexas.EDU, barr@csvax.caltech.edu,
        barsky@miro.berkeley.edu, bogart%gr@cs.utah.edu,
        carl@mssun1.msi.cornell.edu, ckchee@dgp.toronto.edu,
        craig@weedeater.math.yale.edu, dana!mrk@hplabs.hp.com,
        daniel@apollo.com, dk@csvax.caltech.edu, dutio!fwj@mcvax.cwi.nl,
        dutrun!wim@mcvax.cwi.nl, glassner@xerox.com, grant@icdc.llnl.gov,
        gray%rhea.CRAY.COM@uc.msc.umn.edu, green@uk.ac.bristol.compsci,
        hohmeyer@miro.berkeley.edu, hpfcla!hpfcrs!eye!erich@hplabs.HP.COM,
        hultquis@prandtl.nas.nasa.gov, jaf@squid.graphics.cornell.edu,
        jeff@hamlet.caltech.edu, jevans@cpsc.ucalgary.ca, joy@ucdavis.edu,
        kyriazis@turing.cs.rpi.edu, lister@dg-rtp.dg.com, litwinow@apple.com,
        lytle@tc.tn.cornell.edu, m-cohen@cs.utah.edu,
        megatek!kuchkuda@ucsd.edu, ph@miro.berkeley.edu,
        phil@zeno0.rdrc.rpi.edu, pixar!pat@ucbvax.berkeley.edu,
        raycasting@duke.cs.duke.edu, roy@wisdom.graphics.cornell.edu,
        sgi!paul@pyramid.pyramid.com, tim@csvax.caltech.edu,
        vedge!kaveh@larry.mcrim.mcgill.edu
Subject: Ray Tracing News
Status: R

 _ __                 ______                         _ __
' )  )                  /                           ' )  )
 /--' __.  __  ,     --/ __  __.  _. o ____  _,      /  / _  , , , _
/  \_(_/|_/ (_/_    (_/ / (_(_/|_(__<_/ / <_(_)_    /  (_</_(_(_/_/_)_
             /                               /|
            '                               |/

			"Light Makes Right"

			   June 21, 1989
		        Volume 2, Number 4

Compiled by Eric Haines, 3D/Eye Inc, 2359 Triphammer Rd, Ithaca, NY 14850
    607-257-1381, hpfcla!hpfcrs!eye!erich@hplabs.hp.com, cornell!eye!erich
All contents are US copyright (c) 1989 by the individual authors

Contents:
    Introduction
    Hardcopy News (Andrew Glassner)
    New People (Stuart Green, Craig Kolb (and Ken Musgrave), Kaveh Kardan)
    Comments on "Free" Ray Tracers and on Renderman (Kaveh Kardan)
    Minimum Bounding Sphere, continued (Jack Ritter)
    Comments on "A Review of Multi-Computer Ray-Tracing" (Thierry Priol)
    ======== USENET cullings follow ========
    Query: Dataflow Architectures and Ray Tracing (George Kyriazis)
    More on Pixar's Noise Function (Jon Buller)
    DBW_render for Sun 3 (Tad Guy)
    Steel Colors (Eugene Miya)
    Dirty Little Tricks (Jack Ritter)
    Obfuscated Ray Tracer (George Kyriazis)
    Contents of FTP archives, skinner.cs.uoregon.edu (Mark VandeWettering)

-----------------------------------------------------------------------------

Introduction
------------

First off, please note that I now have a second mail address:

	eye!erich@wrath.cs.cornell.edu

This is a lot more direct than hopping the HP network through Palo Alto and
Colorado just to get to Ithaca.  We have the connection courtesy of the
Computer Science Dept at Cornell, and they have asked us to try to keep our
traffic down.  So, please don't be a funny guy and send me image files or
somesuch.

I just noticed that Andrew and I are out of sync: his hardcopy version is on
Volume 4, and I'm on Volume 3.  One excuse is that the first year of the email
edition is labeled "Volume 0", since it wasn't even called "The Ray Tracing
News" at that point.  An alternate excuse is that I program in "C", and so
start from 0.  Anyway, until maybe the new year, I'll stick with the current
scheme (hey, no one even noticed that last issue was misnumbered (and corrected
on the USENET copy)).

-----------------------------------------------------------------------------

Hardcopy News
-------------

by Andrew Glassner


The latest issue of the hardcopy Ray Tracing News (Volume 3, Number 1, May
1989) goes into the mail today, 31 May.  Everyone who is on the softcopy
mailing list should receive a copy.  If you don't get a copy in a week or two,
please let me know (glassner.pa@xerox.com).  It would help if you include your
physical mailing address, so I can at least confirm that your issue was
intended to go to the right place.

Contributions are now being solicited for Vol. 4, No. 2.  Start working on
those articles!

-----------------------------------------------------------------------------

New (and Used?) People
----------------------

The computers at the Cornell Program of Computer Graphics have all changed
their addresses.  Any computer named "*.tn.cornell.edu" is now
"*.graphics.cornell.edu".

------------

Stuart Green            - multiprocessor systems for realistic image synthesis
Department of Computer Science
University of Bristol
Queen's Building
University Walk
Bristol.  BS8 1TR
ENGLAND.
green@uk.ac.bristol.compsci

I am working on multiprocessor implementations of algorithms for realistic
image synthesis.  So far, this has been restricted to straightforward ray
tracing, but I hope to look at enhanced ray tracing algorithms and radiosity.
I've implemented a ray tracer on a network of Inmos Transputers which uses
mechanisms for distributing both computation and the model data amongst the
processors in a distributed memory MIMD system.

------------

Craig Kolb (and Ken Musgrave)

My primary interests include modeling natural phenomena, realistic image
synthesis, and animation.

I can be reached at:
Dept. of Mathematics
Yale University
P.O. Box 2155
Yale Station
New Haven, CT  06520-2155
(203) 432-7053

alias	craig_kolb	craig@weedeater.math.yale.edu
alias	ken_musgrave	musgrave-forest@yale.edu

...I've just started looking into ray/spline intersection.  We do a lot of
heightfield-tracing 'round here, and in the past have rendered them using a
triangle tessellation.  I'm giving splines a shot in order to render some
pictures of eroded terrain for our SIGGRAPH talk.  I notice that you list
spline intersection among your primary interests.  What sort of methods have
you investigated?  At the moment I've implemented (what I assume is) the
standard Newton's method in tandem with a DDA-based cell traversal scheme (as
per our SIGGRAPH paper).  Although this works, it's not exactly blindingly
fast...  Do you know of any 'interesting' references?

------------

Kaveh Kardan
Visual Edge Software Ltd.
3870 Cote Vertu
Montreal, Quebec H4R 1V4
(514)332-6430
larry.mcrim.mcgill.edu!vedge!kaveh

I graduated with a BS in Math from MIT in 1985, did some work in molecular
graphics at the Xerox Research Centre of Canada (XRCC), wrote the renderer --
which included a raytracer -- at Neo-Visuals (now known as SAS Canada), and did
the animation stuff at Softimage.  I'm currently working at Visual Edge on the
UIMX package: an X Windows user interface design system.

Regarding the Softimage raytracer: it was written by Mike Sweeney (who used
to be at Abel, and who did "Crater Lake" at Waterloo).

I will also be acting as a mail forwarder for Mike, as Softimage is not on any
networks.  So in effect, you should probably include Mike in the mailing list
as well, with my address -- or somehow let people know that he can be reached
c/o me.

If I may make some comments about the stuff I have read so far in the back
issues:

====================

Jeff Goldsmith writes:

>    I don't get it.  Why doesn't every CG Software vendor supply a
>ray tracer.  It's definitely the easiest renderer to write.  Yes,
>they are slo-o-o-o-o-o-w, but they sound glitzy and (I bet) would
>stimulate sales, even if buyers never used them.

Having worked at two CG software companies, I know firsthand how the "to do"
list grows faster than you can possibly implement features (no matter how many
programmers you have -- cf. "The Mythical Man-Month").

Jeff is right that ray tracing sounds glitzy, and, yes, it is another factor to
toss into the sales pitch -- but it is not at all clear that it is worth the
effort.

Most (if not all) ray tracers assume either infinite rendering time or infinite
disk space.  In the real world (a 68020 and a 144Meg disk) this is not the
case.  The raytracer I wrote at Neo Visuals was written in Fortran -- ergo no
dynamic memory allocation -- so I had to work on optimizing it without
significantly increasing the memory used.  This mostly involved intelligently
choosing when to fire rays.  The renderer performs a Watkins-style rendering,
and fires secondary rays from a pixel only if the surface at that pixel needs
to be raytraced.  Memory constraints prevented me from using any spatial
subdivision methods.

Yes, ray traced images are great sales tools.  They are also sometimes not
entirely honest -- novice users ("I want a system to animate Star Wars quality
images, about ten minutes of animation a day on my XT") are not aware of the
expense of raytracing, and very few salesmen go out of their way to point this
out.  However, these same users, unsure of the technology, put together a list
of buzzwords (amongst them "raytracing") and go out to find that piece of
software which has the most features on their list.  Hence I coined the phrase
"buzzword compatible" while at Neo-Visuals (and also "polygons for polygons
sake" -- but that's another story).

I have also seen demos, animations, and pictures at trade shows, presented by
hardware and software vendors, which were extremely and deliberately
misleading.  A very common example is to show a nice animation that was not
really created with the software product being sold, but instead with special
programs and add-ons.
marketing their "Generation 2" software with "Sexy Robot", "Gold", "Hawaiian
Punch", etc.  I only mention Abel because they are no longer in business -- I
don't want to mention any names of existing companies.

I hadn't intended this to be a flame.  But that sums up why not all software
vendors bother with raytracing, and how it can be abused if not handled
carefully.

====================

On Steve Upstill's remarks on the Renderman standard:

Disclaimer: I have not read the Renderman specs, and have spoken to people who
liked it and people who didn't.

I would like to say that while I was at Neo-Visuals, Tom Porter and Pat
Hanrahan did indeed drop by to ask us about our needs, and to ensure that the
interface would be compatible with our system.  As I recall, we asked that the
interface be capable of handling arbitrary polygons (n vertices, concave, etc).
As I recall, I was playing devil's advocate at the meeting, questioning
whether rendering had settled down enough to be standardized.  So yes, at
least Neo-Visuals did get to have a say and contribute to the interface.

I spoke to one rendering person at Siggraph who didn't appreciate the way Pixar
had handed down the interface and said "thou shalt enjoy."  Well, the
alternative would be a PHIGS-like process: years spent in committees trying to
hash out a compromise which will in all likelihood be obsolete before the ink
is dry.  In fact, two hardware vendors decided to take matters into their own
hands and came up with PHIGS+.

Yes, the interface is probably partly a marketing initiative by Pixar.  Why
would they do it otherwise?  Why should they do it otherwise?  I would guess
that Pixar hopes to have the standard adopted, then come out with a board which
will do Renderman rendering faster than anyone else's software.  This seems a
natural progression.  More and more rendering features have been appearing in
hardware -- 2D, 3D, flat shaded, Gouraud, and now Phong and texture maps.  It
is very probable that in a few years, "renderers" will be hardware, except for
experimental, research, and prototype ones.

-----------------------------------------------------------------------------

Minimum Bounding Sphere, continued
----------------------------------

    by Jack Ritter, {ames,apple,sun,pyramid}!versatc!ritter

I noticed in "The Ray Tracing News" an answer to my query about minimum
bounding spheres. The answer following my question assumes there are 3 points.
(Search for "Ray Traced Bounding Spheres").  This is wrong; I meant n points in
3 space.  Since then, Lyle Rains, Wolfgang Someone and I have arrived at a fast
way to find a tight bounding sphere for n points in 3 space:

1) Make 1 pass through pts. Find these 6 pts:
	pt with min x, max x, min/max y, min/max z.
	Pick the pair with the widest dimensional span. This describes the
	diameter of the initial bounding sphere. If the pts are anywhere near
	uniform, this sphere will contain most pts.

2) Make a 2nd pass through the pts: for each pt still outside the current
	sphere, update the current sphere to the larger sphere passing through
	the pt on 1 side, and the back side of the old sphere on the other
	side.  Each new sphere will (barely) contain its previous pts, plus
	the new pt, and probably some new outsiders as well.  Step 2 should
	need to be done for only a small fraction of the total num of pts.

The following is code (untested, as far as I know) to grow the sphere to
include a new point:

typedef double Ordinate;
typedef double Distance;
typedef struct { Ordinate x; Ordinate y; Ordinate z; } Point;
typedef struct { Point center; Distance radius; } Sphere;


Distance separation(pa, pb)
  Point *pa;
  Point *pb;
{
  Distance delta_x, delta_y, delta_z;

  delta_x = pa->x - pb->x;
  delta_y = pa->y - pb->y;
  delta_z = pa->z - pb->z;
  return (sqrt(delta_x * delta_x + delta_y * delta_y + delta_z * delta_z));
}


Sphere *new_sphere(s, p)
  Sphere *s;
  Point *p;
{
  Distance old_to_p;
  Distance old_to_new;

  old_to_p = separation(&s->center, p);
  if (old_to_p > s->radius) { /* could test vs r**2 here */
    s->radius = (s->radius + old_to_p) / 2.0;
    old_to_new = old_to_p - s->radius;
    s->center.x =
      (s->radius * s->center.x + old_to_new * p->x) / old_to_p;
    s->center.y =
      (s->radius * s->center.y + old_to_new * p->y) / old_to_p;
    s->center.z =
      (s->radius * s->center.z + old_to_new * p->z) / old_to_p;
  }
  return (s);
}

   Jack Ritter, S/W Eng. Versatec, 2710 Walsh Av, Santa Clara, CA 95051
   Mail Stop 1-7.  (408)982-4332, or (408)988-2800 X 5743
   UUCP:  {ames,apple,sun,pyramid}!versatc!ritter

[This looks to be a good quick algorithm giving a near-optimal solution.  Has
anyone come up with an absolutely optimal solution?  The "three point" solution
(in last issue) gives us a tool to do a brute force search of all triplets, but
this is insufficient to solve the problem.  For example, a tetrahedron's
bounding sphere cannot be found by just searching all the triplets, as all such
spheres would leave out the fourth point. - EAH]

-----------------------------------------------------------------------------

Comments on "A Review of Multi-Computer Ray-Tracing"
---------------------------------------------------

by Thierry Priol


I read with great interest "A Review of Multi-Computer Ray-Tracing" in the
hardcopy "Ray Tracing News" (May 1989).  But D.A.J. Jevans said that our work
presented at CGI "may even serve to cloud the important issues in
multi-computer ray-tracing"!  I do not agree with this remark.  The
presentation at CGI describes only a first step in using multi-processor ray
tracing algorithms.  It is true that there were no interesting results in this
paper.  D.A.J. Jevans also said that a hypercube architecture is hardware
specific.  I do not agree.  This kind of architecture accounts for a large
share of distributed-memory architectures.  Our algorithm is not specific to
this topology and can work on a SYMULT-2010, which uses a mesh topology.
However, I agree when he says that our algorithm provides little in the way of
new algorithms, since we used Cleary's algorithm.  But we think that, for the
moment, the main problem is not to create new algorithms but to experiment
with the algorithms presented by several authors, because most of them have
been simulated but not implemented.  Our experiments show that many problems
due to distributed computing (not only load and memory balancing) were not
solved by those authors.

At present, our algorithm has been modified to take load balancing into
account, and we have several results not yet published.  These new results may
give some important conclusions about the Cleary approach (processor-volume
association).  We are now working on a new algorithm based on a global memory
on distributed-memory architectures!  To my mind it is the best solution for
obtaining load and memory balancing.  The ray coherence property is a means of
getting a sort of locality when data is read from the global memory (best use
of caches).

We (D. Badouel, K. Bouatouch and myself) are very interested in submitting to
the "Ray-tracing News" a short paper which summarizes our work on parallel
ray-tracing algorithms for distributed-memory architectures.  This
contribution would present two ray tracing algorithms with associated results.
This work has not yet been published outside France.

======== USENET cullings follow ===============================================

Subject: Dataflow architectures and Ray Tracing
From: kyriazis@rpics (George Kyriazis)
Newsgroups: comp.arch,comp.graphics
Organization: RPI CS Dept.

Hi!  I am wondering if anybody knows if there have been any attempts to port a
ray tracing algorithm on a dataflow computer, or if there has been such a
machine especially built for ray tracing.

I am posting to both comp.arch and comp.graphics since I think that it concerns
both newsgroups.

It seems to me that a dynamic dataflow architecture is more appropriate to this
problem because of the recursiveness and parallelism of the algorithm.

Thanks in advance for any info...

-----------------------------------------------------------------------------

Subject: Re: Pixar's noise function
Summary: my noise, better described (I hope)
Reply-To: bullerj@handel.colostate.edu.UUCP (Jon Buller)
Organization: Colorado State University, Ft. Collins CO 80523

In article <...> jamesa@arabian.Sun.COM (James D. Allen) writes:
>In article <...> coy@ssc-vax.UUCP (Stephen B Coy) writes:
>> >          ...My question:  Does anyone out there know what this
>> >noise function really is?
>> 
>> ... Conceptually, noise()
>> is a "stochastic three-dimensional function which is statistically
>> invariant under rotation and translation and has a narrow bandpass
>> limit in frequency" (paraphrased from [Perlin1985]).  This means that
>> you put three-space points in, and you get values back which are basically
>> random.  But if you put other nearby points in, you get values that are
>> very similar.  The differences are still random, but the maximum rate of
>> change is controlled so that you can avoid aliasing.  If you put a set
>> of points in from a different region of space, you will get values out
>> which have "the same amount" of randomness.
>
>       Anyone willing to post a detailed description of such an
>       algorithm?  (Jon Buller posted one, but I couldn't figure it out:
>       what is `Pts'?)

Sorry about not really describing my program to anyone; I know what it does,
and I never expected anyone else to see it (isn't it obvious?) :-)

What it does is this: you pass in a location in space and an array of random
numbers (this is 'Pts').  I fill the array with values in 0.0 .. 1.0, but any
values or range will work.  (I have other textures which color based on
distance to the nearest point of a random set, hence the name; it has 4 values
per entry at times.)

Step 1: change the location to a group of points to interpolate.  This is
where xa,xb,xc,...,zc come in; any location with the same coords (when
trunc'ed) will produce the same xa...zc values, making the same values for the
interpolation at the end.  These xa..zc are then hashed into the 'Pts' array
to produce p000...p222, and these 27 random numbers are interpolated with a
quadratic 3D B-spline (the long ugly formula at the end).  The variables based
on xf, yf, and zf (I believe they are x0..z2) are the B-spline basis functions
(note that to get DNoise, you just take the (partial) derivatives of the basis
functions and re-evaluate the spline).

Step 2: now you have a value that is always smaller than the largest random
number in 'Pts' (equal to it only in the odd case that major bunches of the
numbers are also the maximum in the range).  By the same argument, all numbers
returned are larger than the smallest number in the array.  (This can be handy
if you don't want to have to clip your values to some limit.)

I hope this explains the use of the routine better.  Sorry I didn't realize
that earlier.  If you have any other questions about it, mail them to me, and
I'll do my best to explain it.

Jon

-----------------------------------------------------------------------------

From: tadguy@cs.odu.edu (Tad Guy)
Newsgroups: comp.graphics
Subject: Re: DBW_render for SUN 3 ?
Organization: Old Dominion University, Norfolk, VA

daniel@unmvax.unm.edu writes:
>Has anyone converted the public domain ray trace program called DBW_render
>to run on a SUN workstation?

Ofer Licht <ofer@gandalf.berkeley.edu> has done just that.  His modified
DBW_Render is available via anonymous ftp from xanth.cs.odu.edu as
/amiga/dbw.zoo.  It is also designed to use ``DistPro'' to distribute the
computations among many machines (this is available as well, as
/amiga/distpro.zoo).

His address:

	Ofer Licht  (ofer@gandalf.berkeley.edu)
	1807 Addison St. #4
	Berkeley, CA 94703
	(415) 540-0266

-----------------------------------------------------------------------------

From: eugene@eos.UUCP (Eugene Miya)
Newsgroups: comp.graphics
Subject: Re: Steel colors
Organization: NASA Ames Research Center, Calif.

In article <...> jwl@ernie.Berkeley.EDU.UUCP (James Wilbur Lewis) writes:
>In article <...> jep@oink.UUCP (James E. Prior) writes:
>>I've noticed that when I look closely at reasonably clean bare steel in good
>>sunlight that it appears to have a very fine grain of colors.  
>>
>>What is this due to?
>
>Probably a diffraction-grating type effect due to scratches, roughness, or
>possibly crystalline structure at the surface.

Funny you should mention this.  I was sitting with my officemate, George
Michael (he says hi, Kelly), and we were talking about stuff when he brought
up the subject of polish.  He said there were people at Livermore who were
researching the issue of polish for big mirrors, but that polish really isn't
well understood; there are still open, interesting physical science questions.
Polish consists of minute "scratches" which have a set of interesting
properties.  You can probably [write] to them and get TRs on the topic.
Polish is more than iridescence.

Also, since somebody asked, the date on the Science article by Greenberg on
Light reflection models for graphics is 14 April 1989, page 166.  It will
provide simple models for this type of stuff.

-----------------------------------------------------------------------------

From: ritter@versatc.UUCP (Jack Ritter)
Newsgroups: comp.graphics
Subject: Dirty Little Tricks
Organization: Versatec, Santa Clara, Ca. 95051

I've come up with a fast approximation to
3D Euclidean distance ( sqrt(dx*dx+dy*dy+dz*dz) ).
  (It's probably not original, but .....)

1) find these 3 values: abs(dx), abs(dy), abs(dz).

2) Sort them (3 compares, 0-3 swaps)

3) Approx E.D. = max + (1/4)med + (1/4)min.
                        (error: +/- 13%) 

        max +  (5/16)med + (1/4)min has  9% error.
        max + (11/32)med + (1/4)min has  8% error.

As you can see, only shifts & adds are used, and it can be done with integer
arithmetic.  It could be used in ray tracing as a preliminary test before using
the exact form.
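Spelled out in floating-point C (my untested transcription of the steps
above, using the 11/32 flavor; in an integer version the two scale factors
become shifts and adds):

```c
#include <math.h>

/* Approximate sqrt(dx*dx + dy*dy + dz*dz) without a square root:
   sort the absolute components, then take max + (11/32)med + (1/4)min,
   which is good to roughly +/-8-9%. */
double approx_dist3(double dx, double dy, double dz)
{
    double t, max, med, min;

    max = fabs(dx); med = fabs(dy); min = fabs(dz);
    /* 3 compares, 0-3 swaps, as in step 2 above. */
    if (max < med) { t = max; max = med; med = t; }
    if (max < min) { t = max; max = min; min = t; }
    if (med < min) { t = med; med = min; min = t; }
    return max + (11.0 / 32.0) * med + (1.0 / 4.0) * min;
}
```

A preliminary-test use would compare approx_dist3() against a threshold
padded by the error bound before falling back to the exact form.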

We all have our dirty little tricks.

-----------------------------------------------------------------------------

From: kyriazis@turing.cs.rpi.edu (George Kyriazis)
Newsgroups: comp.graphics
Subject: Obfuscated ray tracer
Organization: RPI CS Dept.

A while ago, while it was still snowing, I was feeling adventurous, and one
nice weekend I decided to write an obfuscated ray tracer.  A friend of mine
told me that it was not obfuscated enough for the "Obfuscated C Contest", and
since I had already spent one whole day on it, I gave up.  Today, I was
cleaning up my account, and I thought it would be a very appropriate posting
for comp.graphics.

It is a hacker's approach to ray tracing; it produces a text image on the
screen.  No shading; different characters represent different objects.  The
source code is 762 bytes long.  I KNOW that I'll get flamed, but who cares! :-)

Have fun people!

So, here it is:

Compile with  cc ray.c -o ray -lm


/* (c) 1988 by George Kyriazis */
#include <math.h>
#define Q "
#define _ define
#_ O return
#define T struct
#_ G if
#_ A(a,b) (a=b)
#define D double
#_ F for
#define P (void)printf(Q
#define S(x) ((x)*(1/*p-"hello"[6])/*comment*/*x))
T oo{D q,r,s,t;};int m[1]={2};T oo o[2]={{10,10,10,18},{15,15,17,27}};int x,y;D
I(i){D b,c,s1,s2;int*p=0,q[1];b=i/*p+1["_P]+(1-x*x)*erf(M_PI/i)/1*/**q+sin(p);{
{b=2*-(i+o)->s;c=S(x-i[o].q)+S(y-o[i].r)+S(i[o].s)-(o+i)->t;}A(s1,S(b));}{G((s2
=(S(b)-4*c)<0?-1:sqrt(-4*c+S(b)))<0){O(b-(int)b)*(i>=0-unix);}}s1=(-b+s2)/2;s2=
s1-s2;s1=s1<=0?s2:s1;s2=s2<=0?s1:s2;O s1<s2?s1:s2;}main(){D z,zz;int i,ii;F(A(y
,0);y<24;y-=listen(3,0)){F(x-=x;x<40;x++){F(z=!close(y+3),A(i,0);i<*m*(y>-1);i=
A(i,i+1))G(z<(A(zz,I(i))))z=zz,ii=i;G(!!z)P%d",ii);else P%c",32-'\0');}P\n");}}

-----------------------------------------------------------------------------

Subject: Contents of FTP archives, skinner.cs.uoregon.edu
From: markv@tillamook.cs.uoregon.edu (Mark VandeWettering)
Newsgroups: comp.graphics
Organization: University of Oregon CIS Dept.

Recently, the ftp archive of raytracing stuff was moved from our dying
VAX-750 (drizzle.cs.uoregon.edu) to our new fileserver, which is called
skinner.cs.uoregon.edu, or just cs.uoregon.edu.

There is more diskspace available, and I have expanded the archives to contain
several new items.  I thought I would post the README here to let people know
of its availability.

skinner.cs.uoregon.edu contains information largely dealing with the subject of
raytracing, although a radiosity tracer or solid modeler would be a welcome
addition to the contents there.  I am always looking for new software
acquisitions, so if you have anything you wish to put there, feel free to send
me a note.

Mark VandeWettering

-cut-cut-cut-cut-cut-cut-cut-cut-cut-cut-cut-cut-cut-cut-cut-cut-cut-cut-cut-cut

The old README was dated, so I thought I would replace it with this new one...

dr-xr-xr-x  2 ftp           512 Feb 11 18:53 bibs

        contains bibliographies for fields that I am interested in,
        such as graphics and functional programming.

drwxrwxr--  2 ftp           512 May 13 23:44 gif.fmt

        descriptions of the gif format.  Too many people wanted this, so I 
        thought I would make it available.

drwxrwxr-x  2 root          512 May 24 22:25 grafix-utils

        Utilities for converting among graphics formats etc.
        Now includes fuzzy bitmap, should also include pbm
        and utah raster toolkit soon.

drwxrwxr--  2 ftp          1024 May 14 15:45 hershey

        The Hershey Fonts.  Useful PD fonts.

drwxrwxr-x  2 root          512 May 24 22:26 mtv-tracer

        My raytracer, albeit a dated version.

dr-xr-xr-x  2 ftp           512 Feb 16 17:24 musgrave

        Copies of papers on refraction by Kenton Musgrave.

drwxrwxr-x  2 root          512 May 24 22:26 nff
        
        Haines SPD raytracing package, with some other NFF images
        created by myself & others.  Useful for the mtv raytracer.

drwxr-xr-x  2 ftp          1536 May 24 11:44 off-objects

        Some interesting, PD or near PD images from the OFF distribution.

dr-xr-xr-x  2 ftp           512 Feb 15 22:48 polyhedra

        Polyhedra from the netlib server.  I haven't done anything with 
        these...

dr-xr-xr-x  2 ftp           512 Mar  6 17:45 qrt

        The popular raytracer for PCs.

dr-xr-xr-x  2 ftp           512 May 24 22:26 rayfilters

        Filters to convert the MTV output to a number of devices...

drwxrwxr-x  2 root          512 May 24 22:26 raytracers

        Other raytracers....

-rw-r--r--  1 ftp        323797 May 24 01:47 sunrpc.tar.Z

        SUN RPC v.3.9

[All issues of the email version of "The RT News" have been put in the
directory "RTNews" since this posting.]

-----------------------------------------------------------------------------
END OF RTNEWS

From m-cohen@cs.utah.edu Tue Aug 29 21:05:09 1989
Return-Path: <m-cohen@cs.utah.edu>
Received: from cs.utah.edu by turing.cs.rpi.edu (4.0/1.2-RPI-CS-Dept)
	id AA24854; Tue, 29 Aug 89 21:04:57 EDT
Received: by cs.utah.edu (5.61/utah-2.2-cs)
	id AA02550; Tue, 29 Aug 89 16:33:13 -0600
Date: Tue, 29 Aug 89 16:33:13 -0600
From: m-cohen@cs.utah.edu (Michael F Cohen)
Message-Id: <8908292233.AA02550@cs.utah.edu>
To: FISHER@3D.dec@decwrl.dec.com, arvo@apollo.com, atc@cs.utexas.edu,
        barr@csvax.caltech.edu, barsky@miro.berkeley.edu,
        bcorrie@uvicctr.uvic.ca, chapman@fornax, ckchee@dgp.toronto.edu,
        cs.utah.edu!esunix!tmalley@cs.utah.edu, daniel@apollo.com,
        dk@csvax.caltech.edu, dmarsh@apple.com, esl0422@ultb.isc.rit.edu,
        esl@422@ultb.isc.rit.edu, glassner.pa@xerox.com,
        grant@delvalle.llnl.gov, gray@rhea.CRAY.COM@uc.msc.umn.edu,
        green@COMPSCI.BRISTOL.AC.UK@CUNYVM.CUNY.EDU, hanrahan@princeton.edu,
        hench@csclea.ncsu.edu, hohmeyer@miro.berkeley.edu,
        hplabs!dana!mrk@cs.utah.edu, hultquis@prandtl.nas.nasa.gov,
        image.trc.amoco.com!zmel02@cs.utah.edu, jakob@humus.huji.ac.il,
        jeff@hamlet.caltech.edu, johnf@apollo.com, joy@ucdavis.edu,
        kolb@yale.edu, kyriazis@turing.cs.rpi.edu,
        larry.mcrcim.mcgill.edu!vedge!kaveh@cs.utah.edu, lister@dg-rtp.dg.com,
        litwinow@apple.com, mcvax.cwi.nl!dutio!fwj@cs.utah.edu,
        mcvax.cwi.nl!dutrun!wim@cs.utah.edu, meyer@ifi.unizh.ch, mike@brl.mil,
        mja@sierra.llnl.gov, mplevine@phoenix.princeton.edu,
        mssun1.msi.cornell.edu!carl@cs.utah.edu, musgrave-forest@yale.edu,
        paul@sgi.com, ph@miro.berkeley.edu, phil@rdrc.rpi.edu,
        raycasting@duke.cs.duke.edu, raytrace@cpsc.ucalgary.ca,
        rgb@caen.engin.umich.edu, squid.graphics.cornell.edu!jaf@cs.utah.edu,
        tcgould.tn.cornell.edu!lytle@cs.utah.edu, tim@csvax.caltech.edu,
        ucsd.ucsd.edu!megatek!kuchkuda@cs.utah.edu,
        wisdom.graphics.cornell.edu!roy@cs.utah.edu
Subject: Ray Tracing News
Status: R

 _ __                 ______                         _ __
' )  )                  /                           ' )  )
 /--' __.  __  ,     --/ __  __.  _. o ____  _,      /  / _  , , , _
/  \_(_/|_/ (_/_    (_/ / (_(_/|_(__<_/ / <_(_)_    /  (_</_(_(_/_/_)_
             /                               /|
            '                               |/

			"Light Makes Right"

			   August 29, 1989
			 Volume 2, Number 5

Compiled by Eric Haines, 3D/Eye Inc, 2359 Triphammer Rd, Ithaca, NY 14850
    hpfcla!hpfcrs!eye!erich@hplabs.hp.com, wrath.cs.cornell.edu!eye!erich
    [distributed by Michael Cohen <m-cohen@cs.utah.edu>, but send
    contributions and subscriptions requests to Eric Haines]
All contents are US copyright (c) 1989 by the individual authors

Contents:
    Introduction
    A SIGGRAPH Report, by Eric Haines
    Ray Tracing Poll, from the roundtable discussion
    _An Introduction to Ray Tracing_, Announcement and Errata
    _Graphics Gems_ Call for Contributions, by Andrew Glassner
    New People and Address Changes
    Bugs in MTV's Ray Tracer, by Eric Haines
    Bug in SPD, from Pete Segal
    Solid Textures Tidbit, by Roman Kuchkuda
    Sundry Comments, by Jeff Goldsmith
    Texture Mapping Question, by Susan Spach
    ======== USENET cullings follow ========
    Ray Traced Image Files, by Prem Subramanyan
    Image Collection, by Paul Raveling
    MTV-Raytracer on ATARI ST - precision error report, by Dan Riley
    Question on Ray Tracing Bicubic Parametric Patches, by Robert Minsk

-------------------------------------------------------------------------------

Introduction

	Now that the "RT News" is posted on comp.graphics, I'd like to make a
few changes.  First of all, you really don't need to get on the subscription
list of the RT News if you're an avid reader of comp.graphics.  However, I do
want to maintain an emailing list of people interested in ray tracing.

	So, I'll be keeping a single address list of interested people (which
I will call the "contact list"), and only some of the people on this list need
to be sent copies of the RT News.  Furthermore, it would be nice if various
schools, etc, would make a single email address the place where they will
receive the RT News.  For example, Duke has "raycasting@duke.cs.duke.edu" as
the address to which I should send the RT News.  They also have individuals on
the address list, but individual copies are not sent to them.  Instead, the
"raycasting" account they have remails the RT News to all people that are
interested at Duke.  In this way I can cut down on having issues sent out
unnecessarily, on worrying about bounced mail, on maintaining a long mailing
list (currently about 100 people), etc.

	To summarize, there are then a few different ways you can be listed.

	1) Individual subscriber - You don't read comp.graphics and want to
	   get the RT News.  Your name is put on the subscriber list and the
	   contact list.  To subscribe, simply send me your name, email and
	   snail mail addresses, and a one line summary (that I'll put next to
	   your name) describing any special areas you're interested in (e.g.
	   "parallelism, radiosity, filtering" might be one).  You might also
	   send me a few paragraphs about what you're up to nowadays, and I'll
	   include this in the News.

	2) Group subscriber - Like Duke.  This way you have full control over
	   who will automatically get an issue, and I won't have to change my
	   subscriber list each time someone else wants to get the RT News.

	   Some people have already switched over.  I just received this:

	   We went ahead and implemented the local alias.  It's "raytrace",
	   which you can reach as raytrace@cpsc.ucalgary.ca or
	   calgary!raytrace.  At the moment Dave Jevans and I (Bill Jones) are
	   the only subscribers.

	3) Contact list only - You read comp.graphics and so do not need to
	   subscribe.  However, you want to be on the list of people interested
	   in ray tracing.  Another advantage of the contact list is that
	   people may send you mail - for instance, I wrote all the people on
	   the list and invited them to come to a get-together for ray tracers
	   at SIGGRAPH (more on this later).  Everyone who is a subscriber is
	   also automatically on the contact list, but not vice versa.  You
	   can also be listed as an individual on the contact list if you're a
	   group subscriber.

	4) The silent majority - You don't need to be on any list, and are
	   happy just reading comp.graphics (or have hit the "n" key by now).

	Currently almost everyone on the contact list is also a subscriber.
So, if you read comp.graphics fairly constantly and are on the subscriber list,
please tell me to put you on only the contact list.  Clear as mud?  Good.  I'll
probably send the above to all people asking for subscriptions just to make
sure they know what's what, so don't be offended if you get one.

	Whew!  Well, with that done, I should mention that anyone can ask for
a copy of the contact list.  If you want to subscribe to the RT News, hardcopy
edition, that's another coastline entirely - contact Andrew Glassner at
glassner.pa@xerox.com for details, as he's the editor of that one.  The email
and hardcopy journals are mostly non-overlapping, so it's worth your while to
get both if you're seriously interested in the subject.

	I can't afford to send out the back issues of the email RT News, but
they are available via anonymous FTP from:

	cs.uoregon.edu - in /pub, also has lots of other ray tracing stuff, etc
	freedom.graphics.cornell.edu - in /pub, also has Xcu menu creator

	One other resource: I've been updating Paul Heckbert's ray tracing
bibliography while he's been busy at grad school.  If you would like the latest
copy of this list, or have anything to add or change in it, just write me.  I
also will be posting it to the two ftp sites realsoonnow.

-------------------------------------------------------------------------------

A SIGGRAPH Report, by Eric Haines

	Another SIGGRAPH has come and gone, and I had a good time.  My only
frustration was meeting many researchers for a minute and not getting to talk
with them.  I noticed that although the ray tracing session had but three
papers, there were about nine others throughout the proceedings that used ray
tracing techniques in various ways.  Ray tracing (and ray casting) as a
graphics tool has come into its own.

	A few more ray tracers are out on the market, such as "Sculpt 3D" by
Byte by Byte for the Macintosh and Amiga and "LazerRays" by Lazerus.  Hewlett
Packard is now shipping radiosity and ray tracing software bundled in with
every high-end graphics workstation.  Intergraph has been getting a fair bit
of airplay out of their new ray tracing package, though I haven't gotten
details yet.  Alias should be offering a ray tracer sometime soon.

	Ray tracing researchers got together informally for a "ray tracing
roundtable" for an hour and some.  Like last year, it was a large gathering,
with around 50 people attending.  We went around the group, with each person
giving a brief introduction.  I took an informal poll on a few questions,
Andrew noted that the ray tracing book was out, and asked for contributions to
"Graphics Gems" (more later), then we broke up and talked about this and that.

	Given the size of the gathering, it was a bit frustrating:  there are
all these people that I've wanted to meet and talk with in the same room, and
after half an hour they're all gone and I've spoken with only a few.
Basically, the roundtable meeting is too big for my tastes.  One possibility
that you all might consider is to invite people in your own special interest
to a lunch or dinner.  For example, I went to a pleasant "global illuminators"
lunch this SIGGRAPH, where there were about twelve people - a good size.

	Oh, some nice books came out this SIGGRAPH.  I'll plug the ray tracing
book later.  Others worth note (i.e. I bought them) are _Mathematical Elements
for Computer Graphics, Second Edition_ by Rogers and Adams, which has been
expanded to more than twice its original length, and _The RenderMan Companion_
by Steve Upstill, which looks like a nice book no matter what your feelings on
RenderMan itself.  I'm looking forward to the much expanded second edition of
Foley & Van Dam's _Fundamentals of Interactive Computer Graphics_, which should
be out early next year.

-------------------------------------------------------------------------------

Ray Tracing Poll, from the roundtable discussion


At the roundtable people were polled on a few questions.

1.  What efficiency schemes have you used?

	Grids			   - 20
	Octree/BSP		   - 26 (4 were specifically BSP)
	Bounding Volume Hierarchy  - 22
	Hybrid			   - 17
	Greater than 3D (4D or 5D) - 13
	Other			   -  5 (what were these, anyway?)

2.  How many processors have you used?

	One processor		   - 21
	Multiprocessor		   - 29
	More than 10 processors	   - 21
	More than 100 processors   -  4

	The most processors used was 1024 (2 people).

3.  How long do you usually wait for a picture?

	Less than a minute	  -  1
	Less than ten minutes	  -  6
	Less than an hour	  - 13
	Less than ten hours	  - 25

4.  What is your favorite background color?

	UNC "Carolina Blue"	  - 12
	Black			  - 15

-------------------------------------------------------------------------------

_An Introduction to Ray Tracing_, Announcement and Errata


	Well, the book is finally out.  It is edited by Andrew Glassner, and
sells for $49.95 from Academic Press.  It includes sections by Andrew, Pat
Hanrahan, Rob Cook, Paul Heckbert, Jim Arvo, Dave Kirk, and myself.  The book
is essentially the same as the 1988 SIGGRAPH Course Notes, with the addition
of some figures and images and some minor corrections.  Incidentally, if you
have the book and find any glaring errors, please notify Andrew Glassner;
enough copies sold at SIGGRAPH that another edition will be printed soon.
Considering that the authors are scattered around the country, the book flows
fairly well and covers most areas of ray tracing theory and practice.  Also,
there's finally a text to suggest when someone wants to know whether a point
is inside a polygon (Sedgewick's _Algorithms_ didn't solve it efficiently).
If you don't have a copy, buy a few and make us all fabulously wealthy beyond
our wildest dreams (i.e. then I could afford the Sears' hibachi instead of
the K-Mart version).

	Incidentally, the 1989 "Intro to RT Course Notes" included the book
and some additional notes.  The notes included reprints of classic articles,
the code for the SPD package (God forbid that anyone have to type it in,
though), and an article I whipped off called "Tracing Tricks", which I'll post
sometime soon, maybe during a slow month.

	Some bugs have already been reported to me about my section, so I'll
pass them on:


>From Tim O'Connor:

If you'll flip to page 66 you'll note a little algorithm for ray/box testing.
About halfway through you say:

	If T1 > Tnear, set T1 = Tnear
	If T2 > Tfar, set T2 = Tfar

Then you never use T1 or T2 again in that loop.  The example bears out my
suspicion that one should actually "set Tnear = T1" and "set Tfar = T2".
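With that correction applied, the whole slab loop might read as follows (an
untested sketch, with my own names rather than the book's pseudocode):

```c
/* Slab test for a ray against an axis-aligned box.  The running overlap
   interval is (tnear, tfar), clipped by each axis' (t1, t2).
   Returns 1 and the entry/exit distances on a hit, 0 on a miss. */
int ray_box(const double ro[3], const double rd[3],
            const double blo[3], const double bhi[3],
            double *tnear_out, double *tfar_out)
{
    double tnear = -1e30, tfar = 1e30;
    int i;

    for (i = 0; i < 3; i++) {
        if (rd[i] == 0.0) {                    /* ray parallel to the slab */
            if (ro[i] < blo[i] || ro[i] > bhi[i])
                return 0;
        } else {
            double t1 = (blo[i] - ro[i]) / rd[i];
            double t2 = (bhi[i] - ro[i]) / rd[i];
            if (t1 > t2) { double tmp = t1; t1 = t2; t2 = tmp; }
            if (t1 > tnear) tnear = t1;        /* "set Tnear = T1" */
            if (t2 < tfar)  tfar = t2;         /* "set Tfar = T2"  */
            if (tnear > tfar || tfar < 0.0)    /* box missed, or behind */
                return 0;
        }
    }
    *tnear_out = tnear;
    *tfar_out = tfar;
    return 1;
}
```

The two marked lines are the ones the erratum is about: with the assignments
reversed, tnear and tfar keep their initial infinite values and every ray
appears to hit every box.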


>From Ehood Baratz <{world}!hpbbn!hputlaa!ehood>:

In chapter 2 page 52 on formula (C9) it is written:

		If Pn * Rd  < 0
		(in other words, if Vd > 0)
		   then ....

	I think it should be "if Vd < 0" because Pn*Rd is Vd (look at page 51
after (C4)).
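For reference, here is the computation that the corrected test sits in, as I
read the chapter (an untested sketch; the function and variable names are
mine, not the book's):

```c
/* Ray/plane intersection for the plane Pn . P + d = 0 (Pn a unit normal).
   Vd = Pn . Rd and V0 = -(Pn . Ro + d), so t = V0 / Vd.
   Returns 1 and the distance *t along the ray on a hit, 0 otherwise. */
int ray_plane(const double ro[3], const double rd[3],
              const double pn[3], double d, double *t)
{
    double vd = pn[0] * rd[0] + pn[1] * rd[1] + pn[2] * rd[2];
    double v0 = -(pn[0] * ro[0] + pn[1] * ro[1] + pn[2] * ro[2] + d);

    if (vd >= 0.0)      /* parallel, or the plane faces away from the ray; */
        return 0;       /* i.e. continue only "if Vd < 0", per the erratum */
    *t = v0 / vd;
    return (*t > 0.0);  /* the hit must be in front of the ray origin */
}
```

With the "Vd >= 0" rejection in place, one-sided planes whose normal points
along the ray direction are culled as back faces, which is what the corrected
condition says.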

-------------------------------------------------------------------------------

_Graphics Gems_ Call for Contributions, by Andrew Glassner


CONTRIBUTE TO A NEW BOOK FOR COMPUTER GRAPHICS PROGRAMMERS!

Contributions are solicited for a new book, tentatively titled GRAPHICS GEMS.
This book will be a collection of short notes by and for computer graphics
programmers and researchers.  The basic idea is to create a book similar to
the CRC Mathematics Handbook, only tailored to the subject of computer
graphics.

The motivation for Graphics Gems comes from a desire to document and share the
many techniques that are a necessary part of every graphics programmer's
toolbox, yet don't appear in the standard literature.  Each Gem represents a
solution to a problem:  not necessarily a research result, nor even a deep
observation, but simply a good, practical technique for dealing with a typical
computer graphics programming problem.  A typical Gem may be a code fragment,
a short analysis of an interesting problem, a bit of mathematics, a data
structure, or a geometric relationship.

Here are some appropriate topics for Gems - this list contains only a few
suggestions for topics that might be covered by interesting Gems, and is far
from complete:

Two Dimensions:  Fill, smooth, blur, dither, 2d plots, line drawing, curve
drawing, bounding boxes, overlapping boxes, efficient bitblit (example:
automatic selection of tick marks on a plot).

Three Dimensions:  Scan conversion, highlight detection, shading, isosurfaces,
ray intersection, form factor calculation, visibility, texturing,
transformations, deformations, smoothing, 3d plotting, parameterizations,
surface subdivision, texturing functions, bounding boxes (example:  fast
shading formulae).

Graphics:  Colormap hacking, object manipulations, sampling, filtering,
optics, interaction techniques, modelling primitives, efficient rendering,
edge detection (example:  reconstruction from stochastic sampling).

General Math:  Algebra, calculus, geometry (e.g.  why normals don't move under
the same transformations as surfaces).

Programming:  Numerical integration, root finding, root polishing, data
structures (objects), data structures (programs), inner loops, interactive
debugging, graphical debugging, color map hacking, over- and under-flow
detection and correction, unusual functions (e.g.  polynomial root-finding).

Most Gems will be about 1 or 2 final printed pages (4 or 5 pages of
typewritten, double-spaced manuscript), though if you choose to include source
code the listings may run longer.  Rough figures and equations will be
professionally redrawn by the publisher.  Each contributor will have a chance
to review the final copy for his or her Gems before publication.  Each Gem
will be clearly identified with the name and affiliation of its
contributor(s).

If you have developed a nice solution to a problem that others might
encounter, be it a data structure, an inner loop, or even an algebraic
simplification that makes your programs shorter and more robust, then it would
probably make a splendid Graphics Gem.  Write it up and send it to the editor
at the address below, either in hardcopy or electronic mail.  Acceptable
formats are plain text, nroff, TeX, MacWrite, and Microsoft Word (Macintosh).
I would like to receive a rough draft of all Gems by November 1989.

Contribute and share your favorite tricks and techniques with the rest of the
community!  Send your Graphics Gems to:

Andrew Glassner
Editor, Graphics Gems
Xerox PARC
3333 Coyote Hill Road
Palo Alto, CA  94304  USA
email: glassner.pa@xerox.com
phone: (415) 494 - 4467

-------------------------------------------------------------------------------

New People and Address Changes


There were many new people at the roundtable at SIGGRAPH.  In the interest of
brevity, only new people who've sent an intro are listed.  If any of you (or
anyone else out there) would like to send in a paragraph or two of
introduction, it will be printed here.  You can always request the latest
contact list from me.

--------

We just received the 7 editions of "The Ray Tracing News" you posted recently
on USENET.  This is exactly the kind of information we need in our research
project on rendering software.  Not just the latest research news, but also
discussion of the results.

So PLEASE put us on your mailing list!

Just let me describe our past and future work to justify the costs for you.
We are a working group at the Institute for Interactive Graphic Systems at the
University of Tuebingen (West Germany) and are part of the faculty of physics.
The main thrust of our future research will be combining ray tracing and
radiosity.  We come from the background of geometric modelling and graphics
hardware (specifically, real-time Phong shading).  I will start working on
this project in 4Q89 as my PhD work.

If you can give any additional information that might be interesting to us
(including other research going on in this area) please let us know.

Surface mail: Wilhelm-Schickard-Institut fuer Informatik
              Graphisch Interaktive Systeme
              z.Hd. Philipp Slusallek
              Auf der Morgenstelle 10
              D-7400 Tuebingen
Email       : philipp@infotue.uucp
Tel         : x49 7071 296356

--------

(internet)  zmel02@trc.amoco.com
(usenet)    uunet!apctrc!mlee
(snail mail)    Mark Lee
                Amoco Production Company
                Tulsa Research Center
                PO Box 3385
                Tulsa, OK  74102
        (phone) (918)-660-3556  or (918)-660-3000 for operator

A short introduction.  My interests lie in the areas of illumination models,
faster ray tracing, photorealism, numerical and statistical methods.  My real
work is more in the area of rendering algorithms and scientific visualization.
The areas that I enjoy working in, I pursue whenever I can squeeze them in.

Incidentally, did you find a copy of the article by Levner, Tassinari, and
Marini, "A Simple Method for Ray Tracing Bicubic Surfaces"?  [no] Is this in a
textbook or such?  How could I find a copy of this paper?  [Can anyone help?
Has anyone seen it, and can at least summarize?]

Some questions to post to the mailing list...

Did the paper "The Ray Tracing Kernel" by Jim Arvo, Dave Kirk and Olin Lathrop
ever get published?

Does anyone have a copy of Blinn's notes from the Siggraph '84 tutorial notes
on "The Mathematics of Computer Graphics" that they could send to me?  [hey,
I'd like one, too: I was a student volunteer at this course, and they ran out
of course notes and I never got a copy.]

--------

I am now doing a dissertation on realistic rendering for complex scenes, with
extensions for nonisotropically scattering gasses.  I am also part of a
project that uses raytracing for visualization of volumes of scalar data.

Peter Shirley
University of Illinois at Urbana-Champaign


        shirley@cs.uiuc.edu
	{pur-ee,convex,inhp4}!uiucdcs!shirley
        816 E. Oakland, #206
        Urbana, IL  61801
        (217) 328-6494

--------

Mike Sweeney - rendering, splines, object-oriented languages
Softimage Inc.
3510 Blvd St. Laurent, Suite 214
Montreal, Canada H2X 2V2
(514) 845-1636
[write to Kaveh Kardan at larry.mcrcim.mcgill.edu!vedge!kaveh to reach Mike]

Hello net-land, it's been a while.  Eric says he wants an introduction so here
goes....

I've been writing renderers for the last 6 years.  The Waterloo CGL Raytracing
Package was your basic naive ray tracer.  It did contain an iterative solution
to ray/bspline intersection, but I no longer believe in this approach - it's
much faster to break the spline into triangles.  My second attempt, the Alias
renderer, was a ray caster.  It was slow, period.

Then came Abel, where I started playing with octrees.  The code was about
twice as fast as any implementation of Kay/Kajiya slabs I could come up with,
and at least ten times as fast as my implementation of Fujimoto's algorithm.
After Abel folded, I wound up at Softimage.  I added a modified Watkins front
end, and rewrote the tracer to make the maximum use of empty space.

I've not tried to implement the Kirk/Arvo algorithm yet, but have an
instinctive distrust of anything that mixes preprocessing with the rendering
(the cost will go up with the sampling rate).  What is the group's experience
with this method?

--------

# Jerry Quinn - PhD research in raytracing at Dartmouth
# Dartmouth College
# Department of Math & Comp Sci
# Hanover, NH 03755
# (603) 646-2565
quinn@sunapee.dartmouth.edu

I am a PhD student in computer science at Dartmouth and am in my second year.
I'm currently interested in increasing efficiency in raytracing.  I'm also
looking at parallelism in RT, radiosity, and the combination of both.

--------

I have taken a job at Princeton, and thought I'd send you my new address for
the purposes of the RT News.  

Mark VandeWettering (markv@acm.princeton.edu)
c/o Program in Applied and Computational Math
Fine Hall
Princeton University, Princeton NJ, 08544

Another question you might know off the top of your head:  Alvy Ray Smith has
written a tech memo on Volumetric Rendering at Pixar.  Do you have his e-mail
address, or otherwise know how I might request Pixar Tech Memos?  [does anyone
else know?  - EAH]

The ray tracing archive on skinner.cs.uoregon.edu still will remain there.  I
have permission from the higher-ups at the U of O to keep it there.  If I lose
that permission at some future date, it will probably move to Princeton
somewhere....

--------

Pat Hanrahan has also moved to Princeton, and is now at:

	hanrahan@princeton.edu

--------

And one more for Princeton:

alias	marshall_levine mplevine@phoenix.princeton.edu

[I asked about the ray tracing demo at SGI:]

The ray-tracing demo that you saw on an IRIS at SIGGraph is called Flyray.  It
was written by Benjamin Garlick in June-August 1988.  The underlying voxel
ray-tracer was written by Paul Haeberli in July 1983.  The demo is standard
around here; it should be on every demo machine.  I would guess that it would
be in the /usr/src/cmd/demo directory on most demo machines, but it depends on
that particular machine's configuration.  It is easily accessible through
Buttonfly, the SGI menu program.  Buttonfly displays menus of demos on 3D
buttons that twist, flip, and fly towards the screen when selected (with text
on them!).  You will probably find Flyray on Buttonfly under the
SGI/CPU-Intensive button (it will be listed as Ray-Tracer or Flyray).  If you
have any questions or want a description of the demo, just let me know and
I'll send you any information that I can dig up.  While you're at SIGGraph,
you should take a look at the newest version of Flight (By Rob Mace, one of
the guys in my group) on the IRIS 4D computers.  Take a look at the F-14.
I'll let it be a surprise, and trust me, you'll be very surprised!

--------

Please change my email address to palmer@ncsc.org (was palmer@ncifcrf.gov).

Thomas C. Palmer		North Carolina Supercomputing Center
Cray Research, Inc.		Phone: (919) 248-1117
PO Box 12732			Arpanet: palmer@ncsc.org
3021 Cornwallis Road
Research Triangle Park, NC
27709

--------

Another address change:

# Mark Reichert
# Program of Computer Graphics
# 120 Rand Hall
# Cornell University
# Ithaca, NY 14853
alias   mark_reichert   mcr@venus.graphics.cornell.edu

-------------------------------------------------------------------------------

Bugs in MTV's Ray Tracer, by Eric Haines


	Craig Kolb mentioned that he was having problems with Mark
VandeWettering's ray tracer.  Since I've been touting it all this time (but
having run it only once), I felt obligated to find the bugs.  There were two:
one was that the up vector was sometimes not perpendicular to the view vector,
which results in the "balls" image having a "comin' at ya" kind of distortion
(kind of interesting, but incorrect).  The other was that the frustum width
was affected by the distance to the hither, which it should not be.  One
result is that the "mountain" scene would get zoomed in on something fierce.
Finally, I changed the statistics output a bit to give information that I like
(e.g.  the number of reflected and refracted rays actually shot are counted
separately).  Anyway, at least apply the fixes to main.c and the first part of
screen.c.


diff old/main.c new/main.c
109c112
< 	printf("number of rays cast:	   %-6d\n", nRays);
---
> 	printf("number of eye rays:	   %-6d\n", nRays);
diff old/screen.c new/screen.c
50d49
< 	VecNormalize(upvec) ;
66a66,72
> 	 * Make sure the up vector is perpendicular to the view vector
> 	 */
> 
> 	VecCross(viewvec, leftvec, upvec);
> 	VecNormalize(upvec);
> 
> 	/*
71c77
< 	frustrumwidth = (view -> view_dist) * ((Flt) tan(view -> view_angle)) ;
---
> 	frustrumwidth = ((Flt) tan(view -> view_angle)) ;
129c135
< 			Trace(0, 1.0, &ray, color);
---
> 			Trace(0, 1.0, &ray, color, &nRays);
173c179
< 			Trace(0, 1.0, &ray, color);
---
> 			Trace(0, 1.0, &ray, color, &nRays);
238c244
< 				Trace(0, 1.0, &ray, color);
---
> 				Trace(0, 1.0, &ray, color, &nRays);
diff old/shade.c new/shade.c
112d111
< 		nReflected ++ ;
115c114,115
< 		Trace(level + 1, surf -> surf_ks * weight, &tray, tcol);
---
> 		Trace(level + 1, surf -> surf_ks * weight, &tray, tcol,
> 			&nReflected);
120d119
< 		nRefracted ++ ;
125c124,125
< 		Trace(level + 1, surf -> surf_kt * weight, &tray, tcol) ;
---
> 		Trace(level + 1, surf -> surf_kt * weight, &tray, tcol,
> 			&nRefracted) ;
diff old/trace.c new/trace.c
19c19
< Trace(level, weight, ray, color) 
---
> Trace(level, weight, ray, color, nr) 
23a24
>  int *nr ;
34c35
< 	nRays ++ ;
---
> 	(*nr) ++ ;

-------------------------------------------------------------------------------

Bug in SPD, from Pete Segal <pls@pixels.att.com>

Pete Segal reported a particularly subtle bug in my Standard Procedural
Databases package.  Turns out a "w" component needs to be initialized.  If you
have a machine that initializes everything to 0, you'd never notice it.  The
patches to the README and lib.c files are below.  I should be putting the
latest and greatest version on cs.uoregon.edu soon.


diff old/README new/README
4,5c4,5
< Version 2.5, as of 10/19/88
<     address: 3D/Eye, Inc., 410 East Upland Road, Ithaca, NY 14850
---
> Version 2.6, as of 8/28/89
>     address: 3D/Eye, Inc., 2359 N. Triphammer Rd, Ithaca, NY 14850
22a23,24
> Version 2.6 released August, 1989 - lib_output_cylcone fix (start_norm.w was
>     not initialized).
68a71,72
> 
>     The SPD package is also available via anonymous FTP from cs.uoregon.edu.
diff old/lib.c new/lib.c
5c5
<  * Version:  2.2 (11/17/87)
---
>  * Version:  2.6 (8/28/89)
437a438
> 	start_norm.w = 0.0 ;

-------------------------------------------------------------------------------

Solid Textures Tidbit, by Roman Kuchkuda <ucsd.edu!kuchkuda%megatek.UUCP>


One piece of research that you might mention in RTN:

I had used procedural wood textures for a while and wondered whether you could
use "real" wood textures.  Through some connections at the UNC hospital I had
a block of pine CT scanned.

Sure enough, the grain showed up very well.  The resulting pictures were more
"interesting" than procedural texture.  The structure is more complex than the
procedural model.

There were a few problems though:
1) There is a lot of data in the CT scans and memory paging slows down the 
   ray tracing like crazy.

2) Reality doesn't look nearly as "real" as a neat clean procedural model
   of it does.

The results were so mixed that I never tried to get this published anywhere.

-------------------------------------------------------------------------------

Sundry Comments, by Jeff Goldsmith <jeff@Iago.Caltech.Edu>

About the ray-tracing-as-a-sleazy-sales-gimmick idea:

    The whole point about my suggestion to include a ray tracer as part of a
CG system is that it is, in fact, mostly useless, and most buyers don't know
that.  Thus, it works as a great gimmick, which is exactly "promising someone
something that they think they want, but don't really."  No one will use it
after finding out how long it takes, so it doesn't have to have as many
features as the rest of the system.  Yes, this is a very cynical view, but
sales is a non-technical problem, and (at least around here) expensive systems
are usually purchased by people who know nothing about them, so it ought to be
an effective technique with not a whole lot of resource expenditure.  Besides,
most CG programmers enjoy hacking ray tracers so you boost morale at the
company at the same time.  ...by the way, this is not entirely a joke, but...

    Euclidean distance calculation:  Iterative approaches work great on this
problem.  A short summary (and code for one) is in Tom Duff's SIGGRAPH '84
Tutorial on Numerical Analysis (a terrific article, by the way, as is his
spline piece in the same place) most of which he got from Moler and Morrison's
"Replacing Square Roots by Pythagorean Sums" IBM Journal of Research and
Development, 27/6, Nov. 1983.  The method he describes has cubic convergence,
so it works well to unroll the loop and do a set number of iterations.  4
iterations (2/, 4*, 2+) yield 62 digits of precision.

-------------------------------------------------------------------------------

Texture Mapping Question, by Susan Spach <hpfcla!hplabs!spach>

What are good techniques for implementing texture mapping within raytracing?
Is point sampling with a good sampling strategy most commonly used?  Has
mipmapping been extended to secondary rays?  [I'm interested, too]


======== USENET cullings follow ===============================================

Ray Traced Image Files, Prem Subramanyan

Reply-To: prem@geomag.UUCP (Prem Subramanyan)
Organization: Florida State University Computing Center


We have quite a collection of rasterfiles of all sizes here at geomag.  You
can use the fbm package by Michael Mauldin to convert from Sun rasterfiles to
GIF files.  The main reason why I have chosen to keep them as rasterfiles,
rather than convert them to GIF is that on the Sun the rasterfile viewers are
neater.  In any case, anonymous ftp to geomag.gly.fsu.edu (128.186.10.2), cd
to pub/pics, and download any pics you want.  In the future, once I get the
latest fbm package, I will post it there as well.  We have a good collection
of files ray-traced on the eta-10g with QRT by Steve Koren.  The longest time
taken (for blue.rst.Z) was 1 1/2 hrs.  The small ones (640x400) went in
usually under 15 minutes.  In any case, they are quite interesting.

-------------------------------------------------------------------------------

Image Collection, by Paul Raveling

From: raveling@venera.isi.edu (Paul Raveling)
Organization: USC-Information Sciences Institute

With gobs of satisfaction, I'd like to announce the addition of some pictures
of my own to the "Img" collection available on venera.isi.edu.  Along with
this go sincere thanks to Nic Lyons and HPLabs for the opportunity to digitize
these photos.

venera's pub directory now contains 2 compressed files to facilitate retrieval
of images in this collection and of the imglib code.  These are:

	pub/img_ls-RAlF.Z       Directory listing of everything
				in the [~ftp/] images hierarchy

	pub/img.tar.Z           Code for imglib and the various
				simple programs using it.

The "root" directory, [~ftp/]images, and each of the four subdirectories
containing images has its own README file with additional info.

Credits for 2 images that weren't from my own pictures are:

	Window Rock:    My wife spotted and captured the natural
		spiral pattern at Window Rock, Arizona.

	Solings:        This is from the 1982 International Soling
		Association calendar.  I used to own and race one
		of these, but wouldn't risk taking a camera aboard.


Here's a subject summary of the new images in images/color_mapped:

aspens          Aspens in San Juan Mts, between Ouray & Silverton
blue_tigers     Blue Angels, in F11F Tigers
ds_train        Durango & Silverton narrow-gauge train
elk             An elk in Banff National Park
ghost_house     House in ghost town, somewhere between Ouray and Silverton
graycard        Kodak gray card, grayscale, and color patches
halfdome        Halfdome in polarized infrared light (Yosemite Nat'l Park)
harvey          Harvey, an African lion who lived at the L.A. Zoo
high_line       Durango & Silverton narrow gauge train on the high line
model           Model at one of Frank's photo day shows, probably in 1978
model_fullscale Model at one of Frank's photo day shows, probably in 1978
old497          Old 497:  One of Durango & Silverton's narrow gauge steamers
porcupine       Porcupine @ ranger cabin, Mosquito Flats, Banff
puff1 - puff2   Puff (the cat, not the dragon)
san_juans       San Juan Mountains, between Ouray and Silverton
smurf1 - smurf4 Whitesmith, aka "The Smurf"  [another cat]
snake           Non-digital snake (not an adder)
solings         Solings, from 1982 International Soling Association calendar
stream          Stream in San Juan Mountains, between Ouray and Silverton
tbirds          Thunderbirds
window_rock     Window Rock
x-1e            X-1E at NASA Ames Dryden Flight Research Center, Edwards AFB

-------------------------------------------------------------------------------

MTV-Raytracer on ATARI ST - precision error report, by Dan Riley

From: riley@batcomputer.tn.cornell.edu (Daniel S. Riley)
Organization: Cornell Theory Center, Cornell University, Ithaca NY


In article <1171@laura.UUCP> wagener@unidocv.UUCP (Roland Wagener) writes:
>I have ported the MTV-Raytracer on the ATARI ST using the Turbo-C-
>Compiler. The Program works fine and it uses about 3 hours CPU-Time
>to create the Balls-Picture in 320x200 Resolution.

>But there is a bug somewhere in the program. There are white spots in
>all reflecting surfaces. This bug does not appear on a IBM-PC programmed
>with Zortech-C++. But the PC needs 5 hours for a 200x200-Picture ...

This sounds like a floating point precision problem.  I've been playing
with a number of ray tracers, including MTV, on my Amiga.  I've seen
white spots and other sorts of splotches if I use the Motorola ffp format
floating point routines, which are single precision (32 bit) only.  They 
go away if I use IEEE math libraries (all calculations done with 64 bit
doubles).  3 hours cpu for a 320x200 picture on an 8 MHz 68000 sounds like
single precision to me, but I don't know the ST or Turbo-C that well.

I suppose there must be papers on controlling round-off errors in
ray tracing algorithms, but none of the ray-tracers I've seen make any
special efforts in that regard.  Of course, all the ones I have source
code to are meant to be clean and simple, not fast and convoluted...:-)

-Dan Riley (riley@tcgould.tn.cornell.edu, cornell!batcomputer!riley)
-Wilson Lab, Cornell U.

-------------------------------------------------------------------------------

Question on Ray Tracing Bicubic Parametric Patches, by Robert Minsk

From: ccoprrm@pyr.gatech.edu.UUCP (Robert E. Minsk)
Organization: Georgia Institute of Technology

  Does anyone have a routine, or know of any pointers to articles, for finding
an intersection between a ray and a bicubic parametric patch other than a
recursive subdivision algorithm?  I am trying to speed things up a bit in my
ray tracer.

-------------------------------------------------------------------------------
END OF RTNEWS

From m-cohen@cs.utah.edu Wed Sep 20 17:53:33 1989
Return-Path: <m-cohen@cs.utah.edu>
Received: from cs.utah.edu by turing.cs.rpi.edu (4.0/1.2-RPI-CS-Dept)
	id AA17625; Wed, 20 Sep 89 17:52:40 EDT
Received: by cs.utah.edu (5.61/utah-2.4-cs)
	id AA13301; Wed, 20 Sep 89 15:41:56 -0600
Date: Wed, 20 Sep 89 15:41:56 -0600
From: m-cohen@cs.utah.edu (Michael F Cohen)
Message-Id: <8909202141.AA13301@cs.utah.edu>
To: EHS0630@RITVAX.edu, andreww@dgp.toronto.edu,
        beauty.graphics.cornell.edu!wbt@cs.utah.edu,
        cs.utah.edu!esunix!tmalley@cs.utah.edu, dfr@cad.usna.mil,
        dmarsh@apple.com, esl0422@ultb.isc.rit.edu, esp@crabcake.cs.jhu.edu,
        gjward@lbl.gov, gray@rhea.CRAY.COM@uc.msc.umn.edu,
        image.trc.amoco.com!zmel02@cs.utah.edu, jp@apple.com,
        kyriazis@turing.cs.rpi.edu, lister@dg-rtp.dg.com, litwinow@apple.com,
        love.graphics.cornell.edu!phw@cs.utah.edu, lroy@sgi.com,
        markv@acm.princeton.edu, mcvax.cwi.nl!dutrun!frits@cs.utah.edu,
        mcvax.cwi.nl!ecn!jack@cs.utah.edu,
        mcvax.cwi.nl!inria!irisa!priol@cs.utah.edu, meyer@ifi.unizh.ch,
        mike@brl.mil, mjn@cs.brown.edu,
        mohta@titcce.cc.titech.junet@utokyo-relay.csnet@RELAY.CS.NET,
        mplevine@phoenix.princeton.edu, musgrave-forest@yale.edu,
        palmer@ncsc.org, pls@pixels.att.com, poulin@dgp.toronto.edu,
        pre@cs.hut.fi, pss@sgi.com, scofield@apollo.com,
        shaffer@vtopus.cs.vt.edu, shirley@cs.uiuc.edu, spach@hplabs.hp.com,
        spencer@tut.cis.ohio-state.edu, stadnism@clutx.clarkson.edu,
        stepoway@smu.edu, subramn@cs.utexas.edu,
        sun!gould!rti!ndl!jtw@cs.utah.edu,
        sunapee.dartmouth.edu!quinn@cs.utah.edu,
        tcgould.tn.cornell.edu!lytle@cs.utah.edu, tuck@cs.unc.edu,
        turk@cs.unc.edu, ucsd.ucsd.edu!megatek!kuchkuda@cs.utah.edu,
        webber@aramis.rutgers.edu, westover@cs.unc.edu,
        wisdom.graphics.cornell.edu!toc@cs.utah.edu
Subject: Ray Tracing News
Status: R

 _ __                 ______                         _ __
' )  )                  /                           ' )  )
 /--' __.  __  ,     --/ __  __.  _. o ____  _,      /  / _  , , , _
/  \_(_/|_/ (_/_    (_/ / (_(_/|_(__<_/ / <_(_)_    /  (_</_(_(_/_/_)_
             /                               /|
            '                               |/

			"Light Makes Right"

			September 20, 1989
		        Volume 2, Number 6

Compiled by Eric Haines, 3D/Eye Inc, 2359 Triphammer Rd, Ithaca, NY 14850
    NOTE ADDRESS CHANGE: wrath.cs.cornell.edu!eye!erich
    [distributed by Michael Cohen <m-cohen@cs.utah.edu>, but send
    contributions and subscriptions requests to Eric Haines]
All contents are US copyright (c) 1989 by the individual authors
Archive location: anonymous FTP at cs.uoregon.edu (128.223.4.1), /pub/RTNews

Contents:
    Introduction
    New People and Address Changes
    Q&A on Radiosity Using Ray Tracing, Mark VandeWettering & John Wallace
    Dark Bulbs, by Eric Haines
    MTV Ray Tracer Update and Bugfix, by Mark VandeWettering
    DBW Ray Tracer Description
    ======== USENET cullings follow ========
    Wanted: Easy ray/torus intersection, by Jochen Schwarze
    Polygon to Patch NFF Filter, by Didier Badouel
    Texture Mapping Resources, by Eric Haines, Prem Subrahmanyam,
	Ranjit Bhatnagar, and Jack Ritter

-------------------------------------------------------------------------------

Introduction

	There are a lot of new people, with some interesting fields of study.
There's been a lot of talk about texture mapping and the DBW ray tracer on the
net.  This discussion will probably continue into next issue, but I felt Jack
Ritter's posting a good way to end it for now.  I've also been toying with
texturing again, making my version of "Mount Mandrillbrot" (fractal mountain
with everyone's favorite beasty textured onto it), which some clever person
invented at the University of Waterloo (I think) some years ago (does anyone
know who?).  There are also other useful snippets throughout.

	However, one major reason that I'm flushing the queue right now is
that the node "hpfcrs" is disappearing off the face of the earth.  So, please
note my only valid address is the "wrath" path at the top of the issue.
Thanks!

-------------------------------------------------------------------------------

New People and Address Changes


Panu Rekola, pre@cs.hut.fi

To update my personal information in your files:

Surface mail:	Panu Rekola
		Mannerheimintie 69 A 7
		SF-00250 Helsinki, Finland
Phone:		+358-0-4513243 (work), +358-0-413082 (home)
Email:		pre@cs.hut.fi
Interests:	illumination models, texture mapping, parametric surfaces.

You may remove one of the names in your contact list.  Dr. Markku Tamminen
died in the U.S. while returning home from SIGGRAPH.  How his project will go
on is still somewhat unclear.

--------

Andrew Pearce, pearce@alias

I wrote my MS thesis on Multiprocessor Ray Tracing, then moved to Alias where
I sped up Mike Sweeney's ray caster.  I've just completed writing the Alias
Ray Tracer using a recursive uniform subdivision method (see Dave Jevans paper
in Graphics Interface '89, "Adaptive Voxel Subdivision for Ray Tracing") with
additional bounding box and triangle intersection speed ups.

Right now, I'm fooling around with using the guts of the ray tracer to do
particle/object collision detection with complex environments, and
particle/particle interaction with the search space reduced by the spatial
subdivision.  (No, I don't use the ray tracer to render the particles.)

In response to Susan Spach's question about mip mapping:  we use mip maps for
our textures, and we get the sample size from a "cone" size parameter which is
based on the field of view, aspect ratio, distance to the surface and angle of
incidence.  For secondary rays this size parameter is modified based on the
tangents to the surface and the type of secondary ray it is (reflection or
refraction).  This may be difficult to do if you are not ray tracing surfaces
for which the tangent information is readily available (smooth shaded
polygonal meshes?).

- Andrew Pearce
- Alias Research Inc., Toronto, Ontario, Canada.
- pearce%alias@csri.utoronto.ca   |   pearce@alias.UUCP
- ...{allegra,ihnp4,watmath!utai}!utcsri!alias!pearce

--------

Brian Corrie, bcorrie@uvicctr.uvic.ca

	I am a graduate student at the University of Victoria, nearing the
completion of my Masters degree.  The topic of my thesis is producing
realistic computer generated images in a distributed network environment.
This consists of two major research areas:  providing a distributed (in the
parallel computing sense) system for ray tracing, as well as a workbench for
scene description, and image manipulation.  The problems that need to be
addressed by a system for parallel processing in a distributed loosely coupled
system are quite different from those addressed by a tightly coupled parallel
processor system.  Because of the (likely) very high cost of communication in
a distributed processing environment, most parallel algorithms currently used
are not feasible (due to the high overhead).  The gains of parallel ray
tracing in a distributed environment are:  the obvious speedup by bringing
more processing power to bear on the problem, the flexibility of distributed
systems, and the availability of the resources that will become accessible as
distributed systems become more prominent in the computer community.

	Whew, what a mouthful.  In a nutshell, I am interested in:  ray
tracing in general, parallel algorithms, distributed systems for image
synthesis (anyone know of any good references?), and this new fangled
radiosity stuff.

--------

Joe Cychosz

    Purdue University CADLAB
    Potter Engineering Center
    W. Lafayette, IN  47906

    Phone: 317-494-5944
    Email: cychosz@ecn.purdue.edu

My interests are in supercomputing and computer graphics.  Research work is
Vectorized Ray Tracing.  Other interests are:  Ray tracing on MIMD tightly
coupled shared memory machines, Algorithm vectorization, Mechanical design
processes, Music synthesis, and Rendering in general.

--------

Jerry Quinn
Department of Math and Computer Science
Bradley Hall
Dartmouth College
Hanover, NH 03755
sunapee.dartmouth.edu!quinn

My interests are currently ray tracing efficiency, parallelism,
animation, radiosity, and whatever else happens to catch my eye at the
given moment.

--------

Marty Barrett - octrees, parametric surfaces, parallelism.
	mlb6@psuvm.bitnet

Here is some info about my interests in ray tracing:

I'm interested in efficient storage structures for ray tracing, including
octree representations and hybrid regular subdivision/octree grids.  I've
looked at ray tracing of parametric surfaces, in particular Bezier patches and
box spline surfaces, via triangular tessellations.  Parallel implementations of
ray tracing are also of interest to me.

--------

    Charles A. Clinton
    Sierra Geophysics, Inc.
    11255 Kirkland Way
    Kirkland, WA 98033 USA
    Email: ...!uw-beaver!sumax!ole!steven!cac
    Voice: (206) 822-5200 
    Telex: 5106016171
    FAX:   (206) 827-3893

I am doing scientific visualization of 3D seismic data. To see the kind of
work that I am doing, check out:

	'A Rendering Algorithm for Visualizing 3D Scalar Fields'
	Paolo Sabella
	Schlumberger-Doll Research
	Computer Graphics, Vol. 22, Number 4 (SIGGRAPH '88 Conference Proc.)
	pp 51-58

In addition, I try to keep up with ray-tracing and computer graphics in
general. I occasionally try my hand at doing some artistic ray-tracing.
(I would like to extend my thanks to Mark VandeWettering for distributing
MTV. It has provided a wonderful platform for experimentation.)

--------

Jochen Schwarze

   I've been developing several smaller graphics packages, e.g.  a 3D
visualization of turning parts etc.  Now I'm implementing the 2nd version of a
ray tracing program that supports modular programming using a description
language, C++ vector analysis and body class hierarchy, CSG trees, texture
functions and mapping, a set of body primitives, including typeface rendering
for logos, and a network IPC component to allow several CPUs to calculate a
single image.

   My interests lie - of course :-) - in speedup techniques, and the
simulation of natural phenomena, clouds, water, etc.  Just starting with this.

Jochen Schwarze                     Domain: schwarze@isaak.isa.de
ISA GmbH, Stuttgart, West Germany   UUCP:   schwarze@isaak.uucp
                                    Bang:   ...!uunet!unido!isaak!schwarze

				    S-Mail: ISA GmbH
					    c/o Jochen Schwarze
					    Azenberstr. 35
					    7000 Stuttgart 1
					    West Germany

-------------------------------------------------------------------------------

Q&A on Radiosity Using Ray Tracing, Mark VandeWettering & John Wallace


>From Mark VandeWettering:

I am currently working on rewriting my ray tracer to employ radiosity-like 
effects.  Your paper (with Wallace and Elmquist) is very nice, and suggests
a really straightforward implementation.  I just have a couple of questions 
that you might be able to answer.

When you shoot energy from a source patch, it is collected at a specific
patch vertex.  How does this energy get transferred to a given patch for 
secondary shooting?  In particular, is the vertex shared between multiple
patches, or is each vertex only in a single patch?  I can imagine the
solution if each vertex is distinct, but have trouble with the case where
vertices are shared.  Any quick insights?

The only other question I have is:  HOW DO YOU GET SUCH NICE MODELS TO RENDER?
[We use ME30, HP's Romulus based solids modeler - EAH]

Is there a public domain modeling package that is available for suns or sgi's 
that I can use to make more sophisticated models?  Something cheap even?

[The BRL modeler and ray tracer runs on a large number of machines, and they
like having universities as users - see Vol.2 No.2 (archive 6).  According
to Mike Muuss' write-up, some department in Princeton already has a copy.

The Simple Surface Modeler (SSM) works on SGI equipment.  It was developed at
the Johnson Space Center and, since they are not supposed to make any money
off it, is being sold cheap (?) by a commercial distributor.  COSMIC, at
404-542-3265, can send you some information on it.  It also runs on a Pixel
Machine (which is what I saw it running on at SIGGRAPH 88), though I don't
believe support for this machine will be shipped.  It's evidently not
shipping yet (red tape - the product is done), but should be "realsoonnow".
More information when I get the abstract.  Does anyone else know any
resources?]

--------

Reply from John Wallace:

Computing the patch energy in progressive radiosity using ray tracing:

Following a step of progressive radiosity, every mesh vertex in the scene will
have a radiosity.  Energy is not actually collected at the mesh vertices.
What is computed at each vertex is the energy per unit area (radiosity)
leaving the surface at that location.  The patch radiosity is the average
energy per unit area over the patch.  Finally, the patch energy is the patch
radiosity times the patch area (energy per unit area times area).

The vertex radiosities can be considered a sampling of the energy per unit
area at selected points across the patch.  To obtain the average energy per
unit area over the patch, take the average of the vertex radiosities.  This
assumes that the vertices represent uniform sub-areas of the patch.  This is
not necessarily true, and when it is not, a more accurate answer is obtained by
taking an area-weighted average of the vertex radiosities.  The weight given to
a vertex is equal to the area of the patch that it represents.  In our work we
used a uniform mesh and weighted all vertices equally.

It doesn't matter whether vertices are shared by neighboring patches, since
we're talking about energy per unit area.  Picture four patches that happen to
all share a particular vertex.  The energy per unit area leaving any of the
patches at the vertex is not affected by the fact that other patches share
that vertex.  If we were somehow collecting energy at the vertex, then it
would have to be portioned out between the patches.

Once the patch radiosity is known, the patch energy is obtained by multiplying
patch radiosity times patch area.
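
As a sketch in C of the computation John describes (the function names and the
per-vertex area bookkeeping are illustrative, not from any particular
radiosity implementation):

```c
/* Patch radiosity as an area-weighted average of vertex radiosities:
   each vertex's weight is the area of the patch that it represents. */
double patch_radiosity(const double *vertex_b, const double *vertex_area,
                       int n_vertices)
{
    double weighted = 0.0, total_area = 0.0;
    int i;
    for (i = 0; i < n_vertices; i++) {
        weighted   += vertex_b[i] * vertex_area[i];
        total_area += vertex_area[i];
    }
    return weighted / total_area;
}

/* Patch energy = patch radiosity (energy per unit area) times patch area. */
double patch_energy(double patch_b, double patch_area)
{
    return patch_b * patch_area;
}
```

With a uniform mesh all the vertex areas are equal, and this reduces to the
plain average used in their work.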

-------------------------------------------------------------------------------

Dark Bulbs, by Eric Haines

	An interesting idea mentioned to me by Andrew Glassner was the concept
of "darkbulbs" in computer graphics.  This idea is a classic joke technology,
in which the darkbulb sucks light out of an area.  For example, if you want
to sleep during the daytime, you simply turn on your negative 100 watt dark
bulb and your bedroom is flooded in darkness.  Andrew noted that this
technology is entirely viable in computer graphics, and would even be useful
in obtaining interesting results.

	I happened to mention the idea to Roy Hall, and he told me that this
was an undocumented feature of the Wavefront package!  Last year Wavefront came
out with an image of two pieces of pottery behind a candle, with wonderful
texturing on the objects.  It turns out that the artist had wanted to
tone down the brightness in some parts of the image, and so tried negative
intensity light sources.  This turned out to work just fine, and the artist
mentioned this to Roy, who, as an implementer of this part of the package,
had never considered that anyone would try this and so never restricted the
intensity values to be non-negative.

-------------------------------------------------------------------------------

MTV Ray Tracer Update and Bugfix, by Mark VandeWettering


[this was extracted by me from personal mail, with parts appearing on
comp.graphics - EAH]

As was recently pointed out to me by Mike Schoenborn, the cylinder code in the
current version of the MTV raytracer is broken somewhat severely.  Or at least
it appeared to be; what actually happens is that I forgot to normalize two
vectors, which leads to interesting distortions and weird-looking cylinders.
Anyway, the bug is in cone.c, in the function MakeCone().  After the vectors
cd -> cone_u and cd -> cone_v are created, they should be normalized.  A
context diff follows at the end of this.  This makes the SPD "tree" look MUCH
better.  (And all this time I thought it was Eric's fault :-)

This bugfix will be worked into the next release, and I should also update the
version on cs.uoregon.edu SOMETIME REAL SOON NOW (read, don't hold your breath
TOO anxiously).  Hope that this program continues to be of use...  :-)

Somebody has some texture mapping code that they are sending me, I will
probably try to integrate it in before I make my next release.  I am also
trying to get in spline surfaces, but am having difficulty to the point of
frustration.  Any recommendations on implementing them?


*** ../tmp/cone.c	Fri Aug 25 20:25:52 1989
--- cone.c	Fri Aug 25 21:31:04 1989
***************
*** 240,247 ****
--- 240,251 ----
  	/* find two axes which are at right angles to cone_w
  	 */
  
+ 
  	VecCross(cd -> cone_w, tmp, cd -> cone_u) ;
  	VecCross(cd -> cone_u, cd -> cone_w, cd -> cone_v) ;
+ 
+ 	VecNormalize(cd -> cone_u) ;
+ 	VecNormalize(cd -> cone_v) ;
  
  	cd -> cone_min_d = VecDot(cd -> cone_w, cd -> cone_base) ;
  	cd -> cone_max_d = VecDot(cd -> cone_w, cd -> cone_apex) ;

-------------------------------------------------------------------------------

DBW Ray Tracer Description

[A ray tracer that has been mentioned in these pages (screens?) before is DBW.
Not having an Amiga and not being able to deal with "zoo" files, I never got a
copy.  Prem Subrahmanyam has now made it available via anonymous FTP from
geomag.gly.fsu.edu in /pub/pics/DBW.src.  Output is four bits for each
channel.

The original program was written by David B. Wecker, translating from a Vax
11/750 to the Amiga, with a conversion to Sun workstations by Ofer Licht
(ofer@gandalf.berkeley.edu). - EAH]

Below is an excerpt from the documentation RAY.DOC:


The RAY program knows how to create images composed of four primitive
geometric objects:  spheres, parallelograms, triangles, and flat circular
rings (disks with holes in them).  Some of the features of the program are:

Diffuse and specular reflections (with arbitrary levels of gloss or polish).
Rudimentary modeling of object-to-object diffusely reflected light is also
implemented, which among other things accurately simulates color-bleed effects
from adjacent contrasting colored objects.

Mirror reflections, including varying levels of mirror smoothness or
perfection.

Refraction and translucency (which is akin to variable microscopic smoothness,
like the surface of etched glass).

Two types of light sources:  purely directional (parallel rays from infinity)
of constant intensity, and spherical sources (like light bulbs, which cast
penumbral shadows as a function of radius and distance) where intensity is
determined by the inverse square law.

Photographic depth-of-field.  That is, the virtual camera may be focused on a
particular object in the scene, and the virtual camera's aperture can be
manipulated to affect the sharpness of foreground and background objects.

Solid texturing.  Normally, a particular object (say a sphere) is considered
to have constant properties (like color) over the entire surface of the
object, often resulting in fake looking objects.  Solid texturing is a way to
algorithmically change the surface properties of an object (thus the entire
surface area is no longer of constant nature) to try and model some real world
material.  Currently the program has built in rules for modelling wood,
marble, bricks, snow covered scenes, water (with arbitrary wave sources), plus
more abstract things like color blend functions.

Fractals.  The program implements what's known as recursive triangle
subdivision, which creates all manner of natural-looking surface shapes (like
broken rock, mountains, etc.).  The character of the fractal surface (degree
of detail, roughness, etc.)  is controlled by parameters fed to the program.

AI heuristics to complete computation of a scene within a user specified
length of time.  [???]

======== USENET cullings follow ===============================================

Wanted: Easy ray/torus intersection, by Jochen Schwarze


What I want to do is to turn a path consisting of line and arc segments around
an axis and then ray-trace the generated turning part.  The rotated line
segments produce cylinders or cones that are easy to intersect with a ray,
whereas the arcs produce tori.  To evaluate the intersection of the ray with a
torus I'd have to numerically solve a polynomial equation of fourth degree.

Does anybody know a way that avoids solving a general fourth-degree equation?
Perhaps something that respects the torus geometry and allows one to split the
equation into two quadratic ones?  Any other fast way to do it?

Thanks very much.

Jochen Schwarze                     Domain: schwarze@isaak.isa.de
ISA GmbH, Stuttgart, West Germany   UUCP:   schwarze@isaak.uucp
                                    Bang:   ...!uunet!unido!isaak!schwarze

-------------------------------------------------------------------------------

Polygon to Patch NFF Filter, by Didier Badouel


This is a new filter program for NFF databases: it converts polygons (p)
into patches (pp), computing normal vectors for the vertices.

________________________________________________________________
  Didier  BADOUEL   		        badouel@irisa.fr
  INRIA / IRISA				Phone : +33 99 36 20 00
 Campus Universitaire de Beaulieu	Fax :   99 38 38 32
 35042 RENNES CEDEX - FRANCE		Telex : UNIRISA 950 473F
________________________________________________________________

[Code removed.  Find it at cs.uoregon.edu or write him - EAH]

-------------------------------------------------------------------------------

Texture Mapping Resources, by Eric Haines, Prem Subrahmanyam,
	Ranjit Bhatnagar, and Jack Ritter


From: Eric Haines

Robert Minsk had a question about how to do inverse mapping on a quadrilateral.
This was my response:

For the inverse bilinear mapping of XYZ to UV, see pp. 59-64 of "An
Introduction to Ray Tracing", edited by Andrew Glassner, Academic Press (hot
off the press).  Tell me if you find any bugs, since I need to send typos to
AP.  This same
info is in the "Intro to RT" SIGGRAPH course notes from 1987 & 1988,
with one important typo fixed (see old issues of the Ray Tracing News to
find out the typo).

For an excellent discussion of the most popular mappings (affine, bilinear,
and projective), and of why to avoid simple Gouraud interpolation, get a copy
of Paul Heckbert's Master's thesis (again, hot off the press),
"Fundamentals of Texture Mapping and Image Warping".  It's got what you need
and is also a good start on sampling/filtering problems.  Order it as
Report No. UCB/CSD 89/516 (June 1989) from

        Computer Science Division
        Dept of Electrical Engineering and Computer Sciences
        University of California
        Berkeley, California  94720

It was $5.50 when I ordered mine.  Oh, I should also note: it has source
code in C for most of the algorithms described in the text.

--------

From: prem@geomag.fsu.edu (Prem Subrahmanyam)
Newsgroups: comp.graphics
Subject: Re: Texture mapping
Organization: Florida State University Computing Center

I would strongly recommend obtaining copies of both DBW_Render and QRT, as
both have very good texture mapping routines.  DBW uses absolute spatial
coordinates to determine texture, while QRT uses a relative position mapping
for each object type.  DBW has some really interesting features, like
sinusoidal reflection to simulate waves, and a turbulence-based marble/wood
texture driven by the wave sources defined for the scene.  It also has a
brick texture, checkerboard, and mottling (turbulent variance of the color
intensity).  Writing a texture routine in DBW is quite simple, since you're
provided with a host of tools (like a turbulence function, noise function,
color blending, etc.).  I have recently created a random-color texture that
uses the turbulence to redefine the base color based on the spatial point
given, which it then blends into the object's base color using the color blend
routines.  Next will be a turbulent-color marble texture that will modify the
marble vein coloring according to the turbulent color.  Also in the works are
random color checkerboarding (this will require a little more thought) and
variant brick height and mortar color (presently they are hard-wired); the
list is almost endless.  I would think the ideal ray tracer would be one that
used QRT's user-definable texture patches which are then mapped onto the
object, as well as DBW's turbulence/wave based routines.  The latter would
have to be absolute coordinate based, while the former can use QRT's relative
position functions.  In any case, getting copies of both of these would be the
most convenient, as there's no reason to reinvent the wheel.

--------

From: ranjit@grad1.cis.upenn.edu (Ranjit Bhatnagar)
    4211 Pine St., Phila PA 19104
Newsgroups: comp.graphics
Subject: Re: Texture mapping by spatial position
Organization: University of Pennsylvania

The combination of 3-d spatial texture-mapping (where the map for a particular
point is determined by its position in space rather than its position on the
patch or polygon) with a nice 3-d turbulence function can give really neat
results for marble, wood, and such.  Because the texture is 3-d, objects look
like they are carved out of the texture function rather than veneered with it.
It works well with non-turbulent texture functions too, like bricks, 3-d
checkerboards, waves, and so on.  However, there's a disadvantage to this kind
of texture function that I haven't seen discussed before:  as generally
proposed, it's highly unsuited to _animation._ The problem is that you
generally define one texture function throughout all of space.  If an object
happens to move, its texture changes accordingly.  It's a neat effect - try it
- but it's not what one usually wants to see.

The obvious solution to this is to define a separate 3-d texture for each
object, and, further, _cause the texture to be rotated, translated, and scaled
with the object._ DBW does not allow this, so if you want to do animations of
any real complexity with DBW, you can't use the nice wood or marble textures.

This almost solves the problem.  However, it doesn't handle the case of an
object whose shape changes.  Consider a sphere that metamorphoses into a cube,
or a human figure which walks, bends, and so on.  There's no way to keep the
3-d texture function consistent in such a case.

Actually, the real world has a similar defect, so to speak.  If you carve a
statue out of wood and then bend its limbs around, the grain of the wood will
be distorted.  If you want to simulate the real world in this way and get
animated objects whose textures stay consistent as they change shape, you have
to use ordinary surface-mapped (2-d) textures.  But 3-d textures are so much
nicer for wood, stone, and such!  There are a couple of ways to get the best
of both worlds:  [I assume that an object's surface is defined as a constant
set of patches, whether polygonal or smooth, and though the control points may
be moved around, the topology of the patches that make up the object never
changes, and patches are neither added to or deleted from the object during
the animation.]

	1) define the base-shape of your object, and _sample its surface_ in
	   the 3-d texture.  You can then use these sample tables as ordinary
	   2-d texture maps for the animation.

	2) define the base-shape of your object, and for each metamorphosed
	   shape, keep pointers to the original shape.  Then, whenever a ray
	   strikes a point on the surface of the metamorphosed shape, find the
	   corresponding point on the original shape and look up its
	   properties (i.e.  color, etc.)  in the 3-d texture map.  [Note:  I
	   use ray-tracing terminology but the same trick should be applicable
	   to other techniques.]

The first technique is perhaps simpler, and does not require you to modify
your favorite renderer which supports 2-d surface texture maps.  You just
write a preprocessor which generates 2-d maps from the 3-d texture and the
base-shape of the object.  However, it is susceptible to really nasty aliasing
and loss of information.  The second technique has to be built into the
renderer, but is amenable to all the antialiasing techniques possible in an
ordinary renderer with 3-d textures, such as DBW.  Since the notion of 'the
same point' on a particular patch when the control points have moved is
well-defined except in degenerate cases, the mapping shouldn't be a problem --
though it does add an extra level of antialiasing to worry about.  [Why?
Imagine that a patch which is very large in the original base-shape has become
very small - sub-pixel size - in the current animated shape.  Then a single
pixel-sized sample in the current shape maps to a large part of the original,
which must be filtered down - using, for instance, stochastic sampling or
analytic techniques.]

If anyone actually implements these ideas, I'd like to hear from you (and get
credit, heh heh, if I thought of it first).  I doubt that I will have the
opportunity to try it.

--------

From: ritter@versatc.UUCP (Jack Ritter)
Organization: Versatec, Santa Clara, Ca. 95051

[Commenting on Ranjit's posting]

It seems to me that you could solve this problem by transforming the
center/orientation of the texture function along with the object that is being
instantiated.  No need to store values, no tables, etc.  The texture function
must of course be simple enough to be so transformable.

Example, wood grain simulated by concentric cylindrical shells around an axis
(the core of the log):

    Imagine the log's center line as a half-line vector, (plus a position, if
necessary), making it transformable.  Imagine each object type in its object
space, BOLTED to the log by an invisible bracket.  As you translate and rotate
the object, you also sling the log around.  But be careful, some of these logs
are heavy, and might break your teapots.  I use only natural logs myself.

   Jack Ritter, S/W Eng. Versatec, 2710 Walsh Av, Santa Clara, CA 95051
   Mail Stop 1-7.  (408)982-4332, or (408)988-2800 X 5743
   UUCP:  {ames,apple,sun,pyramid}!versatc!ritter

-------------------------------------------------------------------------------
END OF RTNEWS

From m-cohen@cs.utah.edu Fri Oct 27 10:54:38 1989
Return-Path: <m-cohen@cs.utah.edu>
Received: from cs.utah.edu by turing.cs.rpi.edu (4.0/1.2-RPI-CS-Dept)
	id AA16161; Fri, 27 Oct 89 10:52:46 EDT
Received: by cs.utah.edu (5.61/utah-2.4-cs)
	id AA22078; Fri, 27 Oct 89 08:38:49 -0600
Date: Fri, 27 Oct 89 08:38:49 -0600
From: m-cohen@cs.utah.edu (Michael F Cohen)
Message-Id: <8910271438.AA22078@cs.utah.edu>
To: FISHER@3D.dec@decwrl.dec.com, arvo@apollo.com, atc@cs.utexas.edu,
        barr@csvax.caltech.edu, barsky@miro.berkeley.edu,
        bcorrie@uvicctr.uvic.ca, chapman@fornax, ckchee@dgp.toronto.edu,
        daniel@apollo.com, dk@csvax.caltech.edu,
        ecn.purdue.edu!cychosz@cs.utah.edu, esl0422@ultb.isc.rit.edu,
        glassner.pa@xerox.com, grant@delvalle.llnl.gov,
        gray@rhea.CRAY.COM@uc.msc.umn.edu, green@compsci.bristol.ac.uk,
        hanrahan@princeton.edu, hench@csclea.ncsu.edu,
        hohmeyer@miro.berkeley.edu, hplabs!dana!mrk@cs.utah.edu,
        hultquis@prandtl.nas.nasa.gov, image.trc.amoco.com!zmel02@cs.utah.edu,
        jakob@humus.huji.ac.il, jeff@hamlet.caltech.edu, johnf@apollo.com,
        joy@ucdavis.edu, kolb@yale.edu, kyriazis@turing.cs.rpi.edu,
        larry.mcrcim.mcgill.edu!vedge!kaveh@cs.utah.edu, lister@dg-rtp.dg.com,
        litwinow@apple.com, mcvax.cwi.nl!dutio!fwj@cs.utah.edu,
        mcvax.cwi.nl!dutrun!wim@cs.utah.edu, mja@sierra.llnl.gov,
        mplevine@phoenix.princeton.edu,
        mssun1.msi.cornell.edu!carl@cs.utah.edu, paul@sgi.com,
        ph@miro.berkeley.edu, raycasting@duke.cs.duke.edu,
        raytrace@cpsc.ucalgary.ca, rgb@caen.engin.umich.edu,
        squid.graphics.cornell.edu!jaf@cs.utah.edu,
        tcgould.tn.cornell.edu!lytle@cs.utah.edu, tim@csvax.caltech.edu,
        ucsd.ucsd.edu!megatek!kuchkuda@cs.utah.edu,
        wisdom.graphics.cornell.edu!ray-tracing-news@cs.utah.edu
Subject: Ray Tracing News
Status: RO

 _ __                 ______                         _ __
' )  )                  /                           ' )  )
 /--' __.  __  ,     --/ __  __.  _. o ____  _,      /  / _  , , , _
/  \_(_/|_/ (_/_    (_/ / (_(_/|_(__<_/ / <_(_)_    /  (_</_(_(_/_/_)_
             /                               /|
            '                               |/

			"Light Makes Right"

			October 27, 1989
		        Volume 2, Number 8

Compiled by Eric Haines, 3D/Eye Inc, 2359 Triphammer Rd, Ithaca, NY 14850
    NOTE ADDRESS CHANGE: wrath.cs.cornell.edu!eye!erich
    [distributed by Michael Cohen <m-cohen@cs.utah.edu>, but send
    contributions and subscriptions requests to Eric Haines]
All contents are US copyright (c) 1989 by the individual authors
Archive locations: anonymous FTP at cs.uoregon.edu (128.223.4.1) and at
		   freedom.graphics.cornell.edu (128.84.247.85), /pub/RTNews

Contents:
    Introduction
    Tracing Tricks, edited by Eric Haines

-------------------------------------------------------------------------------

Introduction

	I've decided to pass on an article published in the SIGGRAPH '89
"Introduction to Ray Tracing" course notes.  It's something of a "best of the
Ray Tracing News" compendium of ideas.  Since the notes are not easy for
everyone to access, and the article probably will not be printed elsewhere, I
thought it worthwhile to reprint here.

-------------------------------------------------------------------------------

Tracing Tricks, edited by Eric Haines

Over the years I have learnt a variety of tricks to increase the performance
and image quality of my ray tracer.  It's almost a cliche that today's
successful trick is tomorrow's established technique.  Photorealistic computer
graphics is, after all, concerned with figuring out shortcuts and
approximations for rendering various physical phenomena, i.e. tricks.

For whatever reason, many of the tricks mentioned here are not common
knowledge.  Some have been published (and sometimes overlooked), some have
been discussed informally and have never made it into research papers, and
others seem to have appeared out of nowhere.  It's most likely that there are
tricks that are commonly known that have not percolated over to me yet.

When possible, I have tried to give appropriate references or attributions; if
not attributed, the ideas are my own (I think!).  My apologies if I have
overlooked anyone.  Only references that do not appear in the book's "Ray
Tracing Bibliography" are included at the end of this article.  For more
general rendering hacks, see [Whitted85], which originally inspired me to
attempt to pass on some ideas from my bag of tricks.


Ambient Light

One common trick (origins unknown) is to put a light at the eye to do better
ambient lighting.  Normally if a surface is lit by only ambient light, its
shading is pretty crummy.  For example, a non-reflective cube totally in
shadow will have all of its faces shaded the exact same shade.  All edges
disappear and the cube becomes a hexagonal blob.  The light at the eye gives
the cube definition.  Note that a light at the eye does not need shadow
testing:  wherever the eye can see, the light can see, and vice versa.
However, this trick can lead to various artifacts, e.g.  there will always be
a highlight near the center of every specular sphere in the image.


Efficiency Schemes

There are any number of efficiency schemes out there, including Bounding
Volume Hierarchy, Octree, Grid, and 5D Ray Classification.  See Jim Arvo's
section of the book for an excellent overview of all of these.  The most
important conclusion is that any efficiency scheme is better than none.  Even
on the simplest scenes an efficiency scheme will help execution.  For example,
in one test scene with only ten objects, using an efficiency scheme made the
job take only one third the time.  Grid subdivision is probably the quickest
to implement, though the others are not that much harder.

While at the University of Utah, John Peterson and Tom Malley actually
implemented Whitted/Rubin, Kay/Kajiya, and an octree scheme, and found that
all three schemes were within 10-20% of each other speedwise.  In an informal
survey at SIGGRAPH '88, the BV Hierarchy, Octree, Grid and 5D schemes all had
about the same number of users (all the 5D users were from Apollo; on the
other hand, 5D is the new kid on the block).

There are a number of techniques I have found to be generally useful for all
efficiency schemes.

1) When shadow testing, keep the opaque object (if any) which shadowed each
   light for each ray tree node.  Try this object immediately during the next
   shadow test at that ray tree node.  Odds are that whatever shadowed your
   last intersection point will shadow again.  If the object is hit you can
   immediately stop testing because the light is not seen.  This was first
   published in [Haines86].
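
A toy C sketch of this shadow cache, with "objects" reduced to intervals along
the shadow ray so that only the caching logic shows (all names are
illustrative):

```c
#include <stddef.h>

/* An "object" here is just the interval of ray parameters t it covers. */
typedef struct { double t_min, t_max; } Object;

/* The object occludes if it lies between the surface (t=0) and the light. */
static int blocks(const Object *o, double t_light)
{
    return o->t_min > 0.0 && o->t_max < t_light;
}

/* Returns 1 if any object shadows the point.  *cache holds the opaque
   object that shadowed this ray tree node's light last time, tried first;
   a hit there ends the test immediately. */
int in_shadow(const Object *objs, int n, double t_light, const Object **cache)
{
    int i;
    if (*cache && blocks(*cache, t_light))
        return 1;                      /* cache hit: stop at once */
    for (i = 0; i < n; i++) {
        if (blocks(&objs[i], t_light)) {
            *cache = &objs[i];         /* remember for the next shadow ray */
            return 1;
        }
    }
    *cache = NULL;
    return 0;
}
```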

2) When shadow testing, save transparent objects for later intersection.  Only
   if no opaque object is hit should the transparent objects be tested.  The
   idea here is to avoid doing work on transparent filters when in fact the
   light does not reach the surface.

3) Don't calculate the normal for each intersection.  Get the normal only
   after all intersection calculations are done and the closest object for
   each node is known.  After all, each ray can have only one intersection
   point and one normal.  Saving intermediate results is worthwhile for some
   intersection calculations, which are then used if the object is actually
   hit.  This idea was first mentioned in [Whitted85].  Similarly, other
   calculations about the surface can be delayed, such as (u,v) location, etc.

4) One other idea (which I have not tested) is sorting each intersection list
   by various criteria.  Most efficiency schemes have in common the idea of
   lists of objects to be tested.  For a given list, the order of testing is
   important.  For example, all else being equal, if a list contained a spline
   surface and a polygon, I would want to test the polygon first since it is
   usually a quicker intersection test.  Given an opaque object and a bounding
   box in a list, I probably want to test the opaque object first when doing
   shadow testing, since I want to find any intersection as soon as possible.
   If two polygons are on the list, I probably want to test the larger one
   first, as it is more likely to cast a shadow or give me a maximum depth
   (see next section).  There are many variations on this theme and at this
   point little work has been done on these possibilities.


Bounding Volume Hierarchy

I have a strong bias towards this scheme since it handles a wide variety of
object sizes, types, and orientations in a robust fashion.  Other schemes will
often be faster, but this scheme has the fewest crippling pathological cases
(e.g. a grid subdivision scheme is useless whenever most of the objects fall
into one grid box).  I favor the automatic bounding volume generation technique
described by [Goldsmith87].

I have found a number of tricks to speed up hierarchy traversal, most of which
are simple to implement. Some of the ideas can also be useful for other
efficiency schemes.

1) Keep track of the closest intersection distance.  Whenever an object is
   hit, keep its distance as the maximum distance to search.  During further
   intersection testing use this distance to cut short the intersection
   calculations:  if an object or bounding box is beyond the maximum distance,
   no further testing of it needs to be done.  Note that for shadow testing
   the distance to the light provides an initial maximum.
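
A toy C sketch of the maximum-distance cutoff, again with objects reduced to
ray-parameter intervals (names illustrative):

```c
#include <float.h>

/* t_enter: where the ray first reaches the object or its bounding box;
   t_hit: the actual intersection distance, or negative for a miss. */
typedef struct { double t_enter; double t_hit; } Prim;

/* Trick #1: carry the closest hit found so far as the maximum search
   distance, skipping anything entirely beyond it.  Pass the distance to
   the light as t_start_max for shadow rays, else DBL_MAX. */
double closest_hit(const Prim *prims, int n, double t_start_max)
{
    double t_max = t_start_max;
    int i;
    for (i = 0; i < n; i++) {
        if (prims[i].t_enter >= t_max)
            continue;                  /* beyond the closest hit: skip */
        if (prims[i].t_hit > 0.0 && prims[i].t_hit < t_max)
            t_max = prims[i].t_hit;    /* new maximum search distance */
    }
    return t_max;
}
```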

2) When building a ray intersection tree, keep the ray tree which was
   previously built.  For each ray tree node, intersect the object in the old
   ray tree, then proceed to intersect the bounding box/object tree.  By
   intersecting the old object first you can usually obtain a good maximum
   distance immediately, which can then be used to aid trick #1.

3) When shooting rays from a surface (e.g.  reflection, refraction, or shadow
   rays), get the initial list of objects to intersect from the bounding
   volume hierarchy.  For example, a ray beginning on a sphere must hit the
   sphere's bounding volume, so include all other objects in this bounding
   volume in the immediate test list.  The bounding volume which is the parent
   of the sphere's bounding volume must also automatically be hit, and its
   other children should automatically be added to the test list, and so on up
   the object tree.  Note also that this list can be calculated once for any
   object, and so could be created and kept around under a least-recently-used
   storage scheme.  Another advantage of this scheme is that nearby neighbors
   of the object are tested for shadowing first.  These neighbors are more
   likely to cast a shadow on the point than any random object.  I first saw
   this trick used in Weghorst and Hooper's ray tracer at Cornell's Program of
   Computer Graphics.

4) Similar to trick #3, the idea is simply to do the same list-making process
   for the eye position.  Check if the eye position is inside the topmost node
   of the hierarchy.  If it is, check the children which are boxes.  Continue
   to check and unwrap until you are left with a list of objects to intersect.
   Again, the idea is to avoid wasting time shooting a ray against boxes which
   you know must be hit.

   For light sources, since the farthest endpoint of the ray is also known, it
   can also be used to open some boxes early on.  The tradeoff here, however,
   is that for shadow testing we want to find any intersection we can.
   Wasting time opening boxes near the light or ray origin might be better
   spent trying to find an intersection as fast as possible.

5) An improvement to trick #3 is also to use trick #4 to open more boxes
   initially.  You work up the hierarchy opening all parent boxes; any
   children of the parent (except the original one, of course) are then tested
   against the ray position.  However, this can be done only when the trick is
   done on the fly, since the ray's origin will change.

Kay & Kajiya's hierarchy scheme [Kay86] is about the best overall traversal
method.  However, Jeff Goldsmith and others note that if you do use Kay &
Kajiya's heapsort on bounding volumes in order to get the closest, don't
bother to do it for illumination rays.  In shadowing, you don't care about the
closest intersection, but just whether anything blocks the light.


Octree

There are a few tips on accessing and moving through the octree.  Olin Lathrop
and others have pointed out that there is a faster method than Glassner's
hashing scheme for finding which octree voxel contains a point.  Quickest is
to simply transform the point into the octree's (0,0,0) to (1,1,1) space.  Say
you allow your octree a maximum of eight levels.  This means you'll want to
convert from world coordinates to eight bits in X, Y, and Z.  For example, if
the octree box went from 3.0 to 6.0 along the X axis in world space, you would
convert 5.25 into the binary fraction 0.11000000 (which is equal to 0.75
decimal, which is where 5.25 lies between 3.0 and 6.0).  The most significant
bit of each coordinate's binary fraction is then combined with the others and
used to access the correct topmost octree voxel (i.e. 0 through 7, similar to
Glassner's scheme).  The next-most significant three bits are then stripped
off, and the corresponding subordinate octree voxel is accessed, on down until
a leaf voxel is found.
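The bit-stripping scheme above can be sketched in C.  This is a minimal
sketch assuming an eight-level octree; all function and parameter names are
illustrative:

```c
/* A sketch of the binary-fraction point-location trick, assuming an
   eight-level octree over the cube [lox,lox+size) x [loy,loy+size) x
   [loz,loz+size).  All names are illustrative.  Points exactly on the
   far boundary would need clamping to 255 (omitted here). */
#define OCTREE_LEVELS 8

void octree_path(double x, double y, double z,
                 double lox, double loy, double loz, double size,
                 int path[OCTREE_LEVELS])
{
    /* Convert each coordinate to an 8-bit binary fraction of the cube. */
    unsigned bx = (unsigned)((x - lox) / size * 256.0);
    unsigned by = (unsigned)((y - loy) / size * 256.0);
    unsigned bz = (unsigned)((z - loz) / size * 256.0);
    int level;

    for (level = 0; level < OCTREE_LEVELS; level++) {
        /* Strip the most significant remaining bit of each fraction
           and pack the three bits into a child index 0 through 7. */
        int shift = OCTREE_LEVELS - 1 - level;
        path[level] = (int)((((bx >> shift) & 1) << 2) |
                            (((by >> shift) & 1) << 1) |
                            ((bz >> shift) & 1));
    }
}
```

Descending the octree is then a matter of following path[0], path[1], and
so on down the child pointers until a leaf is reached.  For the text's
example, x = 5.25 in a cube running from 3.0 to 6.0 gives the fraction
0.11000000, so the first two X bits are 1.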

In practice, each octree voxel notes whether it is a parent of further voxels
or is a leaf and contains a list of objects to hit.  If it is a parent, it
stores a list of 8 pointers to its subordinate octrees, with null pointers
meaning that the subordinate octree is empty; otherwise, a list of objects is
used.
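One way this parent-or-leaf node might be laid out in C (the field names
here are hypothetical, not taken from any particular ray tracer):

```c
/* A sketch of the octree node described above: a parent stores eight
   child pointers (NULL = empty subordinate octree); a leaf stores a
   list of candidate objects.  Field names are illustrative. */
typedef struct OctNode OctNode;

struct OctNode {
    int is_leaf;                 /* 1 = leaf, 0 = parent             */
    union {
        OctNode *child[8];       /* parent: NULL entry = empty child */
        struct {
            int    n_objects;    /* leaf: objects to test for hits   */
            void **objects;
        } leaf;
    } u;
};
```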

One problem with building octrees is deciding when enough is enough.  You want
to subdivide an octree voxel if the number of objects is too many, but you may
find that these further subdivisions do not gain you anything.  Olin Lathrop
has an interesting method for octree subdivision.  First, the biggest win is
to subdivide on the fly.  Never subdivide anything until you find there is a
demand for it (this same idea was used by Arvo and Kirk [Arvo87] in their 5D
efficiency scheme).  His subdivision criteria are, in order of precedence:

1) Do not subdivide if subdivision generation limit is hit.

2) Do not subdivide if a voxel contains less than X objects (These first two
   criteria were first proposed in [Glassner84]).  Olin uses X=1.

3) Do not subdivide if less than N rays passed through this voxel, but did not
   hit anything.  Olin uses N=4.

4) Do not subdivide if M*K >= N, where M is the number of rays that passed
   through this voxel that did hit something, and K is a parameter you choose.
   Olin uses K=2, but suspects it should be higher.  This step seeks to avoid
   subdividing a voxel that may be large, but has a good history of producing
   real intersections anyway.  Keep in mind that for every ray that did hit
   something, there are probably shadow test rays that did not hit anything.
   This can distort the statistics, and make a voxel appear less "tight" than
   it really is, hence the need for larger values of K.
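Putting the four criteria together, a decision routine might look like the
following sketch.  The constant names are illustrative; the values are the
ones Olin reports using:

```c
/* A sketch of the on-the-fly subdivision decision, applying the four
   criteria above in order of precedence.  Names are illustrative. */
#define MAX_DEPTH   8   /* criterion 1: generation limit */
#define MIN_OBJECTS 1   /* criterion 2: Olin uses X = 1  */
#define MIN_MISSES  4   /* criterion 3: Olin uses N = 4  */
#define K_FACTOR    2   /* criterion 4: Olin uses K = 2  */

/* misses = rays through this voxel that hit nothing;
   hits   = rays through this voxel that hit something. */
int should_subdivide(int depth, int n_objects, int misses, int hits)
{
    if (depth >= MAX_DEPTH)        return 0;  /* criterion 1 */
    if (n_objects < MIN_OBJECTS)   return 0;  /* criterion 2 */
    if (misses < MIN_MISSES)       return 0;  /* criterion 3 */
    if (hits * K_FACTOR >= misses) return 0;  /* criterion 4 */
    return 1;
}
```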

Another possible criterion is to base the subdivision generation limit
(criterion 1) on the number of objects in the octree.  If you had, say, 6
objects and 5 of them are clustered tightly together, you may find your octree
reaching its maximum depth without the subordinate octrees actually splitting
up the 5 objects.  These octree voxels are useless, costing extra time and
memory.  They could be avoided by setting the limit based on the total number
of objects.  I use something along the lines of setting the depth limit to
log base 4 of the number of objects.

Andrew Glassner has a better method to avoid this problem.  When you subdivide
a voxel, look at its children.  If only one child is non-empty, replace the
original voxel with its non-null child.  Do this recursively until the
subdivision criterion is satisfied.  He does this in his spacetime ray tracer,
and the speedup can be large.  Note that this scheme means adding a field to
the octree structure to identify what level in the hierarchy it represents.

An idea to speed octree traversal was first mentioned to me by Andrew Glassner
and later by Mike Kaplan.  The idea is to place a pointer on each face of each
octree voxel.  If a voxel's face is next to a larger or same size voxel, a
pointer to this neighbor is stored.  If the voxel face's neighbors are
smaller, then the face pointer is set to point at the bordering voxel of the
same size (which is these neighbors' common parent).  If there are no
neighbors (i.e. the face is on the exterior), a null pointer is stored.

When a ray exits a voxel, the voxel face is accessed and the next voxel found
directly.  This voxel may have to be descended, but this trick saves having to
descend the octree from the top.

Mike Kaplan independently arrived at a similar method, in which he stores
quadtrees at the faces so that he can immediately access the next voxel and
avoid any descent altogether.


Bounding Box Intersection

The fastest method in general is Kay and Kajiya's slab intersection method
[Kay86].  As they point out, precompute as much as you can for the ray, i.e.
check whether the ray is parallel to any of the axes, and for whichever axes
it is not, compute the multiplicative inverse of the ray direction vector.
However, there are other tests which can actually improve the performance of
the box intersector.  For example, I have found that for my particular ray
tracer, if we first do a quick inside-outside test with the ray origin and the
box, the overall box intersection time goes down (for a related trick, which
is something of a preprocess version of this method, see #4 under "Bounding
Volume Hierarchy").
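A minimal sketch of the slab test with precomputed inverse directions
follows.  Names are illustrative; axes where the direction component is
zero would need the separate parallel-ray check mentioned above (omitted
here for brevity):

```c
/* A sketch of the Kay/Kajiya slab test for an axis-aligned box.  The
   inverse of each ray direction component is assumed precomputed once
   per ray, as the text suggests. */
typedef struct {
    double org[3];      /* ray origin                            */
    double invdir[3];   /* 1.0 / direction, precomputed per ray  */
} Ray;

/* Returns 1 and writes the entry distance to *tnear_out on a hit. */
int ray_box(const Ray *ray, const double lo[3], const double hi[3],
            double *tnear_out)
{
    double tnear = -1.0e30, tfar = 1.0e30;
    int i;

    for (i = 0; i < 3; i++) {
        /* Distances along the ray to the two slab planes of this axis. */
        double t1 = (lo[i] - ray->org[i]) * ray->invdir[i];
        double t2 = (hi[i] - ray->org[i]) * ray->invdir[i];
        if (t1 > t2) { double tmp = t1; t1 = t2; t2 = tmp; }
        if (t1 > tnear) tnear = t1;
        if (t2 < tfar)  tfar = t2;
        if (tnear > tfar || tfar < 0.0)
            return 0;  /* slabs don't overlap, or box is behind ray */
    }
    *tnear_out = tnear;
    return 1;
}
```

Note that when the ray origin is inside the box, tnear comes back negative;
the quick inside-outside pretest mentioned above catches that case before
the full loop runs.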


Spline Surface Intersection

There are three camps on this question:  the numerical analysts, the polygon
meshers, and the synthesists (who do a little of each).  The following
comments are distilled from John Peterson's thoughts on the subject.  The
analytic methods are often slow, and there are many nightmares involved in
finding roots of two equations (see section 9.6 of [Press88]).  To find the
quadrilaterals, John recommends subdividing the surfaces by using the Oslo
Algorithm, due to its generality [Bartels87, Sweeney86].  Easiest to implement
is simply subdividing the surface by a given step size, then ray tracing the
mesh of polygons produced (throwing these polygons into an octree or grid
efficiency scheme is recommended).  Another method is to use adaptive
subdivision with a quadtree structure, checking a flatness criterion to see
whether a given quadrilateral should be subdivided into four
sub-quadrilaterals.

Peterson's subdivision criterion is to use a bounding box around each quad
generated, subdividing until the box is smaller than a certain number of
pixels.  A drawback of this method is that it does not elegantly handle
objects that are part of the scene yet do not appear in the viewing frustum
(e.g.  if the teapot is only seen in a mirror, we cannot get a good sense of
how much to subdivide it).  Snyder and Barr [Snyder87] have some good
recommendations on this process, and they use the change in the tangent vector
between the quad's points as a subdivision criterion.  This article also
points out other pitfalls of tessellation and of rendering polygons with a
normal at each vertex.

If adaptive techniques are used, one problem to guard against is cracking.
Say there are two adjacent quadrilaterals, and one has been subdivided into
four smaller quads.  Something must be done along the seam between the two
large quadrilaterals, as normally the subdivision point between the two common
vertices will not lie on the large, undivided quad.  If rendered from some
angle, there will be a noticeable crack between the large quad and the two
neighboring smaller quads.


Acknowledgements

This article owes a large debt to Andrew Glassner, who began "The Ray Tracing
News," an informal journal for ray tracing researchers to share ideas.  He has
kept the hardcopy version moving along, while I have had the pleasure of
running the electronic edition.  Most of the ideas given a personal
attribution in this article are from contributions to the "News", and I thank
all those who have contributed to it over the years.  Finally, my thanks to
Kells Elmquist and Andrew Glassner for their comments on this paper.


Bibliography

[Bartels87] Bartels, Richard H., John C.  Beatty, Brian A.  Barsky, An
Introduction to Splines for Use in Computer Graphics, Morgan Kaufmann, Los
Altos, California, 1987.

[Press88] Press, William H. et al., Numerical Recipes in C, Cambridge
University Press, Cambridge, 1988.

-------------------------------------------------------------------------------
END OF RTNEWS

From rpi!batcomputer!toc Thu Jan  4 12:20:24 EST 1990
Article 5522 of comp.graphics:
Path: rpi!batcomputer!toc
From: toc@batcomputer.tn.cornell.edu (Timothy F. O'Connor)
Newsgroups: comp.graphics
Subject: Ray Tracing News, Volume 3, Number 1
Summary: here's another one, at long last
Message-ID: <9490@batcomputer.tn.cornell.edu>
Date: 3 Jan 90 17:04:13 GMT
Reply-To: wrath.cs.cornell.edu!eye!erich (Eric Haines)
Organization: 3D/Eye, Inc
Lines: 1189


 _ __                 ______                         _ __
' )  )                  /                           ' )  )
 /--' __.  __  ,     --/ __  __.  _. o ____  _,      /  / _  , , , _
/  \_(_/|_/ (_/_    (_/ / (_(_/|_(__<_/ / <_(_)_    /  (_</_(_(_/_/_)_
             /                               /|
            '                               |/

			"Light Makes Right"

			January 2, 1990
		        Volume 3, Number 1

Compiled by Eric Haines, 3D/Eye Inc, 2359 Triphammer Rd, Ithaca, NY 14850
    NOTE ADDRESS CHANGE: wrath.cs.cornell.edu!eye!erich
    [distributed by Michael Cohen <m-cohen@cs.utah.edu>, but send
    contributions and subscriptions requests to Eric Haines]
All contents are US copyright (c) 1989,1990 by the individual authors
Archive locations: anonymous FTP at cs.uoregon.edu (128.223.4.1) and at
		   freedom.graphics.cornell.edu (128.84.247.85), /pub/RTNews
Other sites: uunet.uu.net:/graphics

Contents:
    Introduction
    New People
    Archive Site for Ray Tracing News, by Kory Hamzeh
    Ks + T > 1, by Craig Kolb and Eric Haines
    Quartic Roots, and "Intro to RT" Errata, by Larry Gritz and Eric Haines
    More on Quartics, by Larry Spence
    Question: Kay and Kajiya Slabs for Arbitrary Quadrics?
	by Thomas C. Palmer
    Ambient Term, by Pierre Poulin
    Book Reviews on Hierarchical Data Structures of Hanan Samet,
	by A. T. Campbell, III
    Comparison of Kolb, Haines, and MTV Ray Tracers, Part I, by Eric Haines
    Raytracer Performance of MTV, by Steve Lamont
    BRL-CAD Ray Tracer Timings, by Gavin Bell
    BRL-CAD Benchmarking and Parallelism, by Mike Muuss
    ======== USENET cullings follow ========
    Rayshade Patches Available, by Craig Kolb
    Research and Timings from IRISA, by Didier Badouel
    Concerning Smart Pixels, by John S. Watson
    Input Files for DBW Render, by Tad Guy
    Intersection with Rotated Cubic Curve Reference, by Richard Bartels
    Needed: Quartz surface characteristics, by Mike Macgirvin
    Solution to Smallest Sphere Enclosing a Set of Points, by Tom Shermer
    True Integration of Linear/Area Lights, by Kevin Picott

-------------------------------------------------------------------------------

Introduction

	This issue's focus is on timings from various people using a wide
selection of hardware and software.  Much of the delay in getting out this
issue was my desire to finish up my own timing tests of MTV's, Kolb's, and my
own ray tracers on the same machine.  I hope some of you will find this
information of use.

	Another feature of this issue is a pair of book reviews.  One of the
purposes of the RT News is to provide people with sources of information
relevant to ray tracing.  So, I would like to see more reviews, or even just
brief descriptions of articles and books that you come across.  Keeping up to
date in this field is going to take more time as the years go by, so please do
pass on any good finds you may have.  Also, if you're an author, please feel
free to send a copy of the abstract here for publication.  This service is
already provided to a certain extent by SIGGRAPH for thesis work.  However,
even a duplication of their efforts is worthwhile, since an electronic version
is much easier to search and manipulate.

	Finally,

-------------------------------------------------------------------------------

New People

# Kory Hamzeh
# 6640 Nevada Ave.
# Canoga Park, Ca 91303
# Email: UUCP:     avatar!kory or ..!uunet!psivax!quad1!avatar!kory
# INTERNET: avatar!kory@quad.com
alias	kory_hamzeh	quad.com!avatar!kory

I'm not professionally involved in ray tracing.  Just personally fascinated by
it.  I have written a couple of ray tracers (who hasn't yet?) and I'm in the
midst of designing a 24 bit frame buffer.  Since I don't do this on a
professional level, I lack some of the resources required to develop real
products.  I maintain an archive site with a lot of graphics related items
(including Ray Tracing News).  If you need to access the archive (anonymous
uucp only) please send me mail.

--------

#
# Steve Lamont, sciViGuy - parallelism
# NCSC,
# Box 12732,
# Research Triangle Park, NC 27709
alias	steve_lamont	cornell!spl%mcnc.org

--------

# Bob Kozdemba - novice tracer, futures, also radiosity
# Hewlett-Packard Co.
# 7641 Henry Clay Blvd.
# Liverpool, NY 14450
# (315-451-1820 x265)
alias   bob_kozdemba    hpfcla!hpfcse!hpurvmc!koz

I work for HP in Syracuse, NY as a systems engineer.  I will be attending SU
starting in Jan. '89 working toward my BS with a focus in computer graphics.
My job responsibilities are to provide technical support to customers and
sales in the areas of Starbase graphics and X Windows.  Lately I have been
experimenting with HP's SBRR product [radiosity and ray tracing part of the HP
graphics package] and trying to keep abreast of futures in graphics.  I have
written an extremely primitive ray tracer and I am looking for ideas on how to
implement reflections and transparency.

--------

# Robert Goldberg
# Queens College of CUNY
# Comp. Sci. Dep't
# 65-34 Kissena Blvd.
# Flushing, N.Y. 11367-0904
# Work : 3d Modeling algorithms, with appl. to graphics and image processing
# Phone: Work -  (718) 520-5100
alias	robert_goldberg	rrg@acf8.nyu.edu

--------

# John Olsen - refraction, radiosity, antialiasing, stereo images.
# Hewlett-Packard, Mail Stop 73
# 3404 E. Harmony Road
# Ft Collins, CO 80525
# (303) 229-6746
# email:  olsen@hq.HP.COM, hplabs!hpfcdq!olsen
alias	john_olsen	hpfcdq.hp.com!olsen

Currently, I've been spending some time tinkering with the DBWrender ray
tracer making it produce 24 bit/pixel QRT-format images.  I like the QRT
output format, but I like some of the features of DBWrender, such as
antialiasing and fading to a background color.

I've thought about writing my own ray tracer with all the features I want, but
so far I've resisted this evil temptation, and only looked for fancier ones
already done by others who could not resist the temptation.  :^)

I've just installed an alias for a local ray tracing news distribution.  You
can send it to raylist@hpfcjo.HP.COM (or if you can't reach, try something
like hplabs!hpfcdq!hpfcjo!raylist).

--------

# Andrew Hunt, andrew@erm.oz
# Earth Resource Mapping, 130 Hay St, Subiaco, Western Australia 6008
# Phone: +61 9 388 2900   Fax: +61 9 388 2901
# ACSnet: andrew@erm.oz
alias	andrew_hunt	uunet!munnari!erm.erm.oz.au!andrew

In 1987 I implemented a "Three Dimensional Digital Differential Analyser"
(3D-DDA) algorithm, along the lines of Fujimoto and Iwata's, and used it to
speed up a raytracing system under development at the Computer Science
Department at Curtin University of Technology.

Recently I have been too busy developing commercial image processing software
to devote much time to Ray Tracing.

Sometime during 1990 I plan to try to port our Ray Tracing system to a
Transputer based platform.

--------

# Nick Beadman - Distributed Ray Tracing, Efficiency
# School of Information Systems
# University of East Anglia
# Norwich
# Norfolk
# United Kingdom
alias	nick_beadman	cmp7112@sys.uea.ac.uk

At the moment I'm trying to implement a distributed ray tracer on 8 t800s on a
meiko computing surface using C, with little success I should add.  It is all
part of a big third-year computing project worth a sixth of my degree.

--------

# Peter Miller - algorithms, realism
# 18 Daintree Cr
# Kaleen ACT 2617
# AUSTRALIA
# CONTACT LIST ONLY: subscription through melbourne
#Phone:  +61-62-514611 (W)
#	 +61-62-415117 (H)
#        UUCP    {uunet,mcvax,ukc}!munnari!neccan.oz!pmiller
#        ARPA    pmiller%neccan.oz@uunet.uu.net
#        CSNET   pmiller%neccan.oz@australia
#        ACSnet  pmiller@neccanm.oz
alias	peter_miller	cornell!uunet!munnari!neccan.necisa.oz.au!peter

  I have been interested in ray tracing since 1984, when I wrote a ray tracer
before I knew it was called ray tracing!  Since then I have been reading
journals and tinkering with my ray tracer.

  The last 3 years were spent marooned off the net, with very poor graphics
output devices; so while some work was done on the ray tracer, I now have a
lot of catching up to do.

--------

# Mike J. McNeill
# VLSI and Graphics Research Group,
# EAPS II,
# University of Sussex,
# Falmer, BRIGHTON, East Sussex, BN1 9Qt
# TEL: +44 273 606755 x 2617
# EMAIL: (JANET) mikejm@syma.sussex.ac.uk | (UUCP) ...mcsun!ukc!syma!mikejm
alias	mike_mcneill	mike@syma.sussex.ac.uk

I am currently researching the parallelisation of the RT algorithm on a
multiprocessor architecture.  I'm using object space subdivision with a fast
octree traversal algorithm.  At the moment I'm simulating the architecture
using C via TCP/IP protocols on a distributed Apollo network.  Interested in
all aspects of speed-up, parallel architectures, coherency and radiosity -
dare I mention animation and Linda?  Also does anyone know of a modelling
package for use on the Apollos?

-------------------------------------------------------------------------------

Archive Site for Ray Tracing News, by Kory Hamzeh

Avatar is an archive site which archives Ray Tracing News, comp.sources.unix,
comp.sources.misc, and other goodies.  Archived files can be accessed by
anyone using anonymous uucp file transfers.  A list of all files in the
archive can be found in avatar!/usr/archive/index.  Note that this file is
quite large (about 40K).  If you are only interested in the graphics related
archives, snarf the file avatar!/usr/archive/comp/graphics/gfx_index.
All Ray Tracing News archives (except for the first few) are stored in the
/usr/archive/comp/graphics directory.  They are named in the following
fashion:

	RTNewsvXXiYY.Z

Where XX is the volume number and YY is the issue number.  Note that all
files (except index and gfx_index) are compressed.

Before trying to access avatar, your L.sys file (or Systems file, depending
on which flavor of UUCP you have) must be edited to contain the following
info:

#
# L.sys entry for avatar.
#
# NOTE: 'BREAK' tells uucico to send a break on the line. This keyword
# may vary from one flavor of UUCP to another. Contact your system
# administrator if you're not sure.
#
# 1200 baud 
avatar Any ACU 1200 18188847503 "" BREAK "" BREAK ogin: anonuucp
#
# 2400 baud
avatar Any ACU 2400 18188847503 "" BREAK ogin: anonuucp
#
# 19200 baud (PEP mode) for Telebit Trailblazers only
avatar Any ACU 19200 18188847503 ogin:-\n-ogin: anonuucp

After the previous lines are entered in your L.sys file, you may use the
following command to get the index file:

    uucp avatar!/usr/archive/index /usr/spool/uucppublic/index

This command will grab the file from avatar and put it in your 
/usr/spool/uucppublic directory using the name index. For example,
to get Ray Tracing News Volume 2, Issue 5, execute the following
command:

   uucp avatar!/usr/archive/comp/graphics/RTNewsv02i05.Z /tmp/RTnewsv02i05.Z

NOTE that some shells (like csh) use the "!" as an escape character, so
use a "\" before the "!".

If you experience problems getting to any of the files, I can be reached
via e-mail at:

    UUCP: avatar!kory or ..!uunet!psivax!quad1!avatar!kory
    INTERNET: avatar!kory@quad.com soon to be kory@avatar.avatar.com


Enjoy,
Kory Hamzeh

-------------------------------------------------------------------------------

Ks + T > 1, by Craig Kolb and Eric Haines

>From: Craig Kolb <craig@weedeater.math.yale.edu>

While adding adaptive tree pruning to rayshade (and discovering a bug), I came
across a question I've had regarding the SPD for a while.

Some objects have Ks + T > 1.  How can this be?  For example, the spheres in
"mountain" have Ks = .9 and T = .9.  Unless I'm completely out to lunch (which
is possible), this means that subsequent specularly reflected rays are weighted
by .9, and that transmitted rays are also weighted by .9.  This leads to
"glowing" objects pretty quickly.

What's wrong in the above description?  Be warned that in "nff2shade.awk", I
set the "specular reflectance" to be min(Ks, 1. - T).

--------

My reply:

	Actually, true enough, Ks + T > 1 does occur in the SPD stuff.
I use T in a funny way, since I was trying to make databases that would
display both using hidden surface and ray tracing algorithms.  Hmmm, how
to explain?  Well, imagine you have a glass ball: under the hidden surface
system (without transparency), you'd like the ball to appear opaque, and
so a high Kd is in order.  Now if the thing's transparent, you don't want
Kd to be high.  So what I do is lower Kd and Ks by ( 1 - T ).  An admittedly
weird shading model, which I've now changed a bit (i.e. reflectance and
transmittance are now entirely separate).  So, your solution of turning down
the reflectance is fine.  I should add that I didn't really explain all this
as it's irrelevant for timings (all we care about is what rays get generated,
not the final color), but I agree, it would be nicer to get a good resulting
picture as a check.  I'll change that in the next update of SPD, actually...
thanks for pointing it out!

--------

Craig's reply:

Ah, I get it.

I brought it up because it does make a difference timing-wise if you're using
adaptive tree pruning.  Although you say in the SPD stuff that pruning
shouldn't be used, Mark's raytracer currently has no option to turn it off.  I
was comparing pruned vs. pruned, and noticed that I had many fewer reflected
rays, since my reflectance for the transparent gears was .05 rather than .95.

-------------------------------------------------------------------------------

Quartic Roots, and "Intro to RT" Errata,
	by Larry Gritz (vax5.cit.cornell.edu!cfwy)

    I was trying to find the solution to a quartic equation to solve a
ray-torus intersection test.  I've gotten lots of replies, which generally
fall into one of two categories:  either solve the quartic equation (I forget
the reference now, but I'll send you either a reference or the formula if you
want [It's in the _CRC Standard Math Tables_ book - EAH]), or use some
iterative method.  Everybody says (and I have confirmed this experimentally)
that
solving the equation is very numerically unstable.  I have chosen to use the
Laguerre method from Press, et. al. _Numerical Recipes in C_, which is slow,
but seems to work, and finds all roots without needing to specify brackets for
the roots.  (An advantage since although I can bracket all possible real roots
with the bounding box test that I already do, I'm not really sure how many
roots lie within the brackets.)

    What actually turns out to be a bigger problem is that I got the quartic
coefficients from the SIGGRAPH '88 Ray Tracing Course Notes (on page 3-14 of
Pat Hanrahan's section).

[Larry and I thrashed this out over a number of passes (boy, I wish I had access
to Mathematica or somesuch), and came out with the following corrected
equation set for those on page 93 of _An Introduction to Ray Tracing_:

	a4 & a3 - Pat's are OK.
        a2 = 2(x1^2+y1^2+z1^2)((x0^2+y0^2+z0^2)-(a^2+b^2)) 
              + 4 * (x0x1+y0y1+z0z1)^2 + 4a^2z1^2
	a1 = 4 * (x0x1+y0y1+z0z1)((x0^2+y0^2+z0^2)-(a^2+b^2))
		+ 8a^2 * z0 * z1
	a0 = ((x0^2+y0^2+z0^2)-(a^2+b^2))^2 - 4a^2(b^2-z0^2)

- EAH]
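[As an editorial sketch, the coefficient set above can be packaged as a small
C routine.  The function and variable names here are illustrative; a is the
torus major radius, b the tube radius, with the torus centered at the origin
and its axis along z:

```c
/* Sketch: quartic coefficients for a ray/torus intersection, with ray
   origin (x0,y0,z0) and direction (x1,y1,z1), torus major radius a and
   tube radius b.  coef[i] multiplies t^i.  Names are illustrative. */
void torus_quartic(double x0, double y0, double z0,
                   double x1, double y1, double z1,
                   double a, double b, double coef[5])
{
    double D = x1*x1 + y1*y1 + z1*z1;   /* |direction|^2      */
    double S = x0*x0 + y0*y0 + z0*z0;   /* |origin|^2         */
    double P = x0*x1 + y0*y1 + z0*z1;   /* origin . direction */
    double Q = S - (a*a + b*b);

    coef[4] = D * D;
    coef[3] = 4.0 * P * D;
    coef[2] = 2.0 * D * Q + 4.0 * P * P + 4.0 * a*a * z1*z1;
    coef[1] = 4.0 * P * Q + 8.0 * a*a * z0 * z1;
    coef[0] = Q * Q - 4.0 * a*a * (b*b - z0*z0);
}
```

One quick sanity check: for a torus with a = 2, b = 1, the ray from (5,0,0)
along (-1,0,0) grazes the torus surface at t = 2 and t = 6, and both values
are roots of the resulting quartic. - EAH]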

-------------------------------------------------------------------------------

More on Quartics, by Larry Spence
    From: larry@csccat.UUCP
    Newsgroups: comp.graphics
    Subject: Re: Solution to quartic eqn?

>I didn't ask the question, but thanks for your input.  However, Ferrari's
>theorem yields a fast and very accurate answer.
                  ^^^^
Are you sure about this?  If it's the same closed-form solution that you find
in the CRC books, etc., doesn't it use trig functions and cube roots?   Seems
to me there was a paper by Kajiya in the early '80s on numerical ray tracing,
and there have been several in the last few years.  My advice would be to go
look at SIGGRAPH proceedings from 1981 on.  Certainly, a closed form solution
like the one suggested wouldn't take advantage of any coherence in the problem,
unless you wrote all the trig stuff yourself.

[Comments, anyone?  I never saw any replies. - EAH]

-------------------------------------------------------------------------------

Question: Kay and Kajiya Slabs for Arbitrary Quadrics?
	by Thomas C. Palmer <palmer@mcnc.org>

Here's a question regarding slabs for arbitrary quadrics.  Kay & Kajiya '86
discusses computing slabs for polygons and implicit surfaces.  The method for
implicit surfaces uses Lagrange multipliers and gives an example using
spheres.  This is easy and works quite nicely for canonical (i.e. centered
and axis aligned) quadrics.  K&K handle object rotations and translations
during the slab computation.  What about quadrics that have been transformed
by some arbitrary matrix prior to input and look like this:

ax^2 + 2bxy + 2cxz + 2dxw + ey^2 + 2fyz + 2gyw + hz^2 + 2izw + jw^2 = 0 ?

The xy, xz, and yz terms prevent a simple solution via Lagrange multipliers.
Has anyone done this?  How do you handle quadric bounding planes?  Note that
K&K cheated for the tree branches in the tree models.  Each cylinder has
endpoint capping spheres so the cylinder's extent is just the combined extent
of the two spheres.

Thanks for your help -

-Tom

Thomas C. Palmer		North Carolina Supercomputing Center
Cray Research, Inc.		Phone: (919) 248-1117
PO Box 12732			Arpanet: palmer@ncsc.org
3021 Cornwallis Road
Research Triangle Park, NC
27709

-------------------------------------------------------------------------------

Ambient Term, by Pierre Poulin (poulin@dgp.toronto.edu)

I just read your Tracing tricks in the latest Ray Tracing News. Thanks for
passing that on to us, this was very interesting.

One trick you mentioned was to put a light source at the eye position in 
order to eliminate the ambient term. This is a simple trick I did not know.
However, you noted that highlights appear as artifacts.

Since you know that this light does not need any shadow rays, you could use
only the diffuse intensity created by this light to approximate the ambient
term, hence creating no undesirable highlights.

I know, this is very easy and everyone probably knows that. But just in
case you would not have thought about it, I wanted to indicate it to you. I
just hope I am not the 10,000th to tell you :^)

--------

My reply:

	I'm glad to hear that you enjoyed the "Tracing Tricks" article -
sometimes I worry that I'm just publishing ideas that everyone already knows.
I've tried having the ambient light have no highlight, and it's sort of
interesting, but the lack of highlight can look a little strange for those
objects where there really would be a highlight (it sort of makes them look
less shiny, though it also depends upon the other lights in the scene).
Nonetheless, turning it off is definitely worth exploring.  You're the first
person to comment on this, actually.  Thanks for taking the time to write, and
do keep in touch.

-------------------------------------------------------------------------------

Book Reviews on Hierarchical Data Structures of Hanan Samet,
	by A. T. Campbell, III (atc@cs.utexas.edu)


"The Design and Analysis of Spatial Data Structures", and 
"Applications of Spatial Data Structures", by Hanan Samet,
Addison-Wesley, 1990.

		Reviewed by A. T. Campbell, III

This two-volume series of books is one man's effort to provide a guide to the
study of hierarchical data structures.  The topic has extensive influence on
many fields, particularly computer graphics, image processing, and
computational geometry.  Hanan Samet is a well-established expert in the
field, with literally dozens of publications.  As a computer graphics
researcher, I eagerly anticipated the books' publication.  A close examination
of both volumes leads to one conclusion:  the books are extremely worthwhile.

The integration of diverse material is remarkable.  Comprehensive research
results throughout the spectrum of science are drawn together seamlessly.
Samet has really done a thorough job of pulling together literature from a
vast collection of conferences and journals, both major and minor.

The level of explanation is good.  Samet has obviously read all of his
references thoroughly.  The descriptions of algorithms reflect an
understanding of what is really going on.  Even algorithms mentioned briefly
are given a good essential description.

Numerous topics are covered.  Algorithms for such problems as
proximity-searching, efficient storage of image and volume data, constructive
solid geometry, data structure conversion, hidden surface removal, and
ray-tracing fill the books.  Pseudo-ALGOL code examples present detailed
explanations of how to build and traverse many of the data structures.
Ray-tracing enthusiasts in particular will enjoy a detailed description of how
to trace a path through an octree.

There are, however, a few problems with the presentation.  Despite the
ambitious titles of the volumes, there is nowhere near as much theory or
practical advice as one might expect.  The emphasis is instead on explaining
literally everything at an understandable level.  While this makes the books a
wonderful introduction to all sorts of stuff, the reader still needs guidance
in choosing what techniques to actually use.

The title of the first volume, "The Design and Analysis of Spatial Data
Structures", obviously invokes comparison with the classic text "The Design
and Analysis of Computer Algorithms", by Aho, Hopcroft, and Ullman.  However,
Samet's approach differs greatly from that of Aho et al.  While the data
structures are described and discussed in detail, the analysis is not very
formal.  Theorems and proofs, as well as detailed algorithm analysis, are not
much in evidence.  A more appropriate title might simply be "An Introduction
to Spatial Data Structures".

The second book, "Applications of Spatial Data Structures", covers basically
every research result in hierarchical algorithms, major or minor.  It is
exceptionally good at explaining techniques succinctly.  The depth is not
sufficient to implement the techniques without referring to the
original papers, however.  Additionally, the reader is given no good feel for
which results should actually be used.  If a technique is commonly used in
industry or never used because of impracticality, Samet never says so.  The
reader who expects a "cookbook" solution to his problem will be disappointed.

The books are primarily of use for two purposes.  First, they provide a good
introduction to those aspects of computational geometry and image processing
which are most likely to be of interest to a person working in graphics.
Second, they provide a very complete guidebook to the literature.  I would
suggest that researchers and practitioners have these volumes on their
reference shelves.  Due to the sheer volume of material presented, I would not
recommend them for use as course textbooks.

-------------------------------------------------------------------------------

Comparison of Kolb, Haines, and MTV Ray Tracers, Part I, by Eric Haines

	I decided to compare the MTV ray tracer, the Kolb "rayshade" software,
and my own modified "SBRR" ray tracing package to see the efficiency of each.
My goal was to see what sort of performance was obtained by each ray tracer
and note the strengths and weaknesses of each.  This first section of the
report marks the state of current results, consisting of timings from the
"gprof" command for each package, using the Standard Procedural Database (SPD)
package.  All three packages were run on an HP 350 workstation with a floating
point accelerator board.  The compiler options were "-g -G +ffpa" (debug,
profile, with special floating-point only compile), using HP-UX 6.5.

	The three ray tracers were selected from all the existing packages by
having the following properties:  (1) Each could handle all the primitives in
the SPD package, and (2) each had some automatic efficiency scheme.  Other
packages do not support all the primitives (e.g. DBW does not have cylinders,
cones, or n-sided polygons), or do not have automatic efficiency generation
(e.g.  QRT lets you explicitly create bounding boxes, but has no way for this
to happen automatically).

	The MTV ray tracer was created by Mark VandeWettering, and uses the
Kay/Kajiya hierarchy approach (i.e. sorting objects along X/Y/Z and splitting
each group, recursively).  To make it conform to the requirements of the SPD
tests, "minweight" was set to 0.0 in order to disable tree pruning a la Hall's
method.
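For readers unfamiliar with the Kay/Kajiya build, here is a minimal sketch of
the sort-and-split idea described above (an illustration with invented names,
not MTV's actual code): sort the object centroids along one axis, split the
group at the median, and recurse, cycling through X, Y, and Z:

```python
# Sketch of a Kay/Kajiya-style median-split hierarchy.  Objects are
# represented here just by their (x, y, z) centroids; a real ray tracer
# would carry bounding volumes along as well.

def build_hierarchy(objects, axis=0, leaf_size=2):
    """objects: list of (x, y, z) centroids.  Returns a nested tree:
    either a leaf (a small list of centroids) or a (left, right) pair."""
    if len(objects) <= leaf_size:
        return objects                      # leaf: small group of objects
    objects = sorted(objects, key=lambda c: c[axis])
    mid = len(objects) // 2                 # split the group at the median
    next_axis = (axis + 1) % 3              # cycle X -> Y -> Z -> X ...
    return (build_hierarchy(objects[:mid], next_axis, leaf_size),
            build_hierarchy(objects[mid:], next_axis, leaf_size))
```

As the discussion of "tetra" below suggests, using centroids alone works well
for small, regular objects but can build poor trees for scenes with large,
sprawling primitives.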

	Craig Kolb's "rayshade" ray tracer uses a 22x22x22 grid on all scenes.
Because of the use of grids (i.e.  3DDDA), it was found to be sensitive to the
background polygons used in the SPD package.  In four of the SPD scenes
(balls, gears, rings, and tree) there is a "ground" polygon.  The "rayshade"
software allows some user intervention in how the database is structured.  It
was found that faster timings (sometimes strikingly quicker) could be obtained
by leaving this background polygon out of the grid structure.  Changing the
database in this manner is forbidden by the SPD tests, but both sets of
results are presented because of the difference in timings.

	The SBRR package is not public domain, but rather is part of the
graphics software in all HP workstations using the Turbo-SRX graphics
accelerator.  It uses the Goldsmith and Salmon automatic bounding volume
hierarchy method.  It should also be noted that this package is full featured,
which has a corresponding slowdown effect when intersection computations are
performed.  For example, polygons can be single or double sided, with
different materials, colors per vertex, normals per vertex, and other
combinations available.  Since the package has a "hardware assist" by using
the graphics engine as an item buffer (see Weghorst and Hooper), timings are
given both with and without this assist.  The times without are the fairer of
the two for comparison.

Without further ado, here are the timings:

MTV	   Basic
-----      -----
balls	   12604
gears	   38123
mount	    9307
rings	   24286
tetra	    1081
tree	    8056


Kolb	   Basic	Modified
-----      -----        --------
balls	   14871	    3224
gears	   12601	   11449
mount	    2989	    2989 (i.e. same - no modification needed)
rings	    8348	    8103
tetra	     836	     836 (i.e. same - no modification needed)
tree	   18957	    2505


SBRR	   Basic	Item Bfr
-----      -----        --------
balls	    5027	    4126
gears	   13561	   12776
mount	    5440	    3749
rings	   11044	   10446
tetra	    1187	     457
tree	    3229	    2719


So, considering the MTV ray tracer as 1.00, here are the relative performance
times of each tracer - (MTV time / RT time) - i.e. higher is faster, and can
be thought of as how many times faster it is:

SPD	MTV-Base	Kolb-Bas	Kolb-Mod	SBRR-Bas	SBRR-Bfr
-----   --------	--------	--------	--------        --------
balls	  1.00		  0.85		  3.91		  1.40		  3.05
gears	  1.00		  3.02		  3.32		  2.81		  3.33
mount	  1.00		  3.11		<--same		  1.71		  2.48
rings	  1.00		  2.91		  3.00		  2.20		  2.32
tetra	  1.00		  1.29		<--same		  0.91		  2.37
tree	  1.00		  0.42		  3.22		  2.50		  2.96

Some interesting phenomena are revealed by the statistics.  The "tetra"
database is pretty much the same absolute speed for everyone.  However, given
the performance for other scenes, it is noteworthy that MTV performs
relatively faster on this than others.  I've noticed this, too, when trying
Kay/Kajiya myself - this scheme just soars on tetra, though I am not sure why.
Perhaps it is the smallness and regularity of the objects, which would go well
with Kay/Kajiya's assumption that using the centroid of these is reasonable.
For other databases one can imagine better hierarchies than those constructed
by Kay/Kajiya.  For example, with mount, the four spheres above the fractal
mountain should be in their own cluster just off the top of the hierarchy
tree.

The "tetra" scene is a strange test in that most (around 81%) of the scene is
background, so what is being measured is dominated by how fast one can
traverse a scene, set up rays, shade, and store values in a file.  It will take
further analysis to see where the time is spent.

The Kolb ray tracer is interesting in how much its efficiency scheme is
affected by the geometry of the scene.  The "teapot in a football stadium"
effect I've written about previously hits grid efficiency schemes with a
vengeance.  For example, moving the ground polygon from the grid subdivision
to outside of it makes rayshade perform 4.6 times faster for balls, and 7.7
times faster for tree!  The point is that grids perform best when the scene is
relatively "compact".  The large ground polygons in these scenes cause the
entire grid to grow in two directions, so that many more objects fall inside
only a few grid cells, ruining much of the efficiency of the grid.

Comparing Kolb to MTV, we see that overall Kolb is faster.  Kolb does worse on
balls and tree using the unmodified database, but otherwise outperforms MTV,
being about twice as fast.  When the ground polygon is taken out of the grid
subdivision, Kolb is more than 3 times faster for all cases except tetra.

Comparing SBRR and MTV shows SBRR to be faster for most cases, with MTV being
slightly faster for tetra.  Overall SBRR is almost twice as fast with its
basic performance, and about 2.75 times faster when the item buffer is used.

Comparing SBRR and Kolb is a bit tricky, since there are two tests of each.
Taking the basic tests in each, Kolb and SBRR are comparable: Kolb outperforms
SBRR in four cases, and SBRR outperforms Kolb in two (and for one of those,
tree, it is almost 6 times faster).  SBRR has some things to learn from Kolb
(which is why I'm doing all this, anyway), as Kolb's modified database results
point towards the fact that faster performance is possible.

So, on the basis of pure raw timings, Kolb and SBRR without modifications or
accelerators are of comparable speed.  With user intervention into the
database structure, Kolb can become noticeably faster.  It should also be
noted that Kolb uses a default of 22x22x22 grid cells, which is under user
control and so could be tuned to further improve performance.

Actually, this is an interesting open question:  what heuristics can be used
to determine a reasonable grid size for a scene?  Also, is there a reasonable
way to automatically detect when performance destroyers such as ground
polygons are present, and so remove them from the grid subdivision itself?
David Jevans' and Brian Wyvill's work on nested grid subdivisions ("Adaptive
Voxel Subdivision for Ray Tracing", Proceedings of Graphics Interface '89, p.
164-172) might lead to a less variable performance and to greater overall
speed.  Craig and I have discussed this, but unfortunately he has no time to
try this out - perhaps someone out there can experiment and compare
results.
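As a starting point for the grid-size question, one rule of thumb (a sketch
under assumptions of my own - as far as I know rayshade's 22x22x22 default is
not derived this way) is to tie the total cell count to the object count, so
the per-axis resolution grows as the cube root of the number of objects:

```python
# Heuristic grid sizing: make the total number of cells proportional to
# the number of objects, so each cell holds a roughly constant number of
# objects on average.  'density' and 'max_res' are invented knobs.

def grid_resolution(num_objects, density=8, max_res=64):
    """Return cells-per-axis for a uniform grid over num_objects objects.
    A higher 'density' trades memory for fewer objects per cell."""
    res = round((density * num_objects) ** (1.0 / 3.0))
    return max(1, min(res, max_res))        # clamp to a sane range
```

For example, 1000 objects with density 8 gives a 20x20x20 grid.  Of course,
no such formula detects the ground-polygon problem above; that needs a look
at the actual distribution of object sizes.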

As mentioned, this is research in progress.  My next step will be to analyze
the statistics generated by each program and see where time is spent.

-------------------------------------------------------------------------------

Raytracer Performance of MTV, by Steve Lamont

>   Could you pass on your timings on ray tracer performance on various
> machines and any thoughts or experiences you want to share about the subject?

The parallelization was done in a brute force manner, forking processes and
dividing the work by the number of processes.  The parent process remains
behind and reads the scanlines in a round robin fashion from pipes.  There is
no communication from the parent to the child processes once the forking has
been done; the ray tracing routines simply march down the scan lines.  This
approach works well on a homogeneous architecture where all processes run at
approximately the same speed and no process "dries up" or runs out of work to
do.

This works well for single frames.  However, my approach for a large animation
is to simply parcel out work on a frame per processor basis.
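The fork-and-pipe scheme described above might look like the following sketch
(invented names, a toy scanline function standing in for the real tracer, and
POSIX fork/pipe assumed - this is not Steve's actual code):

```python
import os

def trace_scanline(y, width):
    # Stand-in for the real ray tracer: returns one row of pixel values.
    return bytes((y + x) % 256 for x in range(width))

def render_parallel(height, width, nprocs):
    """Fork nprocs children; child i renders rows i, i+nprocs, ... and
    writes them down its pipe.  The parent reads the pipes round robin,
    with no parent-to-child communication after the forks."""
    pipes = []
    for i in range(nprocs):
        r, w = os.pipe()
        pid = os.fork()
        if pid == 0:                       # child: render my rows, exit
            os.close(r)
            for y in range(i, height, nprocs):
                os.write(w, trace_scanline(y, width))
            os.close(w)
            os._exit(0)
        os.close(w)                        # parent keeps only the read end
        pipes.append(os.fdopen(r, "rb"))
    image = []
    for y in range(height):                # round robin over the children
        image.append(pipes[y % nprocs].read(width))
    for p in pipes:
        p.close()
    for _ in range(nprocs):                # reap the children
        os.wait()
    return image
```

As noted above, this static division only works well when all processors run
at about the same speed; a slow child stalls the parent's round-robin reads.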

I've built the MTV raytracer on the Cray, the IRIS, and the Ardent Titan...
and here are some preliminary results on a 128 x 128 test image (the balls
image with reflections but no refractions, 3 light sources, 11 primitives (16
with bounding volumes)):

			processes
		      (CPU seconds)
	Machine		1	4	Notes

	Y-MP/8-432	4.0	1.0	-hopt,intrinsics,vector
	IRIS (4D/240)*  8.2	2.1	-O2 (MIPS R3000/3010)
	Titan (P2)     30.0     7.7	-O3 (MIPS R2000 + prop. vector/fp unit)
	Titan (P3)	7.2	---	Run by vendor. (MIPS R3000/3010 +
					proprietary vector unit)

Wall clock times improved by a factor of 2.5 to 3, which squares pretty well
with Amdahl's law as extended for small parallel architectures.

These are *preliminary* results with respect to the Titan -- we've only had it
for a couple of weeks.  On none of the machines did MTV vectorize in any way
to speak of.  In fact, turning off vectorization improves performance for
several "short vector" loops; e.g., loops of vector length 3.

Timings were done on a fairly heavily loaded Cray and an empty IRIS and Titan.

The Cray is a Y-MP with four processors (upgradable to 8, hence the 8-4) and
32 mWords of central memory.  There is also a 128 mWord SSD (Solid-state
storage device).  We also have 40 gBytes of rotating storage (a combination of
DS-40s and DD-49s).

[*] Actually, this machine is a CDC Cyber 910-624 but the only difference
between it and a "genuine" Silicon Graphics IRIS 4D/240 is the color of its
box and the binding on the manuals.

[Disclaimer:  These comments are solely the responsibility of the author and
no endorsement by the North Carolina Supercomputing Center should be inferred]

Steve Lamont, sciViGuy			EMail:	spl@ncsc.org
NCSC, Box 12732, Research Triangle Park, NC 27709

-------------------------------------------------------------------------------

BRL-CAD Ray Tracer Timings, by Gavin Bell (gavin@krypton.sgi.com)

These results are third-hand; I can vouch for the accuracy of our machine's
results, but the BRL people may have more recent results to share.  The only
experience I have with ray-tracing timing on our machines is with a simple,
interactive ( :-> ) ray-tracer demo called 'flyray'.  I modified it to be
fully parallelized; it uses one CPU to display a real-time display of the
scene being ray-traced (complete with rays shooting into and bouncing around
the scene), and the other N-1 CPUs to compute ray-object intersections (these
results are shown in a separate window).  As for timing... runs REAL fast on
an 8 CPU system.

What follows is a form letter I've been sending to people who asked for
ray-tracing timings.

------------------------

This is a response to all of those people who asked me for the BRL-CAD
ray-tracing benchmark results.  I'm surprised at how many of you there are!

First, a little bit about myself.  I work in Technical Marketing, the 'Demos
and Benchmarks' group here at Silicon Graphics.  I'm not usually involved in
benchmarks; I work mainly on our demos.

The rest of this text comes directly from a 'SGI Benchmark Summary' prepared
by one of our Marketing people.  The numbers are communicated to him by the
software engineer who did the benchmark.  These benchmark summaries are
communicated to salespeople in a weekly newsletter as soon as the results come
in.  Other summaries done include:

'INDUSTRY STANDARD':
Dhrystones, Digital Review Labs, Linpack, Livermore Fortran Kernels, MUSBUS,
Whetstones.

OTHERS:
LES50 (computational fluid dynamics), Moldflow (finite element analysis),
Molecular Dynamics, Nord, UFLA, GROMOS (all computational chemistry).

If you want more information on any of these benchmarks, please see a SGI
sales rep-- I can't keep typing in all of these numbers!!  Also, please
remember that these benchmarks were done for a specific customer, who was
interested in a specific machine, so most of them were not benchmarked on our
whole product line (the 'Industry Standard' benchmarks, however, are usually
run on all of our products).

APPLICATION  BENCHMARK NAME  CUSTOMER
-----------  --------------  --------
Rendering    BRL-CAD 3.7     US Army Ballistic Research Lab

LANGUAGE     SUMMARY DATE
--------     ------------
C            9/5/89

DESCRIPTION
-----------
The BRL-CAD benchmark is a part of the US Army Ballistic Research Laboratory's
BRL-CAD package.  The core of the BRL-CAD benchmark is a ray-tracing program
which consists of about 150,000 lines of C code.  Computations are performed
in double precision floating point.

Five separate data bases are input to the ray tracing program, resulting in
six performance ratings (one for each, plus a total which is the mean of the
other five).  When rendered, each data base will produce a 512x512x24 bit
color shaded image.  The images are of increasing complexity, and are
identified as 'Moss', 'World', 'Star', 'Bldg' and 'M35'.

RESULTS
-------
The result of this benchmark is reported as rays traced per second, and
referred to as Ray Tracing Figure of Merit (RTFM).  Higher numbers indicate
better performance.

The code was parallelized by inserting user directives to create multiple
processes to trace rays.

RESULTS FOR SGI MACHINES:
    Note:  The actual report has nice graphs here.
Machine        RTFM
-------------------
1x16 (4D/80)    714
2x16 (4D/120)  1358
4x25 (4D/240)  5034
8x25 (4D/280)  7434
    Note:  NxMM numbers refer to the number of processors in the
	machine (N) running at MM MHZ.

COMPETITIVE RESULTS:
Machine   # Processors  RTFM  Relative Performance
--------------------------------------------------
Vax 780        1          77     1.0
Sun3           1          88     1.1
Convex C120    1         163     3.6
Sun4           1         435     5.6
SGI 4D/120S    2        1358    17.5
Alliant FX/80  8        2783    33.6
SGI 4D/240S    4        4456    70.4
Cray X-MP/48   4        7217   116.1
SGI 4D/280     8        7434   119.7

ANALYSIS
--------
The BRL-CAD benchmark exhibits excellent speedup as processors are added.
This is due to the coarse granularity inherent in the ray tracing problem
being solved.  Each ray is processed independently, with no data dependencies
among the rays.  This means that multiple processors can each work on separate
rays simultaneously, with minimal need for synchronization among processors.

While the code is highly parallelizable, it is not efficiently vectorizable
because of short vector lengths.  The combination of these two
characteristics explains the phenomenal performance of Silicon Graphics
machines relative to vector machines like Cray and Alliant.

The characteristics of this benchmark that lead to high performance by the
Silicon Graphics machines are common to all ray tracing applications.

--------

Here is another note from Gavin:

My only experience with the BRL ray-tracer came when I was at Princeton
University - I installed it on Silicon Graphics machines there for the
Graphics Lab.  That was two years ago; as far as I could tell, it didn't use
octrees or any other space-partitioning algorithm.  I used a ray-tracer
written at Princeton (the precursor of what is now Craig Kolb's rayshade
program; Craig and I had the same thesis advisor) which did do octrees; it was
infinitely faster than the BRL beast.

-------------------------------------------------------------------------------

BRL-CAD Benchmarking and Parallelism, by Mike Muuss (mike@BRL.MIL)

I'm sort of on vacation right now, so I'm going to cop out and just send you
the TROFF input for several things that I have handy about how we benchmark
ray-tracing in the BRL-CAD Package.

The first one is our benchmark summary paper.

The second one is a portion of a paper that I wrote called ``Workstations,
Networking, Distributed Graphics, and Parallel Processing''.

You may publish and/or redistribute both documents as you wish.  Note that the
United States Government holds the "copyright", ie, it is not permissible to
copyright this material.

[These papers are rather lengthy, so I won't include them in this issue.
If you would like copies, look at the archive sites for Muuss.benchmrk and
Muuss.parallel, or write me. - EAH]


======== USENET cullings follow ===============================================

Rayshade Patches Available, by Craig Kolb
	From: craig@weedeater.math.yale.edu
	Newsgroups: comp.graphics
	Organization: Math Department, Yale University

Patches 1-3 for rayshade v3.0 are available via anonymous ftp from
weedeater.math.yale.edu (new address:  130.132.23.17) in directory
pub/rayshade.3.0/patches.  The patches fix several minor bugs, clean up the
documentation, and provide new features.

Several people have expressed an interest in 'trading' rayshade input files.
If you have an interesting input file that you'd like to share, feel free to
deposit it in the "incoming" directory on weedeater or send it to me via
email.  I will make these files available in the pub/Rayinput directory on
weedeater.

Rayshade is supposedly "on the verge" of appearing in comp.sources.unix,
patches and all.

-------------------------------------------------------------------------------

Research and Timings from IRISA, by Didier Badouel
	From: badouel@irisa.irisa.fr
	Newsgroups: comp.graphics
	Organization: IRISA, Rennes (Fr)

We have a parallel raytracer (called PRay) at IRISA which, like MTV, uses NFF
description databases.  This raytracer has been implemented on an iPSC/2, on
a SEQUENT BALANCE, and also on serial computers (SUN3, GOULD NP1) to allow
better comparisons.

Here are the various synthesis times for the well-known 'Teapot' database.
The image was rendered at 512x512 resolution with 3 light sources.  Results
are as follows:

                        #PEs    Time (in sec.)
        ________________________________________
        SUN3:                   8877 (~ 2h27mn)
        ________________________________________
        GOULD NP1:              1642 (~ 27mn)
        ________________________________________
        SEQUENT BALANCE 1       37121 (~ 10h18mn)
                        2       18567
                        3       12381
                        4       9285
                        5       7431
                        6       6197
                        7       5311
                        8       4656
                        9       4138 (~ 1h9mn)
        ________________________________________
        iPSC/2          1       6294 (~ 1h45mn)
                        2       3332
                        4       1700
                        8       860
                        16      440
                        32      224
                        64      119 (~ 2mn)
        ________________________________________

The code running on the iPSC/2 emulates a virtual shared memory over the
local PEs.  The database is not duplicated, but all the local memories are
used.  The memory remaining after loading code and data is used as a cache
to speed up slow global accesses.

-----

        In order to benchmark our parallel raytracer running on an iPSC/2,
which uses Eric Haines' NFF file format as input, we would like to know
if other people have a parallel raytracer using these databases, in order
to make some comparisons.

        One of our goals is to render the largest possible
database.  For the moment, we have rendered the 'tetra10'
database:
        - the database contains more than 1 million polygons (1048576
          polygons)
        - the size of the database with its 'object access structure' (a
          grid) is 140 MB.
        - the synthesis time is 526 seconds on the iPSC/2 with 64 nodes
          and 4 MB of node memory.
        - however, because NFF is a text file format, reading the input
          is very slow (more than 3 hours for 'tetra10') when using
          YACC and LEX.  Furthermore, our iPSC/2 configuration does not
          have an I/O node system.

________________________________________________________________
  Didier  BADOUEL                       badouel@irisa.fr
  INRIA / IRISA                         Phone : +33 99 36 20 00
 Campus Universitaire de Beaulieu       Fax :   99 38 38 32
 35042 RENNES CEDEX - FRANCE            Telex : UNIRISA 950 473F
________________________________________________________________

-------------------------------------------------------------------------------

Concerning Smart Pixels, by John S. Watson
	From: watson@ames.arc.nasa.gov (John S. Watson)
	Newsgroups: comp.graphics
	Organization: NASA - Ames Research Center

In article <207400033@s.cs.uiuc.edu> mccaugh@s.cs.uiuc.edu writes:
>
> Does anyone know of a system with "smart" pixels? 

Once upon a time I wrote a ray tracer in which the pixels used heuristics to
determine their sampling rate.  Since the reason for doing it was to speed
things up, the heuristic had to be simpler than casting a ray (or rays, if
sub-sampling).  I used the difference between previous pixel values, with a
little randomness tossed in.  So pixels changing quickly were sampled every
frame, while pixels that were hardly changing were sampled only once every 10
frames.  The results:  much faster, but with a graininess on the edges of
moving objects.  I needed to make the pixels more aware of what was happening
with their neighbors.  Never got around to doing that.
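The per-pixel decision John describes might be sketched like this (a hedged
illustration with invented names and thresholds, not the original code):

```python
import random

def should_resample(prev, prev2, frames_since_sample,
                    threshold=8, max_age=10, jitter=0.1, rng=random.random):
    """Decide whether a pixel should be re-traced this frame, in the
    spirit of the heuristic above: 'prev' and 'prev2' are the pixel's
    last two sampled values.  Quickly changing pixels are re-sampled
    every frame, static ones at most every max_age frames, with a
    little randomness mixed in."""
    if abs(prev - prev2) > threshold:      # changing fast: sample now
        return True
    if frames_since_sample >= max_age:     # never wait longer than this
        return True
    return rng() < jitter                  # occasional random refresh
```

The randomness keeps slowly changing regions from all refreshing on the same
frame; per the note above, the heuristic still knows nothing about a pixel's
neighbors, hence the graininess at moving edges.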

Another problem is that big pictures have lots of pixels ... 512x512 = 0.25
million.  To be smart you must have a memory.

To save memory, I combined the above with an Area-of-Interest/Variable Acuity
(AOIVA) Ray Tracer.

Hope this helps, 
John S. Watson, Civil Servant from Hell        ARPA: watson@ames.arc.nasa.gov 
NASA Ames Research Center                      UUCP:  ...!ames!watson
Any opinions expressed herein are, like, solely the responsibility of the, like,
author and do not, like, represent the opinions of NASA or the U.S. Government.

-------------------------------------------------------------------------------

Input Files for DBW Render, by Tad Guy
	From: tadguy@cs.odu.edu (Tad Guy)
	Newsgroups: comp.graphics
	Organization: Old Dominion University, Norfolk, VA

In article <6475@pt.cs.cmu.edu> te07@edrc.cmu.edu (Thomas Epperly) writes:
   I was wondering if anyone had any neat input files for DBW_Render available
   for anonymous ftp.

They're at xanth.cs.odu.edu as /amiga/dbw.zoo.  If you have a network of,
say, Sun workstations, you might as well get /amiga/distpro.zoo, which will
allow you to distribute the computations over many machines.

-------------------------------------------------------------------------------

Intersection with Rotated Cubic Curve Reference, by Richard Bartels
	From: rhbartels@watcgl.waterloo.edu
	Newsgroups: comp.graphics
	Organization: U. of Waterloo, Ontario

In article <1445@tukki.jyu.fi> toivanen@tukki.jyu.fi (Jari Toivanen) writes:
 :
 :I would like to know is there any simple and effective solution to
 :following problem:
 :
 : [intersecting a ray with a rotated cubic curve]
 :
 :Jari Toivanen                           Segments are for worms ;-)
 :University of Jyvaskyla, Finland        Internet: toivanen@tukki.jyu.fi

Look at the article:

        Ray Tracing Objects Defined By Sweeping Planar Cubic Splines
        Jarke J. van Wijk
        ACM Transactions on Graphics
        Vol. 3, No. 3, July, 1984, pp. 223-237

I believe that the author subsequently wrote a whole book on the subject.

[Incidentally, this article also has a method for quickly intersecting a tight
fitting bounding volume around such curves.  I've found this test useful for
use as a torus bounding volume.  Also, does anyone know of the existence and
the name of the book Richard mentions?  - EAH]

-------------------------------------------------------------------------------

Needed: Quartz surface characteristics, by Mike Macgirvin
	From: mike@relgyro.stanford.edu
	Newsgroups: comp.graphics
	Organization: Stanford Relativity Gyro Experiment (GP-B)

I am in need of the surface characteristics for fused quartz.  Ambient,
diffuse and specular color characteristics, Phong coefficient, reflectance,
and transparency.  I have the index of refraction (Well, I have to average it,
c'est la vie).

I have attempted to derive these experimentally, but find the resulting traced
image lacking in many ways, and a simulation visualization I am working on
requires accuracy.

I am using Kolb's 'rayshade' if it affects your responses.

Please respond via e-mail if possible.

-------------------------------------------------------------------------------

Solution to Smallest Sphere Enclosing a Set of Points, by Tom Shermer
	From: shermer@cs.sfu.ca
	Newsgroups: comp.graphics
	Organization: School of Computing Science, SFU, Burnaby, B.C. Canada

>I need the solution for the following problem:
>find the smallest sphere that encloses a set of given points, in both
>2-D and 3-D (or even n-D, if you like).
>

This problem can be solved in linear time (in any fixed dimension)
by the technique of prune-and-search (sometimes called ``Megiddo's
Technique''), either directly or by first converting the problem
to a linear program.  The most relevant reference (for 2d & 3d) is:

Linear-time Algorithms for Linear Programming in R^3 and Related Problems,
        Nimrod Megiddo, SIAM J. Comput, v. 12, No. 4, Nov 1983, pp. 759-776.


Other related references:

Linear time algorithms for two- and three- variable linear programs,
        M.E. Dyer, SIAM J. Comput, v. 13, 1984, 31-45.

On a multidimensional search technique and its application to the
Euclidean one-center problem,
        M. E. Dyer, Dept. Math and Stats TR, Teesside Polytechnic,
        Middlesbrough, Cleveland TS1 3BA, UK, 1984.

Linear programming in linear time when the dimension is fixed,
        N. Megiddo, JACM 31, 1984, 114-127

The weighted Euclidean 1-center problem,
        N. Megiddo, Mathematics of Operations Research 8, 1983, 498-504

On the Ball Spanned by Balls
        N. Megiddo, manuscript (this may have appeared in the literature
        by now)
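
[Megiddo's deterministic prune-and-search is intricate to implement.  As a
practical aside - my own substitution, not anything from the references
above - a Welzl-style randomized incremental construction also runs in
expected linear time and fits in a page.  A 2-D sketch:

```python
import math
import random

def _circumcircle(points):
    # Exact circle through 0, 1, 2, or 3 boundary points (helper).
    if not points:
        return ((0.0, 0.0), 0.0)
    if len(points) == 1:
        return (points[0], 0.0)
    if len(points) == 2:
        (ax, ay), (bx, by) = points
        return (((ax + bx) / 2.0, (ay + by) / 2.0),
                math.hypot(ax - bx, ay - by) / 2.0)
    (ax, ay), (bx, by), (cx, cy) = points   # circumcircle of a triangle
    d = 2.0 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    ux = ((ax*ax + ay*ay) * (by - cy) + (bx*bx + by*by) * (cy - ay)
          + (cx*cx + cy*cy) * (ay - by)) / d
    uy = ((ax*ax + ay*ay) * (cx - bx) + (bx*bx + by*by) * (ax - cx)
          + (cx*cx + cy*cy) * (bx - ax)) / d
    return ((ux, uy), math.hypot(ux - ax, uy - ay))

def _inside(circle, p, eps=1e-9):
    (cx, cy), r = circle
    return math.hypot(p[0] - cx, p[1] - cy) <= r + eps

def _welzl(pts, boundary):
    # Minimum circle over pts with 'boundary' points forced onto the rim.
    if not pts or len(boundary) == 3:
        return _circumcircle(boundary)
    c = _welzl(pts[1:], boundary)
    if _inside(c, pts[0]):
        return c
    return _welzl(pts[1:], boundary + [pts[0]])

def smallest_enclosing_circle(points):
    """Return ((x, y), radius) of the minimum circle enclosing points."""
    pts = list(points)
    random.shuffle(pts)          # random order gives expected O(n) time
    return _welzl(pts, [])
```

The same boundary-point recursion extends to spheres in 3-D with up to four
boundary points. - EAH]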

-------------------------------------------------------------------------------

True Integration of Linear/Area Lights, by Kevin Picott
	From: kpicott@alias.UUCP
	Newsgroups: comp.graphics
	Organization: Alias Research Inc.

Has anyone seen any work done on evaluating the diffuse and specular
illumination produced by linear and/or area lights?  I have checked all the
standard sources and all information I find gets to the point where the
integration is set up and then a little hand waving is performed accompanied
by the magical words "numerically integrated".  This works but is too slow
for my purposes.  Does anyone know of any work done in different directions
(ie faster evaluation) ?

--------

Thanks to all who replied to my query about linear and area lights.

In the area of linear lights, two papers on analytical solutions were found.
The first, by John Amanatides and Pierre Poulin has been submitted to
Eurographics '90 and I'll hopefully get a look at that soon.

The second, "Shading Models for Point and Linear Sources", ACM TOGS, 4(2),
April 1985, pp. 124-146.  by T. Nishita, I. Okamura, E. Nakamae, proposes
an analytic solution to the diffuse component, but only under certain
circumstances.

The latter unfortunately reduces to numerical integration in the majority of
cases where spline surfaces are involved, although a method of optimization is
given that reduces computation time for the numerical integration.  This
method would seem to be suited to lighting parallel and perpendicular to the
illuminated surfaces.

There was also a paper entitled "A Comprehensive Light-Source Description for
Computer Graphics", IEEE CG&A, July 1984, by Channing P. Verbeck and Donald
P. Greenberg that approximates both linear and area light sources as a series
of point sources.  This is a compromise to numerical integration, but is still
computationally expensive.
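The point-source compromise can be sketched as follows for the diffuse term
(a hedged illustration with invented names, not the Verbeck/Greenberg code):
sample the light segment at N points and sum Lambertian contributions with
inverse-square falloff.

```python
import math

def diffuse_from_linear_light(p, n, a, b, intensity=1.0, samples=16):
    """Approximate the diffuse contribution at surface point p (with unit
    normal n) from a linear light spanning segment a-b, by summing point-
    source samples placed at the midpoint of each slice of the segment."""
    total = 0.0
    for i in range(samples):
        t = (i + 0.5) / samples                 # midpoint of slice i
        lx = a[0] + t * (b[0] - a[0])
        ly = a[1] + t * (b[1] - a[1])
        lz = a[2] + t * (b[2] - a[2])
        dx, dy, dz = lx - p[0], ly - p[1], lz - p[2]
        d2 = dx*dx + dy*dy + dz*dz
        d = math.sqrt(d2)
        cos_theta = (n[0]*dx + n[1]*dy + n[2]*dz) / d
        if cos_theta > 0.0:                     # light above the surface
            total += cos_theta / d2             # Lambert with 1/d^2 falloff
    return intensity * total / samples
```

This shows why the approach is expensive: every shading calculation costs
'samples' point-light evaluations (each needing its own shadow test in a
real renderer), which is exactly numerical integration in disguise.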

In summary, the analytical solution for linear sources exists and is
calculable, at least for the diffuse component.  The specular component
exists, but direct calculation is almost as expensive as numerical
integration.

As far as area light sources are concerned... no analytical solutions were
found.  In fact, from the work examined I was left with the impression that
even if the solution existed it would not be very useful from a light
illumination point of view (ie non-radiosity).  (Comments?)

--
 Kevin Picott   aka   Socrates   aka   kpicott%alias@csri.toronto.edu
 Alias Research Inc.  R+D     Toronto, Ontario... like, downtown

-------------------------------------------------------------------------------
END OF RTNEWS


From m-cohen@cs.utah.edu Wed Jan 17 16:36:28 1990
Return-Path: <m-cohen@cs.utah.edu>
Received: from cs.utah.edu by turing.cs.rpi.edu (4.0/1.2-RPI-CS-Dept)
	id AA25578; Wed, 17 Jan 90 16:36:22 EST
Received: by cs.utah.edu (5.61/utah-2.4-cs)
	id AA05840; Wed, 17 Jan 90 14:16:24 -0700
Date: Wed, 17 Jan 90 14:16:24 -0700
From: m-cohen@cs.utah.edu (Michael F Cohen)
Message-Id: <9001172116.AA05840@cs.utah.edu>
To: FISHER@3D.dec@decwrl.dec.com, arvo@apollo.com, atc@cs.utexas.edu,
        barr@csvax.caltech.edu, barsky@miro.berkeley.edu,
        bcorrie@uvicctr.uvic.ca, chapman@fornax, ckchee@dgp.toronto.edu,
        daniel@apollo.com, dk@csvax.caltech.edu,
        ecn.purdue.edu!cychosz@cs.utah.edu, esl0422@isc.rit.edu,
        glassner.pa@xerox.com, grant@delvalle.llnl.gov,
        gray@rhea.CRAY.COM@uc.msc.umn.edu, green@compsci.bristol.ac.uk,
        hanrahan@princeton.edu, hench@csclea.ncsu.edu,
        hohmeyer@miro.berkeley.edu, hpfcjo.hp.com!raylist@cs.utah.edu,
        hpfcla!hpfcse!hpurvmc!koz@cs.utah.edu, hplabs!dana!mrk@cs.utah.edu,
        hultquis@prandtl.nas.nasa.gov, image.trc.amoco.com!zmel02@cs.utah.edu,
        jakob@humus.huji.ac.il, jeff@hamlet.caltech.edu, johnf@apollo.com,
        joy@ucdavis.edu, kolb@yale.edu, kyriazis@turing.cs.rpi.edu,
        larry.mcrcim.mcgill.edu!vedge!kaveh@cs.utah.edu, lister@dg-rtp.dg.com,
        litwinow@apple.com, m-cohen@cs.utah.edu,
        mcvax.cwi.nl!dutio!fwj@cs.utah.edu,
        mcvax.cwi.nl!dutrun!wim@cs.utah.edu, mja@sierra.llnl.gov,
        mplevine@phoenix.princeton.edu,
        mssun1.msi.cornell.edu!carl@cs.utah.edu, paul@sgi.com,
        ph@miro.berkeley.edu, quad.com!avatar!kory@cs.utah.edu,
        raycasting@duke.cs.duke.edu, raytrace@cpsc.ucalgary.ca,
        rgb@caen.engin.umich.edu, squid.graphics.cornell.edu!jaf@cs.utah.edu,
        tcgould.tn.cornell.edu!lytle@cs.utah.edu, tim@csvax.caltech.edu,
        ucsd.ucsd.edu!megatek!kuchkuda@cs.utah.edu,
        wisdom.graphics.cornell.edu!ray-tracing-news@cs.utah.edu
Subject: Ray Tracing News
Status: R

 _ __                 ______                         _ __
' )  )                  /                           ' )  )
 /--' __.  __  ,     --/ __  __.  _. o ____  _,      /  / _  , , , _
/  \_(_/|_/ (_/_    (_/ / (_(_/|_(__<_/ / <_(_)_    /  (_</_(_(_/_/_)_
             /                               /|
            '                               |/

			"Light Makes Right"

			January 2, 1990
		        Volume 3, Number 1

Compiled by Eric Haines, 3D/Eye Inc, 2359 Triphammer Rd, Ithaca, NY 14850
    NOTE ADDRESS CHANGE: wrath.cs.cornell.edu!eye!erich
    [distributed by Michael Cohen <m-cohen@cs.utah.edu>, but send
    contributions and subscriptions requests to Eric Haines]
All contents are US copyright (c) 1989,1990 by the individual authors
Archive locations: anonymous FTP at cs.uoregon.edu (128.223.4.1) and at
		   freedom.graphics.cornell.edu (128.84.247.85), /pub/RTNews
Other sites: uunet.uu.net:/graphics

Contents:
    Introduction
    New People
    Archive Site for Ray Tracing News, by Kory Hamzeh
    Ks + T > 1, by Craig Kolb and Eric Haines
    Quartic Roots, and "Intro to RT" Errata, by Larry Gritz and Eric Haines
    More on Quartics, by Larry Spence
    Question: Kay and Kajiya Slabs for Arbitrary Quadrics?
	by Thomas C. Palmer
    Ambient Term, by Pierre Poulin
    Book Reviews on Hierarchical Data Structures of Hanan Samet,
	by A. T. Campbell, III
    Comparison of Kolb, Haines, and MTV Ray Tracers, Part I, by Eric Haines
    Raytracer Performance of MTV, by Steve Lamont
    BRL-CAD Ray Tracer Timings, by Gavin Bell
    BRL-CAD Benchmarking and Parallelism, by Mike Muuss
    ======== USENET cullings follow ========
    Rayshade Patches Available, by Craig Kolb
    Research and Timings from IRISA, by Didier Badouel
    Concerning Smart Pixels, by John S. Watson
    Input Files for DBW Render, by Tad Guy
    Intersection with Rotated Cubic Curve Reference, by Richard Bartels
    Needed: Quartz surface characteristics, by Mike Macgirvin
    Solution to Smallest Sphere Enclosing a Set of Points, by Tom Shermer
    True Integration of Linear/Area Lights, by Kevin Picott

-------------------------------------------------------------------------------

Introduction

	This issue's focus is on timings from various people using a wide
selection of hardware and software.  Much of the delay in getting out this
issue was due to my desire to finish up timing tests of MTV's, Kolb's, and my
own ray tracers on the same machine.  I hope some of you will find this
information of use.

	Another feature of this issue is a pair of book reviews.  One of the
purposes of the RT News is to provide people with sources of information
relevant to ray tracing.  So, I would like to see more reviews, or even just
brief descriptions of articles and books that you come across.  Keeping up to
date in this field is going to take more time as the years go by, so please do
pass on any good finds you may have.  Also, if you're an author, please feel
free to send a copy of the abstract here for publication.  This service is
already provided to a certain extent by SIGGRAPH for thesis work.  However,
even a duplication of their efforts is worthwhile, since an electronic version
is much easier to search and manipulate.

	Finally,

-------------------------------------------------------------------------------

New People

# Kory Hamzeh
# 6640 Nevada Ave.
# Canoga Park, Ca 91303
# Email: UUCP:     avatar!kory or ..!uunet!psivax!quad1!avatar!kory
# INTERNET: avatar!kory@quad.com
alias	kory_hamzeh	quad.com!avatar!kory

I'm not professionally involved in ray tracing.  Just personally fascinated by
it.  I have written a couple of ray tracers (who hasn't yet?) and I'm in the
midst of designing a 24 bit frame buffer.  Since I don't do this on a
professional level, I lack some of the resources required to develop real
products.  I maintain an archive site with a lot of graphics-related items
(including Ray Tracing News).  If you need to access the archive (anonymous
uucp only) please send me mail.

--------

#
# Steve Lamont, sciViGuy - parallelism
# NCSC,
# Box 12732,
# Research Triangle Park, NC 27709
alias	steve_lamont	cornell!spl%mcnc.org

--------

# Bob Kozdemba - novice tracer, futures, also radiosity
# Hewlett-Packard Co.
# 7641 Henry Clay Blvd.
# Liverpool, NY 14450
# (315-451-1820 x265)
alias   bob_kozdemba    hpfcla!hpfcse!hpurvmc!koz

I work for HP in Syracuse, NY as a systems engineer.  I will be attending SU
starting in Jan. '89, working toward my BS with a focus in computer graphics.
My job responsibilities are to provide technical support to customers and
sales in the areas of Starbase graphics and X Windows.  Lately I have been
experimenting with HP's SBRR product [radiosity and ray tracing part of the HP
graphics package] and trying to keep abreast of futures in graphics.  I have
written an extremely primitive ray tracer and I am looking for ideas on how to
implement reflections and transparency.

--------

# Robert Goldberg
# Queens College of CUNY
# Comp. Sci. Dep't
# 65-34 Kissena Blvd.
# Flushing, N.Y. 11367-0904
# Work : 3d Modeling algorithms, with appl. to graphics and image processing
# Phone: Work -  (718) 520-5100
alias	robert_goldberg	rrg@acf8.nyu.edu

--------

# John Olsen - refraction, radiosity, antialiasing, stereo images.
# Hewlett-Packard, Mail Stop 73
# 3404 E. Harmony Road
# Ft Collins, CO 80525
# (303) 229-6746
# email:  olsen@hq.HP.COM, hplabs!hpfcdq!olsen
alias	john_olsen	hpfcdq.hp.com!olsen

Currently, I've been spending some time tinkering with the DBWrender ray
tracer, making it produce 24 bit/pixel QRT-format images.  I like the QRT
output format, but I also like some of the features of DBWrender, such as
antialiasing and fading to a background color.

I've thought about writing my own ray tracer with all the features I want, but
so far I've resisted this evil temptation, and only looked for fancier ones
already done by others who could not resist the temptation.  :^)

I've just installed an alias for a local ray tracing news distribution.  You
can send it to raylist@hpfcjo.HP.COM (or if you can't reach that, try something
like hplabs!hpfcdq!hpfcjo!raylist).

--------

# Andrew Hunt, andrew@erm.oz
# Earth Resource Mapping, 130 Hay St, Subiaco, Western Australia 6008
# Phone: +61 9 388 2900   Fax: +61 9 388 2901
# ACSnet: andrew@erm.oz
alias	andrew_hunt	uunet!munnari!erm.erm.oz.au!andrew

In 1987 I implemented a "Three Dimensional Digital Differential Analyser"
(3D-DDA) algorithm, along the lines of Fujimoto and Iwata's, and used it to
speed up a raytracing system under development at the Computer Science
Department at Curtin University of Technology.

Recently I have been too busy developing commercial image processing software
to devote much time to ray tracing.

Sometime during 1990 I plan to try to port our Ray Tracing system to a
Transputer based platform.

--------

# Nick Beadman - Distributed Ray Tracing, Efficiency
# School of Information Systems
# University of East Anglia
# Norwich
# Norfolk
# United Kingdom
alias	nick_beadman	cmp7112@sys.uea.ac.uk

At the moment I'm trying to implement a distributed ray tracer on 8 t800s on a
meiko computing surface using C, with little success I should add.  It's all
part of a big third-year computing project worth a sixth of my degree.

--------

# Peter Miller - algorithms, realism
# 18 Daintree Cr
# Kaleen ACT 2617
# AUSTRALIA
# CONTACT LIST ONLY: subscription through melbourne
#Phone:  +61-62-514611 (W)
#	 +61-62-415117 (H)
#        UUCP    {uunet,mcvax,ukc}!munnari!neccan.oz!pmiller
#        ARPA    pmiller%neccan.oz@uunet.uu.net
#        CSNET   pmiller%neccan.oz@australia
#        ACSnet  pmiller@neccanm.oz
alias	peter_miller	cornell!uunet!munnari!neccan.necisa.oz.au!peter

  I have been interested in ray tracing since 1984, when I wrote a ray tracer
before I knew it was called ray tracing!  Since then I have been reading
journals and tinkering with my ray tracer.

  The last 3 years were spent marooned off the net, with very poor graphics
output devices; so while some work was done on the ray tracer, I now have a
lot of catching up to do.

--------

# Mike J. McNeill
# VLSI and Graphics Research Group,
# EAPS II,
# University of Sussex,
# Falmer, BRIGHTON, East Sussex, BN1 9Qt
# TEL: +44 273 606755 x 2617
# EMAIL: (JANET) mikejm@syma.sussex.ac.uk | (UUCP) ...mcsun!ukc!syma!mikejm
alias	mike_mcneill	mike@syma.sussex.ac.uk

I am currently researching parallelising the RT algorithm on a
multiprocessor architecture.  I'm using object space subdivision using a fast
octree traversal algorithm.  At the moment I'm simulating the architecture
using C via TCP/IP protocols on a distributed Apollo network.  Interested in
all aspects of speed-up, parallel architectures, coherency and radiosity -
dare I mention animation and Linda?  Also does anyone know of a modelling
package for use on the Apollos?

-------------------------------------------------------------------------------

Archive Site for Ray Tracing News, by Kory Hamzeh

Avatar is an archive site which archives Ray Tracing News, comp.sources.unix,
comp.sources.misc, and other goodies.  Archived files can be accessed by
anyone using anonymous uucp file transfers.  A list of all files in the archive
can be found in avatar!/usr/archive/index.  Note that this file is quite
large (about 40K).  If you are only interested in the graphics-related
archives, snarf the file avatar!/usr/archive/comp/graphics/gfx_index.
All Ray Tracing News archives (except for the first few) are stored in
the /usr/archive/comp/graphics directory.  They are named in the following
fashion:

	RTNewsvXXiYY.Z

where XX is the volume number and YY is the issue number.  Note that all files
(except index and gfx_index) are compressed.

Before trying to access avatar, your L.sys file (or Systems file, depending
on which flavor of UUCP you have) must be edited to contain the following
info:

#
# L.sys entry for avatar.
#
# NOTE: 'BREAK' tells uucico to send a break on the line. This keyword
# may vary from one flavor of UUCP to another. Contact your system
# administrator if you're not sure.
#
# 1200 baud 
avatar Any ACU 1200 18188847503 "" BREAK "" BREAK ogin: anonuucp
#
# 2400 baud
avatar Any ACU 2400 18188847503 "" BREAK ogin: anonuucp
#
# 19200 baud (PEP mode) for Telebit Trailblazers only
avatar Any ACU 19200 18188847503 ogin:-\n-ogin: anonuucp

After the previous lines are entered in your L.sys file, you may use the
following command to get the index file:

    uucp avatar!/usr/archive/index /usr/spool/uucppublic/index

This command will grab the file from avatar and put it in your 
/usr/spool/uucppublic directory using the name index. For example,
to get Ray Tracing News Volume 2, Issue 5, execute the following
command:

   uucp avatar!/usr/archive/comp/graphics/RTNewsv02i05.Z /tmp/RTnewsv02i05.Z

NOTE that some shells (like csh) use the "!" as an escape character, so
use a "\" before the "!".

If you experience problems getting to any of the files, I can be reached
via e-mail at:

    UUCP: avatar!kory or ..!uunet!psivax!quad1!avatar!kory
    INTERNET: avatar!kory@quad.com soon to be kory@avatar.avatar.com


Enjoy,
Kory Hamzeh

-------------------------------------------------------------------------------

Ks + T > 1, by Craig Kolb and Eric Haines

From: Craig Kolb <craig@weedeater.math.yale.edu>

While adding adaptive tree pruning to rayshade (and discovering a bug), I came
across a question I've had regarding the SPD for a while.

Some objects have Ks + T > 1.  How can this be?  For example, the spheres in
"mountain" have Ks = .9 and T = .9.  Unless I'm completely out to lunch (which
is possible), this means that subsequent specularly reflected rays are weighted
by .9, and that transmitted rays are also weighted by .9.  This leads to
"glowing" objects pretty quickly.

What's wrong in the above description?  Be warned that in "nff2shade.awk", I
set the "specular reflectance" to be min(Ks, 1. - T).

--------

My reply:

	Actually, true enough, Ks + T > 1 does occur in the SPD stuff.
I use T in a funny way, since I was trying to make databases that would
display both using hidden surface and ray tracing algorithms.  Hmmm, how
to explain?  Well, imagine you have a glass ball: under the hidden surface
system (without transparency), you'd like the ball to appear opaque, and
so a high Kd is in order.  Now if the thing's transparent, you don't want
Kd to be high.  So what I do is lower Kd and Ks by ( 1 - T ).  An admittedly
weird shading model, which I've now changed a bit (i.e. reflectance and
transmittance are now entirely separate).  So, your solution of turning down
the reflectance is fine.  I should add that I didn't really explain all this
as it's irrelevant for timings (all we care about is what rays get generated,
not the final color), but I agree, it would be nicer to get a good resulting
picture as a check.  I'll change that in the next update of SPD, actually...
thanks for pointing it out!

--------

Craig's reply:

Ah, I get it.

I brought it up because it does make a difference timing-wise if you're using
adaptive tree pruning.  Although you say in the SPD stuff that pruning
shouldn't be used, Mark's raytracer currently has no option to turn it off.  I
was comparing pruned vs. pruned, and noticed that I had many fewer reflected
rays, since my reflectance for the transparent gears was .05 rather than .95.

-------------------------------------------------------------------------------

Quartic Roots, and "Intro to RT" Errata,
	by Larry Gritz (vax5.cit.cornell.edu!cfwy)

    I was trying to find the solution to a quartic equation to solve a
ray-torus intersection test.  I've gotten lots of replies, but they generally
fall into one of two categories:  either solve the quartic equation directly
(I forget the reference now, but I'll send you either a reference or the
formula if you want [It's in the _CRC Standard Math Tables_ book - EAH]), or
use some iterative method.  Everybody says (and I have confirmed this
experimentally) that
solving the equation is very numerically unstable.  I have chosen to use the
Laguerre method from Press et al., _Numerical Recipes in C_, which is slow,
but seems to work, and finds all roots without needing to specify brackets for
the roots.  (An advantage, since although I can bracket all possible real
roots with the bounding-box test I already do, I'm not really sure how many
roots lie within those brackets.)

    What actually turns out to be a bigger problem is that I got the quartic
coefficients from the SIGGRAPH '88 Ray Tracing Course Notes (on page 3-14 of
Pat Hanrahan's section).

[Larry and I thrashed this out over a number of passes (boy, I wish I had access
to Mathematica or somesuch), and came out with the following corrected
equation set for those on page 93 of _An Introduction to Ray Tracing_:

	a4 & a3 - Pat's are OK.
        a2 = 2(x1^2+y1^2+z1^2)((x0^2+y0^2+z0^2)-(a^2+b^2)) 
              + 4 * (x0x1+y0y1+z0z1)^2 + 4a^2z1^2
	a1 = 4 * (x0x1+y0y1+z0z1)((x0^2+y0^2+z0^2)-(a^2+b^2))
		+ 8a^2 * z0 * z1
	a0 = ((x0^2+y0^2+z0^2)-(a^2+b^2))^2 - 4a^2(b^2-z0^2)

- EAH]
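The coefficients transcribe directly into code.  A sketch (my transcription,
not Larry's or Pat's code; note the constant term uses z0, and "a" is read as
the major (sweep) radius and "b" as the tube radius):

```c
/* Quartic coefficients for a ray P(t) = P0 + t*P1 against a torus
 * centered at the origin with its axis along z.  c[4]..c[0] receive
 * a4..a0 of the equation set above. */
void torus_quartic(double x0, double y0, double z0,
                   double x1, double y1, double z1,
                   double a, double b, double c[5])
{
    double u = x1*x1 + y1*y1 + z1*z1;                /* |P1|^2          */
    double v = x0*x1 + y0*y1 + z0*z1;                /* P0 . P1         */
    double w = x0*x0 + y0*y0 + z0*z0 - (a*a + b*b);  /* |P0|^2-(a2+b2)  */
    c[4] = u * u;
    c[3] = 4.0 * u * v;
    c[2] = 2.0 * u * w + 4.0 * v * v + 4.0 * a*a * z1*z1;
    c[1] = 4.0 * v * w + 8.0 * a*a * z0 * z1;
    c[0] = w * w - 4.0 * a*a * (b*b - z0*z0);
}
```

As a sanity check, a torus with a = 2, b = 1 and a ray from (5,0,0) in
direction (-1,0,0) yields roots t = 2, 4, 6, 8, matching the geometry.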

-------------------------------------------------------------------------------

More on Quartics, by Larry Spence
    From: larry@csccat.UUCP
    Newsgroups: comp.graphics
    Subject: Re: Solution to quartic eqn?

>I didn't ask the question, but thanks for your input.  However, Ferrari's
>theorem yields a fast and very accurate answer.
                  ^^^^
Are you sure about this?  If it's the same closed-form solution that you find
in the CRC books, etc., doesn't it use trig functions and cube roots?   Seems
to me there was a paper by Kajiya in the early '80s on numerical ray tracing,
and there have been several in the last few years.  My advice would be to go
look at SIGGRAPH proceedings from 1981 on.  Certainly, a closed form solution
like the one suggested wouldn't take advantage of any coherence in the problem,
unless you wrote all the trig stuff yourself.

[Comments, anyone?  I never saw any replies. - EAH]

-------------------------------------------------------------------------------

Question: Kay and Kajiya Slabs for Arbitrary Quadrics?
	by Thomas C. Palmer <palmer@mcnc.org>

Here's a question regarding slabs for arbitrary quadrics.  Kay & Kajiya '86
discusses computing slabs for polygons and implicit surfaces.  The method for
implicit surfaces uses Lagrange multipliers, and they give an example using
spheres.  This is easy and works quite nicely for canonical (i.e. centered
and axis aligned) quadrics.  K&K handle object rotations and translations
during the slab computation.  What about quadrics that have been transformed
by some arbitrary matrix prior to input and look like this:

ax^2 + 2bxy + 2cxz + 2dxw + ey^2 + 2fyz + 2gyw + hz^2 + 2izw + jw^2 = 0 ?

The xy, xz, and yz terms prevent a simple solution via Lagrange multipliers.
Has anyone done this?  How do you handle quadric bounding planes?  Note that
K&K cheated for the tree branches in the tree models.  Each cylinder has
endpoint capping spheres so the cylinder's extent is just the combined extent
of the two spheres.

Thanks for your help -

-Tom

Thomas C. Palmer		North Carolina Supercomputing Center
Cray Research, Inc.		Phone: (919) 248-1117
PO Box 12732			Arpanet: palmer@ncsc.org
3021 Cornwallis Road
Research Triangle Park, NC
27709

-------------------------------------------------------------------------------

Ambient Term, by Pierre Poulin (poulin@dgp.toronto.edu)

I just read your "Tracing Tricks" in the latest Ray Tracing News.  Thanks for
passing that on to us; it was very interesting.

One trick you mentioned was to put a light source at the eye position in 
order to eliminate the ambient term. This is a simple trick I did not know.
However, you noted that highlights appear as artifacts.

Since you know that this light does not need any shadow rays, you could use
only the diffuse intensity created by this light to approximate the ambient
term, hence creating no undesirable highlights.

I know, this is very easy and everyone probably knows it already.  But just in
case you had not thought about it, I wanted to point it out to you.  I just
hope I am not the 10,000th to tell you :^)

--------

My reply:

	I'm glad to hear that you enjoyed the "Tracing Tricks" article -
sometimes I worry that I'm just publishing ideas that everyone already knows.
I've tried having the ambient light have no highlight, and it's sort of
interesting, but the lack of highlight can look a little strange for those
objects where there really would be a highlight (it sort of makes them look
less shiny, though it also depends upon the other lights in the scene).
Nonetheless, turning it off is definitely worth exploring.  You're the first
person to comment on this, actually.  Thanks for taking the time to write, and
do keep in touch.
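
Pierre's suggestion amounts to a one-line change in the shader; a sketch (the
names are hypothetical, not from any particular ray tracer):

```c
typedef struct { double x, y, z; } Vec;

static double dot(Vec a, Vec b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

/* The light sits at the eye, so its direction L equals the view
 * vector V and needs no shadow ray.  Keeping only the diffuse term
 * approximates the ambient contribution without introducing a
 * highlight; N and V are assumed to be unit length. */
double eye_light_ambient(Vec N, Vec V, double Kd, double intensity)
{
    double ndotv = dot(N, V);
    return ndotv > 0.0 ? Kd * intensity * ndotv : 0.0;
}
```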

-------------------------------------------------------------------------------

Book Reviews on Hierarchical Data Structures of Hanan Samet,
	by A. T. Campbell, III (atc@cs.utexas.edu)


"The Design and Analysis of Spatial Data Structures", and 
"Applications of Spatial Data Structures", by Hanan Samet,
Addison-Wesley, 1990.

		Reviewed by A. T. Campbell, III

This two-volume series of books is one man's effort to provide a guide to the
study of hierarchical data structures.  The topic has extensive influence on
many fields, particularly computer graphics, image processing, and
computational geometry.  Hanan Samet is a well-established expert in the
field, with literally dozens of publications.  As a computer graphics
researcher, I eagerly anticipated the books' publication.  A close examination
of both volumes leads to one conclusion:  the books are extremely worthwhile.

The integration of diverse material is remarkable.  Comprehensive research
results throughout the spectrum of science are drawn together seamlessly.
Samet has really done a thorough job of pulling together literature from a
vast collection of conferences and journals, both major and minor.

The level of explanation is good.  Samet has obviously read all of his
references thoroughly.  The descriptions of algorithms reflect an
understanding of what is really going on.  Even algorithms mentioned briefly
are given a good essential description.

Numerous topics are covered.  Algorithms for such problems as
proximity-searching, efficient storage of image and volume data, constructive
solid geometry, data structure conversion, hidden surface removal, and
ray-tracing fill the books.  Pseudo-ALGOL code examples present detailed
explanations of how to build and traverse many of the data structures.
Ray-tracing enthusiasts in particular will enjoy a detailed description of how
to trace a path through an octree.

There are, however, a few problems with the presentation.  Despite the
ambitious titles of the volumes, there is nowhere near as much theory or
practical advice as one might expect.  The emphasis is instead on explaining
literally everything at an understandable level.  While this makes the books a
wonderful introduction to all sorts of stuff, the reader still needs guidance
in choosing what techniques to actually use.

The title of the first volume, "The Design and Analysis of Spatial Data
Structures", obviously invokes comparison with the classic text "The Design
and Analysis of Computer Algorithms", by Aho, Hopcroft, and Ullman.  However,
Samet's approach differs greatly from that of Aho et al.  While the data
structures are described and discussed in detail, the analysis is not very
formal.  Theorems and proofs, as well as detailed algorithm analysis, are not
much in evidence.  A more appropriate title might simply be "An Introduction
to Spatial Data Structures".

The second book, "Applications of Spatial Data Structures", covers basically
every research result in hierarchical algorithms, major or minor.  It is
exceptionally good at explaining techniques succinctly.  The depth is not
sufficient to implement the techniques without referring to the
original papers, however.  Additionally, the reader is given no good feel for
which results should actually be used.  If a technique is commonly used in
industry or never used because of impracticality, Samet never says so.  The
reader who expects a "cookbook" solution to his problem will be disappointed.

The books are primarily of use for two purposes.  First, they provide a good
introduction to those aspects of computational geometry and image processing
which are most likely to be of interest to a person working in graphics.
Second, they provide a very complete guidebook to the literature.  I would
suggest that researchers and practitioners have these volumes on their
reference shelves.  Due to the sheer volume of material presented, I would not
recommend them for use as course textbooks.

-------------------------------------------------------------------------------

Comparison of Kolb, Haines, and MTV Ray Tracers, Part I, by Eric Haines

	I decided to compare the MTV ray tracer, the Kolb "rayshade" software,
and my own modified "SBRR" ray tracing package to see the efficiency of each.
My goal was to see what sort of performance was obtained by each ray tracer
and note the strengths and weaknesses of each.  This first section of the
report marks the state of current results, consisting of timings from the
"gprof" command for each package, using the Standard Procedural Database (SPD)
package.  All three packages were run on an HP 350 workstation with a floating
point accelerator board.  The compiler options were "-g -G +ffpa" (debug,
profile, with special floating-point only compile), using HP-UX 6.5.

	The three ray tracers were selected from all the existing packages by
having the following properties:  (1) Each could handle all the primitives in
the SPD package, and (2) each had some automatic efficiency scheme.  Other
packages do not support all the primitives (e.g. DBW does not have cylinders,
cones, or n-sided polygons), or do not have automatic efficiency generation
(e.g.  QRT lets you explicitly create bounding boxes, but has no way for this
to happen automatically).

	The MTV ray tracer was created by Mark VandeWettering, and uses the
Kay/Kajiya hierarchy approach (i.e. sorting objects along X/Y/Z and splitting
each group, recursively).  To make it conform to the requirements of the SPD
tests, "minweight" was set to 0.0 in order to disable tree pruning a la Hall's
method.

	Craig Kolb's "rayshade" ray tracer uses a 22x22x22 grid on all scenes.
Because of the use of grids (i.e.  3DDDA), it was found to be sensitive to the
background polygons used in the SPD package.  In four of the SPD scenes
(balls, gears, rings, and tree) there is a "ground" polygon.  The "rayshade"
software allows some user intervention in how the database is structured.  It
was found that faster timings (sometimes strikingly quicker) could be obtained
by leaving this background polygon out of the grid structure.  Changing the
database in this manner is forbidden by the SPD tests, but both sets of
results are presented because of the difference in timings.
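
The sensitivity is easy to see from how a uniform grid derives its cell size;
a toy illustration (not rayshade code):

```c
typedef struct { double min[3], max[3]; } Box;

/* With a 22x22x22 grid, the cell edge along each axis is simply the
 * scene extent divided by the resolution.  A ground polygon many
 * times the size of the object cluster inflates the bounding box in
 * x and y, so the cluster ends up occupying only a handful of cells
 * and the subdivision loses most of its usefulness. */
void grid_cell_size(const Box *scene, int ncells, double cell[3])
{
    int axis;
    for (axis = 0; axis < 3; axis++)
        cell[axis] = (scene->max[axis] - scene->min[axis]) / ncells;
}
```

Growing the ground polygon tenfold grows every cell tenfold in x and y, while
the objects of interest stay the same size.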

	The SBRR package is not public domain, but rather is part of the
graphics software in all HP workstations using the Turbo-SRX graphics
accelerator.  It uses the Goldsmith and Salmon automatic bounding volume
hierarchy method.  It should also be noted that this package is full featured,
which has a corresponding slowdown effect when intersection computations are
performed.  For example, polygons can be single or double sided, with
different materials, colors per vertex, normals per vertex, and other
combinations available.  Since the package has a "hardware assist" by using
the graphics engine as an item buffer (see Weghorst and Hooper), timings are
given both with and without this assist.  The times without are the fairer of
the two for comparison.

Without further ado, here are the timings:

MTV	   Basic
-----      -----
balls	   12604
gears	   38123
mount	    9307
rings	   24286
tetra	    1081
tree	    8056


Kolb	   Basic	Modified
-----      -----        --------
balls	   14871	    3224
gears	   12601	   11449
mount	    2989	    2989 (i.e. same - no modification needed)
rings	    8348	    8103
tetra	     836	     836 (i.e. same - no modification needed)
tree	   18957	    2505


SBRR	   Basic	Item Bfr
-----      -----        --------
balls	    5027	    4126
gears	   13561	   12776
mount	    5440	    3749
rings	   11044	   10446
tetra	    1187	     457
tree	    3229	    2719


So, considering the MTV ray tracer as 1.00, here are the relative performance
times of each tracer - (MTV time / RT time) - i.e. higher is faster, and can
be thought of as how many times faster it is:

SPD	MTV-Base	Kolb-Bas	Kolb-Mod	SBRR-Bas	SBRR-Bfr
-----   --------	--------	--------	--------        --------
balls	  1.00		  0.85		  3.91		  1.40		  3.05
gears	  1.00		  3.02		  3.32		  2.81		  3.33
mount	  1.00		  3.11		<--same		  1.71		  2.48
rings	  1.00		  2.91		  3.00		  2.20		  2.32
tetra	  1.00		  1.29		<--same		  0.91		  2.37
tree	  1.00		  0.42		  3.22		  2.50		  2.96

Some interesting phenomena are revealed by the statistics.  The "tetra"
database is pretty much the same absolute speed for everyone.  However, given
the performance for other scenes, it is noteworthy that MTV performs
relatively faster on this than others.  I've noticed this, too, when trying
Kay/Kajiya myself - this scheme just soars on tetra, though I am not sure why.
Perhaps it is the smallness and regularity of the objects, which would go well
with Kay/Kajiya's assumption that using the centroid of these is reasonable.
For other databases one can imagine better hierarchies than those constructed
by Kay/Kajiya.  For example, with mount, the four spheres above the fractal
mountain should be in their own cluster just off the top of the hierarchy
tree.

The "tetra" scene is a strange test in that most (around 81%) of the scene
is background, so what we tend to test here is affected more by how fast one can
traverse a scene, set up rays, shade, and store values in a file.  It will take
further analysis to see where the time is spent.

The Kolb ray tracer is interesting in how much its efficiency scheme is
affected by the geometry of the scene.  The "teapot in a football stadium"
effect I've written about previously hits grid efficiency schemes with a
vengeance.  For example, moving the ground polygon from the grid subdivision
to outside of it makes rayshade perform 4.6 times faster for balls, and 7.7
times faster for tree!  The point is that grids perform best when the scene is
relatively "compact".  The large ground polygons in these scenes cause the
entire grid to get larger in two directions, and so make many more objects
fall inside just a few grid cells, thereby ruining much of the efficiency of
the grid.

Comparing Kolb to MTV, we see that overall Kolb is faster.  Kolb does worse on
balls and tree using the unmodified database, but otherwise outperforms MTV,
being about twice as fast.  When the ground polygon is taken out of the grid
subdivision, Kolb is more than 3 times faster for all cases except tetra.

Comparing SBRR and MTV shows SBRR to be faster for most cases, with MTV being
slightly faster for tetra.  Overall SBRR is almost twice as fast with its
basic performance, and about 2.75 times faster when the item buffer is used.

Comparing SBRR and Kolb is a bit tricky, since there are two tests of each.
Taking the basic tests in each, Kolb and SBRR are comparable: Kolb outperforms
SBRR in four cases, and SBRR outperforms Kolb in two (and for one of those,
tree, it is almost 6 times faster).  SBRR has some things to learn from Kolb
(which is why I'm doing all this, anyway), as Kolb's modified database results
show that faster performance is possible.

So, on the basis of pure raw timings, Kolb and SBRR without modifications or
accelerators are of comparable speed.  With user intervention into the
database structure, Kolb can become noticeably faster.  It should also be
noted that Kolb uses a default of 22x22x22 grid cells, which is under user
control and so could be tuned to further improve performance.

Actually, this is an interesting open question:  what heuristics can be used
to determine a reasonable grid size for a scene?  Also, is there a reasonable
way to determine automatically whether performance destroyers such as ground
polygons are present, and so remove them from the grid subdivision itself?
David Jevans' and Brian Wyvill's work on nested grid subdivisions ("Adaptive
Voxel Subdivision for Ray Tracing", Proceedings of Graphics Interface '89, p.
164-172) might lead to a less variable performance and to greater overall
speed.  Craig and I have discussed this, but unfortunately he has no time to
try this out - perhaps someone out there can experiment and compare
results.

As mentioned, this is research in progress.  My next step will be to analyze
the statistics generated by each program and see where time is spent.

-------------------------------------------------------------------------------

Raytracer Performance of MTV, by Steve Lamont

>   Could you pass on your timings on ray tracer performance on various
> machines and any thoughts or experiences you want to share about the subject?

The parallelization was done in a brute force manner, forking processes and
dividing the work by the number of processes.  The parent process remains
behind and reads the scanlines in a round robin fashion from pipes.  There is
no communication from the parent to the child processes once the forking has
been done; the ray tracing routines simply march down the scan lines.  This
approach works well on a homogeneous architecture where all processes run at
approximately the same speed and no process "dries up" or runs out of work to
do.
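
The scheme can be sketched as follows (my sketch, not MTV's actual code):
fork N workers, statically interleave scanlines among them, and have the
parent read rows back round-robin from pipes, in image order.

```c
#include <unistd.h>
#include <sys/wait.h>

#define NPROC  4
#define WIDTH  16
#define HEIGHT 8

/* Stand-in for tracing one scanline of the image. */
static void trace_row(int y, unsigned char row[WIDTH])
{
    int x;
    for (x = 0; x < WIDTH; x++)
        row[x] = (unsigned char)(x + y);
}

/* Returns 0 on success.  No parent-to-child communication occurs
 * after the forks; each worker simply marches down its scanlines. */
int render_parallel(unsigned char image[HEIGHT][WIDTH])
{
    int fd[NPROC][2], i, y;
    unsigned char row[WIDTH];

    for (i = 0; i < NPROC; i++) {
        if (pipe(fd[i]) < 0)
            return -1;
        if (fork() == 0) {           /* worker i takes rows i, i+NPROC, ... */
            close(fd[i][0]);
            for (y = i; y < HEIGHT; y += NPROC) {
                trace_row(y, row);
                if (write(fd[i][1], row, WIDTH) != WIDTH)
                    _exit(1);
            }
            _exit(0);
        }
        close(fd[i][1]);             /* parent keeps only the read ends */
    }
    for (y = 0; y < HEIGHT; y++)     /* rows arrive in round-robin order */
        if (read(fd[y % NPROC][0], image[y], WIDTH) != WIDTH)
            return -1;
    while (wait(NULL) > 0)
        ;
    return 0;
}
```

The static interleave is what makes this work only on a homogeneous machine:
if one worker were slower, the parent's in-order round-robin reads would
stall on its pipe.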

This works well for single frames.  However, my approach for a large animation
is to simply parcel out work on a frame per processor basis.

I've built the MTV raytracer on the Cray, the IRIS, and the Ardent Titan...
and here are some preliminary results on a 128 x 128 test image (the balls
image with reflections but no refractions, 3 light sources, 11 primitives (16
with bounding volumes)):

			processes
		      (CPU seconds)
	Machine		1	4	Notes

	Y-MP/8-432	4.0	1.0	-hopt,intrinsics,vector
	IRIS (4D/240)*  8.2	2.1	-O2 (MIPS R3000/3010)
	Titan (P2)     30.0     7.7	-O3 (MIPS R2000 + prop. vector/fp unit)
	Titan (P3)	7.2	---	Run by vendor. (MIPS R3000/3010 +
					proprietary vector unit)

Wall clock times improved by a factor of 2.5 to 3, which squares pretty well
with Amdahl's law as extended for small parallel architectures.

These are *preliminary* results with respect to the Titan -- we've only had it
for a couple of weeks.  On none of the machines did MTV vectorize in any way
to speak of.  In fact, turning off vectorization improves performance for
several "short vector" loops; e.g., loops of vector length 3.

Timings were done on a fairly heavily loaded Cray and an empty IRIS and Titan.

The Cray is a Y-MP with four processors (upgradable to 8, hence the 8-4) and
32 mWords of central memory.  There is also a 128 mWord SSD (Solid-state
storage device).  We also have 40 gBytes of rotating storage (a combination of
DS-40s and DD-49s).

[*] Actually, this machine is a CDC Cyber 910-624 but the only difference
between it and a "genuine" Silicon Graphics IRIS 4D/240 is the color of its
box and the binding on the manuals.

[Disclaimer:  These comments are solely the responsibility of the author and
no endorsement by the North Carolina Supercomputing Center should be inferred]

Steve Lamont, sciViGuy			EMail:	spl@ncsc.org
NCSC, Box 12732, Research Triangle Park, NC 27709

-------------------------------------------------------------------------------

BRL-CAD Ray Tracer Timings, by Gavin Bell (gavin@krypton.sgi.com)

These results are third-hand; I can vouch for the accuracy of our machine's
results, but the BRL people may have more recent results to share.  The only
experience I have with ray-tracing timing on our machines is with a simple,
interactive ( :-> ) ray-tracer demo called 'flyray'.  I modified it to be
fully parallel: one CPU drives a real-time display of the scene being
ray-traced (complete with rays shooting into and bouncing around the scene),
and the other N-1 CPUs compute ray-object intersections (the results are
shown in a separate window).  As for timing... it runs REAL fast on an 8 CPU
system.

What follows is a form letter I've been sending to people who asked for
ray-tracing timings.

------------------------

This is a response to all of those people who asked me for the BRL-CAD
ray-tracing benchmark results.  I'm surprised at how many of you there are!

First, a little bit about myself.  I work in Technical Marketing, the 'Demos
and Benchmarks' group here at Silicon Graphics.  I'm not usually involved in
benchmarks; I work mainly on our demos.

The rest of this text comes directly from a 'SGI Benchmark Summary' prepared
by one of our Marketing people.  The numbers are communicated to him by the
software engineer who did the benchmark.  These benchmark summaries are
communicated to salespeople in a weekly newsletter as soon as the results come
in.  Other summaries done include:

'INDUSTRY STANDARD':
Dhrystones, Digital Review Labs, Linpack, Livermore Fortran Kernels, MUSBUS,
Whetstones.

OTHERS:
LES50 (computational fluid dynamics), Moldflow (finite element analysis),
Molecular Dynamics, Nord, UFLA, GROMOS (all computational chemistry).

If you want more information on any of these benchmarks, please see a SGI
sales rep-- I can't keep typing in all of these numbers!!  Also, please
remember that these benchmarks were done for a specific customer, who was
interested in a specific machine, so most of them were not benchmarked on our
whole product line (the 'Industry Standard' benchmarks, however, are usually
run on all of our products).

APPLICATION  BENCHMARK NAME  CUSTOMER
-----------  --------------  --------
Rendering    BRL-CAD 3.7     US Army Ballistic Research Lab

LANGUAGE     SUMMARY DATE
--------     ------------
C            9/5/89

DESCRIPTION
-----------
The BRL-CAD benchmark is a part of the US Army Ballistic Research Laboratory's
BRL-CAD package.  The core of the BRL-CAD benchmark is a ray-tracing program
which consists of about 150,000 lines of C code.  Computations are performed
in double precision floating point.

Five separate data bases are input to the ray tracing program, resulting in
six performance ratings (one for each, plus a total which is the mean of the
other five).  When rendered, each data base will produce a 512x512x24 bit
color shaded image.  The images are of increasing complexity, and are
identified as 'Moss', 'World', 'Star', 'Bldg' and 'M35'.

RESULTS
-------
The result of this benchmark is reported as rays traced per second, and
referred to as Ray Tracing Figure of Merit (RTFM).  Higher numbers indicate
better performance.

The code was parallelized by inserting user directives to create multiple
processes to trace rays.

RESULTS FOR SGI MACHINES:
    Note:  The actual report has nice graphs here.
Machine        RTFM
-------------------
1x16 (4D/80)    714
2x16 (4D/120)  1358
4x25 (4D/240)  5034
8x25 (4D/280)  7434
    Note:  NxMM numbers refer to the number of processors in the
	machine (N) running at MM MHZ.

COMPETITIVE RESULTS:
Machine   # Processors  RTFM  Relative Performance
--------------------------------------------------
Vax 780        1          77     1.0
Sun3           1          88     1.1
Convex C120    1         163     3.6
Sun4           1         435     5.6
SGI 4D/120S    2        1358    17.5
Alliant FX/80  8        2783    33.6
SGI 4D/240S    4        4456    70.4
Cray X-MP/48   4        7217   116.1
SGI 4D/280     8        7434   119.7

ANALYSIS
--------
The BRL-CAD benchmark exhibits excellent speedup as processors are added.
This is due to the coarse granularity inherent in the ray tracing problem
being solved.  Each ray is processed independently, with no data dependencies
among the rays.  This means that multiple processors can each work on separate
rays simultaneously, with minimal need for synchronization among processors.

While the code is highly parallelizable, it is not efficiently vectorizable
because of short vector lengths.  The combination of these two
characteristics explains the phenomenal performance of Silicon Graphics
machines relative to vector machines like the Cray and Alliant.

The characteristics of this benchmark that lead to high performance by the
Silicon Graphics machines are common to all ray tracing applications.

--------

Here is another note from Gavin:

My only experience with the BRL ray-tracer came when I was at Princeton
University - I installed it on Silicon Graphics machines there for the
Graphics Lab.  That was two years ago; as far as I could tell, it didn't use
octrees or any other space-partitioning algorithm.  I used a ray-tracer
written at Princeton (the precursor of what is now Craig Kolb's rayshade
program; Craig and I had the same thesis advisor) which did do octrees; it was
infinitely faster than the BRL beast.

-------------------------------------------------------------------------------

BRL-CAD Benchmarking and Parallelism, by Mike Muuss (mike@BRL.MIL)

I'm sort of on vacation right now, so I'm going to cop out and just send you
the TROFF input for several things that I have handy about how we benchmark
ray-tracing in the BRL-CAD Package.

The first one is our benchmark summary paper.

The second one is a portion of a paper that I wrote called ``Workstations,
Networking, Distributed Graphics, and Parallel Processing''.

You may publish and/or redistribute both documents as you wish.  Note that the
United States Government holds the "copyright", i.e., it is not permissible to
copyright this material.

[These papers are rather lengthy, so I won't include them in this issue.
If you would like copies, look at the archive sites for Muuss.benchmrk and
Muuss.parallel, or write me. - EAH]


======== USENET cullings follow ===============================================

Rayshade Patches Available, by Craig Kolb
	From: craig@weedeater.math.yale.edu
	Newsgroups: comp.graphics
	Organization: Math Department, Yale University

Patches 1-3 for rayshade v3.0 are available via anonymous ftp from
weedeater.math.yale.edu (new address:  130.132.23.17) in directory
pub/rayshade.3.0/patches.  The patches fix several minor bugs, clean up the
documentation, and provide new features.

Several people have expressed an interest in 'trading' rayshade input files.
If you have an interesting input file that you'd like to share, feel free to
deposit it in the "incoming" directory on weedeater or send it to me via
email.  I will make these files available in the pub/Rayinput directory on
weedeater.

Rayshade is supposedly "on the verge" of appearing in comp.sources.unix,
patches and all.

-------------------------------------------------------------------------------

Research and Timings from IRISA, by Didier Badouel
	From: badouel@irisa.irisa.fr
	Newsgroups: comp.graphics
	Organization: IRISA, Rennes (Fr)

We have a parallel raytracer (called PRay) at IRISA which, like MTV, uses NFF
description databases.  This raytracer has been implemented on an iPSC/2 and
on a SEQUENT BALANCE, and also on serial computers (SUN3, GOULD NP1) to allow
better comparisons.

Here are the various synthesis times for the well-known 'Teapot' database.
The image was rendered at 512x512 resolution with 3 light sources.
The results are as follows:

                        #PEs    Time (in sec.)
        ________________________________________
        SUN3:                   8877 (~ 2h27mn)
        ________________________________________
        GOULD NP1:              1642 (~ 27mn)
        ________________________________________
        SEQUENT BALANCE 1       37121 (~ 10h18mn)
                        2       18567
                        3       12381
                        4       9285
                        5       7431
                        6       6197
                        7       5311
                        8       4656
                        9       4138 (~ 1h9mn)
        ________________________________________
        iPSC/2          1       6294 (~ 1h45mn)
                        2       3332
                        4       1700
                        8       860
                        16      440
                        32      224
                        64      119 (~ 2mn)
        ________________________________________

The code running on the iPSC/2 emulates a virtual shared memory over the local
PEs.  The database is not duplicated; instead, all the local memories are
used.  The memory remaining after loading code and data is used as a cache to
speed up slow global accesses.

-----

        In order to benchmark our parallel raytracer running on an iPSC/2,
which uses Eric Haines' NFF file format as input, we would like to know
whether other people have parallel raytracers using these databases, so that
we can make some comparisons.

        One of our goals is to render the largest possible database.  For the
moment, we have rendered the 'tetra10' database:
        - the database contains more than 1 million polygons (1048576
          polygons)
        - the size of the database with its 'object access structure' (a
          grid) is 140 MB.
        - the synthesis time is 526 seconds on the iPSC/2 with 64 nodes
          and 4 MB of node memory.
        - however, because the NFF input is a text file format, reading the
          input is very slow (more than 3 hours for 'tetra10') when using
          YACC and LEX.  Furthermore, our iPSC/2 configuration does not have
          an I/O node system.

________________________________________________________________
  Didier  BADOUEL                       badouel@irisa.fr
  INRIA / IRISA                         Phone : +33 99 36 20 00
 Campus Universitaire de Beaulieu       Fax :   99 38 38 32
 35042 RENNES CEDEX - FRANCE            Telex : UNIRISA 950 473F
________________________________________________________________

-------------------------------------------------------------------------------

Concerning Smart Pixels, by John S. Watson
	From: watson@ames.arc.nasa.gov (John S. Watson)
	Newsgroups: comp.graphics
	Organization: NASA - Ames Research Center

In article <207400033@s.cs.uiuc.edu> mccaugh@s.cs.uiuc.edu writes:
>
> Does anyone know of a system with "smart" pixels? 

Once upon a time I wrote a ray tracer in which the pixels used heuristics to
determine their sampling rate.  Since the reason for doing it was to speed
things up, the heuristic had to be simpler than casting a ray (or rays, if
sub-sampling).  I used the difference in previous pixel values, with a little
randomness tossed in.  So pixels changing quickly were sampled every frame,
while pixels that were hardly ever changing were sampled only once every 10
frames.  The results:  much faster, but with a graininess on the edges of
moving objects.  I needed to make each pixel more aware of what was happening
with its neighbors.  Never got around to doing that.

Another problem is that big pictures have lots of pixels ... 512x512 = 0.25
million.  To be smart, a pixel must have a memory.

To save memory, I combined the above with an Area-of-Interest/Variable Acuity
(AOIVA) Ray Tracer.

Hope this helps, 
John S. Watson, Civil Servant from Hell        ARPA: watson@ames.arc.nasa.gov 
NASA Ames Research Center                      UUCP:  ...!ames!watson
Any opinions expressed herein are, like, solely the responsibility of the, like,
author and do not, like, represent the opinions of NASA or the U.S. Government.

-------------------------------------------------------------------------------

Input Files for DBW Render, by Tad Guy
	From: tadguy@cs.odu.edu (Tad Guy)
	Newsgroups: comp.graphics
	Organization: Old Dominion University, Norfolk, VA

In article <6475@pt.cs.cmu.edu> te07@edrc.cmu.edu (Thomas Epperly) writes:
   I was wondering if anyone had any neat input files for DBW_Render available
   for anonymous ftp.

It's on xanth.cs.odu.edu as /amiga/dbw.zoo.  If you have a network of, say,
Sun workstations, you might as well get /amiga/distpro.zoo, which will allow
you to distribute the computations over many machines.

-------------------------------------------------------------------------------

Intersection with Rotated Cubic Curve Reference, by Richard Bartels
	From: rhbartels@watcgl.waterloo.edu
	Newsgroups: comp.graphics
	Organization: U. of Waterloo, Ontario

In article <1445@tukki.jyu.fi> toivanen@tukki.jyu.fi (Jari Toivanen) writes:
 :
 :I would like to know is there any simple and effective solution to
 :following problem:
 :
 : [intersecting a ray with a rotated cubic curve]
 :
 :Jari Toivanen                           Segments are for worms ;-)
 :University of Jyvaskyla, Finland        Internet: toivanen@tukki.jyu.fi

Look at the article:

        Ray Tracing Objects Defined By Sweeping Planar Cubic Splines
        Jarke J. van Wijk
        ACM Transactions on Graphics
        Vol. 3, No. 3, July, 1984, pp. 223-237

I believe that the author subsequently wrote a whole book on the subject.

[Incidentally, this article also has a method for quickly intersecting a tight
fitting bounding volume around such curves.  I've found this test useful for
use as a torus bounding volume.  Also, does anyone know of the existence and
the name of the book Richard mentions?  - EAH]

-------------------------------------------------------------------------------

Needed: Quartz surface characteristics, by Mike Macgirvin
	From: mike@relgyro.stanford.edu
	Newsgroups: comp.graphics
	Organization: Stanford Relativity Gyro Experiment (GP-B)

I am in need of the surface characteristics for fused quartz:  ambient,
diffuse, and specular color characteristics, Phong coefficient, reflectance,
and transparency.  I have the index of refraction (well, I have to average it,
c'est la vie).

I have attempted to derive these experimentally, but find the resulting traced
image lacking in many ways, and a simulation visualization I am working on
requires accuracy.

I am using Kolb's 'rayshade' if it affects your responses.

Please respond via e-mail if possible.

-------------------------------------------------------------------------------

Solution to Smallest Sphere Enclosing a Set of Points, by Tom Shermer
	From: shermer@cs.sfu.ca
	Newsgroups: comp.graphics
	Organization: School of Computing Science, SFU, Burnaby, B.C. Canada

>I need the solution for the following problem:
>find the smallest sphere that encloses a set of given points, in both
>2-D and 3-D (or even n-D, if you like).
>

This problem can be solved in linear time (in any fixed dimension)
by the technique of prune-and-search (sometimes called ``Megiddo's
Technique''), either directly or by first converting the problem
to a linear program.  The most relevant reference (for 2d & 3d) is:

Linear-time Algorithms for Linear Programming in R^3 and Related Problems,
        Nimrod Megiddo, SIAM J. Comput, v. 12, No. 4, Nov 1983, pp. 759-776.


Other related references:

Linear time algorithms for two- and three- variable linear programs,
        M.E. Dyer, SIAM J. Comput, v. 13, 1984, 31-45.

On a multidimensional search technique and its application to the
Euclidean one-center problem,
        M. E. Dyer, Dept. Math and Stats TR, Teesside Polytechnic,
        Middlesbrough, Cleveland TS1 3BA, UK, 1984.

Linear programming in linear time when the dimension is fixed,
        N. Megiddo, JACM 31, 1984, 114-127

The weighted Euclidean 1-center problem,
        N. Megiddo, Mathematics of Operations Research 8, 1983, 498-504

On the Ball Spanned by Balls
        N. Megiddo, manuscript (this may have appeared in the literature
        by now)

-------------------------------------------------------------------------------

True Integration of Linear/Area Lights, by Kevin Picott
	From: kpicott@alias.UUCP
	Newsgroups: comp.graphics
	Organization: Alias Research Inc.

Has anyone seen any work done on evaluating the diffuse and specular
illumination produced by linear and/or area lights?  I have checked all the
standard sources, and all the information I find gets to the point where the
integration is set up, and then a little hand waving is performed accompanied
by the magical words "numerically integrated".  This works, but is too slow
for my purposes.  Does anyone know of any work done in different directions
(i.e., faster evaluation)?

--------

Thanks to all who replied to my query about linear and area lights.

In the area of linear lights, two papers on analytical solutions were found.
The first, by John Amanatides and Pierre Poulin has been submitted to
Eurographics '90 and I'll hopefully get a look at that soon.

The second, "Shading Models for Point and Linear Sources", ACM Transactions
on Graphics, 4(2), April 1985, pp. 124-146, by T. Nishita, I. Okamura, and
E. Nakamae, proposes an analytic solution to the diffuse component, but only
under certain circumstances.

The latter unfortunately reduces to numerical integration in the majority of
cases where spline surfaces are involved, although a method of optimization is
given that reduces computation time for the numerical integration.  This
method would seem to be suited to lighting parallel and perpendicular to the
illuminated surfaces.

There was also a paper entitled "A Comprehensive Light-Source Description for
Computer Graphics", IEEE CG&A, July 1984, by Channing P. Verbeck and Donald
P. Greenberg that approximates both linear and area light sources as a series
of point sources.  This is a compromise to numerical integration, but is still
computationally expensive.

In summary, the analytical solution for linear sources exists and is
calculable, at least for the diffuse component.  The specular component
exists, but direct calculation is almost as expensive as numerical integration.

As far as area light sources are concerned... no analytical solutions were
found.  In fact, from the work examined I was left with the impression that
even if a solution existed it would not be very useful from a light
illumination point of view (i.e., non-radiosity).  (Comments?)

--
 Kevin Picott   aka   Socrates   aka   kpicott%alias@csri.toronto.edu
 Alias Research Inc.  R+D     Toronto, Ontario... like, downtown

-------------------------------------------------------------------------------
END OF RTNEWS

From erich@wisdom.graphics.cornell.edu Thu Mar 22 18:05:17 1990
Return-Path: <erich@wisdom.graphics.cornell.edu>
Received: from devvax.TN.CORNELL.EDU by turing.cs.rpi.edu (4.0/1.2-RPI-CS-Dept)
	id AA23719; Thu, 22 Mar 90 18:04:06 EST
Received: from WISDOM.GRAPHICS.CORNELL.EDU by devvax.TN.CORNELL.EDU (5.59-1.12/1.3-Cornell-Theory-Center)
	id AA09563; Thu, 22 Mar 90 15:44:38 EST
Date: Thu, 22 Mar 90 15:43:14 est
From: erich@wisdom.graphics.cornell.edu (Eric A. Haines)
Received: by wisdom.graphics.cornell.edu (14.4.1.1/2.0nn-Program-of-Computer-Graphics)
	id AA21312; Thu, 22 Mar 90 15:43:14 est
Message-Id: <9003222043.AA21312@wisdom.graphics.cornell.edu>
To: FISHER@3D.dec@decwrl.dec.com, arvo@apollo.hp.com, atc@cs.utexas.edu,
        barr@csvax.caltech.edu, barsky@miro.berkeley.edu,
        bcorrie@uvicctr.uvic.ca, chapman@fornax, chet@cis.ohio-state.edu,
        ckchee@dgp.toronto.edu, daniel@apollo.com, dk@csvax.caltech.edu,
        cychosz@ecn.purdue.edu, esl0422@ultb.isc.rit.edu,
        glassner.pa@xerox.com, grant@delvalle.llnl.gov,
        gray@rhea.CRAY.COM@uc.msc.umn.edu, green@compsci.bristol.ac.uk,
        hanrahan@princeton.edu, hench@csclea.ncsu.edu,
        hohmeyer@miro.berkeley.edu, raylist@hpfcjo.hp.com,
        devvax!hpfcla.uucp!hpfcse!hpurvmc!koz, devvax!hplabs.uucp!dana!mrk,
        hultquis@prandtl.nas.nasa.gov, zmel02@image.trc.amoco.com,
        jakob@humus.huji.ac.il, jeff@hamlet.caltech.edu, johnf@apollo.com,
        joy@ucdavis.edu, kolb@yale.edu, kyriazis@turing.cs.rpi.edu,
        vedge!kaveh@larry.mcrcim.mcgill.edu, lister@dg-rtp.dg.com,
        litwinow@apple.com, m-cohen@cs.utah.edu, markc@emx.utexas.edu,
        dutio!fwj@mcvax.cwi.nl, dutrun!wim@mcvax.cwi.nl, mja@sierra.llnl.gov,
        mplevine@phoenix.princeton.edu, carl@mssun7.msi.cornell.edu,
        paul@sgi.com, ph@miro.berkeley.edu, avatar!kory@quad.com,
        ray-tracing-news@wisdom.graphics.cornell.edu,
        raycasting@duke.cs.duke.edu, raytrace@cpsc.ucalgary.ca,
        rgb@caen.engin.umich.edu, jaf@squid.graphics.cornell.edu,
        lytle@tcgould.tn.cornell.edu, tim@csvax.caltech.edu,
        megatek!kuchkuda@ucsd.edu
Subject: RT News, part 1 of 2
Status: R


 _ __                 ______                         _ __
' )  )                  /                           ' )  )
 /--' __.  __  ,     --/ __  __.  _. o ____  _,      /  / _  , , , _
/  \_(_/|_/ (_/_    (_/ / (_(_/|_(__<_/ / <_(_)_    /  (_</_(_(_/_/_)_
             /                               /|
            '                               |/

			"Light Makes Right"

			  March 20, 1990
		        Volume 3, Number 2

Compiled by Eric Haines, 3D/Eye Inc, 2359 Triphammer Rd, Ithaca, NY 14850
    wrath.cs.cornell.edu!eye!erich
All contents are US copyright (c) 1990 by the individual authors
Archive locations: anonymous FTP at cs.uoregon.edu (128.223.4.13), /pub and at
		   freedom.graphics.cornell.edu (128.84.247.85), /pub/RTNews
Unofficial site: uunet.uu.net [192.48.96.2], /graphics
UUCP access: check Vol 3, No 1 or Kory Hamzeh (quad.com!avatar!kory) for info.

Contents:
    Introduction
    New People, Address Changes, etc
    FTP Site List, by Eric Haines
    RayShade Posting and Patches and Whatnot, by Craig Kolb
    Common Rendering Language, by Eric Haines
    Avoiding Re-Intersection Testing, by Eric Haines
    Torus Equation Correction, by Pat Hanrahan
    "Introduction to Ray Tracing" Shading Model Errata, by Kathy Kershaw
    Comments on Last Issue, by Mark VandeWettering
    An Improvement to Goldsmith/Salmon, by Jeff Goldsmith
    Fiddling with the Normal Vector, by H. 'ZAP' Anderson
    A Note on Texture Sampling, by Eric Haines
    Unofficial MTV Patches, by Eric Haines
    ======== USENET cullings follow ========
    OFF Databases, by Randi Rost
    VM_pRAY is now available, by Didier Badouel
    Superquadrics, by Prem Subrahmanyam, Patrick Flynn, Ron Daniel, and
	Mark VandeWettering
    Graphics Textbook Recommendations, by Paul Heckbert, Mark VandeWettering,
	and Kent Paul Dolan
    Where To Find U.S. Map Data, by Dru Nelson
    References for Rendering Height Fields, Mark VandeWettering
    RayShade 3.0 Released on comp.sources.unix, by Craig Kolb
    Bibliography of Texture Mapping & Image Warping, by Paul Heckbert
    More Texturing Functions, by Jon Buller
    Ray/Cylinder Intersection, by Mark VandeWettering
    C Code for Intersecting Quadrics, by Prem Subrahmanyam
    Parallel Ray Tracing on IRISs, collected by Doug Turner

-------------------------------------------------------------------------------

Introduction

	Well, it's been awhile.  I've been detoured by making demos for NCGA,
but now that's over and I can catch up with editing the RT News.  Actually,
doing demos was interesting this time around:  I wrote a personal visualizer
("personal" is definitely the right word; so far, only I can use it) for
setting up the view, light sources, materials, and positions of objects (i.e.
no modeling capability per se), along with some animation tools.  Beats the
daylights out of my old visualizer (aka Unix's "vi" editor).  Anyway,
interactive graphics is pretty enjoyable - I'd forgotten.  Nice to have an
image come up in less than an hour, let alone less than a second.

	I've heard from a few people that Silicon Graphics' new machine is
pretty hot:  surface and reflection (environmental) texturing in hardware, and
it's pretty fast at it.  Intergraph evidently has some ability to change the
material attributes of objects in a ray traced image and have the changed
image computed and displayed quite rapidly.  They claim to NOT be storing the
intersection trees for all of the pixels, so I'm looking forward to seeing
this myself and figuring out how they do it (my current theories of "it's
just a videotape shown on the screen" and "they've made a pact with the
devil" not being the most scientific).

	By the way, the "Introduction to Ray Tracing" book is not out-of-print.
Evidently there was a temporary shortage, but it has been reprinted and
should be back in stock now.

-------------------------------------------------------------------------------

New People, Address Changes, etc


# Eric Chet - acceleration
# 4047 Forest Hill Drive
# Lorain, Ohio, 44053
alias   eric_chet   chet@cis.ohio-state.edu

     My main interest in ray tracing now is acceleration.  I'm developing a
ray tracer in 680x0 assembler to make it as efficient as possible with the
algorithms I'm implementing.  Spatial subdivision and ray coherence are the
techniques I'm working with.

--------

John McMillan, Department of Physics
	       University of Leeds
	       Leeds LS2 9JT
	       West Yorks
	       Great Britain
phy6jem@cms1.ucs.leeds.ac.uk

My interest is slightly different from your normal application of raytracing.
I'm not interested in images at all.  I design scintillation detectors in
which light is produced in a medium in response to a charged particle passing
through.  Some of this light eventually makes its way to a photomultiplier
(circular window) where it is converted into an electrical signal.  I'm
interested in following rays through various geometries of materials with
different refractive indices and reflection coefficients.  So I guess you can
probably see why I'm interested in Ray Tracing News.

--------

Marinko Laban - shading, anti-aliasing, CSG & Modelling
K.T.I. BV
Bredewater 26
2700 AB Zoetermeer (The Netherlands)
???-079-531521 (Don't know the USA access code)
e-mail address: hp4nl!hp4nl.nluug.nl!ktibv!ml

During my University study, I obtained various copies of your Ray Tracing
Newsletter.  These copies were given to me by the supervisor of my Master
project, M. Frits Post (Tech.University of Delft, Holland).

Under his supervision I've written my Master's Thesis on distributed ray
tracing, and I also wrote an implementation of one.  I've had my Master's
degree for a few months now, and I'm currently working as a CAD software
engineer at Kinetics Technology International.

My professional tasks are developing & evaluating CAD & Engineering software
for Industrial Plant Design.  I also keep myself busy with our UNIX computers,
and all sorts of small bits and pieces.  When I have the opportunity & time I
have some SPARCs do some ray tracing for me ... I also keep my personal
Amiga busy doing all kinds of graphics stuff.

--------

David Tonnesen
Dept. of Computer Science
University of Toronto
Toronto, Ontario  M5S 1A4
(416) 978-6986 and 978-6619

research: modeling and rendering

e-mail:  davet@dgp.toronto.edu   (UUCP/CSNET/ARPANET)
         davet@dgp.utoronto      (BITNET)

I am a Ph.D. student at the University of Toronto, interested in a variety
of graphics topics including ray tracing.  Other areas I am interested in
include deformable models, global illumination models, models of natural
phenomena, physically based modeling, etc...  Basically, modeling and
rendering.

--------

Jim Arvo, of Apollo, has the new email address:

        arvo@apollo.hp.com

--------

Carl Bass, of Ithaca Software, is moving to California.  He should have a
new mailing address soon.  His latest address has been updated to:

	carl@mssun7.msi.cornell.edu

-------------------------------------------------------------------------------

FTP Site List, by Eric Haines

This is my own collection of sites which have ftp-able software or documents
of some relevance to ray tracing.  If you don't know how to use ftp, see
Didier Badouel's article in this issue for an example.  Recall that it's
considered polite to download large amounts of stuff only after business
hours.  If a site has a software name with asterisks around it (e.g. *RT
News*), this means that the site is the "home" of this stuff (or is updated
often enough by the site that the site's offering is usually the most current
version).  The archive manager (more or less) is listed at the end of each
entry.  Please do send on any corrections or clarifications you have.  Sites
are always changing, so please do keep me posted.  I'm not going to bother
with "gif" image file sites (though they are useful for texturing), as the
list would double in size.  The sites below are listed more or less in the
order of most to least extensive ray-trace related material stored.

Mark VandeWettering's MTV ray tracer was posted to comp.sources.unix and is
postings v18i070-072.  Sid Grange's ray tracer is v05i046.  Craig Kolb's
RayShade has just been posted to comp.sources.unix, v21i008.  Note: patch4
is now available for RayShade.

cs.uoregon.edu [128.223.4.13]:  /pub - *MTV ray tracer*, *RT News*, *RT
	bibliography*, other raytracers (including RayShade, QRT, VM_pRAY),
	SPD/NFF, OFF objects, musgrave papers, some Netlib polyhedra, Roy Hall
	book source code, Hershey fonts, old FBM.  Mark VandeWettering
	<markv@acm.princeton.edu>

hanauma.stanford.edu [36.51.0.16]: /pub/graphics/Comp.graphics - best of
	comp.graphics (very extensive), ray-tracers - DBW, MTV, QRT, and more.

weedeater.math.yale.edu [130.132.23.17]:  *Rayshade 3.0 ray tracer*, *color
	quantization code*, Utah raster toolkit, newer FBM.  Craig Kolb
	<kolb@yale.edu>

freedom.graphics.cornell.edu [128.84.247.85]:  *RT News back issues, source
	code from Roy Hall's book "Illumination and Color in Computer
	Generated Imagery", SPD package, Heckbert/Haines ray tracing article
	bibliography, Muuss timing papers.

uunet.uu.net [192.48.96.2]: /graphics - RT News back issues, other graphics
	related material.

life.pawl.rpi.edu [128.113.10.2]: /pub/ray - *Kyriazis stochastic Ray Tracer*.
	George Kyriazis <kyriazis@turing.cs.rpi.edu>

geomag.gly.fsu.edu [128.186.10.2]:  /pub/pics/DBW.src and DBW.microray.src -
	*DBW Render source*, ray traced images.  Prem Subramanyan
	<prem@geomag.fsu.edu>

xanth.cs.odu.edu [128.82.8.1]:  /amiga/dbw.zoo - DBW Render for the Amiga (zoo
	format).  Tad Guy <tadguy@cs.odu.edu>

munnari.oz.au [128.250.1.21]:  */pub/graphics/vort.tar.Z - CSG and algebraic
	surface ray tracer*, /pub - DBW, pbmplus.  David Hook
	<dgh@munnari.oz.au>

cs.utah.edu [128.110.4.21]: /pub - *Utah raster toolkit*.  Spencer Thomas
	<thomas@cs.utah.edu>

gatekeeper.dec.com [16.1.0.2]: /pub/DEC/off.tar.Z - *OFF objects*,
	/pub/misc/graf-bib - *graphics bibliographies (incomplete)*.  Randi
	Rost <rost@granite.dec.com>

expo.lcs.mit.edu [18.30.0.212]:  contrib - *pbm.tar.Z portable bitmap
	package*, *poskbitmaptars bitmap collection*, *Raveling Img*,
	xloadimage.  Jef Poskanzer <jef@well.sf.ca.us>

venera.isi.edu [128.9.0.32]:  */pub/Img.tar.z and img.tar.z - some image
	manipulation*, /pub/images - RGB separation photos.  Paul Raveling
	<raveling@venera.isi.edu>

ftp.ee.lbl.gov [128.3.254.68]: *pbmplus.tar.Z*.

ucsd.edu [128.54.16.1]: /graphics - utah rle toolkit, pbmplus, fbm, databases, 
	MTV, DBW and other ray tracers, world map, other stuff.  Not updated
	much recently.

okeeffe.berkeley.edu [128.32.130.3]:  /pub - TIFF software and pics.  Sam
	Leffler <sam@okeeffe.berkeley.edu>

curie.cs.unc.edu [128.109.136.157]:  /pub - DBW, pbmplus, /pub/graphics - vort.
	Jeff Butterworth <butterwo@cs.unc.edu>

irisa.fr [131.254.2.3]:  */iPSC2/VM_pRAY ray tracer*, /NFF - some non-SPD NFF
	format scenes.  Didier Badouel <badouel@irisa.irisa.fr>

hc.dspo.gov [192.12.184.4]:  {have never connected} Images?

netlib automatic mail replier:  UUCP - research!netlib, Internet -
	netlib@ornl.gov.  *SPD package*, *polyhedra databases*.  Send one
	line message "send index" for more info.

UUCP archive: avatar - RT News back issues.  For details, write Kory Hamzeh
	<kory@avatar.avatar.com>


Non-sites (places which used to have graphics stuff, but do no longer):

albanycs.albany.edu [128.204.1.4]: no longer carries graphics stuff
nl.cs.cmu.edu [128.2.222.56]: /usr/mlm/ftp/fbm.tar.Z - not found.  Michael
	Maudlin <mlm@nl.cs.cmu.edu>
panarea.usc.edu [128.125.3.54](not found?): archive for maps?

-------------------------------------------------------------------------------

RayShade Posting and Patches and Whatnot, by Craig Kolb

[Craig's excellent public domain ray tracer RayShade 3.0 has been posted to
comp.sources.unix.  He has also just recently posted patch4, a set of fixes
for this program.--EAH]

On the ray tracing front, I'm mulling over a total rewrite of rayshade, in an
attempt to make it more flexible/extensible.  Mark and I are talking about
tying together the scanline renderer he's working on with a new version of
rayshade.  It would be nice if they shared a common modeling/texturing
language, etc.  I think it could be quite nice if done correctly.

-------------------------------------------------------------------------------

Common Rendering Language, by Eric Haines

One question which I've received and seen posted on comp.graphics with some
frequency is "what input format/language should I use for scene description?".
As the inventor of NFF (Neutral File Format), I recommend AGAINST this
language as your first choice.  Hey, I made it for testing efficiency schemes -
at one point I considered not even allowing the user to specify colors, but
esthetics got the best of me.  You currently cannot specify light intensity.
As it stands, I don't plan on extending NFF - the language is supposed to stay
as minimal as possible.

I'd recommend anyone interested in this question to look at Craig Kolb's
RayShade language.  It's much fuller, includes many more primitives, texture
functions, instancing, etc.  You could always pick a subset that you have
implemented if the language is too extensive for your tastes.  One very nice
thing provided with RayShade is an Awk script which translates NFF files to
his format.

If you plan on making some money off of whatever you're doing, it's wise to
look at RenderMan.  There are definitely some grey areas in the spec as to how
certain functions actually perform (i.e.  what algorithm is used), as well as
some procedures which force the use of certain algorithms (e.g.  shadow maps).
But most of the language is reasonable and well thought out, the "RenderMan
Companion" is readable (at least you won't have to write documentation if you
choose this language), and certainly other companies are signing on to using
this language.  Caveat:  good luck beating Pixar at its own game in the
marketplace, especially with their years of lead time.

-------------------------------------------------------------------------------

Avoiding Re-Intersection Testing, by Eric Haines

The problem: when shooting a ray through a grid (3DDDA) or octree (or any other
scheme which can put an object into more than one reference location), you
encounter the problem of how to avoid performing the intersection test more
than once for the same ray and same object.  For example, imagine you have a
cylinder which overlaps two grid boxes.  Your ray enters the first grid box and
is tested against the cylinder, missing it.  So, now your ray moves into the
next grid box and the cylinder is listed in this second box.  Obviously, the
ray missed this cylinder on the previous test.  You would like to avoid testing
the cylinder against the ray again to save time.  How to do it?

There are some fairly bad solutions to this problem.  One is to keep a list of
flag bits, one per object.  When an object is tested for intersection, the
flag bit is checked:  if off, then it is set and the full test is performed.
If on, then we know we've tested the ray against the object already and can
move on.  The problem with this solution is that you must reset the flag bits
after each ray, which can be a costly procedure.  For example, imagine you
have 32000 objects in a scene.  Even with 32 bit words used for the flags, you
have to reset 1000 words to zero before EACH ray.  There are variants on this
scheme (e.g. you could store the locations to reset in a list, then simply
reset just those in this list before the next ray), but happily there is a
better solution with much less overhead.  [Note:  if you're really strapped
for memory, you might still want to go with the above algorithm]

The algorithm:  keep a list of integers, one per object - call this the "hit
list" (alternately, simply add this field to each object).  Initialize this
list to zeroes.  Also keep a counter which counts the total number of rays
generated, i.e. when a new ray is generated, the counter is incremented.  When
a ray is to be tested against an object, check the object's hit list value.
If this value does not match the ray's counter number, then set the hit list
value to the ray's counter number and test the object.  If the hit list value
matches the ray's number, then this object has already been tested by the ray.
If you use 32 bit words in the list, you probably won't have to worry about
the ray counter overflowing (2**32 rays is a lot).  However, you could even
use byte length integers for the hit list.  When the ray counter reaches 256,
then you reset the whole hit list to zero and reset the count to 1.  In some
ways this technique is an extension of the flag bit technique, with the cost
of a little more storage offset by the time savings of rarely (if ever) having
to reset the flags.
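A minimal sketch of the technique in C; the Object type and the function names
here are illustrative only, not from any particular ray tracer:

```c
/* Sketch of the "hit list" technique; Object and the function names
 * are illustrative, not from any particular ray tracer. */
typedef struct Object {
    int lastRayID;          /* hit list entry: ID of ray that last tested us */
    /* ... geometry would go here ... */
} Object;

static int RayCounter = 0;  /* total number of rays generated */

/* Call once whenever a new ray is generated. */
int NewRayID(void)
{
    return ++RayCounter;
}

/* Returns 1 if the object still needs an intersection test against
 * this ray, 0 if it was already tested (say, in a previous grid box). */
int NeedsTest(Object *obj, int rayID)
{
    if (obj->lastRayID == rayID)
        return 0;               /* already tested by this ray - skip it */
    obj->lastRayID = rayID;     /* mark as tested */
    return 1;
}
```

With 32 bit IDs the counter will not wrap in practice; with byte-sized
entries you reset the whole list whenever the counter reaches 256, as
described above.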

Over the past few months I've noticed that there are still a few researchers
who do not know about this technique.  Personally, I had to invent this
algorithm for my own use, and others have no doubt done the same.  Asking
around for references, I can see why people would not know about it.  The only
reference that I know which mentions this algorithm is:

%A Bruno Arnaldi
%A Thierry Priol
%A Kadi Bouatouch
%T A New Space Subdivision Method for Ray Tracing CSG Modelled Scenes
%J The Visual Computer
%V 3
%N 3
%D August 1987
%P 98-108
%K CSG

-------------------------------------------------------------------------------

Torus Equation Correction, by Pat Hanrahan

Pat took up my request last RTN to derive the ray/torus intersection equation
on Mathematica.  He found that in fact Larry Gritz's & my derivation still had
one small bug - I left out the subscript of z0 in the very last term of a0.  So,
here's the final, correct equation (I hope).  --EAH

	a4 & a3 - Pat's are OK.
        a2 = 2(x1^2+y1^2+z1^2)((x0^2+y0^2+z0^2)-(a^2+b^2)) 
              + 4 * (x0x1+y0y1+z0z1)^2 + 4a^2z1^2
	a1 = 4 * (x0x1+y0y1+z0z1)((x0^2+y0^2+z0^2)-(a^2+b^2))
		+ 8a^2 * z0 * z1
	a0 = ((x0^2+y0^2+z0^2)-(a^2+b^2))^2 - 4a^2(b^2-z0^2)
							^---I left this out

Pat sent me all of the equations in eqn format - here they are:

----
.EQ
define x0 'x sub 0'
define x1 'x sub 1'
define y0 'y sub 0'
define y1 'y sub 1'
define z0 'z sub 0'
define z1 'z sub 1'
define r11 '( x1 sup 2 + y1 sup 2 + z1 sup 2 )'
define r01 '( x0 x1 + y0 y1 + z0 z1 )'
define r00 '( x0 sup 2 + y0 sup 2 + z0 sup 2 )'
define r00ab '( r00 - ( a sup 2 + b sup 2 ) )'
.EN
.EQ
a sub 4 ~=~ r11 sup 2
.EN C
.EQ
a sub 3 ~=~ 4 r01 r11
.EN C
.EQ
a sub 2 ~=~ 2 r11 r00ab + 4 r01 sup 2 + 4 a sup 2 z sub 1 sup 2
.EN C
.EQ
a sub 1 ~=~ 4 r01 r00ab + 8 a sup 2 z sub 0 z sub 1
.EN C
.EQ
a sub 0 ~=~ r00ab sup 2 - 4 a sup 2 ( b sup 2 - z sub 0 sup 2 )
.EN
----
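For anyone coding this up, here are the same coefficients transcribed into C.
The function name and argument order are my own; a is the major radius, b the
minor radius, and the ray is origin (x0,y0,z0) plus t times direction
(x1,y1,z1):

```c
/* Quartic coefficients c[0..4] for the ray/torus intersection,
 * transcribed from the equations above.  The torus lies in the z = 0
 * plane with major radius a and minor radius b. */
void TorusQuartic(double x0, double y0, double z0,     /* ray origin */
                  double x1, double y1, double z1,     /* ray direction */
                  double a, double b, double c[5])
{
    double r11   = x1*x1 + y1*y1 + z1*z1;
    double r01   = x0*x1 + y0*y1 + z0*z1;
    double r00   = x0*x0 + y0*y0 + z0*z0;
    double r00ab = r00 - (a*a + b*b);

    c[4] = r11 * r11;
    c[3] = 4.0 * r01 * r11;
    c[2] = 2.0 * r11 * r00ab + 4.0 * r01*r01 + 4.0 * a*a * z1*z1;
    c[1] = 4.0 * r01 * r00ab + 8.0 * a*a * z0 * z1;
    c[0] = r00ab * r00ab - 4.0 * a*a * (b*b - z0*z0);
}
```

As a sanity check:  shooting from the origin along the x axis at a torus with
a = 2, b = 1 gives t^4 - 10t^2 + 9 = 0, i.e. hits at t = 1 and t = 3, as
expected.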

-------------------------------------------------------------------------------

"Introduction to Ray Tracing" Shading Model Errata, by Kathy Kershaw

p.148, section on distribution term D, 8th line, should say that alpha is the
angle between N and H.

p.156, section 5.4, 2nd paragraph:  I think Andrew meant "specular
transmission curve".

p.158, in F_dt(lambda) and eq 24 it says "diffuse reflection" instead of
"diffuse transmission" and "diffuse reflectance" instead of "diffuse
transmittance".

-------------------------------------------------------------------------------

Comments on Last Issue, by Mark VandeWettering

Glad to see that you spent the time to include the MTV raytracer in your
timings.  I was meaning to compare them myself at some time, but lacked the
time to do so.

You might be interested to know that I have eked out a little time to make
some modifications to the MTV raytracer.  In particular, I too have switched
to a Goldsmith-Salmon hierarchy generating scheme.  It's been a while since I
benchmarked this one against the one that is available for ftp, but I did
realize significant speedups.  I added some primitive support for super
ellipsoids and "blocks" as well.

The main reason that I haven't released these changes is simple:  Craig Kolb's
raytracer is too good :-) It really is a slick piece of programming.  If it
had 2-d texture mapping, it would be ideal for a large variety of image
rendering tasks.

I also think that more adaptive methods (particularly K-K bounding volumes)
are better under a wide variety of image rendering tasks.  Maybe I should
construct an nff file for the "teapot in a stadium" idea and restore some of
the dignity that the MTV raytracer had by kicking some sand back in Craig's
face :-)

Another place where the MTV raytracer probably falls short is that for
primitive objects, I still take the trouble to intersect their bounding
volumes.  For simple, tightly bounded objects like spheres, this is pretty
wasteful.  Craig's code is fairly careful to weed those out.

If I had a good block of time, I would like to go back and 'do it all over'
with texture mapping primitives and other niceties.  But now that I am out in
the pseudo-real world of Princeton, such blocks of time are hard to come by.

Ah, if one only had infinite time.

Anyway, just an update from this side of the world.

-------------------------------------------------------------------------------

An Improvement to Goldsmith/Salmon, by Jeff Goldsmith

[Background:  as written Goldsmith & Salmon's automatic bounding volume
hierarchy generation algorithm (ABVH) has two insertion methods for an object:
an object can become a new sibling of an existing bounding box, or a new
bounding box can be created which replaces one of the existing siblings, and
the existing sibling and object are put in this new bounding box.]

    A new case for the ABVH programs:  First insertion check-- consider the
new node to be a sibling of the current root node.  That is, consider making a
new root with two children, one being the current root, and the other the new
node.  This should have a cost of 2*Area(new root).  Everything else is the
same, but this adds a case that I forgot and allows for less bushy roots when
you need them.

    I haven't tested this, and I'm not completely convinced that 2A is the
right cost, but I think it is.  Since you use this algorithm, I'd appreciate
some trials to see if it ever happens and if it has a (predicted at least)
beneficial result on the final tree.

    By the way, Pete Segal inspired the idea.

[In fact, I tried this out some time ago.  The thrust of Jeff's comments is
that at the root node, the root can only get bushier (more sons added) or
else objects are added to existing sons which have little relation to those
existing sons.  His third case is to consider the root node itself a sibling
of some imaginary "father of the root" (which has only the root as its son and
is the same dimensions).  In this way, the object could also be added to this
"father" and not cause the root to become bushy.

This explanation implies a simple change to existing code:  simply create this
"father of the root" node at the start, and any time the above condition of a
sibling being added to this father occurs, again create the father for this
new root (i.e. that used to be the father of the root).

As an example, imagine some complicated scene that contains five spheres as
light sources.  The light sources are some distance outside the rest of the
scene, i.e. not in the bounding volume containing this data.  You now try to
add these light sources.  Under the old algorithm these lights would normally
get added as separate siblings of the root node.  So, when ray tracing, you
would always attempt to intersect all of these light sources when you first
looked in the root box.  The new algorithm should cause the root to be less
bushy and the lights may become a subcluster.  At least in theory - I still
want to think about it some more...--EAH]
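For concreteness, here is a sketch of costing Jeff's new case under the
Goldsmith/Salmon surface-area model.  The Bounds type and function names are
assumptions of this sketch, not code from any released ABVH implementation:

```c
/* Surface area of a bounding box - the Goldsmith/Salmon proxy for the
 * probability that an arbitrary ray hits it. */
typedef struct { double min[3], max[3]; } Bounds;

double SurfaceArea(Bounds b)
{
    double dx = b.max[0] - b.min[0];
    double dy = b.max[1] - b.min[1];
    double dz = b.max[2] - b.min[2];
    return 2.0 * (dx*dy + dy*dz + dz*dx);
}

Bounds Union(Bounds a, Bounds b)
{
    Bounds u;
    int i;
    for (i = 0; i < 3; i++) {
        u.min[i] = a.min[i] < b.min[i] ? a.min[i] : b.min[i];
        u.max[i] = a.max[i] > b.max[i] ? a.max[i] : b.max[i];
    }
    return u;
}

/* Jeff's new case: make a new root with two children, the old root and
 * the new object; its cost is 2 * Area(new root). */
double NewRootCost(Bounds root, Bounds obj)
{
    return 2.0 * SurfaceArea(Union(root, obj));
}
```

This cost would be compared against the costs of the two existing insertion
cases, taking the cheapest.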

-------------------------------------------------------------------------------

Fiddling with the Normal Vector, by H. 'ZAP' Anderson
	(y88.l-karvonen@linus.ida.liu.se)

    When I wrote my first raytracer, it was in Basic, only spheres, one
lightsource, checkered ground, and rather primitive.  A while has passed since
then, and I have put behind me quite a few years of Raytracing both in my mind
and in my computer.  Not being fortunate enough to own a Cray III, or even a
VAX 11/780, but a mere CompaQ 386/20e, I have a certain passion for tricks
that enhance the picture quality without adding to the calculation time.  So,
besides Texture Mapping, my favorite hat trick is 'Fiddling with the Normal
Vector'.

    My first trick is the simplest, and in my opinion the best (isn't it
always so?).  But first, some history:  WHAT is it the human eye wants to
see?  WHAT makes a picture look 'real'?  What makes a picture 'feel'
realistic?  There are, of course, MANY answers, but one of them is:  Specular
Highlights.

    I was at a demo at Hewlett Packard in Stockholm, and they showed me (with
badly hidden pride :-) their TurboSRX solids rendering engine.  When the demo
guy started to spin nurbs and superquadrics in realtime before my very eyes,
those eyes fell upon the specular highlights on the surfaces.  They moved along
as the surface twisted, and I thought:  'gosh, THAT looks real!'  Something
VERY important for the eye, and our mind, to really 'feel' the solidity of an
object, is when the specular highlights crawl around its surface as we twist
them (sorry for you Radiosity fans :-).  But WHY does a computer image need
more specular highlights?  Aren't those only seen on cylinders and spheres, and
perhaps (in a lucky instance) on another surface?  The answer is a very very
big NO!

    Consider a coin.  Place it on the table in front of you.  Basically, this
coin is a rather short cylinder.  And if we would render it as such, it would
look no way like a coin.  Why is that?  Clue:  The microscope!  As we watch
closely, we see that the coin's edge is a bit rounded by years of transfer on
the black market.  The coin on your table almost ALWAYS has SOME part of its
edge in a direction that mirrors light, and you have a specular highlight.
Taken further, this applies to ALL edges.  NO edge in natural life is an exact
90 degree edge.  ALL edges are a bit rounded, and therefore have a big
probability of generating a specular highlight.  Just look around the room, and
you'll know I'm right.

    Here goes the simple trick.  Imagine a cylinder in the XY plane with a
radius of 10 and a height of 10.  When we want to obtain the normal vector, we
normally get one pointing straight up (for the top face) and one pointing in
one direction in the XY plane, depending on intersection point.  This gives us
the simple cylinder, and nothing like the coin mentioned above.  BUT now we
twiddle a bit with our famous Normal Vector:  IF we hit the cylinder close
to the top face (say within .1 of cylinder height) we gently tweak the normal
upwards, scaled such, that when we reach the top face exactly, we have twisted
it UP along the Z axis 45 degrees.  Now do the same for the top face.  When
you approach the edge of the cylinder, maybe .9 of the radius, you gently
twiddle the little N vector outwards, again, scaled to obtain 45 degrees
exactly at the edge.  The result:  A cylinder with a rounded edge, without
extra calculation time or extra surfaces!!  I have implemented this myself,
and the resulting increase in image quality is stunning.  Everything that once
looked like synthetic and unreal cylindric objects, now turns out to look VERY
real, once they get that little extra shine on the edge.  This, of course,
applies to ALL edges, and is just as simple to implement on other primitives.

    Another 'Normal Vector Tricky' is the 'Modified surface smoothing' I use.
Consider a staircase looking surface, and 'standard' smoothing, with one
normal per vertex:  (now watch Fig 1!!)

Fig. 1                                 Fig. 2
                                   
     I      N12     N23                I
     I    /       /                    I--N1  N2
     I  /       /                      I      I
     I/       /                        I      I       
     ---------                         --------------
             I                                      I
             I                                      I---N3
             I                                      I

    Imagine the standard smoothing applied to this surface.  The surface
between the 'vertex normals' N12 and N23 would be shaded as if it ALL had a
normal vector same as N12 (or N23)!!  That isn't correct, is it?  Behold Fig.
2!  With one normal vector per surface, but smoothing from the center of
surface 1 to the center of surface 2, then smooth onwards from surf.  2 to
surf.  3, you will get a better result, PLUS you save yourself the trouble of
keeping interpolated vertex normals!

    Now, no 'real' surfaces are perfect.  Take a close look at your
neighbours' Ferrari in sunlight.  You will see that it isn't a perfect
spline, nurb, or anything like that.  This can be simulated with gentle
surface rippling a la' DBW-render, or by actually MAKING the surface
non-perfect.  But there is another way.  When interpolating normals, you may
use other functions than standard linear interpolation.  Maybe use the square
root of the distance to the normals, or the distance squared?  This will yield
a surface 'not so smooth' as a perfectly smoothed may be, something to
consider?

    Another thing I am currently trying to implement in my tracer, is
something I have called 'profile mapping' (you may have been using a different
term?)  where I supply a 2d description of normal-vector-tweaking as a
function of local surface coordinates.  Simply put:  Texture mapping for the
Normal.  I may be able to generate the engravings on our coin in the previous
example, or other subtle surface characteristics.  Has anyone any experience in
this field?  Please let me know!

    And finally, a question:  I would very much like to implement Octrees in
my tracer, but I haven't REALLY understood how this works.  Could somebody, in
a few words, explain the basics for little ol' me?

ThanX in advance!!  /H 'ZAP'

-------------------------------------------------------------------------------
	
A Note on Texture Sampling, by Eric Haines

[this is edited from a note I wrote to `Zap' Anderson about sampling texture
maps.  I hope it helps someone out there to understand the problem a bit
better.]

	There are two problems that need solving when using texture maps:
interpolation (aka magnification) and decimation.  Interpolation is needed
whenever less than one texel (a sample on a texture map) covers a pixel.
Decimation is needed when more than one texel falls into a pixel.  Actually,
the number is really "2 texels or less/more", due to the Nyquist frequency
(see Heckbert's thesis), but forget that for now.

	Interpolation is relatively easy:  if a texel covers more than one
pixel, then we should consider the texture map (if it's an image) to be
representing a continuous sampling of something (e.g. the mandrill).  So,
each texel is just a sample along a continuous function.  In this case, when
we hit such a texel, we take the exact location on the function and
interpolate the color from the nearby texels (e.g. you could use bilinear
interpolation, which usually works well enough).  Note that you might not
always want to interpolate, e.g. if your map is a checkerboard, then you may
want the sharp edges between the squares, not blurred edges.
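As a sketch of the bilinear case, assuming a grayscale texture stored as
texel[v][u] (the names and storage layout are illustrative only):

```c
/* Bilinear interpolation of a grayscale texture texel[v][u],
 * u and v in [0,1).  The four texels around the sample point are
 * weighted by how close the point is to each of them. */
double Bilerp(double **texel, int width, int height, double u, double v)
{
    double fu = u * (width  - 1);    /* continuous texel coordinates */
    double fv = v * (height - 1);
    int    iu = (int) fu,  iv = (int) fv;
    double du = fu - iu,   dv = fv - iv;

    return texel[iv][iu]     * (1.0-du) * (1.0-dv)
         + texel[iv][iu+1]   *      du  * (1.0-dv)
         + texel[iv+1][iu]   * (1.0-du) *      dv
         + texel[iv+1][iu+1] *      du  *      dv;
}
```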

	Decimation is where the fun begins:  you want to take the values of
all the texels in the pixels and blend them.  You also want to do this
quickly, i.e. if 10,000 texels fall in the pixel, you want to quickly get
the sum of all of these samples.  This usually means doing some preprocess
like mipmapping - see Lance Williams' paper in SIGGRAPH 83, first article.
There are lots of other schemes (this topic is one of the great paper
generators), see Paul Heckbert's thesis (highly recommended) on this.

[Zap talked about what you should use as your criteria for anti-aliasing: edge
detection or luminosity differential (i.e. color change)]

	Edge vs. luminosity difference detection is an interesting question:
you actually probably want both, sorta.  Doing all the edges might be overkill,
since you could have something like a car body made of 36,000 polygons, with
each polygon being almost exactly the shade of the next one (esp. with
interpolated normals at the corners).  In this case, edges are a waste of time,
and you probably also want a threshold luminosity to use for checking if the
edge is worth antialiasing.

-------------------------------------------------------------------------------

Unofficial MTV Patches, by Eric Haines

These are the patches I've personally made to the MTV ray tracer, from bugs
that Mark VandeWettering has noticed and from my own experience.  They are
unofficial patches, and Mark hopes to have official ones out sometime.
The patches fix a cylinder rendering bug and a coredumping bug in the box
intersector.  There are also fixes to make the statistics more descriptive.

old/data.c
new/data.c
32a33
> /* Mark's default prune value
33a35,36
> */
> Flt		minweight = 0.0 ;	/* no pruning */
old/antialiasing.c
new/antialiasing.c
23a24
> /* >>>>>
24a26,27
> */
> #define MAXRAND		(32767.0)
28a32
> 	/* >>>>>
29a34,35
> 	*/
> 	return (((Flt) rand ()) / ((Flt) MAXRAND));
old/main.c
new/main.c
43a44,46
> 	/* >>>>> */
> 	srand(10001) ;
> 
109c112
< 	printf("number of rays cast:	   %-6d\n", nRays);
---
> 	printf("number of eye rays:	   %-6d\n", nRays);
121a125,152
> }
> 
> /* >>>>> */
> char *
> rindex(sp, c)
>      register char *sp, c;
> {
>   register char *r;
> 
>   r = NULL;
>   do
>     {
>       if (*sp == c)
> 	r = sp;
>     } while (*sp++);
>   return(r);
> }
> 
> char *
> index(sp, c)
>      register char *sp, c;
> {
>   do
>     {
>       if (*sp == c)
> 	return (sp);
>     } while (*sp++);
>   return (NULL);
old/cone.c
new/cone.c
242a243
> 
244a246,248
> 
> 	VecNormalize(cd -> cone_u) ;
> 	VecNormalize(cd -> cone_v) ;
old/intersect.c
new/intersect.c
61a62,64
> 	    /* check if ray is not parallel to slab */
> 	    if ( den[i] != 0.0 ) {
> 
80a84,89
> 	    } else {
> 		/* ray is parallel to slab - see if it is inside slab */
> 		if ( ( obj -> o_dmin[i] - num[i] ) *
> 			( obj -> o_dmax[i] - num[i] ) > 0.0 )
> 		    return ;
> 	    }
104a114
> 	Flt		dendot ;
113c123,128
< 		den[i] = 1.0 / VecDot(ray -> D, Slab[i]) ;
---
> 		dendot = VecDot(ray -> D, Slab[i]) ;
> 		if ( dendot != 0.0 ) {
> 		    den[i] = 1.0 / dendot ;
> 		} else {
> 		    den[i] = 0.0 ;
> 		}
old/screen.c
new/screen.c
50d49
< 	VecNormalize(upvec) ;
66a66,72
> 	 * Make sure the up vector is perpendicular to the view vector
> 	 */
> 
> 	VecCross(viewvec, leftvec, upvec);
> 	VecNormalize(upvec);
> 
> 	/*
71c77
< 	frustrumwidth = (view -> view_dist) * ((Flt) tan(view -> view_angle)) ;
---
> 	frustrumwidth = ((Flt) tan(view -> view_angle)) ;
129c135
< 			Trace(0, 1.0, &ray, color);
---
> 			Trace(0, 1.0, &ray, color, &nRays);
173c179
< 			Trace(0, 1.0, &ray, color);
---
> 			Trace(0, 1.0, &ray, color, &nRays);
238c244
< 				Trace(0, 1.0, &ray, color);
---
> 				Trace(0, 1.0, &ray, color, &nRays);
old/shade.c
new/shade.c
112d111
< 		nReflected ++ ;
115c114,115
< 		Trace(level + 1, surf -> surf_ks * weight, &tray, tcol);
---
> 		Trace(level + 1, surf -> surf_ks * weight, &tray, tcol,
> 			&nReflected);
120d119
< 		nRefracted ++ ;
125c124,125
< 		Trace(level + 1, surf -> surf_kt * weight, &tray, tcol) ;
---
> 		Trace(level + 1, surf -> surf_kt * weight, &tray, tcol,
> 			&nRefracted) ;
old/trace.c
new/trace.c
19c19
< Trace(level, weight, ray, color) 
---
> Trace(level, weight, ray, color, nr) 
23a24
>  int *nr ;
34c35
< 	nRays ++ ;
---
> 	(*nr) ++ ;

======== USENET cullings follow ===============================================

...part 2 will follow separately (it's big!)...

From erich@wisdom.graphics.cornell.edu Thu Mar 22 19:05:01 1990
From: erich@wisdom.graphics.cornell.edu (Eric A. Haines)
Date: Thu, 22 Mar 90 15:43:31 est
Subject: RT News, part 2 of 2

...part 2 of the ray tracing news...

======== USENET cullings follow ===============================================

OFF Databases, by Randi Rost

Wondering where to get the famous teapot data?

As a service to the graphics community, Digital Equipment Corporation has
donated disk space and established an archive server to maintain a library of
(somewhat) interesting objects.  The objects collected are in OFF format.
Documentation on OFF, a library of useful OFF routines, and one or two useful
OFF utilities are also available through this archive server.

The archive server lets you obtain ASCII files across the network simply by
sending electronic mail.  To obtain help about using this service, send a
message with a "Subject:"  line containing only the word "help" and a null
message body to:

	object-archive-server@decwrl.dec.com

To get an index of all that is available through this server, use a subject
line of "send index" instead of "help".  To get a list of objects that are
available use a subject line of "send index objects" and a null message body.

In order to save disk space and transmission time, the more lengthy files
available through this archive are compressed using the UNIX "compress"
utility and then uuencoded so that they may be sent as ASCII files.  For those
of you who don't have access to these utilities, buddy up to someone who does.

As with other archive servers, it is only possible to get small portions of
the database at a time.  Small requests have priority over large ones.  If you
have ftp access, you can copy all of the objects and OFF programs from the
file ~pub/DEC/off.tar.Z on the machine gatekeeper.dec.com.

Please respect the copyright notices attached to the objects in the .off
header files.  The original author usually worked pretty hard to create the
model, and deserves some credit when it is displayed.  If anyone out there
knows something about any of the objects I've left uncredited, please let me
know so that I can include the appropriate credits.

We'd *LOVE* to add to this collection of useful programs and objects.  If
you'd like to submit an object, an OFF program, or an OFF converter of some
type for inclusion in the archive, send mail to:

	object-archive-submit@decwrl.dec.com

We cannot guarantee anything about when submissions will be made available as
part of the object archive, since maintaining the archive is an after-hours
activity.  We can only promise that an interesting or useful object that is
already in OFF format will make it into the archive more quickly than one that
has to be converted from another format and then tested.  To report problems
with the archive server, send mail to:

	object-archive-manager@decwrl.dec.com

This same archive server will also be used to distribute Benchmark Interface
Format (BIF) files for the Graphics Performance Characterization (GPC)
Picture-Level Benchmark (PLB).  These files contain commands that define how a
picture or sequence of pictures is to be drawn.  The time it takes to process
the BIF file on a particular platform can be measured.  It is therefore
possible to create a BIF file that mimics the behavior of your favorite
application area, process it on several platforms to which the PLB code has
been ported, and get an apples-to-apples comparison of exactly the type of
graphics performance that interests you most.

It is planned to release the PLB code and a sample set of BIF files at NCGA
'90.  When this happens, these things will be available as part of the object
archive server, as well as by ftp access from gatekeeper.  People that are
interested in finding out more about PLB and BIF should contact Dianne Dean at
NCGA, 2722 Merrilee Drive, Suite 200, Fairfax, VA 22031, (703) 698-9600 and
request the most current BIF spec.  We are also interested in redistributing
interesting BIF files that people develop, or programs that convert other
database types to BIF.  Such submissions should also be mailed to
object-archive-submit@decwrl.dec.com.

Finally, we have added to the graphics bibliography that is also available
through decwrl.  Bibliographies from the years 1976-1981, 1983, and 1985-1986
are now available for use with the graf-bib server.  This server can be
accessed in the same manner as the object archive server by sending mail to:

	graf-bib-server@decwrl.dec.com

The years 1982 and 1984 have been received and await further editing before
they can be included.  We hope to make them available by the end of the month
as well.  The bibliographies will also be available for ftp access from
gatekeeper once they are all ready to go.  For more information on using the
graf-bib server, see the April 1989 issue of Computer Graphics, pp.  185-186.

Randi J. Rost, rost@granite.dec.com
Workstations Advanced Technology Development
Digital Equipment Corporation

-------------------------------------------------------------------------------

VM_pRAY is now available, by Didier Badouel

>From: badouel@irisa.irisa.fr (Didier Badouel)

VM_pRAY (Virtual Memory parallel RAYtracer) is a parallel raytracer (using
the NFF file format) running on an iPSC/2 and emulating read-only shared
memory.

VM_pRAY is now available from our site (irisa.fr (131.254.2.3)) by anonymous
ftp access.
________________________________________
ftp 131.254.2.3
Name (131.254.2.3:yourname): anonymous
Password: <your ident>

ftp>cd iPSC2/VM_pRAY
ftp>ls
VM_pRAYjan90.tar.Z
spirale.nff.Z
teapot.nff.Z
ftp>mget *
ftp>quit

uncompress *
tar -xvf VM_pRAYjan90.tar
_______________________________________

If you don't have ftp access, send me an e-mail and I will send you the
software by return mail.

As noted in the README file, I maintain a mailing list to announce future
VM_pRAY releases and patches.  For this reason, I would like to hear from
those who obtain this software.

Thanks.
________________________________________________________________
  Didier  BADOUEL                       badouel@irisa.fr
  INRIA / IRISA                         Phone : +33 99 36 20 00
 Campus Universitaire de Beaulieu       Fax :   99 38 38 32
 35042 RENNES CEDEX - FRANCE            Telex : UNIRISA 950 473F
________________________________________________________________

-------------------------------------------------------------------------------

Superquadrics, by Prem Subrahmanyam, Patrick Flynn, Ron Daniel, and
	Mark VandeWettering

>From: daniel@a.cs.okstate.edu (Daniel Ronald E )
Newsgroups: comp.graphics
Subject: Re: Superquadrics
Organization: Oklahoma State Univ., Stillwater

>From article <5991@cps3xx.UUCP>, by flynn@pixel.cps.msu.edu (Patrick J. Flynn):
> In article <438@fsu.scri.fsu.edu> prem@geomag.gly.fsu.edu (Prem Subrahmanyam) writes:
>>     One of the new shapes that [DBW_render] supports is called a 
>>     superquadric.  Now, I've attempted to look up info in IEEE CG&A about
>>     them and found out that the first issue ever to come out had an article
>>     about these, however, our library does not have this issue.  So, can   
>>     anyone point out another source for info about these (the full equation
>>     used for them, and how to do a ray-superquadric intersection (complete
>>     with normal calculation for a given point))?  Thanks in advance......
> 
> Parametric form for a point on a superquad.:
> 
> Let c(e,x) = (cos x)^e
>     s(e,x) = (sin x)^e
> 
> (x(u,v),y(u,v),z(u,v)) = ( c(e1,u)*c(e2,v) , c(e1,u)*s(e2,v), s(e1,u) )
> 
> u and v are parameters of latitude and longitude.  The parameters e1 and
> e2 control the shape of the primitive obtained when you sweep u and v
> over the sphere.  The normal can be obtained by differentiation.
> . . . 
> Patrick Flynn, CS, Mich. State U., flynn@cps.msu.edu

The nice thing about superquadrics is that a wide range of shapes can be
represented by varying only two parameters - e1 and e2.  Cubes, cylinders,
spheres, octahedrons, and double-ended cones are all special cases of a
superquadric.  All of the intermediate shapes are also available.

The equation for the normal vector of a superquadric (SQ) as a function of
longitude and latitude is:

    (Nx(u,v),Ny(u,v),Nz(u,v)) =
       ( c(2-e1,u)*c(2-e2,v)/a1 , c(2-e1,u)*s(2-e2,v)/a2, s(2-e1,u)/a3 )

where e1, e2, u, and v are as before and a1, a2, a3 are the scale factors for
the x,y, and z dimensions of the SQ.  Unfortunately, both these equations
require knowledge of u and v, which is not available for ray-tracing
algorithms.
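When u and v are known (e.g. when tessellating a superquadric rather than ray
tracing it), the parametric point and normal above can be evaluated directly.
A minimal Python sketch; the function names are mine, and the signed-power
convention (sign(x)*|x|^e) is the usual one for superquadrics, though it is
not stated explicitly above:

```python
import math

def spow(base, e):
    # Signed power: sign(base) * |base|**e, the usual superquadric
    # convention, so negative cosines/sines keep their sign.
    return math.copysign(abs(base)**e, base)

def sq_point(u, v, e1, e2, a=(1.0, 1.0, 1.0)):
    # Parametric superquadric point; u is latitude, v is longitude.
    cu, su = math.cos(u), math.sin(u)
    cv, sv = math.cos(v), math.sin(v)
    return (a[0] * spow(cu, e1) * spow(cv, e2),
            a[1] * spow(cu, e1) * spow(sv, e2),
            a[2] * spow(su, e1))

def sq_normal(u, v, e1, e2, a=(1.0, 1.0, 1.0)):
    # Normal direction at (u, v), using the (2-e) exponents given above.
    cu, su = math.cos(u), math.sin(u)
    cv, sv = math.cos(v), math.sin(v)
    return (spow(cu, 2 - e1) * spow(cv, 2 - e2) / a[0],
            spow(cu, 2 - e1) * spow(sv, 2 - e2) / a[1],
            spow(su, 2 - e1) / a[2])
```

With e1 = e2 = 1 this reduces to the familiar sphere parameterization, which
makes a quick sanity check easy.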

A more useful formulation is the implicit equation for superquadrics:

    F(x,y,z) = ((x/a1)^(2/e1) + (y/a2)^(2/e2))^(e2/e1) + (z/a3)^(2/e1)

(Note that if e1=e2=1, this reduces to the implicit equation for a
sphere.)  This equation is for a superquadric centered at the origin; use
standard transformations to translate and rotate it into world coordinates.
If the point (x,y,z) is on the surface of the SQ, F=1.  If (x,y,z) is inside
the SQ surface, F < 1.  If (x,y,z) is outside the SQ surface, F > 1.  Ray-
object intersections can be solved by finding zeros of the function (F-1).
Standard techniques such as Newton or secant methods can be used to find the
zeros, but they need some help.  Depending on the values of e1 and e2, there
can be from 0 to 4 roots.  Finding the closest root can be difficult.  The '89
SIGGRAPH proceedings has an article on restricting the search space of the
root-finder to an area that bounds the first intersection.  This technique
will work for any implicit surface, not just superquadrics.  The article is:

    Devendra Kalra and Alan Barr, "Guaranteed Ray Intersections with Implicit
    Surfaces", Computer Graphics, Vol.  23, No.  3, July 1989, pp.  297-306.
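The inside/outside test is easy to state in code.  A minimal Python sketch
(function names are mine; the x and y terms use the 2/e2 exponent, the
correction noted later in this thread, and abs() keeps fractional powers
real):

```python
def superquadric_F(x, y, z, a1=1.0, a2=1.0, a3=1.0, e1=1.0, e2=1.0):
    # Implicit superquadric function: F=1 on the surface, F<1 inside,
    # F>1 outside.
    xy = abs(x / a1)**(2 / e2) + abs(y / a2)**(2 / e2)
    return xy**(e2 / e1) + abs(z / a3)**(2 / e1)

def classify(x, y, z, eps=1e-9, **kw):
    # Classify a point against the surface using the F=1 criterion.
    f = superquadric_F(x, y, z, **kw)
    if abs(f - 1.0) < eps:
        return "on"
    return "inside" if f < 1.0 else "outside"
```

With the defaults (e1=e2=1, unit scales) this is just the unit sphere, so the
classification is easy to verify by hand.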

The expression for the normal vector as a function only of x,y,z (not u,v) is
more complex.  It can be obtained from the cross-product of the tangent
vectors along the lines of latitude and longitude.  The derivation is
presented in an appendix of

    Franc Solina, "Shape Recovery and Segmentation with Deformable Part
    Models", GRASP Lab Tech. Rept. 128, Dept. Comp. and Info. Sci., U.
    Penn, Philadelphia, PA., Dec. 1987.

This is Franc's PhD dissertation, so it should also be available from
University Microfilms.  The results of the normal vector derivation are:

   nx = 1/x [1-(z/a3)^(2/e1)][(x*a2/y*a1)^(2/e2) / (1+(x*a2/y*a1)^(2/e2))]
   ny = 1/y [1-(z/a3)^(2/e1)][ 1 / (1+(x*a2/y*a1)^(2/e2))]
   nz = 1/z (z/a3)^(2/e1)
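The closed forms above require care with signs and degenerate cases (x, y, or
z near zero).  Since the normal of any implicit surface F=1 is parallel to
the gradient of F, a numerical gradient is a simple alternative.  A hedged
Python sketch (names and defaults are mine, not from the post):

```python
def sq_F(p, a=(1.0, 1.0, 1.0), e1=1.0, e2=1.0):
    # Implicit superquadric function; abs() keeps fractional powers real.
    x, y, z = (abs(c) / s for c, s in zip(p, a))
    return (x**(2 / e2) + y**(2 / e2))**(e2 / e1) + z**(2 / e1)

def numerical_normal(p, h=1e-5, **kw):
    # Estimate the gradient of F by central differences and normalize it.
    grad = []
    for i in range(3):
        lo, hi = list(p), list(p)
        lo[i] -= h
        hi[i] += h
        grad.append((sq_F(hi, **kw) - sq_F(lo, **kw)) / (2 * h))
    length = sum(g * g for g in grad) ** 0.5
    return tuple(g / length for g in grad)
```

This trades a little accuracy and speed for robustness, which is often a fine
deal in a ray tracer's shading step.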

If your library does not have the first issue of IEEE CG&A, you should have
them submit an interlibrary loan request for a copy of Barr's article.  It has
a lot of good material about other superquadric shapes, such as supertoroids.
There is also another article that same year by Franklin and Barr; sorry that
I don't have the reference handy - it is at the office.  However, it deals
more with generating fast polyhedral tilings of SQs than with ray-traced
rendering of them.

Hope this helps.

Ron Daniel Jr.                            Electrical and Computer Engineering
(405) 744-9925                            202 Engineering South
daniel@a.cs.okstate.edu (preferred)       Oklahoma State University
daniel@rvax.ecen.okstate.edu              Stillwater, OK      74078

--------

>From: markv@gauss.Princeton.EDU (Mark VandeWettering)
Organization: Princeton University

Ron Daniel posted a very good, very informative article on superquadrics.  I
thought I would just toss in some (probably obvious) additional information.

Superellipsoids are a useful special case of the general superquadric
equations.  Their interesting geometric property is that they are all convex.
The following ray intersection strategy works rather well, and was implemented
by me, so I know it works :-) The only real limitation is that your eye should
be "outside" the superquadric.

For a sphere, its implicit equation is 
	F(x, y, z) = x^2 + y^2 + z^2

If F(x, y, z) < 1.0, then you are inside the sphere, otherwise, you are
outside.  You could perform intersection with a sphere by:
	1.      finding the point of closest approach of the ray to the
		origin; call this (xc, yc, zc)
	2.      if F(xc, yc, zc) < 1.0, we have an intersection, but we don't
		know where.
	3.      For a sphere there is actually a closed-form formula for the
		intersections.  Alternatively, we have a point which is
		outside (the ray origin) and a point which is inside (xc, yc,
		zc).  Because the sphere is convex, we know that there is
		precisely one point in between for which F(x,y,z) = 1.0, which
		we find by binary searching along the ray.


You can perform the analogous algorithm on superquadrics, and it works the
same way.  The implicit function for superquadrics was given by Daniel as:

F(x,y,z) = ((x/a1)^(2/e1) + (y/a2)^(2/e2))^(e2/e1) + (z/a3)^(2/e1)
		      ^^ error, should be e2....

For simplicity, we can pretend that a1 = a2 = a3 = 1.0.  (These scale the
shape in each of the three directions; scaling is more easily handled by the
transformation matrices we need for rotation anyway, so nothing is lost.)

F(x, y, z) = (x^(2/e2) + y^(2/e2))^(e2/e1) + z^(2/e1)

If F(x,y,z) < 1.0, then we are inside the super quadric, else we are outside.
So, the algorithm runs as follows:

	1:      determine the ray's point of closest approach to the origin;
		call this point (xc, yc, zc)
	2:      if F(xc, yc, zc) < 1.0, then you have a guaranteed hit,
		because (xc, yc, zc) is inside the quadric and your eye is
		"outside".
	3:      because you now have an inside point and an outside point, and
		you know there is a single root in between, you can find it
		quite easily by binary search.  Halve the interval, check
		whether the midpoint is inside or outside, and keep the half
		that still brackets the root until some closeness criterion is
		met.

In practice, your eye may start far away from the superellipsoid; you can
substitute any outside point for the eye.  I use the point where the ray
intersects the bounding box, which I have available anyway.

As long as you stick to the convex forms (spheres, rounded boxes and
cylinders, octahedra), this method works quite well.
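The algorithm above can be sketched in a few lines of Python (function names
are mine; this assumes a unit superellipsoid, a1=a2=a3=1, and a ray origin
outside the surface - in a real tracer you would substitute the bounding-box
entry point, as suggested above):

```python
def F(p, e1=1.0, e2=1.0):
    # Implicit unit superellipsoid; abs() keeps fractional powers real.
    x, y, z = (abs(c) for c in p)
    return (x**(2 / e2) + y**(2 / e2))**(e2 / e1) + z**(2 / e1)

def ray_point(o, d, t):
    return tuple(oc + t * dc for oc, dc in zip(o, d))

def intersect(o, d, e1=1.0, e2=1.0, tol=1e-6):
    """Return the ray parameter t of the first hit, or None.
    o must be outside the surface; d need not be normalized."""
    dd = sum(c * c for c in d)
    # Parameter of the ray's closest approach to the origin.
    tc = -sum(oc * dc for oc, dc in zip(o, d)) / dd
    if tc <= 0 or F(ray_point(o, d, tc), e1, e2) >= 1.0:
        return None             # no guaranteed hit
    lo, hi = 0.0, tc            # F(lo) > 1 (outside), F(hi) < 1 (inside)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if F(ray_point(o, d, mid), e1, e2) < 1.0:
            hi = mid            # midpoint inside: root lies in [lo, mid]
        else:
            lo = mid            # midpoint outside: root lies in [mid, hi]
    return 0.5 * (lo + hi)
```

Bisection converges only linearly; Newton's method on (F-1) is faster once
the root is bracketed, but bisection is robust for any of the convex forms.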

-------------------------------------------------------------------------------

Graphics Textbook Recommendations, by Paul Heckbert, Mark VandeWettering, and
	Kent Paul Dolan

>From: ph@miro.Berkeley.EDU (Paul Heckbert)

It's been a while now since Newman & Sproull and Foley & van Dam came out,
and most of the graphics textbooks since then are either good but
non-comprehensive (e.g. Rogers) or written by typographic idiots for a
readership of PC-hacking kindergarteners.

So I was pleasantly surprised to run across a couple of new graphics textbooks
that are at the same time (1) current, (2) comprehensive, (3) competent, and
(4) not overloaded with graphics standards pap.  They are:

%A Alan Watt
%T Fundamentals of Three-Dimensional Computer Graphics
%I Addison-Wesley
%C Wokingham, England
%D 1989
Discusses radiosity, functionally-based modeling, stochastic sampling, and
Fourier theory, among other topics.

%A Peter Burger
%A Duncan Gillies
%T Interactive Computer Graphics: Functional, Procedural,
and Device-Level Methods
%I Addison-Wesley
%C Wokingham, England
%D 1989
Discusses color image quantization, quaternions, and soft (blobby) objects,
among many other topics.

I haven't read these books, but I was favorably impressed while browsing them
in the bookstore.  Has anybody else run across these books?  Are there other
undiscovered gems out there?

Paul Heckbert, CS grad student
508-7 Evans Hall, UC Berkeley		INTERNET: ph@miro.berkeley.edu
Berkeley, CA 94720			UUCP: ucbvax!miro.berkeley.edu!ph

--------

Aye, I was actually going to suggest that they be added above all others to
the Frequently Asked Questions posting, but you beat me to it.  I really like
the Watt book, and Burger and Gillies is also a good introductory text - much
better, in my opinion, than the old Foley & van Dam or Rogers (my old
standby).  I liked Watt's book so much that when my sister's dog ate it over
Christmas break, I immediately replaced it :-)

Both books have good sections about raytracing, and would make a good addition
to your reference shelf.

Mark T. VandeWettering

--------

After a quick skim of the book (I looked at the contents, figures, listings,
and appendices, and read the intro), it seems the perfect transition for me
from moving clipped 3D wireframe graphics, which I've done, to rendering.

Watt takes the reader step by step through:

the basics (a simple 3D model, transformations, deformations, viewing systems,
the viewing pipeline);

reflection models (Phong, Cook & Torrance);

shading (Gouraud & Phong);

rendering (rasterization, hidden surface, composite models);

parametric representations (curves, surfaces, scan conversion);

ray-tracing (recursion, intersections, reflections & refraction, illumination,
shadows, distributed processing, anti-aliasing, adaptive depth control, fancy
bounding volumes, first hit speed up (?!?), spatial coherence, octrees, BSP
trees);

diffuse illumination and radiosity;

shadows, texture, and environment mapping;

functionally based modelling (particle systems, fractal systems, 3D texture
functions);

anti-aliasing (artifacts and Fourier theory, supersampling or postfiltering,
prefiltering or area sampling, stochastic sampling);

animation (key frame, parametric, programmed with scripting, model
driven/simulated, and temporal anti-aliasing);

color science (application, monitors, NTSC, models, colorimetry, the CIE
standard, realistic rendering and reflection models);

and appendices covering viewing transformations, wireframes, implementation of
a renderer, the Utah Teapot data set, a bit of theory, and highlight
detection.

[There, I didn't _quite_ type in the whole table of contents!  ;-) ]

Algorithms are in Pascal.  Coding tricks for speed are mentioned occasionally,
but the main emphasis is on clarity and correspondence to the theory, a
deliberate choice by the author.

Graphics standards (PHIGS, PHIGS+, GKS-3D) are mentioned here and there in
passing, but the model used throughout the book uses just two implementation
dependent calls:  paint pixel xy in color rgb, and inquire color rgb of pixel
xy; much too simple a graphics system to require a "standard".

I am too much of a newbie here to "recommend" a book I've barely opened, but
I'm sure going to enjoy this one; it covers exactly what I wanted to know and
seems to have the right level of detail, too.

xanthian@well.sf.ca.us  xanthian@ads.com (Kent Paul Dolan)

-------------------------------------------------------------------------------

Where To Find U.S. Map Data, by Dru Nelson

>From: dnelson@mthvax.cs.miami.edu (Dru Nelson)
Newsgroups: comp.graphics

I have searched far and wide and have come up with a couple of sources for
U.S. map data.

HANAUMA.STANFORD.EDU (36.51.0.16) has the world map, as does the UCSD.EDU
anonymous FTP site.  The database is the CIA World Data Bank II, and it
contains the data plus some source files explaining the format.  World Data
Bank II is public domain.  I believe it also has the coordinates of some
major cities.  The data includes land boundaries, rivers, political
boundaries, and one other thing I can't remember ;-)

The U.S. map and state lines can also be purchased from Austin Code Works for
a small fee.  This is also public domain data.

At UCSD.edu there is also a Mercator projection.

One last interesting database is on uunet.uu.net: a large Census Bureau file.
I didn't retrieve it, but it might help somebody out.  I read an article in
Byte reporting that the Census Bureau has maps of all the roads on CD, and
this might be one of those files.  It might be handy to play with if you
don't have the most recent CD of the data yet.

-------------------------------------------------------------------------------

References for Rendering Height Fields, Mark VandeWettering

Newsgroups: comp.graphics

>Could anyone recommend some good papers or books discussing the
>rendering of height fields?  A friend of mine is implementing a
>height field raytracer and any references would be extremely useful.

I liked the following paper when I first read it; it proved useful in a
couple of ways.

%A Sabine Coquillart
%A Michael Gangnet
%T Shaded Display of Digital Maps
%J IEEE Computer Graphics and Applications
%P 35-42
%D July, 1984
%K maps, terrain, ray tracing, priority list
%X Several methods for displaying height fields are presented.  
Bilinear interpolation of patches is used to define the surface.  Efficient
algorithms, and quite elegant.  Reminiscent of Kajiya's cut planes for
surfaces of revolution.

Also, Musgrave wrote the Yale tech report below.  I ordered it from the Yale
Dept. of Computer Science, but no longer have the contact information there;
perhaps someone else does.  Musgrave's approach was basically to adapt 3DDA
uniform spatial subdivision algorithms to height fields.  I believe this code
is also implemented in Craig Kolb's rayshade program.

%A F. Kenton Musgrave
%T Grid Tracing: Fast Ray Tracing For Height Fields
%R Research Report YALEU/DCS/RR-639
%D July, 1988

-------------------------------------------------------------------------------

RayShade 3.0 Released on comp.sources.unix, by Craig Kolb

Submitted-by: Craig Kolb <craig@weedeater.math.yale.edu>
Posting-number: Volume 21, Issue 8
Archive-name: rayshade/part01

This is version 3.0 of rayshade, a raytracing program.  Rayshade reads
a multi-line ASCII file describing a scene to be rendered and produces
a Utah Raster RLE format file of the raytraced image.

Rayshade features:

	Eight types of primitives (box, cone, cylinder, height field,
	polygon, sphere, superquadric, flat- and Phong-shaded triangle)

	Composite objects

	Point, directional, and extended (area) light sources

	Solid procedural texturing and bump mapping of primitives, objects,
		and individual instances of objects

	Antialiasing through adaptive supersampling or "jittered" sampling

	Arbitrary linear transformations on primitives,
		instances of objects, and texture/bump maps

	Use of uniform spatial subdivision or hierarchy of bounding
		volumes to speed rendering

	Options to facilitate rendering of stereo pairs

	Support for the C-Linda parallel programming language

Rayshade has been tested on many different UNIX-based computers.  If your
machine has a C compiler and enough memory (at least 2Mb), rayshade should
be fairly easy to port.  Be warned that rayshade uses yacc and lex to
process input files.  If you do not have lex and yacc, try to get flex and
bison from the Free Software Foundation folks (ftp to prep.ai.mit.edu).

-------------------------------------------------------------------------------

Bibliography of Texture Mapping & Image Warping, by Paul Heckbert

>From: ph@miro.Berkeley.EDU (Paul Heckbert)
Organization: University of California at Berkeley

Here is a fairly complete bibliography on texture mapping and image warping.
About a year ago I posted a request for such references to comp.graphics
in order to help me with my Master's thesis, and I got a number of helpful
responses.

Incidentally, I finished my Master's thesis in May, and if you're interested
in the properties of simple geometric mappings such as affine, bilinear,
and projective (perspective), or the signal processing theory of resampling
involved in texture filtering and image warping, you might want
to check it out:

    Paul S. Heckbert
    Fundamentals of Texture Mapping and Image Warping
    Master's thesis, UCB/CSD 89/516
    CS Dept, UC Berkeley
    May 1989
    (order from CS Dept, UC Berkeley, Berkeley CA, 94720)
    [I highly recommend it--EAH]

The bibliography that follows includes articles on
    (computer graphics terms:)
	texture synthesis, geometric mapping, texture filtering
    (image processing terms:)
	image warping, geometric correction, distortion, image transformation,
	resampling, image interpolation and decimation, image pyramid

I've excluded references on rendering and signal processing in general,
and most textbooks, since they're generally derivative.
The list is in UNIX refer(1) format. %K is keywords, %Z is my comments.
Please email me any additions/corrections.

Since this list is so long, I'll make recommendations.
The best papers to read first are: Blinn-Newell76, Feibush-Levoy-Cook80,
Catmull-Smith80; and good surveys are Heckbert86 (IMHO), Wolberg88.

Paul Heckbert, Computer Science Dept.
Evans Hall, UC Berkeley			INTERNET: ph@miro.berkeley.edu
Berkeley, CA 94720			UUCP: ucbvax!miro.berkeley.edu!ph

----

%E A. Rosenfeld
%T Multiresolution Image Processing and Analysis
%O Leesberg, VA, July 1982
%I Springer
%C Berlin
%D 1984
%K image pyramid

%A Narendra Ahuja
%A Bruce J. Schachter
%T Pattern Models
%I John Wiley
%C New York
%D 1983
%K tessellation, Voronoi diagram, triangulation,
point process, stochastic process, texture synthesis, height field

%A Masayoshi Aoki
%A Martin D. Levine
%T Computer Generation of Realistic Pictures
%J Computers and Graphics
%V 3
%P 149-161
%D 1978
%K texture mapping
%Z planar parametric mapping, no filtering

%A Alan H. Barr
%T Decal Projections
%B SIGGRAPH '84 Mathematics of Computer Graphics seminar notes
%D July 1984
%K texture mapping, ray tracing

%A Eric A. Bier
%A Kenneth R. Sloan, Jr.
%T Two-Part Texture Mappings
%J IEEE Computer Graphics and Applications
%V 6
%N 9
%D Sept. 1986
%P 40-53
%K texture mapping
%Z projection parameterizations

%A James F. Blinn
%A Martin E. Newell
%T Texture and Reflection in Computer Generated Images
%J CACM
%V 19
%N 10
%D Oct. 1976
%P 542-547
%Z early paper on texture mapping, discusses spherical sky textures
%K texture mapping, reflection, ray tracing

%A James F. Blinn
%T Computer Display of Curved Surfaces
%R PhD thesis
%I CS Dept., U. of Utah
%D 1978
%O (TR 1060-126, Jet Propulsion Lab, Pasadena)
%K texture mapping, bump mapping, shading, patch

%A James F. Blinn
%T Simulation of Wrinkled Surfaces
%J Computer Graphics
(SIGGRAPH '78 Proceedings)
%V 12
%N 3
%D Aug. 1978
%P 286-292
%K bump mapping

%A Carlo Braccini
%A Giuseppe Marino
%T Fast Geometrical Manipulations of Digital Images
%J Computer Graphics and Image Processing
%V 13
%P 127-141
%D 1980
%K image warp

%A Peter J. Burt
%T Fast Filter Transforms for Image Processing
%J Computer Graphics and Image Processing
%V 16
%P 20-51
%D 1981
%Z gaussian filter, image pyramid

%A Brian Cabral
%A Nelson Max
%A Rebecca Springmeyer
%T Bidirectional Reflection Functions from Surface Bump Maps
%J Computer Graphics
(SIGGRAPH '87 Proceedings)
%V 21
%N 4
%D July 1987
%P 273-281

%A Richard J. Carey
%A Donald P. Greenberg
%T Textures for Realistic Image Synthesis
%J Computers and Graphics
%V 9
%N 2
%D 1985
%P 125-138
%K texture mapping, texture synthesis

%A K. R. Castleman
%T Digital Image Processing
%I Prentice-Hall
%C Englewood Cliffs, NJ
%D 1979
%K geometric correction

%A Edwin E. Catmull
%T A Subdivision Algorithm for Computer Display of Curved Surfaces
%R PhD thesis
%I Dept. of CS, U. of Utah
%D Dec. 1974
%Z bicubic patches, 1st texture mapping

%A Edwin E. Catmull
%A Alvy Ray Smith
%T 3-D Transformations of Images in Scanline Order
%J Computer Graphics
(SIGGRAPH '80 Proceedings)
%V 14
%N 3
%D July 1980
%P 279-285
%K texture mapping
%Z two-pass image warp

%A Alan L. Clark
%T Roughing It: Realistic Surface Types and Textures in Solid Modeling
%J Computers in Mechanical Engineering
%V 3
%N 5
%D Mar. 1985
%P 12-16
%K shading
%Z implementation of Cook-Torrance81

%A James J. Clark
%A Matthew R. Palmer
%A Peter D. Lawrence
%T A Transformation Method for the Reconstruction of Functions
from Nonuniformly Spaced Samples
%J IEEE Trans. on Acoustics, Speech, and Signal Processing
%V ASSP-33
%N 4
%D Oct. 1985
%P 1151-1165
%K signal processing, antialiasing
%Z reconstruct from nonuniform samples by warping the samples to be uniform

%A Robert L. Cook
%T Shade Trees
%J Computer Graphics
(SIGGRAPH '84 Proceedings)
%V 18
%N 3
%D July 1984
%P 223-231
%K shading, texture mapping

%A Robert L. Cook
%A Loren Carpenter
%A Edwin Catmull
%T The Reyes Image Rendering Architecture
%J Computer Graphics
(SIGGRAPH '87 Proceedings)
%V 21
%N 4
%D July 1987
%K rendering

%A S. Coquillart
%T Displaying Random Fields
%J Computer Graphics Forum
%V 4
%N 1
%D Jan. 1985
%P 11-19
%K texture

%A Ronald E. Crochiere
%A Lawrence R. Rabiner
%T Multirate Digital Signal Processing
%I Prentice Hall
%C Englewood Cliffs, NJ
%D 1983
%K resampling
%Z rational scale affine 1-D resampling

%A Franklin C. Crow
%T Summed-Area Tables for Texture Mapping
%J Computer Graphics
(SIGGRAPH '84 Proceedings)
%V 18
%N 3
%D July 1984
%P 207-212
%K antialiasing

%A William Dungan, Jr.
%A Anthony Stenger
%A George Sutty
%T Texture Tile Considerations for Raster Graphics
%J Computer Graphics
(SIGGRAPH '78 Proceedings)
%V 12
%N 3
%D Aug. 1978
%P 130-134
%K image pyramid

%A Karl M. Fant
%T A Nonaliasing, Real-Time Spatial Transform Technique
%J IEEE Computer Graphics and Applications
%D Jan. 1986
%P 71-80
%Z two-pass image warp, trivial

%A Eliot A. Feibush
%A Marc Levoy
%A Robert L. Cook
%T Synthetic Texturing Using Digital Filters
%J Computer Graphics
(SIGGRAPH '80 Proceedings)
%V 14
%N 3
%D July 1980
%P 294-301
%K texture mapping
%Z antialiasing polygon edges and textures

%A Eliot A. Feibush
%A Donald P. Greenberg
%T Texture Rendering Systems for Architectural Design
%J Computer-Aided Design
%V 12
%N 2
%D Mar. 1980
%P 67-71
%K texture mapping

%A Leonard A. Ferrari
%A Jack Sklansky
%T A Fast Recursive Algorithm for Binary-Valued Two-Dimensional Filters
%J Computer Vision, Graphics, and Image Processing
%V 26
%N 3
%D June 1984
%P 292-302
%Z summed area table generalized to rectilinear polygons

%A Leonard A. Ferrari
%A Jack Sklansky
%T A Note on Duhamel Integrals and Running Average Filters
%J Computer Vision, Graphics, and Image Processing
%V 29
%D Mar. 1985
%P 358-360
%Z proof of algorithm in their 1984 paper

%A Leonard A. Ferrari
%A P. V. Sankar
%A Jack Sklansky
%A Sidney Leeman
%T Efficient Two-Dimensional Filters Using B-Spline Functions
%J Computer Vision, Graphics, and Image Processing
%V 35
%D 1986
%P 152-169
%Z b-spline filtering by repeated integration

%A Eugene Fiume
%A Alain Fournier
%A V. Canale
%T Conformal Texture Mapping
%B Eurographics '87
%D 1987
%P 53-64
%K geometric mapping, parameterization

%A Alain Fournier
%A Eugene Fiume
%T Constant-Time Filtering with Space-Variant Kernels
%J Computer Graphics
(SIGGRAPH '88 Proceedings)
%V 22
%N 4
%D Aug. 1988
%P 229-238
%Z variant of pyramid techniques: precompute convolutions with patch fragments
note: memory required is O(1) with minor mods, not O(m^2) as they say

%A Donald Fraser
%A Robert A. Schowengerdt
%A Ian Briggs
%T Rectification of Multichannel Images in Mass Storage Using Image
Transposition
%J Computer Vision, Graphics, and Image Processing
%V 29
%N 1
%D Jan. 1985
%P 23-36
%Z texture mapping, two-pass image warp, just like Catmull & Smith 80!
see corrigendum vol. 31, p. 395

%A D. E. Friedmann
%T Operational Resampling for Correcting Images to a Geocoded Format
%B Proc. 15th Intl. Symp. Remote Sensing of Environment
%C Ann Arbor
%D 1981
%P 195-212
%Z two-pass image warp

%A A. Gagalowicz
%T Texture Modelling Applications
%J The Visual Computer
%V 3
%N 4
%D 1987
%P 186-200

%A Andre Gagalowicz
%A Song de Ma
%T Model Driven Synthesis of Natural Textures for 3-D Scenes
%B Eurographics '85
%D 1985
%P 91-108
%K texture synthesis

%A Michel Gangnet
%A Didier Perny
%A Philippe Coueignoux
%T Perspective Mapping of Planar Textures
%B Eurographics '82
%D 1982
%P 57-71
%K texture mapping, antialiasing
%O reprinted in Computers and Graphics, Vol 8. No 2, 1984, pp. 115-123

%A Michel Gangnet
%A Djamchid Ghazanfarpour
%T Techniques for Perspective Mapping of Planar Textures
%I Colloque Image de Biarritz, CESTA
%D May 1984
%P 29-35
%K texture mapping, antialiasing
%Z in french

%A Geoffrey Y. Gardner
%T Simulation of Natural Scenes Using Textured Quadric Surfaces
%J Computer Graphics
(SIGGRAPH '84 Proceedings)
%V 18
%N 3
%D July 1984
%P 11-20
%K tree, cloud
%Z 3-D fourier series texture function

%A Geoffrey Y. Gardner
%T Visual Simulation of Clouds
%J Computer Graphics
(SIGGRAPH '85 Proceedings)
%V 19
%N 3
%D July 1985
%P 297-303
%K quadric, texture

%A Andrew S. Glassner
%T Adaptive Precision in Texture Mapping
%J Computer Graphics
(SIGGRAPH '86 Proceedings)
%V 20
%N 4
%D Aug. 1986
%P 297-306
%Z adaptive texture filter according to local variance, uses summed area table

%A W. B. Green
%T Digital Image Processing - A Systems Approach
%I Van Nostrand Reinhold
%C New York
%D 1983
%K geometric correction

%A Ned Greene
%A Paul S. Heckbert
%T Creating Raster Omnimax Images from Multiple Perspective Views
Using The Elliptical Weighted Average Filter
%J IEEE Computer Graphics and Applications
%V 6
%N 6
%D June 1986
%P 21-27
%K texture mapping, image warp, antialiasing

%A Ned Greene
%T Environment Mapping and Other Applications of World Projections
%J IEEE Computer Graphics and Applications
%V 6
%N 11
%D Nov. 1986
%K reflection mapping
%Z revised from Graphics Interface '86 version

%A Shinichiro Haruyama
%A Brian A. Barsky
%T Using Stochastic Modeling for Texture Generation
%J IEEE Computer Graphics and Applications
%D Mar. 1984
%P 7-19
%O errata Feb. 1985, p. 87
%K texture mapping, bump mapping, texture synthesis, fractal

%A Paul S. Heckbert
%T Texture Mapping Polygons in Perspective
%I NYIT Computer Graphics Lab
%R TM 13
%D Apr. 1983
%K antialiasing

%A Paul S. Heckbert
%T Filtering by Repeated Integration
%J Computer Graphics
(SIGGRAPH '86 Proceedings)
%V 20
%N 4
%D Aug. 1986
%P 317-321
%K filter, integration, convolution
%Z generalizes Perlin's selective image filter and Crow's summed area table

%A Paul S. Heckbert
%T Survey of Texture Mapping
%J IEEE Computer Graphics and Applications
%V 6
%N 11
%D Nov. 1986
%P 56-67
%Z revised from Graphics Interface '86 version

%A Paul S. Heckbert
%T Fundamentals of Texture Mapping and Image Warping
%R Master's thesis, UCB/CSD 89/516
%I CS Dept, UC Berkeley
%D May 1989

%A Berthold K. P. Horn
%T Hill Shading and the Reflectance Map
%J Proc. IEEE
%V 69
%N 1
%D Jan. 1981
%P 14-47
%K shading, terrain, illumination map
%Z exhaustive survey of ad hoc and theoretical terrain shading methods

%A J. C. Hourcade
%A A. Nicolas
%T Inverse Perspective Mapping in Scanline Order
onto Non-Planar Quadrilaterals
%B Eurographics '83
%D 1983
%P 309-319
%K texture mapping, antialiasing

%A James T. Kajiya
%T Anisotropic Reflection Models
%J Computer Graphics
(SIGGRAPH '85 Proceedings)
%V 19
%N 3
%D July 1985
%P 15-21
%K texture mapping, levels of detail
%Z frame mapping

%A James T. Kajiya
%A Timothy L. Kay
%T Rendering Fur with Three Dimensional Textures
%J Computer Graphics
(SIGGRAPH '89 Proceedings)
%V 23
%N 3
%D July 1989
%P 271-280
%Z solid texture

%A A. Kaufman
%A S. Azaria
%T Texture Synthesis Techniques for Computer Graphics
%J Computers and Graphics
%V 9
%N 2
%D 1985
%P 139-145

%A Douglas S. Kay
%A Donald P. Greenberg
%T Transparency for Computer Synthesized Images
%J Computer Graphics
(SIGGRAPH '79 Proceedings)
%V 13
%N 2
%D Aug. 1979
%P 158-164
%Z 2.5-D ray tracing: refraction by warping background image,
contains better refraction formula than Whitted
%K ray tracing

%A Wolfgang Krueger
%T Intensity Fluctuations and Natural Texturing
%J Computer Graphics
(SIGGRAPH '88 Proceedings)
%V 22
%N 4
%D Aug. 1988
%P 213-220

%A J. P. Lewis
%T Algorithms for Solid Noise Synthesis
%J Computer Graphics
(SIGGRAPH '89 Proceedings)
%V 23
%N 3
%D July 1989
%P 263-270
%Z solid texture

%A Song de Ma
%A Andre Gagalowicz
%T Determination of Local Coordinate Systems for Texture Synthesis
on 3-D Surfaces
%B Eurographics '85
%D 1985
%P 109-118
%K texture synthesis

%A Nadia Magnenat-Thalmann
%A N. Chourot
%A Daniel Thalmann
%T Colour Gradation, Shading and Texture Using a Limited Terminal
%J Computer Graphics Forum
%C Netherlands
%V 3
%N 1
%D Mar. 1984
%P 83-90
%K dither

%A R. R. Martin
%T Recent Advances in Graphical Techniques
%B 1985 European Conference on Solid Modeling
%O London, 9-10 Sept 1985
%I Oyez Sci. and Tech. Services
%C London
%D 1985
%K texture mapping, ray tracing

%A Nelson L. Max
%T Shadows for Bump-Mapped Surfaces
%B Advanced Computer Graphics
(Proc. of CG Tokyo '86)
%E Tosiyasu Kunii
%I Springer Verlag
%C Tokyo
%D 1986
%P 145-156

%A Gene S. Miller
%A C. Robert Hoffman
%T Illumination and Reflection Maps:
Simulated Objects in Simulated and Real Environments
%B SIGGRAPH '84 Advanced Computer Graphics Animation seminar notes
%D July 1984
%Z reflection maps: how to make and use them
%K ray tracing, illumination map

%A Alan Norton
%A Alyn P. Rockwood
%A Philip T. Skolmoski
%T Clamping: A Method of Antialiasing Textured Surfaces by
Bandwidth Limiting in Object Space
%J Computer Graphics
(SIGGRAPH '82 Proceedings)
%V 16
%N 3
%D July 1982
%P 1-8
%K texture mapping

%A D. A. O'Handley
%A W. B. Green
%T Recent Developments in Digital Image Processing at the
Image Processing Laboratory of the Jet Propulsion Laboratory
%J Proc. IEEE
%V 60
%N 7
%P 821-828
%D 1972
%K geometric correction

%A Alan W. Paeth
%T A Fast Algorithm for General Raster Rotation
%B Graphics Interface '86
%D May 1986
%K image warp, texture mapping, two-pass, three-pass
%P 77-81

%A Darwyn R. Peachey
%T Solid Texturing of Complex Surfaces
%J Computer Graphics
(SIGGRAPH '85 Proceedings)
%V 19
%N 3
%D July 1985
%P 279-286
%K 3-D texture

%A Ken Perlin
%T A Unified Texture/Reflectance Model
%B SIGGRAPH '84 Advanced Image Synthesis seminar notes
%D July 1984
%K texture mapping, bump mapping
%Z making microfacet distribution functions consistent
with normal perturbations; hash function

%A Ken Perlin
%T Course Notes
%B SIGGRAPH '85 State of the Art in Image Synthesis seminar notes
%D July 1985
%K antialiasing, filter, blur
%Z generalizing Crow's summed-area tables to higher order filter kernels

%A Ken Perlin
%T An Image Synthesizer
%J Computer Graphics
(SIGGRAPH '85 Proceedings)
%V 19
%N 3
%D July 1985
%P 287-296
%Z texture functions, pixel stream editor

%A Ken Perlin
%A Eric M. Hoffert
%T Hypertexture
%J Computer Graphics
(SIGGRAPH '89 Proceedings)
%V 23
%N 3
%D July 1989
%P 253-262
%Z using solid texture as an implicit function to define density & surfaces

%A H. J. Reitsema
%A A. J. Word
%A E. Ramberg
%T High-fidelity image resampling for remote sensing
%J Proc. of SPIE
%V 432
%D 1984
%P 211-215

%A Marcel Samek
%A Cheryl Slean
%A Hank Weghorst
%T Texture Mapping and Distortion in Digital Graphics
%J Visual Computer
%V 2
%N 5
%D 1986
%P 313-320

%A Michael Shantz
%T Two Pass Warp Algorithm for Hardware Implementation
%J Proc. SPIE, Processing and Display of Three Dimensional Data
%V 367
%D 1982
%P 160-164
%K two-pass image warp

%A S. Shlien
%T Geometric Correction, Registration and Resampling of Landsat Imagery
%J Canad. J. Remote Sensing
%V 5
%D 1979
%P 74-89

%A Alvy Ray Smith
%T Planar 2-Pass Texture Mapping and Warping
%J Computer Graphics
(SIGGRAPH '87 Proceedings)
%V 21
%N 4
%D July 1987
%P 263-272

%A Alvy Ray Smith
%T TEXAS (Preliminary Report)
%I NYIT Computer Graphics Lab
%R TM 10
%D July 1979
%K texture mapping

%A Alvy Ray Smith
%T Incremental Rendering of Textures in Perspective
%B SIGGRAPH '80 Animation Graphics seminar notes
%D July 1980
%K texture mapping

%A A. Tanaka
%A M. Kameyama
%A S. Kazama
%A O. Watanabe
%T A Rotation Method for Raster Image Using Skew Transformation
%B Proc IEEE Conf on Computer Vision and Pattern Recognition
%D June 1986
%P 272-277
%K texture mapping, image warp

%A S. L. Tanimoto
%A Theo Pavlidis
%T A Hierarchical Data Structure for Picture Processing
%J Computer Graphics and Image Processing
%V 4
%N 2
%D June 1975
%P 104-119
%K image pyramid

%A D. Thalmann
%T A Lifegame Approach to Surface Modelling and Texturing
%J The Visual Computer
%V 2
%N 6
%D 1986
%P 384-390

%A Nuio Tsuchida
%A Yoji Yamada
%A Minoru Ueda
%T Hardware for Image Rotation by Twice Skew Transformations
%J IEEE Trans on Acoustics, Speech, and Signal Processing
%V ASSP-35
%N 4
%D Apr. 1987
%P 527-532
%K image warp

%A Ken Turkowski
%T The Differential Geometry of Texture Mapping
%R Technical Report No. 10
%I Apple Computer
%C Cupertino, CA
%D May 1988
%K mapping, filter

%A Douglass Allen Turner
%T A Taxonomy of Texturing Techniques
%R Master's thesis
%I Dept. of Computer Science, U of North Carolina, Chapel Hill
%D 1988

%A Carl F. R. Weiman
%T Continuous Anti-Aliased Rotation and Zoom of Raster Images
%J Computer Graphics
(SIGGRAPH '80 Proceedings)
%V 14
%N 3
%D July 1980
%P 286-293
%K image warp, line drawing, digital line, texture mapping

%A Lance Williams
%T Pyramidal Parametrics
%J Computer Graphics
(SIGGRAPH '83 Proceedings)
%V 17
%N 3
%D July 1983
%P 1-11
%K texture mapping, antialiasing

%A George Wolberg
%T Geometric Transformation Techniques for Digital Images: A Survey
%R TR CUCS-390-88
%I Dept. of CS, Columbia U.
%D Dec. 1988

%A George Wolberg
%T Skeleton-Based Image Warping
%J Visual Computer
%V 5
%D 1989
%P 95-108

%A George Wolberg
%A Terrance E. Boult
%T Separable Image Warping with Spatial Lookup Tables
%J Computer Graphics
(SIGGRAPH '89 Proceedings)
%V 23
%N 3
%D July 1989
%P 369-377

%A Geoff Wyvill
%A Brian Wyvill
%A Craig McPheeters
%T Solid Texturing of Soft Objects
%B CG International '87
%C Tokyo
%D May 1987

%A John F. S. Yau
%A Neil D. Duffy
%T A Texture Mapping Approach to 3-D Facial Image Synthesis
%J Computer Graphics Forum
%V 7
%N 2
%D June 1988
%P 129-134
%Z face

-------------------------------------------------------------------------------

More Texturing Functions, by Jon Buller

>From: jonb@vector.Dallas.TX.US (Jon Buller)
Organization: Dallas Semiconductor

Last April, when there were many raging requests for the RenderMan Noise
function, I posted to comp.graphics the one that I got from Gary Herron (one
of my profs at the time), after fixing the hashing it used so that there would
not be rows upon rows of blobs of the same shape.  I didn't document it in the
best form, since I posted it quickly in hopes of stemming the tide of
requests.  There was a bug in it (which I knew about but forgot to mention),
and it took several followups to explain how it worked, how to initialize it,
etc.

Well, I would like to get various texture routines from the net at large,
just to see what others are doing, and for a bit of fun...  Of course, I don't
want to just take things without some sort of return, so to get your
collections of 'new & unusual(tm)' textures started, I have included one of my
favorites, the Planet texture.  It does not need watering, food, or specular
reflection - just take a dot product of the surface normal and the light
source direction to add shading, and have fun...  It is written in Pascal.
(No, I do not yet have a C compiler for my Mac, but that WILL change in a
couple of months.)  So you must translate it to your language of choice, but I
believe it is a bit better documented than the first.

Anyway, I would like to see your favorite textures (call or write today!), and
if there is a tremendous response, I may (or may not) compile them into a
list, summarize, and post.  Two textures I would REALLY like to see are 1) a
good marble texture (like the one in [Perlin85] - guess what this paper is),
and 2) something like the Wood_Burl texture I saw applied to your favorite
teapot and mine in a book by (or maybe edited by) Andrew Glassner.  I don't
remember the title (Computer Graphics for Artists, maybe?)

If you don't have a Noise function of your own, mine was translated and should
be around...  or you can link it to whatever function you like if it returns a
Real (er.. float) 8-).

Get Noise from UUNET (is it still there?  I can't find out anymore) or from a
friend.  Change its initialization so the random array pts[] is filled with
values in (-1.0 to 1.0) instead of (0.0 to 1.0) like it says.  My thanks to
Bill Dirks for doing that translation, BTW.  (You may also notice that I am no
longer a student at Colorado State University...)

I included my Turb function with this, because last time I checked UUNET
(April '89) the noise file there said that Bill had not translated the Turb
function yet, and in the interest of completeness (and reducing mail volume
8-), it is here, but you must translate it yourself...

[Perlin85] An Image Synthesizer, Ken Perlin,
     Proceedings of SIGGRAPH '85

If you haven't guessed (after reading the code, of course 8-):
Add([x1,y1,z1], [x2,y2,z2]) = [x1+x2, y1+y2, z1+z2]
Subtract([x1,y1,z1], [x2,y2,z2]) = [x1-x2, y1-y2, z1-z2]
Scale(s, [x,y,z]) = [s*x, s*y, s*z]
Color and Vector coerce the given type of a 3-vector into the
      other kind of 3-vector

(* ------- cut here ------- Planet&Turb.p ------- cut here ------- *)

function Turb (Size: Integer; ScaleFactor: Real; Loc: Vector): Real;
(* Size = Size of largest feature: Smallest feature size = 1,
   ScaleFactor is related to the fractal dimension of the turbulence,
   Loc = point of turbulence to compute *)
var CurScale, Result: Real;
	Cur: Integer;
begin
   Result := Noise(Loc);
   CurScale := 1.0;

   Cur := 1;
   while Cur < Size do Begin
      Cur := BSL(Cur, 1);
           (* Shift Left is MUCH quicker than a multiply *)
      CurScale := CurScale * ScaleFactor;
      Loc := Scale(2.0, Loc);
      Result := Result + Noise(Loc) * CurScale;
   end;
   Turb := Result;
end;

(* This was designed to work with a unit sphere.  I think it works well, you
   may (should) form your own opinions... *)
Function Planet (Loc: Vector): Color;
Const Land  := [0.28125, 0.1875, 0.09375];
               (* A reasonably nice brown color *)
      Sea   := [0.0    , 0.0   , 1.0];
               (* Oceans are usually blue *)
      Cloud := [1.0    , 1.0   , 1.0];
               (* And Clouds are white *)
      (* OK, this part isn't Pascal, (I wish it was though...)
         but it beats the way it's done in the real code... *)
Var C: Color;
    Amt: Real;
Begin
   If Turb(430, 0.7, Loc) > 0.25
        (* The sphere I use is 430 pixels across *)
   Then C := Land
   Else C := Sea;

   Amt := Turb(18, 0.6, Scale(24.0, Loc));
           (* Clouds I like are 18 pixels across *)
   If Amt > -0.25 Then
   C := Color(Add(Vector(C), Scale(Amt + 0.25,
          Subtract(Vector(Cloud), Vector(C)))));

   (* I wish I could do it like this...
          C := C + (Amt + 0.25) * (Cloud - C)    *)

   Planet := C;  (* a Pascal function returns by assigning to its name *)
End;

-- 
Jon Buller       jonb@vector.dallas.tx.us       ..!texbell!vector!jonb

-------------------------------------------------------------------------------

Ray/Cylinder Intersection, by Mark VandeWettering

>From: markv@gauss.Princeton.EDU (Mark VandeWettering)
Newsgroups: comp.graphics
Organization: Princeton University

>I am trying to do ray tracing of light through a 
>cylinder coming at different angle to the axis
>of the cylinder.  Could some one give me some
>pointers?

Ray cylinder intersection is (conceptually) just as easy as hitting a sphere.
Most of the problems come from clipping the cylinder so it isn't infinite.  I
can think of several ways to do this, but first let me mention that you should
consult Introduction to Ray Tracing edited by Andrew Glassner.  Articles by
Pat Hanrahan and Eric Haines go over most of this stuff.

It's easiest to imagine a unit cylinder formed by rotating the line x = 1 
in the xy plane about the y axis.  The formula for this cylinder is 
x^2 + z^2 = 1.  If your ray is of the form P + t D, with P and D three
tuples, you can insert the components into the original formula and 
come up with:

	(px + t dx)^2 + (pz + t dz)^2 - 1 = 0

or	px^2 + 2 t dx px + t^2 dx^2 + pz^2 + 2 t dz pz + t^2 dz^2 - 1 = 0

or	t^2 (dx^2 + dz^2) + 2 t (dx px + dz pz) + (px^2 + pz^2 - 1) = 0

which you can then solve using the quadratic formula.  If there are no roots,
then there is no intersection.  If there are roots, then these give two t
values along the ray.  Figure out those points using P + t D.  Now, clipping.
We wanted to have a finite cylinder, say within the cube two units on a side
centered at the origin.  Well, gosh, ignore any intersections that occur
outside this box.  Then take the closest one.

Now, to intersect with an arbitrary cylinder, work up a world transformation
matrix that transforms points into the object's coordinate system.  Transform
the ray origin and direction, and voila.  You do need to be careful to rescale
the resulting t appropriately, but it's really not all that hard.

You might instead want to implement general quadrics as a primitive, or choose
any one of a number of different ways of doing the above.  Homogeneous
coordinates might make this simpler actually....  Hmmm....  And there is a
geometric argument that can also be used to derive algorithms like this.

Think about it.  It shouldn't be that difficult.

-------------------------------------------------------------------------------

C Code for Intersecting Quadrics, by Prem Subrahmanyam

>From: prem@geomag.fsu.edu (Prem Subrahmanyam)
Newsgroups: comp.graphics
Organization: Florida State University Computing Center

Here is the code I wrote in C for intersecting rays with 3-dimensional
quadrics.  The derivations are rather complex, so I tried to comment the code
as best I could, but that's all I could do.  I hope people find this
interesting.

--

#define TINY (float)1e-3

int hitconic(offset,eye,v,p,t,a,b,c,d,e,f,g,start,stop)

/* offset is the triple representing where the conic is to be moved */
/* from 0,0,0. */
/* eye and v represent the ray, with eye being the start point and v */
/* being the direction vector */
/* p is the point into which the intersection point value will be */
/* copied */
/* t is the value into which the line parameter will be copied for the */
/* ray. */
/* a,b,c,d,e,f,g are values for the 3-dimensional quadratic equation */
/* a*x^2 + b*y^2 + c*z^2 + d*x + e*y + f*z = g */
/* start and stop represent the bounds (when the equation is centered 
   at 0,0,0) in which to test the conic */
   /* example: if the bound around a conic were set at -1,-1,-1 to 1,1,1
      and the offset was 4,5,6 then the actual spatial extent of the
      object would be from 3,4,5 to 5,6,7 . */
/* So, the conic (3-d quadratic) should contain within its own data
   structure the offset, extent values (start,stop), and a,b,c,d,e,f,g
   constants */

vector offset,
       eye,
       v,
       p;
	   
float     *t,
           a,
           b,
	   c,
	   d,
	   e,
	   f,
	   g;
	   
vector start,
       stop; 

    { 
    /*************************************************/
    /* this code is a little messy, but it does work */
    /*************************************************/

      /* create some local points to use in testing */
      vector m,p2;  
      float radical,Ay,Be,Ce,t1,t2; /* constants for quadratic formula */
      /* generated for solving for the intersection of the equations for
	 the line and the equation for the quadratic */
      /* subtract offset from the ray start position to align ray and
	 quadratic*/
      m[0] = eye[0] - offset[0];
      m[1] = eye[1] - offset[1];
      m[2] = eye[2] - offset[2];

      /* Now, using the constant values, create values for the intersection
	 test formula */ 
      Ay = (a*v[0]*v[0]) + (b*v[1]*v[1]) + (c*v[2]*v[2]);
      Be = (2*a*m[0]*v[0]) + (2*b*m[1]*v[1]) + (2*c*m[2]*v[2]) +
		(d*v[0]) + (e*v[1]) + (f*v[2]);
      Ce = (a*m[0]*m[0]) + (b*m[1]*m[1]) + (c*m[2]*m[2]) +
		(d*m[0]) + (e*m[1]) + (f*m[2]) - g;
      radical = ((Be*Be) - (4*Ay*Ce));
      if ((Ay == 0.0) || (radical < 0.0)) {
         /* degenerate (linear) case or no real roots: call it a miss */
         return FALSE;
      }
      radical = sqrt(radical);   /* take the square root only once */
      t1 = ((-1*Be) + radical)/(2*Ay);
      t2 = ((-1*Be) - radical)/(2*Ay);

   /* immediately eliminate cases in which return is false */
      if (((t1 < 0)&&(t2 < 0))||
	  ((t1 < 0)&&(fabs(t2) < TINY))||
	  ((t2 < 0)&&(fabs(t1) < TINY))||
	  ((fabs(t1) < TINY)&&(fabs(t2) < TINY)))
      {
	 return FALSE;
      }else{
	 /* t1 a bad parameter, but t2 may not be */
	 if ((t1 < 0)||(fabs(t1) < TINY)) {
	   if (!(fabs(t2) < TINY)) /* prevent spurious self-shadowing */ 
	   {
	     /* using the parameter, find the point of intersection */
	     p[0] = m[0] + v[0]*t2;
	     p[1] = m[1] + v[1]*t2;
	     p[2] = m[2] + v[2]*t2;
	     /* test to see if the point is within the bounds for the
	     quadratic section */
	     if ((start[0] <= p[0])&&
		 (stop[0] >= p[0])&&
		 (start[1] <= p[1])&&
		 (stop[1] >= p[1])&&
		 (start[2] <= p[2])&&
		 (stop[2] >= p[2]))
	     {
	       /* if it lies within the bounds, add offset back on and return
		  point */
		p[0] = p[0] + offset[0];
		p[1] = p[1] + offset[1];
		p[2] = p[2] + offset[2];
		*t = t2;
		return TRUE;
	     } else { /* point does not lie within the bounds */
		return FALSE;
	     }
	   }else{ /* t2 a bad parameter as well */
	     return FALSE;
	   }
	 }
	     
	 if ((t2 < 0)||(fabs(t2) < TINY)) {
	 /* t2 a false parameter, so test to see if t1 is good */
           if(!(fabs(t1) < TINY))
	   { /* find point by parameter */
	     p[0] = m[0] + v[0]*t1;
	     p[1] = m[1] + v[1]*t1;
	     p[2] = m[2] + v[2]*t1;
	     if ((start[0] <= p[0])&&
		 (stop[0] >= p[0])&&
		 (start[1] <= p[1])&&
		 (stop[1] >= p[1])&&
		 (start[2] <= p[2])&&
		 (stop[2] >=p[2]))
	     { /* same as above, see if point lies within bounds */
		p[0] = p[0] + offset[0];
		p[1] = p[1] + offset[1];
		p[2] = p[2] + offset[2];
		*t = t1;
		return TRUE;
	     } else {
		return FALSE;
	     }
	   }else{
		return FALSE;
	   }
	 }
	 /* if the program has gotten here, t1 and t2 are greater than 0 */
         /* and greater than TINY */
         p[0] = m[0] + v[0]*t1;
	 p[1] = m[1] + v[1]*t1;
	 p[2] = m[2] + v[2]*t1;
	 
	 p2[0] = m[0] + v[0]*t2;
	 p2[1] = m[1] + v[1]*t2;
	 p2[2] = m[2] + v[2]*t2;
 
	 /* see if the first point is within bounds */
	 if ((start[0] <= p[0])&&
	     (stop[0] >= p[0])&&
	     (start[1] <= p[1])&&
	     (stop[1] >= p[1])&&
	     (start[2] <= p[2])&&
	     (stop[2] >=p[2]))
         { /* now, see if second point is as well */
	     if ((start[0] <= p2[0])&&
	         (stop[0] >= p2[0])&&
		 (start[1] <= p2[1])&&
		 (stop[1] >= p2[1])&&
		 (start[2] <= p2[2])&&
		 (stop[2] >=p2[2]))
             {  /* both points are within bbox */
		if (t1 <= t2) /*see which one is smaller */
		{ /* t1 yields a closer point */
		    *t = t1;
		    p[0] = p[0] + offset[0];
		    p[1] = p[1] + offset[1];
		    p[2] = p[2] + offset[2];
		    return TRUE;
		} else {
		 /* t2 yields a closer point */
		    *t = t2;
		    p[0] = p2[0] + offset[0];
		    p[1] = p2[1] + offset[1];
		    p[2] = p2[2] + offset[2];
                    return TRUE;
		}
	     } else { /* second point not within box */
		*t = t1;
		p[0] = p[0] + offset[0];
		p[1] = p[1] + offset[1];
		p[2] = p[2] + offset[2];
		return TRUE;
	     }
	 } else { /* t1 not within bbox */
	    if ((start[0] <= p2[0])&&
		(stop[0] >= p2[0])&&
		(start[1] <= p2[1])&&
		(stop[1] >= p2[1])&&
		(start[2] <= p2[2])&&
		(stop[2] >=p2[2]))
	    { /* see if t2 is within bbox */
		*t = t2;
		p[0] = p2[0] + offset[0];
		p[1] = p2[1] + offset[1];
		p[2] = p2[2] + offset[2];
		return TRUE;
	    } else { /* neither is within bounding box */
		return FALSE;
	    }
	}
      }   
 }


/* HERE IS THE PORTION OF THE CODE THAT RETURNS THE NORMAL TO THE 
   QUADRATIC */ 
void findnormal(np,p,n) 
   node       *np;
   vector       p,
       n;
/* np is the pointer to the object node, p is the point of intersection,
   and n is the normal vector returned */
  {
    vector point;
    switch (np->kind) {
       /* conic section a la Prem */
       case CONIC    : point[0] = p[0] - cptr(np)->offset[0];
                       point[1] = p[1] - cptr(np)->offset[1];
		       point[2] = p[2] - cptr(np)->offset[2];
		       n[0] = 2.0*cptr(np)->a*point[0] + cptr(np)->d;
		       n[1] = 2.0*cptr(np)->b*point[1] + cptr(np)->e;
		       n[2] = 2.0*cptr(np)->c*point[2] + cptr(np)->f;
                       normalize(n);
		       break;
       }
  }

-------------------------------------------------------------------------------

Parallel Ray Tracing on IRISs, collected by Doug Turner

>From: Douglass Turner <cornell!apple.com!turner@eye>


Here is the collection of responses that I got from various people
kind enough to respond to my query about doing parallel ray tracing
on the IRIS. 

--------

This is from George Drettakis:

I'm just finishing my MSc here, and I designed and implemented a hierarchical
scheme that uses several IRISes connected over an Ethernet.  The rendering
algorithm is a global illumination approach developed locally that goes one
step further than radiosity and ray tracing combined.  The approach is based
on space subdivision that adaptively splits the original 3-D scene into cubes,
depending on the difficulty in each volume.  I assign a group of sub-volumes
to each IRIS, and then do the processing for the volumes on each IRIS in
parallel.  This actually includes a bit of ray casting that we do for various
purposes in our algorithm.  I use the taskcreate/taskdestroy calls to
implement this, and the usnewsema and usnewlock calls for locking and
synchronization.  The man pages are pretty straightforward, but if you have
trouble I can put together a sample program and send it to you.  I also found
the m_get_myid function from the "sequent compatibility library" useful, as it
gives you a unique proc id for each of n procs.  For ray tracing, all you need
to do is split the screen and give each pixel to a processor, and you're done.
The key is to avoid using any global variables - speaking from painful
experience, other people's code is full of it (no pun intended), as they
didn't have parallelism in mind.

George Drettakis (416) 978 5473        Dynamic Graphics Project	
Internet: dret@dgp.toronto.edu         Computer Systems Research Institute
Bitnet:	  dret@dgp.utoronto            Toronto, Ontario M5S 1A4, CANADA

--------

This is from Steve Lamont:

I use brute force, forking off multiple processes.  This is probably not
optimum, but is guaranteed to work on a number of platforms, including a Cray
Y-MP running UNICOS (no shared memory between processes )-: ), a Stardent
Titan, a Convex C-220, and a SGI 4D/280GTX.  If you're interested, I'll be
glad to send you the code -- it is based on the MTV ray tracer, with
extensions.

Steve Lamont, sciViGuy	(919) 248-1120		EMail:	spl@ncsc.org
NCSC, Box 12732, Research Triangle Park, NC 27709

--------

This contribution is from Gavin [Bell--EAH] at SGI (not Gavin Miller!).

In it he mentions a demo tape that IRIS users can purchase ($100.00) that
contains source code for numerous demos, including the "flyray" demo he
discusses.  I haven't gotten my copy yet but intend to soon.  The demos and
source code for other interesting-looking stuff are listed in a free pamphlet
from SGI called "IRIS software exchange".  I recommend it.

Have you seen the 'flyray' interactive ray-tracer demo running on a GTX?  It
used to be a two-processor ray-tracer (one processor computing ray/scene
intersections, the other feeding the graphics pipe) until I fully parallelized
it.  It will now use one processor to send scene data (showing the
ray-trace going on) to the geometry subsystem, and the other N-1 to compute
the ray/scene intersections (if running over the Ethernet using the
Distributed Graphics Library, all processors compute the ray/scene
intersections, with an extra process called on every once in a while to send
the results down the wire).  [Excellent demo, by the way - see it. --EAH]

I used the sproc() system call to fork off processes, and set up a queue which
the main program (responsible for displaying the bitmap produced and showing
the rays being traced) reads from and all the child processes write into.
This queue is protected using the hardware lock feature of GTX systems (see
the usinit, usnewlock, ussetlock, and usunsetlock manual pages).

Rays are traced in blocks of up to 256 at a time (16x16 chunks), which cuts
down on the overhead associated with accessing the queue.  The program runs
amazingly fast on an 8-processor system, but could benefit from tracing even
larger chunks.

Source code to the flyray program is available from Silicon Graphics; Monica
Schulze is the person to contact (she handles distribution of demo source
code).  Try calling her at 415-962-3320.

	For your application, you probably don't need to do any inter-process
synchronization; each process will be busy computing its quarter of the image
and storing it in memory.  You might want to look at the m_fork() manual page;
your application will look something like:

	initialize_stuff (create the voxel space, etc);
	m_fork(child_process);
	m_kill_procs();
	save_image (or whatever you want to do with it)
	exit()

	... and each child_process does:
	int mynum = m_get_myid();
	int np = m_get_numprocs();
	... based on mynum and numprocs, do part of the screen here...
	return

The m_fork commands are nice because you get automatic synchronization (the
m_fork command returns after all child processes are finished), and it figures
out how many processors the system has for you.
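
The m_fork() sketch above is IRIX-specific, but the same pattern can be shown
with POSIX threads.  This is a toy sketch, not SGI code: each worker takes
every NPROC-th scanline of a tiny "image" and writes only its own rows, so -
as George Drettakis notes above - no locking is needed.

```c
#include <pthread.h>

/* Sketch of the m_fork() pattern with POSIX threads: each worker gets an
   id 0..NPROC-1 (like m_get_myid) and renders an interleaved set of
   scanlines.  No shared mutable state is written by two workers. */
#define WIDTH  8
#define HEIGHT 8
#define NPROC  4

static int image[HEIGHT][WIDTH];

struct worker_arg { int id; };

static void *render_part(void *p)
{
    int id = ((struct worker_arg *)p)->id;
    int x, y;
    /* worker 'id' does rows id, id+NPROC, id+2*NPROC, ... */
    for (y = id; y < HEIGHT; y += NPROC)
        for (x = 0; x < WIDTH; x++)
            image[y][x] = y * WIDTH + x;   /* stand-in for "trace pixel" */
    return NULL;
}

static void render_image(void)
{
    pthread_t tid[NPROC];
    struct worker_arg args[NPROC];
    int i;

    for (i = 0; i < NPROC; i++) {
        args[i].id = i;
        pthread_create(&tid[i], NULL, render_part, &args[i]);
    }
    for (i = 0; i < NPROC; i++)     /* like m_fork: wait for all children */
        pthread_join(tid[i], NULL);
}
```

The join loop gives the same automatic synchronization that makes m_fork
convenient: render_image() does not return until every scanline is done.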

Re: Distributed Graphics library:

	Yes, it is very easy to use; instead of linking with -lgl_s, you link
with -ldgl (and possibly some other libraries, depending on your networking
situation; usually, you need to add -lsun -lbsd also).  After doing a little
setup (you have to tell inetd to listen for DGL requests on a particular port
number and start up the DGL daemon), you put your program on the machine that
will execute it, and rlogin in to that machine from the SGI machine on which
you want the graphics to appear.  Then run the program.  All the graphics get
sent over the network and appear on the machine you are sitting at.

    However, Ethernet speeds are fairly slow, so if you do a lot of drawing
you can easily overwhelm the Ethernet bandwidth and slow down.  The best way
around this is to use the GL display-list commands; the object will be stored
on the display machine, and only one call needs to be sent over the network to
get it displayed.  Of course, as networks get faster this will become
unnecessary.

Feel free to redistribute any of this information.

--gavin
gavin%krypton@sgi.com

--------

This is source code from Michael John Muuss. I haven't looked at
it closely. His email path is mike@brl.mil

/*
 *			M A C H I N E . C
 *
 *  Machine-specific routines for doing raytracing.
 *  Includes parallel and vector support, replacement library routines, etc.
 *
 *  Author -
 *	Michael John Muuss

 ...

[for brevity (this issue is already a monster), I've deleted this code - 
look on the net, or contact Doug Turner or Michael Muuss if you are seriously
interested.]

--------

Here's a plug from Frank Phillips for SGI's course on writing parallel
applications.

Just for reference, SGI offers a very good class (in Mt. View) on writing
parallel applications, using several different methods.  Definitely
recommended.

fap@sgi.com
frank phillips
SGI inSane Diego

--------

I hope people find this info useful.  I know I will.  Thanks to all who
responded so quickly.  A big unthank you to myself for waiting so long to
post.

Cheers,
Doug Turner
turner@apple.com
408-974-0484

-------------------------------------------------------------------------------
END OF RTNEWS


From craig@weedeater.math.yale.edu Fri Jul 13 19:54:33 1990
Return-Path: <craig@weedeater.math.yale.edu>
Received: from weedeater.math.yale.edu by turing.cs.rpi.edu (4.0/1.2-RPI-CS-Dept)
	id AA24169; Fri, 13 Jul 90 19:54:04 EDT
Received: by weedeater.math.yale.edu; Fri, 13 Jul 90 18:39:57 -0400
Date: Fri, 13 Jul 90 18:39:57 -0400
From: Craig Kolb <craig@weedeater.math.yale.edu>
Message-Id: <9007132239.AA21001@weedeater.math.yale.edu>
To: FISHER@3D.dec@decwrl.dec.com, anchor.colorado.edu!rt-colorado,
        arvo@apollo.hp.com, atc@cs.utexas.edu, barr@csvax.caltech.edu,
        barsky@miro.berkeley.edu, bcorrie@uvicctr.uvic.ca,
        chapman@fornax.CS.YALE.EDU, chet@cis.ohio-state.edu,
        ckchee@dgp.toronto.edu, daniel@apollo.com, dk@csvax.caltech.edu,
        dmi.ens.fr!asensio, duticg.tudelft.nl!fwj, duticg.tudelft.nl!wim,
        ecn.purdue.edu!cychosz, esl0422@ultb.isc.rit.edu,
        glassner.pa@xerox.com, grant@delvalle.llnl.gov,
        green@compsci.bristol.ac.uk, hanrahan@princeton.edu,
        hench@lea.csc.ncsu.edu, hohmeyer@miro.berkeley.edu,
        hpfcse!hpurvmc!koz%hpfcla.UUCP@CS.YALE.EDU, hpgtdadm.hp.com!raytrace,
        dana!mrk%hplabs.UUCP@CS.YALE.EDU, hultquis@prandtl.nas.nasa.gov,
        image.trc.amoco.com!zmel02, jakob@humus.huji.ac.il,
        jeff@hamlet.caltech.edu, johnf@apollo.com, joy@ucdavis.edu,
        kolb@CS.YALE.EDU, kyriazis@turing.cs.rpi.edu,
        larry.mcrcim.mcgill.edu!vedge!kaveh, m-cohen@cs.utah.edu,
        markc@emx.utexas.edu, mja@sierra.llnl.gov, mssun7.msi.cornell.edu!carl,
        paul@sgi.com, ph@miro.berkeley.edu, quad.com!avatar!kory,
        raycasting@duke.cs.duke.edu, raytrace@cpsc.ucalgary.ca,
        rgb@caen.engin.umich.edu, rrg@acf8.nyu.edu,
        squid.graphics.cornell.edu!jaf, tim@csvax.caltech.edu,
        ucsd.ucsd.edu!megatek!kuchkuda,
        wisdom.graphics.cornell.edu!ray-tracing-news
Subject: Ray Tracing News
Status: R

 _ __                 ______                         _ __
' )  )                  /                           ' )  )
 /--' __.  __  ,     --/ __  __.  _. o ____  _,      /  / _  , , , _
/  \_(_/|_/ (_/_    (_/ / (_(_/|_(__<_/ / <_(_)_    /  (_</_(_(_/_/_)_
	     /                               /|
	    '                               |/

			"Light Makes Right"

			   July 13, 1990
		        Volume 3, Number 3

Compiled by Eric Haines, 3D/Eye Inc, 2359 Triphammer Rd, Ithaca, NY 14850
    wrath.cs.cornell.edu!eye!erich or erich@eye.com
All contents are US copyright (c) 1990 by the individual authors
Archive locations: anonymous FTP at cs.uoregon.edu (128.223.4.13), /pub and at
		   freedom.graphics.cornell.edu (128.84.247.85), /pub/RTNews
UUCP access: see Vol 3, No 1 or Kory Hamzeh (quad.com!avatar!kory) for info.

Contents:
    Introduction
    Ray Tracing Roundtable Announcement
    New People, Address Changes, etc
    Jarke van Wijk Thesis Availability, by Erik Jansen
    New Name For "Distributed Ray Tracing", by Paul Heckbert et al
    NFF Files from Thierry Leconte
    RADIANCE Software Available, by Greg Ward
    Rayshade Updates & SPD Bug, by Craig Kolb
    New Version of Vort Ray Tracer, by David Hook
    Graphics Interface '90, by Eric Haines
    Real3d Review, Haakan "ZAP" Andersson
    Bits From a Letter by David Jevans
    On Antialiasing, & Light and Such, by Haakan "ZAP" Andersson
    ======== USENET cullings follow ========
    Summary: Uniform Random Distribution of Points on Sphere's Surface,
	Marshall Cline
    Ray Tracing & Radiosity, by Frank Vance, Mark VandeWettering
    Ray-Tracing the Torus, by Prem Subrahmanyam, Bob Webber
    Need Help on Superquadrics, by Wayne McCormick, Robert Skinner
    Ray Tracing Penumbral Shadows, Prem Subrahmanyam
    Ray with Bicubic Patch Intersection Problem, Wayne Knapp, John Peterson,
	Lawrence Kesteloot, Mark VandeWettering, Thomas Williams
    Rendering Intersecting Glass Spheres, by John Cristy, Craig Kolb
    DKBPC Raytracer, by Tom Friedel
    New release of IRIT solid modeller, by Gershon Elber
    Easier Use of Ray Tracers, by Philip Colmer, Mark VandeWettering,
	Jack Ritter
    Raytracer Glass, by F. Ken Musgrave, Michael A. Kelly
    Ray Intersection with Grid, by Alasdair Donald Robert McIntyre, Rick Speer
    Database for DBW-Render, by Prem Subrahmanyam

-------------------------------------------------------------------------------

Introduction

	Well, I've been absorbed in quality assurance for the updated version
of the radiosity/ray tracing software we've done with HP.  One thing we've
added is texture mapping, and I definitely feel a sense of deja vu, since my
second graphics project at Cornell was to integrate Rikk Carey's texture
mapping software into the modeller and ray tracer.  Here it is six years later
and we're finally using texture mapping in a commercial product.

	I should mention some resources for all of you who have scanners.
Phil Brodatz's "Textures" book (Dover) is definitely recommended - it's mostly
nicely photographed images of sand, gravel, fur, wood, paper, straw matting,
etc etc, perfect for scanning.  Another Dover book, "The Grammar of Ornament"
by Owen Jones, has many colorful designs from a large variety of cultures.
There are also many other Dover collections of designs which are worth looking
over.

	A related book that just came out is "Computers, Pattern, Chaos, and
Beauty" by Clifford Pickover (St. Martin's Press).  At first I thought this
was just another fractal book, but there are many other functions and
algorithms explored here.  This book is something like a collection of
"Computer Recreations" columns, and is worth checking out.  One topic mentioned
in the book is creating sea shells by a series of spheres.  This was also
covered in the IEEE CG&A November 1989 article, pages 8-11.  Here's a code
fragment that outputs a series of spheres in NFF (I leave a good view & lights
& colors to you).  Cut the number of steps way down for a rough idea where
the surface is located, and have fun playing with the various numbers and
formulae.

#include <stdio.h>
#include <math.h>

int main(void)
{
static	double	gamma = 1.0 ;	/* 0.01 to 3 */
static	double	alpha = 1.1 ;	/* > 1 */
static	double	beta = -2.0 ;	/* ~ -2 */
static	int	steps = 600 ;	/* ~number of spheres generated */
static	double	a = 0.15 ;	/* exponent constant */
static	double	k = 1.0 ;	/* relative size */
double	r,x,y,z,rad,angle ;
int	i ;

    for ( i = -steps*2/3; i <= steps/3 ; ++i ) {
	angle = 3.0 * 6.0 * M_PI * (double)i / (double)steps ;
	r = k * exp( a * angle ) ;
	x = r * sin( angle ) ;
	y = r * cos( angle ) ;
	/* alternate formula: z = alpha * angle */
	z = beta * r ;
	rad = r / gamma ;
	printf( "s %g %g %g %g\n", x, y, z, rad ) ;
    }
    return 0 ;
}

The interesting thing about the database generated is that it's pretty slow
for most ray tracers (mine included).  There are a lot of spheres in a tiny
cluster at the start of the shell, and the final spheres are large enough that
this tiny cluster is a small part of the environment.  Because the spheres
overlap, each octree node, grid cell, or 5D list is going to be pretty long -
there's a huge amount of depth complexity in this database.  Definitely a
pathological case for most ray tracers, I suspect.

	Another book which just came out (as in, I just received my copy
minutes ago) is the new Foley & van Dam (& Feiner & Hughes) "Computer
Graphics:  Principles and Practice" from Addison-Wesley.  1174 pages, with a
fair number of color plates.  Definitely a good place to start in on any
topic.  A must-have; enough said.

	"Graphics Gems", edited by Andrew Glassner, should also be out soon.
Mine's on order, but hasn't arrived in time to review this issue.  Look for it
- it should be of interest.

	One more thing.  I recently created a radiosity bibliography which was
posted to USENET.  If you missed it on comp.graphics, it should be available
on freedom.graphics.cornell.edu (see the RT News header) soon.

-------------------------------------------------------------------------------

Ray Tracing Roundtable Announcement

	There will be a "birds of a feather" room for ray tracers to meet and
match names to faces at SIGGRAPH.  We'll meet 5:30 to 6:30 pm on Thursday
of SIGGRAPH in some room (see the room listings at registration).
Note that this shouldn't conflict with much, as it's after the sessions and
before the reception.

	As befits ray tracers, we're aiming for a minimal, parallel approach
for this year's meeting.  There won't be any roundtable, since last year's
meeting drew nearly a hundred people - just going around the room getting names
and interests took half an hour.  So, there is absolutely no agenda for this
meeting; it's simply a place and time to come and meet others in the field.
Hope to see you there.

-------------------------------------------------------------------------------

New People, Address Changes, etc


Haakan 'ZAP' Andersson - efficiency, texturing, speed, etc.
EMT AB
Box 40
178 00 Ekeroe
SWEDEN
+(46)16 - 964 60
[no e-mail address for now...]

   Working with CAD technology, developing add-ons to the popular AutoCAD,
doing RayTracing as a personal midnight oil project, and it might even be
released realsoonow by EMT.  Since my computational powers are limited (PC
style) I enjoy things that make things pretty without bending the clock.  I
love anything that says 3D, but am also interested in other simulations of
real life (AI, sound and speech synthesis, etc.).

P.S.  I showed "RayTracker" ( My Program ) at an exhibition and some people
thought it was darnfast...even though they were using some renderman stuff
earlier...  my suspicion was that my VERY low resolution led them astray
speed-wise, but I do another nice thing:  I trace every eighth pixel in each
direction (well, really every 64th pixel (8x8)) first, so you get a quick look
at what's happening.  Actually, it's a good timer!  When the scan has covered
the screen the FIRST time, a 64th of the image is done, i.e. however many
seconds (minutes) this first scan takes, that many minutes (hours) the full
image will take, approximately.  That's nice.  Well, that IS all for now.
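
ZAP's preview pass might look something like this in C - a rough sketch, not
his actual code, with a stand-in shade() where a real tracer would cast an
eye ray:

```c
#include <assert.h>

/* Stand-in shading function - a real tracer would cast an eye ray here. */
static unsigned char shade(int x, int y)
{
    return (unsigned char)((x + y) & 0xff);
}

/* First pass: trace every 8th pixel in x and y (1/64th of the image) and
 * replicate each sample over its 8x8 block for a quick preview.  As ZAP
 * notes, the time this pass takes, times 64, is roughly the time the full
 * image will take. */
void coarse_pass(unsigned char *image, int width, int height)
{
    int x, y, i, j;

    for (y = 0; y < height; y += 8)
        for (x = 0; x < width; x += 8) {
            unsigned char c = shade(x, y);
            for (j = y; j < y + 8 && j < height; ++j)
                for (i = x; i < x + 8 && i < width; ++i)
                    image[j * width + i] = c;
        }
}
```

Later passes would then fill in the remaining pixels of each block, refining
the same buffer.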

-------

Mark A. Cartwright - 3D Modeling, 3D Human Interface, Ray Tracing, Macintosh.
University of Texas
Computation Center
COM 1
Austin, Texas 78712-1110
Phone: (512)-471-3241 ext 306
email: markc@emx.utexas.edu

My background is mostly graphics work - with a little Numeric Control thrown
in for good measure.  Prior to my position here at the University I worked at
Byte By Byte here on the Macintosh version of Sculpt 3D.  Modeling and all
that relates to object creation and editing -- and of course ray tracing are
my main areas of interest.

--------

Pierre Poulin
Imager
Dept. of Computer Science
University of British Columbia
Vancouver, B.C., Canada
V6T 1W5

Just in case you might be interested in changing it, my new email address is now:

	poulin@cs.ubc.ca

I followed my supervisor to the University of British Columbia.  The DGP
group at Toronto kindly kept me on their system, but I would prefer being
contacted directly rather than through Toronto each time...

--------

Drew Whitehouse
E-mail:  drw900@csc2.anu.edu.au
Visualization Programmer,               Fax   : (06) 2473425
Australian National University,         Phone : (06) 2495985
Supercomputer Facility.
GPO Box 4, Canberra ACT Australia 2601.

   I work at the Australian National University Supercomputer Facility as a
visualization programmer.  My interests are scientific visualization, volume
rendering and raytracing.  Currently I'm keeping myself busy inserting bits
and pieces into the MTV raytracer.  The main thing being some sort of volume
rendering, both for voxel data and Perlin style hypertexture (what I've really
got in mind is a tribble teapot.....  )

--------

Michael A. Kelly
3105 Gateway St. #191
Springfield, Oregon  97477
(503) 726-1859 (home)
(503) 342-2277 (work)
mkelly@cs.uoregon.edu

Interests: accurate duplication of real-world objects, color, efficiency

Current Project:
I am attempting to write a rendering program for the Macintosh II series.
'Render3D' will be, at least at first, a shareware program.  It is a simple
ray tracer at this point, but by the end of summer it will be considerably
more.  Some of the features I plan to include are:

	Accurate surface color definitions using spectral energy distributions
		- color calculated on a wavelength-by-wavelength basis
		(now implemented)
	Radiosity method for diffuse reflections
	Phong and Gouraud shading for polygons
	Support for parametrically defined objects
		(including bicubic surface patches)
	Treat light sources as objects
		any normal object can be treated as a light source
	Texture mapping
	Import and Export several file formats, including RenderMan
	Animation

Of course, I probably won't get to all of these before the end of summer, but
I hope to finish the first four or five of them by the beginning of fall term.
(I am currently an undergraduate computer science major at the University of
Oregon, with one year left to go.)

--------

# Erik Jansen
# Dept of Industrial Design
# Delft University of Technology
# Jaffalaan 9
# 2628 BX Delft
# The Netherlands

Erik informs me that the site for a few people has changed:

alias	wim_bronsvoort	wim@duticg.tudelft.nl
alias	erik_jansen	fwj@duticg.tudelft.nl
alias	frits_post	frits@duticg.tudelft.nl

-------------------------------------------------------------------------------

Jarke van Wijk Thesis Availability, by Erik Jansen (fwj@duticg.tudelft.nl)

I saw your question about the book of Jarke van Wijk in the [January] issue.
Jack wrote a thesis book which is nicely printed by the Delft University Press.

The title is:
"On New Types Of Solid Models and Their Visualization with Ray Tracing".

Subjects that are covered:
- translational, rotational sweep; also in TOG 3(3) 223-237
- sphere sweep, also in EG'84 73-82; and in Computers and Graphics 9(3) 283-290
- blending surfaces; comparable with the work of Middleditch and Sears, Sigg'85
- bicubic patches for non-rectangular meshes; also in CAGD 3(1) 1-13
- SML, solid modeling language; also in CAD 18(9).

There is no discussion in the thesis about efficiency methods (only bounding
boxes).

The book can be ordered for Dfl. 36.30 (ca. $20 including shipping costs) from:
Delft University Press
Stevinweg 1
Delft
The Netherlands
fax: +31-15-781661

-------------------------------------------------------------------------------

New Name For "Distributed Ray Tracing", by Paul Heckbert et al

[If you have more comments, please also send them for publication here. -- EAH]

A few weeks ago, I asked the following question of the people on
the Ray Tracing News mailing list:

    From ph@spline.Berkeley.EDU Wed Apr 11 18:48:05 1990

    This is a survey.  Assuming we could rename "Distributed Ray Tracing",
    what should we call it:

    1) Stochastic Ray Tracing
    2) Probabilistic Ray Tracing
    3) Distributed Ray Tracing (old name ok)
    4) other

THE RESULTS WERE:

   14  stochastic ray tracing
    3  monte carlo ray tracing
    2  distributed ray tracing
    1  probabilistic ray tracing
    1  rob cook ray tracing

"Stochastic Ray Tracing" is the clear favorite of those who responded.
Many commented that they've always found the term "distributed ray tracing"
to be a confusing misnomer.  Brian Corrie put it well:

    I have a serious problem with the name distributed ray tracing. The
    main reason is the name directly conflicts with the notion of using
    a distributed computer system for ray tracing. Trying to describe
    both of them is a hassle. Because parallel ray tracing is becoming
    quite prominent, this name conflict can be a major pain.

The plot thickens, however.  Michael Cohen and Joe Cychosz observed
that one could distribute rays regularly throughout a distribution, or
by some other deterministic algorithm, and most people would still call
it "distributed ray tracing".  Wouldn't the proposed new name
"stochastic ray tracing" then be misleading?

I agree.  That's why I've started using the name "distribution ray
tracing".  To me it captures the central idea that you're modeling the
surface properties as numerical parameters each with a probability
distribution instead of a single, specific value; analogous to the
distinction between a random variable and an ordinary variable.
There are various numerical integration techniques for simulating this
distribution: uniform sampling or stochastic sampling, non-adaptive or
adaptive, etc, but these are implementation details relative to the
main concept, which is that there's a variable in the shading
formula with a probability distribution that necessitates integration.
Another advantage of the name is that it's similar to the old one.
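
A toy illustration of the point that the sampling strategy is an
implementation detail: both estimators below integrate the same distribution
(here a made-up stand-in lobe f(x) = x^2 over [0,1], whose true integral is
1/3); one places its samples regularly, the other jitters them within each
cell.

```c
#include <assert.h>
#include <math.h>
#include <stdlib.h>

/* A made-up stand-in for the distributed quantity - say, a glossy lobe. */
static double lobe(double x)
{
    return x * x;    /* integral over [0,1] is exactly 1/3 */
}

/* Regular ("uniform") placement: one sample at the center of each cell. */
double integrate_uniform(int n)
{
    double sum = 0.0;
    int i;
    for (i = 0; i < n; ++i)
        sum += lobe((i + 0.5) / n);
    return sum / n;
}

/* Stochastic (jittered) placement: one random sample within each cell. */
double integrate_jittered(int n)
{
    double sum = 0.0;
    int i;
    for (i = 0; i < n; ++i)
        sum += lobe((i + rand() / (RAND_MAX + 1.0)) / n);
    return sum / n;
}
```

Both converge to the same integral; jittering trades a little variance for
freedom from regular aliasing patterns, which is why "distribution" describes
the concept better than "stochastic" does.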

Andrew Glassner pointed out that there is another concept in Cook's and
Kajiya's papers that deserves a name, and that concept is that the
stochastic sampling can be done independently in each parameter. Andrew
suggested "simultaneous multidimensional sampling".  Another
possibility that occurs to me is "independent sampling".

What do y'all think?

What I'd like to know is: why didn't the reviewers of Cook's
1984 paper scream when he proposed that title?

Paul Heckbert
ph@miro.berkeley.edu

-------------------------------------------------------------------------------

NFF Files from Thierry Leconte

[I found some wonderful NFF files at the site irisa.irisa.fr [131.254.2.3] in
the NFF directory.  Particularly good is the Amiga computer model, but
all of them are pretty worthwhile.

BowlingPin.nff - guess
amiga.nff - an Amiga 2000 keyboard and computer (no monitor) - very nice
balls.nff - just a scene generated by SPD ball program
crypt.nff - a mysterious crypt (with columns and whatnot)
expresso.nff [sic] - T. Todd Elvins' Utah espresso maker in NFF format
hangglider.nff - a hang glider
jaws.nff - don't be afraid, it's just a nff scene
matchbox.nff - a box of matches and some matches
room.nff - an office with the desk and others things
spirale.nff - a spiral spline thingie
spring.nff - a spring thing
sps7.nff - a bull sps7 computer box
teapot.nff - the mythical one
temple.nff - pseudo-roman pagan temple something
tpotbis.nff - ye olde teapot, with lid ajar
watch.nff - a wristwatch
x29.nff - fighter plane

Unfortunately, there are some problems with some databases.  Problems included:

    - polygonal patches with normals given as [0,0,0].
    - polygons with no area (usually a triangle with a doubled vertex).
    - inconsistently defined polygons.  The NFF format specifies that you
      should output the polygon vertices in a particular order (counter-
      clockwise when viewed from above in a right-handed coordinate
      system).  I suspect you use double sided polygons so that it does not
      matter.
    - other minor problems.

The files I had problems with:

amiga.nff - there are a few polygons with no area (doubled vertices).  The
	first is around line 1518 of the file.
jaws.nff - there are tons of polygonal patches with normals of [0,0,0]
room.nff - polygons with [0,0,0] normals, and some no area polygons.  The first
	two polygons in this file are HUGE.  I cannot get the normals per
	vertex to display properly on my system, because some of the vertex
	normals differ inconsistently from the polygon normal (I haven't quite
	figured this out).
spring.nff - the "Shine" and "Transmission" values are switched here, I
	suspect.  Shine is 0, while Kt is 30!  You should definitely fix the
	material setup line here.
temple.nff - no area polygon around line 433.  I can't get the normals to line
	up properly, similar problem to room.nff.
watch.nff - lots of [0,0,0] normals here.
x29.nff - on line 3641 there is an extra vertex - the polygon says it has
	3 vertices, but 4 vertices appear (possibly my file is corrupt).


Some comments I pasted together from Thierry Leconte
(Thierry.Leconte@irisa.fr), who is the caretaker of the files:

In fact I'm only a novice in ray-tracing area.  I work on distributed systems
and parallelism.  But ray-tracers are good applications to test such systems.
Now I work on a modified version of VM_pRay (the parallel ray-tracer of
Didier.Badouel@irisa.fr) which runs on our own distributed system (called
GOTHIC).  We are writing a motif based window interface for it and I am trying
to collect as many nff files as I can in order to run nice demos on the Gothic
system.  I have made available most of these files and some utilities (among
others yours) via anonymous ftp from irisa.irisa.fr.  Most of the
non-classical scenes I have come from a scene designer, Xavier Bouquin, who
works on an Amiga with the Scult4D program, and Philippe.Joubert@irisa.fr has
written a sculpttonff converter.  But if someone knows of other converters or
interesting nff files I will be happy to add them to my collection!

VM_pRAY (ray tracer) of Didier Badouel is available at the same site.

Feel free to use, copy, modify and distribute these files provided that they
are not used for commercial purposes and that the name of the author is
mentioned.

Most of these scenes were made on an Amiga with Scult4D (a truly great
modeler), then converted to nff files with sc2nff (a PD converter available
on the same site, same directory in the utils.tar archive).

The author of crypt,jaws,matchbox,room,temple,watch is Xavier Bouquin
(email to pjoubert@irisa.fr).

teapot, x29 were ftp'ed from cs.uoregon.edu.

amiga, hangglider, teapotbis were PD Scult4D files available on an Amiga site
archive (I will try in the future to collect any PD Scult4D files and convert
them to nff).

sps7 was made by hand.

balls and spirale - generated by program.

-------------------------------------------------------------------------------

RADIANCE Software Available, by Greg Ward (greg@hobbes.lbl.gov)

For a little over a year now I have been distributing a ray tracing lighting
simulation called RADIANCE through the Sun Academic Software Portfolio.  Since
the software runs on many machines besides the Sun, and not everyone gets the
portfolio as it is, I thought you might be willing to publicize the software
through your rtnews mailing list.

RADIANCE was written as a lighting simulation more than a renderer, so it does
not allow some of the tricks that are permitted in other ray tracing programs.
(For example, all light sources fall off as 1/r^2.)  It has some of the nicer
features of advanced renderers, such as textures and bump maps (I've always
argued for calling them patterns and textures, respectively), octree spatial
subdivision, several surface primitives and hierarchical instancing.  Its
main distinction, however, is its ability to calculate diffuse interreflection
like a radiosity calculation.  (See the article by Ward, Rubinstein and Clear
in the 1988 Siggraph proceedings.)

The software is free, and comes with all the C source code, but is not
available through anonymous ftp.  We want to keep track of people who have a
copy of the package so that we can send out update announcements, bug fixes
and so forth.  For this reason we also ask that people do not redistribute the
package without written permission (e-mail is fine).

Since I am just a mellow Californian who can't handle answering a 24 hour
support hotline, I want to discourage the idly curious, those who just want to
check out another ray tracer, from acquiring this software.  If you are
interested primarily in computer graphics, there are plenty of other ray
tracing programs that do a great job producing attractive imagery.  If, on the
other hand, you are really serious about lighting simulation, this is the
program for you.

Summary Description of RADIANCE:

	Lighting calculation and image synthesis using advanced ray tracing.
	Scenes are built from polygons, cones, and spheres made of plastic,
	metal, and glass with optional patterns and textures.

Detailed Description:

	RADIANCE was developed as a research tool for predicting the
	distribution of visible radiation in illuminated spaces.  It takes as
	input a three-dimensional geometric model of the physical environment,
	and produces a map of spectral radiance values in a color image.  The
	technique of ray tracing follows light backwards from the image plane
	to the source(s).  Because it can produce realistic images from a
	simple description, RADIANCE has a wide range of applications in
	graphic arts, lighting design, engineering and architecture.

	The reflectance model accurately predicts both diffuse and specular
	interactions, making it a complete simulation applicable to the design
	of unusual electric and day lighting systems.  Scenes are described by
	boundary representations with polygons, spheres and cones.  Materials
	include plastic, metal, glass, and light.  Textures and patterns can
	be described as functions or data.  Additional programs (generators)
	produce descriptions of compound objects, and allow regular
	transformations and hierarchical scene construction.  A 3D editor is
	being developed.

	The software package includes some image processing software and
	display programs for X, SunView, and the AED512, and comes with
	converters to Sun rasterfile and Targa formats.  Code is provided for
	writing other drivers, and the list is expected to grow.

Interface Description:

	The software is well integrated with the UNIX environment.  Many of
	the programs function as filters, with a reasonable degree of
	modularity.  An interactive program provides quick views of the scene,
	useful for debugging and view determination.  Scenes are described in
	a simple ascii format that is easy to edit and program.  Generators
	are provided for boxes, worms, surfaces of revolution, prisms, and
	functional surfaces (e.g. bicubic patches).  A small library of
	patterns and textures is included.  In general, the software is
	sensible but not mouse-based.

Overall Goals of Developer:

	The primary goal of the program is the accurate simulation of light in
	architectural spaces.  Secondary goals are image synthesis and
	geometric modeling.  Efficiency is an important concern in any ray
	tracing method.

Obtaining RADIANCE:

	Send a 30+ Mbyte tape cartridge with return envelope and postage to:

		Greg Ward
		Lawrence Berkeley Laboratory
		1 Cyclotron Rd., 90-3111
		Berkeley, CA  94720

If you have any questions regarding the applicability of this software to your
needs, feel free to call or (preferably) send e-mail to me directly.

Sincerely,

Greg Ward

Lighting Systems Research Group
GJWard@Lbl.Gov
(415) 486-4757

-------------------------------------------------------------------------------

Rayshade Updates & SPD Bug, by Craig Kolb

[from various notes to me]

The Rayshade 3.0 ray tracer is now up to patch level 5.  Rayshade 3.1 is
coming along nicely.  It fixes some of the major problems in rayshade, and
adds a bunch of new features.

By the way, while playing with the SPD, I ran across a couple of very minor
problems.  In lib.c, you make mention of OUTPUT_POLYGONS, when you really
mean OUTPUT_PATCHES.

[I have sent Craig release 2.7 of the SPD package, and it's now available for
FTP from weedeater.math.yale.edu (130.132.23.17).  It has minor typo fixes
(like the above) and clarifications of the NFF, but no code fixes. - EAH]

Craig also notes that Przemek Prusinkiewicz's book, "The Algorithmic Beauty
of Plants", will be released at SIGGRAPH this year, complete with lots of
pretty raytraced pictures using Rayshade.  Some of the images generated
were in the March 1990 issue of IEEE CG&A (including the front cover).

-------------------------------------------------------------------------------

New Version of Vort Ray Tracer, by David Hook (uunet.UU.NET!munnari!dgh)

[the following was pieced together from two notes - EAH]

I have just installed a new version of the ray tracer in
pub/graphics/vort.tar.Z on munnari.OZ.AU.  It's a bit of an improvement on the
last one, and has some stuff for displaying images as animations on Suns,
PCs, and X11.

One thing that may be of interest:  one of the guys I work with, Bernie
Kirby, implemented some marble and wood texturing in the ray tracer.  He
had the following to say:

	It is an implementation of Ken Perlin's noise function as described in
	"Hypertexture, Computer Graphics, Vol 23, No.3, July 1989, pp
	255-256".  Most of the function names are as given in the paper.
	Also, I think that there may be an error in the paper as the given
	formulae of OMEGA would always produce zero if the actual point falls
	on a lattice point.  (ie. GAMMA(i, j, k) .  (u, v, w) = 0).  Anyway,
	I added in a pseudo-random value for the noise at each lattice point
	as well as the gradient values (as described in his '85 paper), and it
	seems to work.

	The original version (ie.  the one that had zeros at the lattice
	points) produced almost exactly the same effects as displayed in Fig 2
	of J. P. Lewis' paper in SIGGRAPH '89, "Algorithms for solid noise
	synthesis".  The changed algorithm still displays some of these
	artifacts (if you really look for them) but nowhere near as badly as
	in Lewis's paper.

The only other things of note are that Vort no longer requires
primitives to be axis-aligned and that most things can have tile patterns
mapped on to them (this includes toruses, although flat pixel files
mapped onto a donut can tend to look a little weird). Some people may find
the mapping functions of some use (although having just got a copy of
"Introduction to Ray-Tracing" I notice most of them are in there, the book
shop slugged me $100 bucks for it, sometimes I don't believe this place...)

Regards,
David.

PS: we ran your standard rings benchmark at four rays per pixel and it took
two and a half days on a Sun 4 3/90. Is that a record? ( :-) ) Next project
is to speed the mother up...

--------

[A package called Vogel is also available at munnari - EAH]

From Tom Friedel, tpf@jdyx.UUCP:

With Vogel you get 3-d transformation, 3-d and 2-d clipping,
perspective, orthogonal views, some patch functions, and back face removal.
I've wanted to build a bigger package on these routines, some sort of
.nff previewer, but haven't had the time.  It's a good package because you
get device independence with X11, PostScript, and others.

-------------------------------------------------------------------------------

Graphics Interface '90, by Eric Haines

	An interesting conference/proceedings which is often overlooked is
"Graphics Interface".  This year's conference, held in Halifax, Nova Scotia,
Canada, had quite a few papers on ray tracing.  I particularly liked Don
Mitchell's "Robust Ray Intersection with Interval Arithmetic" paper.  I'm sure
we'll be seeing interval analysis applied to more areas of graphics in the
years ahead, and this is an interesting application of the technique.
Graphics Interface proceedings can be ordered from:  Morgan Kaufmann
Publishers, (415)-965-4081.  What follows is a list of papers that may be of
interest:

	Image and Intervisibility Coherence in Rendering
		Joe Marks, Robert Walsh, Jon Christensen and Mark
		Friedell, Harvard U.

	Robust Ray Intersection with Interval Arithmetic
		Don P. Mitchell, AT&T Bell Laboratories, Murray Hill

	Approximate Ray Tracing
		David E. Dauenhauer and Sudhanshu K. Semwal, U. Colorado
		at Colorado Springs

	Octant Priority for Radiosity Image Rendering
		Yigong Wang and Wayne A. Davis, U. Alberta

	Exploiting Temporal Coherence in Ray Tracing
		J. Chapman, T. W. Calvert and J. Dill, Simon Fraser U.

	A Ray Tracing Method for Illumination Calculation in
	Diffuse-Specular Scenes
		Peter Shirley, U. Illinois at Urbana-Champaign

	Voxel Occlusion Testing: A Shadow Determination Accelerator for
	Ray Tracing
		Andrew Woo and John Amanatides, U. Toronto

	Some Regularization Problems in Ray Tracing
		John Amanatides and Don Mitchell, AT&T Bell Laboratories,
		Murray Hill

-------------------------------------------------------------------------------

Real3d Review, Haakan "ZAP" Andersson

  I was attending a computer exhibition in Stockholm the other day, and among
all the printers, PC computers and other boring stuff, there were some
bright spots.  At one stand I saw a screen with a checkerboard pattern and a
mirrored sphere on it....  I began to yawn loudly, but just as I was going to
leave, a mirrored hand plunged up from the ground and grabbed the sphere, and
the camera made a spin around the whole scene.  It made me interested enough
to enter and look.

  REAL-3D is apparently a new product (so new it's a RealSoonNow product) from
icy Finland.  It's a raytracing program for the Amiga, and it's darn fast
(for being an Amiga, that is). I talked to the program author and he said he
did NOT use the math coprocessor, and it was all in assembler...(Good Grief)
When discussing efficiency he proudly declared that he had NOT looked at others
for algorithms, because he did not want to be influenced, so he had invented it
all from scratch. He had been programming full-time for four years, and from
what I saw, the result was satisfactory, to say the least.

  A beautiful, OSF/Motif-like user interface and a great interactive object
editor that would make Sculpt-Animate 4d go home and hide under a rug. All
objects were arranged in a hierarchy for animating, and materials and textures
could be assigned to whole hierarchy tree branches. He could even do solid
modelling (I saw a cylinder cut out from another before my very eyes) with
different textures in the 'hole' and the piece the hole was cut out of. It
was also one of the few Amiga tracers to support texture mapping, i.e. you
could map any IFF image onto any surface via some different kinds of projective
mapping (Parallel, Ball, Cylinder).

  It had a good hybrid between analytic objects (spheres, cylinders, cones and
even hyperbolics (!)) and polygon surfaces. A nice entity was the 'sphere-line'
he used:  you simply drew a series of connected lines, and all vertices got
a sphere, and all connecting lines became a cylinder.  [AKA "worms" or "cheese
logs" in other parts of the world - EAH]

  Animation support looked quite straightforward, and he made a simple
animation of the Real3d logo being flown through before my very eyes (though
only as a wire frame, but I got the hang of the animating).


SPEED:

  Even though the program ran on an Amiga 2000 (14 MHz clock), its speed was
rather impressive.  Compared with my own tracer, running on a 386 machine with
full math coprocessor support, his program looked faster..(!)  The wire frame
representations, used for placing cameras and so forth, were real-time, and he
traced a hyperbolic with wood texture with a cylindrical hole in about a
minute.
He also claimed that finding the colors (for the HAM mode in the Amiga) was
half the work.  He said that an early version used Black&White and it was
almost twice as fast...  (bragging?  :-) Upon being asked how it could be that
fast, he said, "Only me and my brother knows how it works..."  then he smiled.
Heaven knows why...?


Conclusion:

REAL3d looks like a speedy hack for the Amiga, and the editor alone makes it a
joy to watch.  I hope this little blond guy from Finland will sell some of
these programs, so he will be able to continue to develop it into something
really nice.

[Sorry, I don't have an address for where to get it.  Anyone? - EAH]

-------------------------------------------------------------------------------

Bits From a Letter by David Jevans

	...As for the Arnaldi paper on "mailboxes" (keeping a ray_id for each
object to avoid multiple intersections), I thought that was common knowledge!
Cleary's analysis of ray tracing paper (Visual Computer '88), MacDonald's
paper and my paper (Graphics Interface '89) all mention the technique and
reference the paper.  It's a good thing that you put it in the RTN if people
really don't know about it!

	Anyway, my recent work has some pretty important results for the ray
tracing community.  Essentially I show that further work in reducing
ray-object intersections is pretty much pointless (we are close to the fastest
possible time anyway).  I'm just writing a section that illustrates that
hierarchies of 3D voxel grids are superior to bounding volume hierarchies (as
if anyone will believe me!  :-)

David Jevans @ University of Calgary, Department of Computer Science
	       Calgary, Alberta, Canada T2N 1N4
uucp: jevans@cpsc.ucalgary.ca                  vox: 403-220-7685

-------------------------------------------------------------------------------

On Antialiasing, & Light and Such, by Haakan "ZAP" Andersson (no email)

(Disclaimer: Since I haven't read all the theses about anti-aliasing, this
 might be known already, so what I claim to be my ideas may not be, and the
 stupidity is solely on my part :-)


When ray tracing, instead of having one whiz-bang function trace(ray) that
traces through the entire object list (traversing AHBVs as necessary),
pinning down the poor bastard of an object blocking our way of light and
shining in our face, you might do one little thingy:

Use TWO functions, ONE that traverses the bounding volumes for a ray, building
an object_list of primitive objects this ray (might) intersect, and ONE that
does the real object intersection.  I.e. this:

	current_list = traverse_abvh(ray);
	pixel = trace_this_list(ray,current_list);
	clear_list(current_list); /* Get rid of it somehow */

Why da heck do this?  Well, when anti-aliasing, you will virtually hit the
same object(s) inside each pixel, and the 'slack' around the object vs. bbox
will allow the 'current_list' to contain all objects this ray hits, plus
those hit by any ray being very close to this very ray, i.e. all rays within
a single pixel.  So, when anti-aliasing, simply call 'trace_this_list()' with
the same list all over again, only with SLIGHTLY different 'ray's.  OK, there
CAN be wrongies some places, but since you do this for eye rays, and _I_say_
that for eye rays you should use 2D bounding boxes on screen INSTEAD of real
bboxes, you simply let the 2D bboxes be one pixel bigger than they should be,
and voila, each pixel will always yield AT LEAST the object_list containing
all objects to be hit in this pixel and the neighbouring ones.  Got that?

Any flaws?  Well, since 'current_list' needs to be alloced/malloced in some
way, there might be a speed problem.  Another solution is using a static
object list, and KEEPING the bbox traversal code in 'trace_this_list()' but
ALSO having a 'traverse_abvh()' function, used only upon eye rays when
anti-aliasing is in effect.  The fact that the list 'trace_this_list()' gets
does NOT contain any bboxes once in a while (i.e. when we supply a list made
by 'traverse_abvh()' instead of the 'full' object list) is not a problem from
that function's point of view.

Any flaws NOW?  Well, you might always run out of static storage space.  But
you can always 'realloc()' :-)

[Comments:  this has a certain sense to it.  By making a "likely candidate
list" for a pixel you can stop wasting time traversing the darn hierarchy.
You could even sort it by the box hit distance, so that when you do get a
hit from the candidate list you can then simply test this hit against the
remaining candidate distances.  As soon as you reach a candidate distance that
is beyond your hit distance, you stop intersecting.  This candidate list idea
is similar in feel to my Light Buffer lists and Jim Arvo's 5D lists.

The trick in all this is to make sure your bounding boxes do not fit so
tightly that a pixel makes a difference:  this is easy enough to calculate for
eye rays.  - EAH]
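That distance-sorted early-out can be sketched abstractly.  In the toy below
(the Candidate structure and names are my invention; in a real tracer hit_t
would be computed on the fly by the object intersector), candidates arrive
sorted by the distance at which the ray enters their box, and once that entry
distance exceeds the closest hit found so far, no later candidate can win:

```c
typedef struct {
    double box_t;   /* distance at which the ray enters the bounding box */
    double hit_t;   /* exact hit distance, or < 0 if the ray misses it */
} Candidate;

/* assumes cand[] is sorted by increasing box_t; returns the closest
 * hit distance (or -1.0 for no hit) and counts tests actually done */
double closest_hit(const Candidate cand[], int n, int *tests)
{
    int i;
    double best = -1.0;
    *tests = 0;
    for (i = 0; i < n; i++) {
        if (best >= 0.0 && cand[i].box_t > best)
            break;                  /* all remaining boxes are farther */
        (*tests)++;
        if (cand[i].hit_t >= 0.0 && (best < 0.0 || cand[i].hit_t < best))
            best = cand[i].hit_t;
    }
    return best;
}
```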


--------

				Light and Such
				      or

			  "Something's REALLY wrong,
			   with Bui-Tuong Phong"

	(I think that's the guy's name, and since Bui-Tuong is his last name,
	 Bui-Tuong shading is more appropriate than Phong shading, but...)

			       By Haakan

I have been dissatisfied with Phong's shading model for quite some time,
since it does not apply accurately enough to reality.  There are at least two
effects in 'real life' that I think are important enough to be mad at ol' Mr.
Phong about.  I have called these 'The Rough Bug' and 'The Nonlinearity Bug'.

* The Rough bug

There are many, many ways to see The Rough Bug in action in real life.  Many
poems have been written about the biggest proof of The Rough Bug, many lovers
have sat under it, many songs have been written about it, and many an
astronomer has gazed upon it:  The Moon.

Get out late at night and try looking straight up.  You will see the moon,
or half of it, or a third, or nothing ("Half moon tonight.  At least it's
better than no moon at all" - Fortune program).  But does this moon look like
a Phong-shaded sphere?  Where is the ambient, diffuse, and above all,
specular light?  What you (and I) see is darkness, darkness, more darkness,
and smack, white moon surface, more white moon surface, swack, darkness,
darkness...  Who stole Kd and Ka?

The answer lies in the texture of the moon's surface.  It's really a VERY
large amount of small objects:  stones, rocks, sand, mountains, etc.  So any
given spot on the moon's surface will have normal vectors pointing in every
possible direction, and (using Phong on a small scale) will be shaded (almost)
equally, until, of course, it's in shadow!  This is The Rough Bug:  rough
surfaces are subject to this effect, and the Phong diffuse component gets more
and more out-of-sync with the actual surface as the surface gets rougher.

Some questions emerge:

Q: Is there any way to let a phong-type shading model take this into account,
   without having to model the surface exactly, either in actual geometry or
   as a bump map?

Another example of this bug you may find outside at sunset (or almost
sunset), if you stand on a large, flat surface covered with sand, dirt,
concrete or something similar.  Looking towards the setting sun, the surface
you are standing on is darker than when you look away from the sun, for the
same reason:  the surface's roughness.  Small rocks and stuff reflect the sun
as you look away from it (by 'it' I refer to the sun, not the rocks.  You
will not see anything if you look away from the rocks :-) and do not as you
look into it (same 'it').  So much for specular reflection.  Perhaps some
kind of inverted specular reflection....??

* The Non Linearity Bug

Without having ANY physicists backing me up, I dare claim that the (very
linear) equation of the standard ray tracing reflection model is goddamn
wrong!  I was wearing a ring the other day, sitting in front of a window,
looking out into the sun.  Since my mind is constantly on ray tracing, I saw
in the window the reflection of the sun reflecting in my ring.  I didn't see
myself, nor the ring, not even the room I was in, reflecting in the window
glass, only the (very bright) reflection of the ring.  (Perhaps the window
used adaptive tree pruning, filtering away all reflections below 0.1?
Nah... don't think so :-)

Another example was when a friend of mine was standing in front of the very
same window, but it was very very late at night, and the window was black.
But he was backlit by the light from the room, and as I observed his
reflection in the glass, I saw it somehow had higher contrast than my direct
vision of him standing there.  The dark parts of his face (where the shadow
of his nose fell, f'rinstance) were pitch black in the reflection, but the
tip of the very same nose, being lit from behind, appeared very bright in the
reflection.  There were no such differences in luminosity between nose-tip
and nose shadow in the true image of him, not even from the window's
viewpoint, something I verified by crawling up between the window and him,
observing him closely (with him thinking I was a complete fool -- which I
am).

A question emerges:

Q: Can this 'non-linearity' of reflected images somehow be incorporated into
   the ray tracing algorithm, or is this a local effect in the human eye?
   (Perhaps in MY eyes only... ?)

======== USENET cullings follow ===============================================

Summary: Uniform Random Distribution of Points on Sphere's Surface,
    by Marshall Cline (cline@cheetah.ece.clarkson.edu)
    Organization: ECE Dept, Clarkson Univ, Potsdam, NY

The original problem was:
> I need to uniformly(!) spray a sphere's surface with randomly located dots.
> We can assume the sphere has radius R and is centered at the origin.

SOLUTION #1 (by far the most popular):
> Choose 3 Uniform random values: (rand(-R,R), rand(-R,R), rand(-R,R)).
> If this is inside the sphere, project that vector onto the sphere's surface.
    (Sorry I have no references; many many people suggested this)

SOLUTION #2:
> Choose 3 Gaussian randoms: <Normal(0,1), Normal(0,1), Normal(0,1)>
> Project this vector onto the sphere's surface.
    bill@stat.washington.edu (Bill Dunlap, Statistics, U. Washington)
    jd@shamu.wv.tek.com (JD)

(Projecting vector <x,y,z> onto the sphere's surface is done by dividing
each component by sqrt of sum of squares (ie: the vector's length).)

SOLUTION #3:
> Pick a random latitude by the inverse sine of a number uniformly
> distributed over [-1,1].  Pi times another such random number gives
> you a random longitude, and you're done.
    dougm@rice.edu (Doug Moore)

Several other solutions suggested dividing the sphere's surface into small
patches and projecting uniformly into a randomly chosen patch.

Thanks to all who answered.
Marshall Cline

PS: I'm implementing this on the Connection Machine.  The SIMD nature of
the CM makes the first soln difficult, since each processor will have to
wait until the last straggler finds a point inside the sphere.  There are
ways around this, like a list of cached 3-space points in each processor,
but there's always a chance that one processor's list will be very short.
Thus I'm going to try the Normal(0,1) solution first.

PPS: paul@hpldola.HP.COM (Paul Bame) suggested a method (simulated annealing)
which would "evenly distribute" points on the sphere's surface.  Although my
app requires a uniform random distribution, I'm posting this as it may be
appealing (though slow) for someone who wants evenly distributed points.

-------------------------------------------------------------------------------

Ray Tracing & Radiosity, by Frank Vance (fvance@airgun.wg.waii.com)
Organization: Western Geophysical, Houston

The SIGGRAPH '87 proceedings contain three articles (see biblio. below)
which, taken as a whole, seem to imply that image synthesis will have to
combine both ray tracing and radiosity in order to be able to accurately
render images that contain many "real-world" phenomena.  Two of the papers
point out the difficulty of using ray tracing to render such things as
atmospheric scattering and "participating media".

I have not seen any further discussion of this view (although I have not yet
seen the SIGGRAPH '89 Proceedings [go easy on me, OK?]), and am wondering what
other researchers, particularly die-hard ray tracers, thought of this.
Can ray tracing correctly render such things without resorting to radiosity
tricks?  Or is the distinction between ray tracing and radiosity
essentially artificial?  What's your opinion?

Bibliography:
 All below from SIGGRAPH '87 Proceedings a.k.a. Computer Graphics, July 1987,
   v.21 n.4

	Wallace, John R.; Michael F. Cohen; Donald P. Greenberg
	"A Two-Pass Solution to the Rendering Equation: A Synthesis
	 of Ray Tracing and Radiosity Methods", pp 311-320

	Rushmeier, Holly E.; Kenneth E. Torrance
	"The Zonal Method for Calculating Light Intensities in the
	 Presence of a Participating Medium", pp 293-302

	Nishita, Tomoyuki; Yasuhiro Miyawaki; Eihachiro Nakamae
	"A Shading Model for Atmospheric Scattering Considering
	Luminous Intensity Distribution of Light Sources", pp 303-310

--------

Re: Ray Tracing & Radiosity, by Mark VandeWettering (markv@gauss.Princeton.EDU)

What I think SHOULD be implied is that normal raytracing techniques are
inadequate to solve a wide variety of lighting situations, particularly those
which deal with solutions to the "ambient" light contribution, diffuse
interreflection, participating media, or color bleeding.

This doesn't mean that raytracing can't be used to solve problems like this.
As a matter of fact, radiosity can be implemented quite simply using a
raytracer rather than a zbuffer-er for the hemicube calculations.  Raytracing
was a part of Holly Rushmeier's participating media radiosity solution, where
rays were used to perform spatial line integrals of the lighting equation.  If
you examine the 88 and 89 Siggraph proceedings, you will see that many
researchers have shifted to raytracing-like approaches to implement radiosity
solutions.

>Can ray tracing correctly render such things without resorting to radiosity
>tricks?  Or is the distinction between ray tracing and radiosity
>essentially artificial?  What's your opinion?

The distinctions aren't artificial, but they are subtle.  For a while,
radiosity meant using matrix equations to solve energy transfer between
Lambertian reflectors.  Later, the n^2 memory requirements were relaxed by
using progressive radiosity, and the algorithm became practical and
competitive with other methods.  Now, integrations of raytracing and radiosity
are beginning to show further improvements in both speed and the kinds of
situations they cover (specular reflection).  And you can be sure that there
will continue to be radiosity papers in THIS year's Siggraph too.  (I can
hardly wait!)

Raytracing is generally conceived to offer solutions to precisely the
situations where early radiosity solutions failed:  environments with highly
specular surfaces.  It used to be thought that raytracing was too
expensive, but improvements in hardware and in algorithms have made raytracing
tractable and attractive.

Now, I believe that most algorithms "of the future" will have some sort of a
raytracing core to them, if not for modelling light interactions then probably
just for checking visibility of points.

How 'bout anyone else?  Any more ideas?

-------------------------------------------------------------------------------

Ray-Tracing the Torus, by Prem Subrahmanyam (prem@geomag.gly.fsu.edu)

Ok, I've contributed my quadric ray-tracing code.  Now, if someone could tell
me how to do the above, I would greatly appreciate it.  I know it is a 4th
order equation, but I have not even succeeded in locating the equation for the
torus in my math textbooks (except for a spherical coordinate version--and I
don't want to try to convert).  Any help would be appreciated.

[this has been answered a few times already in the RT News, but I found Bob
Webber's reference of interest.  He also gives a taste of Pat's explanation
-- EAH]

--------

Reply from Bob Webber (webber@fender.rutgers.edu):
Organization: Rutgers Univ., New Brunswick, N.J.

For planar curves we have J.  Dennis Lawrence's A Catalog of Special Plane
Curves (Dover 1972) to satisfy those times when one wakes up in the middle of
the night, racking one's mind trying to remember the equation for the
hippopede.  However, for 3-d, the best I have seen is Pat Hanrahan's A Survey
of Ray-Surface Intersection Algorithms that appears in Andrew Glassner's An
Introduction to Ray Tracing (Academic Press 1989).  There we find, among other
things, the equation for a torus as:

   (x**2 + y**2 + z**2 - (a**2 + b**2))**2 = 4 a**2 (b**2 - z**2)

This describes a torus centered at the origin defined by a circle of radius b
being swept along a center defined by a circle of radius a.  It is derived
from considering the intersection of a plane with a torus that yields the two
circles:

   ((x - a)**2 + z**2 - b**2)((x + a)**2 + z**2 - b**2) = 0

[if you are unfamiliar with this construction, it is worthwhile pausing here
and savouring how this equation actually works -- sometimes the equations are
prettier than the pictures] and then spinning this intersection by replacing
x**2 with x**2 + y**2 (after some algebraic simplification, which converted
the above to:

     (x**2 + z**2 - (a**2 + b**2))**2 = 4 a**2 (b**2 - z**2)

).  The section includes a reference to an unpublished 1982 note by Hanrahan
entitled:  Tori:  algebraic definitions and display algorithms.  The general
scheme for a number of variations on the torus is to start with a quartic
curve of the form f(x**2,y)=0 and then substitute x**2+y**2 for x**2 and z
for y.
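As a quick sanity check of the torus equation above (the helper below is
mine, not from Hanrahan's note): with center-circle radius a and tube radius
b, the point (a+b, 0, 0) lies on the outer equator, so the implicit function
should vanish there, and be positive at the origin (outside the tube):

```c
/* f(x,y,z) = (x^2 + y^2 + z^2 - (a^2 + b^2))^2 - 4 a^2 (b^2 - z^2);
 * zero exactly on the torus surface */
double torus_f(double x, double y, double z, double a, double b)
{
    double s = x*x + y*y + z*z - (a*a + b*b);
    return s*s - 4.0 * a*a * (b*b - z*z);
}
```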

-------------------------------------------------------------------------------

Need Help on Superquadrics, by Wayne McCormick (wayne@cpsc.ucalgary.ca)

	A few months ago I read some articles on superquadrics here on
the net.  It interested me and I decided to try to implement a modeler
based on superquadric shapes.  Since the inside-outside functions are
so easy to use in determining intersections and so forth I thought it would
be somewhat easy to do.  But I stumbled into a small problem.

	Parametrically, a superellipsoid is defined by

	x = c(u,e1) * c(v,e2)
	y = c(u,e1) * s(v,e2)
	z = s(u,e1)

	where -pi <= u <= pi, -2pi <= v <= 2pi, and c(u,e1) = cos(u)^e1,
s(u,e1) = sin(u)^e1.  O.K., this is the easy part.  By varying u and v
through the ranges we generate a bunch of points on the surface of the
ellipsoid.  But, the only place that the functions are defined for real
numbers is in the positive octant because once the sin or cos function
becomes negative and e1 and/or e2 are not integers, the function moves out
into the complex plane.

	Then I tried to calculate everything in the complex plane.  There
are two problems here.  1) speed, 2) how do you map back to image space?

	Then in Franklin and Barr's paper on "Faster calculation of
superquadric shapes", they say that using an explicit equation and reflecting
47 times is much faster.  Sure, I can see that, but the patch that is
generated by the explicit equation is small and odd-shaped, and in what 47
directions does one have to reflect it?

--------

From Robert Skinner (robert@texas.esd.sgi.com)
Organization: Silicon Graphics Inc., Entry Systems Division

I'm going to suggest this without poring over the references, so I'll
apologize ahead of time:

Try using the same identities for c(u,e) and s(u,e) as for sin() and
cos():

	c(-u,e) == c(u,e)
	s(-u,e) == -s(u,e)
	s(pi/2 - u, e) = c(u,e)
	c(pi/2 - u, e) = s(u,e)

you can make the restriction 0 <= u,v <= pi/2 and solve only the easy
cases.  This also means that you only have to compute 1/4th of u's
range, and 1/8th of v's range, a reduction of 32.  Define
your basic patch over the range above, then define what the other
ranges would be in terms of that:

e.g.
	0 <= v <= pi/2
	-pi/2 <= u' <= 0		(i.e u' = -u)

	then
 	x' = c(u',e1) * c(v,e2) = c(u,e1) * c(v,e2) = x
 	y' = c(u',e1) * s(v,e2) = c(u,e1) * s(v,e2) = y
 	z' = s(u',e1) = -s(u,e1) = -z

so just reflect your basic patch by -1 in the Z to draw this one.

Repeat for all other sections of the total range.
This should work, but it looks like you only get 32 sections, not 48.

-------------------------------------------------------------------------------

Ray Tracing Penumbral Shadows, Prem Subrahmanyam (prem@geomag.gly.fsu.edu)
Organization: Florida State University Computing Center

I would like to hear how people who have done the above have succeeded at
this.  Presently, I am working with DBW_Render which uses the following basic
algorithm.

Find the direction to the light source and determine distance to this source
(for inverse square shading).  Now, create a random unit vector and scale this
into the direction to light vector using the radius of the light source as the
scaling factor.  Test this new vector for shadows, etc.

This generates very poor shadows except when the anti-aliasing is turned way
up (6 or more rays per pixel), since we are either in shadow or not (no
in-betweens).  Does anyone else have any usable suggestions as to how this
can be done so that we vary the amount of light depending on how much in
shadow the point is (short of firing multiple rays at the light source --
pretty much the same as anti-aliasing)?
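[For what it's worth, the usual fix is exactly the one dismissed at the end:
fire several shadow rays at points spread across the light source and average
the results, which gives fractional shadowing instead of all-or-nothing.  A
deterministic toy sketch, where blocked() is a hypothetical stand-in for a
real shadow-ray test and the "light" is just a 1D segment [0, w]:

```c
/* toy occluder: blocks shadow rays aimed at the first 30% of the light */
static int blocked(double x) { return x < 0.3; }

/* average n stratified shadow samples across the light source;
 * returns the lit fraction: 0.0 = full shadow, 1.0 = fully lit */
double shadow_fraction(double w, int n)
{
    int i, lit = 0;
    for (i = 0; i < n; i++) {
        double x = w * (i + 0.5) / n;   /* stratified sample position */
        if (!blocked(x)) lit++;
    }
    return (double)lit / n;
}
```

With the toy occluder above, shadow_fraction(1.0, 10) comes out 0.7, a
partial-shadow value no single ray could produce. - EAH]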

-------------------------------------------------------------------------------

Ray with Bicubic Patch Intersection Problem, Wayne Knapp
	(wayneck@tekig5.PEN.TEK.COM)
	Organization: Tektronix Inc., Beaverton, Or.

Time for a hard problem.  Anyone have a great idea on how to compute
a ray intersection with a general bicubic patch?  The methods I've found in
papers so far tend to be very slow.  Seems like most papers take one of two
approaches:

    1. Sub-divide the patch into many small polygons and ray-trace those.
       Works, but when you have thousands of patches you can end up with
       millions of polygons.

    2. An iterative numerical approach, choosing an x,y,z point on the ray
       and checking to see if it intersects the patch by using the x,y,z
       values in the system of equations given by the four cubic equations
       forming the patch.  This of course normally requires many tries.

Does anyone have any better ideas?

--------

John Peterson (jp@Apple.COM)
Organization: Apple Computer Inc., Cupertino, CA

I did my MS thesis on comparing techniques #1 and #2.  #1 was definitely the
winner, both in terms of speed and robustness.  #2 requires root finding,
which can have convergence problems (not finding a root, finding the wrong
root, etc).  Also, it performs the surface evaluation (which is expensive) in
the very inner loop of the ray tracing process where it is executed literally
billions of times for a complex image.

Reducing to polygons first allows the ray tracer to deal strictly with simple,
fast and stable linear math.  It also does the surface evaluation (to generate
the polygons) only once as a pre-process.  Once the polygons are generated,
there are several very effective schemes for reducing the ray-surface search
problem for large quantities of small, simple primitives (e.g., octrees,
bounding volume trees, 5D space, etc).

For the gory details, see "PRT - A System for High Quality Image Synthesis
of B-Spline Surfaces", MS Thesis, University of Utah, 1988.

--------

Lawrence Kesteloot, (lkestel@gmuvax2.UUCP)
Organization: George Mason Univ. Fairfax, Va.

Check out the book "An Introduction to Splines for use in Computer Graphics &
Geometric Modeling", by Richard H. Bartels, John C. Beatty, and Brian A.
Barsky.  (Morgan Kaufmann Publishers, 1987).  It has a section entitled
Ray-Tracing B-Spline Surfaces (p. 414).  It goes into several steps to speed
up the intersection:

(I have not read this yet.  I'm summarizing as I read.)

   1.  Refinement Preprocessing - This breaks the surface down into many
	 smaller splines.  Each spline covers several hundred pixels.

   2.  Tree Construction - Break the new (smaller) spline into a bunch of
	 boxes, starting with one box for the whole spline, then break that
	 down (put all this into a tree).  Intersection with boxes is easy.
	 You can find out which of these boxes (check only the leaves of the
	 tree) intersects the ray.  This will give you the starting point for
	 Newton's iterations.

   3.  Do Newton's iteration to find the exact distance.

I'm sorry if I've made any errors in the above description.  You're going to
have to get the book, of course, to implement it.  I'm going to implement it
in my own ray-tracing program in the next few weeks, so I'll post the source
if anyone is interested.  It seems like a complicated algorithm, but it may
speed things up quite a bit.  [I never did see the source posted - EAH]
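[Step 3 (Newton's iteration) in miniature, with a sphere standing in for the
spline patch since the full B-spline machinery is too long to show here: for
an implicit surface f(p) = 0 and a ray p(t) = o + t*d, iterate
t <- t - f(p(t)) / f'(t), where f'(t) is the directional derivative of f
along d.  The box step above supplies the starting t. - EAH]

```c
#include <math.h>

/* stand-in implicit surface: the unit sphere, f(p) = |p|^2 - 1 */
static double f(double x, double y, double z)
{
    return x*x + y*y + z*z - 1.0;
}

/* Newton's iteration along the ray o + t*d, starting from t */
double newton_ray_hit(const double o[3], const double d[3], double t)
{
    int i;
    for (i = 0; i < 50; i++) {
        double p[3] = { o[0]+t*d[0], o[1]+t*d[1], o[2]+t*d[2] };
        /* grad f = 2p for the sphere, so f'(t) = 2 (p . d) */
        double fp = 2.0 * (p[0]*d[0] + p[1]*d[1] + p[2]*d[2]);
        if (fabs(fp) < 1e-12)
            break;                  /* derivative vanished; give up */
        t -= f(p[0], p[1], p[2]) / fp;
    }
    return t;
}
```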

--------

Mark VandeWettering (markv@gauss.Princeton.EDU) writes:

Well, there is another solution to this problem which people haven't fleshed
out a great deal: generate triangles and raytrace those.

I hear groans from the audience, but let me put forth the following reasons:

1.	ray/bicubic patch intersection IS floating point intensive.  The
	best figures I have seen quote around 3K floating point operations
	per ray/patch intersection.  I have a hard time believing you
	couldn't do better with a good hierarchy scheme + good triangle code.

2.	Even if you can convince me that your bicubic patch intersector
	worked faster than my combination, dicing into triangles is very simple
	and easy to implement.  The same code could also be used to generate
	views of nearly any parametric surface with minimal modification.

There are (of course) some difficulties.  You would like to subdivide
appropriately, which means being careful about silhouette edges and shadow
edges.  Barr and Von Herzen had an excellent paper in the 1989 siggraph to
illustrate how to implement subdivision.  You might want to check it out.

(Of course, all this is moot, cuz I never HAVE managed to implement
real ray/patch intersection code)

--------

Thomas Williams ({ucbvax|sun}!pixar!thaw) replies:
Organization: Pixar -- Marin County, California

Another problem which I haven't seen solved is subdivision for
reflections or transmissions which magnify the surface intersection
of this secondary ray.  For example, what is a suitable subdivision
algorithm for surfaces seen through a magnifying glass?  Adaptive techniques
that use gradient differentials can generate gillions of unneeded polygons.
Also the continuity you lose by approximating surfaces with triangles for
curved objects with more than one transmitting surface (like a bottle,
or thick glass) can cause some pretty horrible artifacts.  If it is
important to avoid these problems, the only way I know that you can do
it is with ray-surface intersection.

--------

Mark VandeWettering (markv@gauss.Princeton.EDU) then replies:

The problems you list are legitimate.  However, I would counter with the
following arguments:

1. How often do scenes which have magnifications through reflection or
refraction REALLY occur?  The answer to this question for me was: never.
Much more difficult to solve are problems with shadow edges, which can
project edges which are irritatingly linear.  Two things will help
soften/alleviate problems with shadow edges:

	a.	Using distributed raytracing to implement penumbras.
		Fuzzy boundaries can tolerate a coarser piecewise
		approximation without causing noticeable effects.
	b.	We can help eliminate artifacts by treating light sources
		specially, and subdividing on silhouettes with respect
		to the light source as well as the eye.

2.  Remember, your screen has on the order of 1M pixels.  Each patch
will probably cover only a vanishingly small fraction of these pixels.
If a patch covers 100 pixels, any subdivision beyond 10x10 is probably
overkill.  Use expected screensize as a heuristic to guide subdivision.

--------

Thomas Williams ({ucbvax|sun}!pixar!thaw) then replies:

Don't forget problems with areas of high curvature, especially in animated
sequences where specular highlights "dance" on sharply curved edges.  A
hybrid approach might work well, but you had better have a _lot_ of memory
for all the polygons you generate.  Thrashing brings the fastest machines to
their knees.  So, I still think there is a place for ray-surface
intersections.

Of course, I guess which approach you take depends on the audience you're
playing to.

-------------------------------------------------------------------------------

Rendering Intersecting Glass Spheres, John Cristy (cristy@eplrx7.uucp)
Organization: DuPont Engineering Physics Lab

I am (still) looking for a renderer (raytracer, radiosity, scanline, etc.)
that can render two intersecting semi-transparent glass spheres and
realistically show the area of inter-penetration.  I have been looking for a
couple of months now and have not found a renderer that does this well (or at
all).  Suggestions of public domain or commercial renderers that solve this
problem are welcome.  Please send Email to cristy@dupont.com.  Thanks in
advance.

--------

Craig Kolb (craig@weedeater.uucp) replies:
Organization: Yale University Department of Mathematics

>To accurately model nested objects (for example, if your two
>spheres had different indices of refraction), you also need to
>maintain a stack of refraction indices, since you can't assume that
>when you exit an object, you exit into `air'.

Rayshade does exactly this.  But there are still a couple of problems with
rendering intersecting transparent objects.  First, the renderer needs to keep
track of solid body color in order to achieve the proper "filtering" effect
of, say, white light passing through green glass and then blue glass.

The second and more fundamental problem is how to treat the volume
corresponding to the intersection of the two solids.  Given that solids A and
B each have a set of properties (solid body color, index of refraction, etc.),
what properties should be used in rendering the volume (A ^ B)?

Doing The Right Thing means resorting to CSG techniques so that one can
specify the properties of (A ^ B) as well as those of A and B.
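The stack of refraction indices quoted above might look something like this
toy version (my own structure; Rayshade's actual bookkeeping surely differs):

```c
#define MAXDEPTH 32

/* stack of the media the current ray is nested inside; the bottom
 * entry is the outermost medium (air, index 1.0) */
typedef struct { double ior[MAXDEPTH]; int top; } MediumStack;

void medium_init(MediumStack *s)
{
    s->ior[0] = 1.0;    /* start in air */
    s->top = 0;
}

void medium_enter(MediumStack *s, double n)     /* ray enters a solid */
{
    if (s->top + 1 < MAXDEPTH)
        s->ior[++s->top] = n;
}

void medium_exit(MediumStack *s)                /* ray leaves a solid */
{
    if (s->top > 0)
        s->top--;
}

double medium_current(const MediumStack *s)
{
    return s->ior[s->top];
}
```

Entering glass while submerged in water pushes 1.5; exiting the glass pops
back to the water's 1.33 instead of wrongly assuming air.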

-------------------------------------------------------------------------------

DKBPC Raytracer, Tom Friedel (tpf@jdyx.UUCP)
Organization: JDyx Enterprises (Atlanta GA)

The guys on Compuserve seem to have endorsed a (new) ray tracer
called DKBPC, which is available as source.  It appears to
support CSG and textures from what little I've seen.  Has anyone
evaluated it (i.e., compared it to rayshade, vort, mtv, etc.)?  Is it
archived anywhere?

-------------------------------------------------------------------------------

New release of IRIT solid modeller, Gershon Elber (gershon@cs.utah.edu)
Organization: University of Utah CS Dept

IRIT is a polygonal C.S.G. based solid modeller originally developed on and
for the IBM PC family of computers.  Version 1.x was released about a
year ago for the IBM PC only.  Since then, it has been ported to the unix
environment (SGI Irix 3.2 and BSD4.3) using X11, and all known bugs have been
fixed.

This is release 2.x of the solid modeller and its accompanying utilities which
include a data viewing program (poly3d), hidden line removal program
(poly3d-h) and simple renderer (poly3d-r).  Thanks to Keith Petersen, all the
sources (Ansi C) for these programs (and executables for the IBM PC) are
available on simtel20.arpa, directory PD1:<MSDOS.IRIT> :

IRIT.ZIP        Full CSG solid modeller, arbitrary orientation
IRITS.ZIP       Turbo C ver 2.0 sources for IRIT
IRITLIBS.ZIP    Libraries for IRIT sources
POLY3D.ZIP      Display 3D polygonal objects, part of IRIT
POLY3DS.ZIP     Turbo C ver 2.0 sources for POLY3D
POLY3D-H.ZIP    Create hidden line removed pict., part of IRIT
POLY3DHS.ZIP    Turbo C ver 2.0 sources for POLY3D-H
POLY3D-R.ZIP    Render poly data into GIF images, part of IRIT
POLY3DRS.ZIP    Turbo C ver 2.0 sources for POLY3D-R
DRAWFN3D.ZIP    Display 3D parametric surfaces, part of IRIT
DRAWFN3S.ZIP    Turbo C ver 2.0 sources for DRAWFN3D

All the above sources are for the unix system as well, except DRAW*.ZIP,
which has not been ported (MSDOS only).  In order to unpack ZIP archives in a
unix environment you will need to ftp the file UNZIP30.TAR-Z from directory
PD3:<MISC.UNIX>.

[list of changes deleted - EAH]

Elber Gershon                            gershon@cs.utah.edu
918 University village
Salt Lake City 84108-1180                Tel: (801) 582-1807 (Home)
Utah                                     Tel: (801) 581-7888 (Work)

-------------------------------------------------------------------------------

Easier Use of Ray Tracers, Philip Colmer, Mark VandeWettering, Jack Ritter


Philip Colmer (pcolmer@acorn.co.uk) writes:

As someone who has used QRT and MTV, and is about to try RayShade, could I
make a couple of suggestions to the authors of these packages, and any other
budding ray tracer programmers.

---------

Mark VandeWettering (markv@gauss.Princeton.EDU) answers Philip's points:

In article <...> pcolmer@acorn.co.uk (Philip Colmer) writes:

>1. Please try and provide an option to produce a quick and dirty outline
>   image. This would not do any reflections, shadows or any of the other
>   time consuming elements of ray tracing. Instead, it would just show
>   where the objects are. This would allow the basic picture to be checked
>   for accuracy. Not everyone can cope with visualizing a 3D world!

	Yeah, this should be configurable from within the data file,
	or via command line options.  Things like raydepth and stuff are
	not run-time configurable on MTV.

>2. MTV has a very nice colour database (pinched from X11). How about a
>   similar database for materials, ie just what ARE the parameters that
>   should be given for metal, glass and so on?

	Well, colors are a little easier than things like metals.  We should
	actually shift from an RGB representation of color to a more
	realistic wavelength model, and then convert.  Somewhere I have a list
	compiled by Andrew Glassner of reflection curves for a number of
	materials.   Perhaps these will work their way into Son of MTV.

>3. How about a fixed point integer system? This would make ray tracers go
>   one hell of a lot faster, but I'm not sure if this is a viable option.

	Guess what folks, this probably won't help.  Mainly because modern
	machines do fp multiplies faster than integer multiplies.  Note that
	on a machine like the i860, a double precision multiply can be done
	every two cycles, while a 32 bit integer multiply takes between four
	and eleven.  Net result: you lose.
	Similar things happen with the MIPS R3000.

	Another big lose on most machines is using single precision fp in
	C.  It didn't speed things up one iota on any machine I tested, and
	it hurt the accuracy.

--------

Jack Ritter (ritter@versatc.versatec.COM)
Organization: Versatec, Santa Clara, Ca. 95051
Summary: speed up tricks for approx ray tracer

In article <2413@acorn.co.uk> pcolmer@acorn.co.uk (Philip Colmer) writes:

>1. Please try and provide an option to produce a quick and dirty outline
>   image.

I have an option to scale the scene into an arbitrary NxM pixel area.  For
initial renderings, where I just want to see the overall effect and make sure
objects don't penetrate each other, I have found that even a 30x30 pixel
rendering is revealing.  30x30 sure beats the hell out of 512x512, or whatever
these darn kids are using these days.

I also bound objects in screen space, which makes things very fast when you're
not doing reflections & refractions.  Some fairly complex scenes have taken
under a minute with all these tricks in use.  Screen space bounding is
described in the upcoming book "Graphics Gems", which will no doubt also
contain many other speed-up tricks that I will wish I had thought of.

>3. How about a fixed point integer system? This would make ray tracers go
>   one hell of a lot faster, but I'm not sure if this is a viable option.
>
> 	Guess what folks, this probably won't help.  Mainly because modern
> 	machines do fp multiplies faster than integer multiplies.

Yes, the latest processors have on-chip floating point, and are fast.
However, on the processors I have used:  Motorola 68000, 68010, 68020, I have
found that well thought-out fixed point code always beats the floating point
coprocessor, algorithm for algorithm.
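
For anyone who wants to run the experiment, here is a minimal 16.16
fixed-point sketch (the format and names are my own choice; whether it beats
the floating point hardware depends entirely on the CPU, as the exchange
above shows):

```c
#include <stdint.h>

/* 16.16 fixed point: 16 integer bits, 16 fractional bits. */
typedef int32_t fix;

#define FIX_ONE (1 << 16)

/* Conversions to and from fixed point. */
static fix fix_from_int(int i)       { return (fix)i << 16; }
static fix fix_from_double(double d) { return (fix)(d * FIX_ONE); }
static double fix_to_double(fix f)   { return (double)f / FIX_ONE; }

/* Multiply: widen to 64 bits, then shift back down.  On a 68000 one
   would instead keep operands small and use the 32-bit result of a
   16x16 MULS. */
static fix fix_mul(fix a, fix b)
{
    return (fix)(((int64_t)a * b) >> 16);
}

/* Divide: widen the numerator before the shift. */
static fix fix_div(fix a, fix b)
{
    return (fix)(((int64_t)a << 16) / b);
}
```

Note the limited range (about +-32767) and precision (1/65536); ray tracing
in fixed point also forces careful scene scaling to avoid overflow in the
intermediate products.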

-------------------------------------------------------------------------------

Raytracer Glass, F. Ken Musgrave (musgrave-forest@CS.YALE.EDU)
Organization: Yale University Computer Science Dept, New Haven CT  06520-2158

In article <...> pwh@bradley.UUCP writes:
>
>What are the spectral properties of glass
>that I could use in a raytracing program?
>
>I've a friend who's been working on the problem
>for a while now, and it's given some interesting results,
>but nothing that actually looks like glass....

  Glass is not so easy to do - I got a Master's degree for doing it!

  Three things are necessary:  (1) The proper index of refraction (1.5-1.9).
(2) The proper reflection function - Fresnel's Law.  (3) Dispersion.  Also,
you should propagate rays spawned by total internal reflection - many ray
tracers quash such rays outright; this can lead to ugly artifacts in (glass)
objects with planar surfaces.

  The first two things can be standard features in a ray tracer, the third is
uncommon.  There are two published solutions (that I know of):

	Thomas, S. W., "Dispersive Refraction in Ray Tracing", Visual
	Computer, vol. 2, no. 1, pp 3-8, Springer Int'l, Jan. '86

	Musgrave, F. K., "Prisms and Rainbows: a Dispersion Model for
	Computer Graphics", Proceedings of the Graphics Interface '89,
	London, Canada, June '89

  Neither of these references is easy to get.  Perhaps UC Santa Cruz would
provide a copy of my thesis:

	Musgrave, F. K., "A Realistic Model of Refraction for Computer
	Graphics", Master's Thesis, UC Santa Cruz, Santa Cruz CA, Sept. '87

  As an alternative, I will put the troff sources for my GI paper where you
can get them via anonymous ftp on weedeater.math.yale.edu - but you won't get
any of the nice illustrations.

  At any rate, getting dispersion into a ray tracer requires some hacking, and
will in general slow down the rendering a *lot*.  Thomas & I used quite
different approaches; his would probably be faster for scenes without much
dispersion, and vice-versa.

  A future version of Craig Kolb's RayShade may feature dispersion...  (I'm
not at liberty to distribute my ray tracer with dispersion.)

--------

Michael A. Kelly (mkelly@comix.cs.uoregon.edu) replies:
[Organization: Department of Computer Science, University of Oregon]

In article <8600001@bradley> pwh@bradley.UUCP writes:
>
>What are the spectral properties of glass
>that I could use in a raytracing program?

Try "Color Science" by Wyszecki & Stiles (1982).  I don't have the book with
me but I'm pretty sure it has the information you need.

-------------------------------------------------------------------------------

Ray Intersection with Grid, Rick Speer (speer@boulder.Colorado.EDU)
Organization: University of Colorado, Boulder

In article 12598 of comp.graphics, aiadrmi@castle.ed.ac.uk
(Alasdair Donald Robert McIntyre) writes:

>  I am trying to raytrace rippling water and need to solve the following
>  problem:
>
>           Given a surface defined by heights on a square grid, find the
>           closest intersection of a ray with the surface thus defined.
>
>  I wonder if anyone knows of an efficient method to do this?
>
>  Replies by mail, or to the net.
>  Thanks in advance


You might check the following-

1. "Shaded Display of Digital Maps", by S. Coquillart and M. Gangnet
    in IEEE Computer Graphics and Applications V. 4 No. 7 (July 1984),
    p. 35-42.

2. "Vectorized Procedural Models for Natural Terrain: Waves and
    Islands in the Sunset", by N. Max in Computer Graphics V. 15 No. 3
    (Proceedings of SIGGRAPH '81), p. 317-324.

These should give you some good ideas.

[My own two cents:  Also look at "The Synthesis and Rendering of Eroded Fractal
Terrains" by F. Kenton Musgrave, Craig E. Kolb, and Robert S. Mace, SIGGRAPH
'89.  Towards the end they describe their method for ray tracing height fields.
- EAH]

-------------------------------------------------------------------------------

Database for DBW-Render, by Prem Subrahmanyam (prem@geomag.fsu.edu)
Organization: Florida State University Computing Center

Ok, here is a description file for a trio of balloons over reflective water
with fractal mountains in the background.  It should be pretty interesting.


& 0 400
R 24.0
a .5
b .8 .4 .4
e 0 10 100  0 -5 -200  0 1 0
w  0 0 -200  7 .1 1 0.00
w 0 0 0  5 .2 1 0.00
w 0 0 200  20 .4 1 0.50
w 200 0 0  2  .1  1  1.00
w -200 0 0  10 .15  1 1.00
w 50 0 0  6 .2 1 0.00
w 30 4 60  15 .3 1 .75
l  1 1 1  2 10 5
g .5 0 .8  15 15
f 4  0.1 0.5 0.7  3
f 4  .7 .6 .6  3
f 4  .5 .5 .7  3
{s 50 .2 0 1  0 0 0  .3 .3 .3  .6 .8 .2  0 29 0  10
{t 50 .2 0 1  0 0 0  .1 .1 .1  .6 .87 .2  -9 24.2 0  1.7 0 5.3  5.3 -10.2 1.2
 t 50 .2 0 1  0 0 0  .1 .1 .1  .6 .87 .2  -7.3 24.2 5.3  4.5 0 3.3  5 -10.2
-2
 t 50 .2 0 1  0 0 0  .1 .1 .1  .6 .87 .2  -2.8 24.2 8.6  5.6 0 0  2.8 -10.2
-4.7
 t 50 .2 0 1  0 0 0  .1 .1 .1  .6 .87 .2  2.8 24.2 8.6  4.5 0 -3.3  -.5 -10.2
-5.4
 t 50 .2 0 1  0 0 0  .1 .1 .1  .6 .87 .2  7.3 24.2 5.3  1.7 0 -5.3  -3.6
-10.2 -4.1
 t 50 .2 0 1  0 0 0  .1 .1 .1  .6 .87 .2  -2.3 14 3.2 -1.4 0 -2
-5 10.2 2
 t 50 .2 0 1  0 0 0  .1 .1 .1  .6 .87 .2  0 14 3.9 -2.3 0 -.7  -2.8 10.2 4.7
 t 50 .2 0 1  0 0 0  .1 .1 .1  .6 .87 .2  2.3 14 3.2  -2.3 0 .7  .5 10.2 5.4
 t 50 .2 0 1  0 0 0  .1 .1 .1  .6 .87 .2  3.7 14 1.2 -1.4 0 2  3.6 10.2
4.1}
{q 0 1 0 1  0 0 0  .1 .1 .1   .1 .1 .1  0 14 3.9  .4 0 0  0 -8 0
 q 0 1 0 1  0 0 0  .1 .1 .1   .1 .1 .1  3.7 14 1.2  .4 0 0  0 -8 0
 q 0 1 0 1  0 0 0  .1 .1 .1   .1 .1 .1  -3.7 14 1.2  .4 0 0  0 -8 0
 q 0 1 0 1  0 0 0  .1 .1 .1   .1 .1 .1  -.8 14 -3.9  .4 0 0  0 -8 0}
{q 3 .5 0 1  0 0 0  .1 .1 .1  .3 .4 .5  -3.7 6 1.2  3.7 0 2.7  0 -5 0
 q 3 .5 0 1  0 0 0  .1 .1 .1  .3 .4 .5  0 6 3.9  3.7 0 -2.7  0 -5 0
 q 3 .5 0 1  0 0 0  .1 .1 .1  .3 .4 .5  -3.7 6 1.2  2.9 0 -2.7  0 -5 0
 q 3 .5 0 1  0 0 0  .1 .1 .1  .3 .4 .5  -.8 6 -3.9  4.5 0 2.7  0 -5 0}}

 r 4 0 .55 1  0 0 0  0 0 0  .1 0 0   0 1 0   1 0 0  0 0 1  0  200

{s 70 .2 0 1  0 0 0  .3 .3 .3  .5 0 .2  20 30 -10  10
{t 70 .2 0 1  0 0 0  .1 .1 .1  .5 0 .2  11 25.2 -10  1.7 0 5.3  5.3 -10.2 1.2
 t 70 .2 0 1  0 0 0  .1 .1 .1  .5 0 .2  12.7 25.2 -4.7  4.5 0 3.3  5 -10.2
-2
 t 70 .2 0 1  0 0 0  .1 .1 .1  .5 0 .2  17.2 25.2 -1.4  5.6 0 0  2.8 -10.2
-4.7
 t 70 .2 0 1  0 0 0  .1 .1 .1  .5 0 .2  22.8 25.2 -1.4  4.5 0 -3.3  -.5 -10.2
-5.4
 t 70 .2 0 1  0 0 0  .1 .1 .1  .5 0 .2  27.3 25.2 -4.7  1.7 0 -5.3  -3.6
-10.2 -4.1
 t 70 .2 0 1  0 0 0  .1 .1 .1  .5 0 .2  17.7 15 -6.8 -1.4 0 -2
-5 10.2 2
 t 70 .2 0 1  0 0 0  .1 .1 .1  .5 0 .2  20 15 -6.1 -2.3 0 -.7  -2.8 10.2 4.7
 t 70 .2 0 1  0 0 0  .1 .1 .1  .5 0 .2  22.3 15 -6.8  -2.3 0 .7  .5 10.2 5.4
 t 70 .2 0 1  0 0 0  .1 .1 .1  .5 0 .2  23.7 15 -8.8 -1.4 0 2  3.6 10.2
4.1}
{q 0 1 0 1  0 0 0  .1 .1 .1   .1 .1 .1  20 15 -6.1  .4 0 0  0 -8 0
 q 0 1 0 1  0 0 0  .1 .1 .1   .1 .1 .1  23.6 15 -8.8  .4 0 0  0 -8 0
 q 0 1 0 1  0 0 0  .1 .1 .1   .1 .1 .1  16.3 15 -8.8  .45 0 0  0 -8 0
 q 0 1 0 1  0 0 0  .1 .1 .1   .1 .1 .1  19.2 15 -13.9  .4 0 0  0 -8 0}
{q 3 .5 0 1  0 0 0  .1 .1 .1  .3 .4 .5  16.3 7 -11.2  3.7 0 2.7  0 -5 0
 q 3 .5 0 1  0 0 0  .1 .1 .1  .3 .4 .5  20 7 -6.1  3.7 0 -2.7  0 -5 0
 q 3 .5 0 1  0 0 0  .1 .1 .1  .3 .4 .5  16.3 7 -8.8  2.9 0 -2.7  0 -5 0
 q 3 .5 0 1  0 0 0  .1 .1 .1  .3 .4 .5  19.2 7 -13.9  4.5 0 2.7  0 -5 0}}

{s 5 .2 0 1  0 0 0  .3 .3 .3   0 .5 .8  -30 40 -20  10
{t 5 .2 0 1  0 0 0  .1 .1 .1   0 .5 .8  -39 35.2 -20  1.7 0 5.3  5.3 -10.2 1.2
 t 5 .2 0 1  0 0 0  .1 .1 .1   0 .5 .8  -37.3 35.2 -14.7  4.5 0 3.3  5 -10.2
-2
 t 5 .2 0 1  0 0 0  .1 .1 .1   0 .5 .8  -32.8 35.2 -11.4  5.6 0 0  2.8 -10.2
-4.7
 t 5 .2 0 1  0 0 0  .1 .1 .1   0 .5 .8  -27.2 35.2 -11.4  4.5 0 -3.3  -.5 -10.2
-5.4
 t 5 .2 0 1  0 0 0  .1 .1 .1   0 .5 .8  -22.7 35.2 -14.7  1.7 0 -5.3  -3.6
-10.2 -4.1
 t 5 .2 0 1  0 0 0  .1 .1 .1   0 .5 .8  -32.2 25 -16.2 -1.4 0 -2
-5 10.2 2
 t 5 .2 0 1  0 0 0  .1 .1 .1   0 .5 .8  -30 25 -16.1 -2.3 0 -.7  -2.8 10.2 4.7

 t 5 .2 0 1  0 0 0  .1 .1 .1   0 .5 .8  -27.7 25 -16.8  -2.3 0 .7  .5 10.2 5.4
 t 5 .2 0 1  0 0 0  .1 .1 .1   0 .5 .8  -26.3 25 -18.8 -1.4 0 2  3.6 10.2
4.1}
{q 0 1 0 1  0 0 0  .1 .1 .1   .1 .1 .1  -30 25 -16.1  .4 0 0  0 -8 0
 q 0 1 0 1  0 0 0  .1 .1 .1   .1 .1 .1  -26.4 25 -18.8  .4 0 0  0 -8 0
 q 0 1 0 1  0 0 0  .1 .1 .1   .1 .1 .1  -33.7 25 -18.8  .4 0 0  0 -8 0
 q 0 1 0 1  0 0 0  .1 .1 .1   .1 .1 .1  -30.8 25 -23.9  .4 0 0  0 -8 0}
{q 3 .5 0 1  0 0 0  .1 .1 .1  .3 .4 .5  -33.7 17 -18.8  3.7 0 2.7  0 -5 0
 q 3 .5 0 1  0 0 0  .1 .1 .1  .3 .4 .5  -30 17 -16.1  3.7 0 -2.7  0 -5 0
 q 3 .5 0 1  0 0 0  .1 .1 .1  .3 .4 .5  -33.7 17 -18.8  2.9 0 -2.7  0 -5 0
 q 3 .5 0 1  0 0 0  .1 .1 .1  .3 .4 .5  -30.8 17 -23.9  4.5 0 2.7  0 -5 0}}
 x 60 0 0 1  0 0 0  .1 .1 .1  .4 .4 .4  -100 0 -170  0 30 -200  100 0 -170
 x 61 0 0 1  0 0 0  .1 .1 .1  .4 .4 .4  -50 0 -170  -150 50 -132  -180 0 -85
 x 62 0 0 1  0 0 0  .1 .1 .1  .4 .4 .4  50 0 -170  160 30 -132  180 0 -85
 k .8 0 .9  5 5 5  0 0 0

-------------------------------------------------------------------------------
END OF RTNEWS

From craig@weedeater.math.yale.edu Tue Oct  2 18:11:35 1990
Return-Path: <craig@weedeater.math.yale.edu>
Received: from turing.cs.rpi.edu by iear.arts.rpi.edu (3.2/HUB10);
	id AA00589; Tue, 2 Oct 90 18:10:39 EDT for kyriazis
Received: from weedeater.math.yale.edu by turing.cs.rpi.edu (4.0/1.2-RPI-CS-Dept)
	id AA05362; Tue, 2 Oct 90 18:10:19 EDT
Received: by weedeater.math.yale.edu; Tue, 2 Oct 90 17:00:51 EDT
Date: Tue, 2 Oct 90 17:00:51 EDT
From: Craig Kolb <craig@weedeater.math.yale.edu>
Message-Id: <9010022100.AA01798@weedeater.math.yale.edu>
To: mja@sierra.llnl.gov, arvo@apollo.hp.com, barr@csvax.cs.caltech.edu,
        barsky@miro.berkeley.edu, carl@mssun7.msi.cornell.edu,
        daniel@apollo.com, rgb@caen.engin.umich.edu, wim@duticg.tudelft.nl,
        raytrace@cpsc.ucalgary.ca, atc@cs.utexas.edu, markc@emx.utexas.edu,
        chapman%fornax.UUCP@CS.YALE.EDU, ckchee@dgp.toronto.edu,
        chet@cis.ohio-state.edu, m-cohen@cs.utah.edu,
        rt-colorado@anchor.colorado.edu,
        ray-tracing-news@wisdom.graphics.cornell.edu, cychosz@ecn.purdue.edu,
        raycasting@duke.cs.duke.edu, jaf@squid.graphics.cornell.edu,
        FISHER@3D.dec@decwrl.dec.com, flynn@cse.nd.edu, johnf@apollo.com,
        glassner.pa@xerox.com, rrg@acf8.nyu.edu, jeff@hamlet.caltech.edu,
        jakob@humus.huji.ac.il, sbgowing@ultima.cs.uts.oz.au,
        grant@delvalle.llnl.gov, green@compsci.bristol.ac.uk, paul@sgi.com,
        avatar!kory@quad.com, hanrahan@princeton.edu, ph@miro.berkeley.edu,
        hench@lea.csc.ncsu.edu, raytrace@hpgtdadm.fc.hp.com,
        hohmeyer@miro.berkeley.edu, dgh@munnari.oz.au,
        hultquis@prandtl.nas.nasa.gov, fwj@duticg.tudelft.nl, joy@ucdavis.edu,
        dana!mrk%hplabs.UUCP@CS.YALE.EDU, vedge!kardan@larry.mcrcim.mcgill.edu,
        tim@csvax.cs.caltech.edu, dk@csvax.cs.caltech.edu,
        hpfcse!hpurvmc!koz%hpfcla.UUCP@CS.YALE.EDU, megatek!kuchkuda@ucsd.edu,
        kyriazis@turing.cs.rpi.edu, esl0422@ultb.isc.rit.edu,
        zmel02@image.trc.amoco.com, mplevine@phoenix.princeton.edu
Subject: Ray Tracing News v3n4
Reply-To: craig@weedeater.math.yale.edu
Status: OR

 _ __                 ______                         _ __
' )  )                  /                           ' )  )
 /--' __.  __  ,     --/ __  __.  _. o ____  _,      /  / _  , , , _
/  \_(_/|_/ (_/_    (_/ / (_(_/|_(__<_/ / <_(_)_    /  (_</_(_(_/_/_)_
	     /                               /|
	    '                               |/

			"Light Makes Right"

			  October 1, 1990
			Volume 3, Number 4

Compiled by Eric Haines, 3D/Eye Inc, 2359 Triphammer Rd, Ithaca, NY 14850
    wrath.cs.cornell.edu!eye!erich or erich@eye.com
All contents are US copyright (c) 1990 by the individual authors
Archive locations: anonymous FTP at cs.uoregon.edu [128.223.4.13], /pub/RTNews,
		   weedeater.math.yale.edu [130.132.23.17], /pub/RTNews, and
		   freedom.graphics.cornell.edu [128.84.247.85], /pub/RTNews.
UUCP access: Vol 3, No 1 or write Kory Hamzeh (quad.com!avatar!kory) for info.

Contents:
    Introduction
    New People, Address Changes, etc
    Photorealism, and the color Pink (TM), from Andrew Glassner
    DKBTrace v2.0 and Ray Tracing BBS Announcement, by David Buck, Aaron Collins
    Article Summaries from Eurographics Workshop, by Pete Shirley
    Convex Polyhedron Intersection via Kay & Kajiya, by Eric Haines
    New Radiosity Bibliography Available, by Eric Haines
    A Suggestion for Speeding Up Shadow Testing Using Voxels, by Andrew Pearce
    Real3d, passed on by Juhana Kouhia, "Zap" Andersson
    Utah Raster Toolkit Patch, by Spencer Thomas
    NFF Shell Database, by Thierry Leconte
    FTP list update and New Software, by Eric Haines, George Kyriazis
    ======== USENET cullings follow ========
    Humorous Anecdotes, by J. Eric Townsend, Greg Richter, Michael McCool,
	Eric Haines
    Graphics Interface '91 CFP
    Parametric Surface Reference, by Spencer Thomas
    Solid Light Sources Reference, by Steve Hollasch, Philip Schneider
    Graphics Gems Source Code Available, by Andrew Glassner, David Hook
    Graphics Gems Volume 2 CFP, by Sari Kalin
    Foley, Van Dam, Feiner and Hughes "Computer Graphics" Bug Reports,
	by Steve Feiner
    Radiosity via Ray Tracing, by Pete Shirley
    Algorithm Order Discussion, by Masataka Ohta, Pete Shirley
    Point in Polygon, One More Time..., by Mark Vandewettering, Eric Haines,
	Edward John Kalenda, Richard Parent, Sam Uselton, "Zap" Andersson,
	and Bruce Holloway
    Quadrant Counting Polygon In/Out Testing, by Craig Kolb, Ken McElvain
    Computer Graphics Journals, by Juhana Kouhia

-------------------------------------------------------------------------------

Introduction

Well, SIGGRAPH has come and gone, and as usual I ran out of time.  There's
just too many people to meet and too much to see and do.  Highlights for me
included the SIGGRAPH Trivia Bowl, meeting in a cemetery for lunch with other
renderers after an all night session of Hurtenflurst (don't ask), and getting
my head turned into data at Failure Analysis Associates (still haven't heard
from them, though.  Does anyone have the phone number or address for this
company?  I'd like to call and find out what is happening with them).  At
SIGBowl our Ithaca team was able to lock in third place [of 3 teams] early on
and held it against all comers.  Nonetheless, a great time:  one of many
favorite moments was:

	Pike (the moderator): "Whose biographical sketch reads as follows-"

		BUZZ!

	Team 1 (with no other information): "Turner Whitted!?"

	Pike: "No, wrong.  Now let me read the question" and proceeds to read
		some biographical data.

		BUZZ!

	Team 2: "Is it Turner Whitted!?"

	Pike: "Still wrong."

Team 3, sadly, did not guess "Turner Whitted?", but, rather, declined to answer.

--------

Does anyone have a nice, compact version of the Teapot?  That is, just the
spline description (which I do have) and a tessellator which will turn it into
quads/triangles (which I have, but it's deeply buried in some scary code)?
This would be a nice addition to SPD 3.0, but I don't have one easily
accessible.  Here's your chance to get your code distributed in the next
release.  I'm willing to massage your code into outputting NFF format.

I'll also announce that there is a new version of the ray tracing bibliography
which I've finished as of today.  Many of the new articles listed are due to
the untiring work of Tom Wilson, who says the librarians now hate him.  This
new version should be available via FTP by October 5th on
freedom.graphics.cornell.edu and cs.uoregon.edu (see the FTP site list later
in this issue for more information).

Important dates:
	Graphics Interface papers due October 31st
	Graphics Gems, Volume 2 submissions due November 1st

-------------------------------------------------------------------------------

New People, Address Changes, etc


# Tom Wilson - parallel ray tracing, spatial subdivision
# Center for Parallel Computation
# Department of Computer Science (407-275-2341)
# University of Central Florida
# P.O. Box 25000
# Orlando, FL 32816-0362
alias	tom_wilson	wilson@ucf-cs.ucf.edu

I am a Ph.D. student at UCF.  I am working in the area of parallel processing
with ray tracing as an application.  I am working on a nonuniform spatial
subdivision scheme that will eventually be implemented on our BBN Butterfly
GP1000.

--------

# Steve Hollasch - efficiency, algorithms, ray-tracing
# Master's Student in C.S., Arizona State University
# 1437 West 6th Street
# Tempe, AZ  85281-3205
# (602) 829-8758
alias   steve_hollasch   hollasch@enuxha.eas.asu.edu

    I'm working on completing my thesis in December 1990 or January 1991.  The
research basically covers viewing 4D objects (as physical entities, not 3D
scalar fields).  I'm working on a 4D raytracer right now that handles
hyperspheres and tetrahedrons, and I hope to extend it to hyperellipsoids,
hypercones, hypercylinders and so on.  I'd also like to learn how to tessellate
4D solids with tetrahedrons to be able to view arbitrary 4D solids.  Practical
uses?  I'm still trying to think of something...  =^)

--------

# Christophe Vedel - improving radiosity
# LIENS, Ecole Normale Superieure
# 45, rue d'Ulm
# 75230 PARIS Cedex 05 FRANCE
# (33) 1 43 26 59 74
e-mail  vedel@ens.ens.fr
	vedel@FRULM63.BITNET

I'm interested in improving the radiosity method for finer results or larger
scenes.  Spatial subdivision techniques developed for ray tracing should help
to achieve this goal.  I'm also working on interactivity with lighting
simulation programs.

--------

# Frederic Asensio - behavioral animation, radiosity
# LIENS, Ecole Normale Superieure
# 45, rue d'Ulm
# 75230 PARIS Cedex 05 FRANCE
# (33) 1 43 26 59 74
e-mail  asensio@ens.ens.fr
	asensio@FRULM63.BITNET

I am interested in interactive radiosity methods, which could be used in the
design process and in stochastic methods for light sampling with rays.

--------

# Scott Senften - efficiency, scientific visualization
# McDonnell Douglas Corp.
# McDonnell Aircraft Co.
# Mail Code 1065485
# St. Louis, MO 63166
# (314)232-1604
alias scott_senften	  senften%d314vx.decnet@mdc.com
alias senften_tu     senften@tusun2.utulsa.edu

Presently I'm doing research in neural nets.  There is a real need in the NN
community for some good scientific visualization.  I'm presently doing work in
speech recognition, diagnostics, flight reconfiguration, and signal
processing, as well as graphic visualization of neural networks.  My graduate
work was in Ray Tracing efficiency.

--------

Sam Uselton

Internet:
uselton@nas.nasa.gov

    Work			 Home
Phone:
(415) 604-3985		       (415) 651-6504

USMail:
Samuel P. Uselton, PhD	       Sam Uselton
Computer Sciences Corp.
M/S T 045-1		       1216 Austin St.
NASA Ames Research Center      Fremont, CA 94539
Moffett Field, CA 94035


Ray Tracing interests:
efficiency, accuracy, distributed ray tracing, parallel implementations


At NASA I'm working on volume visualization for computational fluid dynamics.
I've used this as an excuse ( :-) ) to write a ray caster (no bounces) that
integrates through the volume, accelerated by taking advantage of the special
grids, as well as an item buffer-like initial hit finder.

I'm also working with Redner & Lee on extending our distributed ray tracing
work to handle diffuse illumination, caustics, wavelength dependent
refraction, and other realistic effects.

Once these problems are completely solved ( :-) :-) ), there are several
parallel machines here that I would like to wring out.

Oh yes, we (Lee, Steve Stepoway, Scott Senften and myself) have yet another
Acceleration technique we like better than any of the popular ones.  (Actually
it can combine with bounding boxes.)  It is somewhere between regular space
subdivision (a la medical volume folks and Japanese SEADS..)  and the bounding
slabs idea (Kaplan?  [no, Kay]).  It was written and submitted for pub once,
but we keep thinking of new tweaks, so it is being re-written.

--------

# Juhana Kouhia
# Ylioppilaankatu 10 C 15
# 33720 Tampere
# Finland
alias	juhana_kouhia	jk87377@tut.fi

I have done my wireframe, scanline and ray tracing programs with Jim Blinn's
modeling language.

[Another renderer with a last name starting with K!  There's Kajiya, Kaplan,
Kay, Kirk, Kolb, Kuchkuda, Kyriazis, just to mention a few. --EAH]

--------

# Michael L. Kaufman  - speed, ray tracing on a PC, ray tracing fractal images
# Northwestern University
# 2247 Ridge Ave #2k
# Evanston IL, 60201
# (708) 864- 7916
# kaufman@eecs.nwu.edu

I am just getting started in Ray-Tracing.  I am interested in coming up with
algorithms that are fast enough to do on a 386 PC.  I am also interested in
using Ray-Tracing techniques as a new way of looking at certain types of
fractal images.

--------

Stephen Boyd Gowing - light transmission, motion, art
B.App.Sc (Computing) Undergraduate.
University of Technology, Sydney

4/10 Short St, Glebe 2037 AUSTRALIA (home)

alias	stephen_gowing	sbgowing@ultima.cs.uts.oz.au

I'm currently halfway through a project to study the effect of light passing
through sphere(s).  The eventual aim is to produce a short animation showing a
single sphere inverting an image which breaks (unmerges?)  into two spheres
one behind the other and study the "flow" of the image through them.  At the
moment I am still rewriting the tracer I used for my 2nd-last graphics
assignment last year.  I'd like to do it in C++ but this would make things
awkward as we don't have access to a compiler.  I'd also like to implement
some sort of shading language into the image description - probably using
something like forth.

--------

# Rick Brownrigg  --  efficiency, texturing, visualization, DTM displays
# Kansas Geological Survey
# University of Kansas
# 1930 Constant Avenue
# Lawrence, KS  USA  66046
# 913 864 4991

-------------------------------------------------------------------------------

Mail, from Andrew Woo, George Kyriazis, Zap Andersson


from Andrew Woo:

I have gone through many changes in the last year, from graduating with a
M.Sc. at University of Toronto, to working at SAS Institute, to working at
Alias Research - which is my current working address.  My e-mail at Alias is
awoo@alias.UUCP.

I checked out your latest RT News.  I think David Jevans was a little too hard
on you about not knowing about the ray id for voxel traversal.  In fact, in GI
90, one of the authors was unaware of it, too (in the "Approximate Ray
Tracing" paper).  So you are not alone.  In fact, there is an explicit
mention of the ray id in an early paper that David did not cite:

	John Amanatides, "A Fast Voxel Traversal Algorithm for Ray Tracing",
	EuroGraphics, 1987, pp. 1-10.
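
The ray id trick in question, often called mailboxing, is easy to sketch:
each object remembers the id of the last ray tested against it, so an object
straddling several voxels is intersected at most once per ray.  The structures
below are illustrative, not from any particular tracer:

```c
/* Mailboxing: an object shared by several voxels is intersected
   at most once per ray. */
typedef struct {
    long last_ray_id;    /* id of the last ray tested against us */
    /* ... geometry would live here ... */
} Object;

static long tests_done = 0;   /* instrumentation only */

/* Returns 1 if the (stubbed) intersection test actually ran,
   0 if the mailbox let us skip it. */
static int intersect_once(Object *obj, long ray_id)
{
    if (obj->last_ray_id == ray_id)
        return 0;                 /* already tested by this ray */
    obj->last_ray_id = ray_id;
    tests_done++;                 /* the real ray/object test goes here */
    return 1;
}
```

The only costs are one long per object and a unique id per ray (a global
counter bumped per ray works); the saving grows with the number of voxels a
large object overlaps.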

On the claim that David Jevans made about wasting time on more intersection
culling schemes, I tend to agree.  In fact, I have stopped looking for more
clever ways to cull intersections.  But I intend to check out another route to
ray tracing savings.  Once I get some promising results, I will share the idea
with you (to avoid any embarrassments in case the idea backfires).

On renaming DIstributed Ray Tracing, I am more of a fan of coming up with
amusing acronyms for the algorithms.  For Cook's, I thought DiRT might be
interesting.  I also came up with an acronym for my paper in GI 90 - Voxel
Occlusion Method as an Intersection-culling Technique, which spells VOMIT.

--------

from George Kyriazis:

I was reading the last RTN issue I received.  It mentions a program that
converts Sculpt4D to NFF, and comments that Sculpt4D is an exceptional
modeler.  Well, I have my doubts.  Last spring I was working on a program to
ray trace Sculpt4D files and animations on a Pixel Machine.  Well, the file
format is far from straightforward.  We had to call Byte by Byte with a few
questions and stayed on the phone for more than an hour.  One of Sculpt4D's
unwanted 'features' was that it normalized the intensity of all displayed
pixels wrt the brightest.  The result on the Pixel Machine was disastrous!
Images were either overexposed or totally black!  :-) Well, this is just one
of the problems we faced, but I don't want to list more...

-------------------------------------------------------------------------------

Photorealism, and the color Pink (TM), from Andrew Glassner

From a Tektronix ad in this month's Advanced Imaging, we learn that it is now
possible to turn real photographs into photorealistic images (presumably
thereby increasing realism):

"Traditional workstations can't handle reality.  They create a simulated
world...  With Dynamic Visualization, reality and simulation are brought
together by TekImaging (TM), powerful image-processing software...  [and now,
the big one -ag] TekImaging lets you transform real-world images into
photo-realistic scenes, complete with light, shadow, and texture."

So now I can take a picture of myself, and using their software make it look
photo-realistic and indistinguishable from a real picture - what a
breakthrough!

--------

Beware of the colors you pick for your images.  Here's a footnote in an
Owens-Corning Fiberglas ad in this month's [August?] Popular Science (pg 21):

"* The color Pink is a trademark of Owens-Corning Fiberglas Corp."

I think I'll get a trademark on red before all the good colors are taken...

-------------------------------------------------------------------------------

DKBTrace v2.0 and Ray Tracing BBS Announcement, by David Buck, Aaron Collins

I recently completed version 2.0 of my freely distributable raytracer called
DKBTrace.  This raytracer is a project I've been working on (actually, off and
on) for the past five years.  The raytracer features:

   - primitives for spheres, planes, triangles, and quadrics
   - constructive solid geometry
   - solid texturing using a Perlin-style noise function
   - image mapping (IFF and GIF formats)
   - hierarchical bounding boxes
   - transparency
   - fog
   - 24 bit output files
   - adaptive anti-aliasing using jittered supersampling
   - "quick" previewing options

The raytracer was developed on an Amiga computer, but it is easily portable to
other platforms.  The interface to the raytracer is a scripting
language (sorry, no GUI).  The language is quite readable compared to many
other input file formats.

The source and Amiga executables are available by anonymous FTP from
alfred.ccs.carleton.ca [134.117.1.1].  The source and executables are freely
distributable.

As with any raytracer, there are still things that need to be done, added, or
improved with DKBTrace.  The image mapping was added at the last minute, so it
only uses a simple projection scheme.  This means that the mappings look
bizarre when viewed from the sides or the back.  For the most part, however,
the solid textures are quite powerful and can create some very interesting
effects.

To implement transparency, I decided to add the transparency value to
the color structure.  A color contains red, green, blue, and alpha components.
This makes it much easier to use in textures.

The textures also allow you to define your own color map and to have the
colors automatically interpolated from one range to the next.  Interpolating
the colours makes the images look much better than just a simple step
function.
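
The interpolated color map idea can be sketched generically as a sorted table
of (position, color) entries with linear blending between neighbors; this
illustration uses invented names, not DKBTrace's actual structures:

```c
/* A tiny interpolated color map: entries sorted by position in [0,1]. */
typedef struct { double pos, r, g, b; } MapEntry;

/* Look up t in the map, linearly blending between the two entries
   that bracket it; clamp outside the table's range. */
static void map_lookup(const MapEntry *map, int n, double t,
                       double *r, double *g, double *b)
{
    int i;
    if (t <= map[0].pos)   { *r = map[0].r;   *g = map[0].g;   *b = map[0].b;   return; }
    if (t >= map[n-1].pos) { *r = map[n-1].r; *g = map[n-1].g; *b = map[n-1].b; return; }
    for (i = 0; i < n - 1; i++) {
        if (t <= map[i+1].pos) {
            double f = (t - map[i].pos) / (map[i+1].pos - map[i].pos);
            *r = map[i].r + f * (map[i+1].r - map[i].r);
            *g = map[i].g + f * (map[i+1].g - map[i].g);
            *b = map[i].b + f * (map[i+1].b - map[i].b);
            return;
        }
    }
}
```

Feeding the map with, say, a noise value per point is what turns a step
function of bands into the smooth marbled textures described above.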

Questions and comments concerning the raytracer can be directed to
David_Buck@Carleton.CA on BITNET.  The IBM port (and several enhancements)
were done by Aaron Collins.  Aaron can be reached on Compuserve.

--------

DKBtrace dox info:

Aaron Collins can be reached on the following BBS'es

Lattice BBS		      (708) 916-1200
The Information Exchange BBS  (708) 945-5575
Stillwaters BBS		      (708) 403-2826

AAC:  As of July of 1990, there will be a Ray-Trace specific BBS in the (708)
Area Code (Chicago suburbia) for all you Traceaholics out there.  The phone
number of this new BBS is (708) 358-5611.  I will be Co-Sysop of that board.
There is also a new Ray-Trace and Computer-Generated Art specific SIG on
Compuserve, GO COMART.  And now, back to the DOCS...

-------------------------------------------------------------------------------

Article Summaries from Eurographics Workshop, by Pete Shirley
	(shirley@iuvax.cs.indiana.edu)

[What follows are summaries of ray tracing related articles given during the
Eurographics Workshop on Photosimulation, Realism and Physics in Computer
Graphics in Rennes, France, June 1990. - EAH]

INCREMENTAL RAY TRACING by Murakami and Hirota (Fujitsu)
An extension of parameterized ray tracing that allows moving some scene
objects in addition to changing material properties.  Uses tables of
changed voxels to determine whether a ray's interaction with the geometry
has changed.  Includes an implementation on multiple CPUs.  Also includes a
reference to a 1983 paper in Japanese by Matsumoto on octree ray tracing!

PARAMETRIC SURFACES AND RAY TRACING by Luc Biard (IMAG, France)
Like most parametric patch papers, this one went over my head, but
it seems to be an interesting paper, and includes some implementation results.

A PROGRESSIVE RAY TRACING BASED RADIOSITY WITH GENERAL REFLECTANCE FUNCTIONS
by Le Saec and Schlick (France)
A proposal for general BRDF radiosity (very similar to what I propose in the
paper above).  Also includes method for interactive display - display only
non-mirrors until viewer stops, then ray trace (UNC style).

LIGHT SOURCES IN A RAY TRACING ENVIRONMENT by Roelens et al. (France)
A method for showing the 'cone of light' effect when there is thin (not very
dense) participating media in a room (primary scattering only).

METHODS FOR EFFICIENT SAMPLING OF ARBITRARILY DISTRIBUTED VOLUME DENSITIES
by Hass and Sakas (FRG)
Methods of sampling a volume density along a ray.

-------------------------------------------------------------------------------

Convex Polyhedron Intersection via Kay & Kajiya, by Eric Haines

I like Kay and Kajiya's bounding slabs method [SIGGRAPH 86] a lot.  By
thinking about how this algorithm actually works, we can derive a method for
intersecting convex polyhedra.  A working definition of a convex polyhedron is
a polyhedron which can have at most two intersection points for any ray.

To review:
Their idea is to form a bounding volume around an object by making slabs.  A
slab is the volume between two parallel planes.  The intersection of all slabs
is the bounding volume (i.e. the volume which is inside all slabs).  The
intersection method works by intersecting each slab and checking out
intersection conditions for it alone, then comparing these results to the
answer for previous slabs.  That is, first we get the near and far
intersection points for the ray with the slab.  If the far hit is behind the
ray's origin (i.e. negative distance) or the near hit is beyond the maximum
distance (i.e. you might limit how far the ray can travel by keeping track of
the closest object hit so far, or the distance to the light if this is a
shadow ray), then the ray cannot hit the bounding volume.

If the near and far hits are within bounds, then we essentially do a set
operation of this line segment and any previous slab line segments formed.
For example, if this is the second slab tested, we try to find the overlap of
the first slab's hits and this slab's hits.  If the two segments do not
overlap, then there is no point along the ray that is inside both slabs.  In
other words, there is no point inside the bounding volume, since this volume
is the logical intersection of all slabs.  To get the logical intersection of
the slab segments, we merely store the largest of the near hits and the
smallest of the far hits.  If this largest near hit is greater than this
smallest far hit, there was no overlap between segments, so the bounding
volume is missed.  If near is less than far, this new high-near, low-far
segment is used against successive slab segments.  If the ray survives all
segment tests, then the resulting near and far values are the distance along
the ray of the near and far hits on the bounding volume itself.
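The segment bookkeeping above can be sketched in C as follows.  Types and
names are illustrative only; also, this sketch leans on IEEE infinities to
handle a ray parallel to a slab (the divisions produce -inf/+inf, which fall
out correctly in the comparisons) rather than testing that case explicitly.

```c
/* One slab: all points P with lo <= (P dot normal) <= hi. */
typedef struct { double nx, ny, nz, lo, hi; } Slab;

/* Intersect a ray with a set of slabs, keeping the running overlap
 * [*tnear, *tfar] of all the slab segments.  Returns 1 on hit, 0 on
 * miss.  The caller initializes *tnear/*tfar to the ray's own segment
 * (e.g. 0 and the maximum ray distance). */
static int hit_slabs(const Slab *s, int n,
                     double ox, double oy, double oz,
                     double dx, double dy, double dz,
                     double *tnear, double *tfar)
{
    int i;
    for (i = 0; i < n; i++) {
        double dp = s[i].nx * dx + s[i].ny * dy + s[i].nz * dz;
        double d0 = s[i].nx * ox + s[i].ny * oy + s[i].nz * oz;
        double t1 = (s[i].lo - d0) / dp;    /* hit on one plane      */
        double t2 = (s[i].hi - d0) / dp;    /* hit on the other      */
        if (t1 > t2) { double tmp = t1; t1 = t2; t2 = tmp; }
        if (t1 > *tnear) *tnear = t1;       /* largest near hit      */
        if (t2 < *tfar)  *tfar  = t2;       /* smallest far hit      */
        if (*tnear > *tfar) return 0;       /* segments don't overlap */
    }
    return 1;
}
```

Three axis-aligned slabs give the familiar ray/box test as a special case.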

What's interesting about this test is that it is essentially doing
Constructive Solid Geometry operations between slabs, similar to [Roth 82].
This simple idea can be extended further, allowing you to quickly intersect
convex polyhedra.  Another good feature is that the polyhedra can be defined
entirely by the plane equations for the faces, so no vertices have to be
stored.

Imagine that you have defined a convex polyhedron by specifying each face
using a plane equation, Ax + By + Cz + D = 0.  Have each face's normal {A,B,C}
point outwards.

To intersect this object with a ray, simply test the ray against each face:
if you hit a plane that faces towards the ray (i.e. the dot product of the
normal and the ray direction is negative), then use this as a hit for near;
if you hit a plane facing away, then use this as a hit for far.   Now, instead
of having a line segment for each slab, you have an unbounded ray for each
plane.  For example, if you have a hit for a near distance (i.e. you hit a
forward facing plane), then you would first check if this near is beyond your
current maximum distance.  If not, then you would store the maximum of this
near value and the previous (if any) near value.  If at this point the stored
near value is greater than the stored far value, the ray has missed.  This
process continues with each plane until you miss or you run out of planes.
The near and far points are then your intersection points with the polyhedron.

Essentially, this process is an extension of Kay & Kajiya: the ray is tested
against each face and the intersection point is used to form a segment which
is unbounded (infinite) on one end.  This segment is tested for validity
against the ray's segment and the polyhedron's current intersection segment.
If the polyhedron's segment is valid at the end of the process, then the ray
hit the polyhedron.  Basically, each face of the polyhedron defines a
half-space in which the inside of the polyhedron must lie.  The intersection of
these half-spaces is the polyhedron itself.

One minor problem is what happens when the ray is parallel to the plane.  You
could use Kay & Kajiya's suggestion of using some epsilon for the divisor,
but I prefer handling this special case separately.  If parallel, then we
want to check if the ray origin is inside the half-space:  if it is not, then
the ray cannot ever hit the polyhedron (since the ray does not cross into the
volume of valid intersections formed by this half-space).  In other words, if
we substitute the origin into the plane equation, the result of the expression
must be negative for the point to be inside the half-space; if not, the ray
must miss the polyhedron.

The pseudo-code:

    Maxdist = maximum distance allowed along ray
    Tnear = -MAXFLOAT, Tfar = MAXFLOAT
    For each plane in the polyhedron:
	/* compute intersection distance T and sidedness */
	dp = ray.direction DOT plane.normal
	do = (ray.origin DOT plane.normal) + plane.D
	If ( dp == 0 )
	    /* ray is parallel to plane - check if ray origin is inside plane's
	       half-space */
	    If ( do > 0 )
		/* ray is outside half-space */
		return MISSED
	Else
	    /* ray not parallel */
	    T = -do / dp
	    If ( dp < 0 )
		/* front face - T is a near point */
		If ( T > Maxdist ) return MISSED
		If ( T > Tnear )
		    Tnear = T
		    Normal = plane.normal
	    Else
		/* back face - T is a far point */
		If ( T < 0.0 ) return MISSED
		If ( T < Tfar ) Tfar = T
	    If ( Tnear > Tfar ) return MISSED
    endfor
    return HIT

At the end, Tnear is the intersection distance and Normal is the surface
normal.  Tfar is the exit distance, if needed.

That's it:  instead of laborious inside/outside testing of the polygon on each
face (and storing all those vertices), we have a quick plane test for each
face.  If the number of planes is large, it might be better to store
the polygons and use an efficiency scheme.  However, the method above is
certainly simpler to code up and is pretty efficient, compact, and robust:
for example, there are no special edge intersection conditions, as there are
no edges!
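As a sketch (not production code), the algorithm might look like this in C.
Types and names are illustrative; `do` is renamed `vo` since `do` is a C
keyword, and the value at the ray origin uses the D from Ax + By + Cz + D = 0
(negative inside), giving T = -vo / dp.

```c
#include <float.h>

/* A face plane with outward normal (a,b,c): points inside the
 * polyhedron satisfy a*x + b*y + c*z + d <= 0. */
typedef struct { double a, b, c, d; } Plane;

/* Returns 1 and fills in *tnear, *tfar and the index of the entering
 * face (for the shading normal) if the ray hits the convex
 * polyhedron; returns 0 on a miss. */
static int hit_polyhedron(const Plane *p, int nplanes,
                          double ox, double oy, double oz,
                          double dx, double dy, double dz,
                          double maxdist,
                          double *tnear, double *tfar, int *nearface)
{
    int i;
    *tnear = -DBL_MAX;
    *tfar  =  DBL_MAX;
    *nearface = -1;
    for (i = 0; i < nplanes; i++) {
        double dp = p[i].a * dx + p[i].b * dy + p[i].c * dz;
        double vo = p[i].a * ox + p[i].b * oy + p[i].c * oz + p[i].d;
        if (dp == 0.0) {
            if (vo > 0.0) return 0;   /* parallel, origin outside half-space */
        } else {
            double t = -vo / dp;      /* distance to this face's plane */
            if (dp < 0.0) {           /* faces toward ray: a near hit  */
                if (t > maxdist) return 0;
                if (t > *tnear) { *tnear = t; *nearface = i; }
            } else {                  /* faces away: a far hit         */
                if (t < 0.0) return 0;
                if (t < *tfar) *tfar = t;
            }
            if (*tnear > *tfar) return 0;   /* near/far crossed: miss  */
        }
    }
    return 1;
}
```

A unit cube, for instance, is just six planes; no vertices or polygons are
ever stored.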

An aside:
One thing that I hadn't mentioned is how we can better initialize the near and
far hit distances before the slab tests.  It turns out that when testing
bounding volumes we can set Tnear (the near distance) to 0 and Tfar to the
maximum ray distance (if any - else set it to "infinity").  This corresponds
to the segment formed by the ray:  we consider points between 0 and the
maximum ray distance to be valid points on the ray, and so want to find slab
segments that overlap this segment.  Note that these initial conditions get
rid of some complication in the algorithm:  we now don't have to test our
slab segment against the ray's segment and the previous bounding volume
segment, but rather are always testing against a single segment which
represents the intersection of both of these.  This idea is not in Kay &
Kajiya's presentation, by the way.

Note that there is one change that occurs if you initialize the near and far
distances in this way:  the near and far distances computed when a volume is
hit will not yield the true surface intersections, but rather will give the
first and last points inside the volume.  This is useful for bounding volumes,
since we usually just want to know if we hit them and have some idea of the
distance.

-------------------------------------------------------------------------------

New Radiosity Bibliography Available, by Eric Haines

A bibliography of publications related to radiosity is now available at:

	freedom.graphics.cornell.edu [128.84.247.85]: /pub/Radiosity

It's a compressed package using "refer" format.  Articles related to radiosity
or "non-classical" rendering (soft shadows, caustics, etc.) are included here.
This version is significantly improved from the version I posted some months
ago.  Many thanks to all who helped update it.

-------------------------------------------------------------------------------

A Suggestion for Speeding Up Shadow Testing Using Voxels, by Andrew Pearce

Requirements:
------------
This method is applicable to any type of spatial subdivision.  It is probably
best suited to those of us who tessellate our objects to ray trace them.

Method:
------
I've expanded on Eric Haines' method of storing the last object hit by a
shadow ray with each light source:  I now save a pointer to the voxel which
contains the last object hit (or at least the voxel the intersection occurred
in, if the object spans multiple voxels).  My rationale is that if the shadow
ray does NOT intersect the "last object" which shadowed that light source,
then the likelihood of it hitting something in the neighborhood of that same
object is pretty good.  If we save the voxel which the shadowing occurred in
for the previous ray, we can examine the other objects in that voxel for
possible light source occlusion WITHOUT the ray startup and voxel traversal
costs.  Now this assumption is likely untrue if you're just tracing spheres
and checker boards (some slight intended :^) but it works quite well for
tessellated objects (NURBS patches in my case).

I NULL my pointers to both the last object and last voxel if no shadow
intersection was found on the last shadow ray to this light.

I store a ray_id with every object to ensure that any given ray is tested for
intersection with an object ONLY ONCE, even if it spans multiple voxels.  Each
ray has a unique id.  (I thought, as did David Jevans, that this was a well
known technique.)  So, even if the shadow ray misses all of the objects in
"last voxel" and must be traced like a regular shadow ray, we are likely not
losing much, since if the shadow ray enters the "last voxel" during its
traversal of the voxels, the ray will see that it has already been intersected
with all of the objects in that voxel, and that voxel will be skipped
relatively quickly (slightly slower than an empty voxel: the time it takes to
compare the ray id against the ray id stored with each object).
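A minimal sketch of the ray id ("mailbox") bookkeeping in C; the names here
are illustrative, not from Andrew's ray tracer.

```c
/* Each object carries the id of the last ray tested against it, so an
 * object spanning several voxels is intersected at most once per ray. */
typedef struct {
    long last_ray_id;     /* id of the last ray tested against this object */
    /* ... geometry would go here ... */
} Object;

static long current_ray_id = 0;

/* Call once per new ray (primary, shadow, reflection, ...). */
static long new_ray(void)
{
    return ++current_ray_id;
}

/* Returns 1 if the object still needs an intersection test for this
 * ray, and stamps it as tested; returns 0 if it was already tested
 * (e.g. in a previously traversed voxel). */
static int needs_test(Object *obj, long ray_id)
{
    if (obj->last_ray_id == ray_id)
        return 0;
    obj->last_ray_id = ray_id;
    return 1;
}
```

The per-object cost of a repeat visit is just one integer compare, which is
why a skipped voxel is only slightly slower than an empty one.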

Pseudo-code:
-----------
/****************************************************************************/
/* Obviously we cannot use the "last object hit" for transparent	    */
/* objects since multiple objects may be involved in the shadowing process. */
/* The code outlined below assumes that we only store the "last object" and */
/* "last voxel" for shadow rays generated from a primary ray intersection.  */
/* What we really should have is a tree indicating what was hit at each     */
/* level of recursion. Ie. What object shadowed the intersection point      */
/* generated by the last refraction ray, generated from a reflection ray    */
/* generated by a primary ray?						    */
/****************************************************************************/
float check_shadowing(ray, light)
RAY_REC   ray;   /* ray from shading point to light source */
LIGHT_REC light; /* the light source we are interested in */
{
if (light.last_object != NULL) {

	/* intersect_object() marks object as having been */
	/* intersected by this ray. */
	hit = intersect_object( ray, light.last_object, &object);

	if (hit) {
		return(1.0); /* full shadowing */
	}

	if (light.last_voxel != NULL) { /* implied !hit */

		/* intersect_objects_in_voxel_for_shadows() returns hit = */
		/* TRUE on the first affirmed intersection with a         */
		/* non-transparent object.  It ignores transparent        */
		/* objects altogether.                                    */
		hit = intersect_objects_in_voxel_for_shadows( ray,
							      light.last_voxel,
							      &object);
		if (hit) {
			light.last_object = object;
			return(1.0);
		}
	}
}

/* traverse_voxels_for_shadows() DOES intersect transparent objects and */
/* sorts the intersections for proper attenuation of the light intensity. */
/* If it hits multiple objects, the object returned is the transparent one. */
hit = traverse_voxels_for_shadows(ray, &object, &voxel, &shadow_percent);

if (!hit) {
	light.last_object = light.last_voxel = NULL;
	return(0.0);
}
if (object.transparency_value > 0.0) {
	/* the object is transparent */
	light.last_object = light.last_voxel = NULL;
}
else {
	light.last_object = object;
	light.last_voxel  = voxel;
}
return ( shadow_percent );
}


Results:
-------
(For the discussion below, "positive occlusion" means we have guaranteed that
the point we are shading is shadowed from the light source.)

The "last object hit" method provided a positive occlusion 52% of the time,
and if the "last object hit" method did not provide positive occlusion, the
"last voxel" method provided a positive occlusion 76% of the time.

I performed a "pixie" of the code with and without this optimization on an SGI
Personal Iris, with no other code or scene changes; there is ONE light source
in the scene which is casting shadows.  The ray tracer with the "last
voxel" optimization used 2% fewer cycles.  (Actual system times vary wildly
based on load, but the last voxel version did run about 10% faster using the
UNIX "times()" system call - I don't trust "times()", though.)  Two percent
doesn't seem like an awful lot, but this is just a quick and dirty hack, and I
would expect to save 2% on EACH light source which casts shadows.

The test scene I'm using is a snare drum with a shadow casting spotlight on
it.  See IEEE Computer Graphics & Applications, Sept. 1988; the cover story
includes a tiny reproduction of the image, should you wish to see what I'm
playing with, although the image in CG&A was done with a 2D projective
renderer (ray casting), not ray tracing.  The reflections in the floor and on
the chrome in that image were realised using two separate cubic environment
maps, the shadows were done with a depth map, the wallpaper is a simple
parametric map, and the floor boards have 6 separate solid wood textures
randomly assigned to them.

The test scene contains approximately 60,000 triangles and I'm rendering at
512x512 resolution with no anti-aliasing, and a limit of one recursion level
for both shadow and reflection rays for a total of 638,182 rays.  There is
only one light in the scene which casts shadows.

I'll be doing tests on more scenes with various levels of voxel subdivision,
and object distribution.  I'll let you know the results, even if they're
negative!  (The above results did surprise me a little).

Additional Note:  I urge anyone doing ray/triangle intersections to use Didier
Badouel's "An Efficient Ray-Polygon Intersection" (Graphics Gems pp.
390-393).  I have implemented both Badouel's method and Snyder/Barr's
barycentric method, and Badouel's method is about 19% faster (I optimized the
snot out of my implementation of the barycentric method, but I used most of
those same opts in Badouel's method as well).  This result is from comparing
the same scene ray traced with the two versions.

_________________________________________________________________________

[A second article followed some days later - EAH]

More results from the voxel/shadow optimization:
-----------------------------------------------

One thing I neglected to mention in the previous message was that you should
be sure to NULL out your last object and last voxel hit pointers at the end of
each scanline.

NEW TEST SCENES:
---------------
The test scenes producing the results below are 40x40x40 arrays of polygons
(each polygon is broken down into 8 triangles).  The polygons are distributed
at unit distances from each other, and then their orientations are jittered
(rotations) and their positions are jittered (translate of -1.  to +1.).  Each
polygon is roughly a unit in width and height.  The polygons are inside of a
larger box (the room) with 15 shadow casting lights in a 5x3 array just below
the "ceiling".  There were no reflection or refraction rays generated.  All
images were computed at 645x486 pixels with 4 supersamples per pixel.  Every
intersection generated 15 shadow rays.  The "sparse" scene had every polygon
scaled by 0.2 in all dimensions.  The resulting sparse image looks like an
exploding particle system (due mainly to the 145 degree field of view).  In
the dense image, almost no background can be seen.

CHART DESCRIPTION:
-----------------
"Last Voxel" speed up refers to the percentage fewer cycles the "last voxel"
method took to compute the same image.  Since this percentage is calculated
based on the number of cycles the entire ray trace took, it is an exact
measure of the speed up.  Negative numbers mean that the "last voxel" method
was slower.  It is important to note that file read, tessellation and spatial
subdivision time is included in the count of the cycles, so the actual speed
up to the ray tracing alone may be greater than is stated, depending on how
you want to measure things.

Average Hits Per Ray is included as a general measure of the density of the
scene; it is the number of confirmed ray/triangle intersections divided by the
total number of rays (shadow rays included).  In the sparse scene it is less
than one since most of the shadow rays made it to the light sources without
hitting anything.  The dense scene is greater than one because some confirmed
intersections are abandoned due to nearer intersections being found in the
same voxel.

Average Hits Per Primary Ray is the number of confirmed ray/triangle
intersections divided by the number of primary (eye) rays.

--------------+-----------------+-----------------+
Scene         | 64,000 jittered | 64,000 jittered |
Description   | polygons (0.2)  | polygons (1.0)  |
	      |   (sparse)      |   (dense)       |
--------------+-----------------+-----------------+
Number of     |   551,408       |  551,408        |
Triangles     |                 |                 |
--------------+-----------------+-----------------+
Number of     |                 |                 |
Shadow Casting|       15        |     15          |
Lights        |                 |                 |
--------------+-----------------+-----------------+
Number of Rays|   11,324,318    |  8,427,904      |
--------------+-----------------+-----------------+
"Last Object" |                 |                 |
Success Rate  |       50.7%     |  90.9%          |
--------------+-----------------+-----------------+
"Last Voxel"  |                 |                 |
Success Rate  |       23.4%     |  39.3%          |
--------------+-----------------+-----------------+
"Last Voxel"  |                 |                 |
Speed Up      |       1.04%     |  3.6%           |
--------------+-----------------+-----------------+
Average Hits  |                 |                 |
Per Ray       |       0.265     |  1.001          |
--------------+-----------------+-----------------+
Average Hits  |                 |                 |
Per Primary   |       2.363     |  6.638          |
Ray           |                 |                 |
--------------+-----------------+-----------------+

It is encouraging that there is a speedup even in very sparse scenes (however
slight a speed-up it is).  These "random" scenes are not very indicative of
the type of scenes we are generally interested in ray tracing.  (Really, these
scenes look like particle systems; I can think of much better ways to produce
similar images :^).  So, here's the same chart for the snare drum with
increasing numbers of lights.  The extra lights are scattered around the
"room" and all point towards "Spanky" the snare drum.

--------------+---------+---------+---------+
Scene         | Snare 1 | Snare 3 | Snare 6 |
Description   | Shadow  | Shadow  | Shadow  |
	      | Light   | Lights  | Lights  |
--------------+---------+---------+---------+
Number of     |  59,692 |  59,692 |  59,692 |
Triangles     |         |         |         |
--------------+---------+---------+---------+
Number of     |         |         |         |
Shadow Casting|    1    |    3    |    6    |
Lights        |         |         |         |
--------------+---------+---------+---------+
Number of Rays| 638,182 |1,097,569|1,737,021|
--------------+---------+---------+---------+
"Last Object" |         |         |         |
Success Rate  |  52.6%  |  89.0%  |  88.7%  |
--------------+---------+---------+---------+
"Last Voxel"  |         |         |         |
Success Rate  |  76.3%  |  77.0%  |  76.9%  |
--------------+---------+---------+---------+
"Last Voxel"  |         |         |         |
Speed Up      |  1.97%  |  3.37%  |  4.39%  |
--------------+---------+---------+---------+
Average Hits  |         |         |         |
Per Ray       |  0.75   |  0.67   |  0.59   |
--------------+---------+---------+---------+
Average Hits  |         |         |         |
Per Primary   |  1.84   |  2.84   |  3.94   |
Ray           |         |         |         |
--------------+---------+---------+---------+

Well, the speed-up isn't quite 2% per light as I said in my previous article,
but it is there.  The "last voxel" trick has not slowed down the ray tracing
process in any of these tests which is quite encouraging.

------------------------------------------------

Another helpful hint if you are ray tracing tessellated or planar surfaces:
in general, when spawning a shadow ray, one must be careful to avoid
intersecting the object just struck.  Usually this is done by adding some
small epsilon to the origin of the shadow ray along its direction of travel.
However, if you store a ray id with every object (triangle) to record whether
that object has been tested against the current ray, then you can use that id
to avoid testing your shadow ray against the intersected object which generated
the shadow ray.  Before spawning the shadow ray, place the id number of the
shadow ray into the ray id field of the object which has just been
intersected.  This method won't work for objects which can self shadow (e.g.
parametric or implicit surfaces), but it works fine for planar surfaces (e.g.
triangles from surface tessellations).
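A minimal C sketch of the trick (all names are illustrative, and the actual
ray/triangle intersection test is elided - here every unstamped triangle is
simply assumed to be hit):

```c
/* A triangle carrying a per-object ray id ("mailbox") field. */
typedef struct { long last_ray_id; } Triangle;

static long next_ray_id = 0;

/* Hypothetical shadow trace: skips triangles already stamped with
 * this ray's id, including the originating triangle. */
static int hits_anything(Triangle **tris, int n, long ray_id)
{
    int i;
    for (i = 0; i < n; i++) {
        if (tris[i]->last_ray_id == ray_id)
            continue;                 /* already tested (or the origin) */
        tris[i]->last_ray_id = ray_id;
        /* ... a real ray/triangle test would go here; this sketch
         * treats every untested triangle as a hit ... */
        return 1;
    }
    return 0;
}

/* Stamp the originating triangle with the new shadow ray's id before
 * tracing, so no epsilon offset of the ray origin is needed. */
static int shadow_test(Triangle *origin, Triangle **scene, int n)
{
    long id = ++next_ray_id;          /* unique id for this shadow ray */
    origin->last_ray_id = id;         /* pretend it was already tested */
    return hits_anything(scene, n, id);
}
```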

--------------------------------------------------------

- Andrew Pearce, Alias Research, Toronto, like Downtown
- pearce%alias@csri.utoronto.ca   |   pearce@alias.UUCP

-------------------------------------------------------------------------------

Real3d, passed on by Juhana Kouhia (jk87377@tut.fi) and "Zap" Andersson

[This was an advertisement on some Amiga board.]

	Real 3D FEATURES
	----------------

Real 3D is a design and animation program for producing high quality,
realistic pictures of three dimensional objects.  It provides an impressive
set of advanced features including:

Ray Tracing	    The ray tracing in Real 3D is strongly based on the
		    physical reality of the real world. Real 3D
		    produces pictures by simulating the laws of physics,
		    and consequently they represent reality with
		    astonishing accuracy.

Speed		    Innovative methods and new ray tracing algorithms
		    make Real 3D really fast. When using fastest ray tracing
		    mode, rendering time is typically from 1 to 15 minutes.

Hierarchical	    With Real 3D you can create hierarchical objects.
Object		    This means that objects you create can be made of
Oriented	    subobjects, and these subobjects may have their
Construction	    own substructure and so on. This kind of a tree
of Objects	    structure is well known in the context of disk
		    operating systems, in which you can create directories
		    inside directories. In Real 3D counterparts of these
		    directories are used to collect objects into logical
		    groups.
		    This kind of approach makes for example object
		    modifications extremely easy, because it is possible
		    to perform operations to logical entities. If you
		    want to copy a DOS directory, you don't have to
		    take care of the files and directories inside it.
		    In the same manner, you can stretch a complex object in
		    Real 3D as easily as one part of it.

True Solid	    Real 3D includes a true solid modeler. The solid model
Model		    is the most sophisticated way to represent three
		    dimensional objects. This modeling technique requires
		    much computing power, and therefore it has previously
		    been used only in environments which are many times
		    faster than the Amiga. Now it is possible for Amiga
		    owners to enjoy all the advantages of the solid model,
		    thanks to the extensive optimizations carried out when
		    developing Real 3D.

Smoothly Curved     In addition to plane surfaces, Real 3D includes several
Surfaces	    curved surfaces, such as ball, cylinder, cone and
		    hyperboloid. This means that no matter how much you
		    enlarge a ball created by Real 3D, you don't find
		    any edges or corners on its surface. Furthermore,
		    this makes the program much faster. And what is most
		    important, the produced pictures look really good.

Boolean		    Solid model allows Boolean operations between objects.
Operations	    It is possible, for example, to split an object into
		    two pieces and move the pieces apart so that the inner
		    structure of the object is revealed.
		    Operations can also be done so that the properties of
		    the material of the target object are changed. By using
		    a brilliant cylinder one can drill a brilliant hole into
		    a matt object. Operations are a powerful way to create
		    and modify objects. Especially in modeling technical
		    objects these Boolean operations are indispensable.

Properties of       A user of Real 3D is not restricted to use some basic
Surfaces	    surface brilliancies such as matt or shiny. Instead,
		    the light reflection properties can be freely adjusted
		    from absolute matt to totally mirrorlike, precisely
		    to the desired level.

Properties of       Due to solid model, it is possible to create objects
Materials	    from different materials, which have suitable physical
		    properties. Just as surface brilliancy, also transparency
		    of a material can be adjusted without any restrictions.
		    Even light refraction properties are freely adjustable
		    so that it is possible to create optical devices from
		    glass lenses. These devices act precisely as their
		    real counterparts: a magnifying glass in Real 3D world
		    really magnifies!

Texture Mapping     The texture mapping properties of Real 3D are not
		    restricted to a typical chequered pattern: Any IFF
		    picture can be used to paint objects. You can create
		    pictures with your favorite painting program as well
		    as with a video digitizer or a scanner. For example, by
		    digitizing a wood filament pattern, it is easy to create
		    wooden objects looking very realistic.
		    Pictures can be located precisely to desired places,
		    with desired sizes and directions.
		    Real 3D offers as many as seven texture mapping methods,
		    including parallel, cylinder, ball and spiral projections.

Light Sources       Unlimited number of light sources of desired color and
		    brightness. The only restriction is amount of memory.

Animation	    Just as you can create single pictures, you can
Support		    create series of pictures - animations. Real 3D also
		    includes software for presenting these animations
		    interactively. Animation playback can be directed
		    by a script language from ascii files or even from the
		    keyboard. Instead of looping animations, you can define
		    infinitely many ways to present your pictures. Therefore
		    you can create animations from a small number of pictures
		    by displaying them in various ways.

Rendering	    Real 3D includes several different rendering techniques:
Techniques	    a real time wireframe model, a hidden line wireframe model,
		    a high speed one light source ray tracing model,
		    a non-shadow-casting ray tracing model and a perfect ray
		    tracing model. You can select either a HAM display mode
		    with 4096 colors or a grey scale display mode offering
		    higher resolution. Also version with 16 million color
		    rendering (24 bits output) will become available during
		    November 1990.

Availability	    As of this writing (6.9.1990), marketing of
		    Real 3D has already started in Europe, at a slightly
		    lower price than that of Sculpt 4D. Distribution
		    in the USA is not yet arranged. For further information
		    on Real 3D, please contact:

			REALSOFT KY
			KP 9, SF-35700 VILPPULA
			FINLAND

			Tel. +358-34-48390

--------

from "Zap" Andersson:

REAL3d ('member that?) is available (in Sweden at least) from:
KARLBERG & KARLBERG AB
FLADIE KYRKOVAEG
S-23700 BJARRED
SWEDEN
Phone: +46(0)46-47450
Phax:  +46(0)46-47120

-------------------------------------------------------------------------------

Utah Raster Toolkit Patch, by Spencer Thomas

The first patch for the Utah Raster Toolkit version 3.0 is now available.  The
patch file is urt-3.0.patch1, and is currently available from cs.utah.edu and
freebie.engin.umich.edu, and will soon be available from our other archive
sites (depending on how quickly the archive maintainers grab the patch file).
There are also slight changes to the files urt-img.tar and urt-doc.tar (in
particular, if you had trouble printing doc/rle.ps, this is fixed).

[p.s. there was also a fix to a getx11 bug for the Sun 4, which is
pub/urt-SUNOS4.1-patch.tar.Z on freebie and weedeater. --EAH]

Here is the description from the patch file:
	Fixed core dump in rletogif, compile warnings in giftorle.
	Minor update bug in getx11 fixed.
	getx11 now exits if all its input files are bogus.
	New program: rlestereo, combines two images (left and right eye)
	into a red-blue (or green) stereo pair.
	Configuration file for Interactive Systems 386/ix.
	Minor fix to rleskel: ignore trailing garbage in an input image.

And the list of the current archive sites, from the urt.README file in
the ftp directory:

  North America
     East coast
	weedeater.math.yale.edu	130.132.23.17	(pub/*)
     Midwest
	freebie.engin.umich.edu	35.2.68.23	(pub/*)
     West
	cs.utah.edu		128.110.4.21	(pub/*)
  Europe
     Sweden
	alcazar.cd.chalmers.se	129.16.48.100	(pub/urt/urt-3.0.tar.Z)
	maeglin.mt.luth.se	130.240.0.25	(pub/Utah-raster/*)
  Australia
     	ftp.adelaide.edu.au	129.127.40.3	(pub/URT/*)
	or, if you know what this means:
		Fetchfile:     sirius.ua.oz in URT

=Spencer W. Thomas 		EECS Dept, U of Michigan, Ann Arbor, MI 48109
spencer@eecs.umich.edu		313-936-2616 (8-6 E[SD]T M-F)

-------------------------------------------------------------------------------

NFF Shell Database, by Thierry Leconte (Thierry.Leconte@irisa.fr)

[Below is Thierry Leconte's code for an NFF version of the seashell generator
I listed last issue.  He added some reasonable lights and a view (which I was
too lazy to do).  I'll probably add it to the 3.0 version of the SPD.  Setting
"steps" is similar to an SPD SIZE_FACTOR type control.  You'll need the SPD to
compile and link this program.  -- EAH]

#include <stdio.h>
#include <math.h>
#include "def.h"
#include "lib.h"

main(argc,argv)
int argc;  char *argv[];
{
static  double  gamma = 1.0 ;   /* 0.01 to 3 */
static  double  alpha = 1.1 ;   /* > 1 */
static  double  beta = -2.0 ;   /* ~ -2 */
static  int     steps = 600 ;   /* ~number of spheres generated */
static  double  a = 0.15 ;      /* exponent constant */
static  double  k = 1.0 ;       /* relative size */
double  r,angle ;
int     i ;
COORD4  back_color, obj_color ;
COORD4  light ;
COORD4  from, at, up ;
COORD4  sphere;

#define OUTPUT_FORMAT	OUTPUT_CURVES

/* output viewpoint */
SET_COORD( from, 6, 60, 35 ) ;
SET_COORD( at, 0.0, 8.0, -15.0 ) ;
SET_COORD( up, 0.0, 0.0, 1.0 ) ;
lib_output_viewpoint( &from, &at, &up, 45.0, 0.5, 512, 512 ) ;

/* output background color - UNC sky blue */
SET_COORD( back_color, 0.078, 0.361, 0.753 ) ;
lib_output_background_color( &back_color ) ;

/* output light sources */
SET_COORD( light, -100.0, -100.0, 100.0 ) ;
lib_output_light( &light ) ;

/* set up sphere color */
SET_COORD( obj_color, 1.0, 0.8, 0.4 ) ;
lib_output_color( &obj_color, 0.8, 0.2, 100.0, 0.0, 1.0 ) ;

    for ( i = -steps*2/3; i <= steps/3 ; ++i ) {
	angle = 3.0 * 6.0 * M_PI * (double)i / (double)steps ;
	r = k * exp( a * angle ) ;
	sphere.x = r * sin( angle ) ;
	sphere.y = r * cos( angle ) ;
	/* alternate formula: z = alpha * angle */
	sphere.z = beta * r ;
	sphere.w = r / gamma ;
	lib_output_sphere( &sphere, OUTPUT_FORMAT ) ;
    }
}

-------------------------------------------------------------------------------

FTP list update and New Software, by Eric Haines, George Kyriazis

I posted my FTP site list for ray tracing related stuff, and a few people were
nice enough to write and update this list.  The new list is posted after this
note from George Kyriazis at RPI (his site has the friendliest login I've ever
seen):

--------

iear.arts.rpi.edu [128.113.6.10]:

There was an article in IEEE CG&A a while ago from Sandia National Labs (the
guy's name was Mareda, I think) that uses Fourier transforms and digital
filters to create wave height fields from white noise.  What I have in the
directory is an implementation of this algorithm and a program that raytraces
it on the AT&T Pixel Machine.

A list of what exists in there follows:

graphics-gems:	source code to Glassner's Graphics Gems book.
ray-tracing-news:	What else?
wave:		Rendering of ocean waves using FFT (Mareda, et al.)
coldith:	conversion from my image format to sun rasterfiles
		and dithering from 32 or 24 bit -> 8 bit rasterfiles.
drt:		A ray-tracer from the Netherlands by Marinko Laban
gk:		A distributed raytracer by me.
microray:	DBW_uRAY by David Wecker
mtv:		Well, you know what this is.  It probably is an old version.
non-completed-OO-modeler:
		Something I was working on.  It barely works, but I put
		it out there just for the fun of it.
ohta:		Masataka Ohta's constant time ray-tracer;
		with a few improvements.
pxm-and-vmpxm-ray.etc:
		Two raytracers for the AT&T Pixel Machine.  The second
		one uses some virtual memory code to store more objects.
		The VM source is included also.
qrt:		Well, QRT!
rayshade:	Rayshade 2.21 by Craig Kolb.

I hope I'll have a few anonymous ftp sessions after this.. :-)

--------

Corrected FTP site list for ray tracing related material:

weedeater.math.yale.edu [130.132.23.17]:  /pub - *Rayshade 3.0 ray tracer*,
	*color quantization code*, RT News, *new Utah raster toolkit*, newer
	FBM, *Graphics Gems code*.  Craig Kolb <kolb@yale.edu>

cs.uoregon.edu [128.223.4.13]:  /pub - *MTV ray tracer*, *RT News*, *RT
	bibliography*, other raytracers (including RayShade, QRT, VM_pRAY),
	SPD/NFF, OFF objects, musgrave papers, some Netlib polyhedra, Roy Hall
	book source code, Hershey fonts, old FBM.  Mark VandeWettering
	<markv@acm.princeton.edu>

hanauma.stanford.edu [36.51.0.16]: /pub/graphics/Comp.graphics - best of
	comp.graphics (very extensive), ray-tracers - DBW, MTV, QRT, and more.
	Joe Dellinger <joe@hanauma.stanford.edu>

freedom.graphics.cornell.edu [128.84.247.85]:  *RT News back issues, source
	code from Roy Hall's book "Illumination and Color in Computer
	Generated Imagery", SPD package, Heckbert/Haines ray tracing article
	bibliography, Muuss timing papers.  [CURRENTLY NOT AVAILABLE]

alfred.ccs.carleton.ca [134.117.1.1]:  /pub/dkbtrace - *DKB ray tracer*.
	David Buck <david_buck@carleton.ca>

uunet.uu.net [192.48.96.2]:  /graphics - RT News back issues (not complete),
	other graphics related material.

iear.arts.rpi.edu [128.113.6.10]:  /pub - *Kyriazis stochastic Ray Tracer*.
	qrt, Ohta's ray tracer, other RT's (including one for the AT&T Pixel
	Machine), RT News, Graphics Gems, wave ray tracing using digital
	filter method.  George Kyriazis <kyriazis@turing.cs.rpi.edu>

life.pawl.rpi.edu [128.113.10.2]: /pub/ray - *Kyriazis stochastic Ray Tracer*.
	George Kyriazis <kyriazis@turing.cs.rpi.edu>

xanth.cs.odu.edu [128.82.8.1]:  /amiga/dbw.zoo - DBW Render for the Amiga (zoo
	format).  Tad Guy <tadguy@cs.odu.edu>

munnari.oz.au [128.250.1.21]:  */pub/graphics/vort.tar.Z - CSG and algebraic
	surface ray tracer*, /pub - DBW, pbmplus.  David Hook
	<dgh@munnari.oz.au>

cs.utah.edu [128.110.4.21]: /pub - *Utah raster toolkit*.  Spencer Thomas
	<thomas@cs.utah.edu>

gatekeeper.dec.com [16.1.0.2]: /pub/DEC/off.tar.Z - *OFF objects*,
	/pub/misc/graf-bib - *graphics bibliographies (incomplete)*.  Randi
	Rost <rost@granite.dec.com>

abcfd20.larc.nasa.gov [128.155.23.64]: /amiga - DBW,
	/usenet/comp.{sources|binaries}.amiga/volume90/applications -
	DKBTrace 2.01.  Tad Guy <tadguy@cs.odu.edu>

expo.lcs.mit.edu [18.30.0.212]:  contrib - *pbm.tar.Z portable bitmap
	package*, *poskbitmaptars bitmap collection*, *Raveling Img*,
	xloadimage.  Jef Poskanzer <jef@well.sf.ca.us>

venera.isi.edu [128.9.0.32]:  */pub/Img.tar.z and img.tar.z - some image
	manipulation*, /pub/images - RGB separation photos.  Paul Raveling
	<raveling@venera.isi.edu>

ftp.ee.lbl.gov [128.3.254.68]: *pbmplus.tar.Z*.

ucsd.edu [128.54.16.1]: /graphics - utah rle toolkit, pbmplus, fbm, databases,
	MTV, DBW and other ray tracers, world map, other stuff.  Not updated
	much recently.

okeeffe.berkeley.edu [128.32.130.3]:  /pub - TIFF software and pics.  Sam
	Leffler <sam@okeeffe.berkeley.edu>

irisa.fr [131.254.2.3]:  */iPSC2/VM_pRAY ray tracer*, /NFF - many non-SPD NFF
	format scenes.  Didier Badouel <badouel@irisa.irisa.fr>

surya.waterloo.edu [129.97.129.72]: /graphics - FBM, ray tracers

vega.hut.fi [128.214.3.82]: /graphics - RTN archive, ray tracers (MTV, QRT,
	others), NFF, some models

netlib automatic mail replier:  UUCP - research!netlib, Internet -
	netlib@ornl.gov.  *SPD package*, *polyhedra databases*.  Send one
	line message "send index" for more info.

UUCP archive: avatar - RT News back issues.  For details, write Kory Hamzeh
	<kory@avatar.avatar.com>

======== USENET cullings follow ===============================================

Humorous Anecdotes, by J. Eric Townsend, Greg Richter, Michael McCool,
	Eric Haines


from J. Eric Townsend (jet@uh.edu):

Went to lunch with a good (but non-technical) friend of mine the other day.
On the way home, we were talking about what a nice day it was for lying around
and reading.  She mentioned that she'd gotten a Joyce Carol Oates book I was
looking for and was going to read it.

I said, "I wish I had time to read something fun, but I've got to read some
more ray tracing today for my special problems class."

She said, "Ray Tracing?  Who's that?  I don't think I've heard of him."

She thought it was funny only after I stopped laughing and took a few minutes
to explain it to her.

Tomorrow, I think I'm going to add a second name to my door (we have slots for
two as most RA's share offices):  "Ray Tracing".

--------

from Greg Richter ({emory,gatech}!bluemtn!greg)

We had a similar incident with a new secretary who informed me that a salesman
had called asking about our Gator Rays.  She wanted to know if we were
involved in animal testing.

--------

from Michael McCool (mccool@dgp.toronto.edu)

>She said, "Ray Tracing?  Who's that?  I don't think I've heard of him."

For some reason the term seems to collect fun.  At SIGGRAPH in Dallas this
year the Ray-tracing course organizers [Glassner & Arvo] kept bringing up the
"related fields" of rat-racing and tray-racing.  Last SIGGRAPH there was a
"tutorial video" on tray-racing that was shown; it involved sliding trays
down stairs, and gave examples of acceleration techniques, etc.  One wonders.

--------

from Eric Haines:

And don't forget the t-shirt (a number of Canadians, Calgarians I think, had
them on):  There's a picture of a nerdy guy holding a pencil and kneeling over
a man lying on the ground covered with a sheet of paper.  The entire figure
is labeled "Ray Tracing".  The caption:  "Okee dokee, Ray, here comes Mr.
Pencil."

John Wallace also mentioned to me seeing a poster in Denver of jewels with the
words "Ray Tracy" below, as this was the jeweller's name.

For those who've moved from graduate work to the world of business, a slogan:
"Once I was a ray-tracer, now I'm a rat racer."

I received the "Spherical Aberration" award from Andrew Glassner & Jim Arvo
for helping organize various ray tracing things.  The award was a glass sphere
on a wooden base.  It sat on my office windowsill until I moved to a new
office one day.  Looking at the award, I noticed black marks at the base.  The
sphere had acted like a (crummy, luckily) magnifying glass and burnt grooves
in the wood!  It was interesting to see how the marks seemed to correspond to
the movement of the sun and the seasons.  Anyway, the moral is "Caustics can
be dangerous" (after all, why do you think they call them "caustic"?).

-------------------------------------------------------------------------------

G r a p h i c s   I n t e r f a c e   ' 9 1
	Calgary, Alberta, Canada
	3-7 June 1991

CALL FOR PARTICIPATION

Graphics Interface '91 is the seventeenth Canadian Conference devoted to
computer graphics and interactive techniques, and is the oldest regularly
scheduled computer graphics conference in the world.  Now an annual event
comprising a conference, film festival, and tutorials, Graphics Interface
has established a reputation for a high-quality technical program.  The 1991
conference will be held in Calgary, Alberta, June 3-7 1991 in conjunction
with Vision Interface '91. Graphics Interface '91 is sponsored by the
Canadian Man-Computer Communications Society.

IMPORTANT DATES:

Four copies of a Full Paper due:	31 Oct. 1990	*** N O T E ***
Tutorial Proposals due:			15 Nov. 1990
Authors Notified:			 1 Feb. 1991
Cover Submissions due:			 1 Feb. 1991
Final Paper due:			29 Mar. 1991
Electronic Theatre Submissions due:	 1 May  1991

TOPICS:

Contributions are solicited describing unpublished research results and
applications experience in graphics, including but not restricted to the
following areas:

Image Synthesis & Realism		User Interface
Shading & Rendering Algorithms		Windowing Systems
Geometric Modeling			Computer Cartography
Computer Animation			Image Processing
Interactive Techniques			Medical Graphics
Graphics for CAD/CAM			Graphics in Education
Computer-Aided Building Design		Graphics & the Arts
Industrial & Robotics Applications	Visualization
Graphics in Business			Graphics in Simulation

Four copies of full papers should be submitted to the Program Chairman
before Oct. 31, 1990.  Include with the paper full names, addresses, phone
numbers, fax numbers and electronic mail addresses of all the authors. One
author should be designated "contact author"; all subsequent correspondence
regarding the paper will be directed to the contact author. The other
addresses are required for follow-up conference mailings, including the
preliminary program.

FOR GENERAL INFORMATION:		SUBMIT PAPERS TO:

Wayne A. Davis				Brian Wyvill
GI '91 General Chairman			GI '91 Program Chairman
Department of Computing Science		Department of Computer Science
University of Alberta			University of Calgary
Edmonton, Alberta, Canada		Calgary, Alberta, Canada
T6G 2H1					T2N 1N1

Tel:	403-492-3976			Tel:	403-220-6009
EMail:	usersams@ualtamts.bitnet	EMail:	blob@cpsc.ucalgary.ca

[There are often a fair number of papers on ray tracing at this conference
{vs.  maybe one at SIGGRAPH}, so it is a good place to consider submitting RT
research.  --EAH]

-------------------------------------------------------------------------------

Parametric Surface Reference, by Spencer Thomas

In article <3632.26f78d3f@cc.curtin.edu.au> Kessells_SR@cc.curtin.edu.au writes:
> Can someone please tell me where I can find an algorithm for
> finding the intersection of a ray and a Bezier and/or B-Spline
> patch.

You might look at

Lischinski and Gonczarowski, "Improved techniques for ray tracing
parametric surfaces," Visual Computer, Vol 6, No 3, June 1990, pp
134-152.

Besides presenting an interesting technique, the paper refers to most of the
other relevant work.

[This entire issue is dedicated to ray tracing, and is worth checking out.
-EAH]

--
=Spencer W. Thomas 		EECS Dept, U of Michigan, Ann Arbor, MI 48109
spencer@eecs.umich.edu		313-936-2616 (8-6 E[SD]T M-F)

-------------------------------------------------------------------------------

Solid Light Sources Reference, by Steve Hollasch, Philip Schneider
from: comp.graphics

Steve Hollasch (hollasch@enuxha.eas.asu.edu) writes:

    How do raytracers make light sources out of arbitrary objects?  I
thought a while back that one approach would be to find the direction to
the object from the illuminated point, fire a random cone of rays at the
object, and assign some fraction of the object's light to the point for
each unobstructed ray.

    The main drawback of this approach, as I see it, is that it would
yield a mottled illuminated area, and the mottling would vary in a
random manner.

    About five minutes ago I had an idea for another approach:

    -  Find the 2D bounding box (from the illuminated point's view) of the
       illuminating object.

    -  From this box, get the two orthogonal basis vectors.

    -  Now subdivide this bounding box (using the basis vectors), just
       as you would the original raytrace grid.

    -  For each light ray fired, determine if the ray intersects the
       illuminating object.  If it does, increment the `silhouette'
       counter.  If the light ray intersects no other object, then
       increment the `light' counter.

    -  Once done, the light that shines on the illuminated point is
       (light_counter/silhouette_counter) * object_light.

    This technique would also lend itself to numerous optimizations.
For example, if you assume that all light objects cast a convex
silhouette, then you could use binary search techniques to locate the
edges of the silhouette.  That is, you can assume that all scan lines
will be runs of space-silhouette-space intersections, home in on the
edges, and then multiply the resulting silhouette width by the scanline
height to get the relative area.

    Is there a better way to do this?  I haven't come across this
problem in any of the graphics texts I've read.
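The counting scheme above can be sketched in C.  This is a minimal
illustration only, with invented helper names (hits_sphere, visible_fraction):
the light is a sphere, sampled on a regular grid across the 2D bounding
square of its silhouette, and a single occluding sphere stands in for the
rest of the scene.  `silhouette' counts rays that actually strike the light,
`lit' counts those not blocked on the way there.

```c
#include <assert.h>
#include <math.h>

typedef struct { double x, y, z; } Vec3;

static Vec3 sub(Vec3 a, Vec3 b) { Vec3 r = {a.x-b.x, a.y-b.y, a.z-b.z}; return r; }
static double dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

/* Does the ray from org toward dir hit the sphere (center c, radius rad)?
   Closest-approach test; assumes org is outside the sphere. */
static int hits_sphere(Vec3 org, Vec3 dir, Vec3 c, double rad)
{
    Vec3 oc = sub(c, org);
    double t = dot(oc, dir) / dot(dir, dir);   /* parameter of closest approach */
    if (t < 0.0) return 0;                     /* sphere is behind the ray */
    Vec3 p = { org.x + t*dir.x - c.x,
               org.y + t*dir.y - c.y,
               org.z + t*dir.z - c.z };
    return dot(p, p) <= rad*rad;
}

/* Fraction of the light's silhouette visible from `point':
   (light_counter / silhouette_counter), as in the posting above.
   The bounding square here simply uses the world x and y axes as its
   two basis vectors, with the light viewed along +z. */
double visible_fraction(Vec3 point, Vec3 light_c, double light_r,
                        Vec3 blocker_c, double blocker_r, int grid)
{
    int i, j, silhouette = 0, lit = 0;
    for (i = 0; i < grid; i++)
        for (j = 0; j < grid; j++) {
            double u = light_r * (2.0*(i + 0.5)/grid - 1.0);
            double v = light_r * (2.0*(j + 0.5)/grid - 1.0);
            Vec3 target = { light_c.x + u, light_c.y + v, light_c.z };
            Vec3 dir = sub(target, point);
            if (!hits_sphere(point, dir, light_c, light_r))
                continue;                      /* missed the silhouette */
            silhouette++;
            if (!hits_sphere(point, dir, blocker_c, blocker_r))
                lit++;                         /* ray reaches the light */
        }
    return silhouette ? (double)lit / silhouette : 0.0;
}
```

The returned fraction multiplies the object's light, exactly as in the
(light_counter/silhouette_counter) * object_light formula above.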

--------

Philip Schneider (pjs@decwrl.dec.com) replies:

    Get in touch with the University of Washington Department of Computer
Science.  Two or three years ago Dan O'Donnell wrote an M.S. thesis
on what he called "solid light sources".  (Sorry, my copy is in a box
right now, so I don't recall the exact title :-(

    Real nice work, as I recall, and the resulting pictures were pretty
interesting -- one of them featured a coffee mug, with steam rising from it
that turned into a glowing "neon sign" light formed into the shape of
the word "Espresso" (of course, I'm biased from having worked alongside him
at the UW graphics lab :-)

Philip J. Schneider
DEC Advanced Technology Development

[Has anyone seen this thesis?  Could you give me an exact reference (for the
Ray Tracing Bibliography)?  --EAH]

-------------------------------------------------------------------------------

Graphics Gems Source Code Available, by Andrew Glassner, David Hook

As many readers of the usenet are aware, at Siggraph '90 Academic Press
released a new book, "Graphics Gems" (edited by Andrew Glassner, published by
Academic Press, Cambridge MA, 864 pp, $49.95, ISBN 0-12-286165-5).  The book
is a compilation of many people's work, showing how they solved important
problems in computer graphics.  Many of the Gems are realized with
ready-to-run C implementations, presented in two appendices.

The authors and the publisher are pleased to release this source code to the
public domain:  neither the authors nor publisher hold any copyright
restrictions on any of these files.  The code is freely available to the
entire computer graphics community for study, use, and modification.  We do
request that the comment at the top of each file, identifying the original
author and the program's original publication in the book Graphics Gems, be
retained in all programs that use these files.

Each Gem is made available on an as-is basis; although considerable effort has
been expended to check the programs as originally designed and their current
release in electronic form, the authors and the publisher make no guarantees
about the correctness of any of these programs or algorithms.

All source files in the book are now available via anonymous ftp from site
'weedeater.math.yale.edu'.  To download the files, connect to this site with
an ftp program.  For user name type the word 'anonymous'; for password enter
your last name.  When you are logged in, type 'cd pub/GraphicsGems/src'.  Each
program from the book is stored in its own plaintext file.  I suggest you
first download the file README (type 'get README', then quit ftp and open the
file with any text editor); among other things it describes how to download
the rest of the directory, identifies the administrator of the site (who will
collect bug reports, etc.), and provides a table of contents so you can
identify the source files with their corresponding Gems.

We have enjoyed putting this book together.  It was a pleasure for me to work
with the many talented people who contributed to the success of this project.
A central theme of the book's philosophy was for the results to be practical
and useful - public release of the source code is a happy result of this
philosophy, shared by the authors, editor, and publisher.

We all hope this free source code is a useful resource for programmers
everywhere.

--------

>From David Hook:

AARnet/ACSnet sites can now obtain GraphicsGems source code from
gondwana.ecr.mu.oz.au (128.250.1.63) via anonymous ftp in pub/GraphicsGems, or
through fetchfile (for a general info file do a "fetchfile
-dgondwana.ecr.mu.oz pub/GraphicsGems/GEMS.INFO", and for a general listing
"fetchfile -dgondwana.ecr.mu.oz -LV pub").

Please note this is a clone site of the GraphicsGems code at
weedeater.math.yale.edu, and bug reports, etc. should still be forwarded to
the administrators there.  Their addresses are listed in the GEMS.INFO and
README files.

--------

[I felt the following was a good way to summarize much of what "Graphics Gems"
has in it.  It's excellent, highly recommended (no, I don't have anything in
it).  Andrew Woo's trick for a quick rejection test on polygons gave me a few
percent speedup on intersecting these, for example.  Oddly enough, I had tried
his idea earlier (Kells Elmquist independently discovered it here), but didn't
get much speedup.  Seeing it in print made me try it again, but this time on a
variety of databases.  This time it gave some speed-up - the first time I
tried it on a database particularly ill-suited for performance improvement!
Finding out that someone else had used it successfully encouraged me to
explore further.  What's his trick?  You'll have to see the book...  - EAH]

The table below gives the correspondence between each source
file in this directory and the name of the Gem it implements.
Each implementation illustrates one way to realize the
techniques described by the accompanying Gem in the book.
The files here contain only the source code for that
realization.  For a more complete description of the
algorithms and their applications see the Gems of the same
name in the first 11 Chapters of the book.

---------- header files ----------
GraphicsGems.h	       / Graphics Gems C Header File

----------    C code    ----------
2DClip		       / Two-Dimensional Clipping:
			 A Vector-Based Approach
AALines                / Rendering Anti-Aliased Lines
AAPolyScan.c           / Fast Anti-Aliasing Polygon
			 Scan Conversion
Albers.c               / Albers Equal-Area Conic Map
			 Projection
BinRec.c               / Recording Animation in Binary Order
			 For Progressive Temporal Refinement
BoundSphere.c          / An Efficient Bounding Sphere
BoxSphere.c            / A Simple Method for Box-Sphere
			 Intersection Checking
CircleRect.c           / Fast Circle-Rectangle Intersection
			 Checking
ConcaveScan.c          / Concave Polygon Scan Conversion
DigitalLine.c          / Digital Line Drawing
Dissolve.c 	       / A Digital "Dissolve" Effect
DoubleLine.c           / Symmetric Double Step Line Algorithm
FastJitter.c           / Efficient Generation of Sampling
			 Jitter Using Look-up Tables
FitCurves.c            / An Algorithm for Automatically
			 Fitting Digitized Curves
FixedTrig.c            / Fixed-Point Trigonometry with
			 CORDIC Iterations
Forms.c                / Forms, Vectors, and Transforms
GGVecLib.c             / 2D And 3D Vector C Library
HSLtoRGB.c             / A Fast HSL-to-RGB Transform
Hash3D.c               / 3D Grid Hashing Function
HypotApprox.c          / A Fast Approximation to
			 the Hypotenuse
Interleave.c           / Bit Interleaving for Quad-
			 or Octrees
Label.c                / Nice Numbers for Graph Labels
LineEdge.c             / Fast Line-Edge Intersections On
			 A Uniform Grid
MatrixInvert.c         / Matrix Inversion
MatrixOrtho.c          / Matrix Orthogonalization
MatrixPost.c           / Efficient Post-Concatenation of
			 Transformation Matrices
Median.c               / Median Finding on a 3x3 Grid
NearestPoint.c         / Solving the
			 Nearest-Point-On-Curve Problem
			 and
			 A Bezier Curve-Based Root-Finder
OrderDither.c          / Ordered Dithering
PixelInteger.c         / Proper Treatment of Pixels
			 As Integers
PntOnLine.c            / A Fast 2D Point-On-Line Test
PolyScan               / Generic Convex Polygon
			 Scan Conversion and Clipping
Quaternions.c          / Using Quaternions for Coding
			 3D Transformations
RGBTo4Bits.c           / Mapping RGB Triples Onto Four Bits
RayBox.c               / Fast Ray-Box Intersection
RayPolygon.c           / An Efficient Ray-Polygon
			 Intersection
Roots3And4.c           / Cubic and Quartic Roots
SeedFill.c             / A Seed Fill Algorithm
SquareRoot.c           / A High-Speed, Low-Precision
			 Square Root
Sturm                  / Using Sturm Sequences to Bracket
			 Real Roots of Polynomial Equations
TransBox.c             / Transforming Axis-Aligned
			 Bounding Boxes
TriPoints.c            / Generating Random Points
			 In Triangles
ViewTrans.c            / 3D Viewing and Rotation Using
			 Orthonormal Bases

-------------------------------------------------------------------------------

Graphics Gems Volume 2 CFP, by Sari Kalin

This is a quick note to let everyone know that Academic Press will be
publishing a sequel to the book GRAPHICS GEMS, edited by Andrew Glassner.
Since Andrew decided to take a breather from editing books, I'll be doing the
honors this time around.  So, if you're interested in contributing a clever
graphics algorithm or insight (i.e.  a "Gem") to the new volume, contact Sari
Kalin (the editor at Academic Press) and she'll send you an author's packet.
Here's the address:

    Sari Kalin
    Academic Press
    955 Massachusetts Ave.
    Cambridge, MA  02139

    tel:   (617) 876-3901
    email: cdp!skalin@labrea.stanford.edu

AP wants to get the book out by SIGGRAPH '91, so that means the schedule is
tight.  The submission deadline for first drafts is November 1, 1990, so don't
delay!  Also, I'm sure AP would appreciate hearing any comments you might have
about the first volume.

-------------------------------------------------------------------------------

Foley, Van Dam, Feiner and Hughes "Computer Graphics" Bug Reports,
	by Steve Feiner

Sender: feiner@cs.columbia.edu (Steven Feiner)
Organization: Columbia University Dept. of CS

We're in the process of setting up a separate email account to make it easy to
report bugs, suggest changes, and obtain a copy of the bug list for Computer
Graphics:  Principles and Practice, 2nd Ed. by Foley, van Dam, Feiner, and
Hughes.  Since Dave Sklar, the person who is setting up the account, is also
busting to get the versions of the book's SRGP and SPHIGS graphics packages
ready for their SIGGRAPH demos, the email account won't be available until
after SIGGRAPH.  We'll post details as soon as it's up.

Meanwhile, here are fixes for some dangling references and a missing exercise,
all of which had been devoured by a hungry gremlin:

1) Dangling references:

FIUM89 Fiume, E.L. The Mathematical Structure of Raster Graphics, Academic
Press, San Diego, 1989.

FOUR88 Fournier, A. and D. Fussell,
``On the Power of the Frame Buffer,'' ACM TOG, 7(2), April 1988,
103-128.

HUBS82 Hubschman, H. and S.W. Zucker, ``Frame-To-Frame Coherence and the Hidden
Surface Computation: Constraints for a Convex World,'' ACM TOG, 1(2),
April 1982, 129-162.  [The bibliography includes HUBS81, the SIGGRAPH 81 paper
on which HUBS82 was based.]

SNYD87 Snyder, J.M. and A.H. Barr, ``Ray Tracing Complex Models Containing
Surface Tessellations,'' SIGGRAPH 87, 119-128.

TAMM82 Tamminen, M. and R. Sulonen.  ``The EXCELL Method for Efficient
Geometric Access to Data,'' in Proc. 19th ACM IEEE Design Automation
Conf., Las Vegas, June 14-16, 1982, 345-351.

2) A missing exercise:

15.25 If you have implemented the z-buffer algorithm, then add hit detection
to it by extending the pick-window approach described in Section 7.12.2 to
take visible-surface determination into account.  You will need a SetPickMode
procedure that is passed a mode flag, indicating whether objects are to be
drawn (drawing mode) or instead tested for hits (pick mode).  A SetPickWindow
procedure will let the user set a rectangular pick window.  The z-buffer must
already have been filled (by drawing all objects) for pick mode to work.  When
in pick mode, neither the frame-buffer nor the z-buffer is updated, but the
z-value of each of the primitive's pixels that falls inside the pick window is
compared with the corresponding value in the z-buffer.  If the new value would
have caused the primitive to be drawn in drawing mode, then a flag is set.
The flag can be inquired by calling InquirePick, which then resets the flag.
If InquirePick is called after each primitive's routine is called in pick
mode, picking can be done on a per-primitive basis.  Show how you can use
InquirePick to determine which object is actually visible at a pixel.

Steve Feiner
feiner@cs.columbia.edu
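The pick-mode mechanics described in exercise 15.25 can be sketched in C at
the level of a single pixel write.  This is an illustrative sketch, not the
book's code; the SetPickMode, SetPickWindow, and InquirePick names come from
the exercise, everything else (buffer sizes, WritePixel) is invented here.

```c
#include <assert.h>

#define W 8
#define H 8

static float zbuf[H][W];               /* depth buffer, smaller z = nearer */
static int   fbuf[H][W];               /* frame buffer (color indices)     */
static int   pick_mode = 0;
static int   px0, py0, px1, py1;       /* pick window, inclusive bounds    */
static int   pick_hit = 0;

void InitBuffers(void)
{
    int x, y;
    for (y = 0; y < H; y++)
        for (x = 0; x < W; x++) { zbuf[y][x] = 1e30f; fbuf[y][x] = 0; }
}

void SetPickMode(int on) { pick_mode = on; }
void SetPickWindow(int x0, int y0, int x1, int y1)
{ px0 = x0; py0 = y0; px1 = x1; py1 = y1; }

/* Returns the hit flag and resets it, as the exercise specifies. */
int InquirePick(void) { int h = pick_hit; pick_hit = 0; return h; }

/* Write one pixel of a primitive at depth z. */
void WritePixel(int x, int y, float z, int color)
{
    if (pick_mode) {
        /* Pick mode: neither buffer is updated.  The z-buffer has already
           been filled by drawing everything, so the visible primitive's own
           depth is what is stored there; test with <= so that resubmitting
           the visible primitive still registers as a hit. */
        if (x >= px0 && x <= px1 && y >= py0 && y <= py1 && z <= zbuf[y][x])
            pick_hit = 1;
        return;
    }
    if (z < zbuf[y][x]) {              /* ordinary z-buffered draw */
        zbuf[y][x] = z;
        fbuf[y][x] = color;
    }
}
```

To pick on a per-primitive basis, draw the whole scene, switch to pick mode,
resubmit each primitive, and call InquirePick after each one; the primitive
that reports a hit is the one visible in the pick window.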

-------------------------------------------------------------------------------

Radiosity via Ray Tracing, by Pete Shirley (shirley@iuvax.cs.indiana.edu)
from: comp.graphics


kaufman@delta.eecs.nwu.edu (Michael L. Kaufman) writes:

>This may be a stupid question, but why can't radiosity be handled by ray
>tracers?  Also, are there any archives that contain code/papers on radiosity
>that I can learn from?
>Michael

Ray Tracers can indeed do radiosity.  Check out the Sillion and Puech paper
or the Wallace et al. paper in Siggraph '89.  Also the Airey et al. paper
in proceedings of the symposium of Interactive 3D graphics (Computer
Graphics 24 (2)), and my Graphics Interface 90 paper.  Also Heckbert's
Siggraph 90 paper.

If all you want is a brute force radiosity code (and this code works pretty
well), then start with N polygons.  Each polygon will have reflectivity Ri
and will emit power Ei.   Assume you want to send R rays on the first pass.
We will now do what amounts to physical simulation:

T = 0		   // T is total emitted power
For i = 1 to N
	Ui = Ei    // Ui is unsent power
	Pi = Ei    // Pi is total power estimate
	T = T + Ei

dP = T / R	   // dP is the approximate power carried by each ray
For b = 1 to NumReflections
   For i = 1 to N
	r = int(Ui / dP)  // patch i will send power Ui in r rays
	dPi = Ui / r
	Ui = 0
	for j = 1 to r
		choose random direction with cosine weighting and
		send ray until it hits polygon p (see Ward et al.,
		Siggraph 88 paper, equation 2, p. 87)

		Up = Up + Rp * dPi
		Pp = Pp + Rp * dPi


// Once this is done, we will have the power Pi coming from each surface,
// Now we should convert to radiance (see my GI 90 paper or Kajiya's
// Course notes in Siggraph 90 Advanced RT notes)

For i = 1 to N
	Li = Pi / (pi * Ai)  // Li is radiance, pi is 3.14..., Ai is area


This ignores interpolation to polygon vertices to give a smooth image
(see Cohen & Greenberg siggraph 85 figure 8A).
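The power bookkeeping in the loop above can be written out in C separately
from the geometry.  In this sketch trace_ray() is a hypothetical stand-in for
the actual cosine-weighted ray cast, wired up as a toy two-patch scene in
which every ray leaving one patch lands on the other; the patch count,
reflectivities, and emitted powers are made up for illustration.

```c
#include <assert.h>
#include <math.h>

#define PI 3.14159265358979323846
#define NPATCH 2

static double refl[NPATCH] = { 0.5, 0.5 };   /* Ri: reflectivities     */
static double emit[NPATCH] = { 100.0, 0.0 }; /* Ei: emitted power      */
static double area[NPATCH] = { 1.0, 1.0 };   /* Ai: patch areas        */

/* Toy scene: two facing patches; every ray from one hits the other.
   A real implementation would cast a cosine-weighted ray here. */
static int trace_ray(int from) { return 1 - from; }

/* Fills L[] with the radiance Li of each patch after `bounces' shooting
   passes, using R rays on the first pass, following the pseudocode. */
void shoot(int bounces, int R, double L[NPATCH])
{
    double U[NPATCH], P[NPATCH], T = 0.0;
    int i, b, j;

    for (i = 0; i < NPATCH; i++) {
        U[i] = P[i] = emit[i];         /* unsent and total power start at Ei */
        T += emit[i];
    }
    double dP = T / R;                 /* approximate power per ray */

    for (b = 0; b < bounces; b++)
        for (i = 0; i < NPATCH; i++) {
            int r = (int)(U[i] / dP);  /* rays patch i will send this pass */
            if (r == 0) continue;
            double dPi = U[i] / r;
            U[i] = 0.0;
            for (j = 0; j < r; j++) {
                int p = trace_ray(i);  /* patch receiving this ray */
                if (p < 0) continue;   /* ray escaped the scene */
                U[p] += refl[p] * dPi; /* reflected power, still unsent */
                P[p] += refl[p] * dPi; /* accumulate total power */
            }
        }

    for (i = 0; i < NPATCH; i++)
        L[i] = P[i] / (PI * area[i]);  /* convert power to radiance */
}
```

With both reflectivities at 0.5, the total power on each patch should
approach the geometric series limits Ei/(1 - 0.25): about 133.3 on the
emitter and 66.7 on its neighbor, up to ray-quantization error.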


You might also want to check out Ward et al.'s Siggraph 88 paper.  Not true
radiosity, but is ray tracing based and has some of the best features
of ray tracing methods.  If your scene is all diffuse, that method may be
the way to go.

-------------------------------------------------------------------------------

Algorithm Order Discussion, by Masataka Ohta, Pete Shirley


In article <1150@mti.mti.com> adrian@mti.UUCP (Adrian McCarthy) writes:

>Rather than using recursive or hierarchical
>spatial subdivision techniques to reduce ray-object intersection tests
>(which are of O(log n) algorithms)

Can you prove it? I think (but can't prove) hierarchical spatial
subdivision is only O(n**(1/3)).

>many instances can use a surface map for
>a single bounding volume as a lookup table to immediately determine a small
>subset of objects to test (which is truly of O(1)). (Small subset here is
>roughly equivalent to the set of objects in the smallest volume in a
>comparable hierarchical scheme.)

I already published an O(1) ray tracing algorithm (See "An introduction
to RAY TRACING" edited by Andrew S. Glassner, Academic Press, 6.3 The Ray
Coherence Algorithm, page 232-234, or, "Computer Graphics 1987 (Proc.
of CG International '87)", page 303-314, Ray Coherence Theorem and
constant time ray tracing algorithm).

Judging from the above brief description of your algorithm, it may be
similar or identical to mine.

>It's *not* general,

Hmmm, mine is general.

--------

Kaplan also has claimed a constant time algorithm.  I don't believe that
his is constant time - it (like an octree) is a divide and conquer
search, so it will AT BEST be O(logN)  (it takes O(logN) time to travel
down the height of a tree).

I don't really see how we can even say what the big-O of a ray intersection
is without precisely stating what rays are allowed on what geometry.
For example, suppose I set up N epsilon radius spheres in random locations
within a cube, where epsilon is small.  In the typical case a ray might
come close to many spheres.  If an octree is used, the candidate sets
of many leaves might be checked (worse than O(logN)).  If all of the spheres
have the same center, how can any scheme get a candidate set for a ray
through the center that doesn't include all spheres?  This would be
O(NlogN) for an octree and O(N) optimally.  How would your method be O(1)?
It seems that often we have good algorithm behavior in practice with
pathological cases giving terrible big-Os.  Perhaps we can bound the set
of scenes somehow?

--------

Ohta replies:

>Kaplan also has claimed a constant time algorithm.  I don't believe that
>his is constant time--

I agree with you.  I know what O(1) is.

>I don't really see how we can even say what the big-O of a ray intersection
>is without precisely stating what rays are allowed on what geometry.

Well, it is assumed that ray-object intersection calculation takes only
constant time.  That is, to ray trace a surface defined by an N-th degree
polynomial in constant time regardless of N is just impossible.

>For example, suppose I set up N epsilon radius spheres in random locations
>within a cube, where epsilon is small.  In the typical case a ray might
>come close to many spheres.  If an octree is used, the candidate sets
>of many leaves might be checked (worse than O(logN)).  If all of the spheres
>have the same center, how can any scheme get a candidate set for a ray
>through the center that doesn't include all spheres?  This would be
>O(NlogN) for an octree and O(N) optimally.  How would your method be O(1)?

Good question.  But first, we should make it clear what constant time ray
tracing is.  As for the per-scene complexity, no algorithm (including your
first example) can be better than O(N), simply because of input complexity
(reading all the input takes O(N) time).  So we should talk about the
complexity of tracing a single ray.

My algorithm may consume possibly long and computationally complex
pre-processing time to analyze a scene.  But, the important fact is that the
time for pre-processing is dependent only on the scene and not on the number
of rays.

And, after pre-processing, every ray can be traced in constant time.

With your example of concentric spheres (assuming the epsilon radius differs
on each sphere; otherwise the problem reduces to the trivial one of tracing a
single sphere), each sphere should be further subdivided into smaller pieces
to obtain the constant-time property.  With such a finer subdivision, even the
octree method may be able to do better than O(NlogN).

>It seems that often we have good algorithm behavior in practice with
>pathological cases giving terrible big-Os.  Perhaps we can bound the set
>of scenes somehow?

It may be reasonable to define the complexity of a scene as the complexity of
the pre-processing required to obtain constant-time tracing.

--------

In a separate note, Ohta writes:

>Can you prove it? I think (but can't prove) hierarchical spatial
>subdivision is only O(n**(1/3)).

I have found it is obvious.  All spatial subdivision is no better than
O(n**(1/3)) if

	1) Objects are tiny
	2) Objects are uniformly scattered in space

1) means that the possibility of intersection is negligible.
2) means at least O(n**(1/3)) subspaces must be traversed.

-------------------------------------------------------------------------------

Point in Polygon, One More Time..., by Mark VandeWettering, Eric Haines,
	Edward John Kalenda, Richard Parent, Sam Uselton, "Zap" Andersson, and
	Bruce Holloway


Mark VandeWettering writes about the point in polygon test:

The solution proposed by rusmin@twinsun.com was based upon the Jordan Curve
theorem: any ray from inside a polygon crosses an odd number of edges on its
way to infinity.  He chose a random ray that began at the point in question,
and counted the number of intersections.  Problem: if you intersected a vertex
you were kind of clueless.  Solution: fire another ray.

You solve these problems by simplifying everything.  The ray you shoot should
go along the positive X axis.  Assume without loss of generality that your
point is the origin.  Now: if you are going to intersect a vertex, it's because
the y component of an edge endpoint is == 0.  Well, decide whether you want
to count this as positive or negative.  Assume positive (I always do).  It
turns out you get the right answer anyway.  For example

      origin	     ^
		     |
	o	     o			counts as one intersection
		     |
		     v

      origin	     ^ ^
		     |/
	o	     o			counts as zero or two intersections

      origin

	o	     o			counts as zero or two intersections
		     |\
		     v v


Voila!  That explains it!
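The rule Mark describes can be sketched in C (a hypothetical implementation,
not code from any of the authors in this thread; the test ray runs from the
point toward +X, and an endpoint counts as "above" the ray when y >= the test
point's y, so no vertex can ever lie exactly on the ray):

```c
/* Crossings test with the "points on the ray count as above" rule.
 * Returns 1 if (px,py) is inside the n-vertex polygon (xs[],ys[]),
 * 0 otherwise.  Function and parameter names are illustrative.
 */
int point_in_poly(double px, double py, int n, double *xs, double *ys)
{
    int i, j, inside = 0;

    for (i = 0, j = n - 1; i < n; j = i++) {
        /* The edge crosses the test ray's line only if its endpoints
         * straddle it; ">=" on both sides is the all-points-above
         * convention, so a vertex at y == py causes no special case. */
        if ((ys[i] >= py) != (ys[j] >= py)) {
            /* x coordinate where the edge meets the line y == py */
            double x = xs[j] +
                (py - ys[j]) * (xs[i] - xs[j]) / (ys[i] - ys[j]);
            if (x > px)
                inside = !inside;   /* crossing on the +X side */
        }
    }
    return inside;  /* odd number of crossings means inside */
}
```

Note that a point exactly on a horizontal edge is consistently assigned to the
polygon on one side of that edge, which is the mesh-consistency property this
thread keeps returning to.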


[and in a later note:]

Hardly my own idea.  It was shown to me by Eric Haines originally, but I don't
think he claims credit for it either.  Any takers?  Is it patented by Unisys
as well :-)?

--------

Eric Haines [the crazy fool!] replies:

	Anyway, having an ego and all that, I do indeed claim to be the
inventor of the "consider all points on the line to be above the line"
solution, which gets rid of those pesky special cases where vertices are on the
test ray.  I started with some very complicated special case code in 1986
which worried about whether the points were on the ray.  Months went by...
One day, looking at the code, I noticed another stupid special case that I
didn't handle correctly (something like "if the last point is on the ray and
the first point is on the ray...").  I looked at the problem again and
realized that the only property of the ray that I really cared about was that
it was a divider, not that it was a set of points forming a ray.  Eureka,
Arkansas!  Get rid of those points, and so define away the problem - no points
can lie on the ray, if we define the ray as having no points.  O happy day, it
even worked.  Anyway, it's all written up in my section of "An Introduction to
Ray Tracing", edited by Andrew Glassner, Academic Press.

	Historical note:  the earliest reference I have to the point in
polygon test is the classic "A Characterization of Ten Hidden-Surface
Algorithms" by Ivan Sutherland et al, 1974, ACM Computing Surveys v6 n1.  This
has both the ray crossing test and the sum of the angles test, plus the convex
polygon test (check if the point is inside all lines by substitution into each
edge's 2D line equation).

	Computational geometry fiends noted that the method has the problem of
being indeterminate on edges:  if your intersection point is exactly on an
edge, it's arbitrary whether the point is determined to be inside or outside
the polygon (that is, do you consider an edge crossing the point to be to the
right or left of the point, or do you want a third category, such as "on"?).

	However, there is a general consistency to the ray crossing test for
ray tracing.  If the point being tested is along the edge joining two visible
polygons (which both face the same general direction, i.e. not a silhouette
edge) and the polygons are projected consistently upon the plane perpendicular
to the ray, the point is guaranteed to be considered to be inside one and only
one of the polygons.  That is, no point will be considered within any two
non-overlapping polygons, nor will it "fall in the crack" between polygons.

	Think about it this way:  if points on the left edges of the polygon
are considered outside, then the points on the right edges will be considered
inside.  This is because we would then consider an edge crossing the x axis at
exactly 0 as being to the right of the test point.  Since one polygon's left
edge is another's right edge, it all balances out to make each point be inside
only one polygon in a polygon mesh.  Horizontal edges are treated the same
way:  if a point is actually on such an edge, when tested, the point will be
categorized as "below" the edge consistently.  This will lead it to be inside
one and only one polygon in a mesh.

	In reality, when ray tracing you very rarely get points that are
exactly on an edge, so this is something of a non-problem.  But if you really
really care about this possibility, make sure to always cast your polygons
onto the plane perpendicular to the ray (this plane is also good for
consistency if you want to get edges for Gouraud RGB interpolation, Phong
normal interpolation, etc).

	Finally, for you incredibly demonic CompGeom types: yes, indeed,
points on silhouette edges are still inconsistent.

P.S.  As our patent on the 90 degree angle was successful, the pending patent
on the Jordan Curve Theorem should come through any day now... ;-)

--------

Edward John Kalenda replies:

Sorry to be a poop Eric, but my boss at Calma, Steve Hoffman, told me
about the "points on the ray are above the ray" thing in 1981. He claimed
someone told HIM several years before.

I think it's one of those things that just needs to be attributed to the
anonymous coder.

--------

I reply:

	Oh, well, at least I can claim to be the first to publish!  Sadly
enough, the word still hasn't fully percolated out.  The latest Foley & Van
Dam (& Feiner & Hughes) says on page 34:  "Next, choose a ray that starts at
the test point and extends infinitely in any direction, and that does not pass
through any vertices".  Page 339 makes reference to "intersections at vertices"
being a "special case".  Passing through a vertex is still considered to be a
problem (even though it isn't).

	It's this kind of thing that makes me happy to see books like
"Graphics Gems" come out.  Letting the world know about the little tricks of
the trade saves us all a lot of time and replication of effort (I sure wish I
had known about the "above the ray" trick in 1986 - it would have saved me
hours of struggle).

--------

Richard Parent (parent@cis.ohio-state.edu) writes:

With respect to 'consider all the points on the line to be above the line',
both Steve Levine (Ph.D. from Stanford, I believe) and myself discussed
this as a solution we used in our respective problems; mine was implementing
Loutrel's hidden line routine and I imagine his had to do with the
hierarchical clipping he was doing.  This was at one of the Siggraph's in the
mid to late seventies, '76, if I had to guess.  It's been around for awhile.

--------

I reply:

	I guess I could compare myself to Leibniz vs. Newton (both
independently discovering calculus), but I'm probably more in the class of the
guy that discovers that Wintergreen Life Savers give off sparks when chewed in
the dark.

--------

Sam Uselton writes:

I've been off news for a few days and just saw your posting of sept 6.  Isn't
the "consider all points on the line to be above the line" the same as
"shortening an edge to prevent the vertex lying on the scan line" as suggested
in Foley & Van Dam (the old one) when describing polygon scan conversion?
That's not the first place I heard this idea either, it was just the easiest
reference to grab off the shelf.

I think this is one of those solutions that is just subtle enough that most
people don't think of it themselves and think it is neat when they see it but
just obvious enough that a few people every year re-invent it, and go showing
it around to others.  Just the kind of thing IMHO that makes Glassner's
collection and the RTNews such useful things, because it probably couldn't get
published "traditionally".

Trying VERY hard to pull up archival storage, I'm pretty sure I saw this first
while still at UTD in grad school, which means Henry Fuchs was probably
explaining it.  Whether he is one of the many originators or he got it from a
fellow Utah grad student, I couldn't guess.  (Oh, yeah.  Time?  1975-1979).

--------

I note:

For scan conversion it is still a bit tricky:  you have to think a bit about
where edges begin and end (you want edges with vertices exactly on the scan
line to be handled correctly, e.g. if the edge's maximum y ends on the scan
line, delete this edge just before checking scan line).  This kind of problem
goes away for ray tracing, since you don't have to worry about active edges,
etc.

--------

Sam replies:

After a weekend of R&R old memory is more easily accessible.  I seem to
associate my first connection with point in polygon/ ray intersection/ Jordan
curve theorem/etc etc to an informal seminar, led by Henry Fuchs and Zvi
Kedem, attended by Alain Fournier, Don Fussell, Jose Barros, myself, and I
think Bruce Naylor, in which we read and studied early Michael Shamos papers
(Shamos & Hoey, Bentley & Shamos...).  Must have been about 1978 or so.

--------

I reply:

	Thanks for the info.  What I find amazing about all this is that
somehow this was not mentioned in the ray tracing literature
before "An Intro to RT".  Until you (and others) pointed out that this problem
and solution is analogous to the scan-line problem, I honestly hadn't made the
connection!  Nor, it seems, had anyone else in print.  Sedgewick's
"Algorithms" talks about this problem but doesn't give a "points above the
ray" solution, Berlin in 1985 SIGGRAPH course notes gives a few solutions to
the problem, but says that test rays through vertices are a special case & are
very nasty.  Certainly Cornell's ray tracer didn't solve this problem.
Glassner used the sum of the angles method, so he didn't know about it, so UNC
didn't know about it.  Sounds like somewhere out west was the point of origin,
but it never made it east of the Mississippi.  Curious.

	Sutherland et al's 1974 paper "A Characterization of Ten Hidden
Surface Algorithms" mentions the problem as a problem ("care is required to
get consistent results").  Anyway, sounds like there probably wasn't a
solution before them (if anyone would know, I would think it would be them).
Interesting puzzle to try to figure out the identity of this anonymous
programmer.

--------

>From "Zap" Andersson:

As I think I posted earlier, this is somewhat similar to my own approach to
getting rid of special cases: Assume we are testing for a ray along the X
axis...  For all line segments, swap 'em so each one's "first" point is
upwards (has a lower Y coordinate).  Then, for each segment, test if the
point's Y > firstY.  If not, discard (this, in essence, is a version of Eric's
idea, only mine :-).  Then check if Y <= lastY.  If not, discard.  THE
IMPORTANT ISSUE HERE is that I actually CARE about the 'second' point of the
line, and include it in the check; that's MY way of getting rid of the problem
of a corner pointing downwards: it will yield 2 intersections, where Eric's
method yields none.  No big deal, really...  After this simple bounds test,
it's time for the dreary math of checking whether the intersection is on
POSITIVE X, i.e. if X > firstX && X > lastX, discard, and last but not least,
check the real intersection with some math (left as an exercise for the
reader :-).

OBTW, fr'got! FIRST, you check if (LastY - Y) * (FirstY - Y) is > 0. If so,
discard.

--------

Bruce Holloway posts:

In article <1990Sep11.163420.13592@odin.corp.sgi.com> robert@sgi.com writes:
>
>In article <KPS5FNC@xds1.ferranti.com>, peter@ficc.ferranti.com (Peter
>da Silva) writes:
>|> In article <33619@cup.portal.com> ekalenda@cup.portal.com (Edward
>John Kalenda) writes:
>|> > Sorry to be a poop Eric, but my boss at Calma, Steve Hoffman, told me
>|> > about the "points on the ray are above the ray" thing in 1981. He claimed
>|> > someone told HIM several years before.
>|> >
>|> > I think it's one of those things that just need to be attributed to the
>|> > anonymous coder.
>|>
>|> Another of those obvious techniques like the XOR-cursor. Good thing nobody
>|> patented *this* one...
>
>I didn't see any smiley's after this one Peter.  I'm sure that
>many readers don't realize that the XOR-cursor in hardware *IS* patented.
>
>... and that's not the only obvious technique that this particular
>company has a patent on.

This thread is a riot.  My old boss at Calma, Joe Sukonick, patented the
XOR-cursor technique based on work he did around 1974, when I went to work
there.  I remember meeting Jim Clark for the first time around 1979 in front
of Stacey's bookstore in Palo Alto before he became famous for the Geometry
Engine.  He was married to a friend of mine from Calma (whom we were all in
love with!) & said he didn't think Joe's patent was valid because it wasn't
original.  I agree with him.

One of the things the patent says is that XOR works because it is "idempotent".
Joe had a Ph.D. in math from MIT & liked to use words like that, but I thought
AND & OR were idempotent & XOR worked because it wasn't!  What you need is
any operation that can be undone, or inverted, like ADD, say.  (What is XOR
but an ADD without carry?)  Anyway, the patent is now owned by Cadtrak, a
corporate shell whose charter is to sue everyone under the sun over that
patent.  Last I heard, Western Digital was going to take them to court to
overturn it.

Now as to the "points on the ray are above the ray", this is really the same
as the idea of half-open intervals which has broad usage in computer graphics.
When you point-sample a polygon, this is how you make sure that rectangles
of area m*n hit exactly m*n pixels, even if they lie on your sample grid.
When I was just a kid at Calma, I wrote two programs that used that idea.
One was a program to cross-hatch concave (and convex) polygons for making
plots of overlapping layers on ICs.  Another was a program which intersected
arbitrary closed polygons with each other.  This was an application program
written in Calma's GPL which was used to calculate the capacitance between
layers of a chip, by finding the areas of all the overlaps.  Both of these
algorithms needed to use the half-open idea to take care of the corner cases.
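The half-open rule Holloway mentions can be shown in a few lines of C (my own
illustration, not Calma code; names are made up).  A pixel sample at integer
(i,j) belongs to an axis-aligned rectangle [x0,x1) x [y0,y1): low edges are
included, high edges are not, so two abutting rectangles never both claim a
sample on their shared edge, and a w x h rectangle on the grid claims exactly
w*h samples:

```c
/* Half-open interval membership: low edges in, high edges out. */
int sample_in_rect(int i, int j, int x0, int y0, int x1, int y1)
{
    return i >= x0 && i < x1 && j >= y0 && j < y1;
}

/* Count samples of the grid [0,gw) x [0,gh) claimed by a rectangle. */
int samples_claimed(int gw, int gh, int x0, int y0, int x1, int y1)
{
    int i, j, count = 0;

    for (j = 0; j < gh; j++)
        for (i = 0; i < gw; i++)
            count += sample_in_rect(i, j, x0, y0, x1, y1);
    return count;
}
```

This is the same "which side of the boundary owns the boundary" decision as
the "points on the ray are above the ray" trick, applied to sampling instead
of ray crossing.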

I left Calma in 1976, too inexperienced to realize what a singular place it
had been.  After the ownership changed, most everyone scattered to the four
corners of the globe.  Later, I actually had a contract job working for
Cal & Irma Hefte, the founders of Calma.  Cal passed away some years ago--
he seemed to me to be a lot like Willy Loman in "Death of a Salesman".
He told me "people are the most complicated & interesting things in the
world".  Mrs. Hefte and her daughters have a flower shop on Blossom Hill Rd.
A genuine Silicon Valley story!  (There's a lot more.)

-------------------------------------------------------------------------------

Quadrant Counting Polygon In/Out Testing, by Craig Kolb, Ken McElvain
from: comp.graphics

[After all that, I thought it was worth posting some real live code, though it
is for a quadrant counting method that is almost the same as the point in
polygon ray test.  The "infinite ray" test is really counting quadrants (in
"An Intro to RT" I show how you can get the winding number using it), but does
not bother to evaluate the exact quadrant unless need be.  For example, if
both endpoints of an edge are in the +Y space, we have no need to evaluate
whether we're in +X or -X.  -EAH]

In article <1990Mar29.171010.28348@agate.berkeley.edu> raymond@sunkist.UUCP
(Raymond Chen) writes:

>Nobody yet has mentioned my favorite method for point-in-polygon
>detection:  Using the winding number.

[...]

>/* Point in polygon detection.  Original author unknown.
> * Except where noted, I have NOT edited the code in any way.
> * I didn't even fix the misspellings.  (And no, I don't code
> * in this style.)
> */

[...]

>	return(quad);	/**rjc**/ /* for some reason, the original
>				     author left out this crucial line! */
>}
>--
>A remark:  The whichquad function is not perfect.  It doesn't handle
>the boundary cases right (but as I said, that's only a problem
>if the point lies on the polygon).

(After a long-overdue search through my archives...)

The code was posted to comp.graphics in February of '88 by Ken McElvain
(decwrl!sci!kenm).  Regarding the typos and the boundary cases, Ken wrote:

> I pulled this out of some other code and hopefully didn't break it in
> doing so.  There is some ambiguity in this version as to whether a point
> lying on the polygon is inside or out.  This can be fairly easily
> detected though, so you can do what you want in that case.

I once did a number of tests on Ken's winding number code and an implementation
of the 'ray' algorithm, using a ray tracer (rayshade).  For the test scenes
I rendered, I found that the winding number method was approximately 10% faster
than the ray method.  Your mileage may vary.

[It surprises me that this algorithm is ever faster:  the infinite ray
solution is essentially the same idea as the quadrant idea, except that it
avoids x compares when the signs of the y components are the same.  Maybe I'll
hack rayshade & prove it someday...  -- EAH]

From: kenm@sci.UUCP (Ken McElvain)
Organization: Silicon Compilers Systems Corp. San Jose, Ca

In article <5305@spool.cs.wisc.edu>, planting@speedy.cs.wisc.edu (W. Harry Plantinga) writes:
> In article <7626@pur-ee.UUCP> beatyr@pur-ee.UUCP (Robert Beaty) writes:
> >In article <20533@amdcad.AMD.COM> msprick@amdcad.UUCP (Rick Dorris) writes:
> >>In article <971@ut-emx.UUCP> jezebel@ut-emx.UUCP (Jim @ MAC) writes:
> >>>Have a nice one here:
> >>>Have a boundary defined on the screen. Boundary is composed of points
> >>>joined by lines... Now some random points are generated and I want to check
> >>>if a given point is within or outside the existing boundary.. Any algorithm for
> >> <rick suggests the infinite ray/intersection counting algorithm>
> >
> >I have seen this type of algorithm before and the one I thought up
> >in an afternoon is FAR superior and vastly simpler to code up.
> >
> ><robert beaty suggests an algorithm that sums the angles of
> >the lines connecting the points and the test point>
>
> Haven't I seen this discussion of point-in-polygon algorithms about 15
> times before? :-)
>
> Robert, your algorithm is simpler only in the kind of problems it
> handles.  It will only work for convex polygons, whereas the other
> works for any polygons.  It's not even much simpler; measuring angles
> and determining whether line segments intersect are of comparable
> difficulty.
>
> Maybe you should have said that although your algorithm is far
> inferior, it's not any easier to code?


Robert's suggestion is a good one.  The sum of the angles about the
test point is known as the winding number.  For non-intersecting
polygons, if the winding number is non-zero then the test point is
inside the polygon.  It works just fine for convex and concave
polygons.  Self-intersecting polygons give reasonable results too.
A figure 8 will give a negative winding number for a test point
in one of the loops and a positive winding number for the other loop,
with all points outside having a winding number of 0.  Another advantage
of the winding number calculation is that it does not suffer from
the confusion of the infinite ray algorithm when the ray crosses a vertex
or is colinear with an edge.

Here is my version of a point in poly routine using a quadrant
granularity winding number.  The basic idea is to total the angle
changes for a wiper arm with its origin at the point and whose end
follows the polygon points.  If the angle change is 0 then you are
outside, otherwise you are in some sense inside.  It is not necessary to
compute exact angles, resolution to a quadrant is sufficient, greatly
simplifying the calculations.

I pulled this out of some other code and hopefully didn't break it in
doing so.  There is some ambiguity in this version as to whether a point
lying on the polygon is inside or out.  This can be fairly easily
detected though, so you can do what you want in that case.


--------
/*
 * Quadrants:
 *    1 | 0
 *    -----
 *    2 | 3
 */

typedef struct  {
	int x,y;
} point;

pointinpoly(pt,cnt,polypts)
point pt;       /* point to check */
int cnt;	/* number of points in poly */
point *polypts; /* array of points, */
		/* last edge from polypts[cnt-1] to polypts[0] */
{
	int oldquad,newquad;
	point thispt,lastpt;
	int i,a,b;
	int wind;       /* current winding number */

	wind = 0;
	lastpt = polypts[cnt-1];
	oldquad = whichquad(lastpt,pt); /* get starting angle */
	for(i=0;i<cnt;i++) { /* for each point in the polygon */
		thispt = polypts[i];
		newquad = whichquad(thispt,pt);
		if(oldquad!=newquad) { /* adjust wind */
			/*
			 * use mod 4 comparisons to see if we have
			 * advanced or backed up one quadrant
			 */
			if(((oldquad+1)&3)==newquad) wind++;
			else if(((newquad+1)&3)==oldquad) wind--;
			else {
				/*
				 * upper left to lower right, or
			 * upper right to lower left.  Determine
			 * direction of winding by intersection
			 * with the vertical line x == pt.x.
				 */
				a = lastpt.y - thispt.y;
				a *= (pt.x - lastpt.x);
				b = lastpt.x - thispt.x;
				a += lastpt.y * b;
				b *= pt.y;

				if(a > b) wind += 2;
				else wind -= 2;
			}
		}
		lastpt = thispt;
	}
	return(wind); /* non zero means point in poly */
}

/*
 * Figure out which quadrant pt is in with respect to orig
 */
int whichquad(pt,orig)
point pt;
point orig;
{
	int quad;

	if(pt.x < orig.x) {
		if(pt.y < orig.y) quad = 2;
		else quad = 1;
	} else {
		if(pt.y < orig.y) quad = 3;
		else quad = 0;
	}
	return ( quad ) ;	/* [I added the fix - EAH] */
}


Ken McElvain
decwrl!sci!kenm

-------------------------------------------------------------------------------

Computer Graphics Journals, by Juhana Kouhia (jk87377@tut.fi)

Here is a list, not as good as it could be.  I would like to see a short
description of each journal on it, and I would like to see more information
about geometry-related journals or journals that cover this newsgroup's
subjects.  The end of this list has journals which were suggested to me, but
for which I didn't find enough information (or was too lazy to find any).  If
somebody has suggestions about what to do with them, please let me know.

Additions, corrections, etc. to the list are welcome.  Thank you to all who
helped me.


Computer Graphics, Geometry, Image Processing Journals
======================================================

ACM Transactions On Graphics
----------------------------
   -Quarterly
   -Available from:
	Association For Computing Machinery
	11 West 42 Street
	New York, NY 10036
	USA

Computer Aided Design
---------------------
   -10 issues / year
   -Available from:
	Butterworth Scientific Ltd.
	88 Kingsway
	London WC2 6AB
	England

Computer Graphics
-----------------
   -Includes SIGGRAPH conference proceedings
   -Includes SIGGRAPH panel proceedings
   -Available from:
	Association For Computing Machinery
	11 West 42 Street
	New York, NY 10036
	USA
   -Also available from:
	Academic Press

Computer Graphics Forum
-----------------------
   -Journal of the European Association of Computer Graphics
    (EuroGraphics)
   -Available from:
	Elsevier Science Publishers B.V.
	Journals Department
	P.O. Box 211
	1000 AE Amsterdam
	The Netherlands

Computer Graphics World
-----------------------
   -Professional graphical application and industrial use
   -Monthly
   -Available from:
	PennWell Publishing Company
	119 Russell Street
	P.O. Box 457
	Littleton, Massachusetts 01460-0457
	USA

Computer Images International
-----------------------------
   -Professional graphical application and industrial use
   -Monthly
   -Available from:
	P.O. Box 109
	Maclaren House
	Scarbrook Road
	Croydon CR9 1QH
	England
	Phone: 01-688 7788
	Telex: 946665
	Fax:   01-681 1672

Computers & Graphics
--------------------
   -Available from:
	Oxword Books (GB)

Computer Vision, Graphics and Image Processing
----------------------------------------------
   -Available from:
	Academic Press
---> [split 1991]

Computer Vision, Graphics and Image Processing:
 Graphical Models and Image Processing
-----------------------------------------------
   -Available from:
	Academic Press

Computer Vision, Graphics and Image Processing:
 Image Understanding
-----------------------------------------------
   -Available from:
	Academic Press

Graphics Interface
------------------
   -Proceedings of the Canadian computer graphics conference
   -Available in Canada from:
	Canadian Information Processing Society
	243 College Street, 5th Floor
	Toronto, Ontario
	M5T 2Y1
	(416)-593-4040
   -Available in the USA from:
	Morgan Kaufmann Publishers
	Order Fulfillment Center
	PO Box 50490
	Palo Alto, CA 94303
	USA
	(415)-965-4081

IEEE Computer Graphics & Applications
-------------------------------------
   -Monthly
   -Available from:
	IEEE Computer Society
	Circulation Department
	P.O. Box 3014
	Los Alamitos, CA 90720-9804
	USA

Image and Vision Computing
--------------------------
   -Image analysis
   -Available from:
	Butterworth Scientific Ltd.
	88 Kingsway
	London WC2 6AB
	England

Pixel
-----
   -Available from:
	Pixel Communications, Inc.
	245 Henry St., Suite 2-G
	Brooklyn, NY 11201
	USA
	Phone: (718) 624-3386

Signal Processing: Image Communication
--------------------------------------
   -A publication of the European Association for Signal Processing
    (EURASIP)
	EURASIP
	P.O. Box 134
	CH-1000 Lausanne 13
	Switzerland
   -Also available from:
	Elsevier Science Publishers B.V.
	Journals Department
	P.O. Box 211
	1000 AE Amsterdam
	The Netherlands

Visual Computer
---------------
   -Available from:
	Springer-Verlag
	Heidelberger Platz 3
	D-1000 Berlin 33
	Germany
   -Also available from:
	Springer-Verlag New York, Inc.
	Journal Fulfillment Services, Dept. J
	P.O. Box 2485
	Secaucus, N.J. 07094
	USA

******************************************************************************

Miscellaneous Journals
======================

Leonardo
--------
   -Journal of the International Society for the Arts, Sciences &
    Technology
   -SIGGRAPH art show catalogue
   -Available from:
	Pergamon

******************************************************************************

What?
=====

IEEE Trans. on Pattern Analysis and Machine Intelligence,

IEEE Trans. on Systems, Man, and Cybernetics

Discrete and Computational Geometry

Machine Vision and Applications

Pattern Recognition
   -Available from:
	Pergamon

Pattern Recognition Letters
   -Available from:
	North-Holland

Symposium on Computational Geometry

Symposium on Foundations of Computer Science
   -One or two papers a year on geometry

Symposium on Theory of Computing

-------------------------------------------------------------------------------
END OF RTNEWS


From craig@weedeater.math.yale.edu Fri Mar  1 15:05:07 1991
Return-Path: <craig@weedeater.math.yale.edu>
Received: from turing.cs.rpi.edu (cs.rpi.edu) by iear.arts.rpi.edu (3.2/HUB10);
	id AA03510; Fri, 1 Mar 91 15:04:24 EST for kyriazis
Received: from weedeater.math.yale.edu by turing.cs.rpi.edu (4.0/1.2-RPI-CS-Dept)
	id AA18717; Fri, 1 Mar 91 15:04:23 EST
Received: by weedeater.math.yale.edu; Fri, 1 Mar 91 13:55:34 -0500
Date: Fri, 1 Mar 91 13:55:34 -0500
From: Craig Kolb <craig@weedeater.math.yale.edu>
Message-Id: <9103011855.AA04917@weedeater.math.yale.edu>
To: mja@sierra.llnl.gov, barr@csvax.cs.caltech.edu, barsky@miro.berkeley.edu,
        carl@mssun7.msi.cornell.edu, daniel@apollo.com,
        rgb@caen.engin.umich.edu, wim@duticg.tudelft.nl,
        raytrace@cpsc.ucalgary.ca, atc@cs.utexas.edu, markc@emx.utexas.edu,
        chapman%fornax.UUCP@MATH.YALE.EDU, ckchee@dgp.toronto.edu,
        chet@cis.ohio-state.edu, mcohen@cs.utah.edu,
        rt-colorado@anchor.colorado.edu, ray-tracing-news@graphics.cornell.edu,
        cychosz@ecn.purdue.edu, raycasting@duke.cs.duke.edu,
        jaf@graphics.cornell.edu, FISHER%3D.dec@decwrl.dec.com,
        flynn@cse.nd.edu, johnf@apollo.com, glassner.pa@xerox.com,
        rrg@acf8.nyu.edu, jeff@hamlet.caltech.edu, jakob@humus.huji.ac.il,
        sbgowing@ultima.cs.uts.oz.au, grant@delvalle.llnl.gov,
        green@compsci.bristol.ac.uk, paul@sgi.com, kory%avatar@quad.com,
        hanrahan@princeton.edu, ph@miro.berkeley.edu, hench@cs.unc.edu,
        raytrace@hpgtdadm.fc.hp.com, hohmeyer@miro.berkeley.edu,
        dgh@munnari.oz.au, hultquis@prandtl.nas.nasa.gov,
        fwj@duticg.tudelft.nl, joy@ucdavis.edu, mrk%dana@hplabs.com,
        kardan%vedge@larry.mcrcim.mcgill.edu, tim@csvax.cs.caltech.edu,
        dk@csvax.cs.caltech.edu, koz%hpurvmc%hpfcse@hpfcla.com,
        kuchkuda%megatek@ucsd.edu, kyriazis@turing.cs.rpi.edu,
        esl0422@ultb.isc.rit.edu, zmel02@image.trc.amoco.com
Subject: Ray Tracing News, Volume 4, Number 1
Reply-To: erich@eye.com
Status: OR

 _ __                 ______                         _ __
' )  )                  /                           ' )  )
 /--' __.  __  ,     --/ __  __.  _. o ____  _,      /  / _  , , , _
/  \_(_/|_/ (_/_    (_/ / (_(_/|_(__<_/ / <_(_)_    /  (_</_(_(_/_/_)_
	     /                               /|
	    '                               |/

			"Light Makes Right"

			   March 1, 1991
			Volume 4, Number 1

Compiled by Eric Haines, 3D/Eye Inc, 2359 Triphammer Rd, Ithaca, NY 14850
    erich@eye.com or uupsi!eye!erich
All contents are US copyright (c) 1991 by the individual authors
Archive locations: anonymous FTP at weedeater.math.yale.edu [130.132.23.17],
	/pub/RTNews, and others.
UUCP archive access: write Kory Hamzeh (quad.com!avatar!kory) for info.

Contents:
    Introduction
    New People, Address Changes, etc
    Ray Tracing Abstract Collection, by Tom Wilson
    Report on Lausanne Hardware Workshop, by Erik Jansen
    New Version of SPD Now Available, by Eric Haines
    Teapot Timings, by John Spackman
    The Very First Point in Polygon Publication, by Chris Schoeneman
    The Acne Problem, by Christophe Schlick
    Shirley Thesis Available via Anonymous FTP, by Pete Shirley
    Some Thoughts On Anti-aliasing, by Zap Anderssen, Eric Haines
    New VORT Release, and the VORT Chess Set, by Eric H. Echidna, David Hook
    Best (or at least Better) of RT News, by Tom Wilson
    At Long Last, Rayshade v4.0 is Available for Beta-testing, by Craig Kolb
    Parallel Ray Tracer, by Kory Hamzeh, Mike Muuss, Richard Webb
    Distributed DKB, by Jonas Yngvesson
    Quadrangle Tessellation, by Christophe Schlick
    New Release of Radiance Software & Bug Fixes, by Greg Ward
    Radiance Timings on a Stardent, by Eric Ost
    RTSYSTEM Fast Ray Tracer, by Patrick Aalto
    DKBTrace Beta Available, by David Buck
    "Computer Graphics" 2nd Edition Corrections, Software, etc by Brown Emailer
    Papers which are currently available from the Net via anon. FTP, J. Kouhia
    ======== USENET cullings follow ========
    Ray-Cylinder Intersection Tutorial, by Mark VandeWettering
    C++ Fractal and Ray Tracing Book, by Fractalman
    Ray/Spline Intersection Reference, by Spencer Thomas
    Superquadric Intersection, by Rick Speer, Michael B. Carter
    Comments on Interval Arithmetic, by Mark VandeWettering, Don Mitchell
    Platonic Solids, by Alan Paeth
    Moon Problems, by Ken Musgrave, Pete Shirley
    Shadow Testing Simplification, by Tom Wilson
    SIMD Parallel Ray Tracing, by George Kyriazis, Rick Speer
    Rayscene Animator, by Jari Kähkönen
    SIPP 2.0 3d Rendering Package, by Jonas Yngvesson and Inge Wallin
    3DDDA Comments, by John Spackman
    Radiosity Implementation Available via FTP, by Sumant
    Dirt, by Paul Crowley
    Thomson Digital Images University Donation Program, by Michael Takayama
    A Brief Summary of Ray Tracing Related Stuff, Angus Y. Montgomery

-------------------------------------------------------------------------------

Introduction

Well, it's been a while.  I've essentially fallen off the face of the earth,
with too much to do.  So, it's time to throw something together during compile
& link time (pity my new machine is so much faster - it makes it harder to
get out an issue).  As usual, the comp.graphics version does not have all the
comp.graphics cullings.  If you want the full issue, pull it in via FTP from
weedeater.math.yale.edu or any other site which carries it.

My big fun for the week has been looking at my head:  I finally received my
head database from the Visual Computing Group (415-688-7315) after getting my
head scanned in by their Cyberware scanner at SIGGRAPH.  Quite a bargain:
around $40 for a 256x512 mesh of my head.  There are lots of bad points in my
database:  the top of my head is missing and my hair has a natty dreadlock
look, but still it's pretty amazing, and a little data interpolation here and
there cleans up the dropout data points.

The data comes in a cylindrical coordinate system format, and VCG provides
source code for a number of programs and utilities to convert the data in
various ways.  I've heard Spencer Thomas has rendered his face as an unrolled
cylinder, i.e. a height field.  So, have others out there played with these
data sets much?  Anybody want to trade heads?

-------------------------------------------------------------------------------

New People, Change of Addresses, etc

--------

Michael Cohen's email address is now mcohen@cs.utah.edu

Jim Arvo's email address is now arvo@graphics.cornell.edu

Peter Miller's email address is now pmiller@pdact.pd.necisa.oz.au

--------

Brian Corrie, University of Victoria, Victoria, B.C.

I am back from my trip down under, and have a new account here
at UVic.

My new address is bcorrie@csr.uvic.ca

One new area of interest:  I like the idea of the shading language provided by
the RenderMan Interface.  Do you know of anyone doing work in this area?  I
want to make a shader compiler, along with a shader library that has some of
the Pixar functions (noise, etc.), that will compile a shader description into
a linkable module that a renderer can link to at compile time.  Kind of a
halfway step between interpreting the shader description and hard coding the
shader into the renderer itself.  If I set up some binding rules then any
renderer that follows those rules will be able to link to any shader compiled
with the shader compiler.  We can then pass around shader descriptions between
users.  What do you think?

--------

# Patrick Flynn - geometry (solid and image), algorithmics, computer vision
# Department of Computer Science and Engineering
# University of Notre Dame
# Notre Dame, Indiana 46556
# (219) 239-8375
alias patrick_flynn flynn@cse.nd.edu

My interest in ray tracing was an itch that I couldn't scratch during 1989 and
much of 1990, while I was dissertating.  I'm presently mapping subproblems
(like rendering) in model-based computer vision onto parallel architectures,
and developing a graphics course for seniors and grad students, with future
(unfocused) plans for an ongoing research program in graphics.

--------

# Jean-Philippe Thirion - coherency, interval arithmetic, data structure
# I.N.R.I.A.
# Domaine de Voluceau - Rocquencourt
# B.P. 105 - 78153 LE CHESNAY CEDEX
# FRANCE
# (33 1) 39 63 52 79
alias thirion thirion@bora.inria.fr

	I worked from 1986 to 1990 on a PhD thesis at the L.I.E.N.S., Paris,
France, with Prof. Claude Puech - first on data structures and hidden part
removal problems, then on the use of coherency to improve ray tracing.  This
work has led to a PhD thesis on object space and ray coherency for ray
tracing (in French), and two research reports (in English):

- Tries, a data structure based on binary representation, for ray tracing
- Interval arithmetic for high resolution ray tracing

	The first report has been published at Eurographics '90.  I am now
writing a third report about some enumeration problems on binary space
partitioning trees.

--------

Christophe Schlick
LaBRI
351, Cours de la Liberation
33405 TALENCE CEDEX --- FRANCE
Email: schlick@geocub.greco-prog.fr

I am currently working on my PhD thesis on "Realistic Image Synthesis".  It
will include such classic topics as ray-tracing, radiosity, hybrid methods,
antialiasing, and the parallelization of those techniques.

--------

# Henry Kaufman
# Brown University graphics group
alias	henry_kaufman	hek@cs.brown.edu

I worked on an ARTS ray tracer (with the SEADS data structure).  Since the
details of their algorithm were not well described in their paper (in my
opinion), I had to figure out most of the algorithm myself.  The part I found
interesting was "scan converting" my quadric surface primitives into the SEADS
voxels.  I tried to implement Bresenham style scan converters, which turned
out to be fairly efficient.  I just thought I'd mention it, even though you
are probably already familiar with the algorithms involved.

--------

# Nick Holliman - parallelism, CSG, spatial subdivision, ray tracing.
# School of Computer Studies,
# University of Leeds,
# Woodhouse Lane,
# Leeds, West Yorkshire, LS2 9JT.
# ENGLAND
# (0532) 335446
email: nick@uk.ac.leeds.dcs

I have just completed my PhD at Leeds and currently have an IBM research
fellowship at Leeds, working with the IBM UK Scientific Centre, Winchester.

My PhD work was the development of a parallel CSG ray tracer.  This takes a
CSG model and creates a distributed octree using a parallel algorithm.  It can
then ray trace the distributed octree and uses caching to avoid replicating
all the data on each processor.  The system scales well, making good use of
up to 128 T800 transputers (as many as I could find in one place) and handling
models of up to 20,000 CSG primitives.  We have recently extended this to support
parallel incremental update algorithms, so given a small change to the CSG
model the system updates the distributed octree and then updates the region of
image affected by the change.  Current work is looking at increasing the model
size, improving the scalability and porting the system to other parallel
machines.

I would be keen to hear from anyone working on similar parallel algorithms and
if anyone is interested they are welcome to a copy of my thesis.

--------

Ning Zhang

I have written a Radiosity package and both input and output are based on the
NFF file format.  Have you considered having a similar benchmarking package
for comparing different Radiosity software and hardware?  I think that would
be an interesting subject.  [Any volunteers? - EAH]

----- Self description -----
# Ning Zhang - Radiosity, Raytracing, Image Compression
# School of Electrical Engineering
# The University of Sydney
# NSW 2006, Australia
# + 61 (02) 692 2962
alias ning_zhang nzhang@ee.su.oz.au

I have been in Germany for two years as a visiting scientist at the Computer
Graphics Center (ZGDV), Darmstadt.  There I developed a Radiosity/Raytracing/
FastZBuffer package running on many platforms, including the 386 PC.  Recently
I came to Australia as a visiting fellow at the Univ. of Sydney.

Email: nzhang@ee.su.OZ.AU
       (In Germany, zhang@agd.fhg.de or zhang@zgdvda.uucp)

--------

Craig McNaughton

Interests - CSG, Animation, higher dimension, distortions

Address -
C/- Computer Science Department
University of Otago
PO Box 56
Dunedin
New Zealand
ph (+64) 3 4798488

I'm in the Graphics Research Group at Otago University, currently doing a
Masters.  My main project at the moment is a general 4D CSG ray tracer, and
with luck, that will soon be extended to nD.  Other current work includes
getting the most out of 8 bit displays, and character animation with CSG
systems.

--------

# David B. Wecker - Author of DBW_Render, DIDDLY and other rendering tools
# Digital Equipment Corporation
# 1175 Chapel Hills Drive, CXN1/2
# Colorado Springs, CO  80920-3952
# (719) 260-2794
alias	david_wecker	wecker@child.enet.dec.com

If people want the current version of DBW (V2.0) for their Amiga, they should
contact:

	William T. Baldridge
	2650 Sherwood Lane
	Pueblo, CO   81005
	    (BIX: WBALDRIDGE)

--------

David Tonnesen
Dept. of Computer Science
University of Toronto
Toronto, Ontario  M5S 1A4
(416) 978-6986 and 978-6619

research: modeling and rendering

e-mail:  davet@dgp.toronto.edu   (UUCP/CSNET/ARPANET)
	 davet@dgp.utoronto      (BITNET)

I am a Ph.D. student at the University of Toronto and interested in a variety
of graphics topics including ray tracing.  Other areas I am interested in
include deformable models, global illumination models, models of natural
phenomena, physically based modeling, etc.  Basically modeling and
rendering.

--------

Jim Rose, Visualization Research Asst.
Visualization Research Group
Utah Supercomputing Institute
rose@graph.usi.utah.edu

--------

Bruce Kahn - general concepts, portable packages, microcomputer based pkgs.
Data General Corp.
4400 Computer Dr.
MS D112
Westboro MA 01580
(508) 870-6488
bkahn@archive.webo.dg.com

-------------------------------------------------------------------------------

Ray Tracing Abstract Collection, by Tom Wilson (wilson@cs.ucf.edu)

[Tom has done the graphics community a great service in making available (for
free) all the ray tracing abstracts from articles he's collected.  If you run
into a ray tracing article title which sounds interesting, you should check
here first to see if it's really what you want before wasting time tracking it
down. --EAH]

The next release of the collection of ray tracing abstracts is ready.  I've
put it at several FTP sites (listed below).  To get it, do the following:
get rtabs.shar.1.91.Z using binary mode.  Uncompress it.  Unshar it (sh
rtabs.shar.1.91).  Then read READ.ME.  If you have any problems, send me e-mail.
The abstracts can be converted to two forms:  Latex (preferred) or troff -me.
The filters I've included may help you write your own if you need something
else.  I received no feedback concerning problems, so evidently it works for
those who tried it.

USA Sites:
weedeater.math.yale.edu (130.132.23.17) in directory pub/Papers
   Craig Kolb <craig@weedeater.math.yale.edu>
karazm.math.uh.edu (132.170.108.2) in directory pub/Graphics
   J. Eric Townsend <jet@karazm.math.uh.edu>
iear.arts.rpi.edu (128.113.6.10) in directory pub/graphics/ray
   George Kyriazis <kyriazis@iear.arts.rpi.edu>

Australia Site:
gondwana.ecr.mu.oz.au (128.250.1.63) in directory pub/rtabs
   Bernie Kirby <bernie@ecr.mu.oz.au>

Sweden Site:
maeglin.mt.luth.se (130.240.0.25) in directory graphics/raytracing/Doc
   Sven-Ove Westberg <sow@cad.luth.se>

Tom Wilson
Center for Parallel Computation
University of Central Florida
P.O. Box 25000
Orlando, FL 32816-0362
wilson@cs.ucf.edu

--------

Juhana Kouhia notes:

I have converted the RT abstracts to PostScript format - they are available
from nic.funet.fi in the pub/graphics/papers directory as wils90.1.ps.Z.

-------------------------------------------------------------------------------

Report on Lausanne Hardware Workshop, by Erik Jansen

There was a Hardware Workshop on the 2nd and 3rd of September in Lausanne.
Three papers on ray tracing are worth mentioning:

%A M-P Hebert
%A M.J. McNeill
%A B. Shah
%A R.L. Grimsdale
%A P.F. Lister
%T MARTI, A Multi-processor Architecture for Ray Tracing Images

%A D. Badouel
%A T. Priol
%T An Efficient Parallel Ray Tracing Scheme for Highly Parallel Architectures

%A Li-Sheng Shen
%A E. Deprettere
%A P. Dewilde
%T A new space partitioning Technique to Support a Highly Pipelined
%T Architecture for the Radiosity Method

The first paper discussed a load distribution based on an image space
partitioning.  Additionally, the scene was stored in an octree type of space
subdivision, of which the first (top) n levels were duplicated over the
processors and stored in their local caches.  The other levels of the database
were communicated when needed.  Their figures (test picture "gears") showed
that a local cache of 1/8 to 1/4 of the size of the total database is
sufficient to avoid communication bottlenecks.  It was not clear from the
presentation whether these results also hold for more complex scenes.

The second paper discussed a virtual memory scheme that distributed the
database over the processors in such a way that 'ray coherence' is maintained
as much as possible.  The left-over memory at the processors is used as a cache.

The third paper discussed a ray coherence scheme that was based on a
subdivision of a ray frustum into sectors, and compared probabilistic
and deterministic methods for determining the width (angle) of the sectors.

These papers will be published by Springer in "Advances in Computer Graphics
Hardware", vol. V. (ed. R. Grimsdale and A. Kaufman).

--------------------------------------------------------------------------

New Version of SPD Now Available, by Eric Haines

Version 3.0 of the Standard Procedural Databases is out.  The major addition
is the teapot database.  Other changes include being able to input a size
factor on the command line, changing mountain.c to mount.c (for IBM users),
and many changes and new statistics in the README file.

The teapot database is essentially that published in IEEE CG&A in Jan. 1987.
A checkerboard has been added underneath it to give it something to reflect &
cast a shadow upon.  The checkerboard also reflects the subdivision amount of
the teapot patches, e.g. a 7x7 checkerboard means that each patch is
subdivided to 7x7x2 triangles.  Degenerate (e.g. no area, 0 length normal)
polygons are either removed or corrected by the program.  The bottom of the
teapot can optionally be removed.

The images for these databases and other information about them can be found
in "A Proposal for Standard Graphics Environments," IEEE Computer Graphics and
Applications, vol. 7, no. 11, November 1987, pp. 3-5.  See IEEE CG&A, vol. 8,
no. 1, January 1988, p. 18 for the correct image of the tree database
(the only difference is that the sky is blue, not orange).  The teapot
database was added later, and consists of a shiny teapot on a shiny
checkerboard.

The SPD package is available via anonymous FTP from:

	weedeater.math.yale.edu [130.132.23.17]
	irisa.fr [131.254.2.3]

among others.  For those without FTP access, write to the netlib automatic
mailer:  research!netlib and netlib@ornl.gov are the sites.  Send a one line
message "send index" for more information, or "send haines from graphics" for
the latest version of the SPD package.

-------------------------------------------------------------------------------

Teapot Timings, by John Spackman

	I enclose some timing figures for the SPD `teapot', ray-traced at
various degrees of tessellation.  All images were rendered at 512x512 with two
light sources.  The models were triangulated at a pre-processing stage to
support smooth shading with barycentric co-ordinates.  I'm not sure whether
all other conditions were as documented in SPD, but the output looks right!

	The host machine was a SUN SPARCStation IPC.

-------------------------------------------------------------------
SIZE FACTOR	# TRIANGLES	OCTREE CONSTRUCTION	RAY-TRACING
				(SECS)			(SECS)
-------------------------------------------------------------------
1		58		9.4			841.4
2		248		18.5			832.2
3		570		25.8			837.2
4		1024		35.7			844.8
5		1610		40.8			859.9
6		2328		50.4			864.2
7		3178		56.6			878.8
8		4160		66.5			900.7 (<-- 15 minutes)
9		5274		74.0			911.2
10		6520		81.6			922.7
11		7898		88.7			937.4
12		9408		95.0			954.7

-------------------------------------------------------------------------------

The Very First Point in Polygon Publication, by Chris Schoeneman
	(nujy@vax5.ccs.cornell.edu)

   A few months ago there was a discussion on point in polygon algorithms
(again), and one netter suggested perturbing vertices to avoid special cases
(using the Jordan curve theorem).  This led me to a solution which, it turns
out, you had already found.

   Anyway, I just found the same solution in CACM, Aug. 1962, p. 434, and I
thought you might be interested to know.

   The algorithm as given has an error in the if statement.  It should
read:
   if (y<=y[i] ...
not
   if (y<y[i] ....

The author, M. Shimrat, doesn't check the horizontal bounds of the edge to
trivially accept or reject the intersection, but this, of course, isn't the
key to the solution.

So much for the patent. :-)
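
For reference, here is a minimal C sketch of the crossing test (the
now-standard formulation, not Shimrat's published code).  The asymmetric
endpoint comparison - strict on one vertex, not on the other - plays the
same role as the "<=" correction above, keeping a shared vertex from being
counted twice:

```c
/* Jordan-curve crossing test: count how many polygon edges a ray shot
   from (x,y) toward +x crosses; an odd count means the point is inside.
   The (vy[i] > y) != (vy[j] > y) comparison treats each edge as
   half-open in y, which is the vertex-counting fix discussed above. */
int point_in_poly(int n, const double *vx, const double *vy,
                  double x, double y)
{
    int i, j, inside = 0;
    for (i = 0, j = n - 1; i < n; j = i++) {
        if ((vy[i] > y) != (vy[j] > y) &&
            x < (vx[j] - vx[i]) * (y - vy[i]) / (vy[j] - vy[i]) + vx[i])
            inside = !inside;
    }
    return inside;
}
```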

-------------------------------------------------------------------------------

The Acne Problem, by Christophe Schlick (schlick@geocub.greco-prog.fr)


What is Ray-Tracer Acne (RTA)?
------------------------------

As defined by Eric Haines in an early Ray-Tracing News, RTA is those black
dots that appear sometimes on a ray-traced image.  RTA can result from two
different phenomena:

Case 1:  When a ray intersects a surface S, floating point round-off can put
the intersection point P inside the surface (Figure 1).  So, when a reflected
ray or a light ray is shot from P, it will intersect S, and P will be in
shadow.

A common solution to this problem is to allow some tolerance (T) in the inter-
section test:  when an intersection point M is found, the distance D between
the origin of the ray and M is computed and compared to T; the point is
rejected if D < T.  The problem with that method is that T is an absolute
value, hard-coded in the source and sometimes not adapted to a specific
scene.  A solution could be to scale T by the global diameter of the scene,
but there would still be some scenes for which it wouldn't work (for instance,
a scene with a very big and a very small sphere).


Case 2:  This case can only appear when rendering an object composed of
polygons with interpolated normal vectors (Phong's method).  This pathological
case was first described by Snyder & Barr at the SIGGRAPH 87 conference.  When
a ray intersects a surface at a point P, the normal vector Np is interpolated
from the vertex normals using the barycentric coordinates (Figure 2).  Thus a
reflected ray or a light ray shot from P according to Np (direction Out in
Figure 2) can intersect another facet (Q in Figure 2).

Here the tolerance solution doesn't work because Q can be arbitrarily far from
P (it depends on the surface curvature and the size of facets).  Thus Snyder &
Barr proposed to move P from the surface along Np by a certain distance D, and
to consider the new point P' as the intersection point from which other rays
are shot.  Again, D is hard-coded and there would be some scenes for which D
is too high or too low.

			      Out
Out       In           Na       \      Nb
  \       /             \        \     /
   \     /               \        \Q  /     Np
-------------- S          \--------*-/    _/
     \ /                   A        B\  _/
      *                               \/___________ In
      P                               P\
				        \______Nc
				        C
  Figure 1                  Figure 2


The solution I propose is easy to implement, and robust whatever the scene.
If you examine Figures 1 and 2, you can see that in both cases the error comes
from a sidedness problem:  a ray that is assumed to be outside S finds an
intersection that is inside S.  In a ray tracer, you are always carrying (in
the ray structure) a pointer to the medium through which the ray is currently
traveling --- that pointer is needed for refraction.  So when intersecting a
surface, you always know the current medium.  When you find a ray intersecting
a surface from the inside (negative cosine), the only thing to do is to
compare the current medium with the medium inside the surface (checking the
pointers).  If the two media are different, there is a sidedness problem and
the point must be rejected.  That leads clearly to the following algorithm:

When a ray R traveling through a medium M, with a direction [e.g. reflection]
vector V, hits a point P with a normal vector N on a surface S, do

Begin

 Compute Cos = -N.V (the cosine between the ray direction and normal vector)

 If Cos > 0					/* R outside S */

   then no problem

   else						/* R inside S */

     if M != medium(S) then reject the point	/* Sidedness error */
		else no problem

End
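
In C, the test might look like this (a sketch only - the structures and
field names are hypothetical, not from any particular ray tracer, but the
logic follows the algorithm above):

```c
/* Hypothetical structures - the field names are illustrative only. */
typedef struct { double x, y, z; } Vec;
typedef struct { int id; } Medium;
typedef struct { Vec dir; Medium *medium; } Ray;
typedef struct { Medium *inside; } Surface;

static double dot(Vec a, Vec b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

/* Sidedness test: accept the hit when the ray strikes the surface from
   the outside (positive cosine), or from the inside while traveling in
   the medium that the surface actually encloses. */
int accept_hit(const Ray *r, const Surface *s, Vec n)
{
    double cosine = -dot(n, r->dir);
    if (cosine > 0.0)
        return 1;                   /* R outside S: no problem      */
    return r->medium == s->inside;  /* R inside S: media must agree */
}
```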


The cost of that method is a dot product between V and N.  For reflection
rays, that dot product has to be done in the shading phase anyway, so if you
store the result of the cosine in the ray structure, there will be no extra
cost.  But for light rays, the dot product is an extra cost.  Compared to
Snyder & Barr's method, the cost is lower, because moving a point along a
normal vector needs a linear combination (3 additions, 3 multiplications).
For non-polygonalized surfaces, the tolerance method is cheaper, but isn't
robust.

Well, make your choice. And let me know your comments (and flames !)

--------

From Eric Haines:

    Note that a third source of acne is from bump mapping - essentially it's
the same problem as in case 2, but this problem of the reflection ray passing
through the object can then happen to any object, not just polygons with a
normal per vertex.

    Personally, I solve this problem by doing the check Christophe recommends,
i.e. check the dot product of the normal and the reflection ray.  If the
reflection ray is bad, I solve this by adding some portion of the normal to
the reflection ray which puts the ray above the tangent plane at that point.
This can result in the reflections getting a bit squished on these bad pixels,
but this is much better than seeing black acne pixels.
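
That nudge can be coded in a few lines.  Here is a sketch under the
assumption of unit-length normals; the EPS margin is my own arbitrary
choice, not a value from the text:

```c
typedef struct { double x, y, z; } Vec;

static double dot(Vec a, Vec b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

/* If the reflection direction R has dipped below the tangent plane
   (R.N <= 0), add just enough of the unit normal N to lift it back
   above.  Afterwards dot(R,N) == EPS, so the ray grazes just above
   the surface; the reflection gets a bit squished, as noted above. */
Vec fix_reflection(Vec r, Vec n)
{
    const double EPS = 1e-4;       /* arbitrary small margin */
    double d = dot(r, n);
    if (d <= 0.0) {
        double t = EPS - d;
        r.x += t * n.x;  r.y += t * n.y;  r.z += t * n.z;
    }
    return r;
}
```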

    Christophe's solution of rejecting intersections if the media do not
match is fine if you have well-behaved users, but our system assumes the user
can do whatever foolish thing he wants with surface definitions (my favorite
is that we allow different indices of refraction and different transmission
colors for front and back faces).

    How do the rest of you solve this problem?

-------------------------------------------------------------------------------

Shirley Thesis Available via Anonymous FTP, by Pete Shirley

I have put a postscript copy of my thesis "Physically Based Lighting
Calculations for Computer Graphics" on anon ftp on cica.cica.indiana.edu
(129.79.20.22).  I have broken the thing into several files and compressed
them.  Each file, when uncompressed, should be a stand-alone postscript file.
The whole thing is about 180 pages long, so please don't kill any trees if you
don't really want it!

WARNING:  The thesis files have many PostScript figures in them.  They are
real printer hogs.  Please be thoughtful about printing and storing the thing.

I have also put several 24 bit ppm image files in compressed form.  Don't
forget to set ftp in "binary" mode.

If you want this thesis and cannot get/print these files, please don't ask me
for alternate formats until after Dec. 15th.  Thanks.  It will also be
available as a U of Illinois TR (I don't know when).

The directory is pub/Thesis/Shirley.

Pete Shirley
Indiana U
shirley@cs.indiana.edu
"I didn't say it was good, just long!"

[This thesis is, among other things, a great survey of most every rendering
technique to date.  "wine.ppm.Z" is an excellent image, even with the week+ of
computation.  - EAH]

Thesis files:

ch1ch2.ps.Z      (125K)  Title page, etc.
			 Chapter 1  Introduction
			 Chapter 2  Basic Optics
ch3ch4.ps.Z      (105K)  Chapter 3  Global Illumination Problem Formulation
			 Chapter 4  Surface Reflection Models
ch5a.ps.Z        (221K)
ch5b.ps.Z        (247K)  Chapter 5  Image Space Solution Methods
ch6a.ps.Z        (261K)
ch6b.ps.Z        (250K)  Chapter 6  Zonal Solution Methods
ch7ch8ch9.ps.Z   (123K)  Chapter 7  Extensions to Basic Methods
			 Chapter 8  Implementation
			 Chapter 9  Conclusion
app.ps.Z          (44K)  Appendices
bib.ps.Z          (30K)  Bibliography


Picture files (ppm format; Makefile and seeppm.c for viewing on an SGI Iris):

bump.ppm.Z (1.3M)   1024 by 768  camera effects, bump map, indirect lighting.
gal.ppm.Z  (1.2M)   1024 by 768  solid textures, indirect lighting
dense.ppm.Z (.2M)    512 by 384  dense media (8 volume zones, indirect light)
sparse.ppm.Z (1.2M) 1024 by 768  sparse media, indirect lighting
turb.ppm.Z (.3M)     512 by 384  uneven media, indirect lighting
lamp.ppm.Z (.8M)    1024 by 768  diffuse transmission, indirect lighting
wine.ppm.Z (1.1M)   1024 by 768  fresnel reflection, bw ray tracing

-------------------------------------------------------------------------------

Some Thoughts On Anti-aliasing, by Zap Anderssen, Eric Haines

From Zap:

About Anti-aliasing:  The "standard" way of doing adaptive anti-aliasing
(referred to as AA from now on, I'm lazy) is to check the color of the last
pixel and the pixel above or similar, and subdivide if they differ more than
"this-n-that" from each other.  Though you remember that I once said you
should check what OBJECT you hit (i.e. ignoring the color) and use that as the
subdivision criterion, and you posted a mild flame that it wouldn't work on
things like a patch mesh, since we'd spend useless time at all the edges that
shade all the same....

Well, since I think that all anti-aliasing for texture mapping should be done
in the texture mapping functions (for which I have a nasty trick involving the
angle to the normal vector of the surface, the distance to the screen, etc.)
you should only need extra rays at object edges.

But consider, what makes a surface change color abruptly? It's one of three
things:

* Shadow hits
* Texture map changes color
* Normal veccy moves a lot.

Ok, so here is my idea:  You don't save the COLOR of the last pixel, but the
normal vector.  When this deviates more than this_n_that, you supersample.
Then you say "hey, sucker, that doesn't work!  Edges may be at different
depths or so but still have the same normal veccy", and to that I say, sure
thang:  multiply the normal vector by the distance to the eye, and use that as
the criterion.  Well, we lose the AA on the shadows, regrettably, but we get
another nice criterion....  Just a stupid idea, I know there are flaws, like
what happens if this red surface is adjoining a blue one (i.e. a color diff in
the geometry rather than the texture map)?  This shouldn't be allowed ;-)

Seriously, there must be ways to fix this?  I NEVER liked the idea of AA'ing
on the color diffs, since texture maps may introduce heavy diffs even when no
extra rays are really needed....

Any Commentos?

--------

	Oddly, I was just writing up my views on anti-aliasing yesterday,
though for different reasons.  I agree, it would be nice to avoid subdivision
on texture mapped guys.  Let's see, we have aliasing due to:

edges of objects
shadow boundaries
rapidly changing reflections/refractions or highlights
texture maps

So, we could do something like separate out these various factors and do tests
on each.  The trick is that there's interaction between these things (shadows
and texture maps' effects get multiplied together:  rapidly changing textures
in the dark don't matter) and that separating these out gives some weird
problems (say I say "supersample if contributions' absolute values vary by
more than 10%", and then I find that my highlight varies 5%, my reflection
varies 7% and my shadow varies 4% - should I antialias?  If yes, then why did
I bother to compute these fractional contributions?  If no, then the color
really is varying a lot and should be supersampled).  One advantage of
separating stuff out is for texture maps sampled with area methods (e.g.
mipmapping) - if the surface change ratio is separate from all the other
ratios, then we could simply say "ignore the surface change ratio" if we hit
an area-sampled textured object.

I don't have a good answer (yet?), but am interested to hear your ideas.
Using the normal is interesting, but it feels a bit removed from the problem:
on my "balls" (sphereflake) database, the normals change rapidly for the tiny
spheres, but say all the spheres just reflected things (no diffuse, no
highlights).  Much of the spheres' reflection is the background color, which
doesn't change at all.  So super-samples which merely also hit the background
in this case don't add anything, just waste time.  By checking how much the
reflection color changed, instead, we could shoot fewer rays.

Variance on something like the normal might be worth tracking, as you point
out, but what if you're, say, viewing a surface which is in shadow?  The
normal varies, but the shading doesn't (no light), so you waste effort.
Another interesting problem is anti-aliasing bump maps:  do you really want to
do much blending, which would then smooth out the bumps?

-------------------------------------------------------------------------------

New VORT Release, and the VORT Chess Set, by Eric H. Echidna, David Hook


Eric H. Echidna (echidna@munnari.oz.au) writes:

Vort 2.0 is now 'officially' available from the following sites:

gondwana.ecr.mu.oz.au [128.250.1.63] pub/vort.tar.Z
munnari.oz.au [128.250.1.21] pub/graphics/vort.tar.Z
uunet.uu.net [192.48.96.2] graphics/vogle/vort.tar.Z
 (uucp access as well: ~ftp/graphics/vogle/vort.tar.Z)

draci.cs.uow.edu.au [130.130.64.3] netlib/vort/*

Australian ACSnet sites may use fetchfile:
	fetchfile -dgondwana.ecr.mu.oz pub/vort.tar.Z

or obtain it from the Australian Netlib at draci.cs.uow.oz

==================

VORT - a very ordinary rendering tool-kit. Version 2.0

VORT is a collection of tools and a library for the generation and
manipulation of images.  It includes a ray tracer, pixel library, and
utilities for doing gamma correction, median cuts, etc...

The current distribution is described as follows:

art	- a ray tracer for doing algebraic surfaces and CSG models. Art
	supports the following primitives: polygons, offfiles, rings, disks,
	tori, superquadrics, spheres, ellipsoids, boxes, cones, cylinders,
	and algebraic surfaces. Tile patterns can be mapped onto all of the
	above except algebraics, boxes, and offfiles. The following textures
	are supported: bumpy, wave, marble, wood, ripple, fuzzy, stucco,
	spotted, granite, and colorblend. The ray tracer uses an
	acceleration technique based on the spatial kd-tree.

tools	- some utilities for fiddling around with image files, and converting
	between various formats. These include tools for converting nff files
	to art format, and vort files to ppm format (and ppm to vort).

docs	- various bits of doco on the tools

lib	- the pixel library files - this has only been developed to the
	point needed to read and write display files.

old	- this contains a program for converting image files from 1.0
	format to 1.1.

sun	- a set of display programs for a sun workstation.

X11	- a set of display programs for a 256 color X machine.

iris	- a display program for an Iris workstation.

pc	- a set of display programs for a PC with VGA Extended Graphics.

movies	- some C programs for generating some sample animations.

tutes	- some C programs for generating some animations that demonstrate
	the texturing.

text3d 	- some C routines that read in the hershey fonts provided with VOGLE
	and use them to generate 3d text. Example input files are provided.

--------

	There are several scenes and output images from the raytracer `art'
available on gondwana.ecr.mu.oz.au [128.250.1.63] in the directory pub/images.
These images are 8 bit sun rasterfiles and can be converted to other formats
using (for example) pbmplus.

	There are also some simple 36 frame movies in the directories
pub/movies/* .  These files are in VORT format and can be displayed with the
movie programs that come with VORT or can be converted to another format using
the vorttoppm program that comes with VORT.

--------

[Another package available & of interest:]

VOGLE 1.2 fixes some bugs and includes some speed ups from previous versions
of VOGLE.  Specifically, all matrix stack manipulations are now 'done in
place' to save time.

VOGLE stands for "Very Ordinary Graphics Learning Environment".

VOGLE is a device portable 3D graphics library that is loosely based on the
Silicon Graphics Iris GL.  VOGLE also supports 3D software text in the form of
Hershey vector fonts.

--------

David Hook notes:

> 'vort' package includes a chess set (it's the same one as the set at DEC).

There is a better chess set in pub/chess20.tar.Z on gondwana.ecr.mu.oz.au
(128.250.1.63) in the anonymous ftp area.  These pieces were also done by the
same person (Randy Brown) who did the ones that are in VORT, but they are much
more detailed.

--------------------------------------------------------------------------

Best (or at least Better) of RT News, by Tom Wilson

I've scrunched all of the 1988 RTNews.  I thought I would post to the net to
see if anyone else would want it.  What I've removed is:  names/addresses,
product reviews, reports on tracer bugs/fixes, and anything else that is
"out-dated."  I've taken this approach since these programs probably no longer
have the bugs and errors.  I don't know if many people are going to want the
scrunched versions, but I just wanted basic RT info to print and save.  Some
of the issues weren't scrunched much (mainly the first few), but some are
about 1/3 the size (90,000 => 30,000 bytes).

Tom
wilson@cs.ucf.edu

-------------------------------------------------------------------------------

At Long Last, Rayshade v4.0 is Available for Beta-testing, by Craig Kolb
	(kolb@yale.edu)

The distribution is available by anonymous ftp from weedeater.math.yale.edu
(130.132.23.17) in pub/rayshade.4.0 as either "rayshade.4.0.tar.Z" or the 16
compressed tar files in the subdirectory "kits@0".

Please direct comments, questions, configuration files, source code, and the
like to rayshade@weedeater.math.yale.edu.  If you get semi-serious about using
rayshade, drop us a line and we'll add you to our mailing list.

Thanks to everybody who contributed to this and previous versions of rayshade,
and to those of you who are willing to provide feedback on this Beta release.

From the README file:
------
This is the Beta release of rayshade version 4.0, a ray tracing program.
Rayshade reads a multi-line ASCII file describing a scene to be rendered
and produces a Utah Raster Toolkit "RLE" format file containing the
ray traced image.

Rayshade features:

	Eleven primitives (blob, box, cone, cylinder, height field,
	plane, polygon, sphere, torus, flat- and Phong-shaded triangle)

	Aggregate objects

	Constructive solid geometry

	Point, directional, extended, spot, and quadrilateral light sources

	Solid procedural texturing, bump mapping, and
		2D "image" texture mapping

	Antialiasing through adaptive supersampling or "jittered" sampling

	Arbitrary linear transformations on objects and texture/bump maps.

	Use of uniform spatial subdivision or hierarchy of bounding
		volumes to speed rendering

	Options to facilitate rendering of stereo pairs

This is Really and Truly a Beta release:  No patches will be issued to upgrade
from this distribution to the 'official' release.  The documentation is
spotty, and there is no proper 'man' page.

There are many differences between rayshade versions 3.0 and 4.0.  In
particular, the input syntax has changed.  Input files created for version 3.0
must be converted to version 4.0 syntax using the provided conversion utility
(rsconvert).

Rayshade v4.0 Beta has been tested on several different UNIX-based computers,
including:  SGI 4D, IBM RS6000, Sun Sparcstation 1, Sun 3/160, DECstation,
Apollo DN10000.  If your machine has a C compiler, enough memory (at least
2Mb), and runs something resembling UNIX, rayshade should be fairly easy to
port.

[...]

It is hoped that the 'official' release will include a library that provides a
C interface to the various ray tracing libraries.  While there is currently no
documentation for the libraries, it should be easy for you to add your own
primitives, textures, aggregates, and light sources by looking at the code and
sending mail if you get stuck.

It is also hoped that the modular nature of the primitive, aggregate, texture,
and light source libraries will make it possible for people to write their own
code and to "swap" objects, textures, and the like over the net.  The object
interfaces are far from perfect, but it is hoped that they provide a
reasonable balance between modularity and speed.

Additional rayshade goodies are available via anonymous ftp from
weedeater.math.yale.edu (130.132.23.17) in pub/rayshade.4.0.  If you have
contributions of new objects, textures, input files, configuration files, or
images to make, feel free to send us email or to drop off a copy via ftp in
the "incoming" directory on weedeater.

-------------------------------------------------------------------------------

Parallel Ray Tracer, by Kory Hamzeh, Mike Muuss, Richard Webb

Parallel Raytracer Available, Kory Hamzeh (kory@avatar.avatar.com)

I wrote a parallel raytracer (named 'prt') a while back with the following
features:

	- Runs on multiple heterogeneous machines networked together
	  using TCP/IP.
	- Crude load balancing.
	- Primitives:
		- Polygons
		- Spheres
		- Hollow spheres
		- Cones
		- Cylinders
		- Any object that can be expressed in quadric form
		- Rings
	- Shading:
		- Gouraud
		- Phong
		- Whitted
	- Rendering:
		- Simple: one ray per pixel
		- Stochastic sampling (jitter)
	- Instances of objects.
	- Input format:
		- An extension of nff. I have written a filter which
		  will convert NFF files to prt format.
	- Output format:
		- MTV format (24 bit).

Note that prt is a parallel raytracer which spawns off children over multiple
machines to distribute the work.  I have only used it on five Sun
Sparcstations and have gotten excellent performance.  I'm not aware of any
other public domain parallel raytracers other than VM_pRay (which, I believe,
runs only on a specific platform).

--------

Prt version 1.01 is now available from the following sites via
anonymous ftp:

apple.apple.com in /pub/ArchiveVol2/prt
iear.arts.rpi.edu in pub/graphics/ray/prt

If you have a copy of version 1.0, I would recommend that you get a copy of
the newer one.  I will *not* post it again in alt.sources.  If you can't ftp
it, send me mail and I will mail you a copy.

Version 1.01 had some minor bugs fixed and I included some info that I had
forgotten to put in the first set of docs.

--------

Michael John Muuss (mike@BRL.MIL) writes:

>       [ lots of good stuff about BRL-CAD deleted ]
>
> While not "super sophisticated", the load balancing algorithm is
> non-trivial.  It assigns pixel blocks of variable size.  Three
> assignments are outstanding to each worker at any time, to
> pipeline against network delay.  New assignment size is tuned,
> based upon measured past performance (weighted average with variable
> weighting, depending on circumstances).

and Kory responds:

Prt's load balancing is not as sophisticated as BRL-CAD's.  Prt simply
multiplexes the scanlines across the different machines.  The slowest machine
on the network will be the bottle neck.

When I get a chance, I would like to use the same techniques used in BRL-CAD.
I think that it is the best load balancing for a raytracer.

--------

Timings of PRT, by Richard Webb

Here is a snippet of a bug report I sent off to kory@avatar.com regarding his
"prt1.01" parallel ray tracer.  Note that I ran these test on your NFF SPD
version 2.4.  I'll grab v3.0 and re-run if you think it will make any
difference.  I ran the test on a network of 25 Sun4's (both SparcStation1's
and 1+'s as well as a Solbourne and a few SunServers).  Some machines finished
in about 1/2 of the elapsed time.  The time is as reported by the PRT program.

 TEST     | TIME (mm:ss)
----------+--------------
 balls    | 10:32
 gears    | 16:49
 mountain |  9:27
 rings    | 13:05
 tetra    |  0:59
 tree     |  4:05

It would be nice to have some texturing capability in NFF, but then I guess
that is somewhat too sophisticated for a "Neutral" format.  The SIPP2.0 image
"isy90" marble teapot on a granite base looks very good except there are no
shadows.  I hope someday we will have fast Renderman parallel "cookers"
generating nice compressed video sequences.

-------------------------------------------------------------------------------

Distributed DKB, by Jonas Yngvesson

DDKB (distributed dkb) has the same primitives, shading and input format as
DKB (for obvious reasons).  This includes very nice support for solid
texturing.

Antialiasing differs slightly.  DKB uses an adaptive jittered supersampling,
but in DDKB, each server only has access to one scanline at a time.  This
means adaptation is done in the x-direction only.

>	- Output format:
>		- MTV format (24 bit).

Here I have used a mix between the MTV-format and the QRT-format used in DKB.
An MTV-header is followed by the scanlines in QRT-format.  Each line is tagged
with its line number (because the lines are stored in the order they are
finished by the servers and must be sorted before display).
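That sort-before-display step might look like the following (the record
layout here is my guess, not the actual DDKB/QRT format):

```c
#include <stdlib.h>

/* Each stored scanline carries its line-number tag; a qsort restores
 * top-to-bottom order before the image is displayed. */
typedef struct {
    int line;                /* y position tag written by the server */
    unsigned char *pixels;   /* the QRT-encoded scanline data */
} Scanline;

static int by_line(const void *a, const void *b)
{
    return ((const Scanline *)a)->line - ((const Scanline *)b)->line;
}

void sort_scanlines(Scanline *s, int n)
{
    qsort(s, n, sizeof(Scanline), by_line);
}
```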

>Note that prt is a parallel raytracer which spawns off children over
>multiple machines to distribute the work. I have only used it on five
>Sun Sparcstations and have gotten excellent performance. I'm not aware
>of any other public domain parallel raytracers other than VM_pRay
>(which, I believe, runs only on a specific platform).

Yeah!  We have tried DDKB running on 55 sparcstations (then we ran out of file
descriptors, perhaps we should use UDP instead of TCP).  Pretty good
performance indeed.  Unfortunately DKB is not a terribly fast tracer in itself
and I have no time to hack around in it.

I'm not very willing to send this thing out, though.  Partly because it is
still only a hack, and partly because I have not contacted David Buck and
asked what he thinks about the whole thing.

jonas-y@isy.liu.se
...!uunet!isy.liu.se!jonas-y
University of Linkoping, Sweden

-------------------------------------------------------------------------------

Quadrangle Tessellation, by Christophe Schlick

Why I believe that a quadrangle tessellation is better than a triangle one.
---------------------------------------------------------------------------

Tessellating objects using triangles has become a very popular method in the
ray-tracing world, at least since the classical paper of Snyder & Barr
(SIGGRAPH '87).  The classical idea is to start from a parametric surface
f(u,v) and regularly sample the two parameters u and v.  That gives a mesh
composed of quadrangles ABCD where A = f(u, v), B = f(u+du, v), C = f(u+du,
v+dv) and D = f(u, v+dv).  Then the only thing to do is to cut the quadrangle
into two triangles, ABC and CDA for instance.


		    D ------------- C
		     |           _/\
		     |         _/   \
		     |       _/      \
		     |     _/         \
		     |   _/            \
		     | _/               \
		     |/                  \
		    A -------------------- B



Using triangles instead of quadrangles has one main advantage:  speed.
Indeed, computing the intersection between a ray and a triangle, and getting
the barycentric coordinates (needed for interpolation), is simply a linear
system of two equations.  An optimized algorithm takes only a few floating
point operations (about 8 multiplications after the ray/plane intersection).
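That 2x2 solve can be sketched as follows, assuming the hit point has already
been projected onto the triangle's dominant 2D plane (the function name and
degeneracy tolerance are mine, and no attempt is made to reach the
8-multiplication count):

```c
#include <math.h>

/* Barycentric coordinates of 2D point p in triangle abc: solve
 * p - a = alpha*(b - a) + beta*(c - a) by Cramer's rule.
 * Returns 1 if p lies inside the triangle. */
int barycentric2d(const double a[2], const double b[2], const double c[2],
                  const double p[2], double *alpha, double *beta)
{
    double u0 = b[0] - a[0], v0 = b[1] - a[1];   /* edge AB */
    double u1 = c[0] - a[0], v1 = c[1] - a[1];   /* edge AC */
    double u2 = p[0] - a[0], v2 = p[1] - a[1];   /* P relative to A */
    double det = u0 * v1 - u1 * v0;
    if (fabs(det) < 1e-12)
        return 0;                                /* degenerate triangle */
    *alpha = (u2 * v1 - u1 * v2) / det;          /* weight of B */
    *beta  = (u0 * v2 - u2 * v0) / det;          /* weight of C */
    return *alpha >= 0.0 && *beta >= 0.0 && *alpha + *beta <= 1.0;
}
```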


But, using triangles instead of quadrangles has also several drawbacks:

First, there are two ways to decompose a quadrangle into two triangles.
There is no absolute choice, and the results will be very different when you
take ABC-CDA instead of DAB-BCD (especially if the curvature of the surface
is high).

But the main drawback of triangles is that the isoparametrics (u constant or v
constant) are not preserved (see figure).  Thus every texture mapping (or
radiosity reading, if you use a radiosity first pass in your system) will be
deformed.


       u=0   u=.5   u=1                   u=0   u=.5   u=1
      D ------------- C                   D ------------- C
	|      \      \                    |      |    _/\
	|      {       \                   |      |  _/   \
	|       \       \                  |      |_/      \
	|       {        \                 |     _/         \
	|        \        \                |   _/  \         \
	|        {         \               | _/     \         \
	|         \         \              |/        \         \
       A -------------------- B          A -------------------- B
       u=0      u=.5        u=1           u=0       u=.5       u=1

	  Isoparametric u=.5 using 1 quadrangle and 2 triangles
	 "Three is bad, four is good" - Dustin Hoffman (in Rain Man)


But there is a drawback to using quadrangles instead of triangles:  speed.
Indeed, computing the intersection and getting the (u,v) parameters of the
intersection point means solving a non-linear system of equations.  And
there is a square root.  Yerk!  (see Haines' chapter in "Introduction to Ray
Tracing")

Fortunately the square root can be avoided in many cases!  Tessellating
classical scenes will create a lot of very sympathetic quadrangles:  squares,
rectangles, parallelograms and trapezoids.  For instance, tessellating a
surface of revolution (or more generally a surface created by sweeping a curve
around an axis, according to another curve) will create only trapezoids.

For squares, rectangles and parallelograms, (u,v) are given by a linear
system.  For trapezoids, (u,v) are given by a system of one linear equation
and one non-linear equation.  In all these cases, finding the intersection
between the ray and ONE quadrangle directly is LESS COSTLY than finding the
intersection between the ray and TWO triangles.

Another advantage of dealing with quadrangles vs triangles is the memory cost.
There are fewer primitives to create, to store, to transform, to put in
voxels...

Finally, never forget that for shadow rays, (u,v) are not needed.  Thus using
a simple intersection test (Jordan test) will be faster, for the same result.
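A crossing-count (Jordan) test can be sketched like this in 2D, once the
polygon and hit point are projected onto the dominant plane; note that no
(u,v) comes out, which is all a shadow ray needs (names are mine):

```c
/* Jordan curve test: count how many polygon edges a horizontal ray
 * from p toward +x crosses; an odd count means p is inside. */
int point_in_polygon(double (*vtx)[2], int n, const double p[2])
{
    int inside = 0;
    for (int i = 0, j = n - 1; i < n; j = i++) {
        /* Does edge (j,i) straddle the horizontal line y = p[1]? */
        if ((vtx[i][1] > p[1]) != (vtx[j][1] > p[1])) {
            double x = vtx[j][0] + (p[1] - vtx[j][1]) *
                       (vtx[i][0] - vtx[j][0]) / (vtx[i][1] - vtx[j][1]);
            if (p[0] < x)          /* crossing is to the right of p */
                inside = !inside;
        }
    }
    return inside;
}
```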

--------

reply from Eric Haines:

	Just to be a devil's advocate, I thought I'd bring up some problems
with quadrilaterals versus triangles.  I personally like quadrilaterals,
especially because of the strange shading artifacts from triangles.  However,
we sometimes use triangles, such as in the situation where the quad mesh gives
quadrilaterals that are not totally planar (i.e. the four points don't lie in
a single plane).  This is ill-defined, and so we must triangulate.

	A more important time for triangulation is in animation.  If you use
Gouraud interpolation for shading your primitives, triangles are rotation
invariant, while quadrilaterals are not.  As such, if you make a film of some
object made of quadrilaterals rotating, you can see the quadrilateral shades
change over time.

--------

Christophe's reply:

	Well, non-planar quadrilaterals surely can be a problem.  But when
rendering a scene, so many approximations are made already (in the
illumination model, in the shading technique, in the normal vector
computation, ...).  So why should it not be allowed to make some approximation
in the geometry as well, and replace a non-planar polygon by a planar one?

	The method I use is the following.  When sampling your object along
the u and v coordinates, test the "planarity" of the quadrangle.  One
technique could be to compute the solid angle subtended by the 4 vertex normal
vectors.  Another technique could be to compute the max distance from a vertex
to the "approximation plane" (normal AC x BD).  When the solid angle or the
max distance is greater than a given tolerance, subdivide the quadrangle
(quadtree-like scheme) until the tolerance is reached.  Of course, everyone
knows that patch cracking (as defined by Catmull) can occur.  But practically,
if you ensure that two adjacent quadrangles are at neighboring levels in the
quadtree, the cracking cannot be visually detected.  (I used that trick
intuitively, though I saw a paper on it recently, can't remember where...)

	Concerning Gouraud shading, the rotation invariance of triangles vs.
quadrangles is not due to the number of vertices but to the fact that Gouraud
shading uses interpolation in screen space.  I avoid like the plague any
method that interpolates in screen space!  The effects are well known:
perspective distortion, rotation dependence, and so on...  I really think that
screen space Gouraud & Phong shading will disappear soon, and be replaced by
their equivalents in object space.

	I haven't seen papers on this, but I'm sure that such techniques are
already in use.  The idea is to use a dicing technique (as in the Reyes
rendering system) to create micro-quadrangles by sampling a quadrangle along
the u and v coordinates.  The number of samples in u is proportional to the
length of the AB and CD edges (similarly for v with the BC and DA edges).
The only thing to do then is to visit every micro-quad vertex (a double loop
over u and v), interpolate incrementally either x, y, z, r, g, b (Gouraud) or
x, y, z, nx, ny, nz (Phong), and average the samples that fall in the same
pixel.
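A minimal, non-incremental sketch of that micro-quad interpolation (names are
mine; a real implementation would use the incremental integer scheme
described):

```c
/* Bilinear interpolation of one attribute channel given at the quad
 * corners A=(0,0), B=(1,0), C=(1,1), D=(0,1) in (u,v) space. */
double bilerp(double a, double b, double c, double d, double u, double v)
{
    return (1 - u) * (1 - v) * a + u * (1 - v) * b
         + u * v * c + (1 - u) * v * d;
}

/* Visit every vertex of an nu x nv micro-quad grid, interpolating one
 * attribute in object space; samples would then be averaged per pixel. */
void dice_quad(double a, double b, double c, double d,
               int nu, int nv, double *out /* (nu+1)*(nv+1) values */)
{
    for (int j = 0; j <= nv; j++)
        for (int i = 0; i <= nu; i++)
            out[j * (nu + 1) + i] =
                bilerp(a, b, c, d, (double)i / nu, (double)j / nv);
}
```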

	The technique isn't more complicated than screen space interpolation,
and can be hardware coded as well (an incremental scheme with integer
arithmetic).  The ONLY advantage of Gouraud and Phong today is that they are
hardware coded, so they outperform every software algorithm.  But it is
perhaps time for chip designers to realize that there are other --- and
better --- shading methods that support hardware implementation as well!  For
instance, when will we see a chip that does incremental voxel traversal for
ray tracing?

--------

Eric's reply:

	Interesting comments!  We do similar things with non-planar
quadrilaterals, that is, we find whether something is not flat by checking
its deviation from some ideal plane for it.  If the deviation is noticeable,
we take special measures.

	I agree that it would be nice if Gouraud shading was replaced with
something else.  Renderman micro-facets is certainly one way to go.  The trick
is, what if you are dealing with a surface that (a) does not have a natural
parameterization (that is, [u,v] values are not attached to it) or (b) the
surface has 5 or more vertices?  It's not all that clear how to combine the
various vertex samples to get a proper shade.  The problem is admittedly
ill-defined, but some answers look better than others.  Doing a full weighted
average depending on the distance to all vertices from a given point on the
surface has been proposed, but this takes forever for complex polygons.

	A voxel walker might be a nice hardware addition someday, but I think
the main problem with it is that it's very special purpose.  Most hardware
manufacturers don't want to make highly specialized chips if the market is
small.  Someday I suspect things will get fast enough that ray tracing becomes
popular because it's fairly quick to do - at this point, paradoxically, we'll
probably see specialized hardware that makes ray tracing much faster still.
However, right now I see companies like AT&T Pixel Machines getting little
business for their ray tracers - everyone likes the images, but few seem to
want to get involved making them for real!

-------------------------------------------------------------------------------

New Release of Radiance Software & Bug Fixes, by Greg Ward (gjward@lbl.gov)

I have just put together a new release of Radiance, 1.3.  In addition to the
IES luminaire translator included separately on some tapes, 1.3 contains
several improvements, including faster runtimes for oconv with instances, a
driver for NeWS (and thus SGI's window system), better memory use on some
machines, fisheye views, and a translator for Architrion.  I plan to release
more CAD translators, specifically one for DXF, in the near future.

The new tar file takes up over 36 Mbytes, because it includes compiled
versions for Sun-3, Sun-4, IRIS, DECstation and Macintosh (running A/UX 2.0),
so you will need to send a large tape to get a copy.  (Of course, you do not
need to load the binaries on your machine if you don't want them.)  As before,
full C source code is provided for all programs.

Send your tapes to:

	Greg Ward
	Lighting Systems Research
	Lawrence Berkeley Laboratory
	1 Cyclotron Rd., 90-3111
	Berkeley, CA  94720

--------

Thanks to an astute user, I have learned of a rather serious bug in the IES
luminaire data translator, ies2rad.  It involves the improper conversion of
type B photometry, and some types of asymmetric fixtures.

Anyone who is currently using this program, or plans to use it in the future,
is advised to get the corrected source from me directly.

-------------------------------------------------------------------------------

Radiance Timings on a Stardent, by Eric Ost (emo@ogre.cica.indiana.edu)

Here is the most recent set of timings which resulted from running the
Radiance batch ray-tracer on tuna.  Note that each rpict.* is a different
instance of the ray-tracer.  The differences are

name		compiled with
rpict.noO -- 	no optimizations selected
rpict.O1  --    -O1, common sub-expression, etc., optimizations performed
rpict.O2  --    -O2, vectorization optimizations performed
rpict.O3  --    -O3, parallelization optimizations performed

Command:
  rpict.X -vp 52.5 6 58 -vd .05 -.6 -.8 -vu 0 1 0 -vh 35 -vv 35 \
	  -av .1 .1 .1 -x 1000 -y 1000 scene.oct > scene.raw

System: Stardent Titan-3000
	4 25 MHz R3000 processors
	32 Mb main memory
	file input read from NFS mounted file-system
	file output written to a local-disk file-system

timing batch ray tracer rpict.noO
real     4:14.1
user     4:04.1
sys         8.5
timing batch ray tracer rpict.O1
real     4:14.4
user     4:04.4
sys         8.1
timing batch ray tracer rpict.O2
real     4:27.1
user     4:18.2
sys         6.8
timing batch ray tracer rpict.O3
real     4:15.4
user    16:37.5
sys        17.6

Simply optimized code does not seem to have much advantage over unoptimized
code.  Vectorization appears to slow things down, but running on all 4
processors seems to recover nearly all of the real-time performance that was
lost.  The fact that all of these results (per-processor) are fairly close
probably indicates that to obtain significant benefits from
vectorization/parallelization modification of the implementation itself is
required.  A run-time subroutine/loop histogram obtained using 'prof' has
indicated several instances where in-lining code sequences may improve
performance; though, exactly how much improvement will be obtained remains to
be seen.

--------

Further information from Eric Ost:

Subject: Misc. Radiance 1.3 benchmarks

Program: rpict, version 1.3,
Date: February 22, 1991

This benchmark involves the example 1000x1000 picture described in
../ray/examples/textures as rendered from the associated makefile,
../ray/examples/textures/Makefile.

-----------------------------------------------------------------------------
				      (all times are in seconds)
System                                  Real    User    System
-----------------------------------------------------------------------------
 Sun-4/330 (ogre)                     10:27.9  8:10.5       8.5
 SGI Personal Iris (pico)              5:41.0  5:26.5       1.6
-IBM RS6000 model 320 (off-site)       4:19.2  4:13.9       0.3
+Stardent Titan-3000 (tuna)            4:13.9  4:04.3       7.8
-IBM RS6000 model 540 (off-site)       2:50.3  2:45.2       0.2
*Stardent Titan-3000 (tuna)            1:52.2  1:45.7       4.8
-----------------------------------------------------------------------------
Legend:
+[Note: The entire image was rendered on 1 processor]
*[Note: Each processor renders 1/4 image, so this is the MAX of all timings.
	The -x, -y, -vv, and -vh parameters were adjusted accordingly.]
-[Note: The IBM timings were performed by our IBM representative off-site.]


System Configurations:

Architecture          Operating System           RAM    Processor      #
-----------------------------------------------------------------------------
Sun-4/330             SunOS Release 4.0.3_CG9    24 MB  20 MHz SPARC  (1)
SGI Personal Iris     IRIX System V Release 3.2  16 MB  20 MHz R3000  (1)
Stardent Titan-3000   Unix System V Release 3.0  32 MB  25 MHz R3000  (4)
IBM RS6000 model 320  Unix System V Release ?    16 MB  20 MHz RS6000 (1)
IBM RS6000 model 540  Unix System V Release ?    ?? MB  30 MHz RS6000 (1)
-----------------------------------------------------------------------------

I would be happy to answer any questions pertaining to these timings.  In no
way am I suggesting that these timings are the best possible for a given
architecture; rather, they were the ones I obtained and may or may not be
repeatable at another site.  No special fine-tuning was done either to the
system or to Radiance before performing these timings.  Each system was
relatively quiescent and therefore had a minimal load average.

-------------------------------------------------------------------------------

RTSYSTEM Fast Ray Tracer, by Patrick Aalto

The initial post:

I just finished a small demo I have been working on for a couple of months.
It performs real-time ray-tracing and shading (a sort of mixture of the two)
on an IBM PC-compatible computer with VGA graphics.  I find it pretty
impressive (although the environment is pretty simple:  it has a planet, a
moon and a sun).  It runs at 70 frames/second on my 80386/20 MHz machine.

--------

This RTSYSTEM (Ray-Traced System) is a small demo that uses my new superfast
raytracing algorithms.  It works on register-compatible VGA cards only, using
the non-standard 320x400x256 screen mode.  This mode is highly suitable for
animation, since it features two separate video pages and lots of colors.
Nearly all common VGA-cards work quite well in this mode.

This demo shows a moon orbiting a planet.  The viewer is on a distant moon of
the planet, and is looking 'up' towards the planet and the other moon.  A
distant sun can also be seen from time to time.  It is very easy to run this
demo; just type RTSYSTEM and it starts to run.  After a second or two, a green
number will appear in the bottom-right corner of the screen.  This number
tells you how many frames per second your computer is fast enough to draw.  I
programmed this demo such that it just reaches the maximum 70 frames/second
on my 80386/20 MHz computer.  A slow VGA card or a slower CPU will not reach
70 frames/second, but even a 33 MHz 80486 machine will not run faster than 70
frames/second, since this is the fastest hardware refresh rate of a VGA
display at this resolution.  To quit the demo, just press the ESC key.

The method of rendering shaded objects usually requires a lot of computation,
since the correct color value has to be computed separately for every single
screen pixel.  The color value of a certain pixel depends on the amount of
light (and its color) this pixel transmits towards the eye of a viewer.
Since each pixel represents some small portion of some physical object in the
3D image space, the transmitted light can be calculated based on the
properties of this physical object.

When light hits a smooth surface, a portion of it is reflected, and the rest
is transmitted into the object (if the object is even slightly transparent).
The direction of the reflected light is given by the Law of Reflection:  the
reflection direction r makes the same angle with the surface normal as the
angle of incidence theta, and lies in the same plane as the incident vector v
and the surface normal vector N.  Vector r can be determined using vector
arithmetic:

      r = v + 2N cos(theta) = v - 2N(v.N)
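Written out as code, the reflection formula above is just a couple of lines
(a sketch; N is assumed to be a unit-length normal and v the incident
direction):

```c
/* r = v - 2N(v.N): reflect incident direction v about unit normal N. */
void reflect(const double v[3], const double n[3], double r[3])
{
    double vn = v[0] * n[0] + v[1] * n[1] + v[2] * n[2];   /* v . N */
    for (int i = 0; i < 3; i++)
        r[i] = v[i] - 2.0 * vn * n[i];
}
```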

Calculating the reflection this way normally requires 9 multiplications, 1
division and 1 square root.  All this for every pixel in the object!!  (The
planet in the middle of the screen has nearly 2800 pixels, by the way.)  This
demo uses a highly sophisticated technique to calculate the color of a
certain pixel with only one table lookup and one addition per pixel!
Everything is also done using only integer values (16 and 32 bit), obviously.

Other new techniques in this demo include a 3D modification of the well-known
Bresenham circle algorithm to calculate all the X, Y and Z values of a ball
using only integer additions.  (The standard method uses the sphere equation
X^2 + Y^2 + Z^2 = R^2, from which the values of the Z-coordinates, for
instance, can be determined if all the other values are known.  This takes a
square root and 3 multiplications for every (X,Y) pair.)
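For reference, here is the classic integer-only midpoint (Bresenham) circle
that the 3D technique presumably builds on; the demo's actual 3D variant
isn't published here, so this sketch is only the well-known 2D basis (the
array-collecting interface is mine):

```c
/* One octant of the classic integer-only midpoint/Bresenham circle.
 * Points are mirrored eight ways to complete the circle; the loop uses
 * only additions and comparisons.  Returns the number of points. */
int midpoint_circle(int radius, int xs[], int ys[])
{
    int n = 0;
    int x = radius, y = 0;
    int err = 1 - radius;            /* decision variable */
    while (x >= y) {
        xs[n] = x;                   /* record one octant point */
        ys[n] = y;
        n++;
        y++;
        if (err < 0) {
            err += 2 * y + 1;        /* stay on the same x */
        } else {
            x--;
            err += 2 * (y - x) + 1;  /* step diagonally inward */
        }
    }
    return n;
}
```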

Still another new technique is to apply trigonometry to the 'ray-sphere
intersection' problem.  This does not reduce the needed computations very
much, but gives correct results with much smaller intermediate results (VERY
important when dealing with only integer resolution).

It is interesting to note that even a standard 386 machine with a VGA-card can
perform simplified ray-tracing in realtime.  All it takes is some
optimizations on the algorithms used.

An interesting thing is the optical effect called 'Mach band effect'.  If you
look closely at the planet surface, you will notice all sorts of different
patterns on it, which change rapidly as the sun moves.  These patterns are
merely an optical illusion, caused by the way the retina of the eye functions.
Since there are only 64 different shades of grey (from black to white)
available on a VGA display, the eye is sensitive enough to register the small
change between two neighboring pixels, thus creating a 'pattern' on the
screen.

Studying this demo you can also find out what 'anti-aliasing' means.  Look at
the edge of the planet when it is fully lit.  You will easily see the
'jaggies'.  The planet edge does not appear to be smoothly round, but rather
can easily be seen to be just a collection of pixels.  Now, look at the
day/night border on the planet.  You will not see such jaggies there.  This is
because this borderline is 'anti-aliased':  the transition from complete black
to complete white is gradual, and thus the eye cannot detect individual
pixels.

My new game LineWars II will use some of these superfast rendering techniques.
It will come out sometime this year, I hope...


      Patrick Aalto

Contact:

Internet:    ap@tukki.jyu.fi    (account about to expire soon..)
	    tkp72@jytko.jyu.fi

--------

I've been told that the demo I mentioned earlier can be found at
chyde.uwasa.fi (sorry, don't have the IP number here now) in the directory
PC/demo.

[This one caused a lot of traffic on the net.  It's a cute demo, though there
aren't all that many rays traced, and a lot of tricks are done to avoid them.
I think this is just fine:  why trace rays when you don't need to?  But it
definitely got people excited when Patrick claimed "real time ray tracing".
--EAH]

-------------------------------------------------------------------------------

DKBTrace Beta Available, by David Buck (dbuck@ccs.carleton.ca)

I've placed a new version of DKBTrace onto alfred.ccs.carleton.ca
(134.117.1.1) for beta testing - and for those who just can't wait :-).  The
source files and data files for DKBTrace version 2.10 can be obtained by
anonymous ftp from the above site.  They are in directory
pub/dkbtrace/dkb2.10.  No executables are available at this time.

Abstract:

DKBTrace is a freely-distributable raytrace program that renders quadric
shapes, CSG (Constructive Solid Geometry), and a handful of other primitive
shapes.  Shadows, reflection, refraction, and transparency are also supported.
DKBTrace also allows you to define complex and interesting solid textures such
as wood, marble, "bozo", etc.  The texturing facility has been greatly
enhanced in version 2.10.

NOTE:  The data files for version 2.10 are NOT completely compatible with
       previous versions.  Old data files must be modified to run on the
       new raytracer.

Please report any problems or questions to me at dbuck@ccs.carleton.ca.

I'm also starting up three mailing lists for people interested in the
following aspects of DKBTrace:

   - General Questions and Problems with DKBTrace
   - Porting DKBTrace to various platforms
   - Writing a graphical user interface for DKBTrace
      (note: I don't intend to write a graphical user interface, but
	     several people have expressed an interest, so I thought I
	     should at least maintain a mailing list for these people
	     so they can keep in touch).

If you want to be added to any (or all) of these mailing lists, please send me
an EMail message indicating which mailing lists you're interested in.

The final release of version 2.10 should be out in one or two weeks.  I will
post another announcement at that time.

-------------------------------------------------------------------------------

"Computer Graphics" 2nd Edition Corrections, Software, etc by the Brown
	Automatic Mailer


Your e-mail addressed to
		graphtext@cs.brown.edu
has been handled by an automatic "server" program that generated this response.
This server provides several services for readers of
	"Computer Graphics: Principles and Practice", 2nd edition
		by Foley, van Dam, Feiner, and Hughes
			(Addison-Wesley, 1990)
				ISBN 0-201-12110-7

There are several distinct "services" you can obtain from this server, each
identified by a unique keyword.  To obtain the service, mail a message to
graphtext@cs.brown.edu containing the keyword (and only the keyword) in the
"Subject:"  line.  Only one service can be obtained per message.

Here are the services currently available, with the appropriate "Subject:"
lines shown:

--------
Subject: Help

	The server sends you this helpful message in response.
	If the server program seems to be broken somehow, send mail to
		Dave Sklar (dfs@cs.brown.edu)
--------
Subject: Get-Text-Bug-List

	Use this service to obtain a list of known "bugs" in the text.
--------
Subject: Report-Text-Bug

	Use this service to report a bug in the text.  The body of your
	message should give the page and line number of the bug, and if
	possible, indicate the necessary correction to be made.

	Please check the "text-bug-list" before submitting a bug report so you
	don't submit a duplicate bug report.

	Please don't submit a bug report unless you are sure that there is
	a bug in the text; this service is not for raising questions.
--------
Subject: Get-Text-Algorithms

	Use this service to get instructions on how to obtain electronic
	copies of many of the major algorithms (all in Pascal) presented
	in the textbook.
--------
Subject: Software-Distribution

	Use this service to get instructions on how to obtain SRGP and SPHIGS,
	the software packages described in chapters 2 and 7.
	These include information on all three versions:
		1) UNIX/X11  (available via ftp)
		2) Macintosh (available via ftp, except Pascal via floppy)
		3) IBM-PC and compatibles (available via floppy)
--------
Subject: Get-Software-Bug-List

	Use this service to obtain a list of known "bugs" in SRGP/SPHIGS.
	This list does not include omissions and bugs that are
	documented in the SRGP/SPHIGS reference manuals.
--------
Subject: Report-Software-Bug

	Use this service to report a bug in the software or in the doc
	associated with the software.  If you can present a code fragment
	that isolates the bug, all the better.
--------

At a later date, we will support services for the sharing of exercises
produced by instructors using the text, and for the submission of suggestions
for improvement of the text in later revisions.

-------------------------------------------------------------------------------

Papers which are currently available from the Net via anonymous FTP, J. Kouhia
--------------------------------------------------------------------
Last update January 28, 1991

Updates and mails to
Juhana Kouhia
kouhia@nic.funet.fi [128.214.6.100]

Put the new papers to
nic.funet.fi [128.214.6.100]
pub/graphics/papers/Incoming


%K KYRI90
%A George Kyriazis
%T A Study on Architectural Approaches for High Performance Graphics Systems
%I Rensselaer Design Research Center, Technical Report No: 90041
%D September 1990
%O weedeater.math.yale.edu [128.113.6.10]  pub/Papers/kyri90.ps.Z
%O nic.funet.fi [128.214.6.100]  pub/graphics/papers/kyri90.ps.Z

%K MUSG88
%A F. Kenton Musgrave
%T Grid Tracing: Fast Ray Tracing For Height Fields
%J Research Report YALEU/DCS/RR-639
%D July, 1988
%O weedeater.math.yale.edu [128.113.6.10]  pub/Papers/musg88.ms.Z
%O nic.funet.fi [128.214.6.100]  pub/graphics/papers/musg88.ps.Z

%K MUSG89a
%A F. Kenton Musgrave
%T Prisms and Rainbows: a Dispersion Model for Computer Graphics
%J Proceedings of the Graphics Interface '89 - Vision Interface '89
%I Canadian Information Processing Society
%C Toronto, Ontario
%P 227-234
%D June, 1989
%O weedeater.math.yale.edu [128.113.6.10]  pub/Papers/musg89a.ms.Z
%O nic.funet.fi [128.214.6.100]  pub/graphics/papers/musg89a.ps.Z

%K MUSG89b
%A F. Kenton Musgrave
%A Craig E. Kolb
%A Robert S. Mace
%T The Synthesis and Rendering of Eroded Fractal Terrains
%J Computer Graphics
(SIGGRAPH '89 Proceedings)
%V 23
%N 3
%D July 1989
%P 41-50
%Z info on efficiently ray tracing height fields
%K fractal, height fields
%O weedeater.math.yale.edu [128.113.6.10]  pub/Papers/musg89b.ms.Z
%O nic.funet.fi [128.214.6.100]  pub/graphics/papers/musg89b.ps.Z

%K MUUS??
%A M. J. Muuss
%T Excerpts from "Workstations, Networking, Distributed Graphics, and Parallel Processing"
%I BRL Internal Publication  (???)
%D ?????
%O freedom.graphics.cornell.edu [128.84.247.85] pub/RTNews/Muuss.parallel.Z
%O nic.funet.fi [128.214.6.100]  pub/graphics/papers/Muuss.parallel.ps.Z

%K MUUS??
%A M. J. Muuss
%A C. M. Kennedy
%T The BRL-CAD Benchmark Test
%I BRL Internal Publication  (???)
%D ?????
%O freedom.graphics.cornell.edu [128.84.247.85] pub/RTNews/Muuss.benchmark.Z
%O nic.funet.fi [128.214.6.100]  pub/graphics/papers/Muuss.benchmark.ps.Z

%K SHIR90a
%A Peter Shirley
%T Physically Based Lighting Calculations for Computer Graphics
%I Thesis
%D November 1990
%O weedeater.math.yale.edu [128.113.6.10]  pub/Papers/shir90a/
%O nic.funet.fi [128.214.6.100]  pub/graphics/shirley/

%K SHIR90b
%A Peter Shirley
%T Monte Carlo Method
%I Appendix from the SHIR90a
%D November 1990
%O nic.funet.fi [128.214.6.100]  pub/graphics/papers/shir90b.ps.Z

%K WILS91
%A Tom Wilson
%T Ray Tracing Abstracts Survey
%D January 1991
%O weedeater.math.yale.edu [128.113.6.10]  pub/Papers/wils91.1.shar.Z
%O nic.funet.fi [128.214.6.100]  pub/graphics/papers/wils91.1.ps.Z

======== USENET cullings follow ===============================================

Ray-Cylinder Intersection Tutorial, by Mark VandeWettering

>I am trying to do ray tracing of light through a
>cylinder coming at different angle to the axis
>of the cylinder.  Could some one give me some
>pointers?

Ray cylinder intersection is (conceptually) just as easy as hitting a sphere.
Most of the problems come from clipping the cylinder so it isn't infinite.  I
can think of several ways to do this, but first let me mention that you should
consult _An Introduction to Ray Tracing_, edited by Andrew Glassner.  Articles
by Pat Hanrahan and Eric Haines go over most of this stuff.


It's easiest to imagine a unit cylinder formed by rotating the line x = 1 in
the xy plane about the y axis.  The formula for this cylinder is x^2 + z^2 =
1.  If your ray is of the form P + t D, with P and D three tuples, you can
insert the components into the original formula and come up with:

	(px + t dx)^2 + (pz + t dz)^2 - 1 = 0

or	px^2 + 2 t dx px + (t dx)^2 + pz^2 + 2 t dz pz + (t dz)^2 - 1 = 0

or	t^2 (dx^2 + dz^2) + 2 t (dx px + dz pz) + (px^2 + pz^2 - 1) = 0

which you can then solve using the quadratic formula.  If there are no roots,
then there is no intersection.  If there are roots, then these give two t
values along the ray.  Figure out those points using P + t D.  Now, clipping.
We wanted to have a finite cylinder, say within the cube two units on a side
centered at the origin.  Well, gosh, ignore any intersections that occur
outside this box.  Then take the closest one.

Now, to intersect with an arbitrary cylinder, work up a transformation matrix
that maps world points into the object's coordinate system.  Transform
the ray origin and direction, and voila.  You do need to be careful to rescale
t appropriately, but it's really not all that hard.

You might instead want to implement general quadrics as a primitive, or choose
any one of a number of different ways of doing the above.  Homogeneous
coordinates might make this simpler actually....  Hmmm....  And there is a
geometric argument that can also be used to derive algorithms like this.

Think about it.  It shouldn't be that difficult.

-------------------------------------------------------------------------------

C++ Fractal and Ray Tracing Book, by Fractalman

There's a very nice book called 'Fractal Programming and Ray Tracing with
C++' by Roger T. Stevens, published by M&T Books.  The book comes with a
disk of sample programs for Zortech C++ which can be modified for use with
Turbo C++.  The text is very easy to understand, and you only need some
rudimentary knowledge of C and computer graphics.

-------------------------------------------------------------------------------

Ray/Spline Intersection Reference, by Spencer Thomas

>Can someone please tell me where I can find an algorithm for
>finding the intersection of a ray and a Bezier and/or B-Spline
>patch.

You might look at

Lischinski and Gonczarowski, "Improved techniques for ray tracing parametric
surfaces," Visual Computer, Vol 6, No 3, June 1990, pp 134-152.

Besides having an interesting technique, they refer to most of the other
relevant work.

-------------------------------------------------------------------------------

Superquadric Intersection, by Rick Speer, Michael B. Carter

Here's some material that may be of assistance. The following paper-

  "An Introduction to Ray Tracing", by Roman Kuchkuda, pp. 1039-60 in
  Theoretical Foundations of Computer Graphics and CAD, R. A. Earnshaw,
  Ed., Springer-Verlag, 1988.

is a good introduction to the tracing of superquadrics.  In addition to the
usual overview material it gives actual source code, in C.

This paper-

  "Robust Ray Tracing with Interval Arithmetic", by Don Mitchell, pp.
  68-74 in the Proceedings of Graphics Interface '90, Canadian
  Information Processing Society (Toronto), 1990.

focuses in particular on root-finding when intersecting a ray with an implicit
surface.  Mathematical and pictorial examples of tracing superquadrics are
given, along with data on the time-cost of intersections (when run on a
SPARCstation).  There's also some discussion of using the method for CSG.

Finally, this thesis-

  "Implementation of a Ray-Tracing Algorithm for Rendering Superquadric
  Solids", Bruce Edwards, Masters Thesis and Technical Report TR-82018,
  Rensselaer Polytechnic Institute (Troy, NY), December 1982.

probably goes into more detail than either of the papers mentioned above.

[Actually, it doesn't:  there's one page on intersecting superquadrics, which
says something like "use Newton's Method" and gives a reference number, which
turns out to be a "private conversation with Alan Barr".  --EAH]

--------

>From Michael B. Carter:

     I was just digging through my literature and finally came up with the
reference that was stuck in the back of my mind.  I cannot vouch for the speed
of this method, but here's the ref.

Kalra, Devendra, and Alan H. Barr, "Guaranteed Ray Intersections with
Implicit Surfaces," Computer Graphics, Vol. 23, No. 3, July 1989.

This method uses Lipschitz conditions to intersect ANY implicit surface.  It
always finds the closest (or subsequent) intersection points -- even in badly
behaved functions.  There are also some really neat pictures of blended
objects in the paper.

-------------------------------------------------------------------------------

Comments on Interval Arithmetic, by Mark VandeWettering, Don Mitchell

First some comments by Mark:

>>  "Robust Ray Tracing with Interval Arithmetic", by Don Mitchell, pp.
>>  68-74 in the Proceedings of Graphics Interface '90, Canadian
>>  Information Processing Society (Toronto), 1990. [Rick Speer]
>
>    Interesting paper, though I admit to not having tried its method yet.
>Looks like a great general method for superquadrics and whole families of
>surfaces.  My favorite paper of this proceedings. [Eric Haines]

Very interesting paper, but I believe that the convergence of the method is
quite slow, even slower than bisection.  I coded up a version of it for the
MTV raytracer, and was testing it out when it was clear it was converging VERY
slowly.  If anyone has some hints, or wants to discuss this further, send me
some email.

The reason I was interested in this scheme is that it would allow you to trace
non-linear rays (rays could be generalized to space curves) which would allow
more generic transformations of objects (ala Al Barr).

You can use geometrical properties of the superquadric to figure out
intersections.  There can be at most two intersections in any octant of the
superquad, so a primitive method would be to handle each octant separately and
then use some favorite numerical method (Newton's method or bisection) to home in on
the roots.

--------

Don Mitchell responds:

The interval-arithmetic algorithm is a root-isolation algorithm.  It gets used
in combination with root-refinement (something much faster than bisection and
much safer than Newton's method, hopefully).  The speed of convergence depends
a lot on how you are refining intersections.  If you are just performing the
interval algorithm until it converges to a root, that would be very slow and
not really a proper application of the method.

For a benchmark image containing a number of 4th degree algebraic surfaces,
the interval method was about three times slower than a specialized polynomial
solver (e.g., Sturm sequences).  It's hard to compare its run time on
transcendental surfaces, since I don't know any other way except Kalra's
Lipschitz method (described at SIGGRAPH '89).  I think interval arithmetic is
more straightforward than the Lipschitz methods, particularly for surfaces
with singular gradients (like concave superquadrics).  It does not require an
off-line human calculation to derive Lipschitz conditions.

There are also a lot of improvements that I didn't mention (or implemented
later).  For example, there are faster interval multiply algorithms than the
one I gave.  You can do the root isolation algorithm with (F, dF/dt) instead
of (F, grad F).  There are also ways of using the derivative intervals to
narrow the function intervals with the mean-value theorem.
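For concreteness, the plain interval multiply (the slow, obvious one; as noted
above, faster variants exist) might look like:

```c
/* Naive interval multiplication: the product [lo,hi] of [a,b]*[c,d]
 * is the min and max of the four endpoint products.  Faster variants
 * examine the signs of the endpoints to avoid computing all four. */
typedef struct { double lo, hi; } Interval;

Interval imul(Interval x, Interval y)
{
    double p[4] = { x.lo*y.lo, x.lo*y.hi, x.hi*y.lo, x.hi*y.hi };
    Interval r = { p[0], p[0] };
    for (int i = 1; i < 4; i++) {
        if (p[i] < r.lo) r.lo = p[i];
        if (p[i] > r.hi) r.hi = p[i];
    }
    return r;
}
```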

Like a lot of techniques, there is a long journey between theory and
implementation.

-------------------------------------------------------------------------------

Platonic Solids, by Alan Paeth

Coordinates for these and for their four-dimensional analogs were published by
H.S.M. Coxeter, first in 1948 in _Regular Polytopes_, pp. 52-53 (Methuen,
London), and again in subsequent revisions; any and all are highly recommended
reading.  The table for (quasi-)regular 3D polyhedra is transcribed below.
I've posted this a few times already; perhaps a "frequently asked" entry is in
order.


			PLATONIC SOLIDS
		(regular and quasi-regular variety,
		Kepler-Poinsot star solids omitted)

The orientations minimize the number of distinct coordinates, thereby revealing
both symmetry groups and embeddings (e.g., tetrahedron in cube in dodecahedron).
Consequently, the latter is depicted resting on an edge (Z taken as up/down).

    SOLID            VERTEX COORDINATES
    -----------      ----------------------------------
    Tetrahedron      (  1,  1,  1), (  1, -1, -1), ( -1,  1, -1), ( -1, -1,  1)
    Cube             (+-1,+-1,+-1)
    Octahedron       (+-1,  0,  0), (  0,+-1,  0), (  0,  0,+-1)
    Cuboctahedron    (  0,+-1,+-1), (+-1,  0,+-1), (+-1,+-1,  0)
    Icosahedron      (  0,+-p,+-1), (+-1,  0,+-p), (+-p,+-1,  0)
    Dodecahedron     (  0,+-i,+-p), (+-p,  0,+-i), (+-i,+-p,  0), (+-1,+-1,+-1)
    Icosidodecahedron(+-2,  0,  0), (  0,+-2,  0), (  0,  0,+-2), ...
		     (+-p,+-i,+-1), (+-1,+-p,+-i), (+-i,+-1,+-p)

    with golden mean: p = (sqrt(5)+1)/2; i = (sqrt(5)-1)/2 = 1/p = p-1

--------

The poster wanted a circumscribing (unit) sphere. Just pick a vertex and
calculate its length (to the origin) and you have R, that sphere's radius.
Normalize (divide all coordinates by R) and the solids are contained by a
unit sphere.

    Alan Paeth
    Computer Graphics Laboratory
    University of Waterloo

-------------------------------------------------------------------------------

Moon Problems, by Ken Musgrave, Pete Shirley

>  Am I correct in a vague memory I seem to have, that the (Earth's) moon is
>supposedly a near-ideal Lambertian reflector?
>
>  There's something fishy here, methinks.

The moon is not Lambertian.  If it were, then a full moon would be bright at
the center of the disk and fade into black at the edges.  Instead, a full
moon is a fairly uniformly colored disk.  Where a Lambertian surface has a
scattering probability of k cos(theta), you might guess that the moon's is
just k (it scatters in all directions above the surface with equal
probability).  I think this will work for a simple model.

pete


PS-- I saw some angular reflectance curves in some book I can't find.  I think
it was a heat transfer text.  The actual function was pretty funky.

--------

Alain Fournier responds:

As this is one of my favorite trick questions about reflection, I'll bite.
Consider the moon when full (that is, when the sun -the light source- is in
the same direction as the observer -the eye).  It is pretty obvious that it
appears essentially as a disk; that is, the reflected light is of about the
same luminance over the whole visible part of the moon (I ignore the effect of
surface details such as the maria).  That shows that it is neither a specular
reflector (the center would be a lot brighter than the periphery) nor a
diffuse reflector (the center would be somewhat brighter than the periphery --
try this on your favorite renderer with a perfectly diffuse white sphere
illuminated from the direction of the eye).  So what's going on?  Well, as
most of us computer graphics types ignore, the world is not all between
totally diffuse and totally specular; there are surfaces outside that range.
In the case of the moon, it so happens that a Phong model (using the
expression loosely) with an exponent of 0.5 on the cosine of the normal/light
angle gives the right appearance of a disk at full moon.  I can find exact
references back in my office, if anybody is interested.  Credit where credit
is due:  Bob Woodham, of UBC, first pointed this out to me, and has worked on
the subject of models for the surface reflectance of the moon (and other
objects in the solar system).

--------

Doug McDonald replies:

Remember that the moon has hills and valleys.  The hills near the terminator
are what one sees, and the actual surface is seen at an angle much less than
90 degrees.  The actual surface itself is sort of Lambertian.  At least, when
I actually examined some small (~5 cm) pieces of the Moon in a lab, they
looked rather Lambertian.  On-the-spot reports agree.

The word "fractal" comes to mind.

--------

Paul Heckbert writes:

As others have mentioned, the moon and other dusty surfaces are not
Lambertian.  Blinn, in the paper cited below, says they follow the
Hapke-Irvine illumination model more closely.

%A James F. Blinn
%T Light Reflection Functions for Simulation of Clouds and Dusty Surfaces
%J Computer Graphics
(SIGGRAPH '82 Proceedings)
%V 16
%N 3
%D July 1982
%P 21-29
%K shading
%Z Hapke-Irvine illumination model

-------------------------------------

Ken gave another problem with rendering the moon:

>  A very wide-angle view of a scene (i.e., a landscape), with a sphere
>(i.e., a moon) in an extreme corner of the image, sports one very distorted
>sphere in the image, when rendered using the standard virtual-screen model
>for ray tracing.  (See the cover of Jan. '89 IEEE CG&A for an example.)
>
>  Seems that this is a version of the sphere-to-plane mapping problem, and
>therefore inadmissible to a non-distorting solution.
>
>  Can anyone out there prove this conjecture right or wrong, or demonstrate
>some nice workaround?
>

   Yes, this distortion is due to sphere-to-plane mapping.  The volume defined
by the rays joining every point on the sphere to the center of projection is a
cone, and the intersection of this cone with a plane will in general be an
ellipse.  So EVERY sphere should look like an ellipse in a ray-traced image
which uses this model.  But the distortion is prominent only for spheres
located in an extreme corner of the scene.

   I also encountered the same problem and would be interested in any method
or projection-model which circumvents this problem.  If the pin-hole camera
model is not a good model for the human eye-brain system then is there any
other model which is more accurate?

Dilip Khandekar

--------

A solution I can see is to give each pixel, instead of a calculated position
on a viewing plane, a calculated horizontal and vertical angle from the
center.  Knowing the angles, one can easily construct a vector at the
appropriate angles to represent this pixel's ray.  I have not actually
implemented this, but it is clear that this would work:  when a conventional
method is used and the viewing plane is located far from the eye point, the
near-spherical approximation of the plane is good enough to remove most any
distortion.  The reason a conventional camera does well is the same
phenomenon -- it takes a spherical "bunch" of rays and maps them onto the film
plane.  If I ever get my present thesis out of the way, I will attempt to
implement this in my version of DBW_Render.

Prem Subrahmanyam

[If I understand this correctly, the idea is doing equal angle spacing of
rays, instead of equal increments on the image plane.  I gave this suggestion
to Ken, then quickly tried it out & found it lacking, as it distorts straight
lines into curves and I forget what else.  See the next entry. --EAH]

--------

This rings a bell in my mind, since I once by mistake in my raytracer did a
purely 'angular' perspective model, i.e. the x axis of the display was really
the angle of the ray in the ground plane, and the 'y' coordinate was the angle
from that plane...  the image looked....  different....  well, anybody
implemented some kind of 'different' perspective model?  Ideally, you would
hook up the user's face to the screen via a telescopic pole, and pull it via
pneumatic cylinders to the correct viewing distance, and the rendering
equation should of course not project onto a plane, but onto a slightly curved
ditto (i.e. a computer monitor).  Now we would hear no more whining about
perspective distortion....  ;-)

No, seriously, anybody tried twiddling with it?  I did (by mistake) in my
tracer but, well....  nah, not good...  And the problem is, also, that these
twiddlings are only easy to do in raytracers, since linear things (polygons
and such) may get non-linear sides when you twiddle (har har for you
Z-buffalos ;-)

Zap Anderssen

--------

You might like to experiment with mimicking the distorting effect of camera
lenses.  Lenses which behave like renderers are very expensive, and are called
something like `flat-plane lenses' (surprise).  Ordinary lenses exhibit
distortion (it's called that in optics books).  Positive distortion makes flat
squares look like pillowcases, negative makes them look weird.  A way to model
positive distortion is to move points in the picture closer to the center.  A
point that starts off at (r, theta) on the image plane in polar coordinates
would move to (r', theta) where, in two popular approximations:

	r' = r (1 - C * r * r)
	r' = r * cos (C * r)

for appropriate (small) constants C.  Of course, I was working with images.
In a ray-tracer, you would distort the directions of the rays by using the
inverse transformation to splay the rays from the camera outwards.

NO guarantees that it will fix the problem, but it is not a million miles away
from this discussion....

Ian D. Kemmish

--------

If the distortion is too large, look for conformal mappings.  A conformal
mapping transforms _small_ circles to circles.  The most general
conformal mapping from the sphere to the plane is a stereographic projection
followed by an analytic transformation of the complex plane.  Read the Schaum
book on complex variables, do all the problems, and start tinkering :-).

Pierre Asselin,  R&D, Applied Magnetics Corp.

--------

Pictures generated with a standard perspective camera model only look "normal"
if the viewing angle used during rendering matches the angle at which you view
the picture.  If you use a horizontal view angle of theta during perspective
rendering and view the pictures on a monitor of width w, then the picture
will not look distorted if you view the screen at a distance of
d = (w/2) * cot(theta/2).  Here's a little table for a w=14" screen:

				     telephoto     about normal     wide angle

    view angle (degrees), theta	    :   10		50		90

    recommended viewing distance, d :   80"		15"		7"

The same argument applies in photography:  shoot a photograph with a standard
(50mm focal length) lens, print it at 8x10" size, say, and it will look pretty
normal when viewed at a distance of about one foot.  If you shoot a picture
with a wide angle lens (24mm, say) and print it at 8x10, you will perceive
"perspective distortion" if you view it at a distance of a foot, but it will
look much less distorted if you hold it close to your face, so that your
viewing angle matches the view angle captured by the lens.

The argument also applies in projection of movies and slides:  there is only
one point in a movie theater from which a viewer will see the same image as
that seen by the camera (i.e. same angle of view).  Theater geometry and the
lenses used for shooting and projection are usually chosen to put that "ideal
viewer position" near the middle of the theater, I imagine.  Assuming perfect
filming and projection and one eye closed, viewers at this ideal position will
not see any distortion artifacts of the projection -- that is, they will not
be able to tell the difference between a projected film and a window into a
3-D scene.  Viewers not at the ideal viewing position, such as those in the
first row, will see the familiar artifacts of perspective "distortion" that
will easily allow them to distinguish between a projected image and a real 3-D
scene.

Another interesting observation about projections is that you can project onto
ANY shape screen you like (planar, spherical, cube corner, curtain, human
torso, ...) and there will be no artifacts of the projection if the
projection lens matches the shooting lens, the viewer is right at the
projector, and the surfaces are properly finished.

---

Related question:  is there a formula relating camera lens focal length and
angle of view?  (I would guess that such a relationship would not be
theoretical, but would be based on practicalities, and would vary from
manufacturer to manufacturer)

Paul Heckbert

--------

In reply to Paul Heckbert:

Focal length and angular field are independent.  For example, you can have a
50mm focal length lens which is a "standard" lens for 35mm photography, or a
50mm wide angle lens for a larger film format.  There are some applications
(enlarging?)  where it is sometimes recommended to use a wide angle lens from
a larger film format in place of a standard lens to assure better field
uniformity.

For a family of lenses designed for a given film size, of course, there will
be a relationship between focal length and field of view, since economics
dictate that the field of view be no larger than necessary.  Note that
the field does not have a sharply defined size, but that the intensity drops
off continuously at the margins of the field (vignetting).

Jeff Mulligan

--------

In response to Paul's related question:

The relationship is fairly straightforward as I understand it.  Think pyramid
where the width and length of the base are defined by the image dimensions and
the height is given by the focal length.  The formula for the angle is simply:

	angle = 2 * atan( (film_size/2) / focal_length )

For 35mm film, whose image dimensions are 34mm by 23mm (approx.), the view
angles for a standard 50mm lens are 37.6 by 25.9 degrees.

Greg Ward

--------

If Ken is rendering an outdoor picture at night with the moon in it, then it
is probably a very wide angle picture, and you are absolutely correct that the
answer is "stick your face closer to the screen and it will look ok."

You state:

>there is only one point in a movie theater from which a viewer will see the
>same image as that seen by the camera (i.e. same angle of view).  Theater
>geometry and the lenses used for shooting and projection are usually
>chosen to put that "ideal viewer position" near the middle of the
>theater, I imagine.

You don't have to imagine.  You are exactly right.  I remember this from
filmmaking books I read in high school.  A 35mm movie camera uses 50mm lenses
as the "normal" lens.  A 35mm film projector uses a 100mm lens so the picture
looks right when you are seated halfway between the projector and the screen.
(I don't remember the numbers for "Panavision" or other wide screen systems.)
Use of telephoto or wide angle lenses in the camera produces some distortion
to the viewer at the center of the theater.

This is something very important to film directors.  All films are done this
way.  They understand it.  I have never heard this issue mentioned in the
context of computer graphics.  Maybe no one knows this?

As to your question:

>is there a formula relating camera lens focal length and angle of view?

Such a formula is simple if the lens has no distortion and the size and shape
of the film are known.  Here "distortion" means distortion in the strict
optical aberration sense.  Any optical system is subject to several
aberrations to varying degrees.  These are:

spherical aberration
coma
astigmatism
curvature of field
distortion
longitudinal chromatic aberration
lateral chromatic aberration

Distortion is non-uniform magnification across the field of view.
Magnification usually varies slightly as the angle off the optical axis
varies.  This gives rise to "barrel" (negative) or "pincushion" (positive)
distortion, named for what the image of a square looks like when subjected
to said distortion.

For camera lenses, you can safely assume the distortion is very small except
for wide angle lenses (which is probably the case you were interested in).  I
expect that lens manufacturers would be reluctant to release the actual
numbers describing their lens's performance, since most people couldn't tell
the difference anyway.

For a lens focused at infinity and flat film,

	fov = 2 * atan( film_width / (2 * focal_length) )

The only difference between a distortionless lens and an ideal pinhole camera
with respect to field of view is that if the lens is focused at a finite
distance, you replace focal_length in the above formula with the lens-to-image
(film) distance:  1/(1/focal_length - 1/object_distance).

Chuck Grant
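
The pinhole formula and finite-focus correction above are easy to check
numerically.  A minimal sketch (Python; the function name is mine, not from
the posting):

```python
import math

def field_of_view_deg(film_width, focal_length, object_distance=None):
    """Full field of view in degrees for a distortionless lens on flat film.

    For a lens focused at a finite object_distance, the effective
    lens-to-film distance 1/(1/f - 1/d) replaces the focal length.
    """
    image_distance = focal_length
    if object_distance is not None:
        # Thin lens equation: 1/f = 1/object_distance + 1/image_distance
        image_distance = 1.0 / (1.0 / focal_length - 1.0 / object_distance)
    return math.degrees(2.0 * math.atan(film_width / (2.0 * image_distance)))

# A 50mm lens on a 34mm-wide frame, focused at infinity, gives ~37.6 degrees.
print(round(field_of_view_deg(34.0, 50.0), 1))
```

Note that focusing closer always narrows the field slightly, since the
lens-to-film distance grows.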

--------

>Related question: is there a formula relating camera lens
>focal length and angle of view?

The way this is done for a rectilinear lens (everything one ever encounters
short of optics with high distortions or a fisheye) is based on the size of
the image formed.  In an "ideal" lens -- a pin-hole is a nice first
approximation -- the field is as large as you want -- the negative or film
holder or other physical limitation defines the "field stop".  In a more
complex lens the diameter of some internal element might serve to define the
field stop -- a 50mm lens for a 35mm SLR would probably not be able to produce
a ~250 mm image circle if mounted on a large format camera's lens board.  If
it could, it would make a tremendous wide angle!

If the linear field F is given, then you can use the dimensionless relation:

		 2 tan(A/2) = F/EFL

to solve for angular field A or effective focal length.

This says (for instance) that a conventional SLR with 43 mm film diagonal (35
mm film is 24 mm x 36 mm; hypot(24,36)~=43) will cover a reasonable, mildly
wide-angle 53 degree (diagonally) angular field with a 43 mm lens installed.

Alan Paeth

--------

So using the (theoretical! yay!) relation

	angle = 2 * atan(film_size/2 / focal_length)

with 35mm film format, which I'm taking to be 34mm wide by 23mm high (Greg
Ward's numbers; Alan Paeth's numbers differ slightly:  24x36), we get the
following correspondence for some common lens focal lengths:

	focal length (mm)	24	35	50	80	200

	horizontal angle (deg)	70.6	51.8	37.6	24.0	9.72

	vertical angle (deg)	51.2	36.4	25.9	16.4	6.58

Paul Heckbert
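
Paul's table can be regenerated from the same relation (a quick sketch, not
part of the original posting; the 34mm x 23mm frame size follows Greg Ward's
numbers):

```python
import math

def view_angle_deg(film_size_mm, focal_length_mm):
    # angle = 2 * atan(film_size/2 / focal_length)
    return math.degrees(2.0 * math.atan(film_size_mm / 2.0 / focal_length_mm))

for f in (24, 35, 50, 80, 200):
    h = view_angle_deg(34.0, f)  # horizontal, 34mm frame width
    v = view_angle_deg(23.0, f)  # vertical, 23mm frame height
    print(f"{f:3}mm lens: {h:5.1f} x {v:5.1f} degrees")
```

The 50mm line reproduces the familiar 37.6 x 25.9 degree view.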

-------------------------------------------------------------------------------

Shadow Testing Simplification, by Tom Wilson

Out of nowhere, I have come up with a new (?) speedup (?) technique for
shadow testing in ray tracing.  I don't have the kind of code necessary to
test it.  It is kind of the inverse of a bounding volume:

Consider an expensive-to-intersect object (for this example, a convex
solid; I'll generalize later).  Now typically a bounding volume tells you
when a ray misses the object or parts of the object (for hierarchies).
Eventually you have to intersect the object itself (I'm assuming the object
hasn't been broken down into polygons or whatever).  Now what would be good is
a volume that, if intersected, would tell you the object is hit without
intersecting the object at all (of course it won't always work).  Suppose for
this example we have a bloboid sphere or a megafaced convex polyhedron and we
put a sphere inside the object such that the entire sphere is enclosed in the
real object.  Now if a ray hits the sphere, it definitely hits the object.  If
the ray misses the sphere, it may still hit the object somewhere between the
bounding volume and the "inner tube" volume.  Now this scheme is ideal for
shadow testing, since where the object is hit by the ray is usually
irrelevant.  A "normal" ray needs the intersection point, so this scheme may
not help much (I don't know).

  :::::::::::::            An ugly (and pretty bad) 2D example:
 :       000   :
:      00   00  :          0's define the sphere inside the object :'s
:     0       *--:
:    0         0 -:--
 :   0         0  :  ----
  :   0       0  :       ----<   Shadow ray that hits at *
   ::: 00   00  :
      :::000   :
	 ::::::

The object could really be any type of object (not just convex) provided a
sufficient inner tube volume can be constructed.  It's really the same problem
as the bounding volume:  the better the bounding volume (inner tube) the more
rays that are culled as misses (hits).

Is it feasible?  I don't know.  I don't have any complicated-objects code in
my tracer, so I can't test yet (without writing the code first obviously).
Perhaps someone who has the type of setup necessary can incorporate this and
find out (for that matter, has anyone already done it?).  If it's a good
scheme, maybe I should change my thesis 8-) (I really just wanted to get into
the next issue of the RT News).  Please give some feedback.
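
For concreteness, here is how the classification might look with spheres for
both the outer bound and the inner "tube" (my own sketch, assuming concentric
spheres; t >= 0 checks are omitted for brevity):

```python
def hits_sphere(origin, direction, center, radius):
    # Standard ray-sphere discriminant test (treats the ray as a full line).
    oc = [o - c for o, c in zip(origin, center)]
    a = sum(d * d for d in direction)
    b = 2.0 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    return b * b - 4.0 * a * c >= 0.0

def shadow_ray_blocked(origin, direction, center, r_outer, r_inner,
                       intersect_object):
    """Three-way test: cheap reject, cheap accept, else the expensive object."""
    if not hits_sphere(origin, direction, center, r_outer):
        return False    # missed the bounding sphere: missed the object
    if hits_sphere(origin, direction, center, r_inner):
        return True     # hit the inner sphere: the object is hit for sure
    return intersect_object(origin, direction)  # shell region: pay full price
```

Only rays that thread the shell between the two spheres ever reach the
expensive test.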

--------

Reply from Mark VandeWettering:

My guess is that while this would work, it is not necessarily a good thing to
do.  Whether it is an actual speedup is basically a result of how often you
hit the internal bound versus how often you hit the object.  As a crude guess,
we might imagine that the probability of hitting an object is roughly
proportional to its surface area.  Further suppose that our complex object has
surface area A and is, for the sake of argument, spherical.  As a typical
example of our inner bound, let's assume it is a sphere as well, but with
radius 10% smaller.  (This seems pretty tight; I bet you could not do much
better for real objects.)

If the object has unit radius, its surface area is 4*Pi.  Our "unbounding
sphere" has surface area .81 * 4 * Pi.  So, we can assume for this argument
that 81% of all rays that penetrate the object will penetrate the inner
volume.

Examining the costs:

	81% of the time: we just intersect the cheap volume
	19% of the time: we need to intersect BOTH volumes to determine
			 whether or not the object intersects the ray.


To generalize: let

	C(cheap) be the cost of intersecting the cheap volume
	C(object) be the cost of intersecting the real object
	p be the probability of a ray which hits the object hitting the
	  inner cheap bounding volume.

Then, this scheme is a potential speedup if

	p C(cheap) + (1-p) * (C(cheap) + C(object)) < C(object)

Is this a speedup?  I dunno.  One scene in Eric Haines' SPD data base includes
a large gear with 144 edges.  The polygon intersection routine I use is linear
in the number of edges for a polygon.  The majority of rays fall within a disc
which probably accounts for 90% or more of the total area of the face.  I am
pretty sure that a scheme like you suggest would be useful for this case.

But in general?  I dunno.  It really depends on the probability p.  For
most objects, my guess is you might be able to get p = 0.5, which would make
the inequality something like
	C(cheap) + 0.5 * C(object) < C(object)
or
	C(cheap) < 0.5 * C(object)

which actually does seem to make it attractive.

In general, if the ratio of C(cheap)/C(object) = r, then we can solve for
the probability p needed to make this scheme profitable.

p C(cheap) + (1-p) C(cheap) + (1-p) C(object) < C(object)

p C(cheap) + C(cheap) - p C(cheap) + C(object) - p C(object) < C(object)

C(cheap) + C(object) - p C(object) < C(object)

- p C(object) < -C (cheap)

p > r

What this means is that if the ratio of cheap cost to expensive cost is 1/10,
then we need a probability greater than 10% to achieve a speedup.

Hmmm.  This probably bears looking into.  Any comments?
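
Mark's expected-cost inequality is simple enough to play with directly (a toy
model of mine, not code from the posting):

```python
def inner_bound_pays_off(c_cheap, c_object, p):
    """True if the inner-bound scheme beats always testing the object.

    With probability p the cheap inner volume is hit and suffices;
    otherwise both the cheap volume and the real object get intersected.
    """
    expected = p * c_cheap + (1.0 - p) * (c_cheap + c_object)
    return expected < c_object

# Break-even reduces to p > r, where r = c_cheap / c_object:
print(inner_bound_pays_off(1.0, 10.0, 0.20))  # p above r = 0.1 -> True
print(inner_bound_pays_off(1.0, 10.0, 0.05))  # p below r -> False
```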

--------

John Gregor comments:

Well, to add another data point, the Wavefront software has both a "trace
object" and a "shadow object" associated with each object being rendered.

The trace object is the object used in reflections.  Usually these objects
are left undefined, or point to a lower-complexity but similarly shaped
object.  Since shadows and objects appearing as reflections are typically
small and are there to provide visual cues, and since Wavefront's running
time is decidedly non-linear in the number of polygons in the scene, this is
a win.

Also, you can achieve some neat tricks using this.  You can have an object
cast a shadow from a completely different object (e.g. a kitten casting a
lion's shadow) or appearing as something else in mirrors.  Using the same
object with a different offset or scale can lead to some pretty neat effects
occasionally also.

--------

Rick Speer writes:

I'd like to add my $0.02 and note that W.  R.  Franklin examined this idea in
a non-raytracing but similar context in the following paper-

  "3D Geometric Databases using Hierarchies of Inscribing Boxes", pp.  173-80
  in Proceedings of "CMCCS '81" (this is what the Canadian conference now
  known as 'Graphics Interface' used to be called; these volumes are available
  from the Canadian Information Processing Society, Toronto, if anyone's
  interested).

To be more specific, let me quote a bit from the paper's abstract-

  "Hierarchical tree structured databases are an efficient way of representing
   scenes with elaborate detail of varying scales.  Storing a circumscribing
   box around each object is a well known method of testing whether the object
   intersects or obstructs any other objects.  This paper proposes another
   aid:  an inscribing box.  The inbox is a polyhedron that is completely
   contained in the object.  It should be as large as is easy to
   determine...  The inbox speeds up visibility tests [in the following
   way:...  ] [He concludes,] By combining inboxes and circumboxes, it is
   possible to calculate the visible surfaces of a hierarchical scene in time
   linear in the visible complexity of the scene..."

In his followup message, Mark goes on to note that this issue really needs to
be studied on test scenes using a probabilistic analysis.  I hope it wouldn't
be too immodest of me to note that such an analysis appears in the paper,

  "A Theoretical and Empirical Analysis of Coherent Ray-tracing",
  L. R. Speer, T. DeRose and B. Barsky, pp. 11-25 in the Proceed-
  ings of Graphics Interface '85, CIPS, Toronto, 1985.

-------------------------------------------------------------------------------

SIMD Parallel Ray Tracing, by George Kyriazis, Rick Speer

In article <9308@mirsa.inria.fr> kjartan@puce.inria.fr (Kjartan Emilson) writes:
>
>Does anybody have an algorithm for a massively parallel raytracer,
>which could for example run on a Connection Machine, i.e a machine
>with a 'single instruction - many data' architecture ?
>

The ray-tracing algorithm is usually parallelized a ray at a time, which makes
it suitable for MIMD machines.  If you want to run it on a SIMD machine you
have a problem, since you have to advance all rays at the same time, then
filter out which rays do not need a second reflection, trace the second
level, and so on.
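
A toy version of that generation-at-a-time loop (my own illustration, with
plain Python lists standing in for SIMD masks; the shade/spawn callbacks are
made-up names):

```python
def trace_simd_style(rays, max_depth, shade, spawn_secondary):
    """Advance every active ray in lockstep, one reflection level at a time.

    shade(ray) returns (contribution, needs_secondary); spawn_secondary(ray)
    builds the next-level ray.  On a real SIMD machine the list filtering
    below would be a per-processor activity mask instead.
    """
    total = 0.0
    active = list(rays)
    for _ in range(max_depth):
        if not active:
            break
        step = [shade(r) for r in active]          # all rays advance together
        total += sum(contrib for contrib, _ in step)
        active = [spawn_secondary(r)               # filter: keep only rays
                  for r, (_, more) in zip(active, step) if more]
    return total
```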

I believe that Scott Whitman (at uiuc?) has written a ray-tracer for a SIMD
machine.  I don't remember any details.  Pick up the SIGGRAPH proceedings for
either '88 or '89 (I don't remember which) on ray-tracing or parallel
computer architectures for computer graphics and you are going to see his name
there.

--------

Rick Speer writes:

A fair amount of literature is beginning to appear on this subject.
The references below should get you started.

  "3D Image Synthesis on the Connection Machine", Franklin C. Crow,
  Gary Demos, Jim Hardy, John McLaughlin and Karl Sims, pp. 254-69
  in Parallel Processing for Computer Vision and Display, P. M. Dew,
  T. R. Heywood and R. A. Earnshaw, Eds., Addison-Wesley, 1989.

  "Ray Tracing on a Connection Machine", H. C. Delaney, pp. 659-67
  in the Proceedings of the 1988 International Conference on Super-
  computing, ACM, July, 1988.

  "Ray-tracing Parallelization on a SIMD/SPMD Machine", PhD. Thesis,
  Marie-Claire Forgue, Laboratoire de Signaux et Systemes, Universite
  de Nice, Nice, France, September, 1988.

  "Distributed Ray Tracing Using an SIMD Processor Array", A N. S.
  Williams, B. F. Buxton and H. Buxton, pp. 703-25 in Theoretical
  Foundations of Computer Graphics and CAD, R. A. Earnshaw, Ed.,
  Springer-Verlag, 1988.

-------------------------------------------------------------------------------

Rayscene Animator, by Jari Kähkönen

  Thanks to all the people who voted for Rayscene.  It's now available from
tolsun.oulu.fi.  The version is 1.3, the first published version of Rayscene,
so the docs are not perfect.  But we are working on it...

  The stuff is in the directory /pub/rayscene.  The directory also contains
two animations for the PC and an animation player (Autodesk Animator
AAPLAY.EXE).  Those animations were made with Rayscene and the DKB raytracer.
Enjoy.  Stuff for the Amiga should also be available soon.

  More information from hole@rieska.oulu.fi or oldfox@rieska.oulu.fi

--------

  There are a few animation demos, made with the "Rayscene" animation utility
FTPable from tolsun.oulu.fi (128.214.5.6) in the directory
/pub/rayscene/anim.PC (for "Autodesk Animator" on the PC) and in
/pub/rayscene/anim.amiga (for "Movie" on the Amiga).  Here is a short
description of the animation files:

PC-stuff:
aaplay.lzh    Autodesk Animator animation player, freely distributable
rdice.lzh     One mirrored die rolling over a chess board
2dice.lzh     Two dice ...
movdice.lzh   Same as above, but the viewpoint moves
bouncing.lzh  Mirrored ball bouncing over a chess board
2balls.lzh    Two mirrored, color-changing balls moving over a wavy surface.
              Animation data files created with Rayscene's soon-to-be-released
              utility, "Sin".  Data file and array file by Panu Hassi.  This
              animation (Amiga version) can also be found in anim.amiga.
              NOTE: When Sin is released with Rayscene version 1.31, the
              datafile and arrayfile are included (and explained too).
vortdemo.lzh  Combination of three different animations: cylinder, mech and
              chess.  These animations are usually distributed with the VORT
              tracer.
              NOTE: No Rayscene used.  Just for fun... and there seems to be a
              demand for raytraced animations...


Amiga:

 At the moment, only 2balls is available. There's going to be more...

 Rayscene was released only a few weeks ago, so there are going to be more
animations here.  Email me for more information.

NOTICE:  Rayscene is now available from iear.arts.rpi.edu also.  (Directory
/pub/graphics/ray/rayscene).  Animations are available only from tolsun...

P.S. DKBTracer was used for rendering.

-------------------------------------------------------------------------------

SIPP 2.0 3d Rendering Package, by Jonas Yngvesson and Inge Wallin

We just posted the source code to sipp 2.0 in comp.sources.misc.  It may be a
while before it appears due to moderation, though.  Here is an excerpt from
the README file:

*******************************************************************
	     sipp 2.0  --  3d rendering package

	     by         Jonas Yngvesson   jonas-y@isy.liu.se
			Inge Wallin       ingwa@isy.liu.se

	     Linkoping Institute of Technology
	     Sweden
*******************************************************************

This is the beta-test release of version 2.0 of SIPP, the SImple Polygon
Processor.  SIPP is a library for creating 3-dimensional scenes and rendering
them using a scan-line z-buffer algorithm.  A scene is built up of objects
which can be transformed with rotation, translation and scaling.  The objects
form hierarchies where each object can have arbitrarily many subobjects and
subsurfaces.  A surface is a number of connected polygons which are rendered
with Phong interpolation of the surface normals.

The library has an internal database for the objects that are to be rendered.
Objects can be installed in, and removed from, this database at any time.

The library also provides 3-dimensional texture mapping with automatic
interpolation of texture coordinates.  Simple anti-aliasing is performed
through double oversampling.  A scene can be illuminated by an arbitrary
number of light sources.  A basic shading algorithm is provided with the
library, but the user can also use his own shading algorithms for each surface
to produce special effects.  Images are produced in the Portable Pixmap format
(ppm) for which many utilities exist.

-------------------------------------------------------------------------------

3DDDA Comments, by John Spackman

3d-dda's are prone to many special cases, especially when line slopes approach
zero (so that the inverse line slope, generally used as the increment a la
ARTS, approaches infinity).
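
One common way around the infinite inverse slope, short of a full
Bresenham-style integer formulation, is to substitute +infinity for the
per-axis step distances whenever a direction component is zero.  A sketch in
the Amanatides & Woo grid-traversal style (mine, not Spackman's algorithm):

```python
import math

def voxels_along_ray(origin, direction, grid_size):
    """Walk the unit voxels a ray visits inside an axis-aligned grid."""
    x = [int(math.floor(o)) for o in origin]       # starting voxel
    step, t_max, t_delta = [0] * 3, [0.0] * 3, [0.0] * 3
    for i in range(3):
        d = direction[i]
        if d != 0.0:
            step[i] = 1 if d > 0 else -1
            boundary = x[i] + 1 if d > 0 else x[i]
            t_max[i] = (boundary - origin[i]) / d  # distance to first crossing
            t_delta[i] = abs(1.0 / d)              # distance between crossings
        else:
            # Zero slope: this axis never crosses a boundary.  Using +inf
            # here is what keeps the inverse slope from blowing up.
            t_max[i] = t_delta[i] = math.inf
    visited = []
    while all(0 <= x[i] < grid_size[i] for i in range(3)):
        visited.append(tuple(x))
        axis = min(range(3), key=lambda i: t_max[i])  # nearest crossing
        x[axis] += step[axis]
        t_max[axis] += t_delta[axis]
    return visited
```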

A more numerically stable & hence more robust generalization of Bresenham's
algorithm to 3D, which navigates ALL the voxels intersected by the ray is
described in

	`Scene Decompositions for Accelerated Ray Tracing', John Spackman,
	PhD Thesis The University of Bath, UK, 1990. Available as Bath Computer
	Science Technical Report 90/33

Copies can be ordered from Angela Coburn at JANET:  `amc@uk.ac.bath.maths' or
ARPA:  `amc%maths.bath.ac.uk@nsfnet-relay.ac.uk'.  Any surface mail should be
addressed to:-

	Angela Coburn
	Mathematical Sciences
	The University of Bath
	Claverton Down
	Bath
	AVON
	BA2 7AY
	UK

However, uniform spatial division is generally wasteful of memory, being
non-adaptive.  I prefer octrees - the above thesis describes an incremental
ray navigation of octrees & techniques for octree construction (& indeed for
building uniform grids).  - `My way's better than everyone else's' :-)

-------------------------------------------------------------------------------

Radiosity Implementation Available via FTP, by Sumant (sumant@shakti.ernet.in)

I have carried out some Radiosity implementations as part of the investigation
into light equilibrium based rendering methods for my doctoral work.  To test
the implementation, I have used a very simplified scene modeler, BOX.  BOX
supports only the box as a modeling primitive, and translation and scaling as
the modeling transformations.

I am looking for some good planar surface model data.  Interested netters may
please send me data with the data format.  I will try to provide the input
interface for the sent data formats.

The implementation is available at our FTP site (pub/Rad.0.1.tar.Z at
sbs.ncst.ernet.in ip# 144.16.1.21).  In case of any difficulty in getting from
the FTP site, please send me mail on sumant@shakti.ernet.in.  I will send the
UUENCODED compressed tar file.

The system has been tested on both SUN and VAX and is expected to run on any
UNIX platform.

Further, I invite comments and suggestions from the interested users.


sumant(sumant@shakti.ernet.in)
national centre for software technology, bombay, india

-------------------------------------------------------------------------------

Dirt, by Paul Crowley

I've been thinking about this one, and dirt is a nightmare of the first order.
No, I'm not talking about my personal hygiene...

I was in the kitchen (this _does_ have CG content, bear with me) and I thought
"Wouldn't this be a nice interesting scene to try and render.  The reflection
of the marble back wall against the taps, the kettle, the shadows from the
fluorescent strip lights, the cups...  fun".

Then I looked at the dirt.  It was a pretty clean kitchen, as kitchens go.
There was no mould on the plates, as there is in the kitchen at home (it's my
flatmate's fault.  I eat out now as a survival precaution).  But, for example,
there was a whole _variety_ of crumbs on the floor.  Breadcrumbs, little bits
of onion peel, poppy seeds, a discarded match...  anyone looking for
photorealism would have to come up with a description of each of these.  Or
the dirt on the iron - you couldn't just throw some "turbulence" function or
fractals at this, because the dirt on irons isn't uniform - you'd have to go
into the physics of how dirt forms on irons.  I'm not sure if that physics is
even fully understood.  There were little pencil marks on the cupboard doors
where the carpenters marked them - if you had been rendering it, would you
have remembered those?  A dribble of water down one side of a cup.  A blue
stain on the carpet whose cause is a mystery to me.

The trouble with dirt is that it's a result of the history of those objects,
and not a static thing.  That chip in the sideboard is in _that_ position
because your flatmate was trying to get a cold sausage out of the fridge, but
she was blind drunk and accidentally drove a waterpipe into it while heading
for the ground at high speed.  You going to write a simulation of that?  Will
you remember to include the little bit of sweater that got caught in the chip
last Tuesday?

Photorealism is _really_ hard.

\/ o\ Paul Crowley aipdc@uk.ac.ed.castle
/\__/ Trust me, I know what I'm doing.

-------------------------------------------------------------------------------

Thomson Digital Images University Donation Program, by Michael Takayama
	(tak@tce.com)

Thomson Digital Images (TDI) America is offering a special donation program to
qualified educational facilities in the U.S.  and Canada interested in 3D
computer graphics/animation for Industrial Design, Scientific Visualization,
and Animation.  TDI Explore v2.3 software is a high-end workstation-based
package including sophisticated 3D modeling, animation, and rendering
capabilities.  Some features of the software include NURBS modeling, inverse-
kinematics animation, particle-system animation, interactive 3D texture
editing, fast scan-line rendering and ray-tracing.  TDI is a world leader in
the high-end 3D graphics/animation market with an installed base of more than
450 systems.

*******************************************************************************

			  UNIVERSITY DONATION PROGRAM
			  ---------------------------

If you have ever thought about working with 3D, now is the time.  TDI America
will donate the EXPLORE software to universities and colleges in the United
States and Canada to give students access to the newest developments in the
world of 3D computer animation and graphics.

There are certain criteria which must be adhered to in order for a college/
university to be a successful candidate for this program:

	1.  EXPLORE must be used for research or educational projects.

	2.  The software must be made available to a minimum of 10 students
	    per year.

	3.  The software must not be used for commercial purposes without
	    the express written permission of TDI America.

	4.  The school must appoint one person who will act as the direct
	    contact with TDI America.

	5.  A software maintenance agreement and a software license agreement
	    must be signed and returned to TDI America.

Because our university donation program provides software free of charge, the
only costs involved are for maintenance (which includes a one year warranty,
telephone support and free upgrades) and training.

TDI runs on the full range of Silicon Graphics workstations and the IBM RISC
6000.  The Silicon Graphics hardware can be ordered from TDI at our
educational discount prices for your convenience.

To demonstrate your interest in our university donation program, please send
me a letter indicating your intended use of the software, what workstation(s)
you have or would like to purchase, and your time frame.  I will be happy to
put together a price proposal for you, to arrange for a demo, and to discuss
with you the applications of our software for your school.

I look forward to speaking with you about EXPLORE and our donation program.

Sincerely,

Karen Lazar
Director, University Donation Program
TDI America

*******************************************************************************

For more information, contact:

		   Karen Lazar
		   Director, University Donation Program
		   TDI America
		   1270 Avenue of the Americas
		   Suite 508
		   New York, NY 10020
		   TEL:  212/247-1950
		   FAX:  212/247-1957

*******************************************************************************
Michael Takayama                                      email:  tak@tce.com
Technical Support Manager
TDI America

--------

>Great initiative. Any ideas to make the donation valid outside USA and Canada,
>like making it worldwide? Whom should I contact to get this information?

TDI America has made this donation offer available only to educational
facilities in the United States and Canada.  For those of you located
elsewhere in North or South America, you can ask if the program can be made
available to you.

Contact:	Karen Lazar [etc...]

For those of you located in Europe, you can try contacting the TDI home office
in Paris, France.  I believe that they have a similar program available in
Europe.

Contact:	Thomson Digital Image
		22 rue Hegesippe-Moreau
		75018 Paris
		FRANCE
		TEL:  33-1-43-87-58-58
		FAX:  33-1-43-87-61-11

-------------------------------------------------------------------------------

A Brief Summary of Ray Tracing Related Stuff, Angus Y. Montgomery
    (angus@godzilla.cgl.rmit.oz.au)

I have found (been given/know of) these raytracers (rt's), 3d object file
formats, and object file databases (db's).  If you know of other non-trivial
rt's available please let me know asap.

NOTE:  before madly ftp-ing in these rt's and stuff, check out eric
(erich@eye.com) haines' rt-related ftp site list for a site near _you_.  his
list is fairly comprehensive and up to date, and is what i used to find most
of the following.


x : before the source means that it is a duplicate of the most
   up-to-date i have downloaded.
xx : dated compared to another.
* this is the _source_ site - the rt from there couldn't be any fresher

rt : (a.k.a and de-acronymation) my source.


   >>>rts(ftp and by request):
art : (a rt, VORT) *|munnari.oz.au
brl : (ballistic research lab) more info at hanauma.stanford.edu
dbw : (dist render, distpro, d.b.wecker) hanauma.stanford.edu x|munnari.oz.au
drt : (distance rt, rayman) iear.arts.rpi.edu
dkb : (d.k.buck) iear.arts.rpi.edu 2.01 (or .10 ?) xx|cs.uoregon.edu (*alfred)
mtv : x|iear.arts.rpi.edu *|cs.uoregon.edu uses nff
ohta : iear.arts.rpi.edu xx|cs.uoregon.edu
pray : (vm_pray) cs.uoregon.edu (*irisa)
prt : (parallel) from comp.graphics/Kory Hamzeh
qrt : (quik) iear.arts.rpi.edu xx|cs.uoregon.edu
rad : from Ning Zhang.
radiance : from Greg Ward
ray : ucsd.edu v3.0 runs off nff :is this an earlier mtv?
rayshade  : xx|iear.arts.rpi.edu cs.uoregon.edu v3.0 (*weedeater)
rpi : (Kyriazis, pxm, vmpxm, gk aargh!) do differ, but in essence the same
   *|iear.arts.rpi.edu v2.0 x|cs.uoregon.edu xx|ucsd.edu


   >>>object databases:
NFF : (neutral file format) gondwana.ecr.mu.oz.au
OFF : (object file format) gondwana.ecr.mu.oz.au
poly : polyhedra db
spd v3 : (standard procedural db) cs.uoregon.edu produces NFF


   >>>other formats:
AutoCAD : cad $
GDS things file : (thf) cad
IGES : (initial graphics exchange standard)
architron : (arch) cad
imagine : amiga $
irender : (slp) sgi $
renderman : pixar's
rtk : (rt kernel) $
turbo silver : amiga $
videoscape3D : amiga


   >>>(not what i wanted : trivial or limited)
fritz
good
micro : (uray, DBW_uray) a smaller version of dbw
mini : (paul)
mintrace : (shortest)
nonfinished
obfus
ps
   >>>(not rts)
rayscene
wave

-------------------------------------------------------------------------------
END OF RTNEWS

From wbt@beauty.graphics.cornell.edu Wed Jul 17 13:54:36 1991
Return-Path: <wbt@beauty.graphics.cornell.edu>
Received: from cs.rpi.edu by iear.arts.rpi.edu (3.2/HUB10);
	id AA06436; Wed, 17 Jul 91 13:54:17 EDT for kyriazis
Received: from beauty.graphics.cornell.edu by cs.rpi.edu (4.1/1.2-RPI-CS-Dept)
	id AA15632; Wed, 17 Jul 91 13:52:12 EDT
Message-Id: <9107171752.AA15632@cs.rpi.edu>
Received: by beauty.graphics.cornell.edu
	(15.11/15.6) id AA10122; Wed, 17 Jul 91 13:41:26 edt
Date: Wed, 17 Jul 91 13:41:26 edt
From: Ben Trumbore <wbt@beauty.graphics.cornell.edu>
To: atc@cs.utexas.edu, barr@csvax.cs.caltech.edu, barsky@miro.berkeley.edu,
        bferrer@bonnie.ics.uci.edu, fornax!chapman, chet@netcom.com,
        ckchee@dgp.toronto.edu, cychosz@ecn.purdue.edu, daniel@apollo.com,
        dgh@munnari.oz.au, dk@csvax.cs.caltech.edu, flynn@cse.nd.edu,
        glassner.pa@xerox.com, grant@delvalle.llnl.gov, gray@rhea.CRAY.COM,
        green@compsci.bristol.ac.uk, hanrahan@princeton.edu, hench@cs.unc.edu,
        hohmeyer@miro.berkeley.edu, hplabs!dana!mrk,
        hultquis@prandtl.nas.nasa.gov, jakob@humus.huji.ac.il,
        jeff@hamlet.caltech.edu, johnf@apollo.com, joy@ucdavis.edu,
        kolb@yale.edu, kory@avatar.com, kyriazis@cs.rpi.edu,
        lister@dg-rtp.dg.com, litwinow@apple.com, lytle@tcgould.tn.cornell.edu,
        markc@emx.utexas.edu, mcohen@cs.utah.edu, mja@sierra.llnl.gov,
        mplevine@phoenix.princeton.edu, paul@sgi.com, ph@miro.berkeley.edu,
        ray-tracing-news@wisdom.graphics.cornell.edu,
        raycasting@duke.cs.duke.edu, raytrace@cpsc.ucalgary.ca,
        raytrace@hpgtdadm.fc.hp.com, rgb@caen.engin.umich.edu,
        rrg@acf8.nyu.edu, rt-colorado@anchor.colorado.edu,
        tim@csvax.cs.caltech.edu, tmalley@dsd.es.com, uunet!ithaca!carl,
        vedge!kardan%larry.mcrcim.mcgill.edu, zmel02@image.trc.amoco.com
Subject: Ray Tracing News
Status: OR

So, if you already saw this on comp.graphics, then please write me (don't use
reply - you'll get Ben Trumbore, trusty electronic distributor) and say so.
If you frequently read comp.graphics, I can put you on the contact list (only)
and save myself some time mailing this thing out (and fixing the bounces).
Thanks, Eric (erich@eye.com)


 _ __                 ______                         _ __
' )  )                  /                           ' )  )
 /--' __.  __  ,     --/ __  __.  _. o ____  _,      /  / _  , , , _
/  \_(_/|_/ (_/_    (_/ / (_(_/|_(__<_/ / <_(_)_    /  (_</_(_(_/_/_)_
	     /                               /|
	    '                               |/

			"Light Makes Right"

			  July 15, 1991
			Volume 4, Number 2

Compiled by Eric Haines, 3D/Eye Inc, 2359 Triphammer Rd, Ithaca, NY 14850
	erich@eye.com or uupsi!eye!erich
Archive locations: anonymous FTP at weedeater.math.yale.edu [130.132.23.17],
	/pub/RTNews, and others.
UUCP archive access: write Kory Hamzeh (quad.com!avatar!kory) for info.

Contents:
    Introduction - SIGGRAPH get-together, etc
    New People, Address Changes, etc
    Ray Tracing Related FTP sites, compiled by Eric Haines
    Ray Tracing, the way I do it, by Haakan 'Zap' Andersson
    More Thoughts on Anti-Aliasing, by John Woolverton
    Spatial Measures for Accelerated Ray Tracing, by John Spackman
    Barcelona Workshop Summary, by Arjan Kok
    Book Announcement, from Stuart Green
    Spiral Scene Generator, by Tom Wilson
    An Announcement From The 'Paper Bank' Project, by Juhana Kouhia
    Proceedings of Graphics Interface '91 Availability, by Irene Gargantini
    NFF Previewers, by Bernie Kirby, Patrick Flynn, Mike Gigante, Eric Haines
    RayTracker Demos Available, by Jari Kahkonen
    RayTracker Info, by Zap Andersson

-------------------------------------------------------------------------------

Introduction

    I admit it, I've strayed from the One True Way of pure ray tracing:  I've
been dabbling in radiosity.  We recently finished a film of Ronchamp chapel
illuminated with radiosity techniques, rendered with an a-buffer, and with
stochastic ray traced beams of lights streaming in the windows, coming to a
film show near you (SIGGRAPH and Eurographics, I hope).  My one piece of
advice from doing the film is this:  try to make everything originate from
you.  The rights to music are amazingly expensive (in our case, thousands of
dollars for just one showing at SIGGRAPH), and tracking down and obtaining
permission to use quotes, drawings, or photos can be a real hassle.

    If you want to know how we did the beams of light effect, get the
"Frontiers in Rendering" course notes (or even go to the course).  There
should be some good & weird topics at this course, such as Charlie Gunn's
animations of hyperbolic space (where dodecahedra meet 8 at a corner) and
Peter Kochevar's shading computations via cellular automata ("the lunatic
fringe of rendering", as he puts it).  Me, I'm going to talk about ray
casting for radiosity, and also all the things that drive me crazy about
radiosity (with lots of dirty laundry pictures showing where and how radiosity
falls apart, and some solutions).

    As usual, there will be a ray tracing researchers get-together at SIGGRAPH,
open to anyone.  On Thursday from 5:15 to 6:30 at room N223 in the convention
center we'll meet and gab about this and that.  No planned activities, just a
place and time to connect names to faces.

    When our son Ryan was born, Zap Andersson sent a nice little GIF from his
ray tracer of a cigar in an ashtray as congratulations; the texture mapped
smoke was particularly well done.  Zap was just married, so I was able to send
two interlinked rings with his wife's & his names inscribed in them.
Definitely a future trend:  graphical greeting cards/presents by e-mail.

    The wedding rings picture was also my first worthwhile test image using
the two-pass software we've developed to blend radiosity and ray-tracing.  The
nice thing about two-pass algorithms is that you generally get soft shadows
cheaply, as the radiosity mesh picks up the shadows adaptively and so saves
the ray tracer much shadow-testing time.  You also get all the nice features
of ray tracing, including exact geometry:  spheres are truly spherical,
instead of straight radiosity's polygonalized representation.  Combining the
sampling mesh from radiosity with the exact geometry handling of ray tracing
usually works out very well.

    Just so it is not buried in the bits:  note that Greg Ward's Radiance ray
tracer with radiosity effects built in (see his SIGGRAPH paper) is now
available via FTP.  As important, he has also begun a directory of "test"
radiosity scenes which researchers can use to compare the radiosity solutions
their systems generate.  For those of you trying to write your own
radiosity system this is a valuable tool for checking if you're getting
anything near the right answer.  Finally, Greg has also collected various
object models and made them available.

    Books of note:  _Digital Image Warping_ by George Wolberg (IEEE Computer
Society Press Monograph, 1990) is very handy if you're involved in texture
mapping.  Many different topics are covered, with good illustrations and
sample code.  It's not the ultimate book from a theory standpoint, but is very
practical and understandable.  Recommended.

    Last issue someone mentioned the book _Fractal Programming and Ray Tracing
with C++_ by Roger Stevens (M&T Books for $30, $20 more for the disk of code).
I bought it, and cannot recommend it to the general reader.  If you are
getting your feet wet with C++ it might be of some interest, and beginning PC
programmers might find bits of it useful.  The author takes Steve Koren's QRT
input language and develops a C++ ray tracer for it.  One major problem with
such a ray tracer is that there is no definition for a general polygon; only
triangles and parallelograms are provided.  There's a lot of padding in the
book, with 20 or 30 page stretches of nothing but code listings, and 70 pages
of data file listings (42 pages for his version of "sphereflake" alone - he
could have printed the listings for the whole SPD package in that space!).
The index is somewhat dysfunctional, e.g. minor variations on the words
"bounding boxes" are given separate listings, some with wrong page numbers.
References to almost all other work in ray tracing are missing, and the
interested reader is given almost no help on where to go for more information.
I wish it had been better.

    Coming out at SIGGRAPH is "Graphics Gems II", edited by Jim Arvo this time
around.  Does anyone else know of good books to look for there?  The other
SIGGRAPH question:  any guesses on how many times humorous references to
"Monte Carlo" techniques will be made?

    Get a job:  one subscriber pointed out an interesting relationship between
public domain ray tracers and future employment.  The authors of two of the
more popular public domain ray tracers (MTV by Mark VandeWettering and
RayShade by Craig Kolb) are currently employed by PIXAR.  Now if David Buck of
DKBtrace gets a job there (no, I have no idea if he's even looking for work)
this relationship can become a firmly established principle...

    Finally:  I've pretty much stopped culling USENET for news in
comp.graphics, figuring that most everyone plows through this stuff by now.
What convinced me was looking at my file of comp.graphics clippings and seeing
that the accumulation surpassed half a megabyte!  Updating the ray tracing and
radiosity bibliographies, the mailing list, and the FTP site list is time
consuming enough; running a clipping service on top of that was too much.

-------------------------------------------------------------------------------

New People, Address Changes, etc


The first subscriber connected across the now-slagged Iron Curtain:

# Janusz Kalinowski - density clouds (metaballs), parallelism, textures
# Technical University of Wroclaw, Computing Center
# Wybrzeze Wyspianskiego 27
# Wroclaw, Poland
alias	janusz_kalinowski	kalinows@plwrtu11

I am a lecturer at Technical University of Wroclaw.  I am mainly involved in
CG-related subjects, but also in system software.  We are preparing a
metaballs system (modeller, renderer).  I made my MS in DataFlow (I wrote an
emulator of Manchester Prototype Dataflow System), but I prefer CG now.  Maybe
I will marry both domains in the future?

--------

# Chris Green - efficiency, fancy primitives, radiosity, textures
# Commodore Business Machines
# 1200 Wilson Drive
# West Chester, PA 19380
# (215)-431-9100
alias	chris_green njin!cbmvax.cbm.commodore.com!chrisg

When I've got spare time away from working on Amiga graphics, and doing
contract work on 3d games, I spend it working on my ray tracer.  My ray tracer
is extremely fast, due to being written completely in 680x0 assembly with all
fixed point math.  I support spheres, triangles, hypertexture (!), and general
implicit functions.  It also has depth of field, penumbras, procedural
textures, and more.  The efficiency scheme used is my own invention and is
especially fast for radiosity ray tracing (some day) and penumbras (now).

--------

Russ Tuck
tuck@maspar.com
MasPar Computer Corporation, 749 N. Mary Ave, Sunnyvale, CA 94086

(Old:  tuck@cs.unc.edu)

--------

# Billy Ferrer - ray tracing on the Atari STe, efficiency.
# University of California, Irvine
# 2520 Golden Ave.
# Long Beach, CA 90806
# (213) 595-5279
alias bill_ferrer bferrer@bonnie.ics.uci.edu

I am a sophomore at University of California, Irvine studying computer
science.  My interest in ray tracing stemmed from seeing impressive displays
from Amiga, NCGA and Siggraph shows.  I am just a beginner in the ray tracing
programming department, but during my spare, spare, spare time I am writing a
ray tracing program for the Atari STe:  first, because there isn't a good and
reliable ray tracing program on the ST/STe, and second, for learning purposes.
Currently my program traces checkered floors, spheres, and reflections.  Right
now my effort is going towards texture mapping the floor by mapping an image
onto it.  I hope my program will eventually support refraction and various
other 3d object formats.

--------

Name:    Thomas Michael Burgey
Goals:   modelling of objects, modelling of textures, RT art
Company: Cadlab - Kooperation Uni-GH Paderborn / Siemens Nixdorf AG
         Bahnhofstrasse 32
         W-4790 Paderborn
Tel:     +49 5251 284 151
Mail:    tmb@cadlab.cadlab.de

-------------------------------------------------------------------------------

Ray Tracing related FTP sites (and maintainers), 7/15/91
	compiled by Eric Haines, erich@eye.com

[Ironically, we still don't have our FTP connection yet (it's been "one month
away" since last December...), so I can't verify much of this data!  Please do
send me any updates & corrections.]

Some highlights:

RayShade - a great ray tracer for workstations on up.
DKBtrace - another good ray tracer, from all reports; works on PCs.
Radiance - a ray tracer w/radiosity effects, a la Greg Ward (who wrote it).
VORT,QRT,MTV,DBW - yet more ray tracers, some with interesting features.
prt, VM_pRAY - parallel ray tracers.
SIPP - scanline Z-buffer renderer.
VOGLE - graphics learning environment (device portable).

SPD - a set of procedural databases for testing ray tracers.
NFF - simplistic file format used by SPD.
OFF - another file format.

RT News - collections of articles on ray tracing.
RT bib - all known (by me) articles on ray tracing, in "refer" format.
RT abstracts - collection of abstracts of many many RT articles.

Utah Raster Toolkit - nice image manipulation tools.
FBM - another set of image manipulation tools.
Graphics Gems - code from the ever so useful book.

(*) means site is an "official" distributor, so is most up to date.


weedeater.math.yale.edu [130.132.23.17]:  /pub - *Rayshade 3.0 ray tracer*,
	*color quantization code*, *SPD*, *RT News*, *Wilson's RT
	abstracts*, *RT bib*, *new Utah raster toolkit*, newer FBM,
	*Graphics Gems code*.  Craig Kolb <kolb@yale.edu>

rascal.ics.utexas.edu [128.83.144.1]:  /misc/mac/inqueue - VISION-3D facet
	based modeller, can output RayShade files.

ccu1.aukuni.ac.nz [130.216.1.5]:  ftp/mac/architec - *VISION-3D facet
	based modeller, can output RayShade files*.  P.D. Bourke
	<pdbourke@ccu1.aukuni.ac.nz>

alfred.ccs.carleton.ca [134.117.1.1]:  /pub/dkbtrace - *DKB ray tracer*.
	David Buck <david_buck@carleton.ca>

hobbes.lbl.gov [128.3.12.38]: Radiance ray trace/radiosity package.  Greg Ward
	<gjward@lbl.gov>

nic.funet.fi [128.214.6.100]:  pub/graphics/papers - *Paper bank project,
	including Pete Shirley's entire thesis (with pics)*, *Wilson's RT
	abstracts in PostScript*, Kouhia Juhana Krister <jk87377@cs.tut.fi>

isy.liu.se [130.236.1.3]:  pub/sipp-2.0.tar.Z scan line z-buffer and Phong
	shading renderer.  Jonas Yngvesson <jonas-y@isy.liu.se>

calpe.psc.edu [128.182.66.148]:  pub/p3d - p3d_2_0.tar P3D lispy scene
	language & renderers.  Joel Welling <welling@seurat.psc.edu>

ftp.ee.lbl.gov [128.3.254.68]: *pbmplus.tar.Z*, RayShade data files.  Jef
	Poskanzer <jef@ace.ee.lbl.gov>

irisa.fr [131.254.2.3]:  */iPSC2/VM_pRAY ray tracer*, SPD, /NFF - many non-SPD
	NFF format scenes, RayShade data files (Americans: check
	ftp.ee.lbl.gov first).  Didier Badouel <badouel@irisa.irisa.fr>

wuarchive.wustl.edu [128.252.135.4]:  /mirrors/unix-c/graphics - Rayshade ray
	tracer, MTV ray tracer, Vort ray tracer, FBM, PBM, popi, Utah raster
	toolkit.  /mirrors/msdos/graphics - DKB ray tracer, FLI RayTracker
	demos.  Tracey Bernath <tmbernath@tiger.waterloo.edu>

tolsun.oulu.fi [128.214.5.6]:  *FLI RayTracker animation files (PC VGA)*,
	*RayScene demos* (Americans:  check wustl first).  Jari Kahkonen
	<hole@rieska.oulu.fi>

cs.uoregon.edu [128.223.4.13]:  /pub - *MTV ray tracer*, *RT News*, *RT
	bibliography*, other raytracers (including RayShade, QRT, VM_pRAY),
	SPD/NFF, OFF objects, musgrave papers, some Netlib polyhedra, Roy Hall
	book source code, Hershey fonts, old FBM.  Mark VandeWettering
	<markv@acm.princeton.edu>

hanauma.stanford.edu [36.51.0.16]: /pub/graphics/Comp.graphics - best of
	comp.graphics (very extensive), ray-tracers - DBW, MTV, QRT, and more.
	Joe Dellinger <joe@hanauma.stanford.edu>

freedom.graphics.cornell.edu [128.84.247.85]:  *RT News back issues*, *source
	code from Roy Hall's book "Illumination and Color in Computer
	Generated Imagery"*, SPD package, *Heckbert/Haines ray tracing article
	bibliography*, Muuss timing papers.

uunet.uu.net [192.48.96.2]:  /graphics - RT News back issues (not complete),
	NURBS models, other graphics related material.

iear.arts.rpi.edu [128.113.6.10]:  /pub - *Kyriazis stochastic Ray Tracer*.
	qrt, ohta's ray tracer, prt, other RT's (including one for the AT&T
	Pixel Machine), RT News, *Wilson's RT abstracts*, Graphics Gems, wave
	ray tracing using digital filter method.  George Kyriazis
	<kyriazis@turing.cs.rpi.edu>

jyu.fi [128.214.7.5]: /pub/graphics/ray-traces - many ray tracers, including
	VM_pRAY, DBW, DKB, MTV, QRT, RayShade, some RT News, NFF files.  Jari
	Toivanen <toivanen@jyu.fi>

life.pawl.rpi.edu [128.113.10.2]: /pub/ray - *Kyriazis stochastic Ray Tracer*.
	George Kyriazis <kyriazis@turing.cs.rpi.edu>

ab20.larc.nasa.gov [128.155.23.64]: /amiga - DBW,
	/usenet/comp.{sources|binaries}.amiga/volume90/applications -
	DKBTrace 2.01.  <ftp@abcfd20.larc.nasa.gov>

munnari.oz.au [128.250.1.21]:  pub/graphics/vort.tar.Z - *VORT CSG and
	algebraic surface ray tracer*, *VOGLE*, /pub - DBW, pbmplus.  David
	Hook <dgh@munnari.oz.au>

gondwana.ecr.mu.oz.au [128.250.1.63]:  pub - *VORT ray tracer*, *VOGLE*,
	Wilson's ray tracing abstracts.  Bernie Kirby <bernie@ecr.mu.oz.au>

freebie.engin.umich.edu [141.212.68.23]:  *Utah Raster Toolkit*, Spencer Thomas
	<thomas@eecs.umich.edu> or Rod Bogart <rgb@caen.engin.umich.edu>.

cs.utah.edu [128.110.4.21]: /pub - Utah raster toolkit, *NURBS databases*.
	Jamie Painter <jamie@cs.utah.edu>

gatekeeper.dec.com [16.1.0.2]: /pub/DEC/off.tar.Z - *OFF objects*,
	/pub/misc/graf-bib - *graphics bibliographies (incomplete)*.  Randi
	Rost <rost@granite.dec.com>

expo.lcs.mit.edu [18.30.0.212]:  contrib - *pbm.tar.Z portable bitmap
	package*, *poskbitmaptars bitmap collection*, *Raveling Img*,
	xloadimage.  Jef Poskanzer <jef@well.sf.ca.us>

venera.isi.edu [128.9.0.32]:  */pub/Img.tar.z and img.tar.z - some image
	manipulation*, /pub/images - RGB separation photos.  Paul Raveling
	<raveling@venera.isi.edu>

ucsd.edu [128.54.16.1]:  /graphics - utah rle toolkit, pbmplus, fbm,
	databases, MTV, DBW and other ray tracers, world map, other stuff.
	Not updated much recently.

okeeffe.berkeley.edu [128.32.130.3]:  /pub - TIFF software and pics.  Sam
	Leffler <sam@okeeffe.berkeley.edu>

surya.waterloo.edu [129.97.129.72]: /graphics - FBM, ray tracers

vega.hut.fi [128.214.3.82]: /graphics - RTN archive, ray tracers (MTV, QRT,
	others), NFF, some models

gondwana.ecr.mu.oz.au [128.250.1.63]:  SPD, NFF & OFF databases, Graphics Gems
	code.  Bernie Kirby <bernie@ecr.mu.oz.au>

hp4nl.nluug.nl [192.16.202.2]: /pub/graphics/raytrace - DBW.microray, MTV,
	etc.

ftp.brl.mil [128.63.16.158]: /old/brl-cad - information on how to get the
	BRL CAD package & ray tracer.

karazm.math.uh.edu [129.7.7.6]:  pub/Graphics/rtabs.shar.12.90.Z - *Wilson's
	RT abstracts*, VM_pRAY.  J. Eric Townsend
	<jet@karazm.math.uh.edu>

maeglin.mt.luth.se [130.240.0.25]:  graphics/raytracing/Doc - *Wilson's RT
	abstracts*.

ftp.fu-berlin.de []:  /pub/unix/graphics/rayshade4.0/inputs - aq.tar.Z is
	RayShade aquarium (Americans:  check ftp.ee.lbl.gov first).  Heiko
	Schlichting <heiko@math.fu-berlin.de>

apple.apple.com [130.43.2.2?]:  /pub/ArchiveVol2/prt.

netlib automatic mail replier:  UUCP - research!netlib, Internet -
	netlib@ornl.gov.  *SPD package*, *polyhedra databases*.  Send one
	line message "send index" for more info, "send haines from graphics"
	to get the SPD.

UUCP archive: avatar - RT News back issues.  For details, write Kory Hamzeh
	<kory@avatar.avatar.com>

-------------------------------------------------------------------------------

Ray Tracing, the way I do it, by Haakan 'Zap' Andersson

[See the RayTracker description later in this issue for what his system looks
like.  I don't agree with the usefulness of some of these, but find them
interesting reading.  I particularly like the idea of hashing, though as
John Woolverton points out, this idea has problems with soft shadows. - EAH]

* TIGHT screen space bounding.
  Some people neglect screen space bounding entirely. MANY people
  use it, but they project the 3D bounding box onto screen space, potentially
  getting a lot of 'slack' around 'em.
     Gain: Speedier bounding box generation
     Loss: Slack around objects = more wasteful intersections.
  My Way: Do a vector rendering and get minmax screen coordinates from
          that.  Yes, a vector rendering might take time, but hey, what's that
	  compared to the tracing time, eh?
  (Yes I know about the method of doing a Z buffer rendering first... but
   then you have to write a Z buffer renderer first, right?)
  Comments:
    Think about the standard axis-aligned bounding box around an object.
    Seen from the vector 1,1,1 you will have the corners 'sticking out'
    maximally. Also, bounding box intersection takes a fair number of FP
    operations, while a screen space bounding box test is done with four
    integer compares. For eye rays I NEVER intersect the actual bounding boxes
    at all, ONLY use the screenspace bounding, plus a stored minimum distance
    to the eye for each bounding box. So my bounding box intersection for
    eye rays is 4 integer compares and one FP compare.
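
  As a rough C sketch of the idea (the structure and field names here are
  illustrative guesses, not Zap's actual code):

```c
#include <assert.h>

/* Hypothetical per-object record: tight integer screen bounds taken from
   a wire-frame (vector) rendering, plus the object's minimum distance to
   the eye.  Names are illustrative, not from Zap's tracer. */
typedef struct {
    int xmin, ymin, xmax, ymax;   /* minmax screen coords of the object */
    double min_eye_dist;          /* nearest the object gets to the eye */
} ScreenBound;

/* The whole eye-ray "bounding box" test: four integer compares and one
   FP compare.  Returns 1 if the object might still be hit. */
int screen_bound_hit(const ScreenBound *b, int px, int py, double current_t)
{
    if (px < b->xmin || px > b->xmax) return 0;
    if (py < b->ymin || py > b->ymax) return 0;
    return b->min_eye_dist < current_t;   /* closer hit already found? */
}
```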

* Self sorting list
  Most objects in my structure have the minimum distance to the eye recorded.
  When intersecting an object, the first thing you do is see if the
  currently valid 't' is smaller than this distance. If so we have already
  hit a closer object, and can never hit this object. Also, when the ray-
  intersection checker has found the closest object, it is put first in
  the list and will be checked first next time.

  Since the intersection check function is called recursively when we enter
  a bounding box, objects INSIDE the bounding box are also sorted, so for
  each 'level' in the ray tree we always have the object we hit last first.
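
  The move-to-front part might look something like this in C (a sketch;
  the list structure and names are invented for illustration):

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical singly linked object list: the last-hit object is moved
   to the head so the next (coherent) ray tests it first. */
typedef struct Obj {
    double min_eye_dist;   /* precomputed minimum distance to the eye */
    struct Obj *next;
} Obj;

Obj *move_to_front(Obj *head, Obj *hit)
{
    if (head == hit || head == NULL) return head;
    Obj *p = head;
    while (p->next != NULL && p->next != hit)
        p = p->next;
    if (p->next == hit) {          /* unlink and re-link at the head */
        p->next = hit->next;
        hit->next = head;
        return hit;
    }
    return head;                   /* hit not in this list */
}

/* Tiny self-check: build a list a->b->c, move c to the front, and
   verify the resulting order is c->a->b. */
int self_test_order(void)
{
    Obj a = {1.0, NULL}, b = {2.0, NULL}, c = {3.0, NULL};
    a.next = &b; b.next = &c;
    Obj *h = move_to_front(&a, &c);
    return h == &c && h->next == &a && a.next == &b && b.next == NULL;
}
```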

* Light buffer
  Similar to the 2d bounding boxes on screen for eye rays, shadow rays have
  their 2d bounding boxes, but since a light source can shine all around and
  is not limited to one direction, this is done in polar coordinates (a
  spherical coordinate system). So, for shadow rays I don't intersect any
  real bounding boxes either, but do some more compare operations. And since
  the ranges for a spherical coordinate system are fixed (i.e. 2 pi * pi)
  there is no point in using floating point, so fixed point integers can be
  used instead.

  The minimum distance to each object is also constant for each light, and
  may then be calculated and stored when we set up things and do the wire
  frame rendering.

* Reflected/fracted rays
  This is the only case where I actually intersect bounding boxes. And now
  to the next weird issue: I do NOT use axis-aligned bounding boxes, they
  are transformed together with the rest of things, since my bounding boxes
  actually work as "transformers" for my objects. But FIRST, for each box,
  I check the MINIMUM distance with the current 't' and if bigger, just
  discard. 

  To find the minimum distance, I am helped by the fact that my bounding
  boxes are stored as a center point and x, y and z extents from that
  centre. So I can use a rough (distance to center) - (x_size + y_size +
  z_size), and I get a value that is guaranteed to be SMALLER than the
  actual distance. And to avoid square roots, my actual comparisons are
  of course done on the distances squared, i.e. from the ray origin, I take
  the x distance to the bbox centre squared, plus y distance squared, plus
  z distance squared. Now I have the distance from ray origin to the
  center of the bbox squared. Now subtract from that (x_size + y_size +
  z_size) squared, and compare that to 't' squared. If 't' is smaller, we
  can never hit that bounding box.
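
  In C, one square-root-free way to phrase the same lower-bound idea is to
  compare (t + size)^2 against the squared distance to the centre, which
  is exactly equivalent to discarding when t < (distance - size).  A
  sketch, with invented structure names:

```c
#include <assert.h>

/* Box stored as a centre plus x/y/z extents; (sx+sy+sz) overestimates
   the box "radius", so (dist_to_centre - s) is a safe lower bound on the
   true distance to the box. */
typedef struct { double cx, cy, cz, sx, sy, sz; } BBox;

/* Returns 1 if the box might still hold a hit closer than current t. */
int bbox_maybe_closer(const BBox *b, double ox, double oy, double oz,
                      double t)
{
    double dx = b->cx - ox, dy = b->cy - oy, dz = b->cz - oz;
    double d2 = dx * dx + dy * dy + dz * dz;   /* (dist to centre)^2 */
    double s  = b->sx + b->sy + b->sz;         /* box "radius" bound */
    double ts = t + s;
    /* discard exactly when t < dist_to_centre - s, i.e. (t+s)^2 < d2 */
    return ts * ts >= d2;
}
```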

* Transformed bounding boxes
  I've always been in love with tight bounding volumes, because they avoid
  unnecessary TRUE intersections. Thus, my bounding boxes are transformed
  with everything else. Actually, it works like this (pseudo code-ish):

  trace_ray(first_object,ray)
  {
    object = first_object;

    while (object)
    {
      .
      .
      (( do the 2d intersection checking, either eye-ray 2d or
         polar 2d for lamps. only do rest of code if 2d hit, and
         distance is below current 't' ))
      .
      (( Ok, we hit 2D-wise, OR this was a reflected/fracted ray then:
         transform ray to object's coordinate system ))
      .
      switch(object->type)
      {
         case SPHERE: /* Intersect sphere(oids) */
                 .
                 .
         case BOUNDING_BOX:
                ((if refl/frac ray, check boundbox intersection;
                  if not refl/frac ray, just set hit to true.))

                if (hit)
                {
                   /* Ok, let's trace rays in this new coordinate system
                      we are in now */
                   trace_ray(object->bbox->contents_of_box,transformed_ray);
                }
      }
      object = object->next;
    }
  }

  What this gives me, besides the ability to twiddle my bounding boxes, is
  local coordinate systems WITHIN those boxes. So if I have a HUNDRED objects
  that would feel happy if they got their bounding boxes tilted 30 degrees
  to the right, I could create ONE bounding box that transforms the ray 30
  degrees to the right, and then use axis-aligned boxes (which of course are
  faster) INSIDE the box for each of the hundred objects, making them happy.

  Also, it helps animating stuff a lot. Move the bounding box, and wham, you
  have an aggregate object. Move a box within the box, and wham, you have
  hierarchical motion control without sweating too much.

* Filtering texture maps
  The way I filter texture maps may be considered "nasty" but it works OK
  for me. I know the resolution, and I also know (from reverse engineering
  my perspective ray creator ;-) how "big" the screen is in units, I can
  easily calculate how "big" one pixel is in the screen plane. Deeper into
  the model, the pixel covers a larger area, increasing linearly. So the
  distance an object is from the eye guides the "pixel size" at that point
  in a very simple and linear way.

  Now the first error you make when you want to sample from your texture map
  is to sample an area that is pixel_size * pixel_size large. Well, that is
  correct for planes that are facing you. But what if we are looking almost
  parallel to the surface? No, the actual area covered by the pixel is roughly
  pixel_size / cos(view_angle), and since cos() is simply a dot product
  (that you'll need later anyway) it's easily calculated. Now I simply grab
  a square area that is this size large, and average it. Nasty, but looks
  quite OK without overwhelming calculations.

   Oh yes, there is the pathetic case where you are supposed to sample
  10000 x 10000 pixels and average. Well, I never sample more than 8 x 8,
  then I start to pick out 64 pixels at random within an n x n grid, and I
  don't bother if I get the same one twice, since the impact of that on an
  average of 64 pixels is mostly killed by the dithering noise anyway ;-)

  Also, I use the eye-distance calculation ALL THE WAY DOWN THE RAY TREE,
  which looks very OK as long as the mirroring surfaces are flat, or there
  is no heavy refraction going on. But now let's move on to other anti-
  aliasing.
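
  The footprint estimate boils down to a few lines of C (a sketch; the
  names and the grazing-angle clamp are assumptions, not Zap's code):

```c
#include <assert.h>
#include <math.h>

/* Pixel footprint in texture space: pixel size grows linearly with the
   distance from the eye, then widens by 1/cos of the angle between the
   ray and the surface normal (a dot product needed for shading anyway). */
double texel_filter_width(double pixel_size_at_screen,
                          double hit_distance, double screen_distance,
                          double cos_view_angle)
{
    double size = pixel_size_at_screen * (hit_distance / screen_distance);
    if (cos_view_angle < 0.05)       /* clamp to avoid grazing blow-up */
        cos_view_angle = 0.05;
    return size / cos_view_angle;
}
```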


* Hashing anti aliasing
  I never use color difference as my subdivision criteria for supersampling.
  What I do is this:

  Before each sample for a pixel, set hash_number to 0.
  The trace_ray() procedure does the following:
    If it hits an object
      Is it a smoothed patch mesh or similar?
         Add its "smoothing group" or anything else that is
         same for all faces smoothed together (I use the vertexlist
         pointer) to the hashnumber
      else
         Add something unique for the object, its "number" or the
         objectpointer to the hashnumber.

      Set shadow_count to zero
      For each light source:
         Check shadows. If in shadow, shadow_count++.

      hash_number += shadow_count << 5 (or whatever);

      If object is reflecting/fracting, call trace_ray() recursively
      as always in a raytracer....


  So what is all this? Well, the hash number we get is compared with the
  hash number for the last pixel and the pixel in the last scanline. If it is
  different then:
    * We have hit different objects than last pixel/line
    * We are in/out of a different number of shadows than last pixel/line
  AND THIS IS FOR THE ENTIRE RAY TREE! So if we hit anything else, ANYWHERE
  down the tree, the hash number is bound to be different.

  Oh yes, there are a number of weird cases where you just HAPPEN to miss
  a supersample because we get out of shadow at the same time we move
  from object 0 to object 32, and thereby by mistake get the same hash
  number. But who cares? Don't worry, be happy.
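
  The hash accumulation and the comparison against neighbouring pixels
  can be sketched in C like so (the shift constant follows the text's
  "or whatever"; the names are invented):

```c
#include <assert.h>
#include <stdint.h>

/* At every hit, anywhere down the ray tree, fold in an object (or
   smoothing-group) identifier plus the shadow count.  A plain sum is
   enough: any change anywhere in the tree perturbs the total. */
uintptr_t fold_hit(uintptr_t hash, uintptr_t object_id, int shadow_count)
{
    hash += object_id;                        /* unique per object    */
    hash += (uintptr_t)shadow_count << 5;     /* "or whatever" shift  */
    return hash;
}

/* Supersample only where the per-pixel hash differs from the pixel to
   the left or the pixel in the previous scanline. */
int needs_supersample(uintptr_t cur, uintptr_t left, uintptr_t up)
{
    return cur != left || cur != up;
}
```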


* Surface acne [NEW! NEW! NEW! NEW! NEW?] ??? Or is it? You tell me...

  How to avoid it. Well, somebody proposed normal vector testing, and that
  is OK for systems with well behaved users. The rest of us use a small
  epsilon to avoid it. But someone said that this epsilon might be too big
  or small for a given scene. And the funniest of them all (I laughed quite
  a while) proposed scaling it to the "diameter of the scene" or
  something else. Why on earth do things like this?

  The epsilon for displacing the ray for any object is of course related
  to the pixel_size as described above! I use pixel_size / 4 and it works
  like a charm whether I model molecules or the solar system (I have
  actually done both!), even in the same scene!!
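
  In C the fix is one line per coordinate; a sketch with invented names,
  one component shown:

```c
#include <assert.h>

/* Displace a secondary ray's origin along the ray direction by an
   epsilon tied to the local pixel footprint (pixel_size / 4, per the
   text), so the offset scales with the scene automatically. */
double offset_along(double origin, double dir, double pixel_size)
{
    double eps = pixel_size * 0.25;   /* pixel_size / 4 */
    return origin + dir * eps;
}
```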

-------------------------------------------------------------------------------

More Thoughts on Anti-Aliasing, John Woolverton (woolstar@cobalt.caltech.edu)

[Zap Andersson also talks about this problem and independently invented this
hashing method.  I neglected to publish a rough draft of Zap's idea last issue
- see his article this issue for a more polished version.  - EAH]


   I was also troubled by the fact that my ray-tracer was anti-aliasing across
textures (causing massive thrashing on my machine), and took the problem to
the local gurus.

   I got back the suggestion of building a hash code, and putting all the
things I wanted to detect in the hash calculation.

   So I hashed together object pointers and light sources (when they weren't
shadowed, or outside the cone of a spotlight...).  First I'd check the hash
code, skipping color changes due purely to textures.  Only if the hash
differed would I check the colors, just so I didn't SSamp a smooth edge also.

   However, this didn't fix sampling across a soft shadow edge.

-------------------------------------------------------------------------------

Spatial Measures for Accelerated Ray Tracing, by John Spackman

[Here are some interesting passages from a note from him to me]

...Mind you, my thesis is more to do with the navigation of oct-trees AFTER
they have been constructed with interval analysis.  This navigation seems
more efficient than all that ARTS nonsense (navigating only half the vertical
steps), naturally runs under integer addition and bit-shifting in a Bresenham-
type way BUT WITHOUT the concept of a global driving axis, and being
completely immune to division by zero requires no exception handling.
I called it the SMART method (Spatial Measures for Accelerated Ray Tracing) -
perhaps my jocularity has back-fired & no-one's taking me seriously.
I can ray-trace 20,000+ triangles with two light sources & shadows at 512x512
in under 8 minutes on a Sun SparcStation - in fact (outrageous claim time)
SMART has been observed to achieve constant time ray tracing INDEPENDENT
of object count.  Each ray simply navigates a few empty voxels, whose
number is independent of the global object count, until reaching the first
non-empty voxel where generally only one object need be intersected & 
accepted as the nearest struck.  For example, I rendered a single torus in
7 mins 23 secs and 80 tori in 7 mins 20 secs.  This is all AFTER constructing
the octtree - O(N) but very efficient with interval analysis.  Interval
analysis allows one to decompose right down to the surface of a primitive or
CSG object (none of this bounding box nonsense propounded in RTNews a couple
of years back).  You'll be able to see some of the pictures of the Octtrees in
my thesis - at a fine resolution they look great!

[...]

I think the work of Adrian Bowyer & John Woodwark at Bath would interest you -
they're attacking things from a CAD/CAM angle (Adrian is a Mechanical
Engineer) - email `ab@uk.ac.bath.maths'.  Another pocket of isolated English
ray-tracing is established at Leeds university (which I only stumbled across
recently).  They also have a CAD/CAM bent, & are particularly into multi-
processors.  If you're interested, email Professor Peter Dew at
`dew@uk.ac.leeds.dcs'.  A Dr Stuart Green also did some multi-processor work
(using your SPD data base) at Bristol University, but has now moved out to
industry & I don't have his new email address.  I've recently moved to
Edinburgh to take advantage of a 420 Meiko transputer surface - there's quite
a lot of knowledge in ray tracing accumulating here (Fractal planets etc).


...  Arvo & Kirk's formation of candidate lists for their 5D ray tracing
efficiency scheme?  I'm not a great fan of this I'm afraid - the old problem,
too much over-approximation for concave objects.  Consider a hoop with large
major radius, small minor radius (e.g. bicycle tyre tube).  A view frustum
can easily intersect the hoop's bounding box by passing through the hoop's
central hole WITHOUT striking the tyre anywhere!

    Lazy Oct-tree construction is the one for me, with nice tight
decompositions right down to object surfaces, allowing rays to pass through
the centre of such hoops whilst ignoring them, (or indeed lazy Hex-tree
construction for animated scenes...), and reduced storage & construction
costs.  It's a bit difficult to convey the efficiency of the scheme without
recourse to a black board, but the long and the short of it is that most rays
missing all objects end up querying none (hoorah!)  whilst those hitting an
object end up querying only that (ie just one) object.  One can't do much
better than that on a ray by ray basis, and I gave up on ray coherence ages
ago (not that it's not got great potential, but I got awful headaches trying
to work out the action of a complex CSG object e.g. reflective engine block
on a single incoming pyramid of rays, never mind refraction - perhaps you've
got further ...  ?).  Oh, and you get free adaptive anti-aliasing with
octtrees ..

-------------------------------------------------------------------------------

Barcelona Workshop Summary, by Arjan Kok (arjan@duticg.tudelft.nl)

[There were many interesting papers at this workshop.  What follows is
excerpted from Arjan's summary, focusing on those papers directly concerned
with classical ray tracing (e.g.  not including Monte Carlo methods, two-pass
methods, etc).  The finished papers will be available from Springer-Verlag in
book form some time next year.]

Summaries of papers presented at the second Eurographics Workshop on Rendering
Barcelona, Spain, 13-15 May 1991


Gregory Ward (greg@lesosun1.epfl.CH)
Adaptive Shadow Testing for Ray Tracing

A method for reducing the number of shadow rays in scenes with many light
sources.  The sources are sorted by their contribution, and rays are cast
only for the most important sources.  The influence of the other sources is
estimated statistically.  Tests are done with different tolerances (the
threshold determining whether a source is important) and certainties (the
rate of accuracy).  The method gives a good reduction in shadow rays and is
able to find the most important shadows because it uses contrast as the
criterion.
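
[A small C sketch of the idea as read from the summary above - not Ward's
code, and the fixed `visible` flags stand in for real shadow-ray tests: sort
sources by potential contribution, cast rays only while the untested tail
could still matter, and credit the remainder with the visibility rate
observed so far.]

```c
#include <assert.h>
#include <stdlib.h>

/* One light source as seen from the point being shaded: its potential
   contribution if unoccluded, and (for this sketch) a precomputed
   visibility flag standing in for an actual shadow-ray test. */
typedef struct { double potential; int visible; } Source;

static int by_potential(const void *a, const void *b)
{
    double d = ((const Source *)b)->potential - ((const Source *)a)->potential;
    return (d > 0) - (d < 0);
}

/* Estimate illumination, casting "shadow rays" only while the untested
   tail could still matter; *tested reports how many were cast. */
double adaptive_shadows(Source *src, int n, double tol, int *tested)
{
    double sum = 0.0, remaining = 0.0, vis_rate;
    int i, hits = 0;

    for (i = 0; i < n; i++) remaining += src[i].potential;
    qsort(src, n, sizeof *src, by_potential);   /* most important first */
    for (i = 0; i < n; i++) {
        if (remaining <= tol * (sum + remaining))
            break;                              /* tail is negligible */
        if (src[i].visible) { sum += src[i].potential; hits++; }
        remaining -= src[i].potential;
    }
    *tested = i;
    /* credit the untested sources with the visibility rate seen so far */
    vis_rate = (i > 0) ? (double)hits / i : 1.0;
    return sum + vis_rate * remaining;
}
```

With four fully visible sources of potential 8, 4, 2 and 1 and a tolerance
of 0.5, only the brightest source gets a shadow ray; the rest are credited
statistically.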


Christophe Schlick (schlick@geocub.greco_prg.FR)
An Adaptive Sampling Technique for Multidimensional Integration by Ray-
Tracing

Describes a sampling method with the following characteristics: adaptivity,
irregularity, complete stratification, importance sampling and lack of
correlation.  It allows fast reconstruction.  The implementation uses
look-up tables.


J.P. Jessel, M. Paulin, R. Caubet
An Extended Radiosity Using Parallel Ray-Traced Specular Transfers

Describes a parallel extended radiosity method.  The method is implemented on
a parallel architecture dedicated to ray-tracing (based on transputers).


Veysi Isler, Cevdet Aykanat, Bulent Ozguc (isler@TRBILUN.BITNET)
Subdivision of 3D Space Based on the Graph Partitioning for Parallel Ray
Tracing

Describes a heuristic algorithm to subdivide the 3D space by converting the
problem into a graph partitioning problem.

-------------------------------------------------------------------------------

Book Announcement, from Stuart Green

[This is Stuart's thesis, further refined for publication.  To see if you
might be interested in it, look at:  Stuart A. Green & D.J. Paddon,
"Exploiting Coherence for Multiprocessor Ray Tracing," IEEE Computer Graphics
and Applications, vol 9, no 6, p. 12-26, Nov. 1989. - EAH]

Green, Stuart. "Parallel Processing for Computer Graphics", 1991,
in the series "Research Monographs in Parallel and Distributed Computing".

MIT Press, Cambridge, Massachusetts, 02142.  ISSN 0953-7767, 
ISBN 0-262-57087-4.

Pitman Publishing, 12-14 Slaidburn Crescent, Southport PR9 9YF.
ISBN 0-273-08834-3.

I don't have the US$ price, but in the UK it costs 27.95 Sterling.

-------------------------------------------------------------------------------

Spiral Scene Generator, by Tom Wilson

[This is a simple SPD-like scene generator, creating a 3D spiral loop]

There are two loops:  one creates SIZE levels and the other creates 2^s balls
on level s.  The balls on each level almost form a circle, and the entire
structure makes a spiral.  The thing I am most displeased with is the texture
("f" line).  I created it so that a scene would have a very nonuniform
distribution of objects.


#include <stdio.h>
#include <math.h>
#define PI 3.1415926
#define DEFAULTSIZE 10

main(argc,argv)
  int argc;
  char *argv[];
{double r, x, y, z, frac, angle, tmp;
 int s, s2, w, SIZE, col;

 SIZE = DEFAULTSIZE;
 if (argc > 1)
   sscanf(argv[1],"%d",&SIZE);  /* keeps the default if this fails */
 printf("v\nfrom 40 20 -40\n");
 printf("at 0 0 0\n");
 printf("up 0 1 0\nangle 45\nhither 1\nresolution 512 512\n");
 printf("b 0 0 0.3\n");
 printf("l 90 90 0\n");
 printf("l 0 90 -90\n");
 printf("f 0.3 0.3 0.3 0.5 0.5 3 0 0\n");
 printf("p 4\n");
 printf("50 0 50\n");
 printf("50 0 -50\n");
 printf("-50 0 -50\n");
 printf("-50 0 50\n");
 for (s = col = 0; s < SIZE; s++)
  {s2 = (int) pow(2.0,(double) s);
   for (w = 0; w < s2; w++)
    {frac = (double)w / (double)s2;
     r = (double)s + frac;
     angle = 2.0 * PI * frac;
     x = 2*r*cos(angle);
     z = 2*r*sin(angle);
     tmp = (double)(SIZE - s) + (double)(s2 - w - 1) / (double)s2;
     y = tmp*tmp*tmp / (double)(SIZE*SIZE);
     col = (col + 1) & 7;
     printf("f %d %d %d 0.8 0.2 3 0 0\n",(col&4)>0,(col&2)>0,col&1);
     printf("s %g %g %g %g\n",x,y,z,2.0/(double)(s*s+1));
    }
  }
}

-------------------------------------------------------------------------------

An Announcement From The 'Paper Bank' Project,
	by Juhana Kouhia (jk87377@tut.fi)

The following new paper is available from nic.funet.fi [128.214.6.100] from
the directory pub/graphics/papers/papers.

Eric A. Haines and John R. Wallace
Shaft Culling for Efficient Ray-Traced Radiosity
May 1991, Barcelona
File: hain91.ps.Z   [about 70 kBytes]

If anonymous FTP is not available to you I will mail it if requested.

Please contact me for a full list of the papers.

[We have recently sent Juhana the latest version of our paper (with thinkos
removed), so you might want to get this improved version.  The new version
is called "Shaft Culling for Efficient Ray-Cast Radiosity".  It's a nice
little algorithm for finding what objects potentially block light between any
two given boxes in space.  Similar in some ways to Arvo & Kirk 5D but with
hierarchy, it has some interesting potential uses and generally speeds up
hierarchical bounding volume ray tracers.  - EAH]

-------------------------------------------------------------------------------

Radiance 1.4 via FTP, by Greg Ward

Radiance 1.4 is now available via tape distribution or (for the first time)
anonymous ftp.  Rather than including compiled executables as I have in
previous releases, only the source code and a global make script are
provided, which should work on most platforms.  Please let me know if you
have any trouble with it.  I have also taken out the example images and the
conference room model in order to trim back the distribution.  I hope to
include these files in other anonymous ftp directories as suggested by
Robert Amor, since there seems to be general agreement that this is a good
idea.

To pick up release 1.4 from anonymous ftp, connect to hobbes.lbl.gov
(128.3.12.38) with ftp using the "anonymous" account and enter your e-mail
address as the password.  Everything is in the directory pub, and the main
distribution is called "Radiance1R4.tar.Z".  This file is about 3.5 Megabytes,
so please do your transfers in binary mode the first time!  Also, you will
probably experience less network traffic in the morning, when most computer
scientists are asleep.

--------

Information on Models, Test Environments, etc for Radiance:

I've just set up an anonymous ftp archive site at hobbes.lbl.gov (128.3.12.38)
for sharing Radiance models and programs.  In addition to the standard source
distribution, this archive contains the following:

pub/generators	- Programs for generating specialized objects & shapes
pub/libraries	- Libraries of patterns, textures, fonts, etc.
pub/mac		- Macintosh applications and utilities (for Radiance)
pub/models	- Complete Radiance scene descriptions
pub/objects	- Objects for inclusion in Radiance scenes
pub/programs	- Miscellaneous programs and utilities
pub/tests	- Test scenes for validating global illumination programs
pub/translators	- CAD file translators and image format converters

For those of you who I haven't dragged aside and told already, Radiance is
free ray tracing software for lighting simulation and rendering that does a
lot of neat stuff and tries very hard to do it accurately.

If you are interested in picking up or leaving off some nice environment
models, this is a good place to do it.  Most of the scene descriptions are in
Radiance format, but writing a translator shouldn't be too much work, and I'm
willing to offer whatever help I can.

If you are working on your own radiosity or ray tracing program and want to
compare results, please check out the pub/tests directory.  There is not much
there now, but with your help we can make this archive into a valuable
resource for researchers in global illumination.

Please send any questions or comments to greg@hobbes.lbl.gov.

--------

I finally finished putting together a library of objects from the various
models I've created for Radiance.  It is in pub/objects/gjward.tar.Z.  I hope
people find it useful.  It took me quite some time to get my miscellany into
a usable form.

If you have objects you are willing to submit, please take a look at the way
this initial library is set up first.

--------

>From Paul D. Bourke (pdbourke@ccu1.aukuni.ac.nz) [130.216.1.5]

The public domain modeller Vision-3D for the Mac II family is about to
support Radiance data files as an export option.  This has already been done,
but the copy on our FTP site hasn't yet been updated (I want to put some more
features in the next release).  If anyone is interested, however, the current
version of Vision-3D with Radiance file export can be made available.

-------------------------------------------------------------------------------

Proceedings of Graphics Interface '91 Availability, by Irene Gargantini

In the USA: Morgan Kaufmann Publishers
Order Fulfillment Center
P.O. Box 50490
Palo Alto, Ca 94303 USA, phone (415) 965 4081

The proceedings should be available by now, and they can take your order in
advance.

-------------------------------------------------------------------------------

NFF Previewers, by Bernie Kirby, Patrick Flynn, Mike Gigante, Eric Haines

>From Bernie Kirby (bernie@eric.ecr.mu.oz):

There is an NFF previewer available for anonymous FTP on gondwana.ecr.mu.oz.au
[128.250.1.63] pub/preview.c.  It uses the vogle library which is also
available as pub/vogle.tar.Z

>From Patrick Flynn (flynn@cse.nd.edu):

If anyone on this side of the pond wants the preview.c program, it's now
available for anonymous FTP from shillelagh.cse.nd.edu (129.74.9.7).  You can
get the VOGLE library from uunet.  I tried the previewer; it does what I need.

>From Mike Gigante (mg@godzilla.cgl.rmit.oz.au):

There is a NFF previewer for Silicon Graphics workstations available on
godzilla.cgl.rmit.oz.au (131.170.14.2).  Make sure you grab the readme file
also.  It uses the hardware lighting and Zbuffer on the SGI machines to give a
very fast preview.

>From Eric Haines:

I also have two previewers for NFF files which work on HP workstations (one
is static, the other uses the mouse for rotating & viewing the object).  I
can send them to anyone who wants them (no FTP right now).

-------------------------------------------------------------------------------

RayTracker Demos Available, by Jari Kahkonen

    My Swedish friend Haakan Andersson sent me some RayTracker animation
demos, and now they are ftp'able from tolsun.oulu.fi [128.214.5.6].  You need
a PC with VGA to run these animations.

  Because I'm sure that raytracer fans will have lots of questions about
RayTracker after they have seen these animations, please mail all questions
to the author, i.e. to Haakan Andersson, zap@lage.lysator.liu.se.  If you
have problems with the animations (they don't unpack etc...), flame or
hate-mail me...

  You can find the animations in /pub/rayscene/zap, though they have nothing
to do with Rayscene.  /pub/rayscene/anim.PC contains raytraced animations
made with Rayscene and DKBtracer (my animations...).  /pub/rayscene/anim.AMIGA
contains raytraced animations for the Amiga made with Rayscene and DKBTracer
(animations by Panu Hassi).

  The animations are packed with Lharc 2.05.  All animations are
self-extracting archives (= .exe files), so "run" them and they will unpack
themselves.  For example, if you have the "car.exe" animation, just type
"car".  Each animation archive includes a short README.TXT and the animation
itself (.fli).  If you want to give these animations to your friends, *do
not* separate these parts.  Give the .exe, since README.TXT contains contact
information for Haakan Andersson.  Also, every animation has a little text
with a contact address (except "mesh" - I didn't wanna "spoil" it... it
moves so smoothly... :).  Remember to set "type binary" when downloading
files...

  These animations need the Autodesk Animation Player; if you don't have it,
you can pick it up from /pub/rayscene/anim.PC (the directory containing my
Rayscene demos).  The filename is "aaplay.exe".

  Animations run faster if you run them from a RAM disk.  If you have a
mouse, it's easier to control aaplay.exe.  Adjust the animation speed to
your machine.

  Some animations are not full-screen because of their original size.

  Little description of animations:

BIKE.FLI: Fly-by over a bike in a brick basement.

HP-RIP2.FLI:  The 'famous' one of a Camshaft emerging from its own drawing.
Created for the 'Mechslide' demo video, available from Emt Inc in USA.  The
name 'HP-RIP' comes from the fact that the 'idea' of a camshaft came from
Hewlett-Packard's well known raytraced 'Camshaft on bed of ravioli' by Eric
Haines, a friend of mine.

ART.FLI:  Fly-by over a pen, a few bolts, a drawing and a book placed on a
wooden table under a green metal table lamp.

BLAHBLAH.FLI:  Here's a (relatively) new one:  Demo of 1.55 Beta's
transparent, animated texturemaps, using Autodesks 'BOSSTALK'.  Even funnier
is to use animated bump-maps.  To see them, watch out for 'Spaceman Spiff' at
a theatre near you.

BOING.FLI:  Newer version of BOUNCE.  A Red, bouncing chrome sphere on top of
a texture map from Imagetects(tm).  Rendered and Animated in RayTracker 1.4
Beta.

BOUNCE.FLI:  Very very simple animated bouncing ball.  The first RayTracker
animation EVER!

CAR.FLI:  Looking at a red car in sunset.  Image Rendered and Animated in
RayTracker 0.6 Beta.  Model built in AutoCAD R10 by:  Bertil Heden, Autodesk,
Sweden.  Thanks, Bert!

MESH.FLI:  Fly-round a bowling pin of glass and a bowling ball on a plate
suspended in space.  New in this version of RayTracker is the ability to
create a feel of space by 'space mapping' a background onto the universe.

ALARMCLK.FLI:  An image of an alarmclock, a matchbox, a table, and a wooden
table lamp.  Suddenly, without warning, we fly past the clock and up the lamp.
Why?  Beats me.

--------

Tracey Bernath (tmbernath@tiger.waterloo.edu) notes:

I recently uploaded the raytraced animations from tolsun.oulu.fi to
wuarchive.wustl.edu to try and cut down the trans-ocean net travel.  I can't
upload to SIMTEL because we have a lousy connection, and I always get the
usual "Too many anonymous users, Try Again" 8-}

In /pub/zap are the raytraced and animated images - some are really good,
some, well...  In /pub/raytracker are the original animations, including
aaplay.lzh (I don't know if aaplay.exe works, but I know aaplay.lzh does).

Enjoy!

-------------------------------------------------------------------------------

RayTracker Info, by Zap Andersson

[Though a commercial product, I thought I would include this info, since the
FLI demos used this software to generate them.  Also, the interactive features
are of interest.  I received a demo copy to review, and it seemed pretty nice
(though I didn't do any serious rendering with my no-math-coprocessor 386).]

Current version (1.71)
Date: 24 april 1991

RayTracker is a commercial raytracing program, especially created to render
geometries from the well-known CAD software AutoCAD.  RayTracker reads models
in the AutoCAD DXF format, or in Autodesk 3D Studio 'ASCII' format.

RayTracker runs on MS-DOS PC computers with a math co-processor, but porting
to other platforms (mainly Sparc, Amiga and Mac) is being considered.  It
utilizes Expanded (EMS) memory if it is found in the system.

RayTracker has a graphical user interface (GUI) with dialog boxes, pull-down
menus, and a built-in hypertext help system.  It runs in two modes, a
graphical mode (which requires an EGA/VGA display) and a text mode for those
lacking graphics.  On a VGA you may also see the image as it is being
rendered.  You may view the finished image with a supplied program on your
VGA, Super-VGA or on your CAD display, if it has an AutoSHADE compatible real
mode ADI driver with version 4.0 or higher and at least 256 colors.

Materials are easily assigned to objects by simply loading them from the
provided material library, which contains many useful materials.  You may
also create your own materials by setting parameters for color, ambient,
diffuse and specular reflection, mirroring, transparency, index of
refraction, translucency, shadow-casting and many, many other things.

The mapping functions in RayTracker use a very generalized model of a map.  A
'map' can be any number of patterns, mixed or overlaid, placed on different
parts of a single surface, or repeated all over.  Each pattern can be either
a bitmapped graphic image (GIF, Targa 16/24/32, Animator CEL, Animator FLI
and Amiga IFF formats are accepted) or one of the built-in mathematically
defined functions (wood, marble, random noise, wavy, checkered etc.).

Each 'map' can be applied in many ways:

* Texture map - Changing the color of the surface.  This is the only thing
		that more primitive renderers, such as 'Big D' can do.
		Certain parts of a texturemap can also have a transparent
		color allowing one single surface to depict a complex object
		such as a tree or a person as a 'coulisse'.  Naturally, the
		shadow has the contour of the tree or person also.

* Bump map    - The 'brightness' at each spot in the map guides the surface
		'altitude', allowing you to create dented, scraped, wrinkled,
		engraved or other minor surface deviations.

* Mix map     - Allows you to mix between two totally different surface
		descriptions on one surface, such as a checkered pattern where
		some squares are of red glass and others of wood.

* Reflect. map- Allows you to simulate reflections without the increased
		rendering time of true mirroring.  Curved objects look just
		as convincing with a reflection map.  You may also add
		amusing effects, such as 'window reflections' like in the
		film 'Tin Toy' from Pixar.

Aside from this, a map can also be applied as the 'slide' in a slide projector
light source, or as the screen fore- or background.

Light sources include point sources, directed lights, spotlights and slide
projectors.  Light falloff can be none, linear or quadratic.  All types of
lights can cast shadows of two kinds:  raytraced shadows, which are sharp
and accurate but require one extra step in the ray tracing algorithm, and
shadows using a "shadow map", which can have 'fuzzy edges' and work very
quickly.

To ease the production of images, RayTracker renders every 16th pixel first,
then every 8th, and so on, allowing you to quickly determine if there is
something wrong with the light or a material.  You may also create 'test
renderings' of parts of your model by simply marking two points on the
initial wireframe.
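
[The coarse-to-fine ordering can be sketched as below - an assumed scheme for
illustration only; RayTracker's actual interleaving may differ.  Each pixel
is rendered exactly once, in the coarsest pass that reaches it.]

```c
#include <assert.h>

#define W 64
#define H 64

/* Fill order[y][x] with the pass (0 = coarsest, 16-pixel grid) in which
   each pixel would be rendered; returns the total number of pixels
   touched.  Pixels already covered by a coarser pass are skipped, so
   every pixel is rendered exactly once. */
int progressive_order(int order[H][W])
{
    int step, pass = 0, x, y, n = 0;
    for (y = 0; y < H; y++)
        for (x = 0; x < W; x++)
            order[y][x] = -1;
    for (step = 16; step >= 1; step /= 2, pass++)
        for (y = 0; y < H; y += step)
            for (x = 0; x < W; x += step)
                if (order[y][x] < 0) { order[y][x] = pass; n++; }
    return n;
}
```

A renderer would call its trace-pixel routine at each newly assigned
position, replicating the coarse result into the surrounding block until
finer passes fill it in.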

Three modes are available:  Quick Track, ignoring shadows and reflections;
Medium Track, accounting for shadows but not reflection and refraction; and
finally Ray Track, performing the full raytracing calculation.

RayTracker can generate walkthrough animations from just a small set of 'key'
views, using a 7-dimensional spline interpolation technique.  The images can
be automatically concatenated into an Animator-compatible FLI file.

The output format from RayTracker is GIF or Targa 16/24/32 (with alpha
channel in the 32-bit format) in any resolution (up to your disk space
limit).  A conversion utility is also provided to convert a Targa file to
Amiga HAM format.

RayTracker is *NOT* a PD or shareware program; it is a commercial product,
available from the addresses supplied below.  If you have any technical
questions about its capabilities, you can contact me, the program author on
email address:

    zap@lysator.liu.se

....or you can send ordinary mail to the LAST (Scandinavian) address listed
below, and attach my name: Hakan 'Zap' Andersson

U.S.A.                  Europe:             Scandinavia:
=================       =================   ===================
EMT Inc.  #250          EMT Ltd.            EMT Ab
199 N. Commercial st.   PO Box 103          Box 40
Bellingham, WA          Rickmansworth       S-178 21 Ekeroe
98255 USA               Herts WD3 5RF       SWEDEN
                        U.K.
Phone:
(USA)-206 647 2426     (UK)-923 285 496     (SW)-756 320 20
Fax:
(USA)-206 647 2890     (UK)-923 285 496     (SW)-756 346 50

-------------------------------------------------------------------------------
END OF RTNEWS

From erich@eye.com Wed Nov 20 14:13:44 1991
Return-Path: <erich@eye.com>
Received: from cs.rpi.edu by iear.arts.rpi.edu (3.2/HUB10);
	id AA28638; Wed, 20 Nov 91 14:13:08 EST for kyriazis
Received: from eye.com by cs.rpi.edu (4.1/1.2-RPI-CS-Dept)
	id AA16384; Wed, 20 Nov 91 14:14:01 EST
Received: from juniper.eye.com by eye.com with SMTP
	(16.6/16.2) id AA18863; Wed, 20 Nov 91 13:52:03 -0500
Received: by juniper
	(16.6/15.6) id AA26177; Wed, 20 Nov 91 13:49:34 -0500
Date: Wed, 20 Nov 91 13:49:34 -0500
From: Eric Haines <erich@eye.com>
Message-Id: <9111201849.AA26177@juniper>
To: kyriazis@cs.rpi.edu
Status: RO

 _ __                 ______                         _ __
' )  )                  /                           ' )  )
 /--' __.  __  ,     --/ __  __.  _. o ____  _,      /  / _  , , , _
/  \_(_/|_/ (_/_    (_/ / (_(_/|_(__<_/ / <_(_)_    /  (_</_(_(_/_/_)_
	     /                               /|
	    '                               |/

			"Light Makes Right"

			 November 18, 1991
			Volume 4, Number 3

Compiled by Eric Haines, 3D/Eye Inc, 2359 Triphammer Rd, Ithaca, NY 14850
	erich@eye.com
All contents are US copyright (c) 1991 by the individual authors
Archive locations: anonymous FTP at weedeater.math.yale.edu [130.132.23.17],
	/pub/RTNews, and others.
UUCP archive access: write Kory Hamzeh (quad.com!avatar!kory) for info.

Contents:
    Introduction
    New People, Address Changes, etc
    ElectroGig Free Software Offer
    Spectrum: A Proposed Image Synthesis Architecture, by Andrew Glassner
    Spline Intersection, Texture Mapping, and Whatnot, by Rick Turner
    Satellite Image Interpretation, by Andy Newton
    Material Properties, by Ken Turkowski
    New Library of 3D Objects Available via FTP, by Steve Worley
    Object Oriented Ray Tracing Book
    New and Updated Ray Tracing and Radiosity Bibliographies
    DKBTrace 2.12 Port to Mac, by Thomas Okken
    Graphics Gems II Source Code
    Radiance Digest Archive, by Greg Ward
    Model Generation Software, by Paul D. Bourke
    Rayshade 4.0 Release, Patches 1 & 2, and DOS Port, by Craig Kolb and
	Rod Bogart
    RayShade Timings, by Craig Kolb
    RayShade vs. DKBtrace Timings, by Iain Dick Sinclair
    PVRay Beta Release, by David Buck
    Vort 2.1 Release, by Eric H. Echidna
    BRL-CAD 4.0 Release, by Michael J. Muuss and Glenn M. Gillis

-------------------------------------------------------------------------------

Introduction

    Well, it's been a while - RealWork (TM) has been getting in the way of
putting out an issue of the Ray Tracing News.  So, rewind your brains back to
August...

    SIGGRAPH was interesting, as usual.  Las Vegas is an amusing place; now
that I've seen it once, I don't ever need to go back.  To my surprise, there
was quite a turnout for the ray tracing roundtable get together at SIGGRAPH.
The roundtable is a nice excuse for people to get in a room and put faces to
names, and I finally got to meet some people who had been just authors with
email addresses before this.

    Some papers of note at SIGGRAPH which directly affect ray tracing were
Kirk & Arvo's paper on unbiased sampling techniques and Mitchell's on optimal
sampling for ray tracing.  The former warns that re-using initial samples
results in bias when adaptively supersampling; the latter talks of image
sampling strategies.  Other papers of interest include those on new procedural
texturing methods, which all look fairly easy to implement in their simpler
forms.

    Chen et al presented "A Progressive Multi-Pass Method for Global
Illumination", which uses just about every trick in the book to try to achieve
maximum realism.  Xiao He et al presented "A Comprehensive Physical Model for
Light Reflection", which is just that; it seems about the most realistic
shading model I've seen, with some very serious mathematics behind it.  Another
paper from Cornell, "A Global Illumination Solution for General Reflectance
Distributions" by Sillion et al, gives an interesting method of storing
reflectance functions by using spherical harmonics.

    The most theoretically significant radiosity paper was done by Hanrahan et
al, who presented a method of limiting the amount of computation by use of
hierarchy and error limits.  This method opens up interesting new lines of
thought and research in radiosity.

    I did not spend a lot of time on the floor, but did run across an
interesting demo at the Intergraph booth.  They had a cute ray tracing program 
that implemented parameterized ray tracing (Sequin & Smyrl, SIGGRAPH '89),
where you essentially store the shading equation parameters for each pixel.
Changing colors, applying textures, etc then becomes pleasantly fast, as all
you have to do is substitute the proper parameter values and reevaluate,
getting a new full ray traced image in seconds.
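
[The per-pixel parameter store can be sketched as follows - a guess at
minimal structures for illustration, not Sequin & Smyrl's actual design; one
full trace would fill in a PixelParams record per pixel, after which
recoloring a material is a single cheap pass over the image.]

```c
#include <assert.h>

typedef struct { double r, g, b; } Color;

/* Per-pixel record left behind by one full trace: which material was
   hit and the geometry-dependent shading terms (names here are
   hypothetical). */
typedef struct {
    int surface;       /* material index, -1 = background */
    double n_dot_l;    /* diffuse geometry term, frozen by the trace */
    double spec;       /* specular contribution, frozen by the trace */
} PixelParams;

/* Re-shade the whole image from stored parameters - no rays cast, so
   changing a material's color takes one cheap pass over the pixels. */
void reshade(const PixelParams *pp, const Color *diffuse, Color *out, int n)
{
    int i;
    for (i = 0; i < n; i++) {
        const Color *c;
        if (pp[i].surface < 0) {
            out[i].r = out[i].g = out[i].b = 0.0;  /* background */
            continue;
        }
        c = &diffuse[pp[i].surface];
        out[i].r = c->r * pp[i].n_dot_l + pp[i].spec;
        out[i].g = c->g * pp[i].n_dot_l + pp[i].spec;
        out[i].b = c->b * pp[i].n_dot_l + pp[i].spec;
    }
}
```

Textures fit the same mold: substitute a texture lookup for the diffuse
color and re-run the pass, again without casting any rays.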

    Other new ray tracing products I noticed were from Ray Dream and Strata.
Ray Dream has a ray tracer for the Mac, with the program LightForge for
modeling surfaces and SceneBuilder for scene description.  They have also
added a distributed computing feature to poll Macs on a network for idle CPU
time and uses it for rendering.  Strata offers StrataVision 3d, again for the
Mac.  They claim ray tracing and radiosity rendering and gave us a demo disk -
the radiosity images are no great shakes, but it's interesting to see the word
"radiosity" making its way into the microcomputer market.

    AT&T Pixel Machines has been adding radiosity capabilities to their
rendering library set.  Silicon Graphics is still demoing radiosity, though no
product seems in the offing.  They did have a good tutorial film showing the
ideas behind the progressive radiosity algorithm, and Baum et al had a
worthwhile paper in the Proceedings on making radiosity usable.  This paper is
indispensable for anyone designing a robust radiosity system for general use
(i.e. you plan on rendering more than a few axis-aligned boxes in a room).
HP demoed their radiosity rendering product (ARTCore) with a room designer
demonstration, and had a movie in the film show (positive adjectives avoided,
since I worked on both projects).

    One of the more clever tricks I learnt from the room designer was how to
get reasonable wallpaper, floor covering, and other such textures scanned in
using a flatbed scanner.  In the past I went to building supply places and
borrowed or bought samples ("Yes, I want to see how this will look in my
kitchen", not mentioning that the kitchen existed only in the computer).
However, with a flatbed scanner you can get stuck:  the samples can be bigger
than the scanning surface.  Even if small enough, repetition of the texture
can lead to unrealistic effects (for example, a brick pattern is obviously
tiled if the brick colors keep repeating in a too regular fashion).  I've also
tried photographing large areas of a surface (e.g. a brick wall), but then
variations in the scene's lighting often appear and make for patterning or odd
shading artifacts.

    Tamar Cohen, who developed the room designer, realized that there was an
excellent solution to these problems:  dollhouse supplies!  Dollhouse
wallpaper and floor coverings easily fit on a flatbed scanner, and all the
repetition and lighting problems go away.

    For those of you who are deeply into texturing, you should consider
looking into the Khoros image processing system (ftp from pprg.eece.unm.edu
[129.24.24.10]:  /pub/khoros - check release first).  It's a huge (~100 Meg)
system, but from my minimal exposure seems extremely powerful and easy to use.
It has a visual programming language, so you can interactively attach various
function boxes together to perform operations.  This makes it easy to start
using the system quickly for simple manipulations, though I think I'm going
to have to break down and read the documentation at this point.  The system
is X based and has been ported to most major workstations on up, and the
group at the University of New Mexico is enthusiastic and willing to help.
Recommended.

    I've also finally scratched the surface of Greg Ward et al's Radiance
package.  I was impressed first off by the portability:  one of his displayers
was the first serious X program I've ever compiled and linked without having
to diddle around with something to make it go.  In fact, I didn't even know it
was an X program until I ran it and a window popped up on the screen!  If you
want physically based rendering, this is the only package I know that even
attempts it.  It also seems to be a fine renderer, and I enjoy the progressive
ray tracing feature (the image refines while you watch it).

    As far as speed goes, Rich Marisa at the Cornell Theory Center kindly gave
me an explanation and demonstration of their Ray Casting Engine.  Duke and
Cornell have been developing this piece of hardware for some time, and it
embodies an interesting approach:  represent the CSG model as a network of
processors, then, given a direction of view, convert the model into sets of
spans.  These spans can then be used for analysis, rendering, etc.  For more
information, see the Feb. 1991 issue of Mechanical Engineering, or Kedem and
Ellis' article in _Parallel Processing of Computer Vision and Display_ (ed.
by Dew, Heywood & Earnshaw).

-------------------------------------------------------------------------------

New People, Address Changes, etc


# Andy Newton - physical radiance modelling, natural scenes, rays with solid
#	angle
# Remote Sensing Research, University College London
# Photogrammetry,
# UCL, Gower Street
# London, ENGLAND
# +44 71 387 7050 x2742
alias andy_newton	anewton@ps.ucl.ac.uk
alias andy_newton	anewton%uk.ac.ucl.ps@uk.ac.ucl.cs

Although the graphics is more fun than the remote sensing, this is what I'm
supposed to be doing...

Applying ray tracing to the understanding of remote sensed images of the
natural world, mainly satellite imagery. Much more interested in physical
accuracy than efficiency. Also how to correctly model, and sample, very large
and non-uniform light sources (the sky!) in ray tracing. How to relate the
point sampling paradigm of the infinitesimal ray to light energy transport.
Physical reflectance models like BRDF. Doing distance attenuation and variable
light source sampling properly (probably) using solid angle.

I'd be really interested in any references anyone has to ray tracing for
physical process simulations or radiance calculations using solid angle as
a ray property.

On offer: a realistic sky radiance model based on atmospheric scattering.

--------

# Denise Blakeley
# 1455 Runaway Bay Dr. #2B
# Columbus, OH 43204
# (614) 487-8442
blakeley@cis.ohio-state.edu

Ray-tracing interests: general

What I'm doing these days: I'm trying to finish my MS in Computer Science
  (concentrating in graphics) here at Ohio State December '91.  I'd like to
  finish the program with at least one fairly complete project to show for
  it, so I'm trying to expand my basic ray-tracer into a more complete
  rendering system.  Nothing ground-breaking; I'm just trying to learn as
  much as I can at this point, and have fun doing it!

--------

# Rick Turner - weird primitives, non-Euclidean raytracing, textures
# IBM UK Science Centre
# Athelstan House, St. Clement Street, Winchester SO23 9DR, England.
ricky@venta.iinus1.ibm.com

I'm a scientist at UKSC, working in the area of remote sensing and the
application of image and visualisation techniques to earth science problems.
Raytracing is a spare time activity.  I've written one raytracer, as well as a
substantial part of a second.  These use all the common CSG primitives, and
for the large tracer (called RT), I've added support for bicubic spline
patches (bezier, b-spline, ..., continuous beta-splines) and implicit
functions as well.  Currently I'm playing around with texture and image
mapping and volume objects.

--------

Matthew Williams
501 Chapel Drive #1417
Tallahassee, Fl  32304
(904)681-0873
fudd@fsunuc.physics.fsu.edu

Interests: Anything and everything

At the moment I am a student at Florida State University majoring in Russian
Language with minors in Math, Physics, and Computer Science.  All the time
that I am not in class (including some times when I should be in class) I am
on my PC playing around with DKBTrace (or should I say PVRay).  One of my
larger projects that I want to attempt is translating the C source for DKB to
assembly, hopefully gaining some speed.  I would also like to add a fractal
section to it so I can have vines and stuff growing on different objects.

-------------------------------------------------------------------------------

ElectroGig Free Software Offer

[I don't know much about GIG, except that they have a CSG ray tracer.  Sounds
like quite a deal, though! - EAH]

From Communications of the ACM, Nov. 1991:

In an effort to enhance computer graphics education on a national level, GIG
USA is offering a limited number of complete 3D graphics packages free of
charge to accredited universities, colleges and schools throughout the U.S.
The ElectroGIG system, which lists for $30,000, includes retracing [sic -
should be ray-tracing] and animation applications and runs on Silicon Graphics
and DEC 5000 workstations.  Written requests must be mailed (phone calls or
faxes will not be accepted) on official school letterhead by staff or faculty
members only to GIG USA, Inc., 7380 Sand Lake Rd., Suite 390, Orlando, FL
32819, attention:  GIG Educational Software Program.

-------------------------------------------------------------------------------

Spectrum: A Proposed Image Synthesis Architecture, by Andrew Glassner
	(glassner.pa@xerox.com)

Andrew Glassner is currently working on a proposal called "Spectrum", which is
a new ray tracer architecture.  The document outlining this design was made
available in the "Frontiers in Rendering" course notes.  The idea is to make a
flexible public domain ray tracer available among researchers and educators.

-------------------------------------------------------------------------------

Spline Intersection, Texture Mapping, and Whatnot, by
	Rick Turner (ricky@venta.iinus1.ibm.com) and Eric Haines

The code that I developed is based essentially on algorithms developed by
Kajiya and extended by Marini et al (the paper is in one of the Eurographics
procs; I can dig out a reference for you if you're interested).  Basically, we
model the ray as a pair of orthogonal planes.  Each of these is intersected
with the spline surface, giving a pair of space curves.  You intersect the
space curves giving the ray/surface intersection.
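The two-plane construction can be sketched as follows (a minimal illustration
of the idea, not the RT/GDP code; the function name and numpy usage are mine):

```python
import numpy as np

def ray_planes(origin, direction):
    """Represent a ray as two orthogonal planes whose line of
    intersection is the ray's supporting line (after Kajiya's
    formulation).  Each plane is returned as (normal, offset),
    meaning the plane n . x = offset."""
    d = np.asarray(direction, dtype=float)
    d /= np.linalg.norm(d)
    # Pick any vector not parallel to d to seed the construction.
    seed = np.array([1.0, 0.0, 0.0])
    if abs(d[0]) > 0.9:
        seed = np.array([0.0, 1.0, 0.0])
    n1 = np.cross(d, seed)
    n1 /= np.linalg.norm(n1)
    n2 = np.cross(d, n1)          # orthogonal to both d and n1
    o = np.asarray(origin, dtype=float)
    return (n1, n1 @ o), (n2, n2 @ o)

# Every point on the ray satisfies both plane equations:
(p1n, p1d), (p2n, p2d) = ray_planes([0, 0, 0], [0, 0, 1])
pt = np.array([0.0, 0.0, 5.0])    # a point along the ray
assert abs(p1n @ pt - p1d) < 1e-12 and abs(p2n @ pt - p2d) < 1e-12
```

Substituting the spline surface S(u,v) into each plane equation is what yields
the pair of space curves in the (u,v) parameter domain.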

Intersecting curves is a manageable problem, so it works quite well.  The same
basic method is employed for all types of bicubic spline surface.  It actually
turns out that splines having a constant basis matrix (e.g. B-splines, power
splines, Hermites, Catmull-Rom splines, etc.) are cheaper to compute if you
first do a basis transformation to Bezier splines.  Beta-splines and others
that have a variable basis matrix require special treatment, which is quite
complex.

I run the code on an i860 based accelerator card to get it to work in
reasonable time; a 1024*1024 image with a few hundred primitives takes 5-10
minutes to compute.  Spline surfaces can increase this considerably.

The raytracer in question was originally developed by IBM Poughkeepsie as part
of a system called GDP (Geometric Design Processor).  This was essentially a
CSG system and was used to design parts of IBM mainframes.  It ran on a
mainframe and was written in PL/I and Assembler.  The mainframe code was
hacked out as a standalone module some years ago, and was then re-written in
'C' in people's spare time.  Most of the basics were done in Burlington, Va.,
though extensions were done all over the place.

I'm currently working with mapping images and textures onto objects.  Yes, I
know this is a largely solved problem nowadays, but there are some interesting
'gotchas'.  I'm particularly interested in singular mappings, where you don't
have a one-to-one correspondence.  This overlaps in some ways with my 'real'
work, which often involves rendering some earth science dataset.  I'm
currently fooling around with Magellan data from JPL, rendering combinations
of terrain and image data in various ways.

--------

My reply:

I know the two-plane intersection method you refer to, in fact I coded it up
once long ago (though I don't know the Marini paper - maybe he solves the
problem of sometimes converging on the farther intersections instead of the
closest?  Seems like I remember that guaranteeing the right root is found was
a headache, though I recall Kajiya's solution was something like "use
Laguerre's method and find them all" or somesuch - I'm probably mixing this
up, as I haven't looked at these numerical methods in years...)

As far as texture mapping, that's something I'm still playing with here, too.
Solved?  Well, how does adaptive sampling work along with textures and mipmaps
and so on (mipmaps sample area, but what do you do if more sample rays are
shot in a pixel?) - there was a paper on this topic in Eurographics '91, so
it's of interest.  Also, specifying parameterizations for sets of primitives
is easy enough in theory (e.g. define some projection (spherical, conical,
plane) in space and use this to determine xyz -> uv), but this kind of thing
can look really bad in some cases.  I've been playing with other
parameterization functions, with some interesting results (my VW bug covered
with straw weave is certain to become a style trend soon, I'm sure).  Have you
run across any interesting parameterization papers/techniques lately?
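A plain spherical projection of the sort mentioned above might look like this
(an illustrative sketch; the projection centre at the origin and the pole
along +z are my assumptions):

```python
import math

def spherical_uv(x, y, z):
    """Map a 3D point to (u, v) via a spherical projection centred
    at the origin with the pole along +z.  Singular at the poles,
    where u is undefined -- one of the 'gotchas' of non-one-to-one
    parameterizations."""
    r = math.sqrt(x * x + y * y + z * z)
    u = 0.5 + math.atan2(y, x) / (2.0 * math.pi)         # longitude in [0,1)
    v = math.acos(max(-1.0, min(1.0, z / r))) / math.pi  # colatitude in [0,1]
    return u, v
```

The conical and planar projections work the same way: a map from xyz to uv
that in general is neither one-to-one nor free of singular points.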

--------

Reply from Rick:

In my code, the patches are subdivided, though not by very much.  Each
'mini-patch' has its own bounding box, and both are built into a tree
structure.  The more curvature the patch has, the greater the subdivision that
will be used; this reduces the chances of having a local maximum or minimum in
the middle of the patch go outside the bounding box.  It also allows you to
fit the bounding boxes much more tightly to the surface, so cuts down the
number of false intersections where the ray intersects the bounding box but
not the surface.  One interesting side effect of the subdivision is that by
making the bounding box smaller than the surface, you get disconnected pieces
of surface floating in space.  A nice example of this is on the cover of the
Bartels/Beatty/Barsky book on splines.  I've done the teapot in a similar way,
and it looks rather neat.  I went further and mapped the baboon onto each
disconnected patch, and it's quite eye catching.

Basically, the convergence method is a hybrid of multi-dimensional conjugate
gradient and quasi-Newton techniques.  This offers speed plus reasonable
stability (although you can always find a pathological case to defeat the
algorithms).  One nice property is that the iterations happen in (u,v) space
rather than in (x,y,z) space, which ensures that any solution found is a
valid one: if either u or v goes outside the range valid for the particular
mini-patch you're testing against, you can immediately reject the solution,
as there will be no intersection with that patch.  Typically false
intersections such as this are rejected on the first (or sometimes the
second) iteration, which improves the performance a bit.
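The (u,v)-space iteration with early rejection can be sketched with plain
Newton iteration on a bilinear patch (a deliberately simple stand-in for the
hybrid CG/quasi-Newton method and the bicubic patches of the original code;
the tolerance and domain slack are my choices):

```python
import numpy as np

def intersect_patch(corners, planes, u0=0.5, v0=0.5, iters=20, tol=1e-9):
    """Newton iteration in (u, v) for a bilinear patch against the
    two-plane ray representation.  Each plane is (normal, offset).
    Returns (u, v), or None when the iterate leaves the patch domain
    -- the early rejection described above."""
    p00, p10, p01, p11 = (np.asarray(c, float) for c in corners)
    (n1, d1), (n2, d2) = planes
    u, v = u0, v0
    for _ in range(iters):
        S = (1-u)*(1-v)*p00 + u*(1-v)*p10 + (1-u)*v*p01 + u*v*p11
        Su = (1-v)*(p10 - p00) + v*(p11 - p01)    # dS/du
        Sv = (1-u)*(p01 - p00) + u*(p11 - p10)    # dS/dv
        F = np.array([n1 @ S - d1, n2 @ S - d2])  # distance to both planes
        if np.abs(F).max() < tol:
            return u, v
        J = np.array([[n1 @ Su, n1 @ Sv],
                      [n2 @ Su, n2 @ Sv]])
        du, dv = np.linalg.solve(J, -F)
        u, v = u + du, v + dv
        if not (-0.05 <= u <= 1.05 and -0.05 <= v <= 1.05):
            return None          # reject: no hit on this mini-patch
    return None

# A flat unit patch in z=0, and a ray along -z through (0.3, 0.7)
# expressed as the planes x = 0.3 and y = 0.7:
corners = ([0, 0, 0], [1, 0, 0], [0, 1, 0], [1, 1, 0])
planes = ((np.array([1.0, 0.0, 0.0]), 0.3),
          (np.array([0.0, 1.0, 0.0]), 0.7))
# converges to (u, v) = (0.3, 0.7)
```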

Ray tracing splines is tedious though, so I've given my code an option to
render the bounding boxes for the mini-patches.  This gives a pretty good
first look at what you're going to get out, and takes a couple of orders of
magnitude less time.

Texture mapping people may want to look through the cartographic literature.
Map makers have similar problems in projective geometry when it comes to
changing map (or image) coordinate systems - e.g. from lat/long to UTM.  Much
effort has been devoted to solving the mapping (in the mathematical sense)
problem; less to resampling.  The resampling used is almost always standard
bilinear or bicubic, with the attendant problems.  On the whole, though,
digital cartography can be a useful source of information that is often
overlooked by graphics people.  A good reference to start with is USGS
Professional Paper 1395, 'Map Projections, a Working Manual' by John Snyder.
In the US you can get a copy from any Government bookstore - when I got mine
I think it cost me about 25 dollars.

-------------------------------------------------------------------------------

Satellite Image Interpretation, by Andy Newton (anewton@ps.ucl.ac.uk)

I work in a satellite image interpretation research group. Our interest in 
ray tracing is for simulating all parts of the process of the formation of 
satellite images - optics, camera motion, atmosphere, surface scattering and
global (as in hemispherical!) light source illumination. We do work at two
scales - where the basic scene is a DEM (height grid) and for very complicated
3-d scenes for plant canopy reflectance simulation.

So ray tracing is a really powerful tool that lets us simulate as many parts
of these complex processes as we can model, but what we need to do is not
quite computer graphics.  What I mean by this is that what we need out of our
models are accurate, truthful energies (in real units), not measures of
visual brightness.  We need physical results.  One example of this is
wavelength sampling by importance sampling a spectral response curve, as
opposed to treating light as an RGB 'colour'.  (Though I can't recall any
references to doing exactly this, it is proposed in Cook's original
stochastic sampling papers, and my implementation is very similar to his for
reflected ray direction.)
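Importance sampling a response curve comes down to inverting its cumulative
distribution.  A minimal sketch (the tabulated curve below is an arbitrary
illustration, not real sensor data):

```python
import bisect
import random

def make_wavelength_sampler(wavelengths, response):
    """Return a sampler that draws wavelengths in proportion to a
    sensor's spectral response curve, via the inverse-CDF method."""
    total = sum(response)
    cdf, acc = [], 0.0
    for r in response:
        acc += r / total
        cdf.append(acc)            # running cumulative distribution
    def sample(rng=random):
        # Invert the CDF: find the first bin whose CDF exceeds a
        # uniform random number.
        i = bisect.bisect_left(cdf, rng.random())
        return wavelengths[min(i, len(wavelengths) - 1)]
    return sample

# E.g. a crude green-peaked response tabulated at 50 nm intervals:
sample = make_wavelength_sampler([400, 450, 500, 550, 600, 650, 700],
                                 [0.1, 0.3, 0.8, 1.0, 0.7, 0.3, 0.1])
```

Drawing many samples yields wavelengths distributed in proportion to the
curve, so rays are spent where the sensor is most sensitive.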

Our main problems are (i) illumination from such a big light source (the sky
hemisphere) with 'diffuse' reflectors (ii) ensuring that we model energies
correctly and (iii) using physical directional reflectances. I may be going out
on a bit of a limb here so please excuse me if I do - I'm not as familiar as I
ought to be with general CG practice - but I'll try to explain what I mean.

I've spent the better part of 3 years supporting and trying to add to a
tracer I inherited when I came to work here.  That has turned out not to be
particularly smart, as it's been a lot harder to modify and bug-fix than if
it were my own work.  Anyway, the point is that throughout most of that time
we had no good model for the directional and spectral variation of that thing
outside the window - the sky.  As our main interest is _supposed_ to be the
satellite work, not the graphics, I felt we couldn't come to grand
conclusions from our results if the illumination function was simply not
representative of the real world.  Now I've managed to solve this problem by
implementing a model of atmospheric scattering processes due to Zibordi and
Voss, so it's time to make the physics correct and, by the way, write a fresh
tracer.  I have seen some work on CG models of the sky which are functional
rather than physical.  This model is quite fast enough to create a sky
radiance LUT at any resolution required on a per-scene basis, so if you know
of anyone out there who needs a model of the sky's irradiance, maybe I could
help.

The thing about any such illumination model is, of course, that the energy
results are per steradian of solid angle of illuminating source.  With the
point-sampling, infinitesimally thin ray, what solid angle does a ray
reaching the sky have?  If it's a primary ray then OK, it can have some solid
angle associated with the pixel, but how should ray solid angles be
transformed by reflection etc.?  The only work I've seen that is remotely
like this is Amanatides' cones.  However, that doesn't use the geometric
concept of a solid angle.  One plus point of considering solid angle is of
course that the effects of distance attenuation by divergence are implicitly
included.  So I'm very interested in whether solid angle is used as yet
another ray parameter in more general CG work.  Is this really common, or
unheard of?
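For the primary-ray case, at least, the pixel's solid angle can be written
down directly (a sketch of the standard small-area approximation; attaching
this value to a ray struct is my illustration, not an established scheme):

```python
import math

def pixel_solid_angle(pixel_area, distance, cos_theta=1.0):
    """Approximate solid angle (steradians) subtended by a small
    flat pixel at a given distance:

        dOmega ~= A * cos(theta) / r**2

    where theta is the angle between the pixel normal and the line
    to the eye.  Valid only for A << r**2."""
    return pixel_area * cos_theta / (distance * distance)

# Algebraic sanity check: plugging a sphere's surface area 4*pi*r**2
# in at distance r recovers the full 4*pi steradians.
```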

There's a reflectance concept in remote sensing called the Bidirectional
Reflectance Distribution Function (BRDF), which may also be used in CG or
have a parallel, that defines directional reflectance as a 5-d array of
pairwise directional spectral reflectance coefficients.  For each wavelength
(quantized, of course), for each incident direction (over the 2 Pi
hemisphere), and for each emergent direction, define a reflectance
coefficient.  Such things can be used as reflectance LUTs, or integrated,
subdivided, and importance sampled.  Are similar things done for interesting
material reflectances in the mainstream?
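As a concrete picture of the 5-d table, here is a minimal tabulated-BRDF
sketch (the resolutions, the nearest-bin lookup, and the constant Lambertian
fill are all illustrative assumptions):

```python
import math
import numpy as np

# Tabulated BRDF indexed as
# (wavelength, theta_in, phi_in, theta_out, phi_out) -> reflectance.
N_WL, N_TH, N_PH = 4, 8, 16
brdf = np.full((N_WL, N_TH, N_PH, N_TH, N_PH), 1.0 / math.pi)  # Lambertian

def lookup(wl_idx, theta_in, phi_in, theta_out, phi_out):
    """Nearest-bin lookup into the BRDF table.  Angles in radians;
    theta over [0, pi/2), phi over [0, 2*pi)."""
    def q(x, lo, hi, n):
        # Quantize x in [lo, hi) to a bin index, clamped to the table.
        return min(int((x - lo) / (hi - lo) * n), n - 1)
    return brdf[wl_idx,
                q(theta_in, 0, math.pi / 2, N_TH),
                q(phi_in, 0, 2 * math.pi, N_PH),
                q(theta_out, 0, math.pi / 2, N_TH),
                q(phi_out, 0, 2 * math.pi, N_PH)]
```

Integrating such a table over outgoing directions, or building a CDF over it
for importance sampling, follows the same pattern as the wavelength case.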

-------------------------------------------------------------------------------

Material Properties, by Ken Turkowski (turk@apple.com)

[I edited Ken's notes into a coherent whole. - EAH]

>Does anyone know where I can pick up a list of material properties for
>different metals and other objects? 
>
>I need to know refractive index, diffuse component, specular component and
>specular exponent.

Purdue has a catalog of transmissive, reflective, absorptive and emissive
spectral data for conductors, dielectrics, pigments, emulsions, and light
sources.  From this you can calculate the refractive index and diffuse
spectrum.  It's called _Thermophysical Properties of Matter_ by the Purdue
University Thermophysical Properties Research Center.  There are multiple
volumes.  We have found volumes 6, 7, and 8 most useful.  These contain data
for dielectrics, conductors, and surface coatings.

Unfortunately, this book is out of print.  We got our copy by photocopying one
that Purdue had.

Specular data is a function of the finish (e.g. rough, smooth), and can be
calculated by the method of He (SIGGRAPH '91) given surface statistics.

Roy Hall's book, _Illumination and Color in Computer Generated Imagery_,
points to some other sources of reflective data.  The reflective spectrum *is*
the diffuse "color".  Specular properties are a function on the finish, not
the material.

-------------------------------------------------------------------------------

New Library of 3D Objects Available via FTP, by Steve Worley
	(worley@updike.sri.com)

On the ftp site cs.uoregon.edu (128.223.4.13), I have assembled a set of over
150 3D objects in a binary format called TDDD in the directory
/incoming/TDDDobjs.  These objects range from human figures to airplanes, from
semi-trucks to lampposts.  These objects are all freely distributable, and most
have READMEs that describe them.  There are over six megabytes of these binary
objects.

In order to convert these objects to a human-readable format, a file with the
specification of TDDD is included in the directory with the objects.  There is
also a shareware utility called TDDDconv that will convert the binary objects
into either OFF, NFF, Rayshade, or vort objects.  This utility is also found
on cs.uoregon.edu, in the file /incoming/TDDDconv.tar.Z.

[There are some interesting things here.  You might have to diddle a bit, and
I noticed that some databases don't translate, but good stuff for the price!
One very cute thing in the package is "tddd2ps", which "converts" a TDDD file
to a printable set of four orthogonal views - nice touch!  - EAH]

-------------------------------------------------------------------------------

Object Oriented Ray Tracing Book

> I am looking for a book named "Object-oriented ray_tracing" by Melcher,
> and published by Wiley 91.

This is, in fact, an article by Karl Melcher and G. Scott Owen, more fully
entitled "Object Oriented Ray Tracing: A Comparison of C++ Versus C
Implementations" which will appear in a Wiley book early in 1992 (and which
was in the Wiley booth at SIGGRAPH '91).  The title of the book will be
"Computer Graphics Using Object-Oriented Programming" and the editors are
Cunningham, Craighill, Fong, and Brown.

-------------------------------------------------------------------------------

New and Updated Ray Tracing and Radiosity Bibliographies

At weedeater (see the header of this issue) via anonymous FTP are a number of
new or updated ray tracing and radiosity bibliographies.  I've updated the
ray tracing bibliography (pub/Papers/RayBib.09.91.Z) and radiosity bib
(RadBib.09.91) from last year's version.  Rick Speer has provided a postscript
(only) version of his extensively cross-referenced ray tracing bibliography
(speer.raytrace.bib.ps).  Tom Wilson's fine ray tracing abstract collection is
also available, with June 1 being the latest version (rtabs.6.91.shar.Z).
Also, the file "NetPapers" lists a number of worthwhile articles and theses
available on the net and where to get them.

-------------------------------------------------------------------------------

DKBTrace 2.12 Port to Mac, by Thomas Okken (thomas@duteca.et.tudelft.nl)

A port of the public-domain raytracer DKBTrace to FPU-equipped Macs has been
made available for anonymous ftp from "alfred.ccs.carleton.ca", files
/pub/dkbtrace/dkb2.12/other_ports/MacPort1.0.2.*.

-------------------------------------------------------------------------------

Graphics Gems II Source Code

FTP from:

weedeater.math.yale.edu [130.132.23.17]
gondwana.ecr.mu.oz.au [128.250.1.63]

file: pub/GraphicsGems/GemsII/GGemsII.tar.Z

-------------------------------------------------------------------------------

Radiance Digest Archive, by Greg Ward (greg@lesosun1.epfl.ch)

I have just made back issues of the Radiance Digest available from anonymous
ftp at hobbes.lbl.gov (128.3.12.38) in the pub/digest directory.  Those of you
who have limited network access can still ask me to send back issues to you
directly.

-------------------------------------------------------------------------------

Model Generation Software, by Paul D. Bourke (pdbourke@ccu1.aukuni.ac.nz)

[Paul has a facet based modeller for the Mac called VISION-3D, which can be
used to generate models directly in the RayShade and Radiance file formats.
He wrote telling me of other programs that might be of interest - EAH]

I have some other "niche" model generators that also export to Radiance and
RayShade.

A brief description of some of them follows:

FracHill - generates the old fractal landscapes using the spatial subdivision
           technique (i.e. not the Fourier method).  It has all the usual
           settings for roughness, sea level, seed, land/sea colour, etc.

3D LSystems - allows the user to generate 3D LSystems (0L). Uses all the
              standard symbols from the literature, an extension of my 
              2D LSystem which I wrote years ago.

Triangulate - takes a set of randomly distributed samples on a surface and
              generates either a triangulated (Delaunay) or gridded mesh
              representing the surface.  We use it for our Landscape
              Architecture course.  Note: surfaces (functions) only, not
              solids!

Anyway, these and other applications can be obtained from my FTP directory
   ccu1@aukuni.ac.nz (130.216.1.5)
located in the
   architec
directory.  Because we pay for FTP traffic to the US, please fetch the README
file in the above directory first; it lists alternative sites in the US.

-------------------------------------------------------------------------------

Rayshade 4.0 Release, Patches 1 & 2, and DOS Port, by Craig Kolb and Rod Bogart
	(rayshade@weedeater.math.yale.edu)

Rayshade 4.0 is now available.  This version is extremely different from 3.0,
and is very different from 4.0beta.

Rayshade 4.0 features include:
	+ Eleven primitives (blob, box, cone, cylinder, height field,
		plane, polygon, sphere, torus, flat- and Phong-shaded triangle)
	+ Aggregate objects
	+ Constructive solid geometry
	+ Point, directional, extended, spot, and quadrilateral light sources
	+ Solid procedural texturing, bump mapping, and
		2D "image" texture mapping
	+ Antialiasing through variable-rate "jittered" sampling
	+ Arbitrary linear transformations on objects and texture/bump maps.
	+ Use of uniform spatial subdivision or hierarchy of bounding
		volumes to speed rendering
	+ Options to facilitate rendering of stereo pairs
	+ Rudimentary animation support and motion blur
	+ Numerous bug fixes and syntax changes

Apologies to all the folks who felt that their Rayshade 4.0beta questions were
not handled in a timely fashion.  Both Rod and Craig have had to deal with
Real Life and did not have as much time for Rayshade as we had hoped.  We
still feel that Rayshade is the best Un*x raytracing package for the price.

Rayshade 4.0 is available via anonymous ftp from weedeater.math.yale.edu
(130.132.23.17) in pub/rayshade.4.0.  The shar files will be posted to
alt.sources and submitted to comp.sources.misc.

--------

Patches 1 and 2 to rayshade 4.0 are now available through
anonymous ftp from weedeater.math.yale.edu (130.132.23.17)
as pub/rayshade.4.0/patches/patch{1,2}.  The patches have
also been posted to comp.sources.misc.

--------

Tom Hite managed to port rayshade to DOS, and was kind enough to send me
a set of diffs, a couple of configuration files, and a short note describing
what one needs to do in order to coax rayshade into running on PCs.

I haven't had the courage yet to find myself a PC and to verify that Tom's 
instructions are idiot-proof.  In addition, Tom's diffs were for rayshade 4.0
patchlevel 0, and the new patches will undoubtedly cause some minor problems
when it comes to applying Tom's diffs.

The files are available from weedeater.math.yale.edu (130.132.23.17) as
pub/rayshade.4.0/raydiffs.dos.

-------------------------------------------------------------------------------

RayShade Timings, by Craig Kolb

Below for your amusement(?)  are timings for the latest version of rayshade
running on an HP-730.

I suspect that rayshade is a good bit less efficient than it used to be, but
I have yet to actually put this suspicion to the test.


Rayshade v4.0, patchlevel 1, on an HP-730 running HPUX-8.05, ?? MB memory.
Wed Oct  9 13:50:29 EDT 1991

         Setup       Total     |  Polygon   Sphere    Cyl/Cone    Bounding
             (seconds)         |   Tests     Tests      Tests    Box Tests
--------------------------------------------------------------------------
balls     3.18       116.24    |    356K     1564K        0        2763K
gears    15.79       705.25    |   8345K       0          0       11260K
mount     6.13       165.03    |   1035K     2096K        0        2991K
rings     5.44       235.37    |    103K      206K      4883K      5536K
teapot   13.49       126.43    |   1677K       0          0        2761K
tetra     2.96        31.93    |    578K       0          0         694K 
tree      4.94       103.28    |    716K       16K       366K      1467K

All timings are sum of user and system time.  Setup includes time to read
the database and initialize all appropriate data structures.  Total time
is setup time plus rendering time.  Test figures are rounded upwards.

Uniform spatial subdivision, employing 22^3 voxels, is used to accelerate
rendering.  In the balls, gears, and tree databases, the ground plane
is moved outside of the 22^3 grid in an attempt to generate a more uniform
object distribution within the grid.

-------------------------------------------------------------------------------

RayShade vs. DKBtrace Timings, by Iain Dick Sinclair (axolotl@socs.uts.edu.au)

>	In your posting on RayShade vs. DKBtrace you mention doing timings
>on them using the SPD.  Any chance you have the timings sitting around?

A friend here did the comparison, but it was by no means thorough -- it only
used one benchmark (the balls?).  In any event, the results of his quick
experiment seem to have been discarded (apparently it was a slight pain to
translate NFF -> DKB's verbose input format).  I seem to remember him saying
that Rayshade completed the scene about 20% quicker.  Unfortunately, SPD
wasn't used exhaustively, though it may be in the near future.

-------------------------------------------------------------------------------

PVRay Beta Release, by David Buck (dbuck@ccs.carleton.ca)

The freely distributable raytracer PVRay (Persistence of Vision) is
available for BETA testing from alfred.ccs.carleton.ca (134.117.1.1)
in the directory pub/dkbtrace/pvraybeta.  This program has been
developed by the "Persistence of Vision" group on CompuServe and is
built on top of DKBTrace version 2.12 with my permission and blessing.

Please note that this is a BETA release, so it may exhibit some bugs or
portability problems to different platforms.  Please refer any
problems to Drew Wells at 73767.1244@compuserve.com.  Also, this
version does not contain some of the ports which were previously
available for DKBTrace.  This situation is being rectified.  However,
you should find that the product-specific modules developed for
DKBTrace should be easily adaptable to PVRay.

Program Synopsis:
-----------------

PVRay is a Freely Distributable raytracer built on top of DKBTrace.  New
additions include:

   - Bezier surfaces
   - Height Fields (GIF only at this time)
   - Bump maps
   - Improved Quartic surface support
   - Input language can now optionally accept lowercase keywords
   - New textures: Onion, Leopard
   - C-style comments accepted (  /* */ and //)
   - Algebraic surfaces
   - Sphere, Cylinder, and Torus image mappings

Please direct inquiries to Drew Wells at 73767.1244@compuserve.com.

-------------------------------------------------------------------------------

Vort 2.1 Release, by Eric H. Echidna

	gondwana.ecr.mu.oz.au [128.250.1.63] pub/vort.tar.Z
	munnari.oz.au [128.250.1.21] pub/graphics/vort.tar.Z
	uunet.uu.net [192.48.96.2] graphics/vogle/vort.tar.Z
	 (uucp access as well ~ftp/graphics/vogle/vort.tar.Z)

Australian ACSnet sites may use fetchfile:
	fetchfile -dgondwana.ecr.mu.oz pub/vort.tar.Z

The major changes are to the ray-tracer which now allows orthographic
projections, lights to be in composite objects, provides a transform operator,
a few other odds and sods plus the usual set of bug fixes.  There are also a
couple of utilities for starting art up via inetd to help simplify the
generation of animations across networks.

It runs on IBM PC's, VMS, and a variety of UNIX boxes.

Contributed scene files for the ray-tracer can be found in contrib/artscenes
on gondwana.  Apart from the scene files, the tar files in this directory
also include some useful tile patterns and geometry files.

Anyone with anything they'd like to add is welcome to put it in gondwana's
incoming directory and send us mail.

Includes [among much else]:

art	- a ray tracer for doing algebraic surfaces and CSG models.

-------------------------------------------------------------------------------

BRL-CAD 4.0 Release, by Michael J. Muuss and Glenn M. Gillis

The U. S. Army Ballistic Research Laboratory (BRL) is proud to announce
the availability of Release 4.0 of the BRL-CAD Package.

The BRL-CAD Package is a powerful Constructive Solid Geometry (CSG) based
solid modeling system.  BRL-CAD includes an interactive geometry editor, a ray
tracing library, two ray-tracing based lighting models, a generic framebuffer
library, a network-distributed image-processing and signal-processing
capability, and a large collection of related tools and utilities.  Release
4.0 is the latest version of software which has been undergoing continuous
development since 1979.

The most significant new feature for Release 4.0 is the addition of n-Manifold
Geometry (NMG) support based on the work of Kevin Weiler.  The NMG software
converts CSG solid models into approximate polygonalized boundary
representations, suitable for processing by subsequent applications and
high-speed hardware display.

BRL-CAD is used at over 800 sites located throughout the world.  It is
provided in source code form only, and totals more than 280,000 lines of "C"
code.

BRL-CAD supports a great variety of geometric representations, including an
extensive set of traditional CSG primitive solids such as blocks, cones and
tori, solids made from closed collections of Uniform B-Spline Surfaces as
well as Non-Uniform Rational B-Spline (NURBS) Surfaces, purely faceted
geometry, and n-Manifold Geometry (NMG).  All geometric objects may be
combined using boolean set-theoretic operations such as union, intersection,
and subtraction.

Material properties and other attribute properties can be associated with
geometry objects.  This combining of material properties with geometry is a
critical part of the link to applications codes.  BRL-CAD supports a rich
object-oriented set of extensible interfaces by means of which geometry and
attribute data are passed to applications.

A few of the applications linked to BRL-CAD include:

*) Optical Image Generation (including specular/diffuse reflection,
	refraction, multiple light sources, and articulated animation)
*) An array of military vehicle design and evaluation V/L Codes
*) Bistatic laser analysis
*) Predictive Synthetic Aperture Radar Codes (including codes due to ERIM)
*) High-Energy Laser Damage
*) High-Power Microwave Damage
*) Weights and Moments-of-Inertia
*) Neutron Transport Code
*) PATRAN [TM] and hence to ADINA, EPIC-2, NASTRAN, etc.
	for structural/stress analysis
*) X-Ray image calculation

BRL-CAD requires the UNIX operating system and is supported on more than a
dozen product lines from workstations to supercomputers, including:  Alliant
FX/8 and FX/80, Alliant FX/2800, Apple Macintosh II, Convex C1, Cray-1, Cray
X-MP, Cray Y-MP, Cray-2, Digital Equipment VAX, Gould/Encore PN 6000/9000, IBM
RS/6000, Pyramid 9820, Silicon Graphics 3030, Silicon Graphics 4D ``Iris'',
Sun Microsystems Sun-3, and the Sun Microsystems Sun-4 ``SparcStation''.
Porting to other UNIX systems is very easy, and generally only takes a day or
two.

You may obtain a copy of the BRL-CAD Package distribution materials in one of
two ways:

1.  FREE distribution with no support privileges:  Those users with online
access to the DARPA InterNet may obtain the BRL-CAD Package via FTP file
transfer, at no cost, after completing and returning a signed copy of the
printed distribution agreement.  A blank agreement form is available only via
anonymous FTP from host ftp.brl.mil (address 128.63.16.158) from file
"brl-cad/agreement".  There are encrypted FTP-able files in several countries
around the world.  Directions on how to obtain and decrypt the files will be
sent to you upon receipt of your signed agreement.  One printed set of BRL-CAD
documentation will be mailed to you at no cost.  Note that installation
assistance or telephone support are available only with full service
distributions.

2.  FULL SERVICE distribution:  The Survivability/Vulnerability Information
Analysis Center (SURVIAC) administers the supported BRL-CAD distributions and
information exchange programs for BRL.  Full service distributions cost
US$500, and include a copy of the full distribution materials on your choice
of magnetic tape media.  You may elect to obtain your copy via network FTP.
One printed set of BRL-CAD documentation will be mailed to you.  BRL-CAD
maintenance releases and errata sheets will be provided at no additional
charge, and you will have access to full technical assistance by phone, FAX,
letter, or E-mail.  Agencies of the U.S.  Federal Government may acquire the
full service distribution with a simple MIPR or OGA funds transfer.

For further details, call Mr. Glenn Gillis at USA (410)-273-7794, send E-mail
to <gillis@brl.mil>, FAX your letter to USA (410)-272-7413, or write to:

	BRL-CAD Distribution
	SURVIAC Aberdeen Satellite Office
	1003 Old Philadelphia Road
	Suite 103
	Aberdeen MD  21001  USA

Note that USA area code 410 will not go into effect until 1-Nov-91.  Prior to
that date, please use area code 301.

Those sites selecting the free distribution may upgrade to full service status
at any time.  All users have access to the BRL-CAD Symposia, workshops, user's
group, and E-mail mailing list.

Sincerely,

Michael J. Muuss			Glenn M. Gillis
Advanced Computer Systems		SURVIAC
Ballistic Research Lab			Aberdeen Satellite Office

-------------------------------------------------------------------------------
END OF RTNEWS

