Date: 18 May 92 10:32:37-PST
From: Vision-List moderator Phil Kahn <Vision-List-Request@ADS.COM>
Errors-to: Vision-List-Errors@ADS.COM
Reply-to: Vision-List@ADS.COM
Subject: VISION-LIST digest 11.19
To: Vision-List@ADS.COM

VISION-LIST Digest    Mon May 18 10:32:37 PDT 92     Volume 11 : Issue 19

 - Send submissions to Vision-List@ADS.COM
 - Vision List Digest available via COMP.AI.VISION newsgroup
 - If you don't have access to COMP.AI.VISION, request list 
   membership to Vision-List-Request@ADS.COM
 - Access Vision List Archives via anonymous ftp to FTP.ADS.COM

Today's Topics:

 Looking for color camera recommendations
 Synchronized recording of speech and image sequences
 Image Quality
 Requesting specular object image....
 KBVision
 Latest Object Recognition Toolkit (perceptual grouping) S/W
 Re: computer tomography graphics
 Looking for C (or any) implementation of Geman and Geman
 Looking for Computer Vision-article
 Replies to Preattentive vision and Pop-outs request (long)
 1st CFP: Third IEE International Conference on Artificial Neural Networks

----------------------------------------------------------------------

Date: Fri, 8 May 92 14:03:24 -0500
From: jean@ecn.purdue.edu (Jean Hsu)
Subject: Looking for color camera recommendations

We are looking for a color CCD camera for doing vision research
and would appreciate information or recommendations from anyone 
who has used or bought such a camera. We are also considering 
getting a pair of cameras to do stereo vision. How is 
synchronization of the two cameras usually done? 

Jean Hsu
jean@ecn.purdue.edu

[ Please post recommendations for cameras and specific reasons
  supporting these recommendations to the List. Synchronization
  other than by genlock and its ramifications is also of interest.
			phil...			]

------------------------------

Date: 8 May 1992 15:56:40 GMT
From: bregler@i13d8.ira.uka.de (Christoph Bregler)
Organization: University of Karlsruhe, FRG
Subject: Synchronized recording of speech and image sequences

  We are starting a new project in which we need speech and visual
information in parallel for a recognition task. We want to use a standard
framegrabber for digitizing video images. I expect we will first record 
the test person, and then digitize the video image sequences and the
speech separately. In order to know, at the later recognition stage,
exactly which sound occurred at which video frame, we need some form of
time coding on the tape recorder, and also the possibility for
the workstation to control the recorder. The recorder must also
deliver a very clean still frame. 

  So, does anybody out there know of equipment which does this task 
within a reasonable budget? I might also consider optical disc
recorders for this.

  Thanks for any help. Please reply to  bregler@ira.uka.de

	-Chris :)

------------------------------

Date: Tue, 12 May 1992 00:14:23 GMT
From: bellutta@ohsu.edu (Paolo Bellutta)
Organization: Oregon Health Sciences University
Subject: Image Quality

I'm trying to locate documentation on image quality assessment. To be  
more specific, I'm trying to find what has been done in the past  
for comparing, for example, lossy compression algorithms or analog  
image transmission media. Eventually I would like to find a set of  
test images and a set of tests that allows one to assess the quality of a  
compression/transmission system, possibly independently of the final  
use or type of the images.

Any pointers to work that has been done in the past, test images  
that are known to be better than others for this purpose, or tests that  
you think should be considered are welcome.

I know that the JPEG committee has carried out several experiments to
assess the quality of images after compression. If anybody knows anything
about this I would appreciate it if you could e-mail references or pointers
of any kind.
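As a concrete example of the kind of objective test involved (an editorial
illustration, not drawn from the JPEG committee's protocol): the simplest
use-independent measure is mean-squared error between the original and
degraded images, usually reported as peak signal-to-noise ratio (PSNR).

```python
import numpy as np

def psnr(original, degraded, peak=255.0):
    """Peak signal-to-noise ratio in dB between two same-sized images.
    Higher values mean the degraded image is closer to the original;
    identical images give infinity."""
    diff = np.asarray(original, dtype=float) - np.asarray(degraded, dtype=float)
    mse = np.mean(diff ** 2)
    if mse == 0.0:
        return float("inf")
    return 10.0 * np.log10(peak * peak / mse)
```

PSNR is cheap to compute but correlates only loosely with perceived
quality, which is exactly why subjective test protocols and carefully
chosen test image sets are needed.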

Please use e-mail. If there is interest I'll summarize to the net.

Paolo Bellutta - BICC - OHSU - 3181 SW Sam Jackson Park Rd. 

Portland, OR 97201-3098 - internet: bellutta@ohsu.edu
tel: (503) 494 8404 - fax: (503) 494 4551

------------------------------

Date: Wed, 13 May 1992 18:57:32 GMT
From: tsai@osceola.cs.ucf.edu (Tsai Ping-sing)
Organization: University of Central Florida, Orlando
Subject: Requesting specular object image....
Keywords: Specular images

Hello, 

I am a graduate student at UCF, and I am doing some experiments
on shape from shading with specular objects. Are there any real
specular object images that I can ftp? 
If so, I would appreciate the address of the site.

Thank you!

Ping-Sing Tsai
tsai@eola.cs.ucf.edu

------------------------------

Date: Tue, 12 May 1992 16:04:13 +0200
From: " (PUN Thierry)" <pun@cui.unige.ch>
Organization: University of Geneva, Switzerland
Subject: KBVision

Hello,

I am interested in purchasing for my group the KBVision (TM) product 
from Amerinex Artificial Intelligence. I would be happy to receive 
comments and opinions from other researchers. For example:

 - general opinion;
 - how did you integrate your existing software with KBVision;
 - how has KBVision changed your way of working;
 - have you used KBVision as a support for an (advanced) computer
   vision course;
 - did you use all the modules;
 - comments about the quality of service.

Many thanks in advance for any reply,
Best regards,

	Prof. Thierry Pun, Computer Vision Group
	Computing Science Center, University of Geneva
	12, rue du Lac, CH-1207 Geneva SWITZERLAND
	Phone: +41(22) 787 65 82; fax: +41(22) 735 39 05
	E-mail: pun@cui.unige.ch [or pun@cgeuge51.bitnet]

------------------------------

Date: Wed, 13 May 92 21:13 GMT
From: The Maverick <ATAE@spva.physics.imperial.ac.uk>
Subject: Latest Object Recognition Toolkit (perceptual grouping) S/W

         Object Recognition Toolkit (ORT) Version 2.1

Description:
===========

ORT is a collection of image understanding S/W in C for use on Unix
platforms (tested on Sun4, Decstation, Iris). The aim has been to build
a hierarchy of groupings of straight-line segments (junctions to polygons)
for use in object recognition. The S/W is in the form of filters and includes 
a displayer for use on COLOUR workstations under X11R4/5. All the S/W comes 
with the GNU general public licence. Also included are LaTeX copies of papers 
on some of the S/W (FEX, LPEG). The changes to the new version include:

   1. All programs are now default driven
   2. PGM format as standard (with option for raw images)
   3. Compilation is much better (should only need to type make)
   4. Added new options to LPEG and IPEG
   5. IPEG (polygon detector) is now complete, general, and very much faster
   6. LPEG (low-level grouper) now has option to consider pairs of collinear 
      lines as real lines
   7. Some minor bug fixes etc..

The features defined within ORT represent the complete set required to
recognize any polyhedral object (viewpoint-independently) starting from 
edge information. If you have access to Prolog or SQL you can build relational 
graphs, using these primitives, to define objects. This should be much 
simpler than trying to write a new language in C (aaaaaaaaaaaarghhh!!).

Where to get it
===============

Filename:  ORT-2.1.tar.Z 
Site:      FTP.ADS.COM [128.229.36.25]
Directory: pub/VISION-LIST-ARCHIVE/SHAREWARE/ORT-2.1

Contents of tarfile 
====================

    CODE                              DESCRIPTION

Liste              List handling library in C by Jean-Paul Schmidt formerly
                   of University of Surrey, UK. [Version 1.2]

RW_ChainPixels     Pixel chaining code by Geoff A.W. West and Paul L. Rosin
                   of Curtin University, Australia [Version 1.2]

FEX                Segments chained pixel lists produced by RW_ChainPixels
                   into straight-line segments and circular arcs. [Version 1.7]

LPEG               Low-level straight-line grouping [Version 1.9]. Groups
                   straight-line segments produced by FEX into:

                         Parallel overlapping   Parallel non-overlapping
                         Collinear              V,L,T, and Lambda Junctions

IPEG               Intermediate-level grouping [Version 2.1]. Groups sets
                   produced by LPEG into:

                         Triplets (barends, Z)
                         Corners  (3 lines sharing a junction point)
                         Polygons

DisplayPEG         X11R4/5 viewer for the above groupings/segments by
                   Jean-Paul Schmidt and Ata Etemadi [Version 1.2]

Timing Information
==================

Starting from a 7x7 Grid of squares in a 256x256 image:

                                           | Time in Sun4 |
                                           | CPU Seconds  |
                                           |              |
    RW_ChainPixels < Grid.pgm  > Grid.str  |   0.5        | chained pixels
              FEX  < Grid.str  > Grid.fex  |   9.4        | lines and arcs
              LPEG < Grid.fex  > Grid.lpeg |   10.0       | junctions/paral..
              IPEG < Grid.lpeg > Grid.ipeg |   35.5       | triplets/polygons..

LineSegments = 138    Parallel OV pairs  =  144   Triplets        = 418
CircularArcs = 0      Parallel NOV pairs =  504   Y Corners       = 0
                      Collinear pairs    =  112   TLambda Corners = 105
                      L junctions        =  232   Closed Polygons = 49
                      V junctions        =  0
                      T junctions        =  14
                      Lambda junctions   =  0

I would appreciate it if people who obtain the S/W drop me a line. All
contributions/comments to the distribution are most welcome. If you have
any problems I'll be glad to help.

       regards
                Ata <(|)>.

| Mail          Dr Ata Etemadi,                                               |
|               Blackett Laboratory,                                          |
|               Imperial College of Science, Technology, and Medicine,        |
|               Space and Atmospheric Physics Group,                          |
|               Prince Consort Road, London SW7 2BZ, ENGLAND                  |
| Phone         +44 (0)71 589 5111 Ext 6751                                   |
| Fax           +44 (0)71 823 8250 Attn. Dr Ata Etemadi,                      |
|               +44 (0)71 589 9463 Attn. Dr Ata Etemadi,                      |
| Telex         929484 (IMPCOL G)  Attn. Dr Ata Etemadi,                      |
| Janet                     atae@uk.ac.ic.ph.spva  or ata@uk.ac.ucl.mssl.c    |
| Earn/Bitnet/Internet      atae@spva.ph.ic.ac.uk  or ata@c.mssl.ucl.ac.uk    |
| Arpanet                   atae%spva.ph.ic.ac.uk  or ata%c.mssl.ucl.ac.uk    |
|  or                       atae%spva.ph.ic@ac.uk  or ata%c.mssl.ucl@ac.uk    |
| Span                      SPVA::atae (19773::atae) or                       |
|                           MSSLC::ata (19708::atae)                          |
|                           RLESIS::cbs%uk.ac.ic.ph.spva::atae or             |
|                           RLESIS::cbs%uk.ac.ucl/mssl.c::ata                 |
|  or                       ecd1::323mwd  (Space Phys. Span account at esoc)  |
| UUCP/Usenet               atae%spva.ph.ic@nsfnet-relay.ac.uk or             |
|                           ata%c.mssl.ucl@nsfnet-relay.ac.uk                 |

------------------------------

Date: Wed, 13 May 1992 09:06:50 GMT
From: ron@monu6.cc.monash.edu.au (Ron Van Schyndel)
Organization: Monash University, Melb., Australia.
Subject: Re: computer tomography graphics

In <Ye2IokO00WB6ET4d4L@andrew.cmu.edu> gk1d+@andrew.cmu.edu (Georgios T. Kossioris) writes:
>I am looking for computer tomography (graphics) SW. I would greatly
>appreciate if someone could give me any relevant  information  e.g.  a
>phone number or a person  to contact with  from a  company or research
>group. Thank you!      

>                                                                        
>     George Kossioris, CMU

You might try disks 293 and 294 of the C User's Group library.

Their address is:

2601 Iowa St,
Lawrence, KS 66046,
(913) 841-1631
FAX: (913) 841-2624

The programs are fairly old now, but all the source is there.  There might be
a PC-SIG entry for these as well....


Hope this helps, RON

Ron van Schyndel, Physics Dept.       ron@monu6.cc.monash.edu.au
Monash University (Caulfield Campus)  ron%monu6.cc.monash.edu.au@uunet.UU.NET
CAULFIELD EAST, Victoria, AUSTRALIA   {hplabs,mcvax,uunet,ukc}!munnari!monu6..
Location: 37 52 38.8S  145 02 42.0E   Ph: +61 3-573-2567   Fax: +61 3-573-2350 

------------------------------

Date: Thu, 14 May 92 10:26:33 BST
From: Bob Fisher <rbf@aifh.edinburgh.ac.uk>
Subject: Looking for C (or any) implementation of Geman and Geman

Dear Vision researchers:
	One of my students is interested in exploring variations 
	of Geman and Geman's algorithm for relaxation-based maximum
	a posteriori image reconstruction.

	Does someone have a C (or other language) implementation
	that we could use as a starting point? We would be happy
	to cite any contributions.
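[ While waiting for an actual implementation to surface, the flavor of
  the method can be sketched in a few lines. The following is a toy
  Python illustration (not Geman and Geman's code) of iterated
  conditional modes (ICM), a deterministic relative of their
  Gibbs-sampler MAP reconstruction, restoring a binary (+1/-1) image
  under an Ising smoothness prior; the weights beta and eta are
  invented for the example. ]

```python
import numpy as np

def icm_restore(noisy, beta=1.0, eta=0.7, n_sweeps=5):
    """Toy MAP restoration of a +1/-1 image by iterated conditional
    modes: each pixel is set to the label minimizing a local energy
    that combines fidelity to the observation (weight eta) with
    agreement with its 4-neighbors (Ising prior, weight beta)."""
    x = noisy.copy()
    h, w = x.shape
    for _ in range(n_sweeps):
        for i in range(h):
            for j in range(w):
                # sum of the 4-neighborhood labels (current estimate)
                nb = sum(x[a, b]
                         for a, b in ((i - 1, j), (i + 1, j),
                                      (i, j - 1), (i, j + 1))
                         if 0 <= a < h and 0 <= b < w)
                # pick the label with the lower local posterior energy
                x[i, j] = min((-1, 1),
                              key=lambda s: -eta * s * noisy[i, j]
                                            - beta * s * nb)
    return x
```

Replacing the deterministic minimization with Gibbs sampling under a
falling temperature schedule recovers the annealing algorithm of the
original paper.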

Thanks,
Bob Fisher
Department of Artificial Intelligence
University of Edinburgh

rbf@uk.ac.edinburgh.aifh
phone (31)-6530-3098
fax: (31)-225-9370

------------------------------

Date: 14 May 92 18:27:48 GMT
From: baustad@idt.unit.no (Jostein Baustad)
Organization: The Norwegian Institute of Technology
Subject: Looking for Computer Vision-article

Hi everybody!

I am looking for a particular article by Ruye Wang and Herbert Freeman. I
don't know the title of the article, but it is a follow-up to the following
one:

@incollection{wang,
   author = "Ruye Wang and Herbert Freeman"
  ,title  = "The Use of Characteristic-View Classes for {3D} Object Recognition"
  ,booktitle = "Machine Vision for Three-Dimensional Scenes"
  ,publisher = "Academic Press"
  ,year   = 1990
  ,editor = "Herbert Freeman"
  ,chapter= "4"
  ,pages  = "109--161"
}

To make it clear: I am looking for the follow-up to the article above. The
article describes an algorithm for checking for equivalence of two visible-edge
perspective projections of polyhedral scenes. If anybody knows where this
article can be found (proceedings, collections etc.), please inform me.

Please e-mail me directly - if others are interested, I will post the
information.

Jostein.

* Jostein Baustad                        * E-mail : baustad@idt.unit.no       *
* The Norwegian Institute of Technology  *          baustad@solan.unit.no     *
* Division of CS and Telematics          * "Reality is just a convenient      *
* Information and Knowledge Systems      *  measure for complexity."          *

------------------------------

Subject: Replies to Preattentive vision and Pop-outs request
Date: Fri, 15 May 92 13:02:02 PDT
From: "Ramaswamy P. Aditya" <aditya@ocf.Berkeley.EDU>

Here are the responses I received in reply to the query in VLD which is 
reproduced below:


Subject: Preattentive vision esp. pop-outs
Date: Fri, 10 Apr 92 18:28:45 -0700

Has anyone done any computational work involving pop-outs? I would be
equally appreciative of pointers to any papers that discuss the
computational aspects of pop-outs. I have gone through all the
psychologists' papers (esp. Treisman) but have yet to find anything
involving mathematical/computational analysis of pop-outs.

At the request of the moderator, and at the risk of being imprecise
or plain wrong, let me try to explain what pop-outs are...

A pop-out is the characteristic of a figure that makes it recognizable
(able to be picked out) by a subject within a field of distractors inside
200-500 ms. Hence it is a preattentive phenomenon. For example, given a
field of five circles (outlines), the target being one incomplete circle
and the distractors four complete circles, the subject is unable to
realize that one of the circles is incomplete. Whereas, upon switching
the target and the distractors, i.e. four incomplete circles and one
complete circle, the subject is immediately able to tell that one of the
circles is complete. Furthermore, given larger fields, the search
for such pop-outs proceeds in a parallel manner if the subject is told to
look for a target that is a pop-out, but in a serial manner if told to look
for a target that is not a pop-out. The pop-out feature of a closed circle
is its closure. Hopefully this helps to define what I am looking for,
and perhaps prompts some ideas and interest.
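One crude way to formalize the distinction (an editorial toy criterion,
not taken from Treisman's work): encode each display item as a dictionary
of feature values, and call the target a pop-out when it carries at least
one feature value that no distractor shares; a conjunction target, all of
whose feature values recur among the distractors, fails the test and must
be found serially.

```python
def pops_out(display, target_index):
    """Toy feature-search criterion: the item at target_index pops out
    if some feature value of it appears nowhere else in the display."""
    target = display[target_index]
    distractors = [item for i, item in enumerate(display)
                   if i != target_index]
    return any(all(item.get(feature) != value for item in distractors)
               for feature, value in target.items())
```

Under this criterion a red O among green Os pops out (unique color),
while a red O among red Xs and green Os does not: each of its feature
values also occurs among the distractors, which is the serial
conjunction-search condition described above.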

BEGIN RESPONSES

In response to your query in Vision List, you might have a
look at the citations in chapter 5 of Jules Davidoff,
"Cognition Through Color", MIT Press, 1991. I can't
guarantee that there's anything there you haven't already
seen, but Davidoff does cite some computational approaches
to the "pictorial register".

Larry Hardin \\
Dept. of Philosophy\\
Syracuse University\\
<clhardin@suvm.acs.syr.edu>\\

My thesis deals with the issue of pop-outs, albeit from a different
perspective. It proposes and implements a computational model
of visual attention for use as a selection mechanism in object recognition.
But I regard the pop-out problem as one of finding salient regions using
features such as color, texture, line groups, etc. If you are further
interested, I can send more info on my approaches to finding salient 
color and texture regions.
\\
Regards\\
\\
Tanveer\\
stf@ai.mit.edu\\

From: William.C.Loftus@Dartmouth.EDU\\
\\
Have you seen :\\
Sandon, Peter A. Simulating Visual Attention, Journal of Cognitive
Neuroscience 2:3, 213-231.  
\\ 
From the abstract:\\
\\
...This paper describes a connectionist network that exhibits a variety of
attentional phenomena reported by Treisman, Wolford, Duncan, and others. As
demonstrated in several simulations, a hierarchical, multiscale network that
uses feature arrays with strong lateral inhibitory connections provides
responses in agreement with a number of prominent behaviors associated with
visual attention...\\

From: brecher@watson.ibm.com\\
\\
I have been doing a little work on a computer model of "pop-out".
I'm an engineer working on automatic circuit inspection at the
IBM Watson lab.  I became interested in Treisman's work because
I had observed that human circuit inspectors seem to use
preattentive vision to detect defects.  I've created a very
simple model based on local-to-global comparison of edge and
contrast statistics and applied it to some of Treisman's images
as well as real integrated circuit images with defects.  If you
wish I would be glad to send you a paper on the subject.  Just,
please, do not expect very much.  I'm not a computational vision
researcher or a psychophysicist.
\\
              Virginia Brecher\\

From: aar@vnet.ibm.com\\
\\
Virginia Brecher of the IBM T J Watson Research Center has done
excellent work.

Her paper, "New techniques for patterned wafer inspection based on
a model of human preattentive vision", will be presented next week
at the SPIE conference Applications of Artificial Intelligence X:
Machine Vision & Robotics in Orlando, Florida.
\\
Arturo A. Rodriguez\\
\\

From: Ron Rensink <rensink@cs.ubc.ca>\\
\\
Hi,

  I saw your request for pointers to computational work involving
pop-outs.  There's (at least) two different parts to what you're
asking.  The first concerns how the pop-out mechanism works, while
the second concerns what kind of feature is able to pop out.
In regards to the first, I'd suggest looking at
\\
   1) Selecting One Among the Many: A Simple Network Implementing
	Shifts in Selective Visual Attention, C. Koch and S. Ullman,
	MIT AI Memo 770 (Jan 1984)\\
   2) Analyzing Vision at the Complexity Level, J. Tsotsos, 
	Behavioral and Brain Sciences, 13: 423-469  (1990)\\
   3) Efficient Visual Search: A Connectionist Solution, S. Ahmad 
	and S. Omohundro, Proc 13th Ann. Conf of the Cognitive
	Science Society, pp 293-298 (1991)\\

  Much of my PhD work has concerned the second set of issues.  Some
of it is presented in
\\
   1) Preattentive Recovery of Three-Dimensional Orientation from
	Line Drawings, J. Enns and R. Rensink, Psychological Review,
	98:335-351 (1991)\\
   2) The Analysis of Resource-limited Vision Systems, R. Rensink
	and G. Provan, Proc 13th Ann. Conf of the Cognitive Science
	Society, pp 311-316 (1991)\\

					Hope this helps.\\
							   ...Ron


Here's some computational stuff I know about "pop-outs."  Since I'm a
psychologist interested in visual perception and attention, these refs
are, understandably, more psychological in aim.

Wolfe & Cave (1990) Deploying visual attention: The guided search model.
 In Blake & Troscianko, AI and the Eye.\\

Cave & Wolfe (1990) Modeling the role of parallel processing in visual
search.  Cognitive Psychology, 22, 225-271.\\

These are connectionist attempts to simulate Treisman-style visual
search tasks.  As for other computational approaches, you might try
Julesz.  I believe there is some discussion of parallel processes in
early vision.  If you want references to empirical work let me know.

Shaun Vecera\\
Department of Psychology\\
Carnegie Mellon University\\
Pittsburgh, PA 15213\\

sv11+@andrew.cmu.edu\\

From: "John K. Tsotsos" <tsotsos@vis.toronto.edu>\\

I have done some of the kinds of things you requested. I will send you some papers.

john tsotsos\\
(Aditya: Unfortunately, at the time of this writing, I have not received 
anything from Mr. Tsotsos)\\

Hi,\\

   Just a quick response to your question about preattentive processing.
I'm actually doing thesis work in scientific visualization, and we're
using preattentive "features" as a basis for the design of our visualization
tools. I suppose this falls into your category of "computational work
involving pop-outs". We're in the middle of conducting psych experiments
to see if estimation is a preattentive task (when using our visualization
tools).

   I've also done a pretty extensive review of the psych literature. I've
mostly focused on work by Treisman, Julesz, Duncan and Humphreys, Callaghan,
and Enns (who is actually co-supervising me). There are not a lot of people
explicitly using preattentive features in computer science. My area of
interest is mostly computer graphics and visualization, so I haven't
looked into the vision literature much. I can suggest one set of researchers
who are using preattentive features for visualization, Ron Pickett and George
Grinstein from the University of Lowell, Massachusetts. If you're interested,
a couple of papers which give a flavor of their work would be:

EXVIS: An Exploratory Visualization Environment\\
Proceedings of Graphics Interface '91\\
George Grinstein and Ron Pickett, pp. 254-261\\

Iconographic Displays for Visualizing Multidimensional Data\\
Proceedings of the 1988 IEEE Conference on Systems, Man, and Cybernetics\\
Ron Pickett and George Grinstein, pp. 514-519\\

   There is a PhD student in our department who's also working with Jim
Enns. They're developing what they call "3D popout icons", icons whose
3-dimensionality is what makes them preattentive. He's doing joint
computer vision-psychology work, so he might be more appropriate to your
work. His name is Ron Rensink (rensink@cs.ubc.ca). I'm sure he'd be happy
to discuss his work with you.

Hope this was helpful. Good luck!\\
Christopher.\\
(healey@cs.ubc.ca)\\

From: dwe@watson.ibm.com (D.Weinshall)\\

There has been a lot of recent work on "computational pop-outs",
mostly statistical/engineering/noise-tolerance models of the
phenomenon. I personally know very little about it, so I'd suggest you
contact (Prof.) Misha Pavel from NYU, who has done some interesting
work on it (write to mis@cns.nyu.edu). Other people have also worked
on it, but he could give you much better references...

Good luck\\
Daphna\\

I also have a BibTeX file of all these references, and
I will try to put it somewhere so that it is easily accessible. Any
suggestions? Any offers of anonymous ftp sites? I don't have the resources
for that or for a mail server...

R.P. Aditya                         203 Bowles Hall
aditya@ocf.berkeley.edu             University of California
(Yes, the account's been changed)   Berkeley, CA
(510) 643- 2485			    94720

------------------------------

Date:           Fri, 8 May 92  11:42 GMT
From: "John V. Black @ DRA Malvern" <"MVUB::BLACK%hermes.mod.uk"@relay.mod.uk>
Subject:        1st CFP: Third IEE International Conference on Artificial Neural Networks

      IEE 3rd INTERNATIONAL CONFERENCE ON ARTIFICIAL NEURAL NETWORKS

                         FIRST CALL FOR PAPERS


 Date  : 25-27 May 1993
 Venue : Conference Centre, Brighton, United Kingdom

 Contributions & Conference format: Oral & poster presentations in single, 
                                    non-parallel sessions
 
 Scope: 3 principal areas of interest

 Architecture & Learning Algorithms : Theory & design of neural networks, 
                                      modular systems, comparison with 
                                      classical techniques

 Applications & industrial systems  : Vision and image processing, 
                                      speech and language processing,
                                      biomedical systems, robotics & control,
                                      AI applications, expert systems,
                                      financial and business systems

 Implementations                    : parallel simulation/architecture,
                                      hardware implementations (analogue & 
                                      digital), VLSI devices or systems,
                                      optoelectronics

 Travel: Frequent trains from London (journey time 60 mins) and from Gatwick 
         airport (30 mins)
                          
 Deadlines:

 
 1st September 1992 : Receipt of synopsis by secretariat. The synopsis should
                      not exceed 1 A4 page

 October 1992       : Notification of provisional acceptance

 25th January 1993  : Receipt of full typescript for final review by
                      secretariat. This should be a maximum of 5 A4 pages -
                      approximately 5,000 words, less if illustrations are
                      included.
 
 Further details and contributions to:

 Sheila Griffiths
 ANN 93 Secretariat
 IEE Conference Services
 London WC2R 0BL
 United Kingdom
 
 Telephone (+44) 71 240 1871 Ext 222
 Fax       (+44) 71 497 3633
 Telex     261176 IEE LDN G

                 David Lowe      Janet: lowe@uk.mod.hermes
                              Internet: lowe%hermes.mod.uk@relay.mod.uk


------------------------------

End of VISION-LIST digest 11.19
************************
