The Journal
of the
Acoustical Society of America
Vol. 141, No. 5, Pt. 2 of 2, May 2017
Acoustical Society of America
www.acousticalsociety.org
European Acoustics Association
Acoustics ’17 Boston
The 3rd Joint Meeting of the
Acoustical Society of America and the European Acoustics Association
John B. Hynes Veterans Memorial Convention Center
Boston, Massachusetts
25–29 June 2017
Table of Contents on p. A5
Published by the Acoustical Society of America through AIP Publishing LLC
PROVEN PERFORMANCE
For over 40 years Commercial Acoustics has been helping to solve
noise problems on noise-sensitive projects by providing field-proven
solutions, including Sound Barriers, Acoustical Enclosures,
Sound Attenuators, and Acoustical Louvers.
We manufacture to standard specifications
and to specific customized requests.
Circular & Rectangular Silencers in Dissipative and Reactive Designs •
Clean-Built Silencers • Elbow Silencers and Mufflers • Independently Tested •
Custom Enclosures • Acoustical Panels • Barrier Wall Systems
Let us PERFORM for you on your
next noise abatement project!
Commercial Acoustics
A DIVISION OF METAL FORM MFG., CO.
Satisfying Clients Worldwide for Over 40 Years.
5960 West Washington Street, Phoenix, AZ 85043
(602) 233-2322 • Fax: (602) 233-2033
www.mfmca.com
rbullock@mfmca.com
THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA
ISSN: 0001-4966
CODEN: JASMAN
Postmaster: If undeliverable, send notice on Form 3579 to:
ACOUSTICAL SOCIETY OF AMERICA
1305 Walt Whitman Road, Suite 300,
Melville, NY 11747-4300
Periodicals Postage Paid at Huntington Station, NY
and Additional Mailing Offices
IF COMPLIANCE AND
PRODUCTIVITY INVENTED A
MONITORING SOLUTION...
SENTINEL - NOISE, VIBRATION, DUST AND AIR QUALITY MONITORING
FREE WEBINAR
Sentinel for Compliance Monitoring
July 30, 11 AM ET
WWW.BKSV.COM/WEBINARS
Unattended continuous noise, vibration, dust and air quality monitoring
has never been easier. Our hosted Sentinel environmental management
solution lets you view real-time data online to quickly make better
informed decisions; receive alerts about impending breaches to reduce
exceedances; and create trusted reports with the click of a button.
With a Sentinel subscription you can:
• Improve monitoring efficiency
• Manage and report environmental compliance
• Gain approval of planned changes
• Enrich community engagement
Brüel & Kjær North America, Inc.
3079 Premiere Parkway, Suite 120
Duluth, GA 30097
Telephone: 800 332 2040
bkinfo@bksv.com
www.bksv.com/sentinel
THE JOURNAL OF THE
ACOUSTICAL SOCIETY OF AMERICA
Acoustics ’17 Boston
141, No. 5, Pt. 2, 3451–4086, May 2017
Sound and Vibration Instrumentation
Scantek, Inc.
Sound Level Meters: Selection of sound level meters for simple noise level
measurements or advanced acoustical analysis
Vibration Meters: Vibration meters for measuring overall vibration levels,
simple to advanced FFT analysis, and human exposure to vibration
Prediction Software: Software for prediction of environmental noise, building
insulation, and room acoustics using the latest standards
Building Acoustics: Systems for airborne sound transmission, impact
insulation, STIPA, reverberation, and other room acoustics measurements
Sound Localization: Near-field or far-field sound localization and
identification using Norsonic’s state-of-the-art acoustic camera
Monitoring: Temporary or permanent remote monitoring of noise or vibration
levels with notifications of exceeded limits
Specialized Test Systems: Impedance tubes, capacity and volume measurement
systems, air-flow resistance measurement devices, and calibration systems
Multi-Channel Systems: Multi-channel analyzers for sound power, vibration,
building acoustics, and FFT analysis in the laboratory or in the field
Industrial Hygiene: Noise alert systems and dosimeters for facility noise
monitoring or hearing conservation programs
Scantek, Inc.
www.ScantekInc.com
800-224-3813
INFORMATION REGARDING THE JOURNAL
Publication of the Journal is jointly financed by the dues of members of
the Society, by contributions from Sustaining Members, by nonmember
subscriptions, and by publication charges contributed by the authors’
institutions. A peer-reviewed archival journal, its actual overall value includes extensive voluntary commitments of time by the Journal's Associate Editors and reviewers. The Journal has been published continuously
since 1929 and is a principal means by which the Acoustical Society
seeks to fulfill its stated mission—to increase and diffuse the knowledge
of acoustics and to promote its practical applications.
Submission of Manuscripts: Detailed instructions are given in
the latest version of the “Information for Contributors” document, which
can be found online at asa.scitation.org/journal/jas. All research articles
and letters to the editor should be submitted electronically via an online
process at the site www.editorialmanager.com/jasa. The uploaded files
should include the complete manuscript and the figures. Authors are
requested to consult the online listings of JASA Associate Editors and
to identify which Associate Editor should handle their manuscript; the
decision regarding the acceptability of a manuscript will ordinarily be made
by that Associate Editor. The Journal also has special Associate Editors
who deal with applied acoustics, education in acoustics, computational
acoustics, and mathematical acoustics. Authors may suggest one of these
Associate Editors, if doing so is consistent with the content or emphasis of
their paper. Review and tutorial articles are ordinarily invited; submission
of unsolicited review articles or tutorial articles (other than those which
can be construed as papers on education in acoustics) without prior discussion with the Editor-in-Chief is discouraged. Authors are also encouraged to discuss contemplated submissions with appropriate members of
the Editorial Board before submission. Submission of papers is open to
everyone, and one need not be a member of the Society to submit a
paper.
JASA Express Letters: The Journal includes a special section
which has a submission process separate from that for the rest of the
Journal. Details concerning the nature of this section and information
for contributors can be found online at asa.scitation.org/jel/authors/
manuscript. Submissions to JASA Express Letters should be submitted
electronically via the site www.editorialmanager.com/jasa-el.
Publication Charge: To support the cost of wide dissemination of
acoustical information through publication of journal pages and production of a database of articles, the author’s institution is requested to pay
a page charge of $80 per page (with a one-page minimum). Acceptance
of a paper for publication is based on its technical merit and not on the
acceptance of the page charge. The page charge (if accepted) entitles the
author to 100 free reprints. For Errata the minimum page charge is $10,
with no free reprints. Although regular page charges commonly accepted
by authors’ institutions are not mandatory for articles that are 12 or fewer
pages, payment of the page charges for articles exceeding 12 pages is
mandatory. Payment of the publication fee for JASA Express Letters is
also mandatory.
Selection of Articles for Publication: All submitted articles are peer
reviewed. Responsibility for selection of articles for publication rests with
the Associate Editors and with the Editor-in-Chief. Selection is ordinarily
based on the following factors: adherence to the stylistic requirements of the
Journal, clarity and eloquence of exposition, originality of the contribution,
demonstrated understanding of previously published literature pertaining
to the subject matter, appropriate discussion of the relationships of the
reported research to other current research or applications, appropriateness
of the subject matter to the Journal, correctness of the content of the article,
completeness of the reporting of results, the reproducibility of the results,
and the significance of the contribution. The Journal reserves the right
to refuse publication of any submitted article without giving extensively
documented reasons. Associate Editors and reviewers are volunteers and,
while prompt and rapid processing of submitted manuscripts is of high
priority to the Editorial Board and the Society, there is no a priori guarantee
that such will be the case for every submission.
Supplemental Material: Authors may submit material that is
supplemental to a paper. Deposits must be in electronic media, and can
include text, figures, movies, computer programs, etc. Retrieval instructions are footnoted in the related published paper. Direct requests to the
JASA office at jasa@acousticalsociety.org; for additional information,
see asa.scitation.org/jas/authors/manuscript.
Role of AIP Publishing: AIP Publishing LLC has been under contract
with the Acoustical Society of America (ASA) continuously since 1933
to provide administrative and editorial services. The providing of these
services is independent of the fact that the ASA is one of the member
societies of AIP Publishing. Services provided in relation to the Journal
include production editing, copyediting, composition of the monthly issues
of the Journal, and the administration of all financial tasks associated with
the Journal. AIP Publishing’s administrative services include the billing
and collection of nonmember subscriptions, the billing and collection
of page charges, and the administration of copyright-related services.
In carrying out these services, AIP Publishing acts in accordance with
guidelines established by the ASA. All further processing of manuscripts,
once they have been selected by the Associate Editors for publication, is
handled by AIP Publishing. In the event that a manuscript, in spite of the
prior review process, still does not adhere to the stylistic requirements
of the Journal, AIP Publishing may notify the authors that processing
will be delayed until a suitably revised manuscript is transmitted via the
appropriate Associate Editor. If it appears that the nature of the manuscript
is such that processing and eventual printing of a manuscript may result
in excessive costs, AIP Publishing is authorized to directly bill the authors.
Publication of papers is ordinarily delayed until all such charges have
been paid.
Copyright 2017, Acoustical Society of America. All rights reserved.
Copying: Single copies of individual articles may be made for private use or research. Authorization is given to copy
articles beyond the free use permitted under Sections 107 and 108 of the U.S. Copyright Law, provided that the copying
fee of $30.00 per copy per article is paid to the Copyright Clearance Center, 222 Rosewood Drive, Danvers, MA 01923,
USA, www.copyright.com. (Note: The ISSN for this journal is 0001-4966.)
Authorization does not extend to systematic or multiple reproduction, to copying for promotional purposes, to electronic
storage or distribution, or to republication in any form. In all such cases, specific written permission from AIP Publishing
LLC must be obtained.
Note: Copies of individual articles may also be purchased online via asa.scitation.org/journal/jas.
Permission for Other Use: Permission is granted to quote from the Journal with the customary acknowledgment of
the source. Republication of an article or portions thereof (e.g., extensive excerpts, figures, tables, etc.) in original form
or in translation, as well as other types of reuse (e.g., in course packs) require formal permission from AIP Publishing
and may be subject to fees. As a courtesy, the author of the original journal article should be informed of any request for
republication/reuse.
Obtaining Permission and Payment of Fees: Using Rightslink®: AIP Publishing has partnered with the Copyright
Clearance Center to offer Rightslink, a convenient online service that streamlines the permissions process. Rightslink
allows users to instantly obtain permissions and pay any related fees for reuse of copyrighted material, directly from AIP’s
website. Once licensed, the material may be reused legally, according to the terms and conditions set forth in each unique
license agreement.
To use the service, access the article you wish to license on our site and simply click on the article “Tools” tab and then
select the “Reprints & Permissions” link. If you have questions about Rightslink, click on the link as described, then click
the “Help” button located in the top right-hand corner of the Rightslink page.
Without using Rightslink: Address requests for permission for republication or other reuse of journal articles or portions
thereof to: Office of Rights and Permissions, AIP Publishing LLC, 1305 Walt Whitman Road, Suite 300, Melville, NY 11747-4300, USA; FAX: 516-576-2450; Tel.: 516-576-2268; E-mail: rights@aip.org
The answer to your
infrasound challenges
The increased awareness of infrasound, and the discomfort that
noise at low frequencies can bring, has given rise to new
measurement challenges, from the aerospace industry to the monitoring
of environmental noise.
Introducing the new 47AC ½” infrasound microphone set:
• Frequency range from 0.09 Hz to 20 kHz
• Dynamic range 20 dB(A) to 148 dB
• Sensitivity 8 mV/Pa
• TEDS for automatic sensor identification and
reading of calibration data
• Assembled and calibrated as a complete unit
We make microphones
gras.dk/infrasound-measurements
The Journal
of the
Acoustical Society of America
Acoustical Society of America Editor-in-Chief: James F. Lynch
ASSOCIATE EDITORS OF JASA
General Linear Acoustics: A.N. Norris, Rutgers University; A.G. Petculescu,
Univ. Louisiana, Lafayette; O. Umnova, Univ. Salford; S.F. Wu, Wayne State
Univ.
Nonlinear Acoustics: M. Destrade, Natl. Univ. Ireland, Galway; L. Huang, Univ.
of Hong Kong; V.E. Ostashev, Univ. of Colorado Boulder.
Atmospheric Acoustics and Aeroacoustics: P. Blanc-Benon, Ecole Centrale
de Lyon; J.W. Posey, NASA Langley Res. Ctr. (ret.); D.K. Wilson, Army Cold
Regions Res. Lab.
Underwater Sound: N.P. Chotiros, Univ. of Texas; J.A. Colosi, Naval
Postgraduate School; S.E. Dosso, Univ. of Victoria; T.F. Duda, Woods Hole
Oceanographic Inst.; B.T. Hefner, Univ. of Washington; A.P. Lyons, Pennsylvania
State Univ.; W. Siegmann, Rensselaer Polytech. Inst.; H.C. Song, Scripps Inst. of
Oceanography; A.M. Thode, Scripps Inst. of Oceanography
Ultrasonics and Physical Acoustics: M.R. Haberman, Univ. Texas Austin;
M.F. Hamilton, Univ. Texas, Austin; V.M. Keppens, Univ. Tennessee, Knoxville;
T.G. Leighton, Inst. for Sound and Vibration Res. Southampton; J.D. Maynard,
Pennsylvania State Univ.; R. Raspet, Univ. of Mississippi; R.K. Snieder, Colorado
School of Mines; M.D. Verweij, Delft Univ. of Technol.; L. Zhang, Univ. of
Mississippi
Transduction, Acoustical Measurements, Instrumentation, Applied Acoustics: M.R. Bai, Natl. Tsinghua Univ.; D.D. Ebenezer, Naval Physical and
Oceanographic Lab., India; T.R. Howarth, NAVSEA, Newport; M. Sheplak,
Univ. of Florida
Structural Acoustics and Vibration: L. Cheng, Hong Kong Polytechnic Univ.;
L.P. Franzoni, Duke Univ.; A.J. Hull, Naval Undersea Warfare Center; N.J.
Kessissoglou, UNSW Australia; T. Kundu, Univ. of Arizona; K.M. Li, Purdue
Univ.; E.A. Magliula, Naval Undersea Warfare Center; E.G. Williams, Naval
Research Lab.
Noise: Its Effects and Control: G. Brambilla, Natl. Center for Research (CNR),
Rome; S. Fidell, Fidell Assoc.; K.V. Horoshenkov, Univ. of Bradford; R.M.
Kirby, Brunel Univ.; B. Schulte-Fortkamp, Technical Univ. of Berlin; A.T. Wall,
Air Force Research Lab.
Architectural Acoustics: B.F.G. Katz, Computer Science Lab. for Mechanics
and Engineering Sciences (LIMSI); F. Martellotta, Politecnico di Bari; F. Sgard,
Quebec Occupational Health and Safety Res. Ctr.; M. Vorländer, Univ. Aachen
Acoustic Signal Processing: P. Gerstoft, Univ. of California, San Diego; J. Li,
Zhejiang Univ.; Z-H. Michalopoulou, New Jersey Inst. Technol.; K.G. Sabra,
Georgia Inst. Tech; K. Wong, Hong Kong Polytech. Univ.
Physiological Acoustics: C. Abdala, House Research Inst.; I.C. Bruce, McMaster
Univ.; K. Grosh, Univ. of Michigan; P.X. Joris, KU Leuven; A.K.C. Lee, Univ.
of Washington; B.L. Lonsbury-Martin, Loma Linda VA Medical Center; C.A.
Shera, Harvard Medical School; G.C. Stecker, Vanderbilt Univ.
Psychological Acoustics: L.R. Bernstein, Univ. Conn.; V. Best, Natl. Acoust.
Lab., Australia; J. Braasch, Rensselaer Polytech. Inst.; M. Dietz, Western Univ.;
J.J. Lentz, Indiana Univ.; V.M. Richards, Univ. California, Irvine; M.A. Stone,
Univ. of Cambridge
Speech Production: L.L. Koenig, Long Island Univ. and Haskins Labs.; Z.
Zhang, Univ. of California, Los Angeles
Speech Perception: D. Baskent, Univ. Medical Center, Groningen; T. Bent,
Indiana Univ.; C.G. Clopper, Ohio State Univ.; S.H. Ferguson, Univ. of Utah; M.S.
Sommers, Washington Univ.; M. Sundara, Univ. of California, Los Angeles; B.V.
Tucker, Univ. of Alberta
Speech Processing: C.Y. Espy-Wilson, Univ. of Maryland, College Park; M.A.
Hasegawa-Johnson, Univ. of Illinois
Musical Acoustics: D. Deutsch, Univ. of California, San Diego; T.R. Moore,
Rollins College; A. Morrison, Joliet Junior College; J. Wolfe, Univ. of New South
Wales
Bioacoustics: C.C. Church, Univ. of Mississippi; G. Haïat, Natl. Ctr. for Scientific
Res. (CNRS); D.L. Miller, Univ. of Michigan; T.J. Royston, Univ. Illinois,
Chicago; K.A. Wear, Food and Drug Admin; S.W. Yoon, Sungkyunkwan Univ.
Animal Bioacoustics: W.W.L. Au, Hawaii Inst. of Marine Biology; M.L. Dent,
Univ. at Buffalo; R.A. Dunlop, Univ. of Queensland; R.R. Fay, Loyola Univ.,
Chicago; J.J. Finneran, Navy Marine Mammal Program; K. Lucke, Curtin Univ.;
C.F. Moss, Univ. of Maryland; A.N. Popper, Univ. Maryland; A.M. Simmons,
Brown Univ.; J.A. Sisneros, Univ. of Washington; C.E. Taylor, UCLA Ecology
and Evol. Biology
Computational Acoustics: D.S. Burnett, Naval Surface Warfare Ctr., Panama
City; N.A. Gumerov, Univ. of Maryland; L.L. Thompson, Clemson Univ.
Education in Acoustics: B.E. Anderson, Los Alamos National Lab.; V.W.
Sparrow, Pennsylvania State Univ.; P.S. Wilson, Univ. of Texas at Austin
Reviews and Tutorials: J.F. Lynch, Woods Hole Oceanographic Inst.
Forum and Technical Notes: N. Xiang, Rensselaer Polytechnic Univ.
Acoustical News: E. Moran, Acoustical Society of America
Standards News, Standards: N. Stremmel, Acoustical Society of America;
C. Struck, CJS Labs
Book Reviews: P.L. Marston, Washington State Univ.
Patent Reviews: S.A. Fulop, California State Univ., Fresno; D.L. Rice,
Computalker Consultants (ret.)
ASSOCIATE EDITORS OF JASA EXPRESS LETTERS
Editor: C.C. Church, Univ. Mississippi
General Linear Acoustics: O.A. Godin, NOAA-Earth System Research
Laboratory; S.F. Wu, Wayne State Univ.
Nonlinear Acoustics: M.F. Hamilton, Univ. of Texas at Austin
Aeroacoustics and Atmospheric Sound: C. Doolan, Univ. of New South Wales;
V.E. Ostashev, Univ. of Colorado Boulder
Underwater Sound: P.E. Barbone, Boston Univ.; D. Barclay, Dalhousie Univ.;
A.J.M. Davis, Univ. California, San Diego; D.R. Dowling, Univ. of Michigan;
W.L. Siegmann, Rensselaer Polytechnic Institute
Ultrasonics, Quantum Acoustics, and Physical Effects of Sound: T.D. Mast,
Univ. of Cincinnati; J.S. Mobley, Univ. of Mississippi
Transduction: Acoustical Devices for the Generation and Reproduction
of Sound; Acoustical Measurements and Instrumentation: M.D. Sheplak,
Univ. of Florida
Structural Acoustics and Vibration: J.G. McDaniel, Boston Univ.
Noise: S.K. Lau, National Univ. of Singapore
Architectural Acoustics: N. Xiang, Rensselaer Polytechnic Inst.
Acoustic Signal Processing: D.H. Chambers, Lawrence Livermore Natl. Lab.;
L.M. Zurk, Portland State Univ.
Physiological Acoustics: B.L. Lonsbury-Martin, Loma Linda VA Medical Ctr.
Psychological Acoustics: Q.-J. Fu, House Ear Inst.
Speech Production: A. Lofqvist, Univ. Hospital, Lund, Sweden
Speech Perception: M. Cooke, Univ. of the Basque Country; R. Smiljanic, Univ.
of Texas at Austin
Speech Processing and Communication Systems and Speech Perception:
D.D. O’Shaughnessy, INRS-Telecommunications
Music and Musical Instruments: D.M. Campbell, Univ. of Edinburgh;
D. Deutsch, Univ. of California, San Diego; T.R. Moore, Rollins College
Bioacoustics—Biomedical: C.C. Church, Natl. Ctr. for Physical Acoustics
Bioacoustics—Animal: W.-J. Lee, Univ. of Washington
Computational Acoustics: D.S. Burnett, Naval Surface Warfare Ctr., Panama City;
L.L. Thompson, Clemson Univ.
CONTENTS
page
Technical Program Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A7
Schedule of Technical Session Starting Times . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A11
Map of Meeting Rooms at the Hynes Convention Center . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A13
Map of Boston. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A17
Calendar–Technical Program . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A20
Calendar–Other Events . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A28
Meeting Information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A30
Guidelines for Presentations. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A35
Dates of Future Meetings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A37
Technical Sessions (1a__), Sunday Morning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3451
Technical Sessions (1p__), Sunday Afternoon . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3475
Technical Sessions (2a__), Monday Morning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3535
Technical Sessions (2p__), Monday Afternoon . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3594
Technical Sessions (3a__), Tuesday Morning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3660
Technical Sessions (3p__), Tuesday Afternoon . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3709
Technical Session (3eED), Tuesday Evening . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3754
Plenary Session and Awards Ceremony, Tuesday Afternoon . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3755
R. Bruce Lindsay Award Encomium . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3757
Helmholtz-Rayleigh Interdisciplinary Silver Medal Encomium . . . . . . . . . . . . . . . . . . . . . . . . . . 3761
Gold Medal Encomium . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3765
EAA Award for Lifetime Achievements in Acoustics Encomium . . . . . . . . . . . . . . . . . . . . . . . . 3769
EAA Award for Contributions to the Promotion of Acoustics in Europe . . . . . . . . . . . . . . . . . 3771
Technical Sessions (4a__), Wednesday Morning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3773
Technical Sessions (4p__), Wednesday Afternoon . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3851
Technical Sessions (5a__), Thursday Morning. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3928
Technical Sessions (5p_), Thursday Afternoon . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3994
Sustaining Members . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4053
Application Forms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4056
Regional Chapters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4059
Author Index to Abstracts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4064
Index to Advertisers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4086
ACOUSTICAL SOCIETY OF AMERICA
The Acoustical Society of America was founded in 1929 to increase and diffuse the knowledge of acoustics and promote
its practical applications. Any person or corporation interested in acoustics is eligible for membership in this Society. Further
information concerning membership, together with application forms, may be obtained by addressing Elaine Moran, ASA
Director of Operations, 1305 Walt Whitman Road, Suite 300, Melville, NY 11747-4300, T: 516-576-2360, F: 631-923-2875;
E-mail: elaine@acousticalsociety.org; Web: http://acousticalsociety.org
Officers 2016-2017

Michael R. Stinson, President
Institute for Microstructural Science
National Research Council of Canada
Ottawa, ON K1A 0R6, Canada
mike.stinson@nrc-cnrc.gc.ca

Ronald A. Roy, Vice President
Department of Engineering Science
University of Oxford
Oxford, OX1 3PJ, UK
ronald.roy@hmc.ox.ac.uk

Marcia J. Isakson, President-Elect
Department of Mechanical Engineering
University of Texas at Austin
Austin, TX 78712-1591
misakson@arlut.utexas.edu

Michael J. Buckingham, Vice President-Elect
Scripps Institution of Oceanography
9500 Gilman Drive, #0238
La Jolla, CA 92093-0238
mbuckingham@ucsd.edu

David Feit, Treasurer
Acoustical Society of America
1305 Walt Whitman Road, Suite 300
Melville, NY 11747-4300
(516) 576-2360
dfeit@aip.org
Members of the Executive Council
Christy K. Holland
University of Cincinnati
231 Albert Sabin Way
Cincinnati, OH 45267-0586
christy.holland@uc.edu
Lily M. Wang
Durham School of Architectural
Engineering and Construction
University of Nebraska–Lincoln
Omaha, NE 68182-0816
lwang4@unl.edu
Michael R. Bailey
Applied Physics Laboratory
Center for Industrial and Medical
Ultrasound
1013 N.E. 40th St.
Seattle, WA 98105
(206) 685-8618
bailey@apl.washington.edu
Acoustical Society of America
Publications Office
P.O. Box 274
West Barnstable, MA 02668
(508) 362-1200
jasaeditor@acousticalsociety.org
Christopher J. Struck, Standards Director
CJS Labs.
57 States Street
San Francisco, CA 94114-1401
(415) 923-9535
cjs@cjs-labs.com
Susan E. Fox, Executive Director
Acoustical Society of America
1305 Walt Whitman Road, Suite 300
Melville, NY 11747-4300
(516) 576-2360
sfox@acousticalsociety.org
Members of the Technical Council

Tessa C. Bent
Indiana University
200 S. Jordan Avenue
Bloomington, IN 47405
tbent@indiana.edu

Christine H. Shadle
Haskins Laboratories
300 George Street, Suite 900
New Haven, CT 06511
(203) 865-6163 x 228
shadle@haskins.yale.edu

John A. Hildebrand
Scripps Institution of Oceanography
University of California, San Diego
Ritter Hall 200 E
La Jolla, CA 92093-0205
(858) 534-4069
jhildebrand@ucsd.edu

Preston S. Wilson
Department of Mechanical Engineering
University of Texas at Austin
Austin, TX 78712-1591
pswilson@mail.utexas.edu

Andrew J. Oxenham
University of Minnesota
75 East River Road
Minneapolis, MN 55455
(612) 624-2241
oxenham@umn.edu
R.A. Roy, Vice President
M.J. Buckingham, Vice President-Elect
L.M. Wang, Past Vice President
J.A. Colosi, Acoustical Oceanography
C. Erbe, Animal Bioacoustics
E.E. Ryherd, Architectural Acoustics
N. McDannold, Biomedical Acoustics
K.M. Walsh, Engineering Acoustics
A.C.H. Morrison, Musical Acoustics
W.J. Murphy, Jr., Noise
J.R. Gladden, Physical Acoustics
M. Wojtczak, Psychological and Physiological Acoustics
P.J. Gendron, Signal Processing in Acoustics
R.M. Koch, Structural Acoustics and Vibration
L. Polka, Speech Communication
M.S. Ballard, Underwater Acoustics
Organizing Committee
Damian Doria, ASA Cochair
Mats Åbom, EAA Cochair
David Feit, Treasurer
Daniel Farrell, Webmaster
Michael Stinson, Communications
Christopher Jasinski/Cristina Zamorano, Student Activities
Susan Fox/Elaine Moran, Secretariat
James F. Lynch, Editor-in-Chief

Subscription Prices
                               U.S.A. & Poss.    Outside the U.S.A.
ASA Members                    (on membership)   (on membership)
Institutions (print + online)  $2335.00          $2500.00
Institutions (online only)     $2100.00          $2100.00

The Journal of the Acoustical Society of America (ISSN: 0001-4966) is published monthly by the Acoustical Society of America through AIP Publishing LLC. POSTMASTER: Send address changes to The Journal of the Acoustical Society of America, 1305 Walt Whitman Road, Suite 300, Melville, NY 11747-4300. Periodicals postage paid at Huntington Station, NY 11746 and additional mailing offices.

Editions: The Journal of the Acoustical Society of America is published simultaneously in print and online. Journal articles are available online from Volume 1 (1929) to the present at http://asadl.org.

Back Numbers: All back issues of the Journal are available online. Some, but not all, print issues are also available. Prices will be supplied upon request to Elaine Moran, ASA Director of Operations, 1305 Walt Whitman Road, Suite 300, Melville, NY 11747-4300. Telephone: (516) 576-2360; FAX: (631) 923-2875; E-mail: elaine@acousticalsociety.org.

Subscriptions, renewals, and address changes should be addressed to AIP Publishing LLC - FMS, 1305 Walt Whitman Road, Suite 300, Melville, NY 11747-4300. Allow at least six weeks' advance notice. For address changes please send both old and new addresses and, if possible, your ASA account number.
Claims, Single Copy Replacement and Back Volumes: Missing issue
requests will be honored only if received within six months of publication date
(nine months for Australia and Asia). Single copies of a journal may be ordered
and back volumes are available. Members—contact AIP Publishing Member
Services at (516) 576-2288; (800) 344-6901, membership@aip.org. Nonmember
subscribers—contact AIP Publishing Subscriber Services at (516) 576-2270;
(800) 344-6902; E-mail: subs@aip.org.
Page Charge and Reprint Billing: Contact: AIP Publishing Publication Page
Charge and Reprints—CFD, 1305 Walt Whitman Road, Suite 300, Melville, NY
11747-4300; (516) 576-2234; (800) 344-6909; E-mail: prc@aip.org.
Document Delivery: Copies of journal articles can be purchased for immediate download at www.asadl.org.
TECHNICAL PROGRAM SUMMARY
*Indicates Special Session
Sunday Morning
*1aID    Keynote Lectures
*1aAAa   Echolocation by People Who are Blind
*1aAAb   Sound Propagation Modeling and Spatial Audio for Virtual Reality I
*1aAAc   Teaching and Learning in Healthy and Comfortable Classrooms I
*1aAO    Acoustical Oceanography Prize Lecture
*1aBAa   Beamforming and Image Guided Therapy I: Algorithms
1aBAb    Imaging I
*1aNS    Sonic Boom Noise I: Low Boom Technology, Propagation, Etc.
*1aPA    Acoustofluidics I
1aPPa    Perception of Synthetic Sound Fields I
*1aPPb   Auditory Neuroscience Prize Lecture
*1aSA    Groundborne Noise and Vibration from Transit Systems
1aSC     Speech Technology (Poster Session)
*1aSP    Application of Bayesian Methods to Acoustic Model Identification and Classification I
*1aUWa   Passive Sensing, Monitoring, and Imaging in Wave Physics I
1aUWb    Underwater Acoustic Uncertainty
Sunday Afternoon
*1pAAa   Noise and Soundscapes in Restaurants and Other Public Accommodations
*1pAAb   Prediction of Direct and Flanking Airborne and Impact Sound Transmission
*1pAAc   Teaching and Learning in Healthy and Comfortable Classrooms II
1pAB     Biosonar
1pAO     Topics in Acoustical Oceanography
*1pBAa   Beamforming and Image Guided Therapy II: Cavitation Nuclei
1pBAb    Imaging II
1pEA     Engineering Acoustics Topics I
*1pMU    Concert Hall Acoustics
*1pNSa   Perception of Tonal Noise
*1pNSb   Session in Honor of Kenneth Plotkin
*1pPA    Acoustofluidics II
*1pPPa   Honoring the Contributions of Louis Braida to the Study of Auditory and Speech Perception
*1pPPb   Perception of Synthetic Sound Fields II
1pSA     General Topics in Structural Acoustics and Vibration I
1pSC     Non-Native Speech and Bilingualism (Poster Session)
*1pSP    Application of Bayesian Methods to Acoustic Model Identification and Classification II
1pUWa    Ambient Sound in the Ocean
*1pUWb   Passive Sensing, Monitoring, and Imaging in Wave Physics II
1pUWc    Topics in Underwater Acoustics (Poster Session)
Monday Morning
*2aIDa Keynote Lecture
*2aAAa Sound Propagation Modeling and Spatial Audio for Virtual Reality II
*2aAAb Acoustic Regulations and Classification of New and Retrofitted Buildings I
*2aAAc Teaching and Learning in Healthy and Comfortable Classrooms III
2aAB Behavior/Comparative Studies
*2aAO Session in Honor of David Farmer I
*2aBAa Impact of Soft Tissue Inhomogeneities and Bone/Air on Ultrasound Propagation in the Body
*2aBAb Beamforming and Image Guided Therapy III: Ablation and Histotripsy
*2aEA Ducts and Mufflers I
*2aEDa Communicating Scientific Research to Non-Scientists
2aEDb Education in Acoustics Poster Session
*2aIDb Neuroimaging Techniques I
*2aMU Session in Memory of David Wessel
*2aNSa Noise Impacts and Soundscapes on Outdoor Gathering Spaces I
*2aNSb Sonic Boom Noise II: Mach Cutoff, Turbulence, Etc.
*2aPAa Infrasound I
2aPAb General Topics in Physical Acoustics I
A7
J. Acoust. Soc. Am., Vol. 141, No. 5, Pt. 2, May 2017
*2aPPa Acoustics Outreach to Budding Scientists: Planting Seeds for Future Clinical and Physiological Collaborations
*2aPPb Models and Reproducible Research I
*2aSAa Acoustic Metamaterials I
2aSAb General Topics in Structural Acoustics and Vibration II
2aSC Speech Production (Poster Session)
*2aSPa Topological Signal Processing
*2aSPb Signal Processing for Directional Sensors I
*2aUWa Sound Propagation and Scattering in Three-Dimensional Environments I
*2aUWb Passive Sensing, Monitoring, and Imaging in Wave Physics III
Monday Afternoon
*2pAAa New Measurement and Prediction Techniques at Low Frequencies in Buildings
2pAAb Topics in Architectural Acoustics Related to Application
*2pAAc Perceptual Effects Related to Music Dynamics in Concert Halls
*2pAAd Room Acoustics Design for Improved Behavior, Comfort, and Performance I
*2pABa Incorporating Underwater Acoustics Research into the Decision Making Process
2pABb Data Management, Detection, Classification, and Localization
*2pAO Session in Honor of David Farmer II
*2pBA Beamforming and Image Reconstruction
*2pEA Ducts and Mufflers II
*2pID Neuroimaging Techniques II
*2pMU Electronically-Augmented Instruments
*2pNSa Noise Impacts and Soundscapes on Outdoor Gathering Spaces II
*2pNSb Sonic Boom Noise III: Community Exposure and Metrics
*2pPA Infrasound II
*2pPPa Models and Reproducible Research II
2pPPb Hearing Aiding, Protection, and Speech Perception
2pPPc Localization, Binaural Hearing, and Cocktail Party (Poster Session)
*2pSAa Acoustic Metamaterials II
*2pSAb Novel Treatments in Vibration Damping
*2pSC New Trends in Imaging for Speech Production
*2pSP Signal Processing for Directional Sensors II
*2pUWa In Honor of Ira Dyer, 60 Years as an Innovator, Entrepreneur, and Visionary for Ocean Engineering
*2pUWb Sound Propagation and Scattering in Three-Dimensional Environments II
Tuesday Morning
*3aIDa Keynote 4
*3aAAa Retrospect on the Works of Bertram Kinsey I
*3aAAb Room Acoustics Design for Improved Behavior, Comfort, and Performance II
*3aAAc Acoustic Regulations and Classification of New and Retrofitted Buildings II
*3aAB Comparative Bioacoustics: Session in Honor of Robert Dooling I
*3aAO Acoustic Measurements of Sediment Transport and Near-Bottom Structures I
*3aBAa Advances in Shock Wave Lithotripsy I
*3aBAb Partial Differential Equation Constrained and Heuristic Inverse Methods in Elastography I
*3aEA Microelectromechanical Systems (MEMS) Acoustic Sensors I
*3aIDb Graduate Programs in Acoustics Poster Session
*3aMU Session in Honor of Thomas D. Rossing
*3aNSa Mechanical System Noise
*3aNSb Using Acoustic Standards in Education
*3aNSc Aircraft Noise and Measurements (Poster Session)
*3aPA Eco-acoustics: Acoustic Applications for Green Technologies and Environmental Impact Measurements
*3aPPa Auditory Cognition and Scene Analysis in Complex Environments
3aPPb Environmental Auditory Experience
*3aSAa Energy Methods in Acoustics and Vibration I
*3aSAb Acoustic Metamaterials III
3aSC Prosody (Poster Session)
*3aSP Signal Processing for Directional Sensors III
*3aUWa A Century of Sonar I
*3aUWb Sound Propagation and Scattering in Three-Dimensional Environments III
Tuesday Afternoon
*3pAAa Retrospect on the Works of Bertram Kinsey II
*3pAAb Architectural Acoustics and Audio: Even Better Than the Real Thing I
*3pAAc Robust Heavy and Lightweight Constructions for New-Build and Retrofit Buildings
*3pAB Comparative Bioacoustics: Session in Honor of Robert Dooling II
*3pAO Acoustic Measurements of Sediment Transport and Near-Bottom Structures II
*3pBAa Advances in Shock Wave Lithotripsy II
*3pBAb Partial Differential Equation Constrained and Heuristic Inverse Methods in Elastography II
*3pEA Microelectromechanical Systems (MEMS) Acoustic Sensors II
3pMU General Topics in Musical Acoustics I (Poster Session)
*3pNSa Implications of Community Tolerance Level Analysis for Prediction of Community Reaction to Environmental Noise
*3pNSb Sonic Boom Noise V: Turbulence, Predictions, and Measurements
3pNSc Effects of Noise and Perception (Poster Session)
*3pPAa Chains, Grains, and Origami Nonlinear Metamaterials
3pPAb General Topics in Physical Acoustics II
3pPAc Topics in Physical Acoustics (Poster Session)
*3pPP A Celebration of Nat Durlach and His Contributions to Sensory Communications
*3pSAa Acoustic Metamaterials IV
*3pSAb Energy Methods in Acoustics and Vibration II
3pSC Aging and Development (Poster Session)
*3pSP Signal Processing for Directional Sensors IV
*3pUWa A Century of Sonar II
*3pUWb Sound Propagation and Scattering in Three-Dimensional Environments IV
Tuesday Evening
3eED Listen Up and Get Involved
Wednesday Morning
*4aAAa Recent Developments and Advances in Archeo-Acoustics and Historical Soundscapes I
*4aAAb Acoustic Regulations and Classification of New and Retrofitted Buildings III (Poster Session)
4aAAc Topics in Architectural Acoustics (Poster Session)
*4aAAd Simulation and Evaluation of Acoustic Environments I (Poster Session)
*4aAAe Assistive Listening Systems in Assembly Spaces
*4aAAf Simulation and Evaluation of Acoustic Environments I
4aAAg Topics in Architectural Acoustics Related to Measurements I
*4aAB Fish Bioacoustics I: Session in Honor of Anthony Hawkins and Arthur Popper
*4aBA Session in Honor of Edwin Carstensen I
*4aEAa Microelectromechanical Systems (MEMS) Acoustic Sensors III
4aEAb Engineering Acoustics Topics II
*4aEAc Micro-Perforates I
*4aED Take 5’s
*4aMU Musical Instrument Performance, Perception, and Psychophysics I
*4aNSa Measuring, Modeling, and Managing Transportation Noise I
*4aNSb Wind Turbine Noise
*4aPAa Outdoor Sound Propagation I
*4aPAb Propagation in Inhomogeneous Media I
4aPPa Speech, Pitch, Cochlear Implants, and Hearing Aids Potpourri (Poster Session)
*4aPPb History of Psychoacoustics in the Period 1900-1950
*4aPPc Physiology Meets Perception I
*4aSAa Novel Techniques for Nondestructive Evaluation I
4aSAb Topics in Structural Acoustics and Vibration (Poster Session)
4aSAc General Topics in Structural Acoustics and Vibration III
4aSC Speech Perception and Production in Clinical Populations (Poster Session)
*4aSP Sparse and Co-Prime Array Processing I
4aUWa Acoustical Interaction with Ocean Boundaries and Targets
*4aUWb Underwater Noise from Marine Construction and Energy Production I
*4aUWc Unmanned Vehicles and Acoustics I
Wednesday Afternoon
*4pAAa Architectural Acoustics and Audio: Even Better Than the Real Thing II
*4pAAb Simulation and Evaluation of Acoustic Environments II
4pAAc Topics in Architectural Acoustics Related to Measurements II
*4pAAd Recent Developments and Advances in Archeo-Acoustics and Historical Soundscapes II
*4pAB Fish Bioacoustics II: Session in Honor of Anthony Hawkins and Arthur Popper
*4pAO Acoustics and Acoustic Ecology of Benthic Communities
*4pBAa Session in Honor of Edwin Carstensen II
*4pBAb Biomedical Acoustics Best Student Paper Competition (Poster Session)
*4pEA Micro-Perforates II
*4pMU Musical Instrument Performance, Perception, and Psychophysics II
*4pNSa E-Mobility–Challenge for Acoustics
*4pNSb Measuring, Modeling, and Managing Transportation Noise II
4pNSc Urban Environment and Noise Control (Poster Session)
*4pPAa Outdoor Sound Propagation II
*4pPAb Propagation in Inhomogeneous Media II
*4pPPa Perceptual Weights and Cue Integration in Hearing: Loudness, Binaural Hearing, Motion Perception, and Beyond
*4pPPb Physiology Meets Perception II
4pPPc Attention, Learning, Perception, Physiology Potpourri (Poster Session)
*4pSAa Novel Techniques for Nondestructive Evaluation II
*4pSAb Probabilistic Finite Element Analysis and Uncertainty Quantification in Vibro-acoustic Problems
*4pSC Measuring Speech Perception and Production Remotely: Telehealth, Crowd-Sourcing, and Experiments over the Internet
*4pSPa Sparse and Co-Prime Array Processing II
4pSPb Topics in Signal Processing in Acoustics (Poster Session)
*4pSPc Extraction of Acoustic Signals by Remote Non-Acoustic Methods
*4pUWa Underwater Noise from Marine Construction and Energy Production II
*4pUWb Unmanned Vehicles and Acoustics II
Thursday Morning
*5aAAa Uncertainty in Laboratory Building Acoustic Standards
5aAAb Topics in Architectural Acoustics Related to Materials and Modeling
*5aAAc Simulation and Evaluation of Acoustic Environments III
*5aAAd Recent Developments and Advances in Archeo-Acoustics and Historical Soundscapes III
*5aABa Ecosystem Acoustics I
5aABb Topics in Animal Bioacoustics (Poster Session)
*5aAO Tools and Methods for Ocean Mapping I
*5aBAa Diagnostic and Therapeutic Applications of Ultrasound Contrast Agents I
5aBAb Imaging III
5aEA Engineering Acoustics Topics III
5aMU General Topics in Musical Acoustics II
*5aNSa Statistical Learning and Data Science Techniques in Acoustics Research
*5aNSb Effects of Noise on Human Comfort and Performance I
5aPA General Topics in Physical Acoustics III
*5aPPa Speech Intelligibility in Adverse Environments: Behavior and Modeling I
5aPPb Sound Localization and Binaural Hearing
*5aSAa Numerical Methods and Benchmarking in Computational Acoustics I
*5aSAb Acoustics and Vibration of Sports and Sports Equipment
5aSC Variation: Age, Gender, Dialect, and Style (Poster Session)
*5aSPa Audio and Array Signal Processing I
*5aSPb Underwater Acoustic Communications
5aUWa Acoustical Localization, Navigation, Inversion, and Communication
*5aUWb Underwater Noise from Marine Construction and Energy Production III
Thursday Afternoon
*5pAAa Architectural Acoustics and Audio: Even Better Than the Real Thing III
*5pAAb Simulation and Evaluation of Acoustic Environments IV
*5pAAc Recent Developments and Advances in Archeo-Acoustics and Historical Soundscapes IV
*5pAB Ecosystem Acoustics II
*5pAO Tools and Methods for Ocean Mapping II
*5pBAa Standardization of Ultrasound Medical Devices
*5pBAb Diagnostic and Therapeutic Applications of Ultrasound Contrast Agents II
5pBAc Therapeutic Ultrasound and Bioeffects
5pEA Engineering Acoustics Topics IV
*5pED Teaching Tips for the New (or Not So New) Acoustics Faculty Members
*5pNSa A Comparative Look at US and European Noise Policies
*5pNSb Effects of Noise on Human Comfort and Performance II
5pPA General Topics in Physical Acoustics IV
5pPPa Psychoacoustics: Models and Perception
*5pPPb Speech Intelligibility in Adverse Environments: Behavior and Modeling II
*5pSA Numerical Methods and Benchmarking in Computational Acoustics II
5pSC Speech Perception (Poster Session)
*5pSPa Signal Processing in Side Scan Sonar Systems
*5pSPb Audio and Array Signal Processing II
5pUWa Infrasound in the Ocean and Atmosphere
*5pUWb Underwater Acoustic Propagation
ASA School 2018
Living in
the Acoustic
Environment
5-6 May 2018
Chaska, MN
ASA School 2018 is an Acoustical Society of America event for graduate students and
early career acousticians in all areas of acoustics to learn about and discuss a wide variety of
topics related to the interdisciplinary acoustical theme Living in the Acoustic Environment. ASA
School 2018 follows on the success of ASA Schools in 2012, 2014, and 2016, and will provide
opportunities for meeting faculty and fellow students, discussing research topics, developing
collaborations and professional relationships within acoustics, and mentoring.
Program and Costs
ASA School 2018 will take place at Oak Ridge Hotel and Conference Center in Chaska, MN, a
lakeside resort 30 minutes from Minneapolis, MN. Lectures, demonstrations, and discussions
will be given by distinguished acousticians in a two-day program covering topics in Acoustical
Oceanography, Animal Bioacoustics, Biomedical Acoustics, Engineering Acoustics, Physical
Acoustics, Signal Processing in Acoustics, Structural Acoustics and Vibration, and Underwater
Acoustics. The registration fee is $50. Hotel rooms at Oak Ridge for two nights (double
occupancy), meals, and course materials are provided by ASA. Participants are responsible for
their own travel costs and arrangements, including transportation to Oak Ridge. Transportation
from Oak Ridge to the ASA meeting location in Minneapolis at the close of ASA School 2018 will
be provided and paid for by ASA.
Participants and Requirements
ASA School 2018 is targeted to graduate students and early career acousticians (within 3 years of
terminal degree) in all areas of acoustics. Attendance is limited to 60 participants who are
expected to attend all School events and the ASA meeting immediately following on 7-11 May
2018. ASA School attendees are required to be an author or coauthor on an abstract for
presentation at the ASA Minneapolis meeting.
Application and Deadlines
The application form and preliminary program will be available
online in November 2017 at www.AcousticalSociety.org.
SCHEDULE OF STARTING TIMES FOR TECHNICAL SESSIONS AND TECHNICAL COMMITTEE (TC) MEETINGS
[Room-by-room grid of session starting times for Rooms 200–210, 300–313, and Ballrooms A–C, Sunday morning through Thursday afternoon; the starting time and room for each session are listed in the Technical Program Calendar.]
Technical Committee (TC) open meetings, all at 8:00:
Monday evening: TCAA, TCAB, TCAO, TCEA, TCPA, TCPP, TCSA
Wednesday evening: TCBA, TCMU, TCNS, TCSC, TCSP, TCUW
900 Boylston Street | Boston, Massachusetts 02115
t 877.393.3393 | f 617.954.3326 | SignatureBoston.com
[Floor plan: Hynes Convention Center, lower level, showing the Boylston Street entrance, lobby drop-off, Back Bay Logan Express shuttle stop, hand carry ramp, MCCA executive offices, and the main (9 docks) and service (5 docks) trucking areas off Cambria Street, with the Prudential Center and Huntington Avenue adjacent.]
[Map: downtown Boston and vicinity, covering Back Bay, Beacon Hill, the West End, the North End, Chinatown, the South End, South Boston, Charlestown, and Cambridge, with the Hynes Convention Center, MBTA rapid transit and commuter rail stations, the Freedom Trail, and numbered attractions.]
LEGEND
M Rapid transit line & station
Commuter rail line or station
Freedom Trail
Black Heritage Trail
One way street
Public restroom
i Visitor information center
P Parking
Attractions/Points of Interest
Benjamin Franklin’s Birthplace (16)
Boston City Hall (25)
Boston Massacre Site (23)
Boston Public Library (4)
Boston Tea Party Ship (11)
Bunker Hill Monument (37)
Children’s Museum (12)
Copp’s Hill Burying Ground (31)
Custom House (24)
Faneuil Hall Marketplace (27)
Fenway Park (1)
Fleet Center (32)
Hatch Shell (7)
Historic Faneuil Hall (26)
Hynes Convention Center (2)
John Joseph Moakley United States Courthouse (13)
John Hancock Tower (6)
John McCormack State Office Building (17)
King’s Chapel (18)
Massachusetts Transportation Building (8)
Museum of Science (35)
New England Aquarium (28)
Old City Hall (19)
Old Corner Book Store (20)
Old Granary Burying Ground (15)
Old North Church (30)
Old South Meeting House (21)
Old State House (22)
Paul Revere House (29)
Prudential Center (3)
Registry of Motor Vehicles (9)
South Station (10)
State House (14)
Edward W. Brooke County Court House (34)
Thomas P. O’Neill, Jr. Federal Office (33)
Trinity Church (5)
USS Constitution (“Old Ironsides”) (36)
MASON INDUSTRIES
VIBRATION
CONTROL
PRODUCTS
From simple rubber pads to air
springs to full structural isolation
of floating floors and complete
buildings, Mason has the solution
for all your vibration and noise
control problems.
Concerned about noise
and vibration shaking
your building?
Mason can help
with structural
isolation solutions.
For close to 60 years, we have been designing and providing
rubber isolation bearing pads of any capacity or thickness to
support entire floating floors within a structure or the entire
building should it need isolation to keep out train or subway
vibration. When the situation is more severe, we use spring
assemblies for improved performance.
Whether you are an acoustical consultant, structural engineer,
contractor or owner, let’s get together and do the job right. Our
home base is New York, although we work all over the world.
Give us a call. It’s time to get to know one another before these
problems come up.
STRUCTURAL ISOLATION SOLUTIONS
Structural Isolation, Zankel Auditorium,
Carnegie Hall
Rubber Isolation Bearing Pads
Cross Section
Subway Track Isolation, Los Angeles Metro
Floor Rubber
Isolation Bearing
Type EAFM
North Sea Wind Turbine Isolation
Spring Isolators Close to Rail Lines Supporting
the Federation Square Building, Melbourne,
Australia
Precompressed
Building Support
Spring Assembly
Type SLFPC
MASON is with you every step of the way!
[Engineering drawing title block: Mason Industries Inc., Hauppauge, New York, 631-348-0282, fax 631-348-0279; Anaheim, California, 714-535-2727, fax 714-535-5738.]
DESIGN– Shear Wall/Key Isolation,
Isolated Building in NYC
MANUFACTURE– Spring
Isolators Built at
Mason Industries NY
Please send for the latest Mason Seismic
Restraint Guidelines book and a complete
Mason Catalog at: (Fax) 631-348-0279, (Email)
info@mason-ind.com, or (Phone) Alison
Vazqueztell, 631-348-0282. Or check out our
web site at www.mason-ind.com.
INSTALLATION– Spring Isolators
Placed Under Support Columns
at NY Times Building NYC
Our response will include our nearest
Rep Contact.
MASON INDUSTRIES
Manufacturers of Noise and Vibration Control Products and Systems
350 Rabro Drive,Hauppauge, NY 11788 • 631/348-0282 • FAX 631/348-0279
Email info@Mason-Ind.com • Website www.Mason-Ind.com
TECHNICAL PROGRAM CALENDAR
Acoustics ‘17 Boston
25–29 June 2017
SUNDAY MORNING
8:00
1aID
Interdisciplinary: Opening Ceremonies
and Keynote Lectures 1 and 2. Ballroom B
10:40 1aUWb Underwater Acoustics: Underwater
Acoustic Uncertainty. Room 306
10:35 1aAAa Architectural Acoustics: Echolocation by People Who are Blind. Room 207
10:35 1aAAb Architectural Acoustics: Sound Propagation Modeling and Spatial Audio for Virtual Reality I. Room 208
10:40 1aAAc Architectural Acoustics: Teaching and Learning in Healthy and Comfortable Classrooms I. Room 206
10:35 1aAO Acoustical Oceanography: Acoustical Oceanography Prize Lecture. Room 310
10:40 1aBAa Biomedical Acoustics: Beamforming and Image Guided Therapy I: Algorithms. Ballroom B
10:40 1aBAb Biomedical Acoustics: Imaging I. Room 312
10:35 1aNS Noise, Physical Acoustics, ASA Committee on Standards, and Structural Acoustics and Vibration: Sonic Boom Noise I: Low Boom Technology, Propagation, Etc. Room 202
10:40 1aPA Physical Acoustics and Biomedical Acoustics: Acoustofluidics I. Room 210
10:40 1aPPa Psychological and Physiological Acoustics: Perception of Synthetic Sound Fields I. Room 304
10:55 1aPPb Psychological and Physiological Acoustics: Auditory Neuroscience Prize Lecture. Room 311
10:35 1aSA Structural Acoustics and Vibration, Noise, Physical Acoustics, and ASA Committee on Standards: Groundborne Noise and Vibration from Transit Systems. Room 201
10:40 1aSC Speech Communication: Speech Technology (Poster Session). Ballroom A
10:35 1aSP Signal Processing in Acoustics: Application of Bayesian Methods to Acoustic Model Identification and Classification I. Room 302
10:35 1aUWa Underwater Acoustics, Acoustical Oceanography, Signal Processing in Acoustics, Structural Acoustics and Vibration, Physical Acoustics, and Biomedical Acoustics: Passive Sensing, Monitoring, and Imaging in Wave Physics I. Room 309
SUNDAY AFTERNOON
1:15 1pAAa Architectural Acoustics and Noise: Noise and Soundscapes in Restaurants and Other Public Accommodations. Room 208
1:15 1pAAb Architectural Acoustics: Prediction of Direct and Flanking Airborne and Impact Sound Transmission. Room 207
1:20 1pAAc Architectural Acoustics: Teaching and Learning in Healthy and Comfortable Classrooms II. Room 206
1:20 1pAB Animal Bioacoustics: Biosonar. Room 313
1:20 1pAO Acoustical Oceanography: Topics in Acoustical Oceanography. Room 310
1:20 1pBAa Biomedical Acoustics: Beamforming and Image Guided Therapy II: Cavitation Nuclei. Ballroom B
1:40 1pBAb Biomedical Acoustics: Imaging II. Room 312
1:20 1pEA Engineering Acoustics: Engineering Acoustics Topics I. Room 204
1:15 1pMU Musical Acoustics and Architectural Acoustics: Concert Hall Acoustics. Room 200
1:15 1pNSa Noise, Psychological and Physiological Acoustics, and Structural Acoustics and Vibration: Perception of Tonal Noise. Room 203
1:15 1pNSb Noise and Physical Acoustics: Session in Honor of Kenneth Plotkin. Room 202
1:20 1pPA Physical Acoustics and Biomedical Acoustics: Acoustofluidics II. Room 210
1:35 1pPPa Psychological and Physiological Acoustics and Speech Communication: Honoring the Contributions of Louis Braida to the Study of Auditory and Speech Perception. Room 311
1:40 1pPPb Psychological and Physiological Acoustics: Perception of Synthetic Sound Fields II. Room 304
1:20 1pSA Structural Acoustics and Vibration: General Topics in Structural Acoustics and Vibration I. Room 201
1:20 1pSC Speech Communication: Non-Native Speech and Bilingualism (Poster Session). Ballroom A
1:15 1pSP Signal Processing in Acoustics: Application of Bayesian Methods to Acoustic Model Identification and Classification II. Room 302
1:20 1pUWa Underwater Acoustics: Ambient Sound in the Ocean. Room 306
1:20 1pUWb Underwater Acoustics, Acoustical Oceanography, Signal Processing in Acoustics, Structural Acoustics and Vibration, Physical Acoustics, and Biomedical Acoustics: Passive Sensing, Monitoring, and Imaging in Wave Physics II. Room 309
1:20 1pUWc Underwater Acoustics: Topics in Underwater Acoustics (Poster Session). Ballroom A
J. Acoust. Soc. Am., Vol. 141, No. 5, Pt. 2, May 2017
Acoustics ‘17 Boston
MONDAY MORNING
8:00 2aIDa Interdisciplinary: Keynote Lecture. Ballroom B
9:15 2aAAa Architectural Acoustics: Sound Propagation Modeling and Spatial Audio for Virtual Reality II. Room 208
9:20 2aAAb Architectural Acoustics: Acoustic Regulations and Classification of New and Retrofitted Buildings I. Room 207
9:15 2aAAc Architectural Acoustics: Teaching and Learning in Healthy and Comfortable Classrooms III. Room 206
9:20 2aAB Animal Bioacoustics: Behavior/Comparative Studies. Room 313
9:15 2aAO Acoustical Oceanography: Session in Honor of David Farmer I. Room 310
9:15 2aBAa Biomedical Acoustics and Physical Acoustics: Impact of Soft Tissue Inhomogeneities and Bone/Air on Ultrasound Propagation in the Body. Room 312
9:20 2aBAb Biomedical Acoustics: Beamforming and Image Guided Therapy III: Ablation and Histotripsy. Ballroom B
9:15 2aEA Engineering Acoustics: Ducts and Mufflers I. Room 205
9:35 2aEDa Education in Acoustics, Public Relations Committee, and Student Council: Communicating Scientific Research to Non-Scientists. Room 304
10:20 2aEDb Education in Acoustics: Education in Acoustics Poster Session. Ballroom A
9:15 2aIDb Interdisciplinary: Neuroimaging Techniques I. Ballroom C
9:15 2aMU Musical Acoustics and Psychological and Physiological Acoustics: Session in Memory of David Wessel. Room 200
9:15 2aNSa Noise, Architectural Acoustics, and ASA Committee on Standards: Noise Impacts and Soundscapes on Outdoor Gathering Spaces I. Room 203
9:15 2aNSb Noise, Physical Acoustics, ASA Committee on Standards, and Structural Acoustics and Vibration: Sonic Boom Noise II: Mach Cutoff, Turbulence, Etc. Room 202
9:20 2aPAa Physical Acoustics: Infrasound I. Room 210
10:20 2aPAb Physical Acoustics: General Topics in Physical Acoustics I. Room 300
9:15 2aPPa Psychological and Physiological Acoustics and Speech Communication: Acoustics Outreach to Budding Scientists: Planting Seeds for Future Clinical and Physiological Collaborations. Room 311
11:40 2aPPb Psychological and Physiological Acoustics: Models and Reproducible Research I. Room 311
9:15 2aSAa Structural Acoustics and Vibration, Physical Acoustics, and Engineering Acoustics: Acoustic Metamaterials I. Room 201
9:20 2aSAb Structural Acoustics and Vibration: General Topics in Structural Acoustics and Vibration II. Room 204
9:20 2aSC Speech Communication: Speech Production (Poster Session). Ballroom A
9:20 2aSPa Signal Processing in Acoustics: Topological Signal Processing. Room 302
11:00 2aSPb Signal Processing in Acoustics, Engineering Acoustics, and Architectural Acoustics: Signal Processing for Directional Sensors I. Room 302
9:15 2aUWa Underwater Acoustics: Sound Propagation and Scattering in Three-Dimensional Environments I. Room 306
9:20 2aUWb Underwater Acoustics, Acoustical Oceanography, Signal Processing in Acoustics, Structural Acoustics and Vibration, Physical Acoustics, and Biomedical Acoustics: Passive Sensing, Monitoring, and Imaging in Wave Physics III. Room 309
MONDAY AFTERNOON
1:15 2pAAa Architectural Acoustics: New Measurement and Prediction Techniques at Low Frequencies in Buildings. Room 207
1:20 2pAAb Architectural Acoustics: Topics in Architectural Acoustics Related to Application. Room 206
1:20 2pAAc Architectural Acoustics: Perceptual Effects Related to Music Dynamics in Concert Halls. Room 208
3:20 2pAAd Architectural Acoustics: Room Acoustics Design for Improved Behavior, Comfort, and Performance I. Room 208
1:15 2pABa Animal Bioacoustics, Acoustical Oceanography, Education in Acoustics, and Underwater Acoustics: Incorporating Underwater Acoustics Research into the Decision Making Process. Room 300
1:20 2pABb Animal Bioacoustics: Data Management, Detection, Classification, and Localization. Room 313
1:20 2pAO Acoustical Oceanography: Session in Honor of David Farmer II. Room 310
1:15 2pBA Biomedical Acoustics and Physical Acoustics: Beamforming and Image Reconstruction. Room 312
1:20 2pEA Engineering Acoustics: Ducts and Mufflers II. Room 205
1:20 2pID Interdisciplinary: Neuroimaging Techniques II. Ballroom C
1:15 2pMU Musical Acoustics: Electronically-Augmented Instruments. Room 200
1:15 2pNSa Noise, Architectural Acoustics, and ASA Committee on Standards: Noise Impacts and Soundscapes on Outdoor Gathering Spaces II. Room 203
1:20 2pNSb Noise, Physical Acoustics, ASA Committee on Standards, and Structural Acoustics and Vibration: Sonic Boom Noise III: Community Exposure and Metrics. Room 202
1:20 2pPA Physical Acoustics: Infrasound II. Room 210
1:20 2pPPa Psychological and Physiological Acoustics: Models and Reproducible Research II. Room 311
1:20 2pPPb Psychological and Physiological Acoustics: Hearing Aiding, Protection, and Speech Perception. Ballroom B
1:20 2pPPc Psychological and Physiological Acoustics: Localization, Binaural Hearing, and Cocktail Party (Poster Session). Ballroom A
1:15 2pSAa Structural Acoustics and Vibration, Physical Acoustics, and Engineering Acoustics: Acoustic Metamaterials II. Room 201
1:20 2pSAb Structural Acoustics and Vibration and ASA Committee on Standards: Novel Treatments in Vibration Damping. Room 204
1:15 2pSC Speech Communication: New Trends in Imaging for Speech Production. Room 304
1:20 2pSP Signal Processing in Acoustics, Engineering Acoustics, and Architectural Acoustics: Signal Processing for Directional Sensors II. Room 302
1:15 2pUWa Underwater Acoustics and Acoustical Oceanography: In Honor of Ira Dyer, 60 Years as an Innovator, Entrepreneur, and Visionary for Ocean Engineering. Room 309
1:20 2pUWb Underwater Acoustics: Sound Propagation and Scattering in Three-Dimensional Environments II. Room 306
TUESDAY MORNING
8:00 3aIDa Interdisciplinary: Keynote Lecture. Ballroom B
9:15 3aAAa Architectural Acoustics: Retrospect on the Works of Bertram Kinzey I. Room 206
9:20 3aAAb Architectural Acoustics: Room Acoustics Design for Improved Behavior, Comfort, and Performance II. Room 208
9:20 3aAAc Architectural Acoustics: Acoustic Regulations and Classification of New and Retrofitted Buildings II. Room 207
9:15 3aAB Animal Bioacoustics: Comparative Bioacoustics: Session in Honor of Robert Dooling I. Room 313
9:15 3aAO Acoustical Oceanography and Underwater Acoustics: Acoustic Measurements of Sediment Transport and Near-Bottom Structures I. Room 310
9:15 3aBAa Biomedical Acoustics: Advances in Shock Wave Lithotripsy I. Ballroom B
9:15 3aBAb Biomedical Acoustics: Partial Differential Equation Constrained and Heuristic Inverse Methods in Elastography I. Room 312
9:20 3aEA Engineering Acoustics and Physical Acoustics: Microelectromechanical Systems (MEMS) Acoustic Sensors I. Room 205
9:20 3aIDb Interdisciplinary, Education in Acoustics and Student Council: Graduate Programs in Acoustics Poster Session. Ballroom A
9:15 3aMU Musical Acoustics: Session in Honor of Thomas D. Rossing. Room 200
9:15 3aNSa Noise: Mechanical System Noise. Room 203
9:15 3aNSb Noise, Education in Acoustics, ASA Committee on Standards, and Psychological and Physiological Acoustics: Using Acoustic Standards in Education. Room 202
10:20 3aNSc
Noise, Physical Acoustics, ASA
Committee on Standards, and Structural
Acoustics and Vibration: Aircraft Noise
and Measurement (Poster Session).
Ballroom A
9:15 3aPA Physical Acoustics and Noise: Ecoacoustics: Acoustic Applications for Green Technologies and Environmental Impact Measurements. Room 210
9:15 3aPPa Psychological and Physiological Acoustics: Auditory Cognition and Scene Analysis in Complex Environments. Room 311
10:00 3aPPb Psychological and Physiological Acoustics: Environmental Auditory Experience. Room 304
9:15 3aSAa Structural Acoustics and Vibration: Energy Methods in Acoustics and Vibration I. Room 204
10:40 3aSAb Structural Acoustics and Vibration, Physical Acoustics, and Engineering Acoustics: Acoustic Metamaterials III. Room 201
9:20 3aSC Speech Communication: Prosody (Poster Session). Ballroom A
9:20 3aSP Signal Processing in Acoustics, Engineering Acoustics, and Architectural Acoustics: Signal Processing for Directional Sensors III. Room 302
9:20 3aUWa Underwater Acoustics, Acoustical Oceanography, Engineering Acoustics, and Signal Processing in Acoustics: A Century of Sonar I. Room 309
9:20 3aUWb Underwater Acoustics: Sound Propagation and Scattering in Three-Dimensional Environments III. Room 306
TUESDAY AFTERNOON
1:15 3pAAa Architectural Acoustics: Retrospect on the Works of Bertram Kinzey II. Room 206
1:15 3pAAb Architectural Acoustics: Architectural Acoustics and Audio: Even Better Than the Real Thing I. Room 207
1:20 3pAAc Architectural Acoustics: Robust Heavy and Lightweight Constructions for New-Build and Retrofit Buildings. Room 208
1:20 3pAB Animal Bioacoustics: Comparative Bioacoustics: Session in Honor of Robert Dooling II. Room 313
1:20 3pAO Acoustical Oceanography and Underwater Acoustics: Acoustic Measurements of Sediment Transport and Near-Bottom Structures II. Room 310
1:20 3pBAa Biomedical Acoustics: Advances in Shock Wave Lithotripsy II. Room 304
1:40 3pBAb Biomedical Acoustics: Partial Differential Equation Constrained and Heuristic Inverse Methods in Elastography II. Room 312
1:20 3pEA Engineering Acoustics and Physical Acoustics: Microelectromechanical Systems (MEMS) Acoustic Sensors II. Room 205
1:20 3pMU Musical Acoustics: General Topics in Musical Acoustics I (Poster Session). Ballroom A
1:15 3pNSa Noise: Implications of Community Tolerance Level Analysis for Prediction of Community Reaction to Environmental Noise. Room 202
1:15 3pNSb Noise, Physical Acoustics, ASA Committee on Standards, and Structural Acoustics and Vibration: Sonic Boom Noise V: Turbulence, Predictions, and Measurements. Room 203
1:20 3pNSc Noise: Effects of Noise and Perception (Poster Session). Ballroom A
1:15 3pPAa Physical Acoustics: Chains, Grains, and Origami Nonlinear Metamaterials. Room 210
1:20 3pPAb Physical Acoustics: General Topics in Physical Acoustics II. Room 200
1:20 3pPAc Physical Acoustics: Topics in Physical Acoustics (Poster Session). Ballroom A
1:15 3pPP Psychological and Physiological Acoustics: A Celebration of Nat Durlach and His Contributions to Sensory Communications. Room 311
1:20 3pSAa Structural Acoustics and Vibration, Physical Acoustics, and Engineering Acoustics: Acoustic Metamaterials IV. Room 201
1:20 3pSAb Structural Acoustics and Vibration: Energy Methods in Acoustics and Vibration II. Room 204
1:20 3pSC Speech Communication: Aging and Development (Poster Session). Ballroom A
1:20 3pSP Signal Processing in Acoustics, Engineering Acoustics, and Architectural Acoustics: Signal Processing for Directional Sensors IV. Room 302
1:20 3pUWa Underwater Acoustics, Acoustical Oceanography, Engineering Acoustics, and Signal Processing in Acoustics: A Century of Sonar II. Room 309
1:20 3pUWb Underwater Acoustics: Sound Propagation and Scattering in Three-Dimensional Environments IV. Room 306
TUESDAY EVENING
5:30
3eED
Education in Acoustics and Women in
Acoustics: Listen Up and Get Involved.
Room 309
WEDNESDAY MORNING
7:55 4aAAa Architectural Acoustics: Recent Developments and Advances in Archeo-Acoustics and Historical Soundscapes I. Room 206
8:00 4aAAb Architectural Acoustics: Acoustic Regulations and Classification of New and Retrofitted Buildings III (Poster Session). Ballroom C
8:00 4aAAc Architectural Acoustics: Topics in Architectural Acoustics (Poster Session). Ballroom C
8:00 4aAAd Architectural Acoustics: Simulation and Evaluation of Acoustic Environments I (Poster Session). Ballroom C
8:15 4aAAe Architectural Acoustics, Speech Communication, Signal Processing in Acoustics, Psychological and Physiological Acoustics, ASA Committee on Standards, and Engineering Acoustics: Assistive Listening Systems in Assembly Spaces. Room 207
8:20 4aAAf Architectural Acoustics: Simulation and Evaluation of Acoustic Environments I. Room 208
11:00 4aAAg Architectural Acoustics: Topics in Architectural Acoustics Related to Measurements I. Room 206
7:55 4aAB Animal Bioacoustics: Fish Bioacoustics I: Session in Honor of Anthony Hawkins and Arthur Popper. Room 313
7:55 4aBA Biomedical Acoustics, Physical Acoustics, and Underwater Acoustics: Session in Honor of Edwin Carstensen I. Ballroom B
8:00 4aEAa Engineering Acoustics and Physical Acoustics: Microelectromechanical Systems (MEMS) Acoustic Sensors III. Room 205
8:20 4aEAb Engineering Acoustics: Engineering Acoustics Topics II. Room 204
10:15 4aEAc Engineering Acoustics: Micro-Perforates I. Room 205
11:00 4aED Education in Acoustics: Take 5’s. Room 200
7:55 4aMU Musical Acoustics and Psychological and Physiological Acoustics: Musical Instrument Performance, Perception, and Psychophysics I. Room 200
8:00 4aNSa Noise: Measuring, Modeling, and Managing Transportation Noise I. Room 202
8:55 4aNSb Noise, ASA Committee on Standards, and Structural Acoustics and Vibration: Wind Turbine Noise. Room 203
8:35 4aPAa Physical Acoustics and Signal Processing in Acoustics: Outdoor Sound Propagation I. Room 310
8:35 4aPAb Physical Acoustics, Biomedical Acoustics, and Structural Acoustics and Vibration: Propagation in Inhomogeneous Media I. Room 210
8:00 4aPPa Psychological and Physiological Acoustics: Speech, Pitch, Cochlear Implants, and Hearing Aids Potpourri (Poster Session). Ballroom A
8:15 4aPPb Psychological and Physiological Acoustics: History of Psychoacoustics in the Period 1900-1950. Room 311
9:15 4aPPc Psychological and Physiological Acoustics: Physiology Meets Perception I. Room 304
8:00 4aSAa Structural Acoustics and Vibration, Biomedical Acoustics, Signal Processing in Acoustics, and Physical Acoustics: Novel Techniques for Nondestructive Evaluation I. Room 201
8:00 4aSAb Structural Acoustics and Vibration: Topics in Structural Acoustics and Vibration (Poster Session). Ballroom C
10:40 4aSAc Structural Acoustics and Vibration: General Topics in Structural Acoustics and Vibration III. Room 204
8:00 4aSC Speech Communication: Speech Perception and Production in Clinical Populations (Poster Session). Ballroom A
8:15 4aSP Signal Processing in Acoustics, Underwater Acoustics, and Biomedical Acoustics: Sparse and Co-Prime Array Processing I. Room 302
8:00 4aUWa Underwater Acoustics: Acoustical Interaction with Ocean Boundaries and Targets. Room 306
8:35 4aUWb Underwater Acoustics, Acoustical Oceanography, and ASA Committee on Standards: Underwater Noise from Marine Construction and Energy Production I. Room 309
10:35 4aUWc Underwater Acoustics: Unmanned Vehicles and Acoustics I. Room 306
WEDNESDAY AFTERNOON
1:15
4pAAa Architectural Acoustics and Noise:
Architectural Acoustics and Audio: Even
Better Than the Real Thing II. Room 207
1:20
4pAAb Architectural Acoustics: Simulation and
Evaluation of Acoustic Environments II.
Room 208
1:20
4pAAc Architectural Acoustics: Topics in
Architectural Acoustics Related to
Measurements II. Room 206
2:35
4pAAd Architectural Acoustics: Recent
Developments and Advances in Archeo-Acoustics and Historical Soundscapes II.
Room 206
1:20
4pAB
Animal Bioacoustics: Fish Bioacoustics II:
Session in Honor of Anthony Hawkins and
Arthur Popper. Room 313
1:15
4pAO
Acoustical Oceanography, Animal
Bioacoustics, and Underwater Acoustics:
Acoustics and Acoustic Ecology of Benthic
Communities. Room 310
1:15
4pBAa
Biomedical Acoustics, Physical Acoustics,
and Underwater Acoustics: Session in
Honor of Edwin Carstensen II. Ballroom B
1:20
4pBAb Biomedical Acoustics: Biomedical
Acoustics Best Student Paper Competition.
(Poster Session) Ballroom C
1:20
4pEA
Engineering Acoustics: Micro-Perforates II. Room 205
1:20
4pMU
Musical Acoustics and Psychological
and Physiological Acoustics: Musical
Instrument Performance, Perception, and
Psychophysics II. Room 200
1:15
4pNSa
Noise, Structural Acoustics and
Vibration, Architectural Acoustics,
Speech Communication, and
Psychological and Physiological
Acoustics: E-Mobility–Challenge for
Acoustics. Room 203
1:20
4pNSb
Noise: Measuring, Modeling, and Managing
Transportation Noise II. Room 202
1:20
4pNSc
Noise: Urban Environment and Noise
Control (Poster Session). Ballroom A
1:20
4pPAa
Physical Acoustics and Signal
Processing in Acoustics: Outdoor Sound
Propagation II. Room 204
1:20
4pPAb
Physical Acoustics, Biomedical Acoustics,
and Structural Acoustics and Vibration:
Propagation in Inhomogeneous Media II.
Room 210
1:15
4pPPa
Psychological and Physiological Acoustics:
Perceptual Weights and Cue Integration
in Hearing: Loudness, Binaural Hearing,
Motion Perception, and Beyond. Room 311
1:20
4pPPb
Psychological and Physiological
Acoustics: Physiology Meets Perception II.
Room 304
1:20
4pPPc
Psychological and Physiological
Acoustics: Attention, Learning, Perception,
Physiology Potpourri (Poster Session).
Ballroom A
1:20
4pSAa
Structural Acoustics and Vibration,
Biomedical Acoustics, Signal Processing
in Acoustics, and Physical Acoustics:
Novel Techniques for Nondestructive
Evaluation II. Room 201
4:15
4pSAb
Structural Acoustics and Vibration:
Probabilistic Finite Element Analysis and
Uncertainty Quantification in Vibro-acoustic
Problems. Room 205
1:15
4pSC
Speech Communication and Animal
Bioacoustics: Measuring Speech Perception
and Production Remotely: Telehealth,
Crowd-Sourcing, and Experiments over the
Internet. Room 312
1:20
4pSPa
Signal Processing in Acoustics,
Underwater Acoustics, and Biomedical
Acoustics: Sparse and Co-Prime Array
Processing II. Room 302
1:20
4pSPb
Signal Processing in Acoustics: Topics
in Signal Processing in Acoustics (Poster
Session). Ballroom C
3:35
4pSPc
Signal Processing in Acoustics,
Architectural Acoustics, Biomedical
Acoustics, and Physical Acoustics:
Extraction of Acoustic Signals by Remote
Non-Acoustic Methods. Room 302
1:20
4pUWa Underwater Acoustics, Acoustical
Oceanography, and ASA Committee on
Standards: Underwater Noise From Marine
Construction and Energy Production II.
Room 309
1:20
4pUWb Underwater Acoustics: Unmanned
Vehicles and Acoustics II. Room 306
THURSDAY MORNING
7:55
5aAAa Architectural Acoustics and ASA
Committee on Standards: Uncertainty in
Laboratory Building Acoustic Standards.
Room 207
8:00
5aAAb Architectural Acoustics: Topics in
Architectural Acoustics Related to Materials
and Modeling. Room 206
8:20
5aAAc Architectural Acoustics: Simulation and
Evaluation of Acoustic Environments III.
Room 208
9:15
5aAAd Architectural Acoustics: Recent
Developments and Advances in Archeo-Acoustics and Historical Soundscapes III.
Room 206
7:55 5aABa Animal Bioacoustics: Ecosystem Acoustics I. Room 313
8:00
5aABb Animal Bioacoustics: Topics in Animal
Bioacoustics (Poster Session). Ballroom A
8:15
5aAO
Acoustical Oceanography: Tools and
Methods for Ocean Mapping I. Room 310
7:55
5aBAa
Biomedical Acoustics and Signal
Processing in Acoustics: Diagnostic and
Therapeutic Applications of Ultrasound
Contrast Agents I. Ballroom B
8:00
5aUWa Underwater Acoustics: Acoustical
Localization, Navigation, Inversion, and
Communication. Room 306
8:40
5aUWb Underwater Acoustics, Acoustical
Oceanography, and ASA Committee on
Standards: Underwater Noise from Marine
Construction and Energy Production III.
Room 309
8:20 5aBAb Biomedical Acoustics: Imaging III. Room 312
8:00 5aEA Engineering Acoustics: Engineering Acoustics Topics III. Room 205
8:00 5aMU Musical Acoustics: General Topics in Musical Acoustics II. Room 200
8:35 5aNSa Noise and Signal Processing in Acoustics: Statistical Learning and Data Science Techniques in Acoustics Research. Room 203
9:15 5aNSb Noise, Architectural Acoustics, Speech Communication, and Psychological and Physiological Acoustics: Effects of Noise on Human Comfort and Performance I. Room 202
9:00 5aPA Physical Acoustics: General Topics in Physical Acoustics III. Room 210
7:55 5aPPa Psychological and Physiological Acoustics, Speech Communication, ASA Committee on Standards, Architectural Acoustics, and Signal Processing in Acoustics: Speech Intelligibility in Adverse Environments: Behavior and Modeling I. Room 311
8:00 5aPPb Psychological and Physiological Acoustics: Sound Localization and Binaural Hearing. Room 300
8:00 5aSAa Structural Acoustics and Vibration and Physical Acoustics: Numerical Methods and Benchmarking in Computational Acoustics I. Room 201
10:40 5aSAb Structural Acoustics and Vibration, Noise, Physical Acoustics, and Architectural Acoustics: Acoustics and Vibration of Sports and Sports Equipment. Room 204
8:00 5aSC Speech Communication: Variation: Age, Gender, Dialect, and Style (Poster Session). Ballroom A
8:20 5aSPa Signal Processing in Acoustics: Audio and Array Signal Processing I. Room 302
8:20 5aSPb Signal Processing in Acoustics and Underwater Acoustics: Underwater Acoustic Communications. Room 304
THURSDAY AFTERNOON
1:15 5pAAa Architectural Acoustics: Architectural Acoustics and Audio: Even Better Than the Real Thing III. Room 207
1:20 5pAAb Architectural Acoustics: Simulation and Evaluation of Acoustic Environments IV. Room 208
1:35 5pAAc Architectural Acoustics: Recent Developments and Advances in Archeo-Acoustics and Historical Soundscapes IV. Room 206
1:20 5pAB Animal Bioacoustics: Ecosystem Acoustics II. Room 313
1:20 5pAO Acoustical Oceanography: Tools and Methods for Ocean Mapping II. Room 310
1:15 5pBAa Biomedical Acoustics and ASA Committee on Standards: Standardization of Ultrasound Medical Devices. Room 312
1:20 5pBAb Biomedical Acoustics and Signal Processing in Acoustics: Diagnostic and Therapeutic Applications of Ultrasound Contrast Agents II. Ballroom B
1:20 5pBAc Biomedical Acoustics: Therapeutic Ultrasound and Bioeffects. Ballroom C
1:20 5pEA Engineering Acoustics: Engineering Acoustics Topics IV. Room 205
1:15 5pED Education in Acoustics and Student Council: Teaching Tips for the New (or Not So New) Acoustics Faculty Members. Room 204
1:15 5pNSa Noise, Architectural Acoustics, and Engineering Acoustics: A Comparative Look at US and European Noise Policies. Room 203
1:20 5pNSb Noise, Architectural Acoustics, Speech Communication, and Psychological and Physiological Acoustics: Effects of Noise on Human Comfort and Performance II. Room 202
1:40 5pPA Physical Acoustics: General Topics in Physical Acoustics IV. Room 210
1:20 5pPPa Psychological and Physiological Acoustics: Psychoacoustics: Models and Perception. Room 304
1:20 5pPPb Psychological and Physiological Acoustics, Speech Communication, ASA Committee on Standards, Architectural Acoustics, and Signal Processing in Acoustics: Speech Intelligibility in Adverse Environments: Behavior and Modeling II. Room 311
1:20 5pSA Structural Acoustics and Vibration and Physical Acoustics: Numerical Methods and Benchmarking in Computational Acoustics II. Room 201
1:20 5pSC Speech Communication: Speech Perception (Poster Session). Ballroom A
1:15 5pSPa Signal Processing in Acoustics, Engineering Acoustics, and Underwater Acoustics: Signal Processing in Side Scan Sonar Systems. Room 200
1:20 5pSPb Signal Processing in Acoustics: Audio and Array Signal Processing II. Room 302
1:15 5pUWa Underwater Acoustics and Physical Acoustics: Infrasound in the Ocean and Atmosphere. Room 306
1:20 5pUWb Underwater Acoustics: Underwater Acoustic Propagation. Room 309
SCHEDULE OF COMMITTEE MEETINGS AND OTHER EVENTS
ASA COUNCIL AND ADMINISTRATIVE COMMITTEES
Sat, 24 June, 8:00 a.m.
Sun, 25 June, 3:30 p.m.
Sun, 25 June, 6:30 p.m.
Mon, 26 June, 7:00 a.m.
Mon, 26 June, 7:30 a.m.
Mon, 26 June, 11:45 a.m.
Mon, 26 June, 12:00 noon
Mon, 26 June, 12:30 p.m.
Mon, 26 June, 1:30 p.m.
Mon, 26 June, 4:00 p.m.
Mon, 26 June, 5:00 p.m.
Tue, 27 June, 7:00 a.m.
Tue, 27 June, 7:00 a.m.
Tue, 27 June, 7:00 a.m.
Tue, 27 June, 7:00 a.m.
Tue, 27 June, 7:00 a.m.
Tue, 27 June, 7:30 a.m.
Tue, 27 June, 11:00 a.m.
Tue, 27 June, 11:30 a.m.
Tue, 27 June, 12:00 noon
Tue, 27 June, 12:00 noon
Tue, 27 June, 1:30 p.m.
Tue, 27 June, 5:30 p.m.
Tue, 27 June, 5:30 p.m.
Wed, 28 June, 7:30 a.m.
Wed, 28 June, 7:30 a.m.
Wed, 28 June, 7:30 a.m.
Wed, 28 June, 2:00 p.m.
Wed, 28 June, 4:30 p.m.
Wed, 28 June, 4:30 p.m.
Wed, 28 June, 4:30 p.m.
Wed, 28 June, 4:30 p.m.
Thu, 29 June, 7:00 a.m.
Fri, 30 June, 8:00 a.m.
Executive Council
Technical Council
Technical Council Dinner
ASA Books
Panel on Public Policy
Editorial Board
Student Council
Prizes & Special Fellowships
Meetings
Newman Fund Advisory
Women in Acoustics
Archives and History
College of Fellows
International Research &
Education
Publication Policy
Regional/Student Chapters
Finance
Medals and Awards
Public Relations
Audit
Membership
AS Foundation Board
Education in Acoustics
Acoustics Today Editorial Board
Investment
Publishing Services
Tutorials, Short Courses,
Hot Topics
Strategic Plan Champions
Financial Affairs
Member Engagement and Diversity
Outreach
Publishing and Standards
Technical Council
Executive Council
Room 101
Room 303
Room 101
Room 301
Room 303
Room 102
Room 305
Room 301
Room 303
Room 301
Room 305
Room 300
Room 301
Room 303
EAA Board and Executive Meeting
EAA General Assembly
Room Acoustics Design for
Improved Behavior, Comfort and
Performance TC
Computational Acoustics TC
Room 104
Room 104
Room 208
Room 104
Room 102
Room 308
Room 308
Room 301
Room 305
Room 104
Room 301
Room 310
Room 301
Room 308
Room 301
Room 305
Room 101
Room 305
Room 301
Room 300
Room 303
Room 301
Room 313
EAA MEETINGS
Sat, 24 June, 8:30 a.m.
Sat, 24 June, 1:00 p.m.
Mon, 26 June, 5:20 p.m.
Tue, 27 June, 12:00 p.m.
Room 300
TECHNICAL COMMITTEE OPEN MEETINGS
Mon, 26 June, 8:00 p.m.: Acoustical Oceanography, Room 310
Mon, 26 June, 8:00 p.m.: Animal Bioacoustics, Room 313
Mon, 26 June, 8:00 p.m.: Architectural Acoustics, Room 207
Mon, 26 June, 8:00 p.m.: Engineering Acoustics, Room 204
Mon, 26 June, 8:00 p.m.: Physical Acoustics, Room 210
Mon, 26 June, 8:00 p.m.: Psychological and Physiological Acoustics, Room 311
Mon, 26 June, 8:00 p.m.: Structural Acoustics and Vibration, Room 312
Wed, 28 June, 8:00 p.m.: Biomedical Acoustics, Room 312
Wed, 28 June, 8:00 p.m.: Musical Acoustics, Room 200
Wed, 28 June, 8:00 p.m.: Noise, Room 203
Wed, 28 June, 8:00 p.m.: Signal Processing in Acoustics, Room 302
Wed, 28 June, 8:00 p.m.: Speech Communication, Room 304
Wed, 28 June, 8:00 p.m.: Underwater Acoustics, Room 310
STANDARDS COMMITTEES AND WORKING GROUPS
Sat, 24 June, 9:00 a.m.: ISO TC43/SC1/WG45 and ANSI S12/WG15, Room 301
Sun, 25 June, 5:00 p.m.: S2, Mechanical Vibration and Shock, Room 305
Sun, 25 June, 7:00 p.m.: ASACOS Steering, Room 305
Mon, 26 June, 7:30 a.m.: ASACOS, Room 104
Mon, 26 June, 9:15 a.m.: Standards Plenary, Room 104
Mon, 26 June, 11:00 a.m.: S12, Noise, Room 104
Mon, 26 June, 2:00 p.m.: S3/SC1, Animal Bioacoustics, Room 104
Mon, 26 June, 3:15 p.m.: S3, Bioacoustics, Room 104
Mon, 26 June, 4:45 p.m.: S1, Acoustics, Room 104
Tue, 27 June, 8:30 a.m.: S3/SC1/WG7 - Passive Acoustic Monitoring, Room 303
Tue, 27 June, 1:00 p.m.: S12/WG57 - Physical Education, Room 101
Tue, 27 June, 4:00 p.m.: S1/WG9 - Underwater Transducers, Room 104
MEETING SERVICES, SPECIAL EVENTS, SOCIAL EVENTS
Sat, 24 June,
1:00 p.m. - 5:00 p.m.
Sun-Thu, 25-29 June,
7:30 a.m. - 5:00 p.m.
Sun-Thu, 25-29 June,
7:00 a.m. - 6:00 p.m.
Sun-Thu, 25-29 June,
7:00 a.m. - 6:00 p.m.
Sun-Thu, 25-29 June,
7:00 a.m. - 6:00 p.m.
Sun-Thu, 25-29 June,
8:00 a.m. - 10:00 a.m.
Sun-Thu, 25-29 June,
Sun: 10:20 a.m. - 10:40 a.m.
Mon-Thu: 10:00 a.m. - 11:00 a.m.
Registration
Exhibit
Hall D
E-mail
A/V Preview
Exhibit
Hall D
Exhibit
Hall D
Room 307
Accompanying Persons
Room 101
Mon, 26 June,
3:00 p.m. - 4:00 p.m.
Mon-Wed, 26-28 June,
12:20 p.m. - 1:20 p.m.
Sun, 25 June,
8:00 a.m. - 10:15 a.m.
Sun, 25 June,
5:00 p.m. - 5:30 p.m.
Sun, 25 June
5:30 p.m. - 7:00 p.m.
Sun, 25 June,
5:30 p.m. - 7:00 p.m.
Mon, 26 June
8:00 a.m. - 9:00 a.m.
Mon, 26 June,
9:00 a.m. - 5:00 p.m.
Mon, 26 June,
6:30 p.m. - 8:00 p.m.
Tue, 27 June,
8:00 a.m. - 9:00 a.m.
Tue, 27 June,
9:00 a.m. - 12:00 noon
Tue, 27 June,
1:15 p.m. - 3:15 p.m.
P.M. Coffee Break
Tue, 27 June,
11:45 a.m. - 1:45 p.m.
Tue, 27 June,
3:30 p.m. - 6:00 p.m.
Tue, 27 June,
5:30 p.m. - 7:00 p.m.
Tue, 27 June,
6:00 p.m. - 8:00 p.m.
Tue, 27 June, 8:00 p.m. - 12:00 midnight
Wed, 28 June,
6:30 p.m. - 8:00 p.m.
Thu, 29 June,
6:00 p.m. - 6:30 p.m.
Women in Acoustics Luncheon
Exhibit
Hall D
Meet
1:00 p.m.
Hynes street-level foyer
Room 102
Plenary Session/Awards
Ceremony
Early Career Networking
Ballroom
B/C
Room 300
Internet Zone
A.M. Coffee Break
Resume Help Desk
Opening Ceremonies
and Keynote Lectures
New Student Orientation
Ballroom
Foyer
Exhibit
Hall D/
3rd Floor
Boylston
Foyer
Exhibit
Hall D
Exhibit
Hall D
Ballroom B
Room 208
Student Meet and Greet
Room 300
Exhibit Opening Reception
Exhibit
Hall D
Ballroom B
Keynote Lecture
Exhibit
Exhibit
Hall D
Ballroom
B/C
Ballroom B
Social Hour
Keynote Lecture
Exhibit
Technical Tour - Berklee College of Music
Student Reception - Belvidere Room
Hilton Boston Back Bay Hotel
ASA Jam
Room 313
Social Hour
Closing Ceremonies
Ballroom
B/C
Ballroom B
3rd Joint Meeting: Acoustical Society of America and European Acoustics Association
The 3rd Joint Meeting of the Acoustical Society of America
and the European Acoustics Association, which incorporates
the 173rd Meeting of the Acoustical Society of America and
the 8th Forum Acusticum, will be held Sunday through Thursday, 25–29 June 2017, at the John B. Hynes Veterans Memorial Convention Center.
SECTION HEADINGS
1. VENUE AND HOTEL INFORMATION
2. TRANSPORTATION AND TRAVEL
3. STUDENT TRANSPORTATION SUBSIDIES
4. MESSAGES FOR ATTENDEES
5. REGISTRATION
6. TECHNICAL SESSIONS
7. TECHNICAL SESSION DESIGNATIONS
8. OPENING SESSION AND CLOSING CEREMONIES
9. KEYNOTE LECTURES
10. WILLIAM AND CHRISTINE HARTMANN PRIZE IN
AUDITORY NEUROSCIENCE AND THE AUDITORY
NEUROSCIENCE PRIZE LECTURE
11. MEDWIN PRIZE IN ACOUSTICAL OCEANOGRAPHY AND ACOUSTICAL OCEANOGRAPHY PRIZE
LECTURE
12. EXHIBIT AND EXHIBIT RECEPTION
13. RESUME HELP DESK
14. TECHNICAL COMMITTEE OPEN MEETINGS
15. TECHNICAL TOURS
16. PLENARY SESSION AND AWARDS CEREMONIES
17. ANSI STANDARDS COMMITTEES
18. COFFEE BREAKS
19. A/V PREVIEW ROOM
20. PROCEEDINGS OF MEETINGS ON ACOUSTICS
21. E-MAIL AND INTERNET ZONE
22. SOCIAL HOURS
23. STUDENTS MEET MEMBERS FOR LUNCH
24. STUDENT EVENTS: NEW STUDENT ORIENTATION,
MEET AND GREET, FELLOWSHIP AND GRANT
PANEL, STUDENT RECEPTION
25. WOMEN IN ACOUSTICS LUNCHEON
26. JAM SESSION
27. ACCOMPANYING PERSONS PROGRAM
28. F.V. HUNT EVENT AT NEW ORLEANS MEETING
29. WEATHER
30. TECHNICAL PROGRAM ORGANIZING COMMITTEE
31. MEETING ORGANIZING COMMITTEE
32. PHOTOGRAPHING AND RECORDING
33. ABSTRACT ERRATA
34. GUIDELINES FOR ORAL PRESENTATIONS
35. SUGGESTIONS FOR EFFECTIVE POSTER PRESENTATIONS
36. GUIDELINES FOR USE OF COMPUTER PROJECTION
37. DATES OF FUTURE ASA MEETINGS
1. VENUE AND HOTEL INFORMATION
All meeting events, excluding some committee meetings,
will be held at The John B. Hynes Veterans Memorial
Convention Center which is located in Boston’s historic Back
Bay neighborhood [900 Boylston Street, Boston, MA 02115;
T: 617-954-2000].
The deadline for making hotel reservations has passed.
Please visit the meeting webpage for information about hotels
close to the convention center.
2. TRANSPORTATION AND TRAVEL
AIR
Logan International Airport is located a convenient
two miles from the city center, with several public transportation
options serving downtown and suburban locations.
Approximately 40 airlines serve Boston.
RAIL
Amtrak passenger rail service connects Boston, New York,
Washington, D.C., Philadelphia, Baltimore, Portland (Maine)
and other points nationwide. Amtrak trains depart from South
Station (Red Line), Back Bay Station (Orange Line) and North
Station (Green and/or Orange Line). Amtrak’s high-speed
train Acela provides fast service along the Northeast Corridor
High-Speed Rail between Washington, New York and Boston.
BUS SERVICE
Nationwide bus companies stop downtown at South Station
(adjacent to the South Station train terminal). Ticket counters
are located on the third level of the Transportation Center. For
information, call the South Station Bus Terminal at 617-737-8040.
AUTO
There are three main routes into Boston: I-90 (Massachusetts
Turnpike) from the West; I-95 from the North and South; I-93
from the North and South. Driving directions can be obtained
from sources such as Google Maps or Mapquest. See the
Ground Transportation section of this document for specific
driving directions and parking information.
LOCAL TRANSPORTATION
Local transportation details can be found at http://www.
mbta.com/. Boston’s public transportation system, known as
the “T,” offers subway, bus, trolley, and boat service to just
about everywhere in the Greater Boston area and beyond. Other
ground transportation options include the Logan Express to
Back Bay, taxis, and Uber.
Taxi service from Logan International Airport to most
hotels in Boston and Cambridge costs approximately $25–$35, one way.
The Hynes Convention Center is conveniently located
close to four T stops – the Hynes Convention Center stop,
Prudential Center stop, and Copley Square stop on the Green
Line and the Back Bay stop on the Orange Line.
DRIVING/PARKING INFORMATION
Driving directions to the Hynes Convention Center can be
found at http://s3.amazonaws.com/signatureboston/documents/
HynesDirections_1.pdf
Within a three-block walk of the Hynes Convention Center
are numerous parking garages totaling over 4,400 spaces. There
is limited meter parking available around the Hynes and adjacent
streets. Sample garage rates are $34 for up to 12 hours and $38 for 12 to 24 hours.
Download a guide of nearby garages for a full list of options.
3. STUDENT TRANSPORTATION SUBSIDIES
To encourage student participation, limited funds are
available to partially defray the travel expenses of students
attending Acoustical Society meetings. Instructions
for applying for travel subsidies are given in the Call for
Papers, which can be found online at http://acousticalsociety.org.
The deadline for the present meeting has passed, but this
information may be useful in the future.
4. MESSAGES FOR ATTENDEES
A message board will be located in Exhibit Hall D near the
ASA registration desk. Check the board during the week as
messages may be posted by attendees who do not have cell
phone numbers of other attendees.
5. REGISTRATION
Registration is required for all attendees and accompanying
persons. Registration badges must be worn in order to
participate in technical sessions and other meeting activities.
Registration will open on Saturday, 24 June, 1:00 p.m. in
Exhibit Hall D (see floor plan on page A15).
Checks or travelers checks in U.S. funds drawn on U.S.
banks and Visa, MasterCard and American Express credit
cards will be accepted for payment of registration. Meeting
attendees who have pre-registered may pick up their badges
and registration materials at the pre-registration desk.
The registration fees (in USD) are $675 for Full Registration,
$150 for Student Registration, $200 for Emeritus Members
(Emeritus status pre-approved by ASA or EAA), and $200 for
accompanying persons.
Special note to students who pre-registered online: You
will also be required to show your student ID card when
picking up your registration materials at the meeting.
6. TECHNICAL SESSIONS
The technical program includes 230 sessions with over
2200 abstracts scheduled for presentation during the meeting.
A floor plan of the Hynes Convention Center appears on
pages A13–A16. Session Chairs have been instructed to
adhere strictly to the printed time schedule, both to be fair to
all speakers and to permit attendees to schedule moving from
one session to another to hear specific papers. If an author is
not present to deliver a lecture-style paper, the Session Chairs
have been instructed either to call for additional discussion
of papers already given or to declare a short recess so that
subsequent papers are not given ahead of the designated times.
Several sessions are scheduled in poster format, with the
display times indicated in the program schedule.
7. TECHNICAL SESSION DESIGNATIONS
The first character is a number indicating the day the session
will be held, as follows:
1-Sunday, 25 June
2-Monday, 26 June
3-Tuesday, 27 June
4-Wednesday, 28 June
5-Thursday, 29 June
The second character is a lower case “a” for a.m., “p” for
p.m., or “e” for evening corresponding to the time of day the
session will take place. The third and fourth characters are
capital letters indicating the primary Technical Area sponsor
of the session using the following abbreviations or codes:
AA Architectural Acoustics
AB Animal Bioacoustics
AO Acoustical Oceanography
BA Biomedical Acoustics
EA Engineering Acoustics
ED Education in Acoustics
ID Interdisciplinary
MU Musical Acoustics
NS Noise
PA Physical Acoustics
PP Psychological and Physiological Acoustics
SA Structural Acoustics and Vibration
SC Speech Communication
SP Signal Processing in Acoustics
UW Underwater Acoustics
In sessions where the same group is the primary organizer
of more than one session scheduled in the same morning or
afternoon, a fifth character, either lower-case “a,” “b,” “c,”
“d,” “e,” or “f” is used to distinguish the sessions. Each paper
within a session is identified by a paper number following
the session-designating characters, in the conventional manner.
As hypothetical examples: paper 2pEA3 would be the third
paper in a session on Monday afternoon organized by
Engineering Acoustics; 3pSAb5 would be the fifth paper in
the second of two sessions on Tuesday afternoon sponsored
by Structural Acoustics and Vibration.
Note that technical sessions are listed both in the calendar
and the body of the program in the numerical and alphabetical
order of the session designations rather than the order of their
starting times. For example, session 3aAA would be listed
ahead of session 3aAO even if the latter session begins earlier
in the same morning.
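The designation scheme above is regular enough to be decoded mechanically. As a purely illustrative sketch (not part of the official program; the function name and return format are hypothetical), a session or paper code such as 2pEA3 or 3pSAb5 could be parsed in Python as follows:

```python
import re

# Technical Area codes listed in Section 7.
AREAS = {
    "AA": "Architectural Acoustics", "AB": "Animal Bioacoustics",
    "AO": "Acoustical Oceanography", "BA": "Biomedical Acoustics",
    "EA": "Engineering Acoustics", "ED": "Education in Acoustics",
    "ID": "Interdisciplinary", "MU": "Musical Acoustics",
    "NS": "Noise", "PA": "Physical Acoustics",
    "PP": "Psychological and Physiological Acoustics",
    "SA": "Structural Acoustics and Vibration",
    "SC": "Speech Communication", "SP": "Signal Processing in Acoustics",
    "UW": "Underwater Acoustics",
}

DAYS = {"1": "Sunday", "2": "Monday", "3": "Tuesday",
        "4": "Wednesday", "5": "Thursday"}

# day digit, a/p/e period, two-letter area, optional a-f session
# suffix, optional paper number.
PATTERN = re.compile(r"^([1-5])([ape])([A-Z]{2})([a-f]?)(\d+)?$")

def parse_designation(code):
    """Split a code such as '3pSAb5' into its labeled parts."""
    m = PATTERN.match(code)
    if not m:
        raise ValueError(f"not a valid session designation: {code!r}")
    day, period, area, suffix, paper = m.groups()
    return {
        "day": DAYS[day],
        "period": {"a": "a.m.", "p": "p.m.", "e": "evening"}[period],
        "area": AREAS[area],
        "session": suffix or None,  # distinguishes parallel sessions
        "paper": int(paper) if paper else None,
    }
```

For example, parse_designation("3pSAb5") reports a Tuesday afternoon Structural Acoustics and Vibration session "b," paper 5, matching the worked examples above.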
8. OPENING SESSION AND CLOSING CEREMONIES
The meeting will begin with Opening Ceremonies on
Sunday, 25 June, at 8:00 a.m. in Ballroom B. The meeting
will end with Closing Ceremonies on Thursday, 29 June, at
6:00 p.m. in Ballroom B.
9. KEYNOTE LECTURES
Four Keynote Lectures will be presented—two on Sunday
morning, 25 June, one on Monday morning, 26 June, and one
on Tuesday morning, 27 June. All Keynote Lectures will be
held in Ballroom B.
Sunday, 25 June, 8:15 a.m. to 9:15 a.m. and 9:20 a.m. to
10:20 a.m.
Dr. Tuomas Virtanen of Tampere University of Technology,
Finland, will present “Computational analysis of acoustic
events in everyday environments.”
Prof. Steven A. Cummer of Duke University, USA, will
present “A sound future for acoustic metamaterials.”
Keynote Lectures on Sunday will be followed by a coffee
break from 10:20 a.m. to 10:40 a.m. Morning sessions on
Sunday will begin at 10:40 a.m.
Monday, 26 June, 8:00 a.m. to 9:00 a.m.
Prof. Constantin-C. Coussios of the University of Oxford,
Oxford, UK, will present “Making, mapping and using
acoustic nanobubbles for therapy.”
Tuesday, 27 June, 8:00 a.m. to 9:00 a.m.
Dr. Darlene R. Ketten, Harvard Medical School, Cambridge,
US, will present “Hearing as an extreme sport: Underwater
ears, infra to ultrasonic and surface to the abyss.”
Keynote Lectures on Monday and Tuesday will be followed
by a 20-minute break to allow the audience to move to
technical sessions.
10. WILLIAM AND CHRISTINE HARTMANN
PRIZE IN AUDITORY NEUROSCIENCE AND THE
AUDITORY NEUROSCIENCE PRIZE LECTURE
The 2017 William and Christine Hartmann Prize in
Auditory Neuroscience will be presented to Cynthia F. Moss,
Johns Hopkins University, at the Plenary Session on Tuesday,
27 June. Cynthia Moss will present the Auditory Neuroscience
Prize Lecture titled “Active listening in 3D auditory scenes”
on Sunday, 25 June, at 10:55 a.m. in Session 1aPPb, in Room
311.
11. MEDWIN PRIZE IN ACOUSTICAL
OCEANOGRAPHY AND ACOUSTICAL
OCEANOGRAPHY PRIZE LECTURE
The 2017 Medwin Prize in Acoustical Oceanography will
be presented to Jennifer L. Miksis-Olds, University of New
Hampshire, at the Plenary Session on Tuesday, 27 June. Jennifer
Miksis-Olds will present the Acoustical Oceanography Prize
Lecture titled “Exploring ocean ecosystems and dynamics
through sound” on Sunday, 25 June, at 10:35 a.m. in Session
1aAO in Room 310.
12. EXHIBIT AND EXHIBIT RECEPTION
An instrument and equipment exhibit, conveniently located
near the registration area and meeting rooms, will be held
in Exhibit Hall D.
The Exhibit will include computer-based instrumentation,
scientific books, sound level meters, sound intensity systems,
signal processing systems, devices for noise control and
acoustical materials, active noise control systems and other
exhibits on acoustics.
The Exhibit will open with an evening reception on Sunday
with light snacks and a complimentary drink. Coffee breaks on
Monday and Tuesday mornings will be held in the exhibit area
as well as an afternoon break on Monday.
Exhibit hours are Sunday, 25 June, 5:30 p.m. to 7:00 p.m.,
Monday, 26 June, 9:00 a.m. to 5:00 p.m., and Tuesday, 27
June, 9:00 a.m. to 12:00 noon.
13. RESUME HELP DESK
Are you interested in applying for graduate school, a
postdoctoral opportunity, a research scientist position, a
faculty opening, or another position involving acoustics? If
so, please stop by the ASA Resume Help Desk near the ASA
registration desk in Exhibit Hall D. Members of the ASA
experienced in hiring will be available to review your CV, cover
letter, and research and teaching statements and to provide tips
and suggestions to help you present yourself most effectively in
today’s competitive job market. The ASA Resume Help Desk
will be staffed on Monday, Tuesday, and Wednesday during
the lunch hour (12:20 p.m. to 1:20 p.m.) for walk-up meetings
in Exhibit Hall D.
Appointments during these three lunch hours will also be
available via a sign-up sheet.
14. TECHNICAL COMMITTEE OPEN MEETINGS
Technical Committees will hold open meetings on Monday
and Wednesday evenings at 8:00 p.m. following the Social
Hours. The schedule and rooms for each Committee meeting
are given on page A28.
These are working, collegial meetings. Much of the work
of the Society is accomplished by actions that originate and
are taken in these meetings including proposals for special
sessions, workshops and technical initiatives. All meeting
participants are cordially invited to attend these meetings and
to participate actively in the discussions.
15. TECHNICAL TOURS
Technical Tour – Berklee College of Music – Tuesday, 27
June – 1:15 p.m. to 3:15 p.m. – Cost $0 – Limited to 40
participants
Berklee College of Music is a major contributor to the
vibrancy and musical culture of Boston. Its outstanding,
world-renowned programs include a rigorous core curriculum,
instruction on a range of principal instruments, and a wide
variety of majors and minors including jazz composition,
music production and engineering, film scoring, performance,
and electronic production and design.
One of Berklee’s most recent additions to its campus is right
around the corner from the ASA Meeting site, and this exciting
tour will include visits to major performance spaces, a new
multi-purpose cabaret-style venue, recording studios, a “dub
stage” and sound track production suite. These spaces include
examples of room acoustics design for studios and rehearsal
spaces, high sound isolation details, and state-of-the-art studio
gear, all in a stunning new urban environment. The tour will
include visits to these spaces with input from the acoustical
designers (Acentech Incorporated and Walters-Storyk Design
Group), the architect, and the Berklee administration, as well
as a hands-on introduction to the studio equipment used by
staff and students of the College, with interactive discussions
on site.
Berklee College of Music is a short walk (across the
street) from the Hynes Convention Center. The
tour will depart from the street level foyer of the Hynes
Convention Center at 1:15 p.m. and is planned to take 2 hours
from departure to return.
Technical Tour – MGH Voice Center Open House – Tuesday, 27 June – 6:30 p.m. to 8:30 p.m.
Note: If you are interested in attending, please send an RSVP
by 23 June to Sarah DeRosa, sederosa@mgh.harvard.edu.
The Center for Laryngeal Surgery and Voice Rehabilitation at
the Massachusetts General Hospital (MGH Voice Center) brings
together an interdisciplinary group of clinicians and scientists
to integrate state-of-the-art clinical care with translational
research in laryngeal surgery and voice disorders. The MGH
Voice Center has a highly integrated combination of clinical
and research facilities, including the Clinical Voice Research
Laboratories and Laryngeal Surgery Research Laboratories.
During the Open House, attendees will have the opportunity
to tour the Center’s main outpatient and clinical research facility
and to learn about current research programs, e.g., advanced
laryngeal imaging, ambulatory voice monitoring/biofeedback,
vocal system modeling, and studies of vocal hyperfunction.
16. PLENARY SESSION AND AWARDS CEREMONIES
A joint ASA/EAA plenary session and awards ceremonies will
be held Tuesday, 27 June, in Ballroom B, 3:30 p.m. to 6:00 p.m.
ASA will present the following recognitions and awards:
The William and Christine Hartmann Prize in Auditory
Neuroscience will be presented to Cynthia F. Moss. The Medwin
Prize in Acoustical Oceanography will be presented to Jennifer
L. Miksis-Olds. The R. Bruce Lindsay Award will be presented
to Bradley E. Treeby. The Helmholtz-Rayleigh Interdisciplinary
Silver Medal will be presented to Blake S. Wilson and the Gold
Medal will be presented to William M. Hartmann.
Certificates will be presented to Fellows elected at the
fall 2016 meeting of the Society. See page 3755 for a list of
fellows.
EAA will present the EAA AWARD for lifetime achievements
in acoustics to Hugo Fastl and the EAA AWARD for contributions
to the promotion of Acoustics in Europe to Antonio Pérez López.
All attendees are welcome and encouraged to attend. Please
join us to honor and congratulate these medalists and award
recipients.
17. ANSI STANDARDS COMMITTEES
Meetings of ANSI Accredited Standards Committees will
be held at Acoustics ’17 Boston on Sunday and Monday, 25
and 26 June.
Meetings of selected advisory working groups are often
held in conjunction with Society meetings and are listed in the
Schedule of Committee Meetings and Other Events on page
A28 or on the standards bulletin board in the registration area,
e.g., S12/WG18-Room Criteria.
People interested in attending and in becoming involved in
working group activities should contact the ASA Standards
Manager for further information about these groups, or about
the ASA Standards Program in general, at the following
address: Neil Stremmel, ASA Standards Manager, Standards
Secretariat, Acoustical Society of America, 1305 Walt
Whitman Road, Suite 300, Melville, NY 11747-4300; T: 631-390-0215; F: 631-923-2875; E: asastds@acousticalsociety.org
18. COFFEE BREAKS
Morning coffee breaks will be held each day. On Sunday
the break will be held from 10:20 a.m. to 10:40 a.m. in the
prefunction foyer. Monday to Thursday the breaks will be
held from 10:00 a.m. to 11:00 a.m. in Exhibit Hall D and
the 3rd Floor Boylston Hallway. Morning breaks on Tuesday
and Wednesday will be held in the Exhibit area.
There will also be an afternoon break on Tuesday from 3:00
p.m. to 4:00 p.m. in the Exhibit area.
19. A/V PREVIEW ROOM
Room 307 will be set up as an A/V preview room for
authors’ convenience, and will be available Sunday through
Thursday from 7:00 a.m. to 6:00 p.m.
20. PROCEEDINGS OF MEETINGS ON ACOUSTICS
(POMA)
The Acoustics ’17 Boston meeting will have a published
proceedings, and submission is optional. All authors of
Acoustics ‘17 Boston meeting papers are encouraged to
submit a pdf manuscript to Volume 30 of ASA’s Proceedings
of Meetings on Acoustics (POMA).
Things to note:
• There is no publication fee, but presentation of the paper at
the meeting is mandatory.
• POMA does not have a submission deadline. Authors may
submit manuscripts before or after the meeting; note, however, that review will not take place until after the meeting.
• POMA has new Word and LaTeX manuscript templates,
and cover pages are now generated automatically at the
time of publication.
• Published papers are being both indexed in scholarly
venues and highlighted on Twitter and Facebook.
• Visit the POMA website for additional information, including
recent changes to the manuscript preparation/submission process.
21. E-MAIL AND INTERNET ZONE
Wi-Fi will be available in all ASA meeting rooms and
spaces.
Computers providing e-mail access will be available 7:00
a.m. to 6:00 p.m., Monday to Friday in Exhibit Hall D. Tables
with power cords will be set up in Exhibit Hall D for attendees
to gather and to power-up their electronic devices.
22. SOCIALS
Socials will be held on Monday and Wednesday evenings,
6:30 p.m. to 8:00 p.m. in Ballroom B at the Hynes Convention
Center.
These social hours provide a relaxing setting for meeting
attendees to meet and mingle with their friends and colleagues
as well as an opportunity for new members and first-time
attendees to meet and introduce themselves to others in the
field. A second goal of the socials is to provide a sufficient
meal so that meeting attendees can attend the open meetings of
Technical Committees that begin immediately after the socials.
23. STUDENTS MEET MEMBERS FOR LUNCH
The ASA Education Committee arranges for a student to
meet one-on-one with a member of the Acoustical Society over
lunch. The purpose is to make it easier for students to meet and
interact with members at ASA Meetings. Each lunch pairing is
arranged separately. Students who are interested should contact
Dr. David Blackstock, University of Texas at Austin, by email
dtb@mail.utexas.edu. Please provide your name, university,
department, degree you are seeking (BS, MS, or PhD), research
field, acoustical interests, your supervisor’s name, days you are
free for lunch, and abstract number (or title) of any paper(s) you
are presenting. The sign-up deadline is 12 days before the start
of the Meeting, but an earlier sign-up is strongly encouraged.
Each participant pays for his/her own meal.
24. STUDENT EVENTS: NEW STUDENT
ORIENTATION, MEET AND GREET, STUDENT
RECEPTION
Follow the students on Twitter throughout the meeting:
@ASAStudents.
A New Student Orientation will be held from 5:00 p.m. to
5:30 p.m. on Sunday, 25 June, in Room 208, for all students to
learn about the activities and opportunities available to students
at the Acoustics ’17 Boston meeting. This will be followed by the
Student Meet and Greet from 5:30 p.m. to 7:00 p.m. in Room
300. Refreshments and a cash bar will be available.
The Students’ Reception will be held on Tuesday, 27 June,
from 6:00 p.m. to 8:00 p.m. in Room 102. This reception,
sponsored by the Acoustical Society of America and supported
by the National Council of Acoustical Consultants, will
provide an opportunity for students to meet informally with
fellow students and other members of the Acoustical Society.
All students are encouraged to attend, especially students who
are first time attendees or those from smaller universities.
Students will find a ribbon in their registration envelopes
to place on their name tags identifying them as students.
Although wearing the ribbon is not mandatory, it will allow
for easier networking between students and other meeting
attendees.
Students are encouraged to refer to the student guide, also
found in their envelopes, for important program and meeting
information pertaining only to students attending the ASA
meeting.
They are also encouraged to visit the official ASA Student
Home Page at http://asastudentcouncil.org// to learn more
about student involvement in ASA.
25. WOMEN IN ACOUSTICS LUNCHEON
The Women in Acoustics luncheon will be held at 11:45
a.m. on Tuesday, 27 June, in Room 102. Those who wish to
attend must purchase their tickets in advance by 10:00 a.m. on
Monday, 26 June. The fee is USD $30 for non-students
and USD $15 for students.
26. JAM SESSION
You are invited to Room 313 on Tuesday night, 27 June,
from 8:00 p.m. to midnight for the ASA Jam. Bring your
axe, horn, sticks, voice, or anything else that makes music.
Musicians and non-musicians are all welcome to attend. A
full PA system, backline equipment, guitars, bass, keyboard,
and drum set will be provided. All attendees will enjoy live
music, a cash bar with snacks, and all-around good times.
Don’t miss out.
27. ACCOMPANYING PERSONS PROGRAM
Spouses and other visitors are welcome at Acoustics ’17
Boston. The on-site registration fee for accompanying persons
is USD $200. Registration provides access to the accompanying
persons room, the socials on Monday and Wednesday evenings,
and the Jam Session.
A hospitality room for accompanying persons will be open
in Room 101 at the Hynes Convention Center from 8:00 a.m.
to 10:00 a.m. Sunday through Thursday.
Breakfast snacks including beverages will be provided.
28. HUNT RECOGNITION AND FUNDRAISING
INITIATIVE
The Hunt Fellowship has been an outstanding feature of
the Acoustical Society of America for 40 years. Recipients of
the Fellowship have made major contributions to acoustical
science and have served the Society in a variety of positions.
An event to commemorate and celebrate this anniversary is
being planned for the ASA’s fall 2017 meeting to be held
in New Orleans. In conjunction with that celebration, the
Acoustical Society Foundation Fund, with the support of the
Executive Council, is initiating a fundraising campaign to
support Early Career Awards for development of talent within
the Society. Planning for the Hunt recognition event and
fundraising campaign is underway, and further information
will be distributed in the coming months.
29. WEATHER
Summer in Boston can be delightful, with ocean breezes
helping to keep the heat and humidity in check. Evening
temperatures can be cool and may call for a light sweater. A
sudden thunderstorm is not uncommon, so you may want to
include an umbrella in your bag. Summer average high
temperatures are above 80 °F (26.7 °C), with overnight lows
above 60 °F (15.5 °C).
30. TECHNICAL PROGRAM ORGANIZING
COMMITTEE
Robert M. Koch (ASA) and Philippe Blanc-Benon (EAA),
Technical Program Cochairs; Andone Lavery (ASA), Philippe
Blondel (EAA), Acoustical Oceanography; Christine Erbe
(ASA), Olivier Adam (EAA), Animal Bioacoustics; Kenneth
Good (ASA), Monika Rychtáriková (EAA), Stefan Schoenwald
(EAA), Architectural Acoustics; Nathan McDannold (ASA),
Constantin-C. Coussios (EAA), Biomedical Acoustics;
David T. Bradley (ASA), Catherine Potel (EAA), Education
in Acoustics; Kenneth Walsh (ASA), Ondrej Jiricek (EAA),
Engineering Acoustics; Andrew Morrison (ASA), David
Sharp (EAA), Musical Acoustics; William Murphy (ASA),
Jian Kang (EAA), Brigitte Schulte-Fortkamp (EAA), Noise;
Joseph Gladden (ASA), Olga Umnova (EAA), Physical
Acoustics; Magdalena Wojtczak (ASA), Armin Kohlrausch
(EAA), Psychological and Physiological Acoustics; Paul
Gendron (ASA), Boaz Rafaely (EAA), Signal Processing in
Acoustics; Catherine Rogers (ASA), Alexander Raake (EAA),
Speech Communication; Robert Koch (ASA), Manfred
Kaltenbacher (EAA), Ines Lopez-Arteaga (EAA), Structural
Acoustics and Vibration; Megan Ballard (ASA), Philippe
Blondel (EAA), Underwater Acoustics.
ASA TPOC ASSISTANTS
Aaron Thode, Animal Bioacoustics; Damian Doria, Ian
Hoffman, Architectural Acoustics; Siddhartha Sikdar, Kang
Kim, Biomedical Acoustics; Eoin King, Daniel Russell,
Education in Acoustics; Whitney Coyle, Peter Rucz, Musical
Acoustics; James Phillips, Noise; Michael Haberman, Kevin
Lee, Physical Acoustics; Melissa Baese-Berk, Alexander
Francis, Kristin Van Engen, Speech Communication;
Benjamin Shafer, Structural Acoustics and Vibration; Derek
Olson, Underwater Acoustics; Kathy Whiteford, ASA Student
Council.
31. MEETING ORGANIZING COMMITTEE
Damian Doria, ASA Cochair and Mats Åbom, EAA
Cochair; David Feit, Treasurer; Daniel Farrell, Webmaster;
Michael Stinson, Communications; Christopher Jasinski and
Cristina Zamorano, Student Activities; Susan Fox, Elaine
Moran, Secretariat.
32. PHOTOGRAPHING AND RECORDING
Photographing and recording during regular sessions are not
permitted without prior explicit permission of the presenter.
33. ABSTRACT ERRATA
This meeting program is Part 2 of the May 2017 issue of
The Journal of the Acoustical Society of America. Corrections,
for printer’s errors only, may be submitted for publication in
the Errata section of the Journal.
34. GUIDELINES FOR ORAL PRESENTATIONS
Preparation of Visual Aids
• See the guidelines for computer projection in the next section below.
• Allow at least one minute of your talk for each slide (e.g.,
PowerPoint). No more than 12 slides for a 15-minute talk
(with 3 minutes for questions and answers).
• Minimize the number of lines of text on one visual aid. 12
lines of text should be a maximum. Include no more than 2
graphs/plots/figures on a single slide. Generally, too little
information is better than too much.
• Presentations should contain simple, legible text that is
readable from the back of the room.
• Characters should be at least 0.25 inches (6.5 mm) in
height to be legible when projected. A good rule of thumb
is that text should be 20 point or larger (including labels
in inserted graphics). Anything smaller is difficult to read.
○ Make symbols at least 1/3 the height of a capital letter.
○ For computer presentations, use all of the available
screen area using landscape orientation with very thin
margins. If your institution’s logo must be included,
place it at the bottom of the slide.
○ Sans serif fonts (e.g., Arial, Calibri, and Helvetica) are
much easier to read than serif fonts (e.g., Times New
Roman), especially from afar. Avoid thin fonts (e.g.,
the horizontal bar of an e may be lost at low resolution,
thereby registering as a c).
○ Do not use underlining to emphasize text. It makes the
text harder to read.
○ All axes on figures should be labeled.
• No more than 3–5 major points per slide.
• Consistency across slides is desirable. Use the same background,
font, font size, etc., across all slides.
• Use appropriate colors. Avoid complicated backgrounds
and do not exceed four colors per slide. Backgrounds that
change from dark to light and back again are difficult to
read. Keep it simple.
○ If using a dark background (dark blue works best), use
white or yellow lettering. If you are preparing slides that
may be printed to paper, a dark background is not appropriate.
○ If using light backgrounds (white, off-white), use dark
blue, dark brown, or black lettering.
• DVDs should be in standard format.
Presentation
• Organize your talk with introduction, body, and summary
or conclusion. Include only ideas, results, and concepts
that can be explained adequately in the allotted time. Four
elements to include are:
○ Statement of research problem
○ Research methodology
○ Review of results
○ Conclusions
• Generally, no more than 3–5 key points can be covered adequately in a 15-minute talk so keep it concise.
• Rehearse your talk so you can confidently deliver it in the allotted time. Session Chairs have been instructed to adhere to
the time schedule and to stop your presentation if you run over.
• An A/V preview room will be available for viewing computer presentations before your session starts. It is advisable to preview your presentation because in most cases
you will be asked to load your presentation onto a computer which may have different software or a different
configuration from your own computer.
• Arrive early enough so that you can meet the session chair,
load your presentation on the computer provided, and familiarize yourself with the microphone, computer slide
controls, laser pointer, and other equipment that you will
use during your presentation. There will be many presenters loading their materials just prior to the session so it is
very important that you check that all multi-media elements (e.g., sounds or videos) play accurately prior to the
day of your session.
• Each time you display a visual aid the audience needs time
to interpret it. Describe the abscissa, ordinate, units, and the
legend for each figure. If the shape of a curve or some other
feature is important, tell the audience what they should observe to grasp the point. They won’t have time to figure it
out for themselves. A popular myth is that a technical audience requires a lot of technical details. Less can be more.
• Turn off your cell phone prior to your talk and keep it away
from your body. Cell phones can interfere with the loudspeakers and the wireless microphone.
35. SUGGESTIONS FOR EFFECTIVE POSTER
PRESENTATIONS
Content
The poster should be centered around two or three key points
supported by the title, figures, and text. The poster should be
able to “stand alone.” That is, it should be understandable
even when you are not present to explain, discuss, and answer
questions. This quality is highly desirable since you may not
be present the entire time posters are on display, and when you
are engaged in discussion with one person, others may want
to study the poster without interrupting an ongoing dialogue.
• To meet the “stand alone” criterion, it is suggested that the
poster include the following elements, as appropriate:
○ Background
○ Objective, purpose, or goal
○ Hypotheses
○ Methodology
○ Results (including data, figures, or tables)
○ Discussion
○ Implications and future research
○ References and Acknowledgment
Design and Layout
• A board approximately 8 ft. wide × 4 ft. high will be provided
for the display of each poster. Supplies will be available
for attaching the poster to the display board. Each
board will be marked with an abstract number.
• Typically posters are arranged from left to right and top to
bottom. Numbering sections or placing arrows between
sections can help guide the viewer through the poster.
• Centered at the top of the poster, include a section with
the abstract number, paper title, and author names and
affiliations. An institutional logo may be added. Keep the design relatively simple and uncluttered. Avoid glossy paper.
Lettering and text
• Font size for the title should be large (e.g., 70-point font).
• Font size for the main elements should be large enough
to facilitate readability from 2 yards away (e.g., 32-point
font). The font size for other elements, such as references,
may be smaller (e.g., 20–24-point font).
• Sans serif fonts (e.g., Arial, Calibri, Helvetica) are generally
easier to read at a distance than serif fonts (e.g., Times New Roman).
• Text should be brief and presented in a bullet-point list as
much as possible. Long paragraphs are difficult to read in a
poster presentation setting.
Visuals
• Graphs, photographs, and schematics should be large
enough to see from 2 yards (e.g., 8 × 10 inches).
• Figure captions or bulleted annotation of major findings
next to figures are essential. To ensure that all visual elements are “stand alone,” axes should be labeled and all
symbols should be explained.
• Tables should be used sparingly and presented in a simplified
format.
Presentation
• Prepare a brief oral summary of your poster and short
answers to likely questions in advance.
• The presentation should cover the key points of the poster so that the audience can understand the main findings.
Further details of the work should be left for discussion
after the initial poster presentation.
• It is recommended that authors practice their poster presentation
in front of colleagues before the meeting. Authors
should request feedback about the oral presentation as well
as poster content and layout.
Other suggestions
• You may wish to prepare reduced-size copies of the poster
(e.g., 8 1/2 × 11 sheets) to distribute to interested audience
members.
36. GUIDELINES FOR USE OF COMPUTER
PROJECTION
A PC with monaural audio playback capability and a
projector will be provided in each meeting room, and
all authors who plan to use computer projection should
load their presentations onto it. Authors should bring their
presentations on a CD or USB drive and should arrive at the
meeting rooms at least 30 minutes before the start of their
sessions. Assistance in loading presentations onto the
computers will be provided.
Note that only PC format will be supported, so authors
using Macs must save their presentations for projection in
PC format. Also, authors who plan to play audio during their
presentations should ensure that their sound files are also saved
on the CD or USB drive.
Introduction
It is essential that each speaker who plans to use his/her
own laptop connect to the computer projection system in the
A/V preview room prior to session start time to verify that
the presentation will work properly. Technical assistance is
available in the A/V preview room at the meeting, but not in
session rooms. Presenters whose computers fail to project for
any reason will not be granted extra time.
Guidelines
• Set your computer’s screen resolution to 1024 × 768 pixels
or to the resolution indicated by the AV technical support.
If your presentation looks OK at that resolution on your own
screen, it will probably look OK to your audience during
your presentation.
• Remember that graphics can be animated or quickly toggled among several options: Comparisons between figures
may be made temporally rather than spatially.
• Animations often run more slowly on laptops connected
to computer video projectors than when not so connected.
Test the effectiveness of your animations before your assigned presentation time on a similar projection system
(e.g., in the A/V preview room). Avoid real-time calculations in favor of pre-calculation and saving of images.
• If you will use your own laptop instead of the computer
provided, connect your laptop to the projector during the
question/answer period of the previous speaker. It is good
protocol to initiate your slide show (e.g., run PowerPoint)
immediately once connected, so the audience doesn’t have
to wait. If there are any problems, the session chair will
endeavor to assist you, but it is your responsibility to ensure that the technical details have been worked out ahead
of time.
• During the presentation, run your laptop on mains power
instead of battery power to ensure that the laptop runs at
full CPU speed. This will also guarantee that your laptop
does not run out of power during your presentation.
Specific Hardware Configurations
Macintosh
Older Macs require a special adapter to connect the video
output port to the standard 15-pin DE-15 (VGA) connector. Make
sure you have one with you.
• Hook everything up before powering anything on. (Connect the computer to the RGB input on the projector).
• Turn the projector on and boot up the Macintosh. If this
doesn’t work immediately, make sure that your
monitor resolution is set to 1024 × 768 for an XGA projector
or at least 640 × 480 for an older VGA projector (1024 × 768
will almost always work). You should also make sure that
your monitor controls are set to mirroring.
If it’s an older PowerBook, it may not have video mirroring
but rather something called SimulScan, which is essentially the same.
• Depending upon the vintage of your Mac, you may have
to reboot once it is connected to the computer projector
or switcher. Hint: you can reboot while connected to the
computer projector in the A/V preview room in advance of
your presentation, then put your computer to sleep. Macs
thus booted will retain the memory of this connection when
awakened from sleep.
• Depending upon the vintage of your system software, you
may find that the default video mode is a side-by-side configuration of monitor windows (the test for this will be that
you see no menus or cursor on your desktop; the cursor will
slide from the projected image onto your laptop’s screen as
it is moved). Go to Control Panels, Monitors, configuration, and drag the larger window onto the smaller one. This
produces a mirror-image of the projected image on your
laptop’s screen.
• Also depending upon your system software, either the
Control Panels will automatically detect the video projector’s resolution and frame rate, or you will have to set them
manually. If the resolution is not set to a compatible value, the
projector may not show an image. Experiment ahead of time
with resolution and color depth settings in the A/V preview
room (please don’t waste valuable session time adjusting the Control Panel settings).
PC
• Make sure your computer has the standard female 15-pin
DE-15 video output connector. Some computers require an
adapter.
• Once your computer is physically connected, you will need
to toggle the video display on. Most PCs use either ALT-F5
or F6, as indicated by a little video monitor icon on
the appropriate key. Some systems require more elaborate
keystroke combinations to activate this feature. Verify your
laptop’s compatibility with the projector in the A/V preview room. Likewise, you may have to set your laptop’s
resolution and color depth via the monitor’s Control Panel
to match those of the projector; verify these settings prior
to your session.
Linux
• Most Linux laptops have a function key marked CRT/LCD
or two symbols representing computer versus projector.
Often that key toggles the VGA output of the computer on
and off, but in some cases doing so will cause the computer to crash. One fix for this is to enter the BIOS setup and
look for a field marked CRT/LCD (or similar). This field
can be set to Both, in which case the laptop’s video signal
is always presented to the VGA output jack on the back
of the computer. Once connected to a computer projector,
the signal will appear automatically, without toggling the
function key. Once you get it working, don’t touch it, and it
should continue to work, even after a reboot.
37. DATES OF FUTURE ASA MEETINGS
For further information on any ASA meeting, or to obtain
instructions for the preparation and submission of meeting
abstracts, contact the Acoustical Society of America, 1305
Walt Whitman Road, Suite 300, Melville, NY 11747-4300;
Telephone: 516-576-2360; Fax: 631-923-2875; E-mail: asa@acousticalsociety.org
173rd Meeting, Boston, Massachusetts, 25–29 June 2017
(The 3rd joint meeting of the Acoustical Society of America
and the European Acoustics Association)
174th Meeting, New Orleans, Louisiana, 4–8 December 2017
175th Meeting, Minneapolis, Minnesota, 7–11 May 2018
176th Meeting, Victoria, Canada, 6–9 November 2018
177th Meeting, Louisville, Kentucky, 13–17 May 2019
178th Meeting, TBD, fall 2019
179th Meeting, Chicago, Illinois, 11–15 May 2020
180th Meeting, Cancun, Mexico, fall 2020
FIFTY-YEAR AWARDS
“Gold” certificates in recognition of continuing interest in membership in the Society for half a century will be sent to the following members:
Michael W. Blanck
Richard E. Boner
Dwight A. Boyd
John L. Butler
Raymond H. Colton
Alex de Bruijn
M. David Egan
Anthony I. Eller
Guillermo C. Gaunaurd
Julius L. Goldstein
Leonard Mellberg
Thomas Miller
Roger C. Noppe
John J. Ohala
Ira J. Rosenbaum
Ronald W. Schafer
Duane R. Simmons
Donald W. Tufts
John F. Wilby
Nai-Chuyan Yen
TWENTY-FIVE YEAR AWARDS
The following individuals have been members of the Society for twenty-five years. They have been sent “Silver” certificates in recognition of
the mutual advantages derived from their long-time association with the Society:
Abeer Alwan
Rex K. Andrew
Nick Antonio
Noureddine Atalla
Philip J. Battenberg
Durand R. Begault
Steven A. Bielamowicz
Leonard J. Bond
Thomas H. Burns
Reh-Lin Chen
Dan Clayton
Perry R. Cook
Bruce C. Denardo
Gary E. English
Anibal J. S. Ferreira
Lawrence R. Fincham
Ronald R. Freiheit
Kurt M. Graffy
Christopher R. Hamlin
Jerry M. Harris
Alex E. Hay
Carleton S. Hayek
Karl Wilhelm Hirsch
Josef M. Jech
Michael Jessen
Walter A. Kargus, IV
Aaron M. Korby
Judi Lapsley-Miller
Stuart D. McGregor
Sandra C. MacLean
Nicholas C. Makris
Thomas E. Miller
Andrzej J. Miskiewicz
Amy T. Neel
Martin D. Newson
Masahiko Okajima
W. J. Richardson
Yasuhiro Riko
Daniel W. Robert
Thomas J. Royston
Amebu K. Seddoh
Alan Sharpley
W. S. Shepard
Arne Solstad
Brian J. Sperry
Philip S. Spoor
Gerald Randolph Stanley
Arend M. Sulter
U. Peter Svensson
Jack M. Terhune
James A. Theriault
James R. Underbrink
Randall P. Wagner
Daniel M. Warren
Alain Weill
Paul A. Wheeler
Terence J. Williams
Richard A. Wright
Keith Yates
Pavel Zahorik
Tao Zhang
ANNUAL GIVING TO THE ACOUSTICAL SOCIETY FOUNDATION FUND – 2016
The Acoustical Society of America Foundation Board is deeply grateful for all contributions received in 2016. To help express this gratitude, the list of donors to
the Foundation Fund in 2016 is published below.
*Indicates donors for 10 years or more
Patrons — $25,000 and above
Elizabeth L. and Russell F. Hallberg
Foundation and Douglas F. Winker
Leaders — $5,000 to $24,999
Louis C. and Marilyn J. Sutherland
Family Trust
Wenger Foundation
Benefactors — $1,000 to $4,999
Acentech, Inc.
Frost, Hal
Newman, Henry
Newman, R. Bradford, Jr.
O’Brien, William D.
Siebein Associates
Sponsors — $500 to $999
Atal, Bishnu S.
*Beranek, Leo L.
*Burkhard, Mahlon D.
Fox, Susan E.
Hartmann, William M.
*McKinney, Chester M.
National Council of Acoustical
Consultants
*Ostergaard, Paul B.
Shure Corp.
*Wang, Lily M.
Donors — $250 to $499
*Alach, D. Robert
*Atchley, Anthony A.
*Cavanaugh, William J.
*Case, Alexander U.
Dubno, Judy R.
*Feit, David
Freiheit, Ronald R.
Glaser, Kevin J.
*Kinzey, Bertram Y.
*Kuperman, William A.
Mast, T. Douglas
Moldover, Michael R.
*Nelson, Peggy B.
Oxford Acoustics
Supporters — $100 to $249
*Anderson, Roger J.
Assmann, Peter F.
*Augspurger, George L.
Baggeroer, Arthur B.
*Baker, Steven R.
Barnard, Robert R.
*Beckman, Mary Esther
*Bell-Berti, Fredericka
Bergen, Thomas
*Bishop, Dwight E.
Bobrovnitskii, Yuri I.
*Boyce, Suzanne E.
Bradlow, Ann R.
*Broad, David J.
*Burroughs, Courtney B.
*Carney, Arlene E.
Celmer, Robert D.
*Chambers, David H.
*Chang, Shun Hsyung
*Cheng, Arthur C.
*Chu, Dezhang
Colburn, H. Steven
Coriolano, J.
Cuschieri, Joseph M.
Dahl, Peter H.
*Davies, Patricia
Donovan, Paul
Elko, Gary W.
*Farabee, Theodore M.
Fleischer, Gerald
*Francis, Alexander L.
*Frisk, George V.
Garrett, Steven L.
*Ginsberg, Jerry H.
Greenwood, Margaret S.
*Griesinger, David H.
*Heinz, John M.
*Hellweg, Robert D.
*Kato, Hiroaki
Kemp, Kenneth A.
*Kennedy, Elizabeth A.
Kent, Ray D.
Khokhlova, Vera A.
Kieser, Robert
*Kreiman, Jody E.
Kube, Christopher
*Letcher, Stephen V.
*Lewis, Edwin R.
*Macaulay, Michael C.
Maslak, Samuel H.
Mason, Christine R.
Metropolitan Acoustics
*Mikhalevsky, Peter N.
Miller, James H.
Mironov, Mikhail A.
Moore, Patrick W.
*Neff, Donna L.
Noxon, Arthur M.
*Patton, Richard S.
*Pettyjohn, Steven D.
Pirn, Rein
Powell, Clemans A.
Reinke, Robert E.
Ridgway, Sam H.
*Roederer, Juan G.
Rosenbaum, Ira J.
*Rosenberg, Carl. J.
Roy, Kenneth P.
*Rutledge, Janet C.
Scarbrough, Paul H.
*Schmid, Charles E.
Shannon Corp.
*Sigelmann, Rubens A.
Simmons, James A.
Sommerfeldt, Scott D.
*Stinson, Michael R.
*Strong, William J.
Swallow, John C.
Taylor, M. M.
Toole, Floyd E.
*Victor, Alfred E.
Vorländer, Michael
*Webster, John C.
*Wilber, Laura A.
Wright, Richard A.
*Wright, Wayne M.
Contributors — up to $99
*Abramson, Arthur S.
*Akamatsu, Katsuji
Allen, Jont B.
Anderson, Ronald K.
Anthony, Richard
Bacon, Cedrik
Balachandran, Balakumar
Bassett, Christopher S.
Beauchamp, James W.
Becker, Kyle M.
Bedard, Alfred J.
Bernstein, Jared C.
*Bissinger, George A.
Bjelobrk, Nada
*Blackstock, David T.
Blom, Philip S.
*Bond, Zinny
Botta, Jeanine
Boyce, Suzanne E.
Brown, Steven
Burroughs, Courtney B.
*Campbell, Joseph P.
*Campbell, Murray D.
Carey, John H.
*Carome, Edward F.
Carter, G Clifford
Cavanagh, Raymond C.
Chalikia, Magdalene H.
Chapter Y PEO Sisterhood
Chen, Songmao
Chun Cheng, Chih
*Church, Charles C.
*Ciocca, Valter
*Colleran, C N.
Collier, Sandra L.
Connor, William K.
*Cook, Rufus L.
*Cottingham, James
*Cristini, Paul
*Crystal, Thomas H.
*Curtis, Allen J.
*Curtis, George D.
Dapin, Andy Lou S.
Das, Pankaj K.
da Silva Neto, Mikey
*Davis, Donald B.
Deaett, Michael A.
deGroot Hedlin, Catherine D.
*Delannoy, Jaime
Dembowski, James S.
Demorest, Marilyn E.
Doggett, Felicia M.
Dooling, Robert J.
*Duifhuis, Hendrikus
*Dunens, Egons K.
Edmonds, Peter D.
*Ellermeier, Wolfgang
Erskine, Fred T.
Essert, Robert D.
*Fahnline, John B.
Fause, Kenneth R.
Feeney, M. Patrick
Ferguson, Elizabeth L.
*Feth, Lawrence L.
Feuillade, C.
Fleischer, Gerald
Fong, Kirby W.
Forbes, Barbara J.
Foulkes, Timothy J.
Frankenthal, Shimshon
Frisina, D Robert
Frosch, Robert A.
Fujisaki, Hiroya
*Funatsu, Seiya
*Galaitsis, Anthony G.
Gammell, Paul M.
*Gaumond, Charles F.
*Gendron, Paul J.
*George, Ballard W.
Gerratt, Bruce R.
Giacometti, Alberto
Gjestland, Truls T.
Glean, Aldo A.
*Goldstein, Julius L.
Gonzalez, Roberto A.
Grantham, D Wesley
Granzow, John
*Grason, Rufus L.
Gray, Kathleen
Greenwood, Margaret S.
Halter, Edmund J.
Hanson, Helen M.
*Harris, Katherine S.
Hermand, Jean Pierre
*Herstein, Peter D.
*Hieken, Milton H.
Hobelsberger, Max
*Holford, Richard L.
Hollien, Harry
Holt, Yolanda F.
Horoshenkov, Kirill V.
*Howe, Bruce M.
*Hueter, Theodor F.
Hull, Andrew J.
Hulva, Andrew
Iannace, Gino
Ichimura, Hideyuki
*Ishii, Kiyoteru
*Ivey, Larry E.
Janisch, R. Bryan
*Kakita, Kuniko
*Kawahara, Hideki
Kayes, Gillyanne
Kedrinskiy, Valery K.
Kentaro, Ishizuka
*Kewley Port, Diane
Kikuchi, Toshiaki
*Kleinschmidt, Klaus
Klepper, David L.
*Koenig, Laura L.
Korman, Murray S.
*Krahe, Detlef
*Kumamoto, Yoshiro
Lalani, Asad
Lancey, Timothy W.
*Langley, Robin S.
Lee, Keunhwa
Lee, Sungbok
Lentz, Jennifer
*Lerner, Armand
Letcher, Stephen V.
Levitt, Harry
LeZak, Raymond J.
Lienard, Jean Sylvain
*Lilly, Jerry G.
Lin, Wen Hwang
*Lofqvist, Anders
Loftis, Charles B.
Long, Glenis
*Lowe, Albert W.
Lubman, David
Ludlow, Christy L.
*Lutolf, John J.
Lynch, James F.
Lyon, Craig A.
Maddieson, Ian
*Maekawa, Zyun iti
Mandel, Michael I.
Markley, John
Marshall, James L.
Marston, Timothy M.
Martin, Gordon E.
*McEachern, James F.
Meesawat, Kittiphong
*Mehl, James B.
Meitzler, Allen H.
*Moffett, Mark B.
Monahan, Edward C.
*Moore, James A.
Muller Preuss, Peter
Mullins, Joe H.
Murphy, Seibert
Murphy, William J.
*Namba, Seiichiro
Neuman, Pamela
Nobile, Matthew A.
Norris, Thomas R.
Novarini, Jorge C.
Ohala, John J.
Ohtani, Toshihiro
Ohyama, Ghen
*O’Malley, Honor
Osses, Alejandro
Padilla, Alexandra M.
Panjaitan, Andy B.
Pappas, Anthony L.
Patterson, Roy D.
*Paulauskis, John A.
*Penardi, Paul A.
*Perry, Howard B.
*Peterson, Ronald G.
Peterson, William M.
*Pettersen, Michael S.
Pettersen, Inge
Port, Robert F.
*Powell, Robert E.
Preissner, Curt A.
Raymond, Jason L.
Reilly, Sean M.
Richards, Roy L.
Ringheim, Matias
Ritenour, Donald V.
*Rochat, Judith L.
Rogers, Catherine L.
Romano, Rosario A.
Romero Faus, Jose
*Rosenberg, Aaron E.
*Rosowski, John J.
Rubin, Philip E.
*Rueckert, Daniel C.
Russo, Arlyne E.
*Saito, Shigemi
Sandoe, Iain D.
*Sato, Takuso
Satori, Ivan
Saunders, James C.
*Scaife, Ronan P.
Scarton, Henry A.
Schaefer, John B.
Schlauch, Robert S.
Schuette, Dawn R.
*Schulte Fortkamp, Brigitte
*Schwartz, Richard F.
Schwenke, Roger W.
Selamet, Ahmet
*Sessler, Gerhard M.
Shattuck Hufnagel, Stefanie R.
Shekhar, Himanshu
Shimoda, Hidemaro
Shimizu, Yasushi
*Shiner, Allen H.
Sidorovskaia, Natalia A.
Slagel, John
Smedley, John E.
Smith, Kevin B.
Solet, Jo M.
Sone, Toshio
Soria, Leonardo
St. Pierre, Richard L.
Studebaker, Gerald A.
Stukes, Deborah D.
*Stumpf, Folden B.
Suzuki, Yoiti
Szabo, Thomas L.
*Temkin, Samuel
Thode, Aaron M.
Thompson, Stephen C.
*Thurlow, Willard R.
Tichy, Jiri
Tubis, Arnold
Turner, Glenn
*Turner, Joseph A., Jr.
*Ungar, Eric E.
van Dommelen, Wim A.
*Van Dyke, Michael B.
Veale, Edward
Ver, Istvan L.
*Visintini, Lucio
*Wagner, Paul A.
*Walkling, Robert A.
Wall, Alan T.
Wang, Rong G.
Warren, Richard M.
Washburn, Donald J.
Watson, Eric T.
Weaver, Aurora
Whalen, Doug H.
White, James V.
*Wilby, John F.
Willey, Carson L.
Wisdom, Sheyna S.
Wold, Donald C.
Wright, Matthew C.
Yacher, John M.
Yukio, Iwaya
SUNDAY MORNING, 25 JUNE 2017
BALLROOM B, 8:00 A.M. TO 10:15 A.M.
Session 1aID
Interdisciplinary: Opening Ceremonies, Keynote Lectures
8:00
The Presidents of the European Acoustics Association and the Acoustical Society of America will welcome attendees to Acoustics ’17 Boston.
Invited Papers
Keynote Introduction—8:15
8:20
1aID1. Computational analysis of acoustic events in everyday environments. Tuomas Virtanen (Tampere Univ. of Technol.,
Korkeakoulunkatu 1, Tampere FI-33720, Finland, tuomas.virtanen@tut.fi)
Sounds carry a large amount of information about our everyday environment and the physical events that take place in it. Recent advances in machine learning allow automatic methods to analyze this information, for example, by detecting and classifying acoustic events
produced by various sources. This enables several applications, for example, in acoustic surveillance, context-aware devices, and multimedia indexing. This talk will present signal processing and machine learning methods that can be used to detect and classify everyday
acoustic events originating, e.g., from vehicles, human activity, and human and animal vocalizations. It will
describe the scientific challenges in such methods, for example, many sources having highly similar spectral characteristics and multiple
sources being active simultaneously. It will explain how state-of-the-art methods based on advanced deep neural network topologies
deal with these challenges. The talk will also discuss the practical challenges related to the development of the methods, such as the acquisition of the data used to develop them. It will present results from recent evaluations of event detection systems and illustrate
them using audio and video examples.
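As a toy illustration of the detection task described in this abstract, the sketch below marks frames of a signal as "event active" by thresholding short-time log-energy. A system of the kind the talk covers would replace the threshold with a trained deep neural network operating on spectrogram features; all function names, signal lengths, and threshold values here are illustrative, not taken from the talk.

```python
import numpy as np

def frame_energies(signal, frame_len=1024, hop=512):
    """Short-time log-energy per frame: the simplest possible feature."""
    frames = [signal[i:i + frame_len]
              for i in range(0, len(signal) - frame_len + 1, hop)]
    return np.array([10 * np.log10(np.sum(f**2) + 1e-12) for f in frames])

def detect_events(signal, threshold_db=-20.0):
    """Mark frames whose log-energy exceeds a threshold as 'event active'.
    A state-of-the-art detector would instead feed spectrogram features
    to a trained neural network; the threshold merely stands in for it."""
    return frame_energies(signal) > threshold_db

# Toy signal: 1 s of silence, a 0.1 s noise burst, then 1 s of silence.
fs = 16000
rng = np.random.default_rng(1)
sig = np.concatenate([np.zeros(fs), rng.standard_normal(fs // 10), np.zeros(fs)])
active = detect_events(sig)  # boolean mask, True on frames overlapping the burst
```

Even this crude detector recovers the burst's extent; the hard cases the talk names (overlapping sources with similar spectra) are exactly where such energy features fail and learned representations are needed.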
Keynote Introduction—9:15
9:20
1aID2. A sound future for acoustic metamaterials. Steven Cummer (Duke Univ., PO Box 90291, Durham, NC 27708, cummer@
duke.edu)
The field of acoustic metamaterials borrowed ideas from electromagnetics and optics to create engineered structures that exhibit
desired fluid or fluid-like properties for the propagation of sound. These metamaterials offer the possibility of manipulating and controlling sound waves in ways that are challenging or impossible with conventional materials. Metamaterials with zero, or negative, refractive index for sound offer new possibilities for acoustic imaging and for the control of sound at subwavelength scales. The combination
of transformation acoustics theory and highly anisotropic acoustic metamaterials enables precise control over the deformation of sound
fields, which can be used, for example, to hide or cloak objects from incident acoustic energy. And active acoustic metamaterials use
external control and power to create effective material properties that are fundamentally not possible with passive structures. Challenges
remain, including the development of efficient techniques for fabricating large-scale metamaterial structures and, critically, converting
exciting laboratory experiments into practically useful devices. In this presentation, I will outline the recent history of the field, describe
some of the designs and properties of materials with unusual acoustic parameters, discuss examples of extreme manipulation of sound,
and finally, provide a personal perspective on future directions in the field.
10:15–10:40 Break
SUNDAY MORNING, 25 JUNE 2017
ROOM 207, 10:35 A.M. TO 12:20 P.M.
Session 1aAAa
Architectural Acoustics: Echolocation by People Who are Blind
Monika Rychtarikova, Cochair
Faculty of Architecture, KU Leuven, Hoogstraat 51, Gent 9000, Belgium
David P. Garcia, Cochair
Physics and Astronomy, KU Leuven, Heverlee, Belgium
Chair’s Introduction—10:35
Invited Papers
10:40
1aAAa1. Human echolocation in different situations and rooms. Bo N. Schenkman (Speech, Music and Hearing, Royal Inst. of
Technol. (KTH), Lindstedtsvägen 24, Stockholm SE-100 44, Sweden, bosch@kth.se)
People, especially when blind, use echolocation to detect obstacles, orient themselves, and gain awareness of their environment. My
coworkers and I have, with mostly psychophysical methods, studied perceptual aspects of how people accomplish echolocation. Echolocation with long canes while walking was possible but difficult. The spectral composition of the emitted sounds had no
effect. Sound recordings in anechoic and conference rooms from non-walking, static situations, later presented in a laboratory, showed
better performance in an ordinary room with reflections than in an anechoic room. We also found higher performance
with longer-duration sounds than with short clicks. Among the difficulties for the blind is how to avoid masking of sounds. A few blind
individuals are exceptionally high performing. An “information-surplus principle” has been proposed. Various information sources are used, but
repetition pitch seems more important than loudness for echolocation. Among other sources, timbre may also provide information. There
may exist a time gap, an “acoustic gaze,” in how blind people use clicks. It is likely also that there are at least two processes taking place in
the hearing system when listening for echoes, one attuned to short sounds and one to long sounds.
11:00
1aAAa2. Auditory recognition of surface texture with various scattering coefficients. Monika Rychtarikova (Faculty of
Architecture, KU Leuven, Hoogstraat 51, Gent 9000, Belgium, Monika.Rychtarikova@kuleuven.be), Lukas Zelem (Faculty of Civil
Eng., Dept. of Bldg. Construction, STU Bratislava, Bratislava, Slovakia), Leopold Kritly, David P. Garcia (Phys. and Astronomy, Lab.
of Acoust., KU Leuven, Heverlee, Belgium), Vojtech Chmelík (Faculty of Civil Eng., Dept. of Bldg. Construction, STU Bratislava,
Bratislava, Slovakia), and Christ Glorieux (Phys. and Astronomy, Lab. of Acoust., KU Leuven, Leuven, Belgium)
Human echolocation is the known ability of people to grasp information about the surrounding environment purely from acoustic
cues. However, the extent to which normally sighted and blind people can auditorily recognize surface texture, such as its roughness or other sound scattering features, is not completely known. In this paper, we investigate the ability of people to distinguish different types of surfaces by their sound reflections. Reflection patterns from 24 types of surface textures at two different
distances were calculated with a finite-difference method and convolved with a “click” sound in order to be used for perception tests. Twenty
normally sighted human subjects participated in the listening test experiment.
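The stimulus-preparation step described above (convolving a computed reflection pattern with a click) can be sketched generically as follows; the impulse response and click below are synthetic stand-ins, not the authors' finite-difference data, and the function names are illustrative.

```python
import numpy as np

def make_click(fs=44100, dur=0.002):
    """Synthetic 'click': a short Hann-windowed noise burst (placeholder)."""
    n = int(fs * dur)
    rng = np.random.default_rng(0)
    return rng.standard_normal(n) * np.hanning(n)

def auralize(reflection_ir, click):
    """Convolve a computed (or measured) reflection impulse response
    with a click emission to produce a listening-test stimulus."""
    stim = np.convolve(click, reflection_ir)
    return stim / np.max(np.abs(stim))  # normalize for playback

# Toy impulse response: direct click plus one delayed, attenuated reflection.
fs = 44100
ir = np.zeros(fs // 10)
ir[0] = 1.0      # direct path
ir[882] = 0.3    # reflection arriving 20 ms later (882 samples at 44.1 kHz)
stimulus = auralize(ir, make_click(fs))
```

In the study itself, `ir` would be replaced by the reflection pattern computed for each of the 24 surface textures, so that listeners hear the same click emission colored only by the surface's scattering behavior.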
11:20
1aAAa3. Audible sonar images generated with proprioception for target analysis. Roman B. Kuc (Elec. Eng., Yale, 15 Prospect St.,
511 Becton, New Haven, CT 06511, roman.kuc@yale.edu)
Some blind humans have demonstrated the ability to detect and classify objects with echolocation using palatal clicks. An audible-sonar robot mimics human click emissions, binaural hearing, and head movements to extract interaural time and level differences from
target echoes. Targets of various complexity are examined by transverse displacements of the sonar and by target pose rotations that
model movements performed by the blind. Controlled sonar movements executed by the robot provide data that model proprioception
information available to blind humans for examining targets from various aspects. The audible sonar uses this sonar location and orientation information to form two-dimensional target images that are similar to medical diagnostic ultrasound tomograms. Simple targets,
such as single round and square posts, produce distinguishable and recognizable images. More complex targets configured with several
simple objects generate diffraction effects and multiple reflections that produce image artifacts. The presentation illustrates the capabilities and limitations of target classification from audible sonar images.
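The interaural time and level difference extraction mentioned in this abstract can be illustrated with a generic cross-correlation estimator; this is a textbook sketch under simplified assumptions, not the robot's actual processing chain, and the toy binaural signals below are invented for the example.

```python
import numpy as np

def estimate_itd(left, right, fs):
    """Interaural time difference (s), estimated as the cross-correlation
    lag between the two ear signals; positive when the left ear leads."""
    corr = np.correlate(left, right, mode="full")
    lag = np.argmax(corr) - (len(right) - 1)
    return -lag / fs

def estimate_ild(left, right):
    """Interaural level difference in dB (left re right)."""
    return 10 * np.log10(np.sum(left**2) / np.sum(right**2))

# Toy echo: the same click reaches the right ear 10 samples later and at
# half the amplitude, as if the reflector sat to the listener's left.
fs = 44100
click = np.hanning(64)
left = np.concatenate([click, np.zeros(100)])
right = np.concatenate([np.zeros(10), 0.5 * click, np.zeros(90)])
itd = estimate_itd(left, right, fs)  # 10 samples / 44100 Hz, left leads
ild = estimate_ild(left, right)      # 10*log10(4) dB, left louder
```

Tracking how these two cues change as the sonar is displaced or the target is rotated is what lets the robot assemble echoes into the two-dimensional target images the talk describes.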
11:40
1aAAa4. Investigate echolocation with non-disabled individuals. Alessia Tonelli, Luca Brayda, and Monica Gori (Istituto Italiano di
Tecnologia, Via Melen, Genoa 16152, Italy, alessia.tonelli@iit.it)
Vision is the most important sense in the domain of spatial perception. Congenitally blind individuals, who cannot rely on vision,
show impairments in performing complex spatial auditory tasks. The echolocation technique allows blind people to compensate for this
auditory spatial deficit. Here, we present an overview of our work. First, we show that sighted people, too, can acquire spatial information
through echolocation, i.e., localize an aperture or discriminate the depth of an object located in front of them. Second, we identified
some kinematic variables that can predict echolocation performance. Third, we show that echolocation not only helps in understanding
the external space but can also influence internal models of the body-space relation, such as the peripersonal space (PPS). We discuss all
these aspects, showing that human beings are sensitive to echoes. Spatial information can be acquired by echolocation when vision is not
available, even by people who would normally acquire the same information through vision. We finally discuss our results in terms of rehabilitation techniques for visually impaired people.
12:00
1aAAa5. Restoring an allocentric reference frame in blind individuals through echolocation. Tiziana Vercillo (Psych., Univ. of
Nevada, Reno, 1664 N. Virginia St., Reno, NV 89503, tvercillo@unr.edu), Alessia Tonelli (U-VIP Unit for Visually Impaired People,
Fondazione Istituto Italiano di Tecnologia, Genoa, Italy), Melvyn Goodale (The Brain and Mind Inst., London, ON, Canada), and
Monica Gori (U-VIP Unit for Visually Impaired People, Fondazione Istituto Italiano di Tecnologia, Genoa, Italy)
Recent psychophysical studies have described task-specific auditory spatial deficits in congenitally blind individuals. We investigated auditory spatial perception in congenitally blind children and adults during different auditory spatial tasks that required the localization of brief auditory stimuli with respect to either external acoustic landmarks (allocentric reference frame) or their own body
(egocentric reference frame). Early blind participants successfully represented sound locations with respect to their body. However, they
showed relatively poor precision compared to sighted participants during the localization of sounds with respect to external auditory
landmarks, suggesting that vision is crucial for an allocentric representation of the auditory space. In a separate study, we tested three
congenitally blind individuals who used echolocation as a navigational strategy, to assess the benefit of echolocation on auditory spatial
perception. Blind echolocators did not show the same impairment in auditory spatial localization reported for blind non-echolocators,
but rather showed enhanced precision and accuracy compared to blind non-echolocators and sighted participants. Our results suggest
that echolocation can compensate for the spatial deficit reported in early blind individuals, likely by reactivating an allocentric reference
frame needed to shape spatial representations similar to those generated by vision.
SUNDAY MORNING, 25 JUNE 2017
ROOM 208, 10:35 A.M. TO 12:20 P.M.
Session 1aAAb
Architectural Acoustics: Sound Propagation Modeling and Spatial Audio for Virtual Reality I
Dinesh Manocha, Cochair
Computer Science, University of North Carolina at Chapel Hill, 250 Brooks Building, Columbia Street, Chapel Hill,
NC 27599-3175
Lauri Savioja, Cochair
Department of Media Technology, Aalto University, PO Box 15500, Aalto FI-00076, Finland
U. Peter Svensson, Cochair
Department of Electronic Systems, Norwegian University of Science and Technology, Acoustics Research Centre,
Trondheim NO-7491, Norway
Chair’s Introduction—10:35
Invited Papers
10:40
1aAAb1. Experience with a virtual reality auralization of Notre-Dame Cathedral. Brian F. Katz (Lutheries - Acoustique - Musique,
Inst. ∂'Alembert, UPMC/CNRS, boîte 162, 4, Pl. Jussieu, Paris 75252 Cedex 05, France, brian.katz@upmc.fr), Barteld N.
Postma (LIMSI, CNRS, Université Paris-Saclay, Orsay, France), David Poirier-Quinot (Espaces acoustiques et cognitifs, UMR STMS
IRCAM-CNRS-UPMC, Paris, France), and Julie Meyer (LIMSI, CNRS, Université Paris-Saclay, Paris, France)
As part of the 850-year anniversary of Notre-Dame cathedral, Paris, there was a special performance of “La Vierge.” A close-mic recording of the concert was made by the Conservatoire de Paris. In an attempt to provide a new type of experience, a virtual recreation of
the performance using these roughly 45 audio channels was made via auralization. A computational acoustic model was created and calibrated based on in-situ measurements for reverberation and clarity parameters. A perceptual study with omnidirectional source and binaural receiver validated the calibrated simulation for the tested subjective attributes of reverberation, clarity, source distance, tonal
balance, coloration, plausibility, ASW, and LEV when compared to measured responses. Instrument directivity was included for each
track’s representative orchestral section based on published data. Higher-Order Ambisonic (3rd order) RIRs were generated for all
source and receiver combinations using the CATT-Acoustic TUCT software. Virtual navigation throughout a visual 3D rendering of the
cathedral during the concert was made possible using an immersive rendering architecture with BlenderVR, MaxMSP, and Oculus Rift
HMD. We present major elements of this project: calibration, perceptual study, system architecture, lessons learned, and technological
limits encountered with regard to such an ambitious undertaking. [Previously presented in part at EuroRegio2016 & FISM2016.]
11:00
1aAAb2. Bidirectional sound transport. Chunxiao Cao, Zhong Ren (State Key Lab of CAD&CG, Zhejiang Univ., 423 Mengminwei
Bldg., Zijingang Campus, Zhejiang University, 866 Yuhangtang Rd., Hangzhou 310058, China, zhongren@acm.org), Carl Schissler,
Dinesh Manocha (Univ. of North Carolina at Chapel Hill, Chapel Hill, NC), and Kun Zhou (State Key Lab of CAD&CG, Zhejiang
Univ., Hangzhou, China)
We present a new sound propagation algorithm, Bidirectional Sound Transport (BST), based on bidirectional path tracing. Current state-of-the-art geometric acoustics methods handle diffuse reflection by backward path tracing and use diffuse rain to improve the validity of generated paths. We show that this can be viewed as a special case of bidirectional path tracing. By allowing connections to be established between any nodes of the subpaths, we are able to improve the sampling quality when sound sources are located near scene objects. This ensures more stable rendering quality and eases ray budget selection. We propose a new metric based on the signal-to-noise ratio (SNR) of the energy response to evaluate the performance of Monte Carlo path tracing methods for sound. Based on this metric, we develop an iterative algorithm that redistributes the samples among bounce numbers according to the statistical characteristics of the sampling in previous frames. We show that the sample redistribution algorithm converges and better balances early and late reverberation. We evaluate our approach on different benchmarks and demonstrate significant speedup over prior geometric acoustics algorithms. We also discuss clustering algorithms used to improve the scalability of bidirectional sound transport.
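The abstract does not give the authors' redistribution rule; purely as an illustrative sketch (function name and allocation rule are assumptions, not the paper's method), a variance-proportional allocation of a fixed ray budget across bounce orders could look like:

```python
import numpy as np

def redistribute_samples(bounce_var, total_rays):
    """Allocate a fixed ray budget across bounce orders in proportion to the
    standard deviation of each bounce's energy-response estimate, so that
    noisier bounce orders receive more rays on the next frame."""
    std = np.sqrt(np.asarray(bounce_var, dtype=float))
    if std.sum() == 0:
        return np.full(len(std), total_rays // len(std))
    alloc = np.floor(total_rays * std / std.sum()).astype(int)
    alloc[np.argmax(std)] += total_rays - alloc.sum()  # hand remainder to the noisiest bounce
    return alloc
```

For example, a bounce with four times the variance of the others receives twice the rays (allocation scales with standard deviation, not variance).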
11:20
1aAAb3. Efficient construction of the spatial room impulse response. Carl Schissler (Comput. Sci., Univ. of North Carolina at
Chapel Hill, Chapel Hill, NC), Peter Stirling (Oculus, Seattle, WA), and Ravish Mehra (Oculus, 8747 148th Ave. NE, Redmond, WA
98052, ravish.mehra@oculus.com)
An important component of the modeling of sound propagation for virtual reality (VR) is the spatialization of the room impulse
response (RIR) for directional listeners. This involves convolution of the listener’s head-related transfer function (HRTF) with the RIR
to generate a spatial room impulse response (SRIR) which can be used to auralize the sound entering the listener’s ear canals. Previous
approaches tend to evaluate the HRTF for each sound propagation path, though this is too slow for interactive VR latency requirements.
We present a new technique for computation of the SRIR that performs the convolution with the HRTF in the spherical harmonic (SH)
domain for RIR partitions of a fixed length. The main contribution is a novel perceptually driven metric that adaptively determines the
lowest SH order required for each partition to result in no perceptible error in the SRIR. By using lower SH order for some partitions,
our technique saves a significant amount of computation and is almost an order of magnitude faster than the previous approach. We compared the subjective impact of the new method to that of the previous one and observed a strong scene-dependent preference for our technique.
As a result, our method is the first that can compute high-quality spatial sound for the entire impulse response fast enough to meet the
audio latency requirements of interactive virtual reality applications.
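The perceptually driven metric itself is not specified in the abstract; as a simplified stand-in only (an energy-truncation criterion, with all names hypothetical), choosing the lowest SH order per RIR partition might be sketched as:

```python
import numpy as np

def min_sh_order(coeffs, rel_err=0.01):
    """Pick the lowest spherical-harmonic order whose truncation keeps the
    relative energy error of a partition at or below `rel_err`.
    `coeffs[l]` holds the (2l+1) coefficients of SH order l."""
    energy = np.array([np.sum(np.abs(np.asarray(c)) ** 2) for c in coeffs])
    total = energy.sum()
    kept = np.cumsum(energy)
    for order in range(len(coeffs)):
        if (total - kept[order]) / total <= rel_err:
            return order
    return len(coeffs) - 1
```

Late, diffuse partitions typically pass the test at low orders, which is where the claimed computational savings would come from.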
11:40
1aAAb4. Triton: Practical pre-computed sound propagation for games and virtual reality. Nikunj Raghuvanshi (Microsoft Res., 1
Microsoft Way, Redmond, WA 98052, nikunjr@gmail.com), John Tennant (The Coalition Studio, Microsoft Canada, Vancouver, BC,
Canada), and John Snyder (Microsoft Res., Redmond, WA)
Triton is a pre-computed wave-based acoustics system recently shipped in the game “Gears of War 4.” Games and VR present exciting new opportunities for virtual acoustics by providing the player with scene-dependent reverberation cues and conveying information
about visually occluded areas. Several technical challenges must be met. Scenes containing millions of polygons are common, with
mixed indoor-outdoor spaces like broken buildings, courtyards, caves, and rocks. A viable technique must handle this complex visual geometry without user intervention. The emphasis is on ensuring the resulting auralization is perceptually convincing, varying smoothly with source and listener motion in such scenes. Highly occluded cases with salient paths undergoing multiple edge diffractions and scattering are common. Computational requirements are quite stringent: a fraction of a single CPU core must be used for acoustic calculations
for many tens of moving sources. We discuss how these challenges shape Triton’s design. Pre-computation is used to minimize runtime
cost. Wave simulation provides complete automation for complex scene geometry. The produced fields contain billions of responses that
take terabytes of memory. A key contribution is compact encoding of this data in less than a hundred megabytes: objective room acoustic
parameters are approached from a novel perspective to aid in spatial compression. The resulting parametric framework is fast and practical for current games and VR applications. Video demonstrations will be shown.
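Triton's actual encoded parameters are not described in the abstract; purely to illustrate the idea of compressing a simulated response down to a few perceptual scalars (all names and parameter choices here are hypothetical, not Microsoft's encoding), one might write:

```python
import numpy as np

def encode_response(energy, fs, onset_db=-60.0):
    """Reduce one simulated energy response to a handful of scalars:
    onset delay (first sample above a threshold re peak), total loudness,
    and a late decay slope from a straight-line fit over the tail."""
    e_db = 10 * np.log10(energy / energy.max() + 1e-30)
    onset = int(np.argmax(e_db > onset_db))
    loudness = 10 * np.log10(energy.sum())
    tail = np.arange(len(energy) // 2, len(energy))
    slope, _ = np.polyfit(tail / fs, e_db[tail], 1)  # dB per second
    return onset / fs, loudness, slope
```

Storing a few such scalars per source-listener probe, instead of full responses, is the kind of reduction that makes a sub-hundred-megabyte budget conceivable.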
12:00
1aAAb5. Graphical processing units (GPU)-accelerated acoustic simulation for interactive experiences. Tony Scudiero (NVIDIA,
4363 Hamilton Dr., Eagan, MN 55123, tscudiero@gmail.com)
The importance of acoustic effects for the quality of immersion in virtual reality experiences has received considerable attention recently due to a resurgence of interest in virtual reality. This work discusses the advantages and challenges of using graphics processing units (GPUs) in real-time ray-based acoustic simulations for interactive applications, especially virtual reality. Existing ray-tracing libraries such as NVIDIA's OptiX can be used to create interactive-time simulations which can be applied to
audio for virtual reality experiences and games. This work additionally discusses some of the challenges present in creating an accessible
library which aims to allow non-experts to easily make use of acoustic simulations to enhance auditory immersion in new virtual reality
experiences and games.
SUNDAY MORNING, 25 JUNE 2017
ROOM 206, 10:40 A.M. TO 12:20 P.M.
Session 1aAAc
Architectural Acoustics: Teaching and Learning in Healthy and Comfortable Classrooms I
Arianna Astolfi, Cochair
Politecnico di Torino, Corso Duca degli Abruzzi, 24, Turin 10124, Italy
Viveka Lyberg-Åhlander, Cochair
Clinical Sciences, Lund, Logopedics, Phoniatrics and Audiology, Lund University, Scania University Hospital,
Lund S-221 85, Sweden
David S. Woolworth, Cochair
Oxford Acoustics, 356 CR 102, Oxford, MS 38655
Invited Papers
10:40
1aAAc1. Active learning in modern schools. Markku Lang (Faculty of Education, Univ. of Oulu, Kaitoväylä 7, Oulu 90570, Finland, markku.lang@oulu.fi)
We know that the world is changing and that in the near future the workforce will be more independent, contingent, and temporary. Future learning environments and learning activities are therefore essential to prepare all students for the challenges of work and life. Finland started this school year with a new core curriculum for basic education, which focuses on developing future skills (the key competences, as described in the curriculum). Training and developing future skills in schools requires new methods, learning landscapes, and activities. The Future Classroom Network (European Schoolnet) uses activities as a guideline for creating Learning Zones. These activities and Learning Zones support key future skills: Critical Thinking (Investigate), Creativity (Create), Collaboration (Exchange), Learning to Learn (Develop), Digital Competences (Interact), and Communication (Present). But what do these future learning activities look and sound like? What are teachers and students doing when they are training future skills? And how should school design reflect this?
11:00
1aAAc2. The effect of different acoustical treatments in a classroom. Erling Nilsson (Saint-Gobain Ecophon, Box 500, Hyllinge SE-260 61, Sweden, erling.nilsson@ecophon.se)
A common room acoustic treatment in classrooms and other public spaces is a suspended sound-absorbing ceiling. However, the acoustical conditions in the classroom do not depend only on the suspended ceiling. The size and shape of the room, as well as the properties of the building materials and the interior fittings and furniture, also affect the room acoustical conditions. A further complication is that rooms with absorbent ceiling treatment do not fulfill the conditions of classic diffuse field theory, owing to the non-uniform distribution of the absorbing material; this makes the calculation of room acoustic parameters more complex. This paper addresses the effect of different factors that are important for the acoustical conditions in a classroom. Starting from the unfurnished classroom without a suspended ceiling, the effects of introducing a suspended ceiling, adding furniture, adding wall panels, and adding extra low-frequency absorption will be exemplified based on measurements in a full-scale classroom. The room acoustic parameters analyzed are the reverberation time T20, the Speech Clarity C50, and the Sound Strength G. A calculation model adapted for the non-diffuse conditions in rooms with ceiling treatment will be briefly mentioned.
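For reference, the T20 parameter analyzed here can be computed from a measured impulse response via backward (Schroeder) integration per ISO 3382-2; a minimal broadband sketch (octave-band filtering omitted):

```python
import numpy as np

def schroeder_db(ir):
    """Backward-integrated (Schroeder) energy decay curve in dB re its start."""
    e = np.cumsum(ir[::-1] ** 2)[::-1]
    return 10 * np.log10(e / e[0])

def t20(ir, fs):
    """Reverberation time T20: fit the decay curve between -5 and -25 dB
    and extrapolate the slope to a 60 dB decay (ISO 3382-2)."""
    edc = schroeder_db(ir)
    idx = np.where((edc <= -5) & (edc >= -25))[0]
    slope, _ = np.polyfit(idx / fs, edc[idx], 1)  # dB per second
    return -60.0 / slope
```

An ideal exponential decay with a 0.5 s reverberation time recovers T20 = 0.5 s to within discretization error.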
11:20
1aAAc3. Optimal classroom acoustic design with sound absorption and diffusion for the enhancement of speech intelligibility.
Giuseppina E. Puglisi, Filippo Bolognesi, Louena Shtrepi (Dept. of Energy, Politecnico di Torino, Torino, Italy), Anna Warzybok,
Birger Kollmeier (Medizinische Physik and Cluster of Excellence Hearing4All, Carl von Ossietzky Universität Oldenburg, Oldenburg,
Germany), and Arianna Astolfi (Dept. of Energy, Politecnico di Torino, Corso Duca degli Abruzzi, 24, Turin 10124, Italy, arianna.
astolfi@polito.it)
Classroom design should be focused on the enhancement of the acoustic comfort for students and teachers. Long reverberation times
and excessive noise levels can raise vocal effort and negatively affect speech intelligibility. Recent studies and standards updates have
investigated whether acoustic treatment should include both absorbent and diffusive surfaces to account for the teaching and learning
premises at the same time; however, studies under realistic conditions for improving the acoustics of existing classrooms are still needed. In this work, an existing Italian classroom with poor acoustics was considered. Several treatment solutions were simulated using CATT-Acoustic®, including adjustment of the absorption and scattering coefficients of differently configured surfaces, to reach an optimal reverberation time and to increase the Speech Transmission Index and Definition, especially for the positions in the farthest row. The effectiveness of the acoustic treatment was also evaluated in terms of enhancement of speech intelligibility using the Binaural
Speech Intelligibility Model (Rennies et al., 2013). Its outcomes are given as speech reception thresholds to yield a fixed level of speech
intelligibility. Model predictions indicated an improvement in speech reception thresholds of up to 6.8 dB after the acoustic intervention.
11:40
1aAAc4. Good acoustics for teaching and learning. Jonas Christensson (Saint-Gobain Ecophon, St. Gobain Ecophon AB, Box 500,
Hyllinge 26503, Sweden, jonas.christensson@ecophon.se)
It is important that classrooms provide good speech intelligibility and speaking comfort. Being able to listen without effort is important for learning, and we know that poor room acoustics is a burden that impedes learning and affects teachers’ voice health. A good model for a classroom is the Swedish forest, where we can communicate over long distances without having to raise our voices. I have made several listening tests in forests and also measured the sound reflections in different forests. The results are interesting, and I argue that “forest acoustics” should be the goal for the acoustic conditions in our schools. Many national sound standards put requirements on room acoustics in classrooms. One requirement is reverberation time, according to ISO 3382-2, and it is often evaluated with T20. Unfortunately, this is a very blunt measure, because the T20 evaluation starts only after the sound pressure level has dropped by 5 dB. This “waiting time” is often quite long, which is a problem because we miss a lot of important information from the early part of the decay curve. I therefore argue that we have to add C50, according to ISO 3382-1, to verify whether the room acoustics is good enough for teaching.
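The C50 measure advocated here is defined in ISO 3382-1 as the ratio of early (0-50 ms) to late (beyond 50 ms) energy in the impulse response; a minimal broadband sketch (band filtering omitted):

```python
import numpy as np

def c50(ir, fs):
    """Speech clarity C50 (ISO 3382-1): early-to-late energy ratio in dB,
    with the split at 50 ms after the start of the impulse response."""
    n50 = int(round(0.05 * fs))
    early = np.sum(ir[:n50] ** 2)
    late = np.sum(ir[n50:] ** 2)
    return 10 * np.log10(early / late)
```

Unlike T20, this uses the first 50 ms directly, which is why it captures the early decay information the author argues T20 misses.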
12:00
1aAAc5. Classroom acoustics and children’s speech perception. Lori Leibold (Ctr. for Hearing Res., Boys Town National Res.
Hospital, 555 North 30th St., Omaha, NE 68124, lori.leibold@boystown.org), Ryan W. McCreery (Audiol., Boys Town National Res.
Hospital, Omaha, NE), and Emily Buss (Otolaryngology/Head and Neck Surgery, Univ. of North Carolina, Chapel Hill, NC)
Children must learn in classrooms that contain multiple sources of competing sounds. While there are national standards aimed at
creating classroom environments that optimize speech intelligibility (e.g., ANSI/ASA 2010), these standards are voluntary and many unoccupied classrooms fail to meet the acceptable levels specified. Moreover, little attention has been given to measuring and understanding the effects of competing speech on children’s performance in the classroom. Data will be presented that describe typical noise levels in the classroom. Results from experiments investigating the consequences of competing noise and speech on speech perception at
different time points during childhood will be presented. Findings from experiments investigating potential benefits associated with
manipulating acoustic cues thought to aid in separating target from background speech will also be discussed.
SUNDAY MORNING, 25 JUNE 2017
ROOM 310, 10:35 A.M. TO 11:40 A.M.
Session 1aAO
Acoustical Oceanography: Acoustical Oceanography Prize Lecture
John A. Colosi, Chair
Department of Oceanography, Naval Postgraduate School, 833 Dyer Road, Monterey, CA 93943
Chair’s Introduction—10:35
Invited Paper
10:40
1aAO1. Exploring ocean ecosystems and dynamics through sound. Jennifer L. Miksis-Olds (School of Marine Sci. & Ocean Eng.,
Univ. of New Hampshire, 24 Colovos Rd., Durham, NH 03824, j.miksisolds@unh.edu)
Acoustic signals propagate long distances in the ocean and provide a means for marine life and humans to gain information about
the environment and for marine animals to exchange critical information. Innovation in underwater acoustic technology now permits the
remote monitoring of marine life and the environment without the need to rely on human observers, the physical presence of an observation vessel, or adequate visibility and sampling conditions. Passive recordings of the underwater soundscape provide information to better understand the influence of environmental parameters on local acoustic processes, to assess habitat quality and health, and to better
understand the risks of ocean noise on marine life. Active acoustic technology provides a high-resolution measure of biological and
physical oceanographic processes through time series of backscatter measurements. The ability to obtain passive and active acoustic
measurements contemporaneously, along with ancillary data to validate and enhance interpretations, is a powerful tool facilitating
insight into ocean and ecosystem dynamics. Knowledge gained and questions raised from the integration of acoustic and oceanographic
data in rapidly changing environments will be shared, along with a preview of the Atlantic Deepwater Ecosystem Observatory Network
(ADEON) program being launched off the South Atlantic Outer Continental Shelf.
SUNDAY MORNING, 25 JUNE 2017
BALLROOM B, 10:40 A.M. TO 12:00 P.M.
Session 1aBAa
Biomedical Acoustics: Beamforming and Image Guided Therapy I: Algorithms
Costas Arvanitis, Cochair
Mechanical Engineering and Biomedical Engineering, Georgia Institute of Technology, 901 Atlantic Dr. NW, Room 4100Q,
Atlanta, GA 30318
Constantin Coussios, Cochair
Institute of Biomedical Engineering, Department of Engineering Science, University of Oxford, Old Road Campus Research
Building, Oxford OX3 7DQ, United Kingdom
Invited Paper
10:40
1aBAa1. Frequency-domain passive cavitation imaging. Kevin J. Haworth (Univ. of Cincinnati, 231 Albert Sabin Way, CVC3940,
Cincinnati, OH 45209, kevin.haworth@uc.edu), Kenneth B. Bader (Radiology, Univ. of Chicago, Chicago, IL), Kyle T. Rich, Christy
K. Holland, and T. Douglas Mast (Univ. of Cincinnati, Cincinnati, OH)
Apfel’s three golden rules (know thy sound field, know thy liquid, and know when something happens) should be considered when
monitoring acoustic cavitation-based ultrasound therapies. The third rule is often followed using passive cavitation detection with a single-element transducer. However, therapy guidance demands monitoring cavitation activity in the entire tissue volume of interest. Using
array-based passive cavitation detection with appropriate beamforming, maps of cavitation activity can be superimposed on pulse-echo,
grayscale images of tissue anatomy. In this talk, we will discuss one approach for generating cavitation activity maps, frequency-domain
passive cavitation imaging (FD-PCI). FD-PCI implements a delay, sum, and integrate algorithm, which will be described conceptually
and mathematically. The advantages and limitations of the algorithm will be discussed in the context of examples. Advantages of FD-PCI include the innate frequency selectivity of the algorithm, the ability to use parallel computing for increased processing speed, the independence of the image resolution from the therapy insonation pulse shape, and the ability to quantify the acoustic power of emissions
detected by the array. Challenges of the algorithm will also be discussed, including poor axial resolution and limitations of estimating
the emitted acoustic power.
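As a rough, simplified sketch of the delay-sum-and-integrate idea (free field, uniform sound speed; function and parameter names are assumptions, not the authors' implementation): each element's spectrum is phase-aligned to a candidate pixel, summed across the array, and the resulting power integrated over the analysis band:

```python
import numpy as np

def fd_pcm(signals, fs, elem_pos, pixels, c=1540.0, band=(1e6, 5e6)):
    """Frequency-domain delay-sum-integrate map: for each candidate pixel,
    undo each element's propagation phase, sum coherently across the
    array, and integrate |sum|^2 over the analysis band."""
    S = np.fft.rfft(signals, axis=1)                # (n_elem, n_freq) spectra
    f = np.fft.rfftfreq(signals.shape[1], 1 / fs)
    in_band = (f >= band[0]) & (f <= band[1])
    img = np.empty(len(pixels))
    for i, p in enumerate(pixels):
        d = np.linalg.norm(elem_pos - p, axis=1)    # element-to-pixel distances
        steer = np.exp(2j * np.pi * f[None, :] * d[:, None] / c)
        img[i] = np.sum(np.abs(np.sum(S * steer, axis=0))[in_band] ** 2)
    return img
```

Restricting `in_band` to, say, ultraharmonics is the frequency selectivity the abstract highlights: only energy in the chosen bins contributes to the map.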
Contributed Papers
11:00
1aBAa2. Optimal beamforming using higher order statistics for passive
acoustic mapping. Erasmia Lyka, Christian Coviello, and Constantin
Coussios (Dept. of Eng. Sci., Inst. of Biomedical Eng., Univ. of Oxford, Old
Rd. Campus Res. Bldg., Headington, Oxford OX3 7DQ, United Kingdom,
erasmia.lyka@eng.ox.ac.uk)
Passive Acoustic Mapping (PAM) of sources of nonlinear acoustic emissions has been extensively investigated for monitoring ultrasound therapies.
Optimal data-adaptive beamforming algorithms, such as the Robust Capon Beamformer (RCB), have been proposed as a means of improving source localization, accounting simultaneously for array configuration and calibration errors. RCB, however, assumes that signal samples follow a Gaussian
distribution. Aiming at improving the spatial resolution of PAM, especially
in the axial direction with respect to the array, we propose an alternative
beamforming approach, Robust Beamforming by Linear Programming
(RLPB). This method makes no assumptions on the statistical distribution of
the received signals, and exploits not only the variance but also higher-order statistics (HOS) of the received signals. Performance evaluation on
simulated and in vitro experimental data suggests improvement in spatial resolution on the order of 20% and 15% in the axial and transverse directions
respectively. This facilitates real-time mapping of disjoint cavitating regions
over biologically relevant lengthscales on the order of 2 mm in the axial
direction. It is expected that the proposed beamforming approach will
provide the necessary improvement in PAM spatial resolution required in
several clinically relevant situations, where a single array is used and the ratio of depth to aperture becomes large.
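For context, the conventional Capon (minimum-variance) estimate that RCB robustifies can be written in a few lines; this sketch uses simple diagonal loading for robustness and is not the RLPB method the abstract proposes:

```python
import numpy as np

def capon_power(R, a, loading=1e-3):
    """Minimum-variance (Capon) power estimate 1 / (a^H R^-1 a) for steering
    vector `a` given array covariance `R`, with diagonal loading scaled to
    the average sensor power as a basic robustness measure."""
    n = R.shape[0]
    Ri = np.linalg.inv(R + loading * np.trace(R) / n * np.eye(n))
    return float(1.0 / np.real(a.conj() @ Ri @ a))
```

RCB replaces the fixed loading with one derived from an uncertainty set on `a`; RLPB, per the abstract, instead drops the Gaussian assumption and brings in higher-order statistics.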
11:20
1aBAa3. Attenuation estimation using passive acoustic mapping.
Michael Gray and Constantin Coussios (Inst. of Biomedical Eng., Univ. of
Oxford, Oxford OX37DQ, United Kingdom, michael.gray@eng.ox.ac.uk)
Passive acoustic mapping (PAM) techniques have been developed in
order to reduce risk and improve treatment efficacy by localizing and quantifying cavitation emissions during therapeutic ultrasound procedures. The
performance of these techniques may be significantly degraded by attenuation between the internal therapeutic target and the external monitoring system. Attenuation itself is an essential parameter in the determination of
therapeutic outcomes and safety of treatments such as HIFU ablation or volumetric hyperthermia. However, the spatial and temporal distributions of
this parameter are not typically known in clinical scenarios. To address
these challenges, we present a method for estimating attenuation using
broadband cavitation emissions, potentially allowing for restoration of PAM
performance, improved treatment monitoring and guidance, and mapping of
tissue attenuation over the course of a treatment. Results from simulations
and flow phantom experiments illustrate: (1) the impact of soft tissue-like
attenuation on PAM images, (2) the ability to estimate attenuation from cavitation data, and (3) the enhancement of cavitation source imaging and energy estimation following PAM input data attenuation compensation. In the future, the technique could be expanded as a general broadband method of attenuation correction for conventional diagnostic ultrasound images and improved therapeutic ultrasound treatment planning.
11:40
1aBAa4. Passive acoustic mapping in aberrating media with the
angular spectrum approach. Scott J. Schoen (Mech. Eng., Georgia Inst. of
Technol., 10,000 Burnet Rd., Austin, TX 78758, scottschoenjr@gatech.edu)
and Costas Arvanitis (Mech. Eng., Georgia Inst. of Technol., Boston, MA)
The ability to localize and characterize ultrasound-induced microbubble
oscillations through the intact skull with high spatial and temporal resolution holds significant promise for the diagnosis and treatment of brain diseases and disorders. In this study, we investigated the ability of the angular spectrum (AS) method, a fast planar projection method, to perform passive
acoustic mapping of microbubbles through an intact skull. Finite-difference
time-domain numerical simulations were used to model microbubble emissions’ propagation through homogeneous, stratified, and 2D inhomogeneous
(skull) environments approximately 80 mm by 160 mm. Reconstructions
with the AS approach were performed with constant and effective sound
speeds, as well as with multi-step propagation, to evaluate their ability to
correct for induced aberrations and localize the microbubbles. We also
investigated the impact of the receiver position on the localization accuracy.
Results for skull simulations indicated that the multi-step AS method
reduced the error in axial localization of the microbubbles by on the order
of 50% compared with the effective sound speed method, while incurring
approximately a 25% increase in computation time for each doubling of the
number of propagation steps. Both AS methods were several orders of magnitude faster than time-domain reconstruction. Further investigation of the
potential of this approach to correct skull aberrations is warranted.
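The basic AS projection step (single homogeneous medium, 1-D aperture; a sketch only, not the authors' multi-step skull implementation) transforms the recorded field to the wavenumber domain, applies the plane-wave propagator, and transforms back:

```python
import numpy as np

def angular_spectrum_step(p, dx, dz, f, c):
    """Propagate a 1-D monochromatic pressure field sampled at spacing `dx`
    by a distance `dz` using the angular spectrum method. Evanescent
    components (|kx| > k) decay automatically via the complex sqrt."""
    k = 2 * np.pi * f / c
    kx = 2 * np.pi * np.fft.fftfreq(len(p), dx)
    kz = np.sqrt((k ** 2 - kx ** 2).astype(complex))
    return np.fft.ifft(np.fft.fft(p) * np.exp(1j * kz * dz))
```

The multi-step variant in the abstract would apply this repeatedly with a layer-dependent `c`, which is where the trade-off between aberration correction and the reported ~25% cost per doubling of steps arises.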
SUNDAY MORNING, 25 JUNE 2017
ROOM 312, 10:40 A.M. TO 12:20 P.M.
Session 1aBAb
Biomedical Acoustics: Imaging I
Parag V. Chitnis, Chair
Department of Bioengineering, George Mason University, 4400 University Drive, 1G5, Fairfax, VA 22032
Contributed Papers
10:40
1aBAb1. Ultrasound enhanced delivery of cisplatin loaded
nanoparticles. Richard J. Browning, Shuning Bian (Dept. of Eng. Sci.,
Univ. of Oxford, BUBBL, IBME, ORCRB, Oxford OX3 7DQ, United
Kingdom, richard.browning@eng.ox.ac.uk), Philip J. Reardon (Div. of
BioMater. and Tissue Eng., UCL Eastman Dental Inst., Univ. College
London, London, United Kingdom), Maryam Parhizkar (Mech. Eng., Univ.
College London, London, United Kingdom), Anthony H. Harker (Dept. of
Phys. & Astronomy, Univ. College London, London, United Kingdom),
Vessela Vassileva (Dept. of Oncology, Univ. College London, London,
United Kingdom), Dan Daly (Lein Appl. Diagnostics, Reading, United
Kingdom), Barbara R. Pedley (Dept. of Oncology, Univ. College London,
London, United Kingdom), Mohan Edirisinghe (Mech. Eng., Univ. College
London, London, United Kingdom), Jonathan C. Knowles (Div. of
BioMater. and Tissue Eng., UCL Eastman Dental Inst., Univ. College
London, London, United Kingdom), and Eleanor P. Stride (Dept. of Eng.
Sci., Univ. of Oxford, Oxford, United Kingdom)
Cisplatin forms the basis of many chemotherapy regimens; however, the maximum permissible dose is limited by its systemic toxicity. Nanoencapsulation of drugs has been shown to reduce off-target side effects and can
potentially reduce the treatment burden on patients. However, uptake of nanoformulations at tumor sites is minimal without some form of active delivery.
We have developed a submicron, polymeric nanoparticle based on biocompatible and degradable poly(lactic-co-glycolic acid) (PLGA) capable of
encapsulating cisplatin and which can be bound to the surface of a phospholipid coated microbubble. The acoustic behavior and stability of the resulting nanoparticle loaded microbubbles will be compared with those of
unloaded microbubbles. Results will also be presented on the extravasation
of particles in a tissue mimicking phantom using a novel long working distance confocal microscope that enables particle distributions to be measured
in situ and in real time.
11:00
1aBAb2. Optimizing gold nanorod volume for minimum cell toxicity
and maximum photoacoustic response. Oscar B. Knights, David Cowell
(School of Electron. & Elec. Eng., Univ. of Leeds, Leeds LS2 9JT, United
Kingdom, elok@leeds.ac.uk), James R. McLaughlan (Div. of Biomedical
Imaging, Univ. of Leeds, Leeds, United Kingdom), and Steven Freear
(School of Electron. & Elec. Eng., Univ. of Leeds, Leeds, United
Kingdom)
Plasmonic nanoparticles show great potential for molecular-targeted
photoacoustic (PA) imaging. To maximize light absorption, the gold nanorods (AuNRs) are illuminated at their surface plasmon resonance (SPR),
which for biomedical application is typically in the “optical window” of
700-900 nm. For AuNRs, one of the main factors that determines the SPR is
their aspect ratio. Since particles with similar aspect ratios can differ in size, the choice of particle could have a critical effect on a number of factors, such as photoacoustic emissions, cell toxicity, and
therapeutic efficacy. For example, a particular size of AuNR may produce a higher PA response for an equivalent laser fluence but be more toxic to cell populations. In this study, the PA responses of AuNRs with four different volumes but similar aspect ratios (~4) are compared. A linear relationship between incident laser fluence and PA amplitude is shown, and the results indicate that AuNRs with larger volumes produce stronger PA emissions. In vitro cell studies were performed on a lung cancer cell line to assess the cell toxicity of the different sized AuNRs via a colorimetric assay.
11:20
1aBAb3. Ultrasound-mediated blood-brain barrier disruption:
Correlation with acoustic emissions. Miles M. Aron, Lester Barnsley,
Shamit Shrivastava (Dept. of Eng. Sci., Univ. of Oxford, Old Rd. Campus
Res. Bldg., IBME, Roosevelt Dr., Oxford, Oxfordshire OX3 7DQ, United
Kingdom, Miles.aron@hertford.ox.ac.uk), Marinke Van der Helm, Loes
Segerink (Faculty of Elec. Eng., Mathematics and Comput. Sci., Univ. of
Twente, Enschede, Netherlands), and Eleanor P. Stride (Dept. of Eng. Sci.,
Univ. of Oxford, Oxford, United Kingdom)
Blood-brain barrier (BBB) disruption mediated by ultrasound and
microbubbles (US-BBBD) is a promising strategy for non-invasive and targeted delivery of therapeutics to the brain. In US-BBBD, treatment control
is achieved by externally monitoring acoustic emissions (AE) and adjusting
ultrasound parameters in real-time to avoid AE associated with damage.
Recent work suggests that AE may also provide insight regarding the extent
of BBB opening and BBB recovery time. The mechanisms underlying BBB
opening and recovery, however, are largely not understood. To investigate
US-BBBD mechanisms with regard to AE, we developed an in vitro platform for monitoring both BBB integrity and AE during US-BBBD. Temporally resolved BBB integrity monitoring was achieved using a microfluidic
BBB-on-a-chip device with integrated trans-endothelial electrical resistance
(TEER) measurements. Well-characterized ultrasound exposure and AE
monitoring were achieved using a focally aligned high-intensity focused
ultrasound transducer and passive cavitation detector. In addition to recording TEER and AE data, our platform is compatible with fluorescence microscopy during ultrasound exposure, providing further insight into US-BBBD mechanisms. This work further demonstrates potential for in vitro
screening of cavitation agents and/or therapeutics for novel US-BBBD
applications and strategies.
11:40
1aBAb4. Nonlinear propagation of two dimensional sound waves
observed at lipid interfaces. Shamit Shrivastava (Univ. of Oxford, Old Rd.
Campus Res. Bldg., Oxford OX3 7DQ, United Kingdom, shamit.
shrivastava@eng.ox.ac.uk) and Matthias F. Schneider (Medizinische und
biologische Physik, Technische Universität, Dortmund, Germany)
Experimental results are presented on the acoustic propagation of mechanical perturbations in a lipid monolayer along the air-water interface.
The interface was excited by a piezo-cantilever, and propagating impulses
were measured optically using Förster Resonance Energy Transfer (FRET).
The velocity of propagation varied from 0.1 to 1 m/s depending on the compressibility of the interface. Near a nonlinearity in the state diagram of the
interface, for example, near a phase transition of the lipids, impulses propagated only when the initial mechanical impulse was greater than a certain threshold. In fact, the impulse then propagated as a solitary shock wave causing local adiabatic phase transition of the lipid molecules along the way. Although the phenomenon has been observed here in pure lipid monolayers, which represent a well-documented model for biological membranes, the origin of the observed phenomenon lies in the conservation of the entropy of the interface, determined by the change in state of the interface during the impulse. Given that the state diagrams of biological membranes have nonlinearities near physiological conditions, nonlinear sound waves are expected to be fundamentally involved in inter- and intracellular communication. Indeed, the observed acoustic phenomenon is characteristically similar to nerve impulses.
J. Acoust. Soc. Am., Vol. 141, No. 5, Pt. 2, May 2017
12:00
1aBAb5. Focused ultrasound for augmenting convection-enhanced
delivery of nanoparticles in the brain. Ali Mohammadabadi (Diagnostic
Radiology and Nuclear Medicine, Univ. of Maryland School of Medicine,
110 S. Paca St., Rm. 104, Baltimore, MD 21201, ali.mohammadabadi@
umm.edu), David S. Hersh (Neurosurgery, Univ. of Maryland School of
Medicine, Baltimore, MD), Pavlos Anastasiadis (Diagnostic Radiology and
Nuclear Medicine, Univ. of Maryland School of Medicine, Baltimore, MD),
Philip Smith, Graeme F. Woodworth, Anthony J. Kim (Neurosurgery, Univ.
of Maryland School of Medicine, Baltimore, MD), and Victor Frenkel
(Diagnostic Radiology and Nuclear Medicine, Univ. of Maryland School of
Medicine, Baltimore, MD)
We previously demonstrated how ultrasound can enhance the dispersion
of locally administered nanoparticles within the extracellular/perivascular
spaces in the ex vivo brain by non-destructively enlarging these regions. The
current study aimed to translate these results in vivo, where custom, non-adhering brain-penetrating nanoparticles (BPN: 60, 200, and 500 nm) were
administered directly into the brains of Sprague-Dawley rats by convection-enhanced delivery. Non-invasive, transcranial focused ultrasound (TCFUS)
was carried out using an MRI-guided system (1.5 MHz, 10 ms pulses, 10%
duty cycle, and 2.3 MPa). Fifteen individual exposures in a 3 × 5 matrix (spacing: 1.5 mm) in one hemisphere were given, where the size of the focal zone
(-6 dB) was 1 × 1 × 8 mm. At 2 hrs post-treatment, brains were harvested
and sectioned, with digital images captured and processed using a custom
MATLAB script. This involved the “Otsu” thresholding method, based on
gray level histograms and threshold determinations for maximizing the
interclass variance. As expected, BPN distributions in the non-treated brains
decreased with an increase in diameter. Pretreating with TCFUS was found
to significantly increase the distribution of the 200 nm BPNs. These results
have broad implications for therapeutic delivery for a variety of brain diseases and disorders.
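The "Otsu" thresholding named above is a standard histogram-based segmentation step (maximize the between-class variance over all candidate gray levels). As a minimal illustrative sketch only, in Python rather than the authors' unpublished MATLAB script, with a hypothetical `otsu_threshold` helper:

```python
import numpy as np

def otsu_threshold(image, levels=256):
    """Return the gray level maximizing Otsu's between-class variance."""
    hist, _ = np.histogram(image, bins=levels, range=(0, levels))
    p = hist / hist.sum()                    # gray-level probabilities
    omega = np.cumsum(p)                     # class-0 probability up to level k
    mu = np.cumsum(p * np.arange(levels))    # cumulative mean up to level k
    mu_t = mu[-1]                            # global mean
    # Between-class variance for every candidate threshold k.
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b2 = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
    sigma_b2 = np.nan_to_num(sigma_b2)       # edges (omega = 0 or 1) contribute 0
    return int(np.argmax(sigma_b2))
```

Pixels above the returned level are classified as foreground (here, regions reached by the nanoparticles); the method needs no tuning beyond the histogram range.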
SUNDAY MORNING, 25 JUNE 2017
ROOM 202, 10:35 A.M. TO 12:20 P.M.
Session 1aNS
Noise, Physical Acoustics, ASA Committee on Standards, and Structural Acoustics and Vibration: Sonic
Boom Noise I: Low Boom Technology, Propagation, Etc.
Philippe Blanc-Benon, Cochair
Centre acoustique, LMFA UMR CNRS 5509, École Centrale de Lyon, 36 avenue Guy de Collongue, 69134 Écully Cedex, France
Victor Sparrow, Cochair
Grad. Program in Acoustics, Penn State, 201 Applied Science Bldg., University Park, PA 16802
Chair’s Introduction—10:35
Invited Papers
10:40
1aNS1. Status and plans for NASA’s Quiet SuperSonic Technology (QueSST) aircraft design. Peter Coen and David Richwine
(NASA, NASA Langley Res. Ctr., MS 264, Hampton, VA 23681, peter.g.coen@nasa.gov)
Innovation in Commercial Supersonic Technology is one of six thrusts that guide NASA's Aeronautics Research Strategy. The near-term objective of this activity is the establishment of a standard for acceptable overland supersonic flight, in cooperation with international standards organizations. In support of this objective, NASA supersonics research has had two focus areas in recent years. The first is the design of aircraft that can fly at supersonic speeds without creating a loud sonic boom, and the second is understanding the community response to the relatively quiet sound of the overflight of such aircraft. Based on the recent successes in this research, NASA has determined that the next step in both of these areas, and in continued progress toward the near-term objective, is a flight demonstration.
NASA, in cooperation with industry partners, has initiated the preliminary design of a Low Boom Flight Demonstration Aircraft, named
QueSST for Quiet Supersonic Technology. This paper will describe the development of the design requirements for QueSST, and provide an overview of the design progress to date and future plans for the Flight Demonstration Project.
11:00
1aNS2. Development of high fidelity tools and robust design approaches for low boom aircraft. Lori Ozoroski (ASAB, NASA, 1
North Dryden St., Hampton, VA 23681, lori.p.ozoroski@nasa.gov) and Linda Bangert (CAB, NASA, Hampton, VA)
The NASA Commercial Supersonic Technology Project recently completed a project Technical Challenge, "Low Sonic Boom
Design Tools," which ran from 2011 to 2015. As part of this research effort, tools were developed, refined, and validated to support full-vehicle low-boom analysis, including inlet and nozzle effects. In addition, new and updated tools and processes were developed and
demonstrated for application to robust low boom shape optimization. The work included fundamental research efforts, computational analysis of full vehicle configurations, and application of robust low boom design methods to low boom aircraft concepts. This
presentation will primarily focus on the technical achievements during the last 2 years leading to the recent successful completion of this
technical challenge.
11:20
1aNS3. Advances in numerical simulation of sonic boom in realistic atmospheres. François Coulouvrat, David Luquet, Régis
Marchiano (Institut Jean Le Rond d'Alembert (UMR 7190), Université Pierre et Marie Curie & CNRS,
4 Pl. Jussieu, Paris 75005, France, francois.coulouvrat@upmc.fr), and Franck Dagrau (Dassault Aviation, Saint Cloud, France)
Efficient and accurate numerical simulation of sonic boom is a key issue for both the design of low boom aircraft and the definition
of a standard on supersonic overland flights. Atmospheric turbulence has been known since the 1960s to significantly alter the ideal N-wave
boom waveform. Such random fluctuations cannot be reproduced by the standard ray tracing method, nor can that method simulate the lateral
boom beyond the geometrical carpet edge, where many non-geometrical features occur, such as creeping waves, wave guiding, or scattering. To progress toward boom simulation beyond the geometrical approximation, we developed the FLHOWARD3D software. For accuracy, it can handle 3D temperature, density, or wind heterogeneities, atmospheric absorption, and nonlinear
propagation effects. For efficiency, a one-way approximation is performed, neglecting the backscattered field. Nevertheless, the forward
field satisfies an accurate dispersion relation, even in moving atmospheres. Involved physical effects are handled separately by optimized
algorithms, combined by a second-order split-step approach. The resulting software is parallelized using the MPI paradigm. The performance of the software will be illustrated by two cases: boom scattering by turbulence, and lateral boom propagation in the case of a temperature inversion. The latter case will be compared with flight test data.
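The "second-order split-step approach" mentioned above refers to a general operator-splitting pattern. As an illustrative sketch only (not the FLHOWARD3D algorithm), here is Strang splitting applied to a 1-D viscous Burgers' equation on a periodic grid, with the diffusion operator advanced exactly in Fourier space and the nonlinear advection term advanced by a first-order upwind step:

```python
import numpy as np

def split_step_burgers(u, dx, dt, nu, steps):
    """Advance u_t + u u_x = nu u_xx by Strang splitting:
    half-step diffusion (spectral), full-step advection (upwind),
    half-step diffusion. Periodic boundary conditions."""
    n = u.size
    k = 2 * np.pi * np.fft.fftfreq(n, d=dx)    # angular wavenumbers
    half_diff = np.exp(-nu * k**2 * dt / 2)    # exact diffusion propagator, dt/2
    for _ in range(steps):
        u = np.real(np.fft.ifft(half_diff * np.fft.fft(u)))
        # Upwind difference for u u_x: backward if u > 0, forward otherwise.
        du = np.where(u > 0, u - np.roll(u, 1), np.roll(u, -1) - u)
        u = u - dt * u * du / dx
        u = np.real(np.fft.ifft(half_diff * np.fft.fft(u)))
    return u
```

The same skeleton generalizes to combining separately optimized sub-solvers (absorption, nonlinearity, diffraction) with second-order accuracy in the step size, which is the structure the abstract describes.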
11:40
1aNS4. Progress made during the first two American Institute of Aeronautics and Astronautics Sonic Boom Prediction
Workshops for calculating near field signatures of supersonic aircraft. Michael A. Park (Langley Res. Ctr., NASA, NASA Langley
Res. Ctr., MS 128, Hampton, VA 23681, Mike.Park@NASA.gov)
The American Institute of Aeronautics and Astronautics (AIAA) Sonic Boom Workshops examine the international state of the art
in sonic boom prediction of supersonic aircraft. A summary is provided for the first and second workshops held in 2014 and 2017. The
nearfield CFD (Computational Fluid Dynamics) cases from both workshops are described. They include a range of simple to complex
configurations. The first workshop used models with N-wave, flat-top, and shaped ground signatures. The second workshop used models
with quieter shaped ground signatures. To assess the state of the art in nearfield CFD prediction, signatures are gathered from the international participants. These nearfield signatures are propagated to the ground with an augmented Burgers equation method, and noise metrics are calculated. Statistics of the noise metrics are utilized to identify outliers for further examination. Comparisons are also made to
wind tunnel measurements through validation metrics where available. The convergence of these metrics with grid refinement is also
documented. This allows the determination of state of the art for nearfield sonic boom prediction and documents progress made between
the two workshops.
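The abstract does not specify how "statistics of the noise metrics are utilized to identify outliers." One common, robust choice for screening a small set of participant-submitted values (shown here purely as an illustration, with hypothetical loudness values) is the median-absolute-deviation modified z-score:

```python
import numpy as np

def mad_outliers(values, cutoff=3.5):
    """Flag outliers via the modified z-score based on the
    median absolute deviation (MAD); robust for small samples."""
    v = np.asarray(values, dtype=float)
    med = np.median(v)
    mad = np.median(np.abs(v - med))
    if mad == 0:
        return np.zeros(v.size, dtype=bool)   # all values (nearly) identical
    mz = 0.6745 * (v - med) / mad             # 0.6745 scales MAD to sigma
    return np.abs(mz) > cutoff
```

Unlike a mean/standard-deviation rule, the median-based score is not itself pulled toward the outlier, which matters when only a handful of submissions are compared.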
12:00
1aNS5. Summary and progress made in modeling of sonic boom propagation during AIAA sonic boom prediction workshops.
Sriram Rallabhandi (Aeronautics Systems Anal. Branch, NASA Langley, Rm. 190-25, Mailstop 442, NASA Langley Res. Ctr.,
Hampton, VA 23681, sriram.rallabhandi@nasa.gov) and Alexandra Loubeau (Structural Acoust. Branch, NASA Langley Res. Ctr.,
Hampton, VA)
This paper summarizes the atmospheric propagation modeling portion of the Second American Institute of Aeronautics and Astronautics (AIAA) Sonic Boom Prediction Workshop held in 2017 as well as an informal propagation comparison and benchmarking effort
conducted prior to the First AIAA Sonic Boom Prediction Workshop in 2014. The motivation behind these workshops is the industry’s
increased interest in low boom supersonic aircraft designs and the need to have an open, unbiased forum to promote best practices. The
propagation test cases from both exercises are described and discussed. Discussion is also included on the selection of test case conditions with multiple atmospheric profiles, representing geographical and seasonal variations of the relevant meteorological data. Propagated sonic boom ground signatures, loudness metrics, extent of the boom carpets and other propagation-related details were gathered
from a group of international participants. Comparisons are made between submissions, and the differences are analyzed in detail to
understand the state-of-the-art in sonic boom atmospheric propagation modeling. The progress made between the workshops and the lessons learned will be discussed.
SUNDAY MORNING, 25 JUNE 2017
ROOM 210, 10:40 A.M. TO 12:20 P.M.
Session 1aPA
Physical Acoustics and Biomedical Acoustics: Acoustofluidics I
Jürg Dual, Cochair
ETH Zurich, Tannenstr. 3, Zurich 8092, Switzerland
Charles Thompson, Cochair
ECE, UMASS, 1 Univ Ave, Lowell, MA 01854
Max Denis, Cochair
U.S. Army Research Lab., 2800 Powder Mill Road, Adelphi, MD 20783-1197
Invited Papers
10:40
1aPA1. Exploring the phenomenon of ultrasonic atomization for viscous fluids. James Friend (Mech. and Aerosp. Eng., Univ. of
California, San Diego, 345F Structural and Mech. Eng., M.S. 411 Gilman Dr., La Jolla, CA 92093, jfriend@eng.ucsd.edu)
We consider the choice of vibration modes and piezoelectric materials for acoustically driven atomization, an attractive method for a
broad range of applications, particularly pulmonary drug delivery. Whether by the definition of a figure of merit, a product of the resonator quality factor and electromechanical coupling coefficient, its output vibration displacement at a given input power, or the fluid flow
rate during atomization, we find that the combination of single-crystal 127.86-deg. Y-rotated lithium niobate and thickness-mode vibration produces an order of magnitude greater atomization flow rate and efficiency in comparison to classic lead zirconate-based devices
and newer, Rayleigh wave or Rayleigh/Lamb spurious-mode based devices alike. For the first time, fluids with viscosities up to 48 cP
are reported to be atomized, and we define an atomization Reynolds number ReA that can be used to predict both the atomization flow
rate for ReA > 40 and the inability to atomize a given fluid at a particular vibration amplitude when ReA < 40.
11:00
1aPA2. Predicting droplet sizes and production rates in ultrasonic atomization as a strongly nonlinear phenomenon. James Friend
(Mech. and Aerosp. Eng., Univ. of California, San Diego, 345F Structural and Mech. Eng., M.S. 411 Gilman Dr., La Jolla, CA 92093,
jfriend@eng.ucsd.edu)
Atomization of fluids using ultrasonic irradiation of a fluid interface is well-known, yet the underlying physics is surprisingly complex and rather poorly understood. We review classical and modern theories of capillary wave formation and droplet atomization, from
Michael Faraday in 1831 onwards, and show how the atomization process occurs via induction of bulk turbulence that in turn engenders
capillary wave production and turbulence. We show the clear separation of droplet generation rate from the frequency of the excitation
ultrasound, and instead show the existence of a specific atomization frequency dependent upon the ratio of the fluid’s surface tension to
its viscosity.
11:20
1aPA3. Acoustic radiation force expansions in terms of partial-wave scattering phase shifts: Extended applications. Philip L.
Marston (Phys. & Astronomy Dept., Washington State Univ., Pullman, WA 99164-2814, marston@wsu.edu) and Likun Zhang (Phys.
Dept., Univ. of MS, Oxford, MS)
When evaluating radiation forces on spheres in sound fields, the interpretation of analytical results is greatly simplified by retaining
the use of s-function notation for partial-wave coefficients imported into acoustics from quantum scattering theory. This facilitates easy
interpretation of various scattering efficiency factors [L. Zhang and P. Marston, J. Acoust. Soc. Am. 140, EL178 (2016)]. This also facilitates the correction of certain plane-wave results [H. Olsen et al., J. Acoust. Soc. Am. 30, 633 (1958)] and the force parameterization
for a broader class of wavefields. For situations in which dissipation is negligible, each partial-wave s-function becomes characterized
by a single parameter: a phase shift. These partial-wave phase shifts are associated with scattering by plane traveling waves; the incident
wavefields of interest (progressive and standing wavefields and beams) are separately parameterized. (When considering outcomes, the
method of fabricating symmetric objects having a desirable set of phase shifts becomes a separate issue.) The existence of negative radiation force “islands” for beams reported in 2006 by Marston is manifested. Elementary standing and traveling wave force expressions
are also recovered. This approach also manifests the utility of conservation theorems [P. Marston and L. Zhang, J. Acoust. Soc. Am.
139, 3139 (2016)]. [Work supported by ONR.]
11:40
1aPA4. Beyond acoustophoresis: Particle manipulation near oscillating interfaces. Sascha Hilgenfeldt (Mech. Sci. and Eng., Univ.
of Illinois, 1206 W Green St., Urbana, IL 61801, sascha@illinois.edu), Bhargav Rallabandi (Mech. and Aerosp. Eng., Princeton Univ.,
Princeton, NJ), Siddhansh Agarwal, and David Raju (Mech. Sci. and Eng., Univ. of Illinois, Urbana, IL)
Inertial effects in microfluidics afford an interesting set of tools for the control of particle positions. The gradients of steady channel
flows, as well as the gradients of acoustic field amplitudes, have been used prominently to this purpose, the latter in acoustofluidics.
Here, we investigate directly the effect of an oscillating interface on the fluid surrounding it and particles suspended in the fluid. The fast
oscillatory motion gives rise to strong inertial effects, while the method allows for versatile force actuation because of the variety of
flow fields, frequencies, and length scales under the experimentalist’s control. We show in experiment and theory that oscillating bubbles
simultaneously (i) guide particles close to the bubble interface by streaming flow, and (ii) exert strong lift forces that can be used to sort
the particles by size or density. The lift forces and the ensuing particle displacement constitute an effect separate from streaming and
can be understood analytically on the time scale of oscillation and that of averaged, steady motion; unlike classical acoustofluidics, it
does not rely on density or compressibility contrasts. Comparison with experiments confirms that particle displacements scale more
favorably and flexibly with the dimensions of particle and microfluidic set-up than in traditional inertial microfluidics. Size sorting with
micrometer resolution can be accomplished within a millisecond, while the same device can exert controlled repulsive and attractive
forces.
12:00
1aPA5. Beyond acoustophoresis: Attractive and repulsive forces on particles. Sascha Hilgenfeldt (Mech. Sci. and Eng., Univ. of
Illinois, 1206 W Green St., Urbana, IL 61801, sascha@illinois.edu), Bhargav Rallabandi (Mech. and Aerosp. Eng., Princeton Univ.,
Princeton, NJ), Siddhansh Agarwal, and David Raju (Mech. Sci. and Eng., Univ. of Illinois, Urbana, IL)
Inertial effects in microfluidics afford an interesting set of tools for the control of particle positions. The gradients of steady channel
flows, as well as the gradients of acoustic field amplitudes, have been used prominently to this purpose, the latter in acoustofluidics.
Here, we investigate directly the effect of an oscillating interface on the fluid surrounding it and particles suspended in the fluid. The fast
oscillatory motion gives rise to strong inertial effects, while the method allows for versatile force actuation because of the variety of
flow fields, frequencies, and length scales under the experimentalist’s control. We show in experiment and theory that the forces on particles can be evaluated analytically, on both the oscillatory and the steady, time-averaged time scales. The latter formalism generalizes
streaming flow computations to particle motion, and reveals new potential strategies for manipulating particles with tunable attractive or
repulsive forces, depending not only on characteristics of the particles and physical properties of the fluid, but also the dynamical parameters of the driving.
SUNDAY MORNING, 25 JUNE 2017
ROOM 304, 10:40 A.M. TO 12:20 P.M.
Session 1aPPa
Psychological and Physiological Acoustics: Perception of Synthetic Sound Fields I
Sascha Spors, Cochair
Institute of Communications Engineering, University of Rostock, Richard-Wagner-Strasse 31, Rostock 18119, Germany
Nils Peters, Cochair
Advanced Tech R&D, Qualcomm Technologies, Inc., 5775 Morehouse Drive, San Diego, CA 92121
Invited Paper
10:40
1aPPa1. Evaluation of object-based audio—What is the reference? Thomas Sporer, Judith Liebetrau, and Tobias Clauss (Fraunhofer
IDMT, Ehrenberg Str. 31, Ilmenau 98693, Germany, spo@idmt.fhg.de)
During the development of audio coding schemes, a number of methods for evaluation of the perceived audio quality have been
developed. To enable comparisons across test sites, several methods have been standardized. In standardized methods like ITU-R Recommendations BS.1116 and BS.1534 (MUSHRA), the output of a codec (signal under test) is compared to an open reference. This reference is the unimpaired input of the codec. Assuming that the codec is “transparent,” the signal under test should sound exactly like this
reference. For object-based audio, the input of a codec is a combination of raw audio channels and metadata describing position and
other properties of the audio objects. It does not make sense to listen directly to the raw data. For listening, it is necessary to calculate
the driving signal for each loudspeaker available in the listening room (rendering). Therefore, the comparison of different renderers is
difficult: the renderer used to generate the reference signal has an advantage. Using a dedicated loudspeaker as the reference does not
solve the problem either: loudspeakers always sound different from virtual sound objects. The presentation will discuss problems and solutions in more detail. Some promising setups based on multi-attribute testing are presented.
Contributed Papers
11:00
1aPPa2. Investigation of perceptual attributes associated with projected sound sources. Tom Wühle and M. Ercan Altinsoy (Chair of Acoust. and Haptics, Dresden Univ. of Technol., Helmholtzstraße 18, Dresden 01062, Germany, tom.wuehle@tu-dresden.de)
One solution to reproduce sound from various directions is the projection of sound sources on reflective boundaries. In this case, the perceived direction of the auditory event changes from the direction of the real source to the direction of the projected source. Therefore, highly focused sound sources are necessary. However, the focusing capabilities of such sources are physically limited. Thus, the total sound at the listening position is formed by the projected sound and by sound which is directly radiated from the real source. Both of these sound components influence the perception of the listener. In a scenario with projected sound sources, a complex mixture of perceptual attributes changes in addition to the direction of the auditory event. The present study investigates some of those attributes.
11:20
1aPPa3. A user-centered taxonomy of factors contributing to the listener experience of reproduced audio. James Woodcock, William J. Davies, and Trevor J. Cox (Acoust. Res. Ctr., Univ. of Salford, Newton Bldg., Salford M5 4WT, United Kingdom, j.s.woodcock@salford.ac.uk)
The traditional paradigm for the assessment of audio quality is that of a listener positioned in the geometric center of a standardized loudspeaker setup, fully attending to the reproduced sound scene. However, this is not how listeners generally interact with audio technology. Audio is consumed in a variety of environments and situations, over devices with varying quality, and by listeners with different expectations and needs. Drawing on research from soundscapes, human computer interaction, and multimedia quality of experience, this paper proposes a user-centered taxonomy of factors that influence the listener experience of reproduced audio. The taxonomy is supported by data from recent research into the perception of complex reproduced sound scenes, and by new data from a web-based survey investigating the structure of experiences with audio technology. In this survey, participants were asked to consider previous experiences with audio technology, and data were collected on the experience itself (psychological need fulfillment and affect), perceptual attributes related to the reproduced audio, and the importance of audio quality to the experience. Results point toward a model of listener experience that can be used to profile listener experiences in different contexts and can be used as a measurement tool in future controlled experiments.
Invited Papers
11:40
1aPPa4. Individual attributes describing the perception of synthesized sound fields in listening rooms. Vera Erbes and Sascha
Spors (Inst. of Communications Eng., Univ. of Rostock, Richard-Wagner-Str. 31, Haus 8, Rostock 18119, Germany, vera.erbes@uni-rostock.de)
Sound field synthesis techniques such as Wave Field Synthesis (WFS) are in theory defined in an anechoic environment, but real-world installations have to be placed inside listening rooms with reflective walls. The arising reflections influence the desired synthesized sound field from both a physical and a perceptual point of view. This study investigates the perceptual aspects that are relevant for
WFS in reflective environments by means of the Repertory Grid Technique (RGT). By comparing a representative range of auditory
scenes with real and virtual sources in free field and in different listening rooms, subjects generate individual constructs to describe perceived differences and similarities as contrastive pairs. In a second step, subjects rate the scenes on scales constructed by their own contrastive pairs. The ratings are used to reveal relations between the constructs and the scenes on an individual basis. Clusters of constructs
can be found that in terms of content can be associated with a common perceptual aspect.
12:00
1aPPa5. Using binaural and spherical microphone arrays to assess the quality of synthetic spatial sound fields. Jonas Braasch,
Nikhil Deshpande, Jonathan Mathews, and Samuel Chabot (School of Architecture, Rensselaer Polytechnic Inst., 110 8th St., Troy, NY
12180, braasj@rpi.edu)
Recently, we completed the Collaborative-Research Augmented Immersive Virtual Environment Laboratory (CRAIVE-Lab) with a
usable floor area of 12 × 10 m² at Rensselaer. The CRAIVE-Lab project addresses the need for a specialized virtual-reality (VR) system
for the study and enabling of communication-driven tasks with groups of users immersed in a high-fidelity multi-modal environment
located in the same physical space. For the acoustic domain, a 134-loudspeaker-channel system has been installed for Wave Field Synthesis (WFS) with the support of Higher-Order-Ambisonic (HoA) sound projection to render inhomogeneous acoustic fields. An integrated 16-channel spherical microphone array makes the CRAIVE-Lab an ideal test bed to study different spatial rendering techniques
such as Wave-Field Synthesis, Higher-Order Ambisonics, and Virtual Microphone Control (ViMiC). In this talk, sound-field measurements taken with a traditional binaural manikin will be compared to spherical microphone recordings to assess the quality of the different rendering techniques for large-scale labs. A particular focus will be on assessing the sweet-spot area for different rendering
techniques. [Work supported by NSF 1229391, NSF 1631674, and the Cognitive and Immersive Systems Laboratory (CISL).]
SUNDAY MORNING, 25 JUNE 2017
ROOM 311, 10:55 A.M. TO 12:00 NOON
Session 1aPPb
Psychological and Physiological Acoustics: Auditory Neuroscience Prize Lecture
Andrea Simmons, Chair
Brown University, Box 1821, Providence, RI 02912
Chair’s Introduction—10:55
Invited Paper
11:00
1aPPb1. Active listening in 3D auditory scenes. Cynthia F. Moss (Psychol. and Brain Sci., Johns Hopkins Univ., Biology-Psych. Bldg.
2123M, College Park, MD 20742, cynthia.moss@gmail.com)
As an animal moves in its natural environment, to seek food, track targets, and steer around obstacles, its distance and direction to
objects continuously change, invoking dynamic feedback between 3D scene representation, attention, and action-selection. Animals that
rely on active sensing provide powerful systems to investigate neural underpinnings of sensory-guided behaviors, as they produce the
very signals that inform motor actions. Echolocating bats, for example, transmit sonar signals and process auditory information carried
by echoes to guide behavioral decisions for spatial orientation. Further, the bat adapts its echolocation signal design in response to 3D
spatial information computed from echo returns, and therefore, the directional aim and temporal patterning of its calls provide a window
into the animal’s attention to objects in its surroundings. In addition, the bat actively controls pinna position and head movements to
enhance auditory cues about 3D target position. These adaptive behaviors require an interface between auditory processing and motor
commands, and our research findings implicate the midbrain superior colliculus in sensory-guided spatial orienting behaviors. This talk
will review behavioral and neurobiological studies of 3D sonar scene analysis in the echolocating bat, an animal whose active control
over acoustic signals provides a window into its perceptual world.
SUNDAY MORNING, 25 JUNE 2017
ROOM 201, 10:35 A.M. TO 12:20 P.M.
Session 1aSA
Structural Acoustics and Vibration, Noise, Physical Acoustics, and ASA Committee on Standards:
Groundborne Noise and Vibration from Transit Systems
James E. Phillips, Chair
Wilson, Ihrig & Associates, Inc., 6001 Shellmound St., Suite 400, Emeryville, CA 94608
Chair’s Introduction—10:35
Invited Papers
10:40
1aSA1. Draft standard on “Methods for the Prediction of Ground Vibration from Rail Transportation Systems.” James E.
Phillips (Wilson, Ihrig & Assoc., Inc., 6001 Shellmound St., Ste. 400, Emeryville, CA 94608, jphillips@wiai.com)
A draft American National Standards Institute (ANSI) standard on “Methods for the Prediction of Ground Vibration from Rail
Transportation Systems” is nearing completion for review. The intent of this document is to standardize methods that were initially
developed thirty years ago and adopted by the Federal Transit Administration in the guidance manual “Transit Noise and Vibration
Assessment” for determining environmental vibration impacts at sensitive land uses adjacent to transit projects. This paper will outline
the topics in the draft standard.
11:00
1aSA2. Ground vibration propagation measurements in extreme conditions. Scott Edwards (Senior Associate, Cross-Spectrum
Acoust., 1500 District Ave, Ste. 1011, Burlington, MA 01803, sedwards@csacoustics.com)
Ground vibration propagation measurements are often required during the environmental impact assessment process for transit projects per Federal Transit Administration (FTA) guidance. Cross-Spectrum Acoustics (CSA) conducted these measurements in extremely
hot (100+ degrees Fahrenheit) and extremely cold (sub-zero degrees Fahrenheit) environments in 2016. This presentation will provide
an overview of conducting such procedures in extreme climates and lessons learned from CSA staff. Topics to be discussed will include
the effects of extreme temperatures on equipment, staff, and the collected data.
11:20
1aSA3. Ground-borne vibration issues from rail transit near university research buildings. Timothy Johnson and Gary Glickman
(Wilson Ihrig, 30 E. 20th St., New York, NY 10003, tjohnson@wiai.com)
A new light rail transit project being designed and constructed through a university campus presents many potential ground-borne
vibration issues. The rail alignment passes near numerous buildings on campus that contain various types of vibration sensitive research
equipment. Appropriate vibration criteria for the project are critical to ensure that future operation of the light rail vehicles (LRVs) will
not adversely affect activities on campus. Ground-borne vibration criteria are based on equipment sensitivity and the existing vibration
environment. Projections of future vibration levels from LRV operations were developed based on field measurement programs. Site
specific vibration measurements on campus were conducted to document the soil vibration propagation characteristics and building
response characteristics. Additionally, input vibration force characteristics, or vehicle force density levels, of a comparable vehicle were
measured and incorporated into the projections. Finally, site specific track design and vibration mitigation measures were modeled and
incorporated into the project design.
3466
J. Acoust. Soc. Am., Vol. 141, No. 5, Pt. 2, May 2017
Acoustics ’17 Boston
3466
11:40
1aSA4. Prediction of ground-borne vibrations on historical structures due to tram traffic in Antalya, Turkey. Salih Alan and Mehmet Caliskan (Dept. of Mech. Eng., Middle East Tech. Univ., Ankara 06800, Turkey, caliskan@metu.edu.tr)
This study investigates the ground-borne vibrations on historically landmarked structures in the city of Antalya due to tram traffic. Predicted vibrations are assessed against international standards. The study serves as guidance for bidders in the upcoming tender process for projects to upgrade existing tram lines and to construct new lines. In the prediction procedure, an existing Fourier-transform-based theoretical model for the track and layered ground is implemented and coupled with vehicle dynamics. Ground-borne vibrations calculated at the base level of the structures are considered. Vibration assessment criteria taken from the ISO 2631 standard are employed in the evaluation of predicted vibrations at the respective locations in three mutually perpendicular directions.
12:00
1aSA5. Predicting structure-borne sound from railway traffic. Juan Negreira (Eng. Acoust., Lund Univ., Eng. Acoust., LTH, BOX 118, Lund, Skane 22100, Sweden, juan.negreira@construction.lth.se), Peter Persson (Structural Mech., Lund Univ., Lund, Skane, Sweden), and Delphine Bard (Eng. Acoust., Lund Univ., Lund, Sweden)
Since noise exposure can disturb well-being, acoustical comfort in the built environment is of great importance when constructing new dwellings. Population growth causes densification of cities, which, together with space limitations, leads to buildings being constructed closer to existing vibration sources such as motorways and railways, and vice versa. Likewise, architectural trends, environmental benefits, and cost result in the increased use of lighter materials such as wood and hollow-core concrete slabs. Lightweight structures make the achievement of acoustical comfort in dwellings an increasing challenge. A major issue when designing buildings regarded as acoustically pleasant, especially in the low-frequency range, is the lack of reliable prediction models to be used during the design stage of the building. Predictions of structure-borne noise are nowadays mostly made based on measurements performed on existing buildings and on engineers' experience. Hence, it is of interest to develop tools that can adequately predict noise and vibrations. The computer models developed for that purpose could combine different numerical methods, and they may use measurement data as input. The aim here is to investigate and develop numerical models that can be used in the early design stage of structures, especially for predicting structure-borne noise from railway traffic in tunnels.
SUNDAY MORNING, 25 JUNE 2017
BALLROOM A, 10:40 A.M. TO 12:20 P.M.
Session 1aSC
Speech Communication: Speech Technology (Poster Session)
Kelly Berkson, Chair
Dept. of Linguistics, Indiana Univ., 1021 E. Third St., Mem 322E, Bloomington, IN 47405
All posters will be on display from 10:40 a.m. to 12:20 p.m. To allow contributors in this session to see the other posters, authors of
odd-numbered papers will be at their posters from 10:40 a.m. to 11:30 a.m. and authors of even-numbered papers will be at their posters
from 11:30 a.m. to 12:20 p.m.
Contributed Papers
1aSC1. How time-based alignment of realized acoustic landmarks and
predicted landmarks improves analysis of feature cue modification
patterns in speech. Rebekah Bell, Jeung-Yoon Choi, and Stefanie
Shattuck-Hufnagel (Res. Lab. of Electronics, Massachusetts Inst. of Technol.,
50 Vassar St., Rm. 36-511, Cambridge, MA 02139, bellr@mit.edu)
Acoustic landmarks are abrupt spectral changes that signal the underlying manner features of phonemes (Stevens 2002). Our goal in developing an
automatic method to detect these landmarks is to create a robust, knowledge-based approach to phoneme extraction in automatic speech signal
processing. One challenge in such an approach is posed by massive
reductions, in which many landmarks and other feature cues are missing.
Thus, there is a need to hand-label the acoustic landmarks that actually
occur in the speech signal and align them with predicted landmarks. However, this often results in a discrepancy between the locations of labels for
the automatically generated and hand-labeled landmarks, which leads to an
inaccurate analysis of where realized landmarks occur with respect to word
and phoneme interval boundaries. We attempted to solve this issue with a
time-based alignment method derived from the minimum edit distance algorithm. The result was improved alignment of the realized landmark labels with the predicted landmark labels, enabling a more accurate analysis of modifications in the hand-labeled (realized) landmark tiers.
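The minimum-edit-distance alignment described above can be sketched as a standard dynamic program over (label, time) pairs. This is an illustrative sketch only; the gap penalties, the time-difference weighting, and all function names are assumptions, not the authors' implementation.

```python
# Sketch: align realized vs. predicted landmark sequences by minimum edit
# distance, with a substitution cost that also penalizes time misalignment.
# Penalties and weighting are assumed for illustration.

def align_landmarks(realized, predicted, time_weight=1.0):
    """Align two landmark sequences [(label, time_sec), ...].
    Returns (total_cost, matched index pairs)."""
    n, m = len(realized), len(predicted)
    INS = DEL = 1.0  # gap penalties (assumed)
    # dp[i][j] = min cost of aligning realized[:i] with predicted[:j]
    dp = [[0.0] * (m + 1) for _ in range(n + 1)]
    back = [[None] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        dp[i][0] = i * DEL; back[i][0] = "del"
    for j in range(1, m + 1):
        dp[0][j] = j * INS; back[0][j] = "ins"
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            la, ta = realized[i - 1]
            lb, tb = predicted[j - 1]
            # Substitution cost: label mismatch penalty + weighted time offset
            sub = dp[i - 1][j - 1] + (0.0 if la == lb else 1.0) + time_weight * abs(ta - tb)
            cand = [(sub, "sub"), (dp[i - 1][j] + DEL, "del"), (dp[i][j - 1] + INS, "ins")]
            dp[i][j], back[i][j] = min(cand)
    # Trace back to recover matched (realized, predicted) index pairs
    pairs, i, j = [], n, m
    while i > 0 or j > 0:
        op = back[i][j]
        if op == "sub":
            pairs.append((i - 1, j - 1)); i -= 1; j -= 1
        elif op == "del":
            i -= 1
        else:
            j -= 1
    return dp[n][m], list(reversed(pairs))
```

Matched pairs can then be used to compare realized landmark times against word and phoneme interval boundaries.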
1aSC2. Landmark-based consonant voicing detection on multilingual corpora. Xiang Kong (Comput. Sci., Univ. of Illinois at Urbana-Champaign, Champaign, IL), Xuesong Yang, Mark Hasegawa-Johnson (Beckman Inst., Univ. of Illinois at Urbana-Champaign, Urbana, IL), Jeung-Yoon Choi, and Stefanie Shattuck-Hufnagel (Speech Commun. Group, Res. Lab. of Electronics, MIT, 50 Vassar St., Rm. 36-581, Cambridge, MA 02139, jyechoi@mit.edu)
This study tests the hypothesis that distinctive feature classifiers anchored at phonetic landmarks can be transferred cross-lingually without loss
of accuracy. Three consonant voicing classifiers were developed: (1) manually selected acoustic features anchored at a phonetic landmark, (2)
MFCCs (either averaged across the segment or anchored at the landmark),
and (3) acoustic features computed using a convolutional neural network
(CNN). All detectors are trained on English data (TIMIT) and tested on
English, Turkish, and Spanish (performance measured using F1 and accuracy). Experiments demonstrate that manual features outperform all MFCC
classifiers, while CNN features outperform both. MFCC-based classifiers
suffer an overall error rate increase of up to 96.1% when generalized from
English to other languages. Manual features suffer only up to a 35.2% relative error rate increase, and CNN features actually perform best on Turkish and Spanish, demonstrating that features capable of representing long-term spectral dynamics (CNN and landmark-based features) are able to generalize cross-lingually with little or no loss of accuracy.
1aSC3. Selecting frames for automatic speech recognition based on
acoustic landmarks. Di He (Coordinated Sci. Lab., Univ. of Illinois at
Urbana-Champaign, 1308 W Main St. Rm. 403, Urbana, IL 61801, dihe2@illinois.edu), Boon Pang P. Lim (Inst. for Infocomm Res. (I2R), Singapore, Singapore), Xuesong Yang (Beckman Inst. for Adv. Sci. and Technol., Univ. of Illinois at Urbana-Champaign, Champaign, IL), Mark Hasegawa-Johnson (Beckman Inst. for Adv. Sci. and Technol., Univ. of Illinois at
Urbana-Champaign, Urbana, IL), and Deming Chen (Coordinated Sci. Lab.,
Univ. of Illinois at Urbana-Champaign, Urbana, IL)
Most mainstream Mel-frequency cepstral coefficient (MFCC) based
Automatic Speech Recognition (ASR) systems consider all feature frames
equally important. However, the acoustic landmark theory disagrees with
this idea. Acoustic landmark theory exploits the quantal non-linear articulatory-acoustic relationships from human speech perception experiments and
provides a theoretical basis of extracting acoustic features in the vicinity of
landmark regions where an abrupt change occurs in the spectrum of speech
signals. In this work, we conducted experiments, using the TIMIT corpus,
on both GMM- and DNN-based ASR systems and found that frames containing landmarks are more informative than others during the recognition process. We showed that altering the level of emphasis on landmark and non-landmark frames, through re-weighting or removing frame acoustic likelihoods accordingly, can change the phone error rate (PER) of the ASR system in a way dramatically different from making similar changes to random frames. Furthermore, by leveraging landmarks as a heuristic, one of our hybrid DNN frame-dropping strategies incurred a PER increase of only 0.44% while scoring less than half (41.2%, to be precise) of the frames. This hybrid strategy outperforms other non-heuristic-based methods and demonstrates the potential of landmarks for computational reduction in ASR.
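The frame selection and re-weighting idea above can be sketched in a few lines, assuming a per-frame log-likelihood matrix and known landmark frame indices. The keep-every-k dropping rule and the boost factor below are illustrative assumptions, not the paper's exact strategy.

```python
import numpy as np

def select_frames(num_frames, landmark_frames, keep_every=3):
    """Return indices of frames to score: all landmark-region frames,
    plus every `keep_every`-th non-landmark frame (assumed dropping scheme)."""
    landmark = set(landmark_frames)
    kept = [i for i in range(num_frames) if i in landmark or i % keep_every == 0]
    return np.array(kept)

def reweight_loglikes(loglikes, landmark_frames, boost=2.0):
    """Scale per-frame acoustic log-likelihood rows (frames x states) so that
    landmark frames carry more weight before Viterbi decoding (illustrative)."""
    w = np.ones(loglikes.shape[0])
    w[list(landmark_frames)] = boost
    return loglikes * w[:, None]
```

Either the reduced frame set or the re-weighted likelihoods would then be passed to the decoder in place of the full, uniformly weighted frame sequence.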
1aSC4. A flexible discriminative approach to automatic phone and
broad phonetic group classification. Kantapon Kaewtip and Abeer Alwan
(Elec. Eng., UCLA, 623 1/2 Kelton Ave., Los Angeles, CA 90024,
jomjkk@gmail.com)
In this work, we present a novel framework for phone and broad phonetic
group (BPG) classification. The overall system adds discriminative power to
the traditional HMM framework. All phones share one HMM. However,
instead of using generative models (e.g., GMMs), our framework uses a discriminative classifier to predict the state probability (i.e., the probability of
an HMM state given a feature vector input). Then, the optimal state
sequence is decoded resulting in a time-alignment function between the
acoustic feature vector sequence and the state sequence. For each state s, the
corresponding feature vectors are averaged resulting in a single feature vector that represents the s-th vector of the block. Each phone class is represented by a block of feature vectors whose size is equal to the number of
states. All feature vectors of the block are then concatenated to a single feature vector to represent a phone unit, which is used for a discriminative
phone classifier. We validate our framework using the TIMIT database. The
proposed framework with MFCCs has performance comparable to state-of-the-art phone classification algorithms, but with increased flexibility to
account for duration and other features such as articulatory features.
Improved performance for BPG classification is also observed.
1aSC5. Exploitation of phase-based features for emotional arousal evaluation from speech. Igor Guoth, Milan Rusko, Marian Ritomsky, Marian Trnka, and Sakhia Darjaa (Inst. of Informatics, Slovak Acad. of Sci., Dubravska cesta 9, Bratislava 845 07, Slovakia, igor.guoth@savba.sk)
The mel cepstral coefficients representing magnitude spectrum and
Teager energy operators are often used as features in emotion recognition.
The phase spectrum information is generally ignored. In this work, an approach is proposed based on the use of the group delay function from all-pole models (APGD) to represent the phase information for emotional arousal
recognition from speech. The experiments were done on the CRISIS acted
speech database with four levels of stress. The results of the arousal recognition system using the APGD features are compared to those using mel-frequency cepstral coefficients (MFCCs) and with Critical Band Based TEO
Autocorrelation Envelope features (TEO-CB-Auto-Env) which have been
successfully used in the task of emotion and stress detection in the past. The
feature extraction is applied on the voiced parts of speech. The combination
of APGD, MFCC, and TEO-CB-Auto-Env features has shown the best recognition results confirming the hypothesis that the phase and magnitude
spectra contain complementary information and their combination can
improve the reliability of the arousal recognition system.
1aSC6. Toward real-time physically-based voice synthesis. Zhaoyan
Zhang (Dept. of Head and Neck Surgery, Univ. of California, Los Angeles,
1000 Veteran Ave., 31-24 Rehab Ctr., Los Angeles, CA 90095, zyzhang@ucla.edu)
While physically based voice production models have potential applications in clinical intervention of voice disorders and personalized natural
speech synthesis, their current use is limited due to the high computational
cost associated with simulating the voice production process. In our previous
studies [Zhang 2015, J. Acoust. Soc. Am. 137, 898], we have developed a
reduced-order voice synthesis program with significantly improved computational efficiency toward real-time applications. One of the simplifications
is the use of vocal fold eigenmodes as building blocks to reconstruct more
complex vocal fold vibration patterns, which has significantly reduced the
computational time, particularly if only a few eigenmodes are used in the
simulations. The goal of this study is to identify the minimum number of
eigenmodes that need to be included in order to achieve a balance between
computational speed and fidelity in voice acoustics and voice quality. The
results show that for most voice conditions as few as 30 eigenmodes are sufficient to accurately predict the fundamental frequency, vocal intensity, and
selected spectral measures. It is expected that for applications in which absolute values are not as essential, an even smaller number of eigenmodes would be acceptable, allowing near real-time capability. [Work supported by NIH.]
1aSC7. Robust speaker identification via fusion of subglottal resonances
and cepstral features. Jinxi Guo, Ruochen Yang, Abeer Alwan, and Harish
Arsikere (Elec. Eng., UCLA, 56-125B Eng. IV Bldg., 420 Westwood Plaza,
Los Angeles, CA 90095-1594, lennyguo@g.ucla.edu)
This paper investigates the use of subglottal resonances (SGRs) for
noise-robust speaker identification (SID). It is motivated by the speaker
specificity and stationarity of subglottal acoustics, and the development of
noise-robust SGR estimation algorithms which are reliable at low SNRs for
large datasets. A two-stage framework is proposed which combines the
SGRs with different cepstral features. The cepstral features are used in the
first stage to reduce the number of target speakers for a test utterance, and
then SGRs are used as complementary second-stage features to conduct
identification. Experiments with the TIMIT and NIST 2008 databases show
that SGRs, when used in conjunction with PNCCs and LPCCs, can improve
the performance significantly (2-6% absolute accuracy improvement) across
all noise conditions in mismatched situations.
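The two-stage framework can be sketched as a shortlist-then-rescore procedure: cepstral scores prune the speaker set, and SGR-based scores decide among the survivors. The dictionary-based scoring and the fusion weight `alpha` are hypothetical simplifications for illustration, not the paper's scoring scheme.

```python
def two_stage_sid(cepstral_scores, sgr_scores, shortlist_size=3, alpha=0.5):
    """Two-stage speaker ID sketch.
    cepstral_scores, sgr_scores: dicts mapping speaker -> score for one test
    utterance. Stage 1 keeps the top `shortlist_size` speakers by cepstral
    score; stage 2 fuses cepstral and SGR scores over the shortlist only."""
    # Stage 1: prune the target set with the (cheaper, broader) cepstral scores
    shortlist = sorted(cepstral_scores, key=cepstral_scores.get, reverse=True)[:shortlist_size]
    # Stage 2: rescore the shortlist with complementary SGR evidence
    fused = {s: alpha * cepstral_scores[s] + (1 - alpha) * sgr_scores[s] for s in shortlist}
    return max(fused, key=fused.get), shortlist
```

Note that a speaker excluded in stage 1 can never win in stage 2, which is the point of the pruning step: the SGR features only need to discriminate within a small candidate set.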
1aSC8. Learning acoustic features for English stops with graph-based dimensionality reduction. Patrick Reidy (Callier Ctr. for Commun. Disord., The Univ. of Texas at Dallas, 1966 Inwood Rd., Mailbox B112-CD, Dallas, TX 75235, reidy@utdallas.edu), Mary E. Beckman (Linguist, The Ohio State Univ., Columbus, OH), Jan Edwards (Hearing and Speech Sci., Univ. of Maryland, College Park, MD), Benjamin Munson (Speech-Language-Hearing Sci., Univ. of Minnesota, Minneapolis, MN), and Allison Johnson (Hearing and Speech Sci., Univ. of Maryland, College Park, MD)
This study applies a semi-supervised graph-based dimensionality reduction algorithm (Laplacian Eigenmaps [Belkin & Niyogi, 2002]) to analyze burst spectra from adult productions of English /k/ and /t/. Multitaper spectra calculated over 25-ms windows were passed through a gammatone filter bank, which models the auditory periphery's frequency selectivity and frequency-scale compression. From these psychoacoustic spectra, a graph was constructed: node pairs (two spectra) were connected if they shared a common talker or target word, and connecting edges were weighted by the symmetric Kullback-Leibler divergence between the spectra. This graph's eigenvectors map the spectra into a low-dimensional feature space. Our preliminary experiments with 512 tokens produced by 16 talkers suggest that this algorithm is able to learn a two-dimensional representation of the bursts which reflects well-established articulatory constriction features. The first dimension linearly separated /k/ from /t/ in the back vowel environment, reflecting posterior versus anterior constriction place; the second dimension linearly separated /k/ from /t/ before front vowels, reflecting apical versus dorsal lingual articulator. Experiments are underway to test how well the algorithm generalizes from the training set to handle unseen productions, both from the same talkers and from 5 novel talkers.
1aSC9. Vowel synthesis related to equal-amplitude harmonic series in frequency ranges > 1 kHz combined with single harmonics < 1 kHz, and including variation of fundamental frequency. Dieter Maurer and Heidy Suter (Inst. for the Performing Arts and Film, Zurich Univ. of the Arts, Toni-Areal, Pfingstweidstrasse 96, Zurich 8031, Switzerland, dieter.maurer@zhdk.ch)
Front vowels can be synthesized on the basis of series of harmonics equal in amplitude, with frequencies only above 1 kHz. In these cases, spectral energy usually attributed to the first formant frequency is lacking. The present paper reports results of an experiment in which sound synthesis was performed on the basis of harmonic series covering higher frequency ranges above 1 kHz, combined with a single lower harmonic < 1 kHz, all harmonics equal in amplitude. Thereby, two or three sounds were synthesized for which the higher frequency range and the frequency of the lower harmonic are identical, but the frequency distance of the higher harmonics differs, resulting in different perceived pitches of the sounds. Vowel recognition of all sounds was investigated by means of a listening test in which five phonetic expert listeners were asked to assign the synthesized sounds to Standard German vowel qualities. The results of the experiment reveal that the perceived vowel quality of such types of sound pairs or sound triples differs, confirming earlier indications of the spectral envelope being ambiguous with regard to vowel quality. Implications for the acoustics and perception of vowels are discussed.
1aSC10. Influence of noise on speaker verification in air traffic control voice communication. Milan Rusko, Marian Trnka, Sakhia Darjaa, Marian Ritomsky, and Igor Guoth (Inst. of Informatics of the Slovak Acad. of Sci., Dubravska cesta 9, Bratislava 845 07, Slovakia, milan.rusko@savba.sk)
The voice communication between pilots and air traffic controllers is vulnerable to various types of attacks. Speaker verification could be used as an add-on security feature; however, several factors make voice biometry difficult to apply in this scenario, among them an open set of speakers, very short utterances, speaker noises, signal clipping, the foreign accents of non-native speakers, and high levels of background and channel noise in the signal. This paper identifies sources of noise in the entire communication channel and analyzes the influence of these noise components of different types and levels on the reliability of speaker verification. An i-vector based speaker recognizer with PLDA scoring is used for the experiments. Cockpit noises of several aircraft and limited-band channel noises are simulated by a software noise generator. The sensitivity of speaker verification to noises in different frequency bands is studied in comparison to the long-term speech spectrum and its variability. Possible measures for increasing the noise robustness of the system are discussed.
1aSC11. “Flat” vowel spectra revisited in vowel synthesis. Dieter Maurer and Heidy Suter (Inst. for the Performing Arts and Film, Zurich Univ. of the Arts, Toni-Areal, Pfingstweidstrasse 96, Zurich 8031, Switzerland, dieter.maurer@zhdk.ch)
Some studies of natural and of synthesized vowel sounds indicate “flat” vowel-related spectral envelopes or envelope parts, in terms of vowel-related frequency ranges with harmonics equal in amplitude. The present investigation addresses this question in a vowel synthesis experiment in which sounds related to series of harmonics, multiples of 200 Hz in frequency and equal in amplitude, were created. Thereby, for various frequency ranges, the number of harmonics was increased stepwise from a single lower harmonic to an increasingly broader harmonic series, and, inversely, it was also decreased from a broad series of harmonics to a single higher harmonic. The entire frequency range of investigation was 0.2-4 kHz. Vowel recognition was investigated by means of a listening test in which five phonetic expert listeners were asked to assign the synthesized sounds to Standard German vowel qualities. The results of the experiment reveal that synthesized sounds with frequency bands of series of two or more equal-amplitude harmonics allow for a perceptual differentiation of the Standard German vowels /i-y-e-E-a-O-o/. Methodological issues concerning future investigations as well as implications for the acoustics and perception of vowels are discussed.
1aSC12. Finite element simulation of diphthongs in three-dimensional realistic vocal tracts with flexible walls. Marc Arnela and Oriol Guasch (GTM - Grup de recerca en Tecnologies Mèdia, La Salle, Universitat Ramon Llull, C/ Quatre Camins 30, Barcelona, Catalonia 08022, Spain, marnela@salle.url.edu)
During the production of diphthongs, acoustic waves propagate along a time-varying three-dimensional (3D) vocal tract of complex geometry. The shape of the vocal tract walls not only changes because of the action of the articulators to produce a given sound, but also experiences an elastic back reaction to the inner acoustic pressure. In this work, the Finite Element Method (FEM) is used to simulate these phenomena. The mixed wave equation for the acoustic pressure and acoustic particle velocity, expressed in an Arbitrary Lagrangian-Eulerian (ALE) frame of reference, is solved to account for acoustic wave propagation in moving domains. The flexibility of the walls is considered by solving a mass-damper-stiffness auxiliary equation for each boundary node. Dynamic vocal tract geometries are generated from the interpolation of static 3D vocal tract geometries of vowels, obtained from Magnetic Resonance Imaging (MRI). Some diphthong sounds are generated as examples.
1aSC13. Formant pattern ambiguity of vowel sounds revisited in synthesis: Changing perceptual vowel quality by only changing fundamental frequency. Dieter Maurer (Inst. for the Performing Arts and Film, Zurich Univ. of the Arts, Toni-Areal, Pfingstweidstrasse 96, Zurich 8031, Switzerland, dieter.maurer@zhdk.ch), Volker Dellwo (Phonet. Lab., Dept. of Comparative Linguist, Univ. of Zurich, Zurich, Switzerland), Heidy Suter (Inst. for the Performing Arts and Film, Zurich Univ. of the Arts, Zurich, Switzerland), and Thayabaran Kathiresan (Phonet. Lab., Dept. of Comparative Linguist, Univ. of Zurich, Zurich, Switzerland)
The influence of varying fundamental frequency on the perception of vowel quality in synthesized vowels was tested in two experiments. In experiment 1, based on investigations of natural Standard German vowel sounds, various model formant patterns F1’ to F3’ were created and, for each single pattern, sounds were synthesised on two or three fundamental frequencies (range 200-600 Hz). In experiment 2, corresponding to open-tube resonance characteristics for men, women, and children, respectively, sounds were synthesised with formant patterns F1’ to F5’, formant frequencies being odd multiples of 500, 600, or 700 Hz and fundamental frequencies being 1/3, 1/2, or 1/1 of the first formant frequency. Five phonetic expert listeners identified all synthesised sounds in a multiple-choice identification task. The results of both experiments revealed that the perceived vowel quality can be changed systematically by varying fundamental frequency only and that the changes can exceed the perceptual boundaries of two neighboring vowels. Further, sounds related to open-tube resonance patterns are not consistently perceived as neutral schwa vowels when fundamental frequency substantially varies. Thus, the results of both experiments strongly confirm previous claims of formant pattern ambiguity as well as of spectral envelope ambiguity of vowel sounds.
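The graph construction and embedding described in abstract 1aSC8 can be sketched as follows. The Gaussian kernel applied to the divergence, the dense connectivity matrix, and the use of the symmetric normalized Laplacian are assumptions for illustration, not the authors' exact implementation.

```python
import numpy as np

def symmetric_kl(p, q):
    """Symmetric Kullback-Leibler divergence between two spectra,
    after normalizing each to a probability distribution."""
    p = p / p.sum()
    q = q / q.sum()
    return float(np.sum(p * np.log(p / q)) + np.sum(q * np.log(q / p)))

def laplacian_eigenmap(spectra, connect, n_dims=2, sigma=1.0):
    """Laplacian Eigenmaps sketch. `connect[i, j]` marks token pairs sharing a
    talker or target word; edge weights decay with symmetric KL divergence.
    Returns an (n_tokens, n_dims) embedding."""
    n = len(spectra)
    W = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            if connect[i, j]:
                W[i, j] = W[j, i] = np.exp(-symmetric_kl(spectra[i], spectra[j]) / sigma)
    d = W.sum(axis=1)
    # Symmetric normalized Laplacian; its eigenvectors (skipping the trivial
    # first one) give the low-dimensional coordinates of each token.
    d_inv_sqrt = 1.0 / np.sqrt(d)
    L_sym = np.eye(n) - (d_inv_sqrt[:, None] * W) * d_inv_sqrt[None, :]
    vals, vecs = np.linalg.eigh(L_sym)
    return (d_inv_sqrt[:, None] * vecs)[:, 1:1 + n_dims]
```

With a two-dimensional embedding, each axis can then be inspected for linear separation of /k/ and /t/ tokens, as in the abstract.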
SUNDAY MORNING, 25 JUNE 2017
ROOM 302, 10:35 A.M. TO 12:20 P.M.
Session 1aSP
Signal Processing in Acoustics: Application of Bayesian Methods to Acoustic Model Identification and
Classification I
Edmund Sullivan, Cochair
Research, Prometheus, 46 Lawton Brook Lane, Portsmouth, RI 02871
Ning Xiang, Cochair
School of Architecture, Rensselaer Polytechnic Institute, Greene Building, 110 8th Street, Troy, NY 12180
Chair’s Introduction—10:35
Invited Papers
10:40
1aSP1. Model selection for profile structure in Bayesian geoacoustic inversion. Stan E. Dosso (School of Earth & Ocean Sci, Univ.
of Victoria, PO Box 1700, Victoria, BC V8W 3P6, Canada, sdosso@uvic.ca), Hefeng Dong (Dept. of Electron. Systems, Norwegian
Univ. of Sci. and Technol., Trondheim, Norway), and Kenneth Duffaut (Dept. of GeoSci. and Petroleum, Norwegian Univ. of Sci. and
Technol., Trondheim, Norway)
This paper considers model selection in Bayesian geoacoustic inversion, specifically, the role of seabed parameterization in resolving
geoacoustic profile structure. Bayesian inversion is formulated in terms of the posterior probability density (PPD) over the model parameters which are sampled numerically: Metropolis-Hastings sampling in principal-component space enhanced by parallel tempering is
employed here. A key aspect of quantitative geoacoustic inversion is that of parameterizing the seabed model. Trans-dimensional (trans-D) inversion methods model the seabed as a sequence of discontinuous uniform layers and sample probabilistically over the number of layers. However, in some cases seabed properties may be expected to vary as smooth, continuous gradients that are not well represented by uniform layers. Most gradient-based inversions assume a representative functional form, such as a power law. A recent alternative is based on a linear combination of Bernstein-polynomial basis functions. This approach is more general and allows the form of
the profile to be determined by the data, rather than by a subjective model choice. This paper compares trans-D, power-law, and Bernstein-polynomial inversions for the problem of estimating seabed shear-wave speed profiles from the dispersion of interface waves. Simulations and data from Oslofjorden and/or the North Sea will be considered.
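A Bernstein-polynomial profile parameterization of the kind described above amounts to a weighted sum of basis polynomials over normalized depth; the coefficients become the sampled parameters in the inversion. The function and variable names below are illustrative.

```python
import numpy as np
from math import comb

def bernstein_profile(coeffs, depths, z_max):
    """Evaluate a geoacoustic profile (e.g., shear-wave speed vs. depth) as a
    linear combination of Bernstein basis polynomials over x = z / z_max:
        B_{k,n}(x) = C(n,k) * x**k * (1 - x)**(n - k),  k = 0..n,
    with profile(z) = sum_k coeffs[k] * B_{k,n}(z / z_max)."""
    n = len(coeffs) - 1
    x = np.asarray(depths, dtype=float) / z_max
    basis = np.array([comb(n, k) * x**k * (1 - x)**(n - k) for k in range(n + 1)])
    return np.asarray(coeffs, dtype=float) @ basis
```

Two convenient properties follow from the basis: the profile equals the first and last coefficients at the top and bottom of the layer, and equal coefficients give a constant profile (the basis sums to one at every depth), so smooth gradients emerge naturally from the data-driven coefficients.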
11:00
1aSP2. Tempered particle filters for non-linear model selection and uncertainty quantification of highly informative seabed data.
Jan Dettmer (Dept. of GeoSci., Univ. of Calgary, 3800 Finnerty Rd., Victoria, Br. Columbia V8W 3P6, Canada, jan.dettmer@ucalgary.ca) and Stan E. Dosso (Univ. of Victoria, Victoria, BC, Canada)
Knowledge about seabed properties is important for many geoscientific and navy applications, such as sediment transport, sonar performance prediction, and detection of unexploded ordnance. Bayesian model selection and uncertainty estimation have been shown to
provide detailed, quantitative seabed knowledge that is valuable for these applications. However, the extreme computational cost limits
the utility of Bayesian methods for increasingly common big data sets. Here, we consider geoacoustic reflectivity surveys based on
towed source and receiver arrays. Such systems produce thousands of data sets with high information content that require non-linear
inversion along tracks many kilometers in length and cannot be analyzed by standard Bayesian sampling. A particle filter that includes
reversible jump Markov chain Monte Carlo updates is applied here for efficient posterior probability estimation. Efficiency is improved
by likelihood tempering of various particle subsets and including information exchange within the particle cloud. The tempering applies
to reversible jump updates and leads to significantly improved exploration of the trans-dimensional seabed model which accounts for
changes in the number of sediment layers and their properties along the track. For challenging track sections, where data change
abruptly, the particle cloud is resampled to increase the number of tempered particles. [Work supported by the U.S. Dept. of Defense,
through SERDP, and by ONR Ocean Acoustics.]
11:20
1aSP3. Bayes, models, and data. Edmund Sullivan (EJS_Consultants, 46 Lawton Brook Ln., Portsmouth, RI 02871, bewegungslos@fastmail.fm)
The Kalman filter is often described as a Bayesian processor. However, it is more than that. It is also a natural framework for introducing a physical model into an estimation scheme. As such, it is a processor that can improve an estimation scheme in two distinct ways—by using prior statistics and by introducing a model. It is shown how the prior statistics are implicitly included in the Kalman update equation, and how models can be introduced in two places—the prediction equation and the measurement equation. It is outlined how the Kalman equations evolve from the particle filter upon the assumptions of linearity and Gaussianity. Since the update equation for the particle filter is Bayes’ rule, it is clear that the resulting Kalman update equation is also a
form of Bayes’ rule, thus verifying that the Kalman filter is indeed a Bayesian processor. It is then shown how a model can be introduced, further improving the quality of the estimate. An example is given, based on real data, where the bearing estimate from a short
towed array is found for the case of a significant bearing rate.
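The two entry points for a model named in the abstract, the prediction equation and the measurement equation, are visible in a minimal linear-Gaussian Kalman step. This is a textbook sketch, not the processor used in the talk; all symbols follow the standard state-space convention.

```python
import numpy as np

def kalman_step(x_prior, P_prior, F, Q, H, R, z):
    """One Kalman cycle. The physical model enters via F (prediction equation)
    and H (measurement equation); prior statistics enter via P, Q, and R."""
    # Prediction equation: the dynamics model F propagates the state estimate
    x_pred = F @ x_prior
    P_pred = F @ P_prior @ F.T + Q
    # Update: Bayes' rule in linear-Gaussian form; the gain K weights the
    # measurement innovation against the prior covariance
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_post = x_pred + K @ (z - H @ x_pred)
    P_post = (np.eye(len(x_post)) - K @ H) @ P_pred
    return x_post, P_post
```

With equally uncertain prior and measurement (P = R), the update splits the difference between prediction and observation and halves the posterior variance, which is exactly the behavior Bayes' rule prescribes for two Gaussians of equal weight.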
11:40
1aSP4. Bayesian modal identification with particle filtering for sediment property inversion. Zoi-Heleni Michalopoulou, Andrew
Pole (Mathematical Sci., New Jersey Inst. of Technol., 323 ML King Blvd., Newark, NJ 07102, michalop@njit.edu), and Nattapol
Aunsri (Information Technol., Mae Fah Luang Univ., Chiang Rai, Thailand)
Sequential Bayesian filtering methods have been previously used in dispersion curve tracking for long range sound propagation in
the ocean. Modal frequency probability density functions were extracted for sound speed inversion. Here, we calculate modal arrival
time densities, instead, and employ them for inversion for sediment sound speed and thickness and water column depth. Bayesian mode
identification is performed to this end. We investigate two methods for describing the statistical errors in power spectra, which we use in
the arrival time density calculation using normal modes. We then link these densities to the parameters of interest. The approaches are
tested with synthetic data as well as data collected in the Gulf of Mexico. [Work supported by ONR.]
12:00
1aSP5. Room acoustic modal analysis via model-based Bayesian inference. Douglas Beaton (TALASKE | Sound Thinking, 1033
South Blvd., Oak Park, IL 60302, douglas@talaske.com) and Ning Xiang (Graduate Program in Architectural Acoust., Rensselaer
Polytechnic Inst., Troy, NY)
This work illustrates the application of Bayesian inference to the analysis of modal behavior in an experimentally measured room impulse response at a single location. The Prony model is employed to represent the impulse response as a sum of exponentially decaying
sinusoids in the time domain. Bayesian model selection is applied to estimate the appropriate number of modes in the model. Bayesian
parameter estimation determines the amplitude, decay time, and modal frequency of each mode. The Bayesian analysis is performed
using a nested sampling approach to approximate the evidence for each candidate model. Results from the analysis are verified by a Fourier analysis of the experimentally measured data, and also with classical modal theory. Additional experimental measurements are performed to validate individual modal parameter estimates. The likelihood landscape for the selected model is further explored by
uniformly sampling near the point of convergence at the end of nested sampling. Animations are used to observe transient behavior of
the sample population throughout the analysis.
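As a loose illustration of the Prony-type model described above (a sum of exponentially decaying sinusoids), not the authors' nested-sampling implementation, the sketch below synthesizes a single-mode decaying sinusoid and recovers its modal frequency and decay constant; all parameter values are hypothetical.

```python
import numpy as np
from scipy.signal import hilbert

fs = 8000.0                               # sample rate [Hz]
t = np.arange(0.0, 1.0, 1.0 / fs)

# Synthetic single-mode "room response" (hypothetical values):
f0, tau = 125.0, 0.35                     # modal frequency [Hz], decay constant [s]
h = np.exp(-t / tau) * np.cos(2 * np.pi * f0 * t)

# Modal frequency from the magnitude-spectrum peak.
spec = np.abs(np.fft.rfft(h))
freqs = np.fft.rfftfreq(len(h), 1.0 / fs)
f_est = freqs[np.argmax(spec)]

# Decay constant from a linear fit to the log of the analytic-signal envelope
# (trim the ends, where the Hilbert envelope has edge artifacts).
env = np.abs(hilbert(h))
sl = slice(len(t) // 20, -(len(t) // 20))
slope, _ = np.polyfit(t[sl], np.log(env[sl]), 1)
tau_est = -1.0 / slope

print(f_est, tau_est)
```

A full Prony analysis would fit several such terms jointly and let Bayesian model selection decide how many are supported by the data.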
3471
J. Acoust. Soc. Am., Vol. 141, No. 5, Pt. 2, May 2017
Acoustics ’17 Boston
3471
SUNDAY MORNING, 25 JUNE 2017
ROOM 309, 10:35 A.M. TO 12:00 NOON
Session 1aUWa
Underwater Acoustics, Acoustical Oceanography, Signal Processing in Acoustics, Structural Acoustics and
Vibration, Physical Acoustics and Biomedical Acoustics: Passive Sensing, Monitoring, and
Imaging in Wave Physics I
Karim G. Sabra, Cochair
Mechanical Engineering, Georgia Institute of Technology, 771 Ferst Drive, NW, Atlanta, GA 30332-0405
Philippe Roux, Cochair
ISTerre, University of Grenoble, CNRS, 1381 rue de la Piscine, Grenoble 38041, France
Chair’s Introduction—10:35
Invited Papers
10:40
1aUWa1. A single-sided representation for passive and active Green’s function retrieval, time-reversal acoustics, and
holographic imaging. Kees Wapenaar (Delft Univ. of Technol., Stevinweg 1, Delft 2628CN, Netherlands, c.p.a.wapenaar@tudelft.nl)
The homogeneous Green’s function, defined as the superposition of the Green’s function and its time-reversal, plays an important
role in a variety of acoustic applications, such as passive and active acoustic Green’s function retrieval, seismic interferometry, time-reversal acoustics, and holographic imaging. An exact representation of the homogeneous Green’s function originates from the field of optical holographic imaging (Porter, 1970, JOSA). In this representation, the homogeneous Green’s function between two points A and B
is expressed as an integral along an arbitrary boundary enclosing A and B. This implies that the Green’s function between A and B can
be retrieved from measurements carried out at a closed boundary, or, via reciprocity, from passive observations at A and B of the
responses to sources on a closed boundary. In practical situations, the closed-boundary integral usually needs to be approximated by an
open-boundary integral. This can lead to significant artifacts in the retrieved Green’s function. I will discuss a new, single-sided representation of the homogeneous Green’s function, which obviates the need for omnidirectional access. Like the classical closed-boundary
representation, this new single-sided representation fully accounts for multiple scattering. I will indicate applications of this new representation in the aforementioned fields.
11:00
1aUWa2. Fluctuations in the cross-correlation for fields lacking full diffusivity: The statistics of spurious features. Richard Weaver
and John Y. Yoritomo (Phys., Univ. of Illinois at Urbana-Champaign, 1110 W. Green, Urbana, IL 61801, r-weaver@illinois.edu)
Inasmuch as ambient noise fields are often not fully diffuse, the question arises as to how, or whether, noise cross-correlations converge to Green’s function in practice. Well-known theoretical estimates suggest that the quality of convergence scales with the square
root of the product of integration time and bandwidth. However, correlations from natural environments often show random features too
large to be consistent with fluctuations from insufficient integration time. Here, it is argued that empirical seismic correlations suffer in
practice from spurious arrivals due to scatterers, and not from insufficient integration time. Estimates are sought for differences by considering a related problem consisting of waves from a finite density of point sources. The resulting cross-correlations are analyzed for
their mean and variance. The mean is, as expected, Green’s function with amplitude dependent on noise strength. The variance is found
to have support for all times up to its maximum at the main arrival. The signal-to-noise ratio there scales with the square root of source
density. Numerical simulations support the theoretical estimates. The result permits estimates of spurious arrivals’ impact on identification of cross-correlations with Green’s function and indicates that spurious arrivals may affect estimates of amplitudes, complicating
efforts to infer attenuation.
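The emergence of Green’s function from noise cross-correlations can be explored numerically. The toy model below (my own sketch with invented parameters, not the authors' simulation) places independent white-noise sources on one side of a pair of receivers in a 1-D non-dispersive medium; the peak of the cross-correlation of the two noise records emerges at the inter-receiver travel time.

```python
import numpy as np

rng = np.random.default_rng(0)
fs, T, c = 50.0, 100.0, 1.0            # sample rate, duration, wave speed (toy units)
n = int(fs * T)
x1, x2 = 0.0, 5.0                      # receiver positions; travel time d/c = 5 s

# Independent white-noise sources, all to the left of both receivers,
# so every arrival reaches x1 first and x2 a time d/c later.
r1 = np.zeros(n)
r2 = np.zeros(n)
for xs in rng.uniform(-40.0, -10.0, 50):
    s = rng.standard_normal(n)         # independent noise per source
    r1 += np.roll(s, int(round(fs * abs(xs - x1) / c)))
    r2 += np.roll(s, int(round(fs * abs(xs - x2) / c)))

# The cross-correlation peaks at the inter-receiver travel time
# (one-sided here, because the source distribution is one-sided).
xc = np.correlate(r2, r1, mode="full")
lags = (np.arange(len(xc)) - (n - 1)) / fs
t_peak = lags[np.argmax(xc)]
print(t_peak)
```

Adding scatterers or a sparse source distribution to such a model is what produces the spurious correlation features whose statistics the abstract analyzes.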
11:20
1aUWa3. Global propagation of seismic body waves and correlation. Michel Campillo, Lise Retailleau, Pierre Boue, Lei Li
(ISTerre, Universite Grenoble Alpes, ISTerre, UGA Maison des GeoSci., Grenoble 38041, France, michel.campillo@univ-grenoble-alpes.fr), Piero Poli (EAPS, MIT, Cambridge, MA), and Maarten de Hoop (Rice Univ., Houston, TX)
We discuss the nature of retrieved body waves at teleseismic distances from correlation of records in two separate bands T<10 s and
T>30 s. The short period correlations indicate the presence of deep phases that appear as correct reconstructions of actual phases. We
present an example of application to the reflectivity of the core-mantle boundary region. Careful tests show the reliability of the images
produced with ambient noise records. In contrast, we analyze long period records and show that the correlations are dominated by
strong coherent phases (with time close to actual ScS or P’P’df) that are the signatures of high quality factor normal modes. By using
array analysis and spectral analysis, we identify the dominant constituents. We then make use of geometrical quantization to derive the
ballistic reverberations of rays that contribute to the emergence of signals at times close to body wave arrivals. Our study indicates that
the signals measured in the long period correlations have a physical significance, but their interpretation as station-to-station seismic rays
is nontrivial.
11:40
1aUWa4. Passive acoustic remote sensing of the coastal ocean using interferometry of diffuse noise fields. Oleg A. Godin (Phys.
Dept., Naval Postgrad. School, 833 Dyer Rd., Bldg. 232, Monterey, CA 93943-5216, oagodin@nps.edu) and Michael G. Brown
(RSMAS, Univ. of Miami, Miami, FL)
A two-point correlation function of a perfectly diffuse noise field is known to contain all the information about the environment that
can be obtained using transceivers placed at the two points, provided that environmental parameters are time-independent. This theoretical prediction underlies the approach to passive remote sensing that is known as noise (or wave) interferometry. However, acoustic noise
in the ocean is never perfectly diffuse, except at very high frequencies, where noise of thermal origin dominates. Moreover, the averaging times necessary for deterministic features to emerge from noise cross-correlations far exceed the time scales of temporal variations
of the ocean surface (e.g., due to surface gravity waves) and of the water column (e.g., due to internal gravity waves and tides). This paper
reviews current theoretical understanding of limitations of noise interferometry, which result from time-dependence of environmental
parameters and noise anisotropy in the horizontal and vertical planes. It is demonstrated that, within these limitations, phase-coherent
data processing techniques, including back-propagation, waveform matching, and time-warping, can be successfully applied to measured
noise cross-correlations to characterize seafloor properties and evaluate current velocity in a coastal ocean. [Work supported by NSF and
ONR.]
SUNDAY MORNING, 25 JUNE 2017
ROOM 306, 10:40 A.M. TO 12:00 NOON
Session 1aUWb
Underwater Acoustics: Underwater Acoustic Uncertainty
Andrey K. Morozov, Chair
Teledyne, 49 Edgerton Drive, North Falmouth, MA 02556
Contributed Papers
10:40
1aUWb1. Modeling the shipping noise in uncertain environment for
marine space planning. Florent Le Courtois, G. Bazile Kinda, and Yann
Stephan (HOM, Shom, 13, rue du Chatellier BP 30316, Brest 29603,
France, florent.le.courtois@shom.fr)
Shipping noise is a major component of the underwater soundscape
at low frequency. The worldwide growth of the fleet over several decades
has raised concerns about the potential impacts of noise pollution on marine ecosystems. Modeling shipping noise levels at large scale has become an important
task of marine environmental policies. However, because of fluctuating
and unknown environmental parameters, the numerical model may present a
bias in the estimated levels of up to several tens of dB. To tackle this problem,
this paper relies on a semi-empirical formulation of the noise level standard
deviation, related to environmental mismatch. The model provides monthly
noise atlases at world scale, using statistical distributions of shipping. Results
are presented using worldwide ship traffic data for the years 2003, 2012,
and 2016. The model provides a relevant tool to monitor the evolution of ambient noise
and has been applied to the evaluation of the Marine Strategy
Framework Directive.
11:00
1aUWb2. Dynamically orthogonal equations for stochastic underwater
sound propagation. Wael Hajj Ali, Johnathan H. Vo, and Pierre F.
Lermusiaux (Computation for Design and Optimization Program, Dept. of
Mech. Eng., Massachusetts Inst. of Technol., 77 Massachusetts Ave.,
Cambridge, MA 02139, whajjali@mit.edu)
Grand challenges in ocean acoustic propagation and inference are to
accurately capture the dynamic environmental uncertainties and to predict
the evolving probability density distribution of stochastic acoustic waves,
all efficiently and rigorously, using the governing partial differential equations (PDEs). To start addressing these needs, the stochastic dynamically orthogonal (DO) PDEs for the parabolic wave equation are derived and
numerical schemes for their integration are obtained. Within the parabolic
approximation, these equations are the optimal reduced-order representation
of stochastic acoustic waves within the uncertain sound speed environment.
The DO equations govern the propagation of the mean field, the DO modes,
and their stochastic coefficients. Examples are provided for a set of idealized
test cases as well as for more realistic ocean environments, and predictions
are contrasted with those of other uncertainty quantification schemes. The
utilization of DO equations for end-to-end uncertainty prediction within
oceanographic-seabed-acoustic-sonar dynamical systems is discussed.
11:20
1aUWb3. Normal-mode statistics of sound scattering by a rough elastic
boundary in an underwater waveguide in a fully coupled mode approach,
including back-scattering. Andrey K. Morozov (Appl. Ocean Phys. &
Eng., Woods Hole Oceanographic Inst., 49 Edgerton Dr., North Falmouth,
MA 02556, amorozov@teledyne.com) and John A. Colosi (Dept. of
Oceanogr. Graduate School of Eng. and Appl. Sci., Naval Postgrad. School,
Monterey, CA)
Underwater sound scattering by a rough sea surface, ice, or a rough
elastic bottom is studied. The effects of scattering from a rough elastic
boundary are included in a coupled mode propagation model through an analytical equation for the impedance of a solid layer. A full two-way coupled mode
solution was used to derive the stochastic differential equations for the second-order statistics in a Markov approximation for a transport theory. The
coupled mode matrix was approximated by a linear function of one random
parameter such as ice thickness or the surface perturbation. A one-parameter
Gaussian model has the form of a two-way stochastic differential equation for
the correlation of normal-mode coefficients. The derived equation relates
the correlation matrix of mode coefficients (for both directions) to the
correlation function of the rough boundary and the power spectrum of
its slopes. The theory gives a solution for sound attenuation and horizontal
coherence over long-range propagation along random interfaces. The result
can be used for the estimation of sound attenuation in long-range under-ice
propagation or attenuation of seismic waves in an underwater waveguide
with random bathymetry.
11:40
1aUWb4. Sonar inter-ping noise field characterization during cetacean
behavioral response studies off Southern California. Shane Guan (Office
of Protected Resources, National Marine Fisheries Service, 1315 East-West
Hwy., SSMC-3, Ste. 13700, Silver Spring, MD 20902, shane.guan@noaa.
gov), Brandon L. Southall (SEA, Inc., Aptos, CA), Jay Barlow (Southwest
Fisheries Sci. Ctr., National Marine Fisheries Service, La Jolla, CA), and
Joseph F. Vignola (Dept. of Mech. Eng., The Catholic Univ. of America,
Washington, DC)
Concerns about the effects of military mid-frequency active sonar (MFAS)
on marine mammals have motivated considerable recent research and technology development. However, robust characterizations of the complex
acoustic field during sonar operations have been limited. Additionally,
potential effects on marine mammals beyond simple exposure levels are not
well understood. Here, we investigate inter-ping reverberation during a behavioral response study with simulated MFAS off California in waters
deeper than 300 m using drifting acoustic recording buoys. Acoustic data
were collected before, during, and after playbacks of simulated MFAS. An
incremental computational method was developed to quantify the inter-ping
sound field during MFAS transmissions. Descriptive statistics are used to
compare the characteristics of the inter-ping sound field and the natural background. Results show significantly elevated sound levels within the MFAS
frequency band of the inter-ping sound field. In addition, the duration of the elevated inter-ping sound field depends on the MFAS source distance. At a distance of 900-1300 m from the source, the inter-ping sound field remained 5 dB
above natural background levels for approximately 15 s. The elevated inter-ping sound levels at such large distances are most likely due to volume reverberation of the marine environment, although multipath propagation may
also contribute to this phenomenon.
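As a generic illustration of the kind of band-level analysis described (a sketch with invented numbers, not the study's incremental method), the code below band-passes a synthetic recording around a hypothetical ping frequency and counts how long the 1-s band levels stay 5 dB above the pre-ping background.

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 20000
t = np.arange(0.0, 30.0, 1.0 / fs)
rng = np.random.default_rng(1)

# Synthetic record: broadband background noise plus a 3.3 kHz "ping" at
# t = 10 s with an exponentially decaying reverberant tail (all hypothetical).
x = 0.01 * rng.standard_normal(len(t))
env = np.where(t >= 10.0, np.exp(-(t - 10.0) / 3.0), 0.0)
x = x + 0.2 * env * np.sin(2 * np.pi * 3300.0 * t)

# Band-pass around the ping frequency, then compute 1-s window levels.
b, a = butter(4, [3000.0, 3600.0], btype="band", fs=fs)
xb = filtfilt(b, a, x)
lev = np.array([10 * np.log10(np.mean(xb[i:i + fs] ** 2))
                for i in range(0, len(xb) - fs + 1, fs)])

background = np.median(lev[:9])              # pre-ping windows as background
seconds_above = int(np.sum(lev > background + 5.0))
print(seconds_above)
```

With these invented decay and level values the band level stays elevated for roughly ten seconds; the measured durations in the abstract depend on the real reverberation of the environment.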
SUNDAY AFTERNOON, 25 JUNE 2017
ROOM 208, 1:15 P.M. TO 3:40 P.M.
Session 1pAAa
Architectural Acoustics and Noise: Noise and Soundscapes in Restaurants and Other Public
Accommodations
Brigitte Schulte-Fortkamp, Cochair
Institute of Fluid Mechanics and Engineering Acoustics, TU Berlin, Einsteinufer 25, Berlin 101789, Germany
Kenneth P. Roy, Cochair
Building Products Technology Lab, Armstrong World Industries, 2500 Columbia Ave., Lancaster, PA 17603
Chair’s Introduction—1:15
Invited Papers
1:20
1pAAa1. Soundscape versus noise—From the sound recording to the public space. Alexander U. Case (Sound Recording Technol.,
Univ. of Massachusetts Lowell, 35 Wilder St., Ste. 3, Lowell, MA 01854, alex@fermata.biz)
The enemy of desired sounds is unwanted sonic competition. The positive elements of an interior soundscape need to be rid of noisy
rivals. Solutions in the built environment might be inspired by approaches taken in the creation of sound recordings. Of course, noise is
minimized where possible, but remaining noise must be dealt with. Masking the noise, taking care that the noise does not mask the
soundscape, embracing the noise, and creating an environment of heightened awareness can improve the listener’s comfort and sense of
pleasure when immersed in a soundscape clouded by noise.
1:40
1pAAa2. The soundscape of dining. Keely Siebein (Siebein Assoc., Inc., 625 NW 60th St., Ste. C, Gainesville, FL 32607, ksiebein@
siebeinacoustic.com)
This paper explores how applying soundscape theory can address the acoustic concerns of various classes of restaurants, from luxury/
fine dining to moderate to fast food. A representative case study of each type of restaurant is examined. By analyzing the soundscape components of each dining space, such as the acoustic community, taxonomy, and itinerary, and the specific paths of communication that take
place, one can begin to develop an “acoustic identity” for each room. Each “acoustic identity” is shaped by the users, the aesthetic intent,
and the soundscape analysis. Diagnostic measurements are made based on how people use the space and the various communication paths
present. Impulse responses, alpha-bars, and other acoustic metrics based on these communication paths are used to assist in determining
basic design approaches and acoustic interventions for each space. By combining the soundscape analysis approach with diagnostic
measurements rooted in the actual communication paths that are present, one can define and shape the acoustical identity of the restaurant.
2:00
1pAAa3. Case study: Renovation of a historical building to create a boutique hotel, restaurant, and pub in a harbor town. Steve
Pettyjohn (The Acoust. & Vib. Group, Inc., 5765 9th Ave., Sacramento, CA, spettyjohn@acousticsandvibration.com)
An existing inn and restaurant were to be refurbished to create a boutique hotel, restaurant, and pub in Ft. Bragg, California. Live
bands would play in the restaurant while hotel guests would occupy the rooms above the restaurant and pub and at the rear of the pub.
The existing structure allowed only a limited number of guest rooms, so other structures were to be renovated to provide more lodging space.
The proposed uses create multiple soundscape types and conflicts between them. Sound levels with or without bands must
be acceptable in guest rooms. Both wall and floor/ceiling assemblies were designed to meet State Codes and the standard of the industry
for sound and impact transmission loss. The soundscape goal for the pub and the restaurant was to control excess sound generated by the
patrons. When a band is playing, the potential for significant impacts is greater without acoustical treatment and some control over the
band volume. Options for acoustical treatments within the restaurant and pub are discussed based on the architectural
designs and goals. Alternate methods of controlling sound transmission through wall and floor/ceiling assemblies are presented based on
real-world conditions and contractors’ methods.
2:20
1pAAa4. Auralization as a tool for acoustical design of restaurants and public spaces. Kelsey Hochgraf (Acentech, 33 Moulton St.,
Cambridge, MA 02138, khochgraf@acentech.com)
Auralization is an invaluable decision-making tool for the acoustical design of restaurants and other public gathering spaces, and
accurate modeling and calibration of sound sources is critical to achieving perceptually plausible soundscapes of such spaces. Unlike
auralizations of performing arts venues, auralizations of restaurants and public gathering spaces afford owners, architects, and consultants the opportunity to directly experience how difficult (or easy) it is to communicate with others when immersed in the soundscape.
Also unlike auralizations of performing arts venues, a realistic auralization of a restaurant or public gathering space must account for the
Lombard Effect when calibrating source levels of occupant and activity noise. In this presentation, we will briefly review the history and
recent improvements of Acentech’s 3DListening studio in Cambridge, MA, including Lombard Effect modeling. Three recent case studies will be used to illustrate the unique role of auralization in the architectural design process, including a restaurant, a college pub
located underneath residences, and a multi-level collaborative, interdisciplinary work space at an independent school.
Contributed Papers
2:40
1pAAa5. Experimental validation of Bayesian design for broadband
multilayered microperforated panel absorbers. Yiqiao Hou, Cameron J.
Fackler, and Ning Xiang (Graduate Program in Architectural Acoust.,
Rensselaer Polytechnic Inst., Greene Bldg., 110 8th St., Troy, NY 12180,
hyqjoy@gmail.com)
Single-layer microperforated panel (MPP) absorbers often exhibit limited absorbing bandwidth. A Bayesian inference framework (encompassing
both model selection and parameter estimation) has been utilized to design
broadband MPP absorbers. In this work, broadband multilayered MPP
absorbers are experimentally validated. To demonstrate the ability to meet
practical requirements of both high absorption and wide bandwidth, the current investigation uses a sample design scheme in the frequency range from
300 Hz to 2.4 kHz. A minimum requirement of three MPP layers and the
relevant three-layer MPP parameters are derived by the two levels of Bayesian inference to meet the design scheme. MPP samples, based on the predicted design scheme, are fabricated and used to conduct normal-incidence
sound absorption measurements in an impedance tube. In order to quantify
the fabrication tolerance, the measured acoustic data are used to estimate
the MPP parameters of the constructed absorber using an inverse Bayesian
inference method. This paper will discuss the initial MPP design, experimental validations of the designed absorption performance, and fabrication tolerance estimations of the MPP parameters.
3:00
1pAAa6. Soundscape of washroom equipment and its application.
Lucky S. Tsaih and Yosua W. Tedja (Dept. of Architecture, National
Taiwan Univ. of Sci. and Tech., 43 Keelung Rd., Sec. 4, Taipei 10607,
Taiwan, akustx@mail.ntust.edu.tw)
The soundscapes of three washrooms at NTUST, with equipment such
as toilets, urinals, wash basins, showers, hand dryers, and tissue dispensers,
have been studied. The acoustical attributes of each type of washroom
equipment have been measured, recorded, and analyzed with LZFmax values
for the 12.5 Hz to 20 kHz frequency bands. Despite the intermittent occurrence of equipment sounds, the overall maximum sound pressure level for
the full frequency spectrum has been identified as 92 dB / 83 dBA, aligned
with an NC 74 curve. Such high transient sounds could disrupt sleep in adjacent dormitory rooms and possibly reduce the quality of lecturing
in adjacent classrooms. Lightweight gypsum board and metal stud partitions, concrete masonry units, and concrete are the typical partitions used in
washrooms of residential, healthcare, hospitality, and school buildings. The transmission loss values of the partitions were calculated with Insul in the initial
study. It was found that the majority of partitions studied have sufficient
transmission loss values in the frequency bands of 100 Hz and above, but the
transmission loss will not be sufficient to reduce washroom equipment noise
levels in the frequency bands between 50 Hz and 100 Hz.
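The 92 dB / 83 dBA pair quoted above reflects A-weighting of the measured spectrum. A small sketch of the standard IEC 61672 A-weighting curve applied to hypothetical octave-band levels (the band values below are invented, not the study's data):

```python
import numpy as np

def a_weight_db(f):
    """IEC 61672 A-weighting in dB at frequency f [Hz]."""
    f = np.asarray(f, dtype=float)
    ra = (12194.0**2 * f**4) / (
        (f**2 + 20.6**2)
        * np.sqrt((f**2 + 107.7**2) * (f**2 + 737.9**2))
        * (f**2 + 12194.0**2)
    )
    return 20 * np.log10(ra) + 2.00      # normalized to 0 dB at 1 kHz

# Combine hypothetical octave-band SPLs into overall dB(Z) and dB(A).
fc = np.array([63, 125, 250, 500, 1000, 2000, 4000, 8000])
lz = np.array([80, 78, 75, 74, 72, 70, 66, 60])     # example band levels [dB]
overall_z = 10 * np.log10(np.sum(10 ** (lz / 10)))
overall_a = 10 * np.log10(np.sum(10 ** ((lz + a_weight_db(fc)) / 10)))
print(round(float(overall_z), 1), round(float(overall_a), 1))
```

Because A-weighting strongly attenuates the low bands, the overall dBA figure falls well below the unweighted level, as in the washroom measurements.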
3:20–3:40 Panel Discussion
SUNDAY AFTERNOON, 25 JUNE 2017
ROOM 207, 1:15 P.M. TO 5:40 P.M.
Session 1pAAb
Architectural Acoustics: Prediction of Direct and Flanking Airborne and Impact Sound Transmission
Edwin Reynders, Cochair
KU Leuven, Kasteelpark Arenberg 40, Leuven 3001, Belgium
John LoVerde, Cochair
Veneklasen Associates, 1711 16th St., Santa Monica, CA 90404
Jordi Poblet-Puig, Cochair
DECA - LaCaN, Universitat Politècnica de Catalunya, C/Jordi Girona 1-3, Campus Nord, B1-206, Barcelona E-08034, Spain
Chair’s Introduction—1:15
Contributed Paper
1:20
1pAAb1. Investigation of heavy-soft impact noise transmission in fitness
facilities. John LoVerde, David W. Dong, Samantha Rawlings, and Richard
Silva (Veneklasen Assoc., 1711 16th St., Santa Monica, CA 90404,
jloverde@veneklasen.com)
Fitness applications within a mixed-use building can disturb other occupants of the building and, as a result, require assessment and mitigation.
There are not currently established standards for conducting repeatable measurement of noise or vibration from fitness activities. The focus of recent
work in heavy impacts has been weights, as these are frequently used and
typically expected to represent the worst-case scenario. However, fitness
programs encompass a variety of activities that are dissimilar from heavy
weight impacts, such as use of the human body as an impact source (running, jumping, and dropping) and use of soft and limp materials as an
impact source (ropes, medicine balls). This paper presents measurement and
application of the predictive methods developed for a heavy rigid impact
source to a heavy soft impact source for comparison.
Invited Paper
1:40
1pAAb2. Efficient modeling of sound transmission through finite-sized thick and layered wall and floor systems. Carolina
Decraene (Civil Eng., KU Leuven, Kasteelpark Arenberg 40, Leuven 3001, Belgium, carolina.decraene@kuleuven.be), Arne Dijckmans
(Acoust. Div., Belgian Bldg. Res. Inst., Brussels, Belgium), and Edwin Reynders (Civil Eng., KU Leuven, Leuven, Belgium)
Built-up wall and floor systems such as roof panels, floors with floating screeds, etc., have found widespread application in building
construction. Achieving sufficient sound insulation with these systems is challenging because of their relatively low weight and complex
vibro-acoustic behavior. A fast and sufficiently accurate acoustic design tool is needed. The semi-analytical transfer matrix method is
able to efficiently compute the response of a thick or multilayered structure in the frequency-wavenumber domain but has important limitations. First, the system is assumed to be of infinite extent. At lower frequencies however, neglecting the modal behavior of the wall
can lead to large prediction errors. Second, integration over all possible incident plane waves is necessary to obtain the diffuse transmission loss, resulting in a high computation time. The transfer matrix approach is therefore extended in two ways. The modal behavior of
rectangular walls and floors with simply supported boundary conditions is approximately accounted for. Using the diffuse reciprocity
relationship, a hybrid modal transfer matrix-statistical energy analysis method is then developed such that integration of plane-wave
transmission over all angles of incidence is no longer necessary, largely decreasing the computational effort. The model is validated
against alternative numerical prediction models and experimental data.
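The angle integration that the hybrid method above avoids can be illustrated in its simplest form: the diffuse (field-incidence) transmission loss of a limp panel under the mass law. This is only a toy one-layer case, not the paper's modal transfer matrix-SEA model; the surface mass and frequencies are hypothetical.

```python
import numpy as np

rho0, c0 = 1.21, 343.0          # air density [kg/m^3] and speed of sound [m/s]
m = 25.0                        # panel surface mass [kg/m^2] (hypothetical)

theta = np.linspace(0.0, np.radians(78.0), 2000)   # field-incidence limit
w = np.cos(theta) * np.sin(theta)                  # diffuse-field weighting

def tl_diffuse(f):
    """Field-incidence mass-law transmission loss [dB] at frequency f [Hz]."""
    om = 2 * np.pi * f
    # Plane-wave transmissibility of a limp panel at each incidence angle.
    tau = 1.0 / (1.0 + (om * m * np.cos(theta) / (2 * rho0 * c0)) ** 2)
    return -10 * np.log10(np.sum(tau * w) / np.sum(w))

print([round(tl_diffuse(f), 1) for f in (125.0, 500.0, 2000.0)])
```

The numerical quadrature over incidence angle must be repeated at every frequency, which is exactly the cost that the diffuse reciprocity relationship in the abstract is designed to remove.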
Contributed Papers
2:00
1pAAb3. Acoustics of naturally ventilated double transparent facades.
Daniel Urban (A&Z Acoust. s.r.o., S.H.Vajanskeho 43, Nove Zamky 94079,
Slovakia, ing.daniel.urban@gmail.com), Bert Roozen (Dept. of Phys. and
Astronomy, Soft Matter and Biophys., KU Leuven, Leuven, Belgium), Peter
Zat’ko (A&Z Acoust. s.r.o., Bratislava, Slovakia), Monika Rychtarikova
(KU Leuven, Faculty of Architecture, Leuven, Belgium), Peter Tomasovič
(Dept. of Bldg. Structures, STU Bratislava, Faculty of Civil Eng.,
Bratislava, Slovakia), and Christ Glorieux (Dept. of Phys. and Astronomy,
Soft Matter and Biophys., KU Leuven, Leuven, Belgium)
This publication presents results of research on naturally ventilated Double Transparent Facades (DTF). The influence of the structural design of
DTFs on airborne sound insulation was investigated. For this purpose, 9
DTFs were measured in situ and 9 double transparent façade elements
were measured in a laboratory environment. The influences of the cavity thickness, the parallelism of the constituting layers, the amount of
absorbing surface in the cavity, and the effect of ventilation slots were
investigated. Based on the performed measurements, a prediction model
that allows a fast engineering calculation of the sound insulation of DTFs
was developed.
2:20
1pAAb4. Methodology for measuring and predicting heavy-weight
impact noise transmission. Richard Silva, David W. Dong, and John
LoVerde (Veneklasen Assoc., 1711 16th St., Santa Monica, CA 90404,
rsilva@veneklasen.com)
Noise and vibration from activity in fitness facilities, in particular the dropping of weights, is a common source of disturbance and complaint in the
United States of America (USA). Products have been developed to mitigate
such impacts, but quantitative and comparable data on their effectiveness are
lacking. In the United States, there is no standardized method for evaluating
the reduction in noise or vibration provided by these products, and also no
method for predicting noise and vibration levels in potentially affected
spaces. The authors’ previous research (Internoise 2015, Noise-Con 2016)
developed a preliminary test method to evaluate athletic tile flooring with
heavy weight drops. The method is based on the reduction in floor vibration
when the products are inserted. The method is analogous to the delta-Ln
term in the EN 12354 calculation method, except applying to heavy weight
sources, and can therefore be used to predict the resulting sound levels in
receiving spaces. This paper reports additional measurements on different
structural systems to validate the applicability of the method. Various flooring products are compared, and the accuracy and repeatability of the measurement method are evaluated.
Invited Papers
2:40
1pAAb5. The effects of the element damping in sound insulation predictions following EN12354. Eddy Gerretsen (Level Acoust. &
Vibrations, De Rondom 10, Eindhoven NL-5612 AP, Netherlands, eddy.gerretsen@planet.nl)
In the prediction of the sound insulation between dwellings in accordance with EN 12354, the damping of the elements in the actual
construction is an important aspect. It should be taken into account through the structural reverberation time or, in the case of well-damped
and generally lightweight elements, it is included in the input parameters. As a simplified approach, though, the effect of the damping
can be neglected. In this respect, several questions are relevant. In what situations should the structural reverberation time in the actual
construction be taken into account, or when is an element sufficiently damped to make the connections to other elements irrelevant?
What relations exist between the junction parameters for damped and reverberant elements? And what errors do we make in neglecting the
damping in the actual construction for the elements, for the junctions, or for both? Looking deeper into the equations and the relations
between them, some global answers can be given to these questions.
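The structural reverberation time mentioned above is tied to the element's total loss factor by the EN 12354 relation Ts = 2.2 / (f · η_total); a worked instance (the loss-factor value below is hypothetical):

```python
def structural_rt(f_hz: float, eta_total: float) -> float:
    """Structural reverberation time Ts [s] from the total loss factor,
    using the EN 12354 relation Ts = 2.2 / (f * eta_total)."""
    return 2.2 / (f_hz * eta_total)

# A well-damped element has a short structural reverberation time:
print(round(structural_rt(500.0, 0.02), 3))
```

Neglecting the damping correction amounts to assuming the laboratory and field loss factors coincide, which is the simplification whose error the abstract examines.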
3:00
1pAAb6. On the measurement of the radiation efficiency for the estimate of the resonant sound reduction index. Jeffrey Mahn
and Christoph Höller (National Res. Council Canada, 1200 Montreal Rd., Ottawa, ON K1C4N4, Canada, jeffrey.mahn@nrc-cnrc.gc.ca)
The estimate of the resonant sound reduction index has received attention over the years as the prediction method described in the
standard, ISO 15712 has been applied to lightweight building constructions. A method of estimating the resonant sound reduction index
involves the measurement of the total and the resonant radiation efficiencies of the building elements involved in the first order flanking
paths. The radiation efficiencies of different lightweight wall constructions, evaluated as part of a study conducted at the National
Research Council Canada, are presented. The study focused on the measurement of the radiation efficiencies with the aim of developing
guidelines for the measurements. Predicted values of the flanking transmission loss for each flanking path are compared to data which
was measured in the National Research Council’s eight room flanking facility.
3:20–3:40 Break
3:40
1pAAb7. Dilemmas in the assessment of the insulating properties of
double massive partition according to EN 12354-1. Dragana Sumarac
Pavlovic, Milos Bjelic, Milos Dinic, Miomir Mijic (School of Elec.
Eng., Univ. of Belgrade, Bulevar kralja Aleksandra 73, Belgrade
11000, Serbia, dsumarac@etf.rs), and Vlada Bezbradica (URSA, Beograd,
Belgrade, Serbia)
The introduced requirements for thermal insulation between dwellings,
as well as some practical reasons, have initiated greater use of double walls in
building design practice. The standard EN 12354-1 does not define a procedure for calculating the apparent sound reduction index for such constructions. The paper suggests a possible approach for calculating the
apparent sound reduction index based on the calculation methodology for
the single partition defined by the standard. Some theoretical analyses of the
sound energy transmission through direct and flanking paths in the case of a
double massive partition are presented. Verification of the proposed
approach was done by laboratory measurement and by FEM numerical
simulation. The analysis included a number of commonly used constructions
and types of their junctions.
Invited Paper
4:00
1pAAb8. The prediction of the flanking transmission in constructions of hollow concrete block masonry walls connected to
precast prestressed concrete hollow core floors. Jeffrey Mahn and Christoph Höller (National Res. Council Canada, 1200 Montreal
Rd., Ottawa, ON K1A 0R6, Canada, jeffrey.mahn@nrc-cnrc.gc.ca)
A common construction technique for multi-story buildings is to build walls of hollow concrete block masonry which are rigidly
connected to floors of precast prestressed concrete hollow core slabs. The airborne flanking transmission for buildings of this construction must be determined to predict the apparent sound transmission class to meet the requirements of the National Building Code of Canada. Ideally, this would be done using the prediction method and the vibration reduction index values found in the standard, ISO 15712.
However, prior studies conducted at the National Research Council Canada have shown that the hollow core slabs are neither homogeneous nor isotropic which are the requirements for predicting the values of the vibration reduction index (Kij) according to Annex E of the
standard ISO 15712. To determine if the theoretical values of the vibration reduction index could nonetheless be applied in practice, an
experimental investigation was performed on full scale junctions between concrete block masonry walls and precast concrete hollow
core floors built and tested in full compliance with the standard ISO 10848. The investigation found that conservative vibration reduction
index values could be predicted using Annex E of ISO 15712.
Contributed Paper
4:20
1pAAb9. Evaluation of mock up testing as a method to predict impact ratings of new hard surface floors in existing buildings. Jennifer Levins, David W. Dong, and John LoVerde (Veneklasen Assoc., 1711 16th St., Santa Monica, CA 90404, jlevins@veneklasen.com)
Renovations in multifamily residential buildings often involve the installation of new hard surface flooring. The new flooring must comply with building code minimum ratings. Homeowners Associations may also impose more stringent acoustic criteria. In many cases, the structure of the building is unknown or there are no published tests available for the desired floor-ceiling assembly. One method to evaluate performance of floor-ceiling assemblies is to conduct impact testing on a small sample of the floor assembly in situ. Although this is less costly than testing a fully installed floor, performance variations have been observed between mock-up test assemblies and the final installation. This paper will evaluate the accuracy of mock-up tests in predicting the impact rating of final floor installations.
Invited Papers
4:40
1pAAb10. Apparent sound insulation in cross-laminated timber buildings. Christoph Hoeller, Jeffrey Mahn (Construction, National
Res. Council Canada, 1200 Montreal Rd., Ottawa, ON K1A 0R6, Canada, christoph.hoeller@nrc.ca), and David Quirt (JDQ Acoust.,
Ottawa, ON, Canada)
With the 2015 National Building Code now in effect in Canada, predicting the apparent sound insulation in buildings from laboratory
measurements is becoming increasingly relevant for architects and designers. In North America, the apparent sound insulation is classified in terms of Apparent Sound Transmission Class (ASTC). The ASTC rating includes both the transmission through the separating assembly and the transmission via flanking paths. The National Research Council Canada has published a number of guideline documents
that detail the calculation procedure for ASTC and provide the required laboratory data for various construction types. In NRC Research
Report RR-335, “Apparent Sound Insulation in Cross-Laminated Timber Buildings” the focus is on buildings which are constructed
from cross-laminated timber (CLT) panels. Measurements of the direct sound insulation of CLT panels and the vibration attenuation at
their junctions were conducted at the NRC in recent years. The report RR-335 describes how to combine the relevant data to obtain estimates of the apparent transmission loss and the ASTC rating for a given CLT construction. This presentation will present highlights of
the report and demonstrate the use of the Simplified Method and the Detailed Method to calculate the ASTC rating.
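The path-by-path combination underlying the ASTC calculation can be illustrated with the EN 12354-1 / ISO 15712-1 energy summation, in which the transmission coefficients of the direct path and each flanking path are added. A minimal sketch with hypothetical path values (the function name and the numbers are illustrative, not taken from RR-335):

```python
import math

def apparent_sound_reduction(r_direct, r_flanking):
    """Combine a direct sound reduction index (dB) with a list of
    flanking-path indices (dB) by summing transmission coefficients,
    as in the EN 12354-1 / ISO 15712-1 energy summation."""
    tau = 10 ** (-r_direct / 10)          # direct transmission coefficient
    tau += sum(10 ** (-r / 10) for r in r_flanking)
    return -10 * math.log10(tau)

# Hypothetical example: a separating assembly with R = 50 dB and
# three flanking paths at 60 dB each.
r_apparent = apparent_sound_reduction(50, [60, 60, 60])
```

Because the coefficients add, any flanking path pulls the apparent value below the direct-path index, which is why flanking data are indispensable for an ASTC estimate.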
Contributed Paper
5:00
1pAAb11. Prediction of vibration reduction index for junctions made of cross laminated timber elements. Jordi Poblet-Puig (DC LaCaN, Universitat Politècnica de Catalunya, C/Jordi Girona 1-3, campus Nord, B1-206, Barcelona E-08034, Spain, jordi.poblet@upc.edu) and Catherine Guigou-Carter (Ctr. Scientifique et Technique du Bâtiment, Saint Martin d'Hères, France)
The new revision of standard EN 12354-1 will provide some prediction formulas for estimating the vibration reduction index (Kij)
of junctions made of cross laminated timber (CLT) elements. These are based on laboratory and in situ measurements. At the same time,
new Kij prediction formulas for heavyweight junctions have also been added to this revised standard, in order to include the large amount of
data generated by means of parametric numerical analysis. The goal of this research is to study if the same philosophy based on numerical models used for the heavyweight junctions can be extended to CLT elements junctions. This step is not direct because the modeling
of CLT structures implies a list of non-trivial aspects to take into account: (1) the details of the junction construction (i.e., direction and
type of screws, presence of steel angles and plates); (2) the orthotropy on the mechanical properties of CLT panels; and (3) the different
range of material properties and how damping must be considered in the numerical model. The contribution presents the advances made
in this direction: comparison with available experimental data, analysis of specific aspects of CLT junctions, comparison with the standard formulation, and extension to different junction types.
5:20
1pAAb12. Measurement data for the prediction of the flanking transmission in lightweight building constructions. Jeffrey Mahn
and Christoph Höller (National Res. Council Canada, 1200 Montreal Rd., Ottawa, ON K1C4N4, Canada, jeffrey.mahn@nrc-cnrc.gc.ca)
The ISO 15712 series of standards describe a method of predicting the flanking transmission in homogeneous isotropic building constructions. Since the method was first published in 1979, there has been great interest in applying the prediction method to lightweight
constructions which are neither isotropic nor homogeneous. The prediction method becomes more complicated for lightweight constructions because the resonant sound reduction indices of the elements must be estimated from measurement data including the sound reduction indices and the resonant and total radiation efficiencies. However, there is the question of which sound reduction index and
radiation efficiencies should be used. Should they be for the whole wall or a single panel? Results from a study conducted at the National
Research Council Canada are presented. Predicted values of the flanking transmission loss for each flanking path are compared to data measured in the National Research Council's eight-room flanking facility.
SUNDAY AFTERNOON, 25 JUNE 2017
ROOM 206, 1:20 P.M. TO 5:40 P.M.
Session 1pAAc
Architectural Acoustics: Teaching and Learning in Healthy and Comfortable Classrooms II
Arianna Astolfi, Cochair
Politecnico di Torino, Corso Duca degli Abruzzi, 24, Turin 10124, Italy
Viveka Lyberg-Åhlander, Cochair
Clinical Sciences, Lund, Logopedics, Phoniatrics and Audiology, Lund University, Scania University Hospital,
Lund S-221 85, Sweden
David S. Woolworth, Cochair
Oxford Acoustics, 356 CR 102, Oxford, MS 38655
Invited Papers
1:20
1pAAc1. Compiled acoustic and indoor environmental condition data from 220 K-12 classrooms. Laura C. Brill and Lily M. Wang
(Durham School of Architectural Eng. and Construction, Univ. of Nebraska-Lincoln, 1110 S. 67th St., Omaha, NE 68182-0816, lbrill@huskers.unl.edu)
A team at the University of Nebraska-Lincoln is currently engaged in a comprehensive study of indoor environmental conditions in
K-12 classrooms. Information about the indoor air quality, thermal comfort, lighting, and acoustic conditions has been collected from
220 classrooms across five school districts in Nebraska and Iowa. This paper will present an overview of the acoustic results with regards
to sound levels and reverberation time as well as how these results vary based on grade level and school district. This paper will also
present an initial overview of the relationship between acoustic metrics and some of the metrics from indoor air quality, thermal comfort,
and lighting such as carbon dioxide levels, temperature, and electric illuminance levels. [Work supported by the United States Environmental Protection Agency Grant Number R835633.]
1:40
1pAAc2. Mapping speech transmission index (STI) and background noise in university classrooms. Andrew Hulva, Michael
Ermann, Jeffrey Rynes (Architecture + Design, Virginia Tech, 201 Cowgill Hall (0205), Blacksburg, VA 24061-0205, mermann@vt.edu), Randall J. Rehfuss (Architecture + Design, Virginia Tech, Dublin, VA), Aaron Kanapesky (Architecture + Design, Virginia Tech,
Blacksburg, VA), and Alexander Reardon (Eng., Virginia Tech, Blacksburg, VA)
Noise and intelligibility measurements were taken in dozens of classrooms at approximately one-meter resolution and are presented as heat maps. In doing so we hope to determine (1) the largest (and loudest) classrooms that do not require loudspeaker speech
amplification and (2) the radius of muddled intelligibility circumscribed around noise sources such as air diffusers and fan coil units.
2:00
1pAAc3. Noisy voice or voice in noise: Evaluation of cognitive load on the speaker, work in progress. Ingrid A. Verduyckt
(Medicine, School of SLP and Audiol., Univ. of Montreal, Ingrid Verduyckt, Universite de Montreal, cp 6128, succursale Centre-Ville,
Montreal, QC H3C3J7, Canada, ingrid.verduyckt@umontreal.ca), Dick Botteldooren, Annelies Vandevelde, and Annelies Bockstael
(Univ. of Ghent, Gent, Belgium)
We are comparing the cognitive load induced by various types of noise in the processing of information from speech. We examine if
there is a difference in cognitive load between external noise sources (background noise) and internal noise sources (dysphonic voice).
Our hypothesis is that noisy voices could be more cognitively demanding than background noise because they are more similar to the
target signal spatially and temporally and the perceived link with the target signal is stronger. Sixty normal-hearing subjects (18-30 years)
listen to texts in ten different conditions: (1) Healthy voice in multitalker babble noise, (2-4) Three types of dysphonic voices in background noise, (5-7) Three dysphonic voices in silence, (8) Healthy voice in silence, (9) Healthy voice with omnidirectional noise equivalent to dysphonic noise, (10) Healthy voice with dysphonic noise from same direction. We evaluate cognitive load in 4 ways: (1) during
the listening phase by subjects’ performance on a secondary graphic task, (2) after the listening phase by subjects’ answers to a multi
choice questionnaire and (3) by a free recall of the text, (4) by subjects’ grading of their listening effort. Methodological considerations
will be discussed and preliminary results will be presented.
2:20
1pAAc4. Communication problems among teachers and noise conditions… hearing difficulties also matters! Lady Catherine
Cantor Cutiva, Pasquale Bottalico (Communicative Sci. and Disord., Michigan State Univ., 3207 Trappers Cove Trail, Apartment 2C,
Lansing, MI 48910, ladyccantor@gmail.com), Alex Burdorf (Public health, Erasmus Universiteit, Rotterdam, Netherlands), and Eric J.
Hunter (Communicative Sci. and Disord., Michigan State Univ., East Lansing, MI)
Previous studies on the influence of noise in the classroom on the hearing function of teachers have primarily focused either on physical education teachers or music teachers. However, the influence of classroom noise on hearing difficulties among teachers is largely
unknown. The aim of this study was to assess the association between classroom noise levels and self-reported classroom acoustics with
self-reported hearing impairments among teachers. In 12 public schools in Bogota, we conducted a cross-sectional study among 621 Colombian teachers at 377 workplaces. Teachers filled out a questionnaire on individual and self-perceived noise conditions inside the
classroom along with perception of their hearing impairments. Logistic regression analysis was used to determine associations between
background noise levels and self-reported hearing impairment. High noise levels in the surroundings of schools (Odds ratio [OR] 2.15;
95% confidence interval [CI] 1.25-3.68) were associated with self-reported hearing impairments, but self-reported classroom noise
showed no association (Odds ratio [OR] 1.34; 95% confidence interval [CI] 0.91-1.99). This study indicates that noise in schools may
play a role in self-reported hearing impairments among teachers.
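The odds ratios and 95% confidence intervals reported above are the standard output of a logistic-regression or 2x2-table analysis; for a single dichotomous exposure they can be computed directly from the table. A hedged sketch with made-up counts (not the study's data; the function name is illustrative):

```python
import math

def odds_ratio_with_ci(exposed_cases, exposed_noncases,
                       unexposed_cases, unexposed_noncases, z=1.96):
    """Odds ratio and Wald-type 95% CI from a 2x2 exposure/outcome table."""
    or_ = (exposed_cases * unexposed_noncases) / (exposed_noncases * unexposed_cases)
    # Standard error of log(OR) is the root of the summed reciprocal counts.
    se_log = math.sqrt(1 / exposed_cases + 1 / exposed_noncases
                       + 1 / unexposed_cases + 1 / unexposed_noncases)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi

# Hypothetical counts: 40 of 200 exposed teachers report impairment,
# 30 of 300 unexposed teachers do.
or_, lo, hi = odds_ratio_with_ci(40, 160, 30, 270)
```

An association is read as statistically significant at the 5% level when the interval excludes 1, which is exactly the distinction drawn between the two odds ratios above.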
2:40
1pAAc5. Assessment of auralizations of monosyllabic words for hearing impaired students. Konca Saher (Interior Architecture and
Environ. Design, Kadir Has Univ., Kadir Has Caddesi, Istanbul 34083, Turkey, konca.saher@khas.edu.tr)
This paper discusses results of a research project, which seeks to develop Turkish speech recognition tests by auralizations for hearing impaired students based on monosyllabically structured words. In the context of this study, two 25-item sets of phonetically balanced monosyllabic Turkish words were recorded in the anechoic chamber of TUBITAK National Metrology Institute in Gebze,
Turkey. Each monosyllabic word was recorded through a carrier sentence. After the vocal quality, accent and pronunciation in the
recordings were approved by qualified audiologists, auralizations of the recorded sentences were developed in an acoustic simulation
software (ODEON v12). Listening tests developed from auralizations in three classroom models with varying reverberation times and
signal to noise ratios were presented to ten hearing impaired students. The preliminary results of these listening tests are presented and
discussed in this study. This paper also aims to compare the results of listening tests with hearing impaired students with the listening
tests made previously with normal hearing students.
3:00
1pAAc6. Listening efficiency in real and simulated university classrooms. Nicola Prodi and Chiara Visentin (Dipartimento di
Ingegneria, Universita di Ferrara, via Saragat 1, Ferrara 44122, Italy, nicola.prodi@unife.it)
The present study examines how reverberation and background noise level and type affect both speech reception performance and the perceived effort of students in a university classroom. The classroom has a volume of 198 m³ and a reverberation time in occupied conditions of 0.6 s, complying with the target value of the DIN 18041 standard. Diagnostic Rhyme Tests (DRT) in the Italian language were
proposed to a group of 26 normal-hearing young adults: half of them native (Italian), the other half non-native (German) speakers. Data
on speech intelligibility (SI) and response time (RT) were collected. The two quantities were combined in the joint metric of listening efficiency, effectively describing the interplay of perceptual and cognitive processes in speech reception performance. The experiment first took place in situ, where the distinct effects of a speech-shaped stationary noise and of a fluctuating (ICRA) noise with the same short-term STI were determined. Afterwards, acoustic simulations of the classroom setting were carried out, and the resulting binaural impulse
responses used for laboratory experiments with headphones. Simulations and in-situ measurements were compared in terms of STI, SI
and RT; then, listening tests under controlled conditions were accomplished with a selection of background noise levels and reverberation times.
3:20–3:40 Break
3:40
1pAAc7. Field survey on the sound environment of childcare facilities in Japan: Analysis of sound generation accompanied by
children’s activities. Saki Noguchi and Kanako Ueno (Meiji Univ., 1-1-1, Higashimita, Kawasaki, Kanagawa 214-8571, Japan, nsaki@akane.waseda.jp)
Today's childcare facilities in Japan have problems in their sound environments, such as bustling noise, which have been attributed to a lack of sound absorption and to low awareness of the sound environment among people in the field. On the other hand, the sound environment actually differs depending on age and type of activity, but detailed examination has not been done. In this presentation, for the
purpose of setting up an acoustic environment according to the development of children and the aim of childcare activities, we report the
results of surveying sound environments focusing on group size, age, activity space, and activity contents. We investigated the sound
environment and childcare activities in five childcare facilities with different facility type, scale, and operation type. It was observed that
children’s sounds changed with the development of language and behaviors. The characteristics of sound environment differ depending
on actual activity situation and spatial property, and special acoustic contrivance was necessary in the free play scene because different
activity sounds were mixed there.
4:00
1pAAc8. Trial in a nursery facility for improving the sound environment. Kanako Ueno and Ken Miyatsuka (Meiji Univ., 1-1-1,
Higashimita, Kawasaki, Kanagawa 214-8571, Japan, uenok@meiji.ac.jp)
In Japan, most nursery facilities have been built without considering the acoustic requirements; thus, the rooms tend to be reverberant and very noisy from the daily activities of children. The poor sound environment could be harmful not only as a living environment
for the children but also as a working environment for the nursery staff. This presentation reports on a case study of a nursery that
worked on the reduction of its high sound level. First, the architectural features of the nursery room and the status of the sound environment, which was reported by the nurses' claims and measured to be approximately 80 dB or higher during lunch time, were investigated.
Second, to improve the noisy environment, two tasks were carried out. One was the installation of absorbing materials onto the walls
and ceilings, which shortened the reverberation time by half. The other was the introduction of management efforts aiming for children
to lower the loudness of their voices. These were performed with a demonstration of proper voice levels with animals of different scales
and the generation of a music box sound during lunch. The effects of these trials were physically analyzed by sound recordings and subjectively evaluated by nurses.
4:20
1pAAc9. Speakers comfort and voice use in different environments and babble-noise. What are the effects on effort and
cognition? Viveka Lyberg-Åhlander, Heike von Lochow, Susanna Whitling (Clinical Sci., Lund, Logopedics, Phoniatrics and
Audiology, Lund Univ., Scania University Hospital, Lund S-221 85, Sweden, viveka.lyberg_ahlander@med.lu.se), Jonas Christensson,
Erling Nilsson (Ecophon St. Gobain, Hyllinge, Sweden), and Jonas Brunskog (Acoust., Denmark Tech. Univ., Kgs. Lyngby, Denmark)
Teachers often report voice problems related to the occupational environment, and voice problems are more prevalent in teaching
than in other occupations. Relationships between objectively measurable acoustical parameters and voice use have been shown. Speakers have been shown to be able to predict the speaker-comfort of an environment. Teachers with voice problems use the room differently
than their voice-healthy controls. The aim of this study was to investigate what vocal changes speakers make in different acoustical environments and noise conditions. Nine female speakers, both voice patients and voice-healthy controls, were exposed to four controlled acoustical "environments" mounted in the same room: 1. stripped; 2. wall- and ceiling-mounted absorbents; 3-4. as 2 but with extra ceiling absorbents and in two positions. The speakers were recorded with a voice accumulator and simultaneous voice recordings and spoke freely for
3-5 min in three noise conditions in each setting: silence, classroom noise (60 dBA), and day-care noise (75 dBA). Questionnaires on
effort needed were completed by speakers and listeners. There was an interplay between the rooms and the subjectively assessed vocal and listening effort, and also a correlation with cognitive aspects. Listener assessments and the data from the voice accumulator will be presented. This knowledge may contribute to the area of classroom acoustics and speakers' comfort in general.
4:40
1pAAc10. Experimental measurements of word intelligibility of pre-school children under acoustic interferences of
reverberation and background noise. Keiji Kawai and Kazunori Harada (Kumamoto Univ., 2-39-1 Kurokami, Kumamoto 860-8555,
Japan, kkawai@kumamoto-u.ac.jp)
Child day-care rooms require optimal acoustic conditions, as children from 0 to 5 years old are a group vulnerable to interference with verbal communication from background noise and excessive reverberation. However, such interference effects on children seem not to have been examined in the field of architectural acoustics. Thus, on-site experiments were carried out to measure
word intelligibility of children from 3 to 5 years old. The procedure was like a true or false game. Each of the test words was mixed with
two or three levels of pink noise and convolved with the room impulse responses with different reverberation times. The words were
presented to the children and control groups (elementary school pupils and college students) by a loudspeaker in a daycare room. They
were asked to judge whether the word was food or not, and to raise their hands holding yes or no signs. The experiment was repeated
three times in different daycare centers with revisions of the condition settings. As a result, the correct-answer ratios of the pre-school children were generally lower than those of the control groups, and in particular, the ratio of the 3-year-old children was much lower than that of the other groups. Also, under low S/N-ratio conditions, the ratios of all groups decreased with longer reverberation.
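The stimulus preparation described, mixing test words with noise at set levels and convolving with room impulse responses, can be sketched in a few lines of NumPy. The signals, sampling rate, and impulse response below are placeholders, not the experiment's materials:

```python
import numpy as np

def mix_at_snr(speech, noise, snr_db):
    """Scale `noise` so the speech-to-noise power ratio equals snr_db,
    then return the mixture (arrays must be the same length)."""
    p_speech = np.mean(speech ** 2)
    p_noise = np.mean(noise ** 2)
    gain = np.sqrt(p_speech / (p_noise * 10 ** (snr_db / 10)))
    return speech + gain * noise

def apply_room(signal, impulse_response):
    """Convolve a dry signal with a room impulse response."""
    return np.convolve(signal, impulse_response)

# Placeholder signals: a tone stands in for "speech", white noise for "noise",
# and a three-tap decay for the measured room impulse response.
rng = np.random.default_rng(0)
speech = np.sin(2 * np.pi * 440 * np.arange(16000) / 16000)
noise = rng.standard_normal(16000)
mixture = apply_room(mix_at_snr(speech, noise, snr_db=5), np.array([1.0, 0.5, 0.25]))
```

Working in this order (mix at the target SNR first, then convolve) keeps the nominal SNR of the dry mixture well defined, which matches the two-stage preparation the abstract describes.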
5:00
1pAAc11. Voice production effects due to extreme reverberation times in real rooms. Michael Rollins (Dept. of Phys. and
Astronomy, Brigham Young Univ., Cincinnati, OH), Timothy W. Leishman (Dept. of Phys. and Astronomy, Brigham Young Univ.,
Provo, UT), Mark Berardi, and Eric J. Hunter (Dept. of Comm. Sci. & Disord., Michigan State Univ., 1026 Red Cedar Rd., East
Lansing, MI 48824, ejhunter@msu.edu)
Public school teachers have a heightened risk of voice problems. There are many potential causes of this increased vocal risk, including poor room acoustics (e.g., excessively high or low reverberation times). With increased understanding, rooms could be better
designed to maintain communication transfer (intelligibility), while mitigating unhealthy vocal effort and, by extension, voice problems.
The present study quantified the influence of a wide range of reverberation times (RT20) on vocal production parameters. Thirty-two
participants were recorded completing a battery of speech tasks in eight widely ranging conditions within a reverberation chamber.
Changes in RT20 had highly correlated effects on several vocal parameters, including smoothed cepstral peak prominence, acoustic
vocal quality index (AVQI), and pitch strength. As RT20 increased, vocal parameters tended toward values commonly associated with
dysphonic phonation. Additionally, results were gender dependent, with females tending to produce voice with higher vocal effort than
males. These findings begin to objectify the effects of room acoustics on vocal accommodations and provide grounds for developing
future talker-oriented room acoustical standards.
5:20
1pAAc12. Self-reported voice problems and contributing factors in a Francophone population of professional and student
teachers and nurses. Ingrid A. Verduyckt (School of Speech Lang. Pathol. and Audiol., Faculty of Medecine, Univ. of Montreal, Ingrid
Verduyckt, Universite de Montreal, cp 6128, succursale Centre-Ville, Montreal, QC H3C3J7, Canada, ingrid.verduyckt@umontreal.ca),
Amandine Tordeur (Institut Libre Marie Haps en logopedie, Brussels, Belgium), and Laure-Anne Watteau (Brussels, Belgium)
Context: We explored the relation between work related factors, health practices, personality traits and stress-coping strategies and
self-reported voice problems in a population of 354 student teachers (ST), 344 professional teachers (PT), 147 student nurses (SN) and
104 professional nurses (PN). Method: An online survey was conducted. Besides an anamnestic questionnaire collecting data about voice problems, work environment, and health practices, the VHI-10 was used to quantify voice symptoms, the Big Five-10 to explore personality
traits, and the WCC-27 to explore stress coping strategies. Results: The prevalence of self-reported voice problems was significantly
higher in ST as compared to SN (23% vs 14%, p = 0.025) and in PT versus PN (24% vs 12%, p = 0.08). VHI scores were significantly
higher for subjects self-reporting a voice problem and significantly higher in the professional than the student groups, with PT with self-reported voice problems having the highest scores (Mean = 22.34, SD = 0.635), p<0.001. An ANOVA shows that 55% of the variance of
the VHI scores is explained by the status of the subject (ST/PT/SN/PN), (F(3,945) = 382.156, p<0.001). A linear regression shows that
43% of the variance of the VHI scores was explained by Amount of private voice use, Conscientiousness, Extraversion and Emotion
centered coping scores (F(4,830) = 159.201, p<0.001).
SUNDAY AFTERNOON, 25 JUNE 2017
ROOM 313, 1:20 P.M. TO 5:40 P.M.
Session 1pAB
Animal Bioacoustics: Biosonar
James J. Finneran, Chair
SSC Pacific Code 71510, US Navy Marine Mammal Program, 53560 Hull St., San Diego, CA 92152
Contributed Papers
1:20
1pAB1. Effects of prior intense noise exposure on the ability of big brown bats to navigate through clutter. Kelsey N. Hom, James A. Simmons, and Andrea Simmons (Brown Univ., Providence, RI 02912, kelseynhom@gmail.com)
Big brown bats (Eptesicus fuscus) emit intense biosonar calls and process returning echoes to forage and guide flight. Sound pressure levels of emissions can exceed 110-120 dB SPL, amplitudes known to produce temporary threshold shifts in other vertebrates. We conducted behavioral experiments to test the limits of bats' ability to navigate after intense noise exposures. Bats were trained to fly through a dense array of chains that produce a pattern of echoes mimicking those they would receive when flying along vegetation. We quantified flight accuracy (10 flights) and changes in the number and temporal patterning of emissions before and after exposure to intense broadband noise (1 hr, 116 dB SPL). Four bats tested 20 min post-exposure maintained flight accuracy and did not alter the temporal patterning of emissions from that observed pre-exposure. In contrast, two of three bats tested 2 min post-exposure initially would not perform the task or made errors in navigation. Temporal patterning of emissions during successful flights did not vary significantly from that measured during pre-exposure flights. These data suggest that prior intense noise exposure affects motivation to fly but not the ability to process returning echoes. [Work supported by ONR and the Capita Foundation.]
1:40
1pAB2. Vocalizing strategies for acoustically jammed conditions for bat echolocation. Hiroshi Riquimaroux (Shandong Univ., 27 Shanda Nanlu, Jinan, Shandong 250100, China, hiroshi_riquimaroux@brown.edu)
Extraction of a signal from noise is quite difficult when both the signal and the noise have the same temporal and spectral characteristics, for example, extraction of the speech sounds of a single speaker from a background of speech sounds from a group of people. However, we can extract the speech signal of a particular person from a group of speech noise, the so-called cocktail party effect. Echolocating bats often encounter a similar situation, where their own vocal signals are masked by vocalizations emitted by neighboring bats. Flying horseshoe bats, Rhinolophus ferrumequinum, compensate Doppler-shifted returning echo frequencies to keep them constant by adjusting the frequency of emitted pulses, called Doppler-shift compensation. Amplitudes of the second harmonic are the most intense in echolocation sounds. Once emitted pulses and returning echoes are added, a powerful masker is created. During acoustically jammed conditions, paradoxically, bats tend to make their echo reference frequencies even closer. How can they still conduct Doppler-shift compensation? They amplify the fundamental frequency of the FM component. How can they make the fundamental frequency, not the second harmonic, the strongest? Why and how they conduct this behavior will be discussed. [Work supported by MEXT Japan and a Grant from Shandong University.]
2:00
1pAB3. Neural spike train similarity algorithm detects differences in temporal patterning of bat echolocation call sequences. Alyssa W. Accomando (185 Meeting St., Box GL-N, Providence, RI 02906, alyssa.accomando@nmmpfoundation.org), Carlos Vargas-Irwin, and James A. Simmons (Neurosci., Brown Univ., Providence, RI)
Bats emit echolocation sounds in complex temporal sequences that change to accommodate dynamic surroundings. Efforts to quantify how these patterns change have included analysis of inter-pulse intervals, sonar sound groups, and changes in individual signal parameters. No standardized method has been adopted for quantifying whether sequences of echolocation calls are similar or different beyond these individual dimensions. Here, a new method is presented for assessing the similarity in temporal structure between trains of bat echolocation sounds. The spike-train similarity space (SSIMS) algorithm, originally designed for neural data analysis, was applied to determine which features of the environment influence the temporal patterning of echolocation sounds emitted by flying big brown bats (Eptesicus fuscus). Using a relational point-process framework, SSIMS was able to discriminate between pulse sequences recorded in different flight environments, as well as to separate flights depending on the bat's expectation of its surroundings based on previous experience.
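SSIMS-style analyses start from pairwise distances between event-time sequences before projecting them into a low-dimensional similarity space; a Victor-Purpura-type edit distance is a common choice for that first step and applies equally well to echolocation pulse times. A small illustrative implementation (the cost parameter q is a free choice, not a value from the study):

```python
def victor_purpura_distance(times_a, times_b, q=1.0):
    """Edit distance between two event-time sequences: inserting or
    deleting an event costs 1, shifting an event by dt costs q*|dt|."""
    n, m = len(times_a), len(times_b)
    # g[i][j] = distance between the first i events of a and first j of b
    g = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        g[i][0] = float(i)          # delete all i events
    for j in range(1, m + 1):
        g[0][j] = float(j)          # insert all j events
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            shift = g[i - 1][j - 1] + q * abs(times_a[i - 1] - times_b[j - 1])
            g[i][j] = min(g[i - 1][j] + 1, g[i][j - 1] + 1, shift)
    return g[n][m]

# Identical pulse trains have distance 0; one missing pulse costs 1.
d_same = victor_purpura_distance([0.0, 0.1, 0.3], [0.0, 0.1, 0.3])
d_missing = victor_purpura_distance([0.0, 0.1, 0.3], [0.0, 0.1])
```

With q near 0 the distance counts only differences in the number of pulses; large q penalizes timing shifts, which is what lets such metrics separate sequences by temporal patterning rather than call count alone.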
2:20
1pAB4. Why hipposiderid biosonar is worth studying. Rolf Müller
(Mech. Eng., Virginia Tech, ICTAS Life Sci. District (Mail Code 0917),
Blacksburg, VA 24061, rolf.mueller@vt.edu), Ru Zhang, Liujun Zhang
(Shandong Univ. - Virginia Tech Int. Lab., Shandong Univ., Jinan, China),
Peiwen Qiu (Mech. Eng., Virginia Tech, Blacksburg, VA), and Xiaoyan Yin
(Shandong Univ. - Virginia Tech Int. Lab., Shandong Univ., Jinan, China)
Although the genus Hipposideros contains a diverse set of more than 70
species of echolocating bats, the biosonar system of this group has received
far less attention than that of the related horseshoe bats (family Rhinolophidae) which share the same basic cf-fm biosonar. Only a relatively small
number of field observations can be found in the literature and even fewer
laboratory studies on hipposiderids have been reported. The Shandong University—Virginia Tech International Laboratory has been working with two
of the larger hipposiderid species, the great roundleaf bat (Hipposideros
armiger) and Pratt’s roundleaf bat (Hipposideros pratti) and has conducted
biosonar as well as flight experiments with individuals from both species. It
was observed that the bats from both species have highly dynamic biosonar
systems that employ large non-rigid noseleaf motions as well as large rigid
and non-rigid motions of the pinnae. The motions seen in the hipposiderids
appear to be relatively larger and more frequent than those in the greater
horseshoe bats with which similar experiments were conducted. In addition,
the hipposiderids bats tested were found to be maneuverable fliers that
should make an excellent model system for the integration of dynamic sonar
with a highly capable flight system.
2:40
1pAB5. Design of a dynamic sonar emitter inspired by hipposiderid
bats. Luhui Yang, Allison Yu, and Rolf Müller (Mech. Eng., Virginia Tech,
1075 Life Sci. Cir, Blacksburg, VA 24061, 913022794@qq.com)
The noseleaves of Old World leaf-nosed bats (family Hipposideridae)
and the related horseshoe bats (Rhinolophidae) are notable for their elaborate static geometries and conspicuous dynamics in which the noseleaves
change their shapes during biosonar pulse emission as a result of muscular
actuation. Whereas the noseleaves of horseshoe bats have already been used
as an inspiration for dynamic sonar emitter prototypes, the possible functional roles of the specific static and dynamic noseleaf features of Old World
leaf-nosed bats have yet to be investigated in this manner. To accomplish
this, a dynamic emitter based on the time-variant morphology of Pratt’s
roundleaf bats (Hipposideros pratti) has been designed. The baffle shape
was simplified from a tomographic reconstruction of a biological sample.
Five shape features (anterior leaf, sella, lancet, and the two nostrils) were
preserved in the model. Motions of these parts were derived from three-dimensional reconstructions of landmark points that were placed on the
noseleaf of echolocating bats and recorded with a stereo pair of high-speed
video cameras. Actuation mechanisms driven by three stepper motors (one
for the lancet and sella, one for both nostrils, and one for the anterior leaf) were
implemented to reproduce the dynamic noseleaf motion pattern observed in
the bats.
3:00
1pAB6. The relationship between pinna and noseleaf motions in
hipposiderid bats. Shuxin Zhang, Liujun Zhang, Ru Zhang (Shandong
Univ. - Virginia Tech Int. Lab., Shandong Univ., Shanda South Rd. 27,
Jinan, Shandong 250100, China, shuxinsduvt@yahoo.com), and Rolf
Müller (Mech. Eng., Virginia Tech, Blacksburg, VA)
Old World leaf-nosed bats (Hipposideridae) are a family of bat species
that use elaborate baffle shapes to diffract the outgoing ultrasonic pulses and
the returning echoes. The baffles at both interfaces (“noseleaves” for emission, outer ears for reception) have dynamic geometries that can be changed
through muscular actuation. Shape changes in noseleaves and pinnae can
both coincide with sound emission and reception, respectively, but the relationship between the dynamics of these two structures has yet to be investigated. To study this relationship, a set of no fewer than 17 landmarks was
placed on the noseleaf and one ear of Pratt’s roundleaf bats (Hipposideros
pratti) to track the dynamic geometry of these structures simultaneously
with a high-speed video camera array. The three-dimensional trajectories of
the landmark points were reconstructed using stereovision. The results
showed strong, systematic relationships between noseleaf and pinna motions
that were found to belong to two different qualitative types. In the first type,
noseleaf motions and pinna motions did not change direction during the recording period (e.g., one pulse), which resulted in approximately linear relationships between the positions of the landmarks on both structures. In the
second type, direction reversals occurred, but coupling between the motions
remained evident.
3:20–3:40 Break
3:40
1pAB7. A simulation study of the biosonar information in natural
foliage echoes. Chen Ming (Dept. of Mech. Eng., Virginia Tech, 814
Cascade Ct., Blacksburg, VA 24060, cming@vt.edu), Hongxiao Zhu (Dept.
of Statistics, Virginia Tech, Blacksburg, VA), and Rolf Müller (Dept. of
Mech. Eng., Virginia Tech, Blacksburg, VA)
Echolocating bats are capable of accurate navigation in forests at night.
To understand how they perceive the vegetation echoes and find passage
ways in dense foliage, previous work by the authors has studied a model for
homogeneous foliages that distributed leaves uniformly in a domain, simplified individual leaves as circular disks, and hence described an entire foliage
by only three parameters (mean leaf radius, orientation, and density). To further explore which additional information inhomogeneous structures within
a foliage may impart on the echoes, the model was transitioned from uniform leaf distributions to digital trees with branches and leaf clusters using
L-systems to mimic the branching patterns of natural trees. When the tree
size was small with very few and short child branches, inhomogeneities in
the echoes were readily apparent no matter how wide the beam was. If the
tree crown was large, featuring many branches and leaf clusters, the amount
of inhomogeneity seen in the echoes depended on the relative scale between
the tree size and the sonar beamwidth. When large trees were paired with relatively narrow sonar beamwidths in the simulation, the echoes were inhomogeneous; as the beam widened, the generated echoes became more
homogeneous.
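The L-system rewriting used to grow the digital trees can be sketched in a few lines (the bracketed branching rule here is hypothetical; the abstract does not give the actual production rules):

```python
def lsystem(axiom, rules, n):
    """Apply string-rewriting rules n times; symbols without a rule are kept."""
    s = axiom
    for _ in range(n):
        s = "".join(rules.get(c, c) for c in s)
    return s

# hypothetical branching rule: each apex 'A' grows a trunk segment 'F'
# with one bracketed side branch; '[' and ']' push/pop the turtle state
tree = lsystem("A", {"A": "F[+A]A"}, 3)
```

Each iteration doubles the number of apices, so a few rewrites already produce the many-branch, leaf-cluster crowns whose inhomogeneity the simulation probes.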
4:00
1pAB8. Dolphin auditory brainstem responses to frequency-modulated
“chirps.” James J. Finneran (SSC Pacific Code 71510, US Navy Marine
Mammal Program, 53560 Hull St., San Diego, CA 92152, james.finneran@
navy.mil), Jason Mulsow, Ryan Jones, Dorian S. Houser (National Marine
Mammal Foundation, San Diego, CA), and Robert F. Burkard (Univ. at
Buffalo, Buffalo, NY)
Previous studies have demonstrated that increasing-frequency chirp (up-chirp) stimuli can enhance auditory brainstem response (ABR) amplitudes
by compensating for temporal dispersion occurring along the cochlear partition. In this study, ABRs were measured in two bottlenose dolphins in
response to 5-ms, spectrally “white” clicks, up-chirps, and decreasing-frequency chirps (down-chirps). For all stimuli, bandwidth was constant (10 to
180 kHz) and peak-equivalent sound pressure levels (peSPLs) were 115,
125, or 135 dB re 1 μPa. Chirp durations varied from 125 to 2000 μs. Up-chirps with durations less than ~1000 μs generally increased ABR peak
amplitudes compared to clicks with the same peSPL or energy flux spectral
density level, while down-chirps with durations above ~250 to 500 μs
decreased ABR amplitudes relative to clicks. Increases in ABR amplitude
occurred with up-chirps having a broad range of durations. The findings parallel those from human studies and suggest that the use of chirp stimuli may
be an effective way to enhance broadband ABR amplitudes in larger marine
mammals. [Work supported by US Navy Living Marine Resources
Program.]
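The constant-bandwidth, variable-duration stimuli can be sketched as linear frequency sweeps over the 10-180 kHz band named in the abstract (the sampling rate and synthesis details are assumptions):

```python
import numpy as np

def linear_chirp(f0, f1, dur, fs):
    """Linear sweep from f0 to f1 Hz over dur seconds, sampled at fs Hz."""
    t = np.arange(int(round(dur * fs))) / fs
    k = (f1 - f0) / dur                       # sweep rate (Hz/s)
    return np.sin(2 * np.pi * (f0 * t + 0.5 * k * t ** 2))

fs = 1e6                                      # assumed 1 MHz sampling rate
up = linear_chirp(10e3, 180e3, 500e-6, fs)    # 500-us up-chirp
down = linear_chirp(180e3, 10e3, 500e-6, fs)  # same band, reversed sweep
```

Shortening or lengthening `dur` while keeping `f0` and `f1` fixed reproduces the study's design: the band stays constant while the sweep rate, and hence the compensation for cochlear dispersion, changes.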
4:20
1pAB9. Dolphin click-evoked auditory brainstem responses obtained
using randomized stimulation and averaging. James J. Finneran (SSC
Pacific Code 71510, US Navy Marine Mammal Program, 53560 Hull St.,
San Diego, CA 92152, james.finneran@navy.mil)
Measurement of auditory brainstem responses (ABRs) using conventional averaging (i.e., constant interstimulus interval, ISI) is limited to stimulus rates low enough to prevent overlapping of the ABRs to successive
stimuli. To overcome this limitation, stimuli may be presented at high rates
using pseudorandom sequences (e.g., maximum length sequences) or quasiperiodic sequences. However, these methods restrict the available stimulus
sequences and require deconvolution to recover the ABR from the overlapping responses. Randomized stimulation and averaging (RSA) is an alternate method for measuring evoked responses at high stimulus rates that
allows more control over stimulus jitter, is flexible with respect to sequence
parameters, and does not require deconvolution to extract the ABR waveform [Valderrama et al. (2012). “Recording of auditory brainstem response
at high stimulation rates using randomized stimulation and averaging,” J.
Acoust. Soc. Am. 132, 3856-3865]. In the RSA method, ABRs are obtained
by averaging responses to stimuli with ISIs drawn from a random distribution. In this study, ABRs were measured in three dolphins using conventional averaging and RSA. Results show the RSA method to be effective
provided the ISI jitter exceeds ~1-2 ms. [Work supported by the Naval Innovative Science and Engineering (NISE) Program at SSC Pacific.]
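The core of RSA, plain averaging over epochs whose onsets are jittered so that the overlapping responses of neighboring stimuli average out, can be sketched on synthetic data (the evoked waveform, jitter range, and noise level below are illustrative, not values from the dolphin recordings):

```python
import numpy as np

rng = np.random.default_rng(0)
fs, epoch = 100_000, 1_000                 # 100 kHz; 10-ms analysis epoch

# stimulus onsets with jittered ISIs (uniform 5-9 ms, i.e. a high mean rate)
isis = rng.uniform(0.005, 0.009, size=200)
onsets = np.cumsum(np.round(isis * fs).astype(int))

# synthetic recording: the same evoked waveform after every stimulus + noise
evoked = np.exp(-np.arange(epoch) / 150) * np.sin(2 * np.pi * np.arange(epoch) / 80)
rec = rng.normal(0.0, 1.0, onsets[-1] + epoch)
for t in onsets:
    rec[t:t + epoch] += evoked

# RSA: epochs are simply averaged; the random jitter decorrelates the
# overlapping responses of neighbors, so no deconvolution is needed
avg = np.mean([rec[t:t + epoch] for t in onsets], axis=0)
```

With a constant ISI the neighbors' responses would add coherently to the average; the jitter smears them instead, which is why the method works only once the jitter is comparable to the response duration (the ~1-2 ms found in the study).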
4:40
1pAB10. The dynamics of a dolphin’s biosonar signals while
performing target discrimination tasks. Whitlow Au (Univ. of Hawaii,
P.O. Box 1106, Kailua, HI 96734, wau@hawaii.edu), John Atkins (Ocean
Instruments, Auckland, New Zealand), Heidi E. Harley (New College of
Florida, Sarasota, FL), Henri Volpilier (Université Paris-Saclay, Paris,
France), and Wendi Fellner (Disney’s Epcot’s The Seas, Lake Buena Vista,
FL)
An experiment was conducted to determine if a blindfolded echolocating
dolphin modified its biosonar signals depending on the targets it was investigating in a target shape discrimination task. Biosonar signals were measured
with a specially designed bite-plate apparatus with a dowel extending from
the bite plate to support the hydrophone. The detected signals were digitized
and stored on a modified SoundTrap (Ocean Instruments, New Zealand)
attached to the dowel. The dolphin engaged in a matching-to-sample task:
he first examined a sample target and then swam to a different area of the
pool to examine three alternative targets, one of which matched the sample.
The animal’s task was to point to the matching target. The characteristics of
each emitted signal were determined by calculating the peak frequency, center frequency, rms bandwidth, and rms duration. A specific target set was
used for each session of 15-18 trials; some sets were unfamiliar to the dolphin. Considerable variation in the signal parameters was
observed across trials and sessions, but our statistical analyses suggested the
variations were not based on target identity, thereby leading us to conclude
that the clicks emitted by the dolphin did not differ with the target set.
5:00
1pAB11. Sonar processing by the spectrogram correlation and
transformation model of biosonar. Stephanie Haro (School of Eng.,
Brown Univ., 69 Brown St., Box 7251, Providence, RI 02912, Stephanie_
Haro@brown.edu), James A. Simmons (Neurosci., Brown Univ.,
Providence, RI), and Jason E. Gaudette (NUWC Newport, Newport, RI)
Echolocating big brown bats emit frequency-modulated (FM) biosonar
sounds and perceive target range from echo delays through spectrogram correlation (SC) and target shape from interference nulls in echo spectra
through spectrogram transformation (ST). Combined, the SCAT model is a
computationally unified auditory description of biosonar as a real-time process. We developed a Matlab implementation of SCAT and tested it with a
succession of simulated bat-like FM signals (chirps), each followed by one
or more FM echoes that have realistic delay and spectral characteristics.
The model simulates neural response latencies in frequency-tuned delaylines that use coincidence detections for target ranging by SC. For ST, a
novel, deconvolution-like network transforms echo spectra into images of
the target’s glints by detecting coincidences between spikes that represent
spectral nulls in parallel channels tuned to null frequencies. Experiments
show that dolphins likely separate ST into two operations—MaPS for short
glint separations (macro power spectral features, <80 μs) and MiPS for longer separations (micro power spectral features, >80 μs). The ST deconvolution network models MiPS. The highly distributed character of the model
favors real-time operation, an important goal for bioinspired sonar development. [Work supported by ONR.]
5:20
1pAB12. Separation of MiPS/MaPS spectrogram transformations in
biosonar. Uday Shriram, James A. Simmons (Dept. of Neurosci., Brown
Univ., Providence, RI 02912, uday_shriram@alumni.brown.edu), and
Tengiz Zorikov (Georgian Acad. of Sci., Inst. of Cybernetics, Tbilisi,
Georgia)
Echolocating bats and dolphins differ in signals, sound emission, and
reception pathways, and important aspects of the acoustic medium. They
do, however, receive broadcasts and echoes through similar parallel bandpass filters in the cochlea, which impose integration-times of several hundred microseconds. Echoes consisting of several overlapping reflections
from target glints (insect body-parts, fish swim bladders) interfere upon
reception to form complex echoes with spectral interference patterns that
characterize the target’s shape. Bats transform these patterns into images
that depict the glints themselves along the range axis, a process called Spectrogram Correlation and Transformation (SCAT). Experiments with dolphins suggest two different scales for recognizing shape from echo
spectra—macro- vs micro-power spectral features for ST (MaPS for short
glint separations of <80 μs; MiPS for longer separations >80 μs). We tested
this finding in big brown bats trained to distinguish between 2-glint echoes
with long, MiPS-like and short, MaPS-like spectral features and found that
MiPS covers delay separations of about 25-500 μs from the frequency separation of spectral nulls, which have to fit into FM1 for ST to occur. At a
deep level, dolphins and bats appear to share a common processing strategy
for forming images. [Work supported by ONR.]
SUNDAY AFTERNOON, 25 JUNE 2017
ROOM 310, 1:20 P.M. TO 5:00 P.M.
Session 1pAO
Acoustical Oceanography: Topics in Acoustical Oceanography
John A. Colosi, Chair
Department of Oceanography, Naval Postgraduate School, 833 Dyer Road, Monterey, CA 93943
Contributed Papers
1:20
1pAO1. Sediment parameter inversions in the East China Sea. Gopu R.
Potty, James H Miller (Dept. of Ocean Eng., Univ. of Rhode Island, 115
Middleton Bldg., Narragansett, RI 02882, potty@egr.uri.edu), Stan E.
Dosso (Univ. of Victoria, Victoria, BC, Canada), Julien Bonnel (Ensta
Bretagne, Brest cedex 9, France), Jan Dettmer (Dept. of GeoSci., Univ. of
Calgary, Victoria, Br. Columbia, Canada), and Marcia J. Isakson (Appl.
Res. Labs., Univ. of Texas, Austin, TX)
2:00
1pAO3. Scattering statistics of glacially quarried rock outcrops:
Bayesian inversions for mixture model parameters. Derek R. Olson
(Appl. Res. Lab., The Penn State Univ., P.O. Box 30, State College, PA
16804, dro131@psu.edu) and Anthony P. Lyons (Ctr. for Coastal and
Ocean Mapping, Univ. of New Hampshire, Durham, NH)
Geoacoustic inversions using wide-band acoustic sources (WBS)
deployed during the Asian Seas International Acoustic Experiment
(ASIAEX) along a circular path of radius 30 km centered on a vertical
hydrophone array were used to construct a pseudo-3D model of the seabed
sediments [Potty et al., J. Acoust. Soc. Am. 140, 3065, 2016]. The geoacoustic inversion approach is based on trans-dimensional Bayesian methodology in which the number of sediment layers is included as an unknown in
addition to the layer parameters. In this study, the inverse problem is recast
such that the unknown parameters are sediment parameters such as porosity,
permeability, grain size, etc. The compressional and shear wave speeds and
attenuation are estimated from these parameters using Biot or similar geoacoustic models. This inversion approach enables direct comparison of the
inversion results to ground truth measurements for sediment cores. High resolution time-frequency analysis techniques were applied to extract modal
arrival times accurately. One-dimensional (depth-dependent) inversions will
be applied along the various acoustic propagation paths to construct a
pseudo-3D sediment model using interpolation. [Work supported by the
Office of Naval Research.]
Knowledge of the probability distribution of the scattered amplitude
return from the seafloor in reverberation measurements and seafloor sonar
images is a prerequisite to designing effective target detection systems and
predicting their performance. Previous measurements have revealed that the
distribution is often heavier tailed than the Rayleigh distribution, and may
be modeled by the K, Weibull, and log-normal distributions, among others.
Recent measurements of the scattering statistics from rock seafloors resulted
in a bimodal distribution, which is poorly modeled by many commonly used
distributions. The rock surfaces were formed from glacial quarrying and exhibit a stepped structure. The observed distribution is hypothesized to result
from a mixture, where the scattered field from vertically oriented facets is
modeled as a K distribution, and the scattered field due to the horizontally
oriented facets is modeled as a Rayleigh distribution. If this hypothesis is
true, then roughness parameters may be estimated from scattering data. A
Bayesian technique for estimating the distribution of mixture parameters
from the probability distribution of the scattered field is presented. This
technique, while computationally expensive, reveals the relationship
between the mixture model parameters, and can reveal any degeneracies
that could lead to problems during inversions.
1:40
1pAO2. Study on parameter correlations in the modal dispersion based
geoacoustic inversion. Lin Wan, Mohsen Badiey (Univ. of Delaware, 261
S. College Ave., 104 Robinson Hall, Newark, DE 19716, wan@udel.edu),
and David P. Knobles (KSA LLC, Austin, TX)
2:20
1pAO4. Mode coupling and redistribution of the sound intensity over
depth at downslope propagation, in the area of thermocline’s contact
with bottom. Boris Katsnelson (Marine GeoSci., Univ. of Haifa, Mt.
Carmel, Haifa 31905, Israel, bkatsnels@univ.haifa.ac.il) and Andrey
Lunkov (Marine GeoSci., Univ. of Haifa, Moscow, Russian Federation)
The dispersion characteristics of acoustic normal modes are applied in
the estimation of seabed parameters (e.g., sound speed, density, and layer
depth). The modal arrival time difference is utilized to define the objective
function, which calculates the difference between the modeled and measured modal arrival times. In this paper, a shallow water experimental dataset
shows that the unknown sound speed, density, and layer depth of the sediment may not be successfully estimated by minimizing the modal dispersion-based objective function due to the correlations among them. For a
given modal arrival time difference, an increase (decrease) in sound speed
of the sediment can be compensated by increasing (decreasing) the density
or layer depth of the sediment. Since there is more than one set of parameter
values yielding the desired minimum of the objective function, an additional
constraint from other objective functions or the knowledge from direct
measurements is required. This paper utilizes a second objective function
defined by mode shapes in conjunction with the dispersion-based objective
function to partially remove the ambiguity of these unknown seabed parameters. [Work supported by ONR.]
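The role of the second objective function can be shown with a toy misfit: two candidate parameter sets that fit the modal arrival-time differences equally well are separated once a mode-shape term is added (all numbers are invented for illustration):

```python
import numpy as np

def dispersion_misfit(dt_obs, dt_mod):
    """Sum of squared errors in modal arrival-time differences."""
    return float(np.sum((np.asarray(dt_obs) - np.asarray(dt_mod)) ** 2))

def combined_misfit(dt_obs, dt_mod, shape_obs, shape_mod, w=1.0):
    """Dispersion misfit plus a weighted mode-shape misfit to break ties
    among correlated sediment parameter sets."""
    shape_term = float(np.sum((np.asarray(shape_obs) - np.asarray(shape_mod)) ** 2))
    return dispersion_misfit(dt_obs, dt_mod) + w * shape_term

dt_obs = [0.12, 0.31]       # observed modal arrival-time differences (s)
shape_obs = [0.9, 0.4]      # observed mode-shape samples (arbitrary units)

# two hypothetical parameter sets predicting identical dispersion curves,
# but different mode shapes: the combined misfit tells them apart
cand_a = {"dt": [0.12, 0.31], "shape": [0.9, 0.4]}
cand_b = {"dt": [0.12, 0.31], "shape": [0.5, 0.8]}
```

This is the tie-breaking idea in miniature: the sound speed/density/layer-depth trade-off leaves the first term flat along a ridge, and the mode-shape term supplies the curvature needed to pick a single minimum.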
Downslope propagation of a sound signal is studied in a coastal wedge
whose temperature profile exhibits a strong thermocline. The sound source
is placed above the thermocline, near the point where the thermocline contacts the
bottom (TC point). The depth dependence of the sound intensity, far enough from the coastline, is considered as a function of distance to the source. It is
shown that for some source positions between the TC point and the coast,
a remarkable redistribution of the sound intensity over water depth takes place
(the sound field is pressed toward the bottom). Numerical simulations of this phenomenon are carried out using adiabatic and coupled normal-mode decompositions for bottoms with different sound speeds. Modeling results are
compared with experimental data from Lake Kinneret, where the coastal slope
is characterized by a depth increase from the coast to 40 m over a distance of about 8 km, with a thermocline at 20 m
depth in the sound speed profile. Chirp signals with a center frequency of 600 Hz were received by a
vertical line array (VLA), and the vertical distribution of the sound intensity along
the VLA was measured. Variability in accordance with the theoretical model is
demonstrated. [Work was supported by ISF.]
2:40
1pAO5. Acoustic mode based internal tide tomography from the
PhilSea 2009 experiment. Tarun K. Chandrayadula (Ocean Eng., IIT
Madras, 109 B Ocean Eng., IIT Madras, Chennai, Tamil Nadu 600036,
India, tkchandr@iitm.ac.in), John A. Colosi (Oceanogr., Naval Postgrad.
School, Monterey, CA), Peter F. Worcester, and Matthew Dzieciuch
(Scripps Inst. of Oceanogr., UCSD, La Jolla, CA)
The Philippine Sea 2009 experiment transmitted low-frequency broadband sound pulses to a mode-resolving array located 185 km away. The entire
experiment was conducted over a period of one month, during which most
of the transmissions took place every 5 min and the rest every 3 h. This talk
estimates the travel times of the mode pulses and inverts them for the internal-tide-induced sound speed variability across the propagation path. A first-order perturbation theory approach models mode travel times as a range-averaged function of the sound speeds. Depending on the propagation distance,
the contributions from some internal tide wavelengths are enhanced while
others are suppressed in the mode travel times. The talk shows the month-long
time series of the mode peak arrival times and estimates a spectrum for the
observations. The presentation then derives an expression relating the spectral components of the mode arrival times to the internal tide spatial
wavenumbers. The magnitudes of the arrival-time spectral components at the
tide frequencies are used to invert for the respective internal tides across the
propagation path. The tomographic inverse for the internal tides is compared with CTD measurements at the receiver and source arrays.
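The spectral step, turning a month-long series of mode arrival times into a spectrum whose peaks sit at the tidal frequencies, can be sketched on synthetic data (the M2 frequency is standard; the amplitude and noise values are invented):

```python
import numpy as np

rng = np.random.default_rng(3)
t = np.arange(30 * 24) / 24.0            # 30 days of hourly samples (in days)
f_m2 = 1.9323                            # M2 semidiurnal frequency (cycles/day)

# synthetic mode travel-time perturbations: one tidal line + measurement noise
tau = 5e-3 * np.cos(2 * np.pi * f_m2 * t) + rng.normal(0, 5e-4, t.size)

spec = np.abs(np.fft.rfft(tau - tau.mean()))
freqs = np.fft.rfftfreq(t.size, d=1.0 / 24.0)   # cycles/day
peak = freqs[np.argmax(spec)]                   # should land near f_m2
```

A 30-day record gives a frequency resolution of 1/30 cycles/day, comfortably separating the semidiurnal and diurnal lines whose magnitudes the inversion then maps back to internal tide amplitudes along the path.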
4:00
1pAO8. Head wave inversion technique using the low-frequency sound
from a Robinson R44 helicopter. Dieter A. Bevans and Michael J.
Buckingham (Marine Physical Lab., Scripps Inst. of Oceanogr., 9500
Gilman Dr., La Jolla, CA 92093-0238, dbevans@ucsd.edu)
A series of underwater acoustic experiments using a Robinson R44 helicopter and an underwater receiver station has been conducted in shallow
(16.5 m) water. The receiver station consisted of an 11-element nested
hydrophone array with a 12 m aperture configured as a horizontal line array
(HLA) 0.5 m above the seabed. A microphone was located immediately
above the surface. The main rotor blades of the helicopter produce low-frequency harmonics, the fundamental frequency being ~13 Hz. The tail rotor
produces a sequence of harmonics six times higher in frequency. An analytical solution for the horizontal coherence of the head wave has been developed using a three-layer (atmosphere-ocean-sediment) acoustic propagation
model. By comparing the theoretical coherence with the coherence function
from the data, the sediment sound speed is recovered. The results from the
theoretical model and an experiment conducted north of Scripps Pier off the
coast of southern California are presented. [Research supported by ONR,
SMART(DOD), NAVAIR, and SIO.]
3:00
1pAO6. Observations of non-Rayleigh acoustic backscatter and spatial
correlation. Chad M. Smith and John R. Preston (Penn State, The Penn
State Univ., Appl. Res. Lab., State College, PA 16804, cms561@psu.
edu)
Observations of non-Rayleigh acoustic backscatter in data taken during
the 2015 Littoral Continuous Active Sonar (LCAS) experiment are discussed using the K-distribution shape parameter and comparisons with spatial
correlation width estimates. Data were collected using the Five Octave
Research Array (FORA) cardioid aperture in a towed, roughly monostatic
configuration. Several tracks were repeatedly measured using differing signal bands and pulse lengths, allowing the use of matched filter envelope and
K-distribution statistics to characterize returns in several pulse parameter
configurations. Correlation width estimates are compared with non-Rayleigh
regions found using shape parameter. Early work uses shallow broadside
angles and vessel travel to estimate statistics for large regions, providing an
efficient search for clutter events. [Work supported by Office of Naval
Research.]
3:20–3:40 Break
3:40
1pAO7. Signal characterization using Hidden Markov Models with
applications in acoustical oceanography. Michael Taroudakis and Costas
Smaragdakis (Inst. of Appl. and Computational Mathematics, Univ. of Crete
and FORTH, Voutes University Campus, Heraklion 70013, Greece,
taroud@uoc.gr)
The work presents a method for characterizing underwater acoustic signals, based on their wavelet
transform, using a Markov chain with hidden variables. Initially, we assign to the signal the Hidden Markov Model
(HMM) whose conditional posterior probability density function
is most representative, using an Expectation-Maximization
algorithm. Special techniques are applied to avoid over-fitting, which in principle is not desirable for the intended applications. The features used for the
assignment consist of two-dimensional time series obtained by preprocessing the signal’s wavelet packet coefficients. Subsequently, we use an approximation of the Kullback-Leibler (KL) divergence as a similarity measure
among the HMMs. The approximation is obtained by employing Monte Carlo (MC) techniques to simulate the significant sampling from the HMMs’
posterior distributions. This technique is used in cases where the similarity
of two or more signals is to be assessed. These cases include a variety of
problems associated with the monitoring of the marine environment using
acoustic or seismic signals. The applications presented here concern problems of geoacoustic inversion (seabed mapping) using simulated acoustic data and of seismic monitoring using real data from a terrestrial seismograph, illustrating the various possible applications of the suggested method.
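The Monte Carlo approximation of the KL divergence can be sketched with distributions whose divergence is known in closed form (unit-variance Gaussians, for which KL = (mu1 - mu2)^2 / 2); the HMM posteriors of the paper are simply swapped for checkable densities here:

```python
import numpy as np

rng = np.random.default_rng(4)

def mc_kl(sample_p, logpdf_p, logpdf_q, n=50_000):
    """KL(p||q) estimated as the mean of log p(x) - log q(x) over x ~ p."""
    x = sample_p(n)
    return float(np.mean(logpdf_p(x) - logpdf_q(x)))

# unit-variance Gaussians; the normalizing constants cancel in the difference,
# so the constant term of each log-pdf can be dropped
mu1, mu2 = 0.0, 1.0
kl_est = mc_kl(lambda n: rng.normal(mu1, 1.0, n),
               lambda x: -0.5 * (x - mu1) ** 2,
               lambda x: -0.5 * (x - mu2) ** 2)
# closed form: (mu1 - mu2)**2 / 2 = 0.5
```

The same estimator applies whenever one can sample from one model and evaluate both log-likelihoods, which is exactly the situation with two fitted HMMs.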
4:20
1pAO9. Acoustic quantification of abundance, biomass, and size class of
Atlantic menhaden (Brevoortia tyrannus) in a shallow estuary in Long
Island, New York. Brandyn M. Lucca, Hannah Blair (School of Marine
and Atmospheric Sci., Stony Brook Univ., 70 Lakewood Court, Apt. 16,
Moriches, NY 11955, brandyn.lucca@stonybrook.edu), and Joseph Warren
(School of Marine and Atmospheric Sci., Stony Brook Univ., Southampton,
NY)
Atlantic menhaden (Brevoortia tyrannus) is a euryhaline forage fish
commonly found on Long Island from late spring to fall and is both ecologically and economically important to fisheries along the entire eastern coast
of the United States. Schools of menhaden frequently occupy the shallow
(<4 m) bays and rivers of the western Peconic Estuary on Long Island, New
York. These shallow habitats are difficult to sample using traditional pelagic
(i.e., net trawls) or shore-adjacent (i.e., beach seines) methods due to shallow water depths and salt marsh coastline. We conducted multiple acoustic
surveys using fisheries echosounders (38, 120, and 200 kHz) and sidescan
sonar between May and November in 2015 and 2016 in Flanders Bay and
the Peconic River. Active acoustic surveys were capable of providing estimates of abundance, biomass, and length distribution of Atlantic menhaden.
Abundance and biomass estimates for the entire western Peconic Estuary
were extrapolated from survey data and showed large (order of magnitude)
variations in the menhaden population in these waters from spring to fall.
Length distributions of menhaden differed both among seasons and between
years. Data were ground-truthed using video, photographs, and morphological measurements of Atlantic menhaden.
4:40
1pAO10. Profiling measurement of internal tide in the Bali Strait by
reciprocal sound transmission. Fadli Syamsudin (Technol. for Regional
Resource Development, Agency for the Assessment and Application of
Technol. (BPPT), Jakarta, DKI Jakarta, Indonesia), Minmo Chen, Arata
Kaneko (Graduate School of Eng., Hiroshima Univ., 1-4-1 kagamiyama,
Higashi-Hiroshima, Hiroshima 739-8527, Japan, d153155@hiroshima-u.ac.
jp), John C. Wells (Civil Eng., Ritsumeikan Univ., Kusatsu, Shiga, Japan),
and Xiao-Hua Zhu (Oceanogr., Second Inst. of Oceanogr., Hangzhou,
China)
A reciprocal sound transmission experiment was carried out from 10 to
12 June 2015 along one cross-strait line in the Bali Strait, where tidal
currents are strong, to measure the vertical-section structures of range-averaged current
and temperature at a 3-minute interval. The five-layer structures of those
parameters in the vertical sections were reconstructed by regularized inversion of
travel time data for two rays. The hourly mean current showed the generation of
a nonlinear internal tide with amplitudes of (1.0-1.5) m/s and a period of 6
hours, superimposed on a semi-diurnal internal tide with amplitudes decreasing from the upper to the lower layer. The hourly mean temperature was characterized by variation with amplitude of (1.0-1.5) and periods of 6 and 8 hours.
Current variation revealed an out-of-phase relation between the upper and
SUNDAY AFTERNOON, 25 JUNE 2017
BALLROOM B, 1:20 P.M. TO 5:00 P.M.
Session 1pBAa
Biomedical Acoustics: Beamforming and Image Guided Therapy II: Cavitation Nuclei
Costas Arvanitis, Cochair
Mechanical Engineering and Biomedical Engineering, Georgia Institute of Technology, 901 Atlantic Dr. NW, Room 4100Q,
Atlanta, GA 30318
Constantin Coussios, Cochair
Institute of Biomedical Engineering, Department of Engineering Science, University of Oxford, Old Road Campus Research
Building, Oxford OX3 7DQ, United Kingdom
Invited Paper
1:20
1pBAa1. Transcranial acoustic imaging for real-time control of ultrasound-mediated blood-brain barrier opening using a
clinical-scale prototype system. Ryan M. Jones, Meaghan A. O’Reilly (Medical Biophys., Univ. of Toronto, 2075 Bayview Ave.,
Focused Ultrasound Lab (C713), Toronto, ON M4N 3M5, Canada, rmjones@sri.utoronto.ca), Lulu Deng, Kogee Leung (Physical Sci.
Platform, Sunnybrook Res. Inst., Toronto, ON, Canada), and Kullervo Hynynen (Medical Biophys., Univ. of Toronto, Toronto, ON,
Canada)
Multichannel beamforming of passively detected ultrasound (US)-stimulated acoustic emissions is a promising method for guiding
cavitation-mediated therapies. In the context of brain applications, our group and others have previously demonstrated the use of conventional beamforming techniques to transcranially map cavitation activity during microbubble (MB)-mediated blood-brain barrier
(BBB) opening. MB activity can be mapped at pressure levels below the BBB opening threshold, allowing target confirmation prior to
therapy delivery. By including skull-specific phase and amplitude corrections in the reconstruction process, the aberrating effects of the
cranial bone can be compensated for to improve image quality. Recently, we have designed, fabricated, and characterized multi-frequency, transmit/receive, sparse hemispherical phased arrays for MB-mediated brain therapy and simultaneous cavitation mapping
[Deng et al., Phys. Med. Biol. 61, 8476-8501 (2016)]. This talk will review our progress to date in using these prototype systems to
exploit the spatial information obtained from receive beamforming to actively modulate the therapeutic exposures during US-induced
BBB opening, following our previously developed single-element internal calibration approach [O’Reilly & Hynynen, Radiology 263,
96-106 (2012)]. We anticipate that this technique will improve the safety and efficacy of MB-mediated BBB opening, as well as other
future non-thermal US brain treatments such as cavitation-enhanced ablation, sonothrombolysis, and histotripsy.
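[Editor's illustration.] The conventional receive beamforming underlying this abstract can be sketched in a few lines. This is a minimal, generic reconstruction (function names, array geometry, and the optional skull phase-correction interface are assumptions, not the authors' implementation): each channel spectrum is back-propagated to a candidate source point, and the energy of the coherent sum forms the cavitation map.

```python
import numpy as np

def das_passive_map(channel_data, elem_pos, grid_pts, fs, c=1540.0, phase_corr=None):
    """Delay-and-sum passive acoustic map (frequency domain).

    channel_data : (n_elem, n_samp) received RF traces
    elem_pos     : (n_elem, 3) receiver element positions [m]
    grid_pts     : (n_pts, 3) candidate source positions [m]
    phase_corr   : optional (n_elem, n_freq) skull-specific correction factors
    Returns source intensity at each grid point.
    """
    n_elem, n_samp = channel_data.shape
    spectra = np.fft.rfft(channel_data, axis=1)          # (n_elem, n_freq)
    freqs = np.fft.rfftfreq(n_samp, d=1.0 / fs)          # (n_freq,)
    if phase_corr is not None:
        spectra = spectra * phase_corr                   # undo skull aberration
    intensity = np.zeros(len(grid_pts))
    for k, r in enumerate(grid_pts):
        dist = np.linalg.norm(elem_pos - r, axis=1)      # (n_elem,)
        # back-propagate: advance each channel by its time of flight
        steer = np.exp(2j * np.pi * freqs[None, :] * dist[:, None] / c)
        beam = (spectra * steer).sum(axis=0)             # coherent sum
        intensity[k] = np.sum(np.abs(beam) ** 2)         # energy at this point
    return intensity
```

In a skull-corrected variant, `phase_corr` would hold per-element, per-frequency factors (e.g., derived from CT-based skull models) that cancel the aberration before summation.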
Contributed Papers
1:40
1pBAa2. Towards transcranial focused ultrasound treatment planning:
A technique for reduction of outer skull and skull base heating in
transcranial focused ultrasound. Alec Hughes (Dept. of Medical
Biophys., Univ. of Toronto, 101 College St., Rm. 15-701, Toronto, ON
M5G 1L7, Canada, ahughes@sri.utoronto.ca), Yuexi Huang, and Kullervo
Hynynen (Sunnybrook Res. Inst., Toronto, ON, Canada)
Transcranial focused ultrasound is a rapidly growing therapeutic modality with expanding applications in the treatment of brain disorders and diseases. As more treatments are proposed by clinicians, comprehensive, accurate treatment planning is required to take into account the complexities that can arise from the application of ultrasound to the brain. These include skull heating, both on the outer surface of the skull and at any bone at the skull base, as well as phase corrections for skull aberrations. We will present a method for the reduction of outer skull and skull base heating by using phased array controls. First, full-wave numerical simulations are used to demonstrate the corrections on a clinically relevant skull base target using exported clinical imaging data. Then, results from ex vivo experiments are presented to illustrate the application of these phased array controls in the reduction of skull heating by scanning a 3D volume of heating around the focus, while computing the corrections on a clinically relevant timescale. Potential limitations of the method and future directions will also be discussed.
3489
J. Acoust. Soc. Am., Vol. 141, No. 5, Pt. 2, May 2017
Acoustics ’17 Boston
1p SUN. PM
lower layer, while temperature varied in phase for all five layers. The 2-day
average current and temperature formed a stratified structure, varying from -0.6 to -0.1 m/s
and from 23.8 to 28 °C, respectively. The five-layer current and temperature
variations were significantly above the inversion errors. It is suggested that thermal
stratification in the Bali Strait was caused by intrusion of dense cold water
from the Indian Ocean due to coastal upwelling.
2:00
1pBAa3. Passive localization and classification of cavitation activity using group sparsity. Can Baris Top, Alper Gungor, and H. Emre Guven (Adv. Sensing Res. Program Dept., Aselsan A.S., Mehmet Akif Ersoy Mah., 296. Cad. No: 16, Yenimahalle, Ankara 06370, Turkey, cbtop@aselsan.com.tr)
In therapeutic high intensity focused ultrasound (HIFU) applications, cavitation mapping is a powerful tool to monitor and guide the treatment procedure. Furthermore, the frequency spectrum of the cavitation activity can be used to classify the mode of cavitation (stable/inertial), providing a means for increasing the safety of the application. In this study, we formulate cavitation mapping as a group-sparse constrained optimization problem, minimizing the l2,1-norm of the solution. The frequency bins related to a class of cavitation activity (harmonic, ultra-harmonic, or broadband) are grouped using the l2-norm for each voxel, and the l1-norm of the image is minimized. We solve this problem using an Augmented Lagrangian Method, specifically the Alternating Direction Method of Multipliers (ADMM). We used a simulation model to test this method on a 300-mm diameter, 128-element hemispherical receiver array. We calculate the radiated pressure from the microbubbles inside the HIFU beam using a rigid-vessel bubble dynamics model. Then, we reconstruct the image associated with the bubble activity at the focal region using the received signals for various focal pressure distribution scenarios. The results show that the proposed method provides improved resolution and sensitivity, especially for localizing inertial cavitation activity.
2:20
1pBAa4. Optimizing passive cavitation mapping by refined minimum variance-based beamforming method: Performance evaluations in macaque models. Tao Sun, Calum Crake (Radiology, Brigham and Women’s Hospital; Harvard Med. School, 221 Longwood Ave., EBRC 514, Focused Ultrasound Lab., Boston, MA 02115, taosun@bwh.harvard.edu), Brian H. Tracey (ECE, Tufts Univ., Medford, MA), Costas Arvanitis (Radiology, Brigham and Women’s Hospital; Harvard Med. School, Boston, MA), Eric Miller (ECE, Tufts Univ., Medford, MA), and Nathan McDannold (Radiology, Brigham and Women’s Hospital; Harvard Med. School, Boston, MA)
Microbubble-mediated focused ultrasound (FUS) therapies harness mechanical and/or thermal effects to deliver drugs or ablate tissues. Passive acoustic mapping (PAM) enables the spatio-temporal monitoring of cavitation activity, which is critical for the clinical translation of this technique. Traditional PAM is based on delay-and-sum (DAS) beamforming, a method whose quality tends to deteriorate due to issues including multi-bubble interference, distortion of the wavefront caused by the presence of the skull, and unmodeled variability of array elements. To provide robustness, here we apply minimum variance adaptive beamforming to PAM and demonstrate significant improvement in image quality compared to DAS. The minimum variance distortionless response (MVDR) method was evaluated and further improved by adding diagonal loading and by using subarray covariance estimates. Results demonstrate improvements in both resolution and image contrast compared to DAS using either the traditional or the refined MVDR beamformer. The axial full width at half maximum of the microbubble activity at the focus was reduced to 79.5% and 38.5% of that in the DAS image for the traditional and refined MVDR beamformers, respectively. Moreover, the refined MVDR method greatly enhanced robustness, whereas traditional MVDR beamforming induced self-nulling effects. We anticipate that the proposed method will improve our ability to monitor and control FUS-induced cavitation-based therapies.
2:40
1pBAa5. A dual mode hemispherical sparse array for B-mode skull localization and passive acoustic mapping within a clinical MRI guided focused ultrasound platform. Calum Crake, Spencer Brinker, and Nathan McDannold (Radiology, Brigham and Women’s Hospital, Harvard Med. School, 221 Longwood Ave., Boston, MA 02115, crake@bwh.harvard.edu)
Previous work has demonstrated that passive acoustic imaging may be used alongside MRI for monitoring of focused ultrasound therapy. However, current implementations have generally made use of either linear arrays originally designed for diagnostic imaging or custom narrowband arrays specific to in-house therapeutic transducer designs, neither of which is fully compatible with clinical MR-guided focused ultrasound devices. Here we have designed an array which is suitable for use within an FDA-approved MR-guided transcranial focused ultrasound device, within the bore of a 3 Tesla clinical MRI scanner. The array is constructed from five 0.4-mm piezoceramic disc elements arranged in pseudorandom fashion on a low-profile laser-cut acrylic frame designed to fit between the therapeutic elements of a 230 kHz InSightec ExAblate 4000 transducer. By exploiting thickness and radial resonance modes of the piezo discs, the array is capable of both B-mode imaging at 5 MHz for skull localization and passive reception at the second harmonic of the therapy array for mapping of acoustic sources such as emissions from cavitation. The strengths and limitations of the system for passive acoustic imaging during in vivo experiments will be discussed, utilizing robust and conventional time- and frequency-domain beamforming methods.
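[Editor's illustration.] The adaptive beamforming idea of 1pBAa4, minimum-variance (Capon) weights stabilized by diagonal loading, can be shown in a narrowband sketch. The array geometry, snapshot model, and loading level here are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def mvdr_power(snapshots, steering, loading=1e-2):
    """Minimum-variance (Capon) source power for one steering vector.

    snapshots : (n_elem, n_snap) complex narrowband channel snapshots
    steering  : (n_elem,) complex steering vector toward the test point
    loading   : diagonal loading, as a fraction of the mean channel power
    """
    n_elem, n_snap = snapshots.shape
    R = snapshots @ snapshots.conj().T / n_snap               # sample covariance
    R = R + loading * np.trace(R).real / n_elem * np.eye(n_elem)  # robustness
    Rinv_a = np.linalg.solve(R, steering)
    return 1.0 / np.real(steering.conj() @ Rinv_a)

def das_power(snapshots, steering):
    """Conventional delay-and-sum power, for comparison."""
    n_elem = snapshots.shape[0]
    w = steering / n_elem
    return np.mean(np.abs(w.conj() @ snapshots) ** 2)
```

Diagonal loading trades a little resolution for robustness to covariance estimation error and element variability, which is what counters the self-nulling behavior the abstract mentions for the unmodified MVDR beamformer.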
3:00
1pBAa6. Transcranial histotripsy acoustic-backscatter localization and
aberration correction for volume treatments. Jonathan R. Sukovich,
Zhen Xu, Timothy L. Hall, Jonathan J. Macoskey, and Charles A. Cain
(Biomedical Eng., Univ. of Michigan, 1410 Traver Rd., Ann Arbor, MI
48105, jsukes@umich.edu)
Here, we present results from experiments using histotripsy pulses backscattered off of therapy-generated bubble clouds to perform point-by-point
aberration correction and bubble cloud localization transcranially over large
steering ranges to demonstrate the efficacy of these methods at improving
treatment efficiency and mapping volumetric treatments. Histotripsy pulses
were delivered through an ex vivo human skullcap mounted centrally within
a 500 kHz, 256-element histotripsy transducer with transmit-receive capable
elements. Electronic focal steering was used to steer the therapy focus
through individual points spanning a 30 mm diameter volume centered
about the transducer’s geometric focus. Backscatter signals from the generated bubble clouds were collected using array elements as receivers. Separate algorithms, based on time-domain information extracted from the
collected signals, were used to perform aberration correction and localize
the generated bubble clouds, respectively. The effectiveness of the aberration correction and localization results were assessed via comparison to
hydrophone measurements of the focal pressure amplitude and location
taken before and after backscatter aberration correction and localization
were applied. Backscatter aberration correction results showed increased
focal pressure amplitudes at all steering locations tested. Localization results
were in good agreement with hydrophone measurements, but were seen to
display preferential bias in the pre-focal direction at larger steering
distances.
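[Editor's illustration.] The time-domain aberration-correction step can be read generically as follows: cross-correlate each element's backscatter trace with a reference element to estimate the skull-induced arrival-time error, then apply the negated offsets as transmit delay corrections. This is the generic principle only; the talk's actual algorithms are not reproduced here:

```python
import numpy as np

def aberration_delays(channels, fs, ref=0):
    """Per-element arrival-time offsets from bubble-cloud backscatter.

    Cross-correlates each channel with a reference channel; the lag of the
    correlation peak estimates the timing error for that element, whose
    negative can then be applied as a transmit delay correction.
    channels : (n_elem, n_samp) backscatter traces
    Returns offsets in seconds relative to the reference element.
    """
    n_elem, n_samp = channels.shape
    delays = np.zeros(n_elem)
    for i in range(n_elem):
        xc = np.correlate(channels[i], channels[ref], mode="full")
        lag = np.argmax(xc) - (n_samp - 1)   # samples channel i lags ref
        delays[i] = lag / fs
    return delays
```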
3:20–3:40 Break
Invited Paper
3:40
1pBAa7. Cavitation enhanced drug delivery in-vivo using combined B-mode guidance and real-time passive acoustic mapping:
Challenges and results. Christian Coviello, Rachel Myers, Edward Jackson (OxSonics, Ltd., The Magdalen Ctr., Robert Robinson
Ave., Oxford OX4 4GA, United Kingdom, christian.coviello@oxsonics.com), Erasmia Lyka (Univ. of Oxford, Oxford, United
Kingdom), Lauren Morris, Cliff Rowe, James J. Kwan (OxSonics, Ltd., Oxford, Oxfordshire, United Kingdom), Robert Carlisle, and
Constantin Coussios (Univ. of Oxford, Oxford, United Kingdom)
Inertial cavitation nucleated by nano-scale sonosensitive particles (SSPs) at modest peak negative pressures (~1 MPa at 500 kHz)
and monitored by passive acoustic mapping (PAM) has been recently shown to improve the dose and distribution of anti-cancer agents
during ultrasound (US)-enhanced delivery (Myers 2016, Kwan 2015). As applications of therapy monitoring using PAM have advanced
rapidly, including its use in clinical trials, validating the performance of PAM in vivo remains a major focus of effort. For
drug delivery, PAM should not only quickly and reliably detect and localize desired and undesired cavitation; it should also provide
some predictor of successful delivery. In vivo experiments using PAM in subcutaneous tumor-implanted murine models across a range
of cancer cell lines (HEPG2, SKOV, EMT6, CT-26) demonstrate the detection of inertial cavitation by SSPs in the target regions when
sonicated by US, but no cavitation with US alone. Additionally, when SSPs are co-administered with an oncolytic virus (vaccinia), a
small-molecule chemotherapeutic (doxorubicin), or an immunotherapeutic (anti-PD-L1 antibody), PAM is able to effectively predict
successful delivery in the presence of cavitation in the target regions and unsuccessful delivery in its absence. Enhancements to PAM to deal with artifacts and spurious reflections, and to increase resolution, were also able to improve the monitoring capability. Future work focuses on clinical translation and improved validation methods.
Contributed Papers
4:00
1pBAa8. Doppler passive acoustic mapping for monitoring
microbubble velocities in ultrasound therapy. Antonios Pouliopoulos,
Cameron Smith, Ahmed El Ghamrawy, Mengxing Tang, and James Choi
(BioEng., Imperial College London, Royal School of Mines, Imperial
College London, South Kensington Campus, London SW7 2AZ, United
Kingdom, a.pouliopoulos13@imperial.ac.uk)
The success of microbubble-mediated ultrasound treatments, such as
blood-brain barrier disruption and sonothrombolysis, is determined by
whether the correct cavitation dynamics are produced at the correct locations. Passive acoustic mapping (PAM) can track the location, magnitude,
type, and duration of microbubble-seeded cavitation produced during sonication. Using a single element passive cavitation detector (PCD), we
recently showed that microbubble velocities within the PCD listening volume can be determined by analysing the Doppler shifts in the microbubble
acoustic emissions. Here, we developed a PAM-based algorithm to passively track microbubble velocities using a linear array. Microbubbles embedded within a vessel were sonicated using a 1 MHz focused ultrasound
transducer (pulse length: 50 ms, peak-negative pressure: 200-600 kPa).
Acoustic emissions were captured by a co-aligned L7-4 linear array. PAM
using Capon beamforming was used to localize the acoustic emissions. We
spectrally analyzed the time traces in order to derive position-dependent
Doppler shifts and estimate axial velocities at each location. Doppler PAM
imaged the axial microbubble velocities along the ultrasound propagation
direction, at different time points during sonication. Microbubbles moved at
peak velocities of 1-2 m/s due to acoustic radiation forces, producing a time
dependent velocity profile. Doppler PAM allowed estimation of microbubble translation within an imaging plane, enabling enhanced monitoring of
therapeutic ultrasound applications.
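[Editor's illustration.] The velocity estimate rests on the narrowband Doppler relation v ≈ c·Δf/(2·f0·cosθ) for a moving scatterer. The factor of 2 and the on-axis assumption (cosθ = 1) are sketch assumptions of a pulse-echo-like geometry; the authors' exact convention may differ. A minimal per-location frequency-shift estimator:

```python
import numpy as np

def axial_velocity(trace, fs, f0, c=1540.0):
    """Estimate axial scatterer velocity from the Doppler shift of its
    narrowband emission (pulse-echo-style factor of 2 assumed).

    trace : real-valued emission time trace dominated by a tone near f0 [Hz]
    Returns velocity in m/s (positive = toward the receiver).
    """
    n = len(trace)
    win = np.hanning(n)
    spec = np.abs(np.fft.rfft(trace * win))
    k = float(np.argmax(spec))
    # parabolic interpolation around the peak for sub-bin accuracy
    ki = int(k)
    if 0 < ki < len(spec) - 1:
        a, b, g = spec[ki - 1], spec[ki], spec[ki + 1]
        k = ki + 0.5 * (a - g) / (a - 2 * b + g)
    f_obs = k * fs / n
    return c * (f_obs - f0) / (2.0 * f0)
```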
4:20
1pBAa9. Passive acoustic mapping of extravasation for vascular
permeability assessment. Catherine Paverd, Erasmia Lyka, Delphine
Elbes, and Constantin Coussios (Inst. of Biomedical Eng., Univ. of Oxford,
Old Rd. Campus Res. Bldg., Headington, Oxford OX3 7DQ, United
Kingdom, catherine.paverd@eng.ox.ac.uk)
Prior research has demonstrated that Passive Acoustic Mapping (PAM)
enables real-time monitoring of cavitation activity occurring within the vasculature to achieve drug delivery and/or opening of the Blood Brain Barrier.
In the present work, we focus on whether sub-micron cavitation nuclei can
be imaged once extravasated. This would provide a means of determining
both vascular permeability before or after ultrasound exposure, and real-time monitoring of successful drug delivery. A key challenge in achieving
these objectives is the spatial resolution of PAM. A novel bistatic setup was
used to achieve sub-millimetre resolution both axially and transversely to
two imaging arrays. A vertically oriented flow channel in a tissue mimicking
phantom was placed at the focus of two perpendicular confocal HIFU transducers, each with a coaxial linear imaging array. Sequential acoustic excitation at 0.5 or 1.55 MHz was used to first extravasate, and then re-excite the
nuclei once extravasated. The lower frequency creates stronger microstreaming while the higher frequency favors acoustic radiation force.
Results have demonstrated accurate localization of extravasated nuclei at
0.4 mm from the channel. Localization was achieved using optimal beamforming PAM and verified with fluorescence microscopy. Future work will
focus on in vivo applicability using murine or cancerous perfused organ
models.
4:40
1pBAa10. Passive microbubble imaging with short pulses of focused
ultrasound and absolute time-of-flight information. Mark T. Burgess,
Iason Apostolakis, and Elisa Konofagou (Biomedical Eng., Columbia Univ.,
630 West 168th St., Physicians and Surgeons 19-418, New York, NY
10032, mark.t.b42@gmail.com)
Focused ultrasound (FUS)-stimulated microbubble activity has been
proposed as an efficient technique in numerous therapeutic ultrasound applications. Passive imaging of microbubble activity is used to spatially map the
intensity and location of microbubble activity for correlation with therapeutic outcomes. Current passive imaging methods were developed for application with continuous-wave FUS therapies and have inherent limitations
including poor axial image resolution. This study seeks to implement a synchronous passive microbubble imaging method using short pulses of FUS
(200-500 kPa peak negative pressures, 2-3 cycles) at high frame rates (500-5000 Hz pulse repetition rate) to preserve absolute time-of-flight and
improve axial resolution. In vitro and in vivo studies were carried out using
an 18-MHz imaging array (L22-14v LF, Verasonics, Inc.) and 1-MHz FUS
transducer aligned off-axis relative to the imaging array. A research-based
ultrasound system (Vantage 256, Verasonics, Inc.) was used for custom
transmit and receive sequences. Results indicate that this technique is able
to “localize” microbubbles with improved resolution compared to previous
methods and create detailed microvascular maps of microbubble activity
throughout the focal area. The application of this technique for monitoring
FUS-mediated blood-brain barrier opening will be shown. [Work supported
in part by NIH grants R01AG038961 and R01EB009041.]
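[Editor's illustration.] The axial-resolution argument can be made concrete: if the emission time of the short FUS pulse is known, each received trace carries an absolute time of flight, so range is simply c·(t_arrival − t_emit). A numpy-only sketch with envelope detection via a manual Hilbert transform (the actual beamforming in this work is more elaborate):

```python
import numpy as np

def tof_range(trace, fs, t_emit, c=1540.0):
    """Axial range of an emission using absolute time of flight.

    With a short transmit pulse and a known emission time, the distance to
    the source is c * (arrival - emission); continuous-wave passive methods
    lack this absolute reference and hence have poor axial resolution.
    """
    n = len(trace)
    # analytic signal via one-sided spectrum (numpy-only Hilbert transform)
    spec = np.fft.fft(trace)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:(n + 1) // 2] = 2.0
    if n % 2 == 0:
        h[n // 2] = 1.0
    env = np.abs(np.fft.ifft(spec * h))      # envelope of the trace
    t_arrival = np.argmax(env) / fs
    return c * (t_arrival - t_emit)
```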
SUNDAY AFTERNOON, 25 JUNE 2017
ROOM 312, 1:40 P.M. TO 4:40 P.M.
Session 1pBAb
Biomedical Acoustics: Imaging II
Parag V. Chitnis, Chair
Department of Bioengineering, George Mason University, 4400 University Drive, 1G5, Fairfax, VA 22032
Contributed Papers
1:40
1pBAb1. Ex vivo testing of basal cell carcinomas and melanomas with high-frequency ultrasound. Christine E. Dalton, Zachary A. Coffman (Biology, Utah Valley Univ., 800 W University Pkwy, Orem, UT 84058, christine.e.dalton@gmail.com), Garrett Wagner (Comput. Sci., Utah Valley Univ., Orem, UT), and Timothy E. Doyle (Phys., Utah Valley Univ., Orem, UT)
The objective of this study is to significantly reduce the length of surgery for skin cancer patients by developing a diagnostic method to quickly distinguish cancerous from non-cancerous tissue. A common treatment for basal cell carcinoma and several melanomas is Mohs surgery, consisting of surgical resection of the tumor and successive resections of the surrounding tissues (margins). Because each excised specimen needs to be examined for cancerous margins, skin cancer surgery can last up to 4 hours. To rapidly evaluate Mohs surgical specimens, a high-frequency (20-80 MHz) ultrasound method, originally developed for testing breast cancer surgical specimens, was modified for smaller skin cancer tissues. The method uses a narrow-beam (1.5-mm diameter) probe, a broad-beam (6.35-mm diameter) transducer, an ultrasonic pulser-receiver, a digital oscilloscope, an aluminum test stage to hold the specimen, and a hybrid water immersion/contact approach to acquire highly accurate data. The method is currently undergoing a feasibility study on skin cancer surgical margins at the Huntsman Cancer Institute, Salt Lake City, Utah. Preliminary results from 16 patients show that the 20-80 MHz peak density values from the power spectra are consistent with those found in previous breast cancer margin studies.
2:00
1pBAb2. Ultrasonic characterization of human colon carcinoma cells in the 5-25 MHz frequency range. Amy Longstreth, Judene Thomas, Yaa Kwakwa, Janae Davis, and Maria-Teresa Herd (Phys., Mount Holyoke College, 50 College St., South Hadley, MA 01075, longs22a@mtholyoke.edu)
Early recognition of cancerous tissue is crucial to a favorable prognosis. Diagnostic tools that allow for an understanding of differences in the ultrasonic characteristics (such as speed of sound (SOS), attenuation, and backscatter coefficients (BSC)) of malignant and benign cells can aid early diagnostics. Although statistically significant distinctions between benign and cancerous tumor scatterer properties have been demonstrated, there is little knowledge about which cell characteristics create differences in scattering. This study centers on techniques using quantitative ultrasound to quantify the microstructure of HTC (colon cancer) cells in an attempt to establish a greater understanding of scattering mechanisms. To analyze these characteristics, HTC cells were cultured, suspended in agar, and prepared into samples. Broadband BSC measurements were conducted using focused transducers, and narrowband attenuation and SOS measurements were performed using receiving and transmitting transducers. All experiments were made in the 5-25 MHz range at 21 °C. A comparison of the obtained results was made with a similar study using a higher concentration of HTC cells to test the ability to accurately estimate the ultrasonic properties. The results introduce relevant data useful for comparative studies and further analysis.
2:20
1pBAb3. Functional neuro-imaging with magnetic resonance elastography. Samuel Patz (Radiology, Brigham & Women’s Hospital, 221 Longwood Ave., Boston, MA 02115, patz@bwh.harvard.edu), Navid Nazari (Biomedical Eng., Boston Univ., Boston, MA), Katharina Schregel, Miklos Palotai (Radiology, Brigham & Women’s Hospital, Boston, MA), Paul E. Barbone (Mech. Eng., Boston Univ., Boston, MA), and Ralph Sinkus (Biomedical Eng., Kings College London, London, United Kingdom)
The objective is to evaluate changes in the shear modulus of brain tissue as a new measure of localized brain function. A spin-echo magnetic resonance elastography (MRE) sequence was modified to allow two interleaved paradigms: stimulus ON/OFF. To avoid neuronal habituation, a paradigm was active for 9 s before switching to the other paradigm. After each paradigm switch, a period of 1.8 s was allowed for hemodynamic equilibrium. Seven healthy black mice were studied. An electrical current to the hind limb, ~1 mA, 3 Hz, pulse width ~250 ms, was used as the functional stimulus. A separate control scan was also performed where no stimulus was applied for either paradigm. The vibration frequency was 1 kHz. In six of the seven animals, a localized increase in G’ was observed in the somatosensory and motor cortex areas, whereas no difference was observed in the control scan. The average increase in G’ was 14%. Two potential mechanisms were considered: (i) a vascular effect similar to BOLD in fMRI and (ii) calcium influx into the neurons. The first mechanism was ruled out based on results from an additional experiment where hypercapnia was induced to cause vasodilation. This implies the mechanism responsible is a primary measure of neuronal activation.
2:40
1pBAb4. The frequency-dependent effects of low-intensity ultrasound
exposure on human colon carcinoma cells. Chloe Verducci, Hannah Seay,
Janae Davis, Amy Longstreth, Yaa Kwakwa, and Maria-Teresa Herd
(Phys., Mount Holyoke College, 50 College St., South Hadley, MA 01075,
verdu22c@mtholyoke.edu)
Previous studies have established a correlative relationship between the
acoustic properties of normal and malignant epithelial cells in response to
high-frequency ultrasound exposure, indicating ultrasound’s value as a tool
in modern non-invasive cancer detection. More recently, ultrasound exposure has been extended into therapeutic fields, manipulated in frequency and
power to stress and destroy specific cancer cells based on their determined
acoustic properties. The present study seeks to determine the frequency dependence of low-intensity ultrasound exposure on varying cell types, beginning with colon carcinoma cells. The cells were exposed to a single-element
unfocused piezoelectric transducer at intensities of less than 3 W/cm2 and frequencies of 0, 5, 10, 15, 20, and 25 MHz, and evaluated for cytotoxicity
indicated by lack of cell adhesion before and after treatment. Comparison is
also made to non-cancerous human colon epithelial cells. This project compares the frequency-dependent effects of low-intensity ultrasound on cells of
varying cancerous and non-cancerous lineages, especially highlighting the
low-point threshold.
3:20
1pBAb5. Simulating fibrin clot mechanics using finite element methods.
Brandon Chung Y. Yeung and E. Carr Everbach (Eng., Swarthmore
College, 500 College Ave., Swarthmore, PA 19081, cyeung2@swarthmore.
edu)
Blood clots inside blood vessels impede blood flow and can lead to
blockage. Injection of thrombolytic agents affects the body systemically and
may lead to hemorrhage. Risk is reduced with sonothrombolysis—high-amplitude pulsed ultrasound that drives micron-sized bubbles into violent oscillations, destroying the fibrin mesh of a clot. To gain insight into the 3D
structure and mechanical behavior of fibrin clots, we fabricated a clot from
purified fibrinogen, imaged it using a confocal microscope and 3D printed a
plastic model of the image. Coordinates of fibrin connection points were
entered into ANSYS, the clot finite-element-simulated for nodal displacements, and the bulk Young’s modulus of the clot calculated. Simulations
suggested that the elastic moduli in the three orthogonal directions were Ex
= 113.5 Pa, Ey = 109.1 Pa, and Ez = 16.17 Pa. The close agreement between
Ex and Ey supported the assumption of isotropy in a fibrin clot. The deviation of Ez from Ex and Ey could be attributed to the presence of the glass
coverslip affecting the clot structure. Overall, results showed that confocal
microscopy and simulations in ANSYS are useful in modeling clot structure
and mechanics. Our next step is to simulate bubble activity inside the virtual
clot.
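[Editor's illustration.] The modulus extraction from the simulated nodal displacements is stress over strain per loading direction; a trivial sketch of that bookkeeping (the numbers in the test are illustrative, not the study's data):

```python
def effective_young_modulus(force, area, delta_l, length):
    """Effective Young's modulus from a simulated uniaxial test on the
    fibrin network: E = stress / strain = (F / A) / (dL / L), in Pa
    for SI inputs."""
    stress = force / area      # applied load over cross-section [Pa]
    strain = delta_l / length  # relative elongation [dimensionless]
    return stress / strain
```

Applying this along x, y, and z to the FEM displacement results yields the three directional moduli (Ex, Ey, Ez) whose near-equality in x and y supports the isotropy assumption discussed in the abstract.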
3:40
1pBAb6. Numerical investigation of the subharmonic response of a cloud of interacting microbubbles. Hossein Haghi, Amin Jafari Sojahrood, Raffi Karshafian, and Michael C. Kolios (Phys., Ryerson Univ., 350 Victoria St., Toronto, ON M5B 2K3, Canada, hossein.haghi@ryerson.ca)
Microbubbles (MBs) usually exist in polydisperse populations and often strongly interact with each other. Accurate investigation of the dynamics of MBs requires considering the interactions between them. We have developed an efficient method for numerically simulating N interacting MBs. The subharmonic (SH) responses of polydisperse populations of 3-52 microbubbles with sizes between 2-5 microns, excited with ultrasound at frequencies of 1.8-8 MHz and pressures of 1-500 kPa, were investigated. We show that if the frequency is set to the SH resonance of the larger MBs, the smaller MBs’ oscillations are controlled by the larger MB and the SH amplitude of the population of MBs increases. For small enough distances between MBs, one large MB may control the oscillations of the rest. If the excitation frequency is equal to the SH resonance of the smaller MBs, there exist two pressure regions. For lower pressures, the oscillations of the larger MB are out of phase with the smaller MBs and the resulting SH amplitude is smaller than in the case without the bigger MB. As the pressure increases, the oscillations of the larger MB become in phase with the smaller MBs and the SH response of the system is enhanced.
4:00
1pBAb7. Real-time monitoring and control of stable cavitation activity in pulsed sonication. Corentin Cornu, Matthieu Guedra, Jean-Christophe Bera, and Claude Inserra (Univ Lyon, Universite Lyon 1, INSERM, LabTAU, F-69003, 151 cours Albert Thomas, Lyon 69424, France, corentin.cornu@inserm.fr)
Even if bubble collapses are commonly thought to be the key element of cell permeabilization for drug delivery applications, recent works have demonstrated the possibility of transfecting cells with gently oscillating bubbles (stable cavitation), possibly resulting in lower cell lysis or tissue damage. Nevertheless, in a bubble cloud both stable and inertial cavitation activities naturally coexist, making it difficult to quantify the contribution of each regime to the drug delivery process. To distinguish each cavitation regime, a feedback-loop process is implemented on the subharmonic component emitted from the bubble cloud generated in a water tank by a focused transducer. This feedback loop, acting at a 250-ms loop rate, adjusts the level of subharmonic emission, as well as measuring inertial cavitation activity (broadband noise emission), by modulating the applied voltage to the transducer in real time. Evidence of control of the stable cavitation activity is reported, associated with (1) the possibility of exciting a time-stable subharmonic component, (2) the lowering of the broadband noise level, and (3) the saving of acoustic energy (up to 20%) to ensure a given subharmonic emission level. [Work supported by the French National Research Agency, LabEx CeLyA (ANR-10-LABX-0060) and granted by the ANR-MOST project CARIBBBOU (ANR-15-CE19-0003).]
4:20
1pBAb8. Towards the accurate characterization of the shell parameters of microbubbles based on attenuation and sound speed measurements. Amin Jafari Sojahrood (Dept. of Phys., Ryerson Univ., 350 Victoria St., Toronto, ON M5B2K3, Canada, amin.jafarisojahrood@ryerson.ca), Qian Li (Biomedical Eng., Boston Univ., Boston, MA), Hossein Haghi, Raffi Karshafian (Dept. of Phys., Ryerson Univ., Toronto, ON, Canada), Tyrone M. Porter (Biomedical Eng., Boston Univ., Boston, MA), and Michael C. Kolios (Dept. of Phys., Ryerson Univ., Toronto, ON, Canada)
Measurement of microbubble (MB) shell parameters is a challenging task because of the nonlinear dynamics of MBs. Shell parameter estimations that are typically based on solving linear models will generate inaccurate results, especially at higher pressure excitations. These approaches also often ignore the analysis of sound speed, which provides useful information about the bulk modulus of the medium. In addition, the effect of MB-MB interaction is neglected. In this work, the attenuation and sound speed of monodisperse MB populations with mean diameters of 4 to 6 microns and peak concentrations of 1000 to 15000 bubbles/ml are measured for a pressure range of 10 to 100 kPa. The subharmonic pressure threshold of the solution was measured by narrowband excitations spanning from 1 to 4 MHz. The subharmonic generation pressure threshold was used to estimate an initial guess for shell viscosity and surface tension. The experimental results were fitted using numerical simulations of the Marmottant model and our recently developed nonlinear model for attenuation and sound speed. The effect of MB-MB interaction was also implemented using simulations of a lattice of interacting MBs (fitted to the measured sizes of MBs) to take into account the effect of concentration.
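[Editor's illustration.] The attenuation measurement underlying such fits can be sketched as a through-transmission comparison of spectra recorded with and without the bubble suspension; the dB/cm convention is standard, but the setup names here are illustrative:

```python
import numpy as np

def attenuation_spectrum(ref_trace, mb_trace, fs, path_cm):
    """Attenuation of a microbubble suspension from through-transmission:
    alpha(f) = (20 / d) * log10(|S_ref(f)| / |S_mb(f)|)  [dB/cm],
    where d is the acoustic path length through the suspension in cm."""
    s_ref = np.abs(np.fft.rfft(ref_trace))
    s_mb = np.abs(np.fft.rfft(mb_trace))
    freqs = np.fft.rfftfreq(len(ref_trace), 1.0 / fs)
    eps = 1e-12                                   # avoid log of zero
    alpha = (20.0 / path_cm) * np.log10((s_ref + eps) / (s_mb + eps))
    return freqs, alpha
```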
3:00–3:20 Break
SUNDAY AFTERNOON, 25 JUNE 2017
ROOM 204, 1:20 P.M. TO 5:20 P.M.
Session 1pEA
Engineering Acoustics: Engineering Acoustics Topics I
Stephane Durand, Chair
LAUM, Universite du Maine, Avenue Olivier Messiaen, Le Mans 72085, France
Contributed Papers
1:20
1pEA1. Reduced-size backing electrode microphone: Models and
measurements. Cheng Qian, Alexey Podkovskiy (LAUM, Université du
Maine, Le Mans, France), Petr Honzik (Faculty of Transportation Sci.,
CVUT, Praha 1, Czech Republic), Nicolas Joly, and Stephane Durand
(LAUM, Université du Maine, Ave. Olivier Messiaen, Le Mans 72085,
France, stephane.durand@univ-lemans.fr)
Reduced-size backing electrode microphones have been developed
recently to achieve an easier match to specified response requirements. Their
development has used both analytical and numerical multi-physics models to
validate the efficiency of the architecture. The microphone is composed of
a membrane covering an annular cavity that surrounds a central backing
electrode. As shown in previous publications, this simplified structure leads
to a higher sensitivity and a larger bandwidth. A new model, based on lumped
elements, has been developed to provide an easy and quick design tool for
choosing the microphone parameters, which are then fitted more precisely
with a FEM model. These tools have led to the development of prototypes,
which have been characterized. A comparison of the measured data with the
values computed by the different models is presented. It shows good agreement
between the several models and the experimental data, and it highlights the
need to take parasitic capacitance effects into account.
1:40
1pEA2. On acoustic characteristics of the Sound-7A coloring method.
Yi Eun Young and Myungjin Bae (Information and TeleCommun., Soongsil
Univ., Sangdo 1-dong, Dongjak-gu, Seoul 156-743, South Korea,
go6051@naver.com)
The most fundamental aspect of the influence of noise on human beings is
the size of the noise source. This paper investigates the acoustic characteristics
of the Sound-7A coloring system and compares each of the Sound-7A colors
with respect to a curve type. The Sound-7A coloring method is applied to
interlayer noise, considering the noise level and the energy ratio of the
lightweight source to the frequency band. The influence of interlayer noise on
human cognitive mechanisms is analyzed based on the loudness curve of
human consciousness. The colors can be determined for both lightweight and
heavy interlayer noise. As previously explained by the cocktail party effect,
human perceptive ability picks out specific information among meaningless
noise because it is somehow irritating to human ears. Among the characteristics
of the Sound-7A coloring method, the energy ratio of the blue sound color
band is nearly twice that of pink noise. This study can be applied in daily life,
since it transfers aural perception into visual images, such as illustrating lights
in rainbow color coding and expressing musical instruments in special lighting
performances.
2:00
1pEA3. On designing a new sound of the car-horn. SangHwi Jee,
Myungsook Kim, and Myungjin Bae (Soongsil Univ., Seoul 06978, South
Korea, slayernights@ssu.ac.kr)
People are exposed to noise from birth to death. Hearing is considered
the first human sense to awaken and the last to fall asleep. The human auditory
sense distinguishes and classifies about 400,000 different sounds. Noise is
generally referred to as unwanted sound, and human hearing is especially
susceptible to it. There are many kinds of noise, such as household noise,
construction site noise, transportation noise, noise between floors, and so on.
Transportation noise, in particular, may harm more people, since the driver as
well as the people on the street are affected by it. This study introduces a new,
friendly car-horn sound based on drivers’ preferences, selected through a MOS
test over five different sounds. The sound was chosen based on characteristics
of the human auditory sense, a brainwave test, a stress index, and perceptive
responses from one hundred subjects who participated in the experiment. The
selected sound can be used in any motor vehicle on the market, so that not
only the drivers honking but also the pedestrians hearing the car-horn can live
more comfortably, without any annoying noise from the streets.
2:20
1pEA4. Spatially shaped acoustic transducer arrays. Stephen C. Butler,
Thomas A. Frank, and Jackeline D. Diapis (Naval Undersea Warfare Ctr.,
1176 Howell St., Newport, RI 02841, stephen.c.butler1@navy.mil)
A shaped acoustic beam pattern that directs acoustic energy in the ±45°
direction with depressed acoustic energy at 0° is described. The shape
of the beam pattern is controlled by area shading and the angular radius of
the electro-acoustic transducer arrays. The advantage of such arrays is that
acoustic energy is directed only into the space needed at ±45°, thus
minimizing the electrical input power requirements. As a result, the number
of electrical wires and the driving circuit requirements are significantly
reduced because the same electrical drive voltage can be applied to all the
elements in the shaped array. The shaped array described herein can be
developed in the form of a one-dimensional (1-D), two-dimensional (2-D),
or three-dimensional (3-D) acoustic sonar transducer array (see US Patent
9,286,418 B1, Mar. 15, 2016). [Funding from ONR.]
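The patented shading scheme is not reproduced here, but the underlying idea, that element phasing and shading can place lobes at ±45° while suppressing the on-axis response, can be sketched with the simplest possible example: two monopoles driven out of phase, with the spacing chosen (an assumption of this sketch, not the patent) so the lobes peak at ±45°:

```python
import math

def pattern(theta_deg, d_over_lambda):
    """Far-field pressure magnitude of two monopoles driven out of phase,
    separated by d wavelengths: |2 sin(k*d/2 * sin(theta))|.
    The anti-phase drive forces a null on axis (theta = 0)."""
    theta = math.radians(theta_deg)
    kd2 = math.pi * d_over_lambda  # k*d/2 = pi * d / lambda
    return abs(2.0 * math.sin(kd2 * math.sin(theta)))

# Spacing chosen so the lobe peaks at +/-45 deg: (k*d/2)*sin(45 deg) = pi/2
d = 1.0 / (2.0 * math.sin(math.radians(45.0)))  # ~0.707 wavelengths
print(round(pattern(0, d), 6))   # 0.0 (depressed energy on axis)
print(round(pattern(45, d), 6))  # 2.0 (maximum at +/-45 degrees)
```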
2:40
1pEA5. Efficiency of capacitive micromachined ultrasound transducers
for large signal non-collapsed operation. Amirabbas Pirouz (Elec. and
Comput. Eng., Georgia Inst. of Technol., 771 Ferst Dr., Love Bldg., Rm.
209, Atlanta, GA 30332, a.pirouz@gatech.edu) and F. Levent Degertekin
(The George W. Woodruff School of Mech. Eng., Georgia Inst. of Technol.,
Atlanta, GA)
Although capacitive micromachined ultrasonic transducers (CMUTs)
are mostly considered for imaging applications because of their broad bandwidth,
these devices can also prove useful for high intensity focused ultrasound
(HIFU) applications in therapeutics. For these purposes, energy conversion
efficiency is especially significant for high-intensity and high-duty-cycle
3:00–3:20 Break
3:20
1pEA6. Comparison of different playback techniques in binaural head-related transfer function synthesis. Florian Wiese, Mina Fallahi, and
Matthias Blau (Institut für Hörtechnik und Audiologie, Jade Hochschule
Oldenburg, Ofener Str. 16-19, Oldenburg, Lower Saxony 26121, Germany,
Florian.Wiese@jade-hs.de)
In binaural synthesis, the playback device (typically: headphones) can
play an important role in achieving the desired perceptual authenticity. One
potential source of error is the variability of headphone transfer functions
when the headphones need to be taken off and on again after equalization.
To avoid this issue, Erbes et al. (2012) proposed a device with miniature
loudspeakers located about 5 cm away from the ears, which could remain in
place. However, the device itself forms an obstacle not present in normal listening conditions and may therefore introduce direction-dependent artifacts.
As an alternative, we propose a device which is placed in the subjects’ ear
canals. Due to its small size and the position in the ear canal, it is hoped that
artifacts will be independent of the direction of sound incidence. In order to
compare the two devices which remain in place to the more traditional playback over classical headphones, binaural syntheses were generated for 432
source positions (12 elevations and 36 azimuths) and related to transfer
functions from real sources to microphones of a dummy head with ear
canals. It was found that the agreement between synthesis and measurement
was best with the in-ear device.
3:40
1pEA7. Smartphones as research platform for hearing improvement
studies. Nasser Kehtarnavaz and Issa M. Panahi (Elec. Eng., Univ. of Texas
at Dallas, 800 West Campbell Rd., Richardson, TX 75080,
kehtar@utdallas.edu)
This poster presents the development of software tools at the University
of Texas at Dallas to turn smartphones into a research platform for hearing
improvement studies, as part of the newly funded R01 NIH project entitled
“Smartphone-Based Open Research Platform for Hearing Improvement
Studies.” The challenge in deploying smartphones as a research platform for
hearing improvement studies lies in using the programming languages that
researchers are most familiar with, namely MATLAB and C; in other words,
the challenge is to avoid requiring researchers to know the programming
languages associated with smartphones (Java for Android and Objective-C
for iOS). This challenge is met in this work by developing software shells
that allow signal processing codes written in either C or MATLAB to run
on the ARM processors of smartphones and tablets. As part of this poster
presentation, demos of the apps generated so far with these software shells
will be presented.
4:00
1pEA8. Active control of a finite line source using multiple directional
sources. Qi Hu and Shiu-Keung Tang (Dept. of Bldg. Services Eng., The
Hong Kong Polytechnic Univ., Hung Hom, KLN, Hong Kong,
qi.bs.hu@connect.polyu.hk)
The active control of a finite line source in free space is studied using
different types of secondary control sources to create a particular quiet zone.
Simulations based on an analytical formulation indicate that the active control
improves significantly with the use of directional secondary sources, such as
axially oscillating baffled circular pistons. A comparison between directional
sources with different directivity patterns shows that the directivity has a
decisive effect on the control results. A multi-part directional source is then
introduced as a novel secondary control source, in its simplest two-part form:
an inner piston and an outer concentric annulus, both oscillating axially with
optimized amplitudes and relative phase. This novel secondary source achieves
excellent control results within a realistic physical size. The control performance
of novel sources with more outer annuluses is also studied.
4:20
1pEA9. Influence of piezoelectric materials on the performance of thin
film hydrophones. Hanna Lewitz and Eckhard Quandt (Inst. for Mater.
Sci., Kiel Univ., Kaiserstr. 2, Kiel 24143, Germany, hale@tf.uni-kiel.de)
Piezoelectric materials have been used in hydrophones since the early
20th century, mostly as cut bulk sensors or in cylindrical or spherical form [1].
Beginning with quartz, other materials soon followed; in particular, lead
zirconate titanate, with its high piezoelectric coefficients, brought a large
improvement in acoustic measurements. Nowadays, miniaturization of
hydrophones is of interest, for example, in high-resolution arrays for noise
monitoring. Thus, hydrophone materials are needed that achieve, with less
volume, a performance similar to that of state-of-the-art bulk materials. This
work therefore investigates hydrophones based on different piezoelectric thin
films, which are mostly compatible with MEMS production and show good
properties for the detection of acoustic signals. Sensors with different materials
have been produced, and their performance has been evaluated under
laboratory conditions. [1] C. H. Sherman and J. L. Butler, Transducers and
Arrays for Underwater Sound, Springer Science + Business Media, New
York, 2007.
4:40
1pEA10. Electric power generation using acoustic helical wavefronts in
air. Ruben D. Muelas H., Jhon F. Pazos-Ospina, and Joao L. Ealo (School
of Mech. Eng., Universidad del Valle, Bldg. 351, Cali, Colombia,
ruben.muelas@correounivalle.edu.co)
Acoustic vortices (AVs) can transfer angular momentum to matter and
induce rotation in objects with different geometries. Several applications of
this feature have already been reported. However, to our knowledge, the
possibility of generating electric power using this type of wavefront has not.
In this work, we present experimental results on the electric power produced
by a small generator coupled to a four-blade propeller insonified by an
acoustic vortex beam of topological charge +1. The AV is produced using a
multitransducer of 123 commercially available emitters driven with a
continuous signal of 20 Vpp at 40 kHz in air. The propeller was located
40 mm from the multitransducer, perpendicular to the principal axis of the
AV. A voltage of 6 mV was observed at the electric terminals. A discussion
of the obtained conversion efficiency is presented, along with the possibility
of harvesting residual acoustic energy by means of wave-matter exchange of
linear and/or angular momentum. Special attention is paid to non-dissipative
processes.
ultrasound applications. The usual small-signal coupling coefficient analysis
based on capacitance and resonance frequency measurements is not adequate
for large-signal applications, when nonlinearity comes to affect device
performance. Therefore, an energy conversion ratio (ECR) based on a
nonlinear large-signal model has been proposed to analyze high-power
CMUT operation with and without DC bias (AC only). The results on a
particular CMUT operating around 5 MHz show that AC-only operation at
half the device working frequency provides a higher pressure level (0.8 dB
more) than the DC-biased case and can achieve 90% ECR. Since the input
and output frequencies are not the same for AC-only operation, insertion loss
(IL) is defined in this case as the ratio of mechanical output power to the
available electrical power. With that definition, AC-only and DC-biased
operation show about the same IL, in line with the ECR calculations.
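The power-ratio bookkeeping above is simple enough to make concrete. The sketch below uses hypothetical numbers (the abstract reports only the 90% ECR figure, not the underlying powers) and expresses the ratio in dB under the assumed convention that a positive value means power lost:

```python
import math

def insertion_loss_db(p_mech_out_w, p_elec_avail_w):
    """Insertion loss per the abstract's definition for AC-only CMUT drive:
    the ratio of mechanical output power to available electrical power,
    here converted to dB with the (assumed) convention that positive dB
    means power lost, so a lossless device gives 0 dB."""
    return -10.0 * math.log10(p_mech_out_w / p_elec_avail_w)

# Hypothetical example: 0.9 W of mechanical output from 1.0 W of available
# electrical power corresponds to a 90 % energy conversion ratio (ECR).
ecr = 0.9 / 1.0
print(round(ecr * 100), "% ECR =", round(insertion_loss_db(0.9, 1.0), 2), "dB IL")
```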
5:20
1pEA11. Resonant gas oscillation in a tube of square cross section.
Takeru Yano (Mech. Eng., Osaka Univ., 2-1, Yamada-oka, Suita 565-0871,
Japan, yano@mech.eng.osaka-u.ac.jp)
Nonlinear resonant gas oscillations in a tube of square cross section are
studied by solving the full system of Navier-Stokes equations for three-dimensional
compressible gas flows with a finite-difference method. The nonlinear gas
oscillations, with and without shock waves, and the resulting time-averaged
transport of mass, momentum, and energy are investigated in the tube of square
cross section. In particular, we find that there exist streamlines of acoustic
streaming visiting both the inside and the outside of the boundary layer, which
means that the transport of mass, momentum, and energy is confined neither to
the inside nor to the outside of the boundary layer on the tube wall, contrary to
two-dimensional flows.
SUNDAY AFTERNOON, 25 JUNE 2017
ROOM 200, 1:15 P.M. TO 5:40 P.M.
Session 1pMU
Musical Acoustics and Architectural Acoustics: Concert Hall Acoustics
Jonas Braasch, Cochair
School of Architecture, Rensselaer Polytechnic Institute, 110 8th Street, Troy, NY 12180
David H. Griesinger, Cochair
Research, David Griesinger Acoustics, 221 Mt. Auburn St. #504, Cambridge, MA 02138
Chair’s Introduction—1:15
Invited Papers
1:20
1pMU1. Influence of the exact source and receiver positions on the uncertainty of acoustical measurements in concert halls. Ingo
B. Witew and Michael Vorlaender (Inst. of Tech. Acoust., RWTH Aachen Univ., Neustrasse 50, Aachen 52066, Germany, ingo.witew@
akustik.rwth-aachen.de)
Acoustical measurements are crucial to backing up theories or supporting conclusions in research and practical applications. In concert halls, however, it is well known that small changes to the receiver position yield a measurable change in the impulse response and
the calculated single-number parameter. This gives rise to the questions of whether these spatial fluctuations limit the validity of measurements and whether there are implications for measurement applications. The presented study discusses how a measurement uncertainty
approach may provide a new perspective on these problems. Based on array measurements, a relationship has been established that quantifies how a change in measurement position leads to an average change in a room acoustic quantity. Strategies as outlined by the “Guide
to the Expression of Uncertainty in Measurement” (GUM) are used to determine the bounds within which valid measurement results can be
collected. It is discussed how these findings can be considered in applied measurement studies.
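The GUM machinery the abstract invokes boils down, for uncorrelated inputs, to the law of propagation of uncertainty: combine each input's standard uncertainty through its sensitivity coefficient, then apply a coverage factor. A minimal sketch, with made-up numbers (the study's actual error budget is not given in the abstract):

```python
import math

def combined_std_uncertainty(sensitivities, std_uncertainties):
    """GUM law of propagation of uncertainty for uncorrelated inputs:
    u_c(y) = sqrt( sum_i (c_i * u(x_i))**2 ),
    where c_i = df/dx_i is the sensitivity coefficient of input x_i."""
    return math.sqrt(sum((c * u) ** 2
                         for c, u in zip(sensitivities, std_uncertainties)))

# Hypothetical example: a room-acoustic quantity with two uncorrelated
# error sources, e.g., position-induced spread and instrumentation noise
# (illustrative numbers, unit sensitivities assumed).
u_c = combined_std_uncertainty([1.0, 1.0], [0.3, 0.4])
U = 2.0 * u_c  # expanded uncertainty, coverage factor k = 2 (~95 % coverage)
print(u_c, U)  # 0.5 and 1.0 for these inputs
```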
1:40
1pMU2. Disentangling room acoustics through binaural measurements: The importance of individual ear-canal variations. David
H. Griesinger (Res., David Griesinger Acoust., 221 Mt. Auburn St. #504, Cambridge, MA 02138, dgriesinger@verizon.net)
Evolution has endowed humans with extraordinary powers of hearing, powers that enable us to separate information-containing signals from noise and other signals with an acuity still unmatched by modern machines. We find that this ability depends critically on
resonances in the ear canal. The ear canal and concha form a horn that boosts frequencies above 1000 Hz as much as 18 dB. These
resonances are highly individual, headphones alter them, and just a 1 dB difference in the frequency balance at the eardrum can replace
comprehension with confusion. We have developed a computer app that non-invasively equalizes headphones to accurately reproduce
these resonances. Once headphones are individually equalized they reproduce binaural recordings with frontal localization and no head
tracking. Using Lokki’s anechoic recordings we can manipulate a single binaural measurement to create a binaural rendition of a small
ensemble nearly identical to a live recording in the same seat. We can then lower or boost specific reflections and hear how perceptions
of proximity, clarity, and envelopment change. We find that successful separation of the direct sound from reflections and reverberation
is essential for both proximity and envelopment. The earliest reflections, whether medial or lateral, are almost always either inaudible or
detrimental.
2:00
1pMU3. The use and abuse of early energy in concert space design. Christopher Blair and Paul H. Scarbrough (Akustiks, LLC, 93
North Main St., Norwalk, CT 06854, cblair@akustiks.com)
Fifty-five years ago Leo Beranek introduced the notion of a short initial time delay gap as being essential for acoustic “intimacy.”
However, in the authors’ consulting experience, it has become apparent that many problems in concert and recital halls, onstage and in
the audience, can be laid at the feet of too much early energy muddying clarity. A layman’s description of the effect might be: “There’s
not enough ‘air’ around the sound.” In support of this notion, recent research suggests that excessive early energy is actually the enemy
of intimacy, masking the direct sound, modulating the phase of tonal components, inhibiting source localization. This paper presents the
results of several listening experiments in halls of various sizes where adding absorption, venting or redirecting energy in specific critical
locations dramatically enhanced the perception of both clarity and reverberation for musicians onstage and in the audience. Such methods often challenge traditional musician (and designer?) preconceptions, so whenever possible we employ quickly realized A/B comparisons to demonstrate treatment effectiveness.
2:20
1pMU4. Hybrid shaping applied to concert hall design. Jose A. Nepomuceno (Acústica & Sônica, Rua Fradique
Coutinho, 955 cjt 12, São Paulo, São Paulo 05433-000, Brazil, info@acusticaesonica.com.br) and Christopher Blair (Akustiks, Norwalk,
CT)
One of the first questions posed at the beginning of any concert hall design process is whether the room will be in the traditional
long “shoebox” shape or the more intimate wrapping of the audience around the performers characteristic of the “vineyard” approach.
This paper presents the design process, acoustical measurements, and musician comments for a recently completed room where the
answer was “both.” Sala Minas, the home of the Orquestra Filarmônica de Minas Gerais in Belo Horizonte, Brazil, has
1500 seats with a hybrid shape combining the attractive physical and acoustical attributes of the shoebox and vineyard approaches. While
the audience is held close to the performers, enhancing clarity and impact, the basic room geometry is a modified rectangle with a significant vertical “hard cap” zone providing ample reverberation, envelopment, and blending of orchestral sections. The result is a vineyard
configuration with unusually consistent acoustic character in all the seating sections. Adjustable acoustic elements include a movable
canopy over the stage, motorized acoustical banners, and shutters on the stage walls that can be opened or closed. Two seasons
after Sala Minas opened, the reviews of its acoustics are impressive.
2:40
1pMU5. Applying subjective perception to acoustical planning. Gunter Engel (Müller-BBM, Robert-Koch-Str. 11, Planegg 82152,
Germany, Gunter.Engel@mbbm.com)
Traditional room acoustical planning suffers from a considerable gap between the available measurement techniques and quality criteria on the one side and subjective perception on the other. Since subjective perception is to a certain extent a matter of taste,
and is moreover considerably overlaid by expectations and impressions that have nothing to do with acoustics, it is no wonder that it is
so hard to find a suitable approach for creating the universally demanded perfect, world-class acoustics. Applying new insights into the
influence of early reflections on the perceived sound characteristics helps considerably in tailoring the acoustics of a hall to such
expectations. The approach is illustrated by two concert halls with innovative design measures.
3:00
1pMU6. Achieving the excellent listening experience—Notes from 30 years experience in the successful integration of physical
and electronic architecture. Steve Barbar (E-coustic Systems, 30 Dunbarton Rd., Belmont, MA 02478, steve@lares-lexicon.com)
This paper will discuss important considerations in the elements that comprise electronic systems that alter perceived acoustics. In
addition, the paper will discuss aspects of optimizing physical architecture to enable electronic architecture to work efficiently.
3:20–3:40 Break
3:40
1pMU7. Comparison of listener preference in concert halls from the stage and the audience. Samuel W. Clapp (Audio Information
Processing, Tech. Univ. of Munich, Arcisstr. 21, München 80333, Germany, samuel.clapp@tum.de), Anne Guthrie (Arup Acoust., New
York, NY), and Jonas Braasch (Graduate Program in Architectural Acoust., Rensselaer Polytechnic Inst., Troy, NY)
York, NY), and Jonas Braasch (Graduate Program in Architectural Acoust., Rensselaer Polytechnic Inst., Troy, NY)
The perception of concert hall acoustics has been studied extensively from the perspective of the audience, but less extensively from
the perspective of the musicians performing on stage. In this study, impulse response measurements were conducted in a group of eight
concert and recital halls in the northeastern United States. A multi-channel microphone array was used to measure listening positions
both on the stage and in the audience. The impulse responses recorded with the microphone array were used to generate auralizations for
playback over multi-channel loudspeaker arrays, to investigate listeners’ and musicians’ preferences via listening tests. The audience
positions were presented to the test subjects via static, pre-convolved auralizations. For the stage positions, all test subjects were musicians who, during the course of the test, performed on their instruments and heard the resulting auralizations from the stages of the different concert halls, generated in real-time. The results allowed for a comparison of listener preferences from the stage and audience
positions in the same set of halls. The microphone array recordings also allowed for spatial energy analysis and the development of new
spatial room impulse response parameters that could be correlated with listeners’ preference judgments.
4:00
1pMU8. V. L. Jordan and Jørn Utzon: Acoustic and architectural interactions in the early design of the Major Hall at the
Sydney Opera House, 1957-1962. Pamela Clements (Clements Acoust. Design Assoc., Unit 10, 23 Balfour Rd., Rose Bay, NSW 2029,
Australia, clements.pamela@gmail.com)
Vilhelm Lassen Jordan was the first acoustician engaged by Jørn Utzon to work on the Sydney Opera House. Jordan was a Danish
acoustic engineer whose advice was based in strictly scientific, quantifiable acoustics, combined with precedent. He was a pioneer in
acoustic modeling. Jordan had a major influence on Utzon’s first scheme for the Major Hall—a “rectangular” form showing Jordan’s
classic approach to acoustic design. Utzon’s second, “faceted” scheme, developed between August 1960 and September 1963, was architecturally compelling but showed little of Jordan’s influence. Jordan complained of the architect’s “fancies” in the design, and by 1962
Utzon believed that Jordan had “given up.” In June 1962, Utzon brought in Lothar Cremer and Werner Gabler from the Berlin Institute
for Technical Acoustics, acousticians who provided scientific acoustic expertise and also worked with Utzon to integrate their acoustic
recommendations into his design. This paper focuses on Jordan’s early acoustic input into the design of the Major Hall, including his
collaboration with Utzon on the first (rectangular) scheme and his early design of the second (faceted) scheme. It also considers dichotomies between science and art in the early design approaches of Jordan, Cremer and Gabler for the Major Hall.
4:20
1pMU9. Auditory perception in rooms. Jens Blauert (Inst. of Commun. Acoust., Ruhr-Univ. Bochum, Commun. Acoust., Bochum
44780, Germany, jens.blauert@rub.de) and Jonas Braasch (School of Architecture, Rensselaer Polytechnic Inst., Troy, NY)
In the design process of concert halls, it is the task of the architects—preferably with the aid of experienced acoustical consultants—
to transform their concept of how the hall should sound into built form. To this end, a profound knowledge of the psychoacoustics of listening in concert halls is mandatory. Psychoacoustics relates auditory perception to the physical attributes of the sound field in the hall.
While the sound field can be assessed by physical measurement, the measurement of auditory percepts requires human assessors. However, to avoid costly listening tests, algorithms have been developed for estimating certain features of auditory percepts in concert halls. These
estimators are useful for computer simulations of halls. In our talk, basic psychoacoustic phenomena and relevant instrumental estimators for psychoacoustic features will be evaluated. Finally, the concept of the “Quality of the Acoustics” of a concert hall will be considered in a broader context, thereby including aspects like the “Quality of Communication” in the hall. The type and quality of the
information carrier in the light of the cognitive background of the listeners will be discussed, as well as the influence of visual, tactile,
and olfactory cues on the quality assessment of concert halls.
4:40
1pMU10. Contemporary multi-use concert hall design: Experience and analysis. Anne Guthrie, Todd Brooks, Raj Patel, and Joe
Solway (Arup, 77 Water St., New York, NY 10005, anne.guthrie@arup.com)
The acoustic design practice at Arup in New York has designed multiple performing arts spaces with a wide range of acoustic characteristics and functions, several of which have opened within the past few years. Many of these spaces make use of adjustable acoustics
to accommodate a wide range of program within a single space, including acoustics control chambers, adjustable-height canopies, adjustable sound absorbing curtains and banners, and flexible configurations of various stage elements. Traditional objective parameters
have been examined for both audience and stage acoustics, and additional spatial analysis is explored through visualization of 3D
impulse responses. The relationships between these parameters and each venue’s program goals will be addressed. Our experiences and
findings obtained through the process of design, construction, and post-opening will be shared.
Contributed Papers
5:00
1pMU11. Methods to measure stage acoustic parameters: Overview and
future research. Remy Wenmaekers, Constant Hak, Maarten Hornikx
(Bldg. Phys. and Services, Eindhoven Univ. of Technol., P.O. Box 513,
Eindhoven 5600MB, Netherlands, r.h.c.wenmaekers@tue.nl), and Armin
Kohlrausch (Human Technol. Interaction, Eindhoven Univ. of Technol.,
Eindhoven, Netherlands)
The acoustics on stage has been recognized as an important design consideration for concert halls and other performance or rehearsal spaces. Stage
acoustic parameters such as STearly and STlate are used to judge the early
and late reflected sound levels on the stage. However, correlation of these
parameters with perceptual attributes has not always been found. An explanation could be that the parameters used are not appropriate and/or that musicians find it hard to judge acoustic conditions. Another possible explanation
is that the measurement methods used are not accurate enough. The goal of
previous research by the authors was to investigate the uncertainties in the
physical measurement. In this paper, an overview will be presented of the
main findings that can serve as a starting point for future research. The following topics are covered: time windows for early and late sound, the reference level at 1 m distance, directivity of the omnidirectional sound source,
impulse response quality, occupied stage measurements and directional
transducers. Finally, a fast and musician friendly measurement method is
presented that can be used to accurately measure acoustic parameters on a
stage occupied by a full orchestra. The paper concludes with recommendations for future research.
5:20
1pMU12. Rich data trove gathered as a result of a unique opportunity
with a capacity audience. David Greenberg (Creative Acoust., LLC, 5
Inwood Ln., Westport, CT, david@creative-acoustics.com), Steve Ellison,
Melody Parker, and Roger W. Schwenke (Meyer Sound Labs, Inc.,
Berkeley, CA)
When an opportunity arises to perform an occupied room measurement,
normally a short time is allotted to acquire data in order to minimize event
disruption and audience discomfort. We therefore consider ourselves fortunate if one or two data sets are obtained under those conditions. An
extremely rare—perhaps unique—opportunity arose from a confluence of
factors: a 1600-seat concert hall at Liberty University, with a purposely
over-sized area of movable curtains and banners to adjust the architectural
reverberation; an installed active acoustics system, Constellation by Meyer
Sound; a concert comprising distinct performances by orchestra, choir, and
amplified worship band, with intermission to reset the architectural acoustics; and an interested client. The Constellation system provides 48 microphones distributed throughout the space, so a single set of sweeps through a
system loudspeaker results in 48 measurements. The variables tested were
(a) adjustable acoustic absorption in and out; (b) full audience in and out;
and (c) a nominal Constellation setting on and off. Measurement analysis
both confirmed design intentions and elucidated the behavior of the two
SUNDAY AFTERNOON, 25 JUNE 2017
ROOM 203, 1:15 P.M. TO 4:00 P.M.
Session 1pNSa
Noise, Psychological and Physiological Acoustics, and Structural Acoustics and Vibration: Perception of
Tonal Noise
Joonhee Lee, Cochair
Department of Building, Civil and Environmental Engineering, Concordia University, EV 6.231, 1515 Rue Sainte-Catherine
O, Montreal, QC H3H1Y3, Canada
Roland Sottek, Cochair
HEAD acoustics GmbH, Ebertstr. 30a, Herzogenrath 52134, Germany
Chair’s Introduction—1:15
Invited Papers
1:20
1pNSa1. Tonality calculation with modified DIN standard. Arne Oetjen and Steven van de Par (Acoust. Group, Carl von Ossietzky
Univ. Oldenburg, Carl-von-Ossietzky-Straße 9-11, Oldenburg D-26129, Germany, arne.oetjen@uni-oldenburg.de)
Many sounds emitted by rotating machinery such as gearboxes, turbochargers, electrical motors, or generators contain tonal parts. The German standard DIN 45681 specifies an FFT-based method for calculating the amount of perceived tonality in a complex sound, based on the level difference between the tonal component and the noise in the surrounding critical band. Although the calculation method of DIN 45681 coincides well with subjective ratings for most stationary sounds, it often fails for sounds containing tonal components with rapid frequency changes, such as turbochargers, due to the length of the analysis window. A pre-processing stage using shorter analysis windows was implemented that was still able to estimate precise frequencies and amplitudes of tonal components. The method obtains accurate frequency estimates of tonal components by dividing the spectrum of the time-differentiated signal by that of the original signal, and is a modification of the method of Desainte-Catherine and Marchand [J. Audio Eng. Soc. 48, 654-667 (2000)]. A post-processing stage using tracking and peak-picking algorithms was added to remove detected tonal components too short to be audible. This method allows both high temporal and high spectral resolution in the estimation of tonal components.
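The differencing relation at the heart of this approach can be sketched in a few lines (an editor's illustration, not code from the paper; the function name and parameter choices are hypothetical). For a sinusoid at frequency f, the first-difference signal has amplitude 2 sin(πf/fs) relative to the original, so the ratio of the two peak spectral magnitudes can be inverted for f with far finer than one-bin precision:

```python
import numpy as np

def estimate_tone_frequency(x, fs):
    """Estimate the frequency of the dominant tonal component of x.

    Illustrative sketch: the ratio of the peak spectral magnitude of the
    first-difference signal to that of the original signal equals
    2*sin(pi*f/fs) for a sinusoid at frequency f, which is inverted for f.
    """
    w = np.hanning(len(x))
    d = np.zeros(len(x))
    d[:-1] = np.diff(x)          # first difference approximates the derivative
    X = np.fft.rfft(w * x)       # spectrum of the original signal
    D = np.fft.rfft(w * d)       # spectrum of the differenced signal
    k = int(np.argmax(np.abs(X)))            # dominant spectral peak bin
    ratio = np.abs(D[k]) / np.abs(X[k])      # ~ 2*sin(pi*f/fs) at the peak
    return fs / np.pi * np.arcsin(min(ratio / 2.0, 1.0))
```

With a 1-kHz tone sampled at 48 kHz and only a 1024-sample window (bin spacing of roughly 47 Hz), the estimate lands within a few hertz of the true frequency, which is the sub-bin precision the short-window pre-processing stage relies on.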
1:40
1pNSa2. Status quo of standardizing tonality calculation of stationary and time-varying sounds. Roland Sottek (HEAD Acoust.
GmbH, Ebertstr. 30a, Herzogenrath 52134, Germany, roland.sottek@head-acoustics.de)
For many years, in various applications of noise assessment, tonality measurement procedures such as the Tone-to-Noise Ratio (TNR), Prominence Ratio (PR), and DIN 45681 Tonality have been applied to identify and quantify prominent tonal components. In the recent past, as product sound pressure levels have become lower, disagreements between perception and measurement have increased across a wide range of product categories, including automotive, information technology, and residential products. One factor is that tonality perceptions arising from spectrally elevated noise bands of various widths and slopes, from non-pure tones, from discrete (pure) tones, and from combinations of these can be mis-measured or escape measurement in “hybrid” sound-pressure-based tools and in tools sensitive only to discrete tones. To address such issues, a new perceptually adequate tonality assessment method based on the hearing model of Sottek was developed, which evaluates the nonlinear and time-dependent loudness of both tonal and broadband components, separating them via the autocorrelation function (ACF) and giving their spectral relationships. This new perception-model-based procedure, suitable for identifying and ranking tonalities from any source, is proposed for the next edition of ECMA-74 as an alternative to the existing methods TNR and PR (ECMA-74, Annex D).
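The ACF-based separation rests on a simple property: a periodic (tonal) component keeps the autocorrelation high at lags equal to its period, while broadband noise decorrelates after lag zero. A toy illustration of that cue (an editor's sketch, not the Sottek hearing model; the function name and lag threshold are arbitrary):

```python
import numpy as np

def acf_tonal_strength(x, min_lag=8):
    """Toy tonality cue: the largest normalized autocorrelation value at
    lags >= min_lag. A periodic (tonal) component keeps the ACF high at
    its period, while broadband noise decays toward zero after lag 0."""
    acf = np.correlate(x, x, mode="full")[len(x) - 1:]  # lags 0..N-1
    acf = acf / acf[0]                                  # normalize by power
    return float(acf[min_lag:].max())
```

For a 1-kHz tone in noise the measure stays near the tone-to-total power ratio, while for noise alone it collapses toward zero; a perceptual model of course goes far beyond this, adding nonlinear, time-dependent loudness weighting.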
3499
J. Acoust. Soc. Am., Vol. 141, No. 5, Pt. 2, May 2017
Acoustics ’17 Boston
3499
2:00
1pNSa3. Frequency tones and elevated decibel level of tones that are indicators of ill health or a state of health with associated
pain. Bonnie Schnitta (SoundSense, LLC, 39 Industrial Rd., Unit 6, PO Box 1360, Wainscott, NY 11937, bonnie@soundsense.com) and
Carter H. Sigmon (Dept. of Physical Medicine & Rehabilitation, Naval Medical Ctr. San Diego, San Diego, CA)
A study confirmed a correlation between noise sensitivity and ill health. The study included clients who had hired an acoustic engineer to solve a noise problem and patients who had a scheduled doctor’s appointment. Male and female patients in diverse outpatient specialty clinics, including Sports Medicine, Pain Medicine, Hematology & Oncology, and Breast Health, were surveyed. The correlation of noise sensitivity to ill health was found across a vast array of illnesses. A prior finding of a correlation between ill health and sensitivity to lower-frequency noise typically only 1-2 dB(A) above ambient was confirmed. Two additional findings were noted. First, there was a heightened sensitivity to elevated noise levels at least 15 dB(A) above ambient, such as construction noise, children screaming, and sirens. This noise sensitivity was found in people with various illnesses, including but not limited to patients with cancer, atrial fibrillation, undescribed back pain, or untreated pain from a shoulder injury. Second, some or all of the sensitivity dissipated upon treatment or remission of the illness or pain. This paper additionally discusses how to view this noise sensitivity as an indicator of ill health, as well as its influence on hospital design.
2:20
1pNSa4. Assessing tones in refrigeration and air-conditioning equipment. Derrick P. Knight (Ingersoll Rand - Trane, 3600 Pammel
Creek Rd., La Crosse, WI 54601, derrick.knight@irco.com)
AHRI 1140 is a procedure for assessing the quality of sound for air-conditioning and refrigeration equipment. The procedure is based on work by Wells and Blazier in 1963 and jury listening tests by Penn State in 2001. This standard describes the measurements and calculations needed to determine the Sound Quality Index (SQI). The AHRI Technical Committee on Sound (TCoS) conducted an informal survey of manufacturing, design, and consulting engineers, which failed to identify any active users of SQI. TCoS has begun editing this standard in order to make it meaningful in identifying sound quality problems in a way that is useful to consultants and design engineers while being practical for manufacturers to adopt. The current effort is intended to define a metric that will quantify the tonal characteristics of refrigeration and air-conditioning equipment. This presentation will briefly cover SQI and why TCoS believes SQI failed to achieve adoption. Additional tonal metrics will be reviewed, and potential difficulties with the existing metrics will be discussed. Most importantly, half of the presentation time will be used to allow audience feedback and suggestions.
2:40–3:00 Break
3:00
1pNSa5. The loudness of an amplitude-modulated sinusoid as a function of interaural modulator phase, modulation rate, and
level. Brian C. Moore, Matthew Jervis, Luke Harries, and Josef Schlittenlacher (Experimental Psych., Univ. of Cambridge, Downing
St., Cambridge CB3 9LG, United Kingdom, bcjm@cam.ac.uk)
The aim was to test a model of loudness for binaurally presented time-varying sounds (Moore et al., Trends in Hearing, in press). A 1000-Hz sinusoidal carrier was 100% sinusoidally amplitude modulated. The effect on its loudness of varying the interaural modulation phase difference (the IMPD) was assessed. The IMPD of the test sound was 90° or 180° and that of the comparison sound was 0°. A two-interval, two-alternative forced-choice method with a one-up/one-down rule was used to estimate the level difference between the test and the comparison sounds at the point of equal loudness (the LDEL) for baseline levels of 30 and 70 dB SPL and modulation rates of 1, 2, 4, 8, 16, and 32 Hz. The LDELs were negative (mean = -1.2 and -1.6 dB for IMPDs of 90° and 180°), indicating that non-zero IMPDs led to increased loudness. The model predicted that the LDELs should be most negative for modulation rates up to 4 Hz, and should be close to zero for the rate of 32 Hz. The data showed a pattern similar to the predicted pattern, but the LDELs for the modulation rate of 32 Hz were about 0.6 dB more negative than predicted.
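The adaptive procedure described here, a one-up/one-down track that converges on the 50% point of the psychometric function, can be sketched with a simulated listener (an editor's illustration; the assumed PSE, slope, and step values are arbitrary, not the authors' parameters):

```python
import numpy as np

rng = np.random.default_rng(0)

def simulated_listener(level_diff_db, pse_db=-1.2, slope_db=1.5):
    """Return True if the test sound is judged louder. The simulated
    psychometric function is centered on an assumed PSE of -1.2 dB,
    i.e., the test is equally loud when 1.2 dB below the comparison."""
    p_louder = 1.0 / (1.0 + np.exp(-(level_diff_db - pse_db) / slope_db))
    return rng.random() < p_louder

def one_up_one_down(judge, start_db=6.0, step_db=2.0, n_reversals=20):
    """One-up/one-down staircase: lower the test level after a 'louder'
    response, raise it after a 'softer' one; the track oscillates about
    the level of equal loudness. Returns the mean of the late reversals."""
    level, direction, reversals = start_db, 0, []
    while len(reversals) < n_reversals:
        step_dir = -1 if judge(level) else +1
        if direction != 0 and step_dir != direction:
            reversals.append(level)      # direction change = reversal
        direction = step_dir
        level += step_dir * step_db
    return float(np.mean(reversals[4:]))  # discard early reversals
```

Averaging the late reversal levels recovers the simulated listener's point of equal loudness to within about a step size, which is why such tracks are a standard way to estimate an LDEL.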
3:20
1pNSa6. Can partial loudness of the tonal content be the basis for tone adjustments? Jesko L. Verhey and Jan Hots (Dept. of
Experimental Audiol., Otto von Guericke Univ. Magdeburg, Leipziger Str. 44, Magdeburg 39120, Germany, jesko.verhey@med.ovgu.
de)
Environmental sounds containing tonal components are more annoying than those without audible tonal components. This is considered in several standards addressing the assessment of noise immissions. These standards have in common that the strength of the tonal
component, referred to as tonality, tonalness, or magnitude of tonal content is taken as the level of the prominent audible tone relative to
the surrounding background noise. Based on this magnitude of tonal content, some standards add tone adjustments to the measured
sound levels to account for the reduced acceptance of a sound with audible tonal components. On the basis of experimental data and
model predictions, the present study shows that the magnitude of the tonal content is better characterized by its partial loudness than by
the signal-to-noise ratio of the prominent tonal component. Partial loudness of the tonal component may be considered in future standards as a basis for the assessment of the annoyance of the tonal content of a sound and thus the determination of a tone adjustment.
3:40
1pNSa7. Uncertainty in tone quantification methods of background noise for enclosed spaces. Joonhee Lee (Dept. of Bldg., Civil
and Environ. Eng., Concordia Univ., EV 6.231, 1515 Rue Sainte-Catherine, Montreal, QC H3H1Y3, Canada, joonhee.lee@concordia.
ca) and Lily M. Wang (Durham School of Architectural Eng., Univ. of Nebraska - Lincoln, Omaha, NE)
Noticeable tones in background noise can annoy and disturb human listeners. The noise community has been developing methods to quantify the perception of tones and has lately sought to propose new guidelines to regulate the maximum level of tones in noise. Prior to proposing guidelines, the reliability of existing tone quantification methods should be examined. Thus, this paper investigates the uncertainty of the tone quantification methods from ANSI and ISO standards, including Tonal Audibility, Prominence Ratio, and Tone-to-Noise Ratio. This study will discuss major causes of uncertainty in measuring tones with these methods. It will cover the definition of a tone in each method, how these metrics separate tones from broadband noise, and how they analyze signals in the time and frequency domains. Lastly, this paper will also investigate the effects of room modes on the measured tonality in indoor environments. The variances of the measured tonality across measurement positions for the assorted metrics will be presented, and the acceptability of those variances will be discussed.
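Of the metrics examined, the Prominence Ratio has the most compact definition: the power in the critical band centered on the tone relative to the mean power of the two flanking critical bands. A simplified sketch (an editor's illustration using Zwicker's critical-bandwidth approximation; the standardized ANSI/ECMA procedures handle band edges and multiple tones more carefully):

```python
import numpy as np

def critical_bandwidth(f_hz):
    """Zwicker's approximation of the auditory critical bandwidth in Hz."""
    return 25.0 + 75.0 * (1.0 + 1.4 * (f_hz / 1000.0) ** 2) ** 0.69

def prominence_ratio_db(freqs, psd, f_tone):
    """Simplified Prominence Ratio: 10*log10 of the power in the critical
    band centered on f_tone over the mean power of the two flanking
    critical bands (all three bands given the width at f_tone, a
    simplification relative to the standardized procedure)."""
    bw = critical_bandwidth(f_tone)
    def band_power(fc):
        sel = (freqs >= fc - bw / 2.0) & (freqs < fc + bw / 2.0)
        return psd[sel].sum()
    p_mid = band_power(f_tone)                                  # band with the tone
    p_adj = 0.5 * (band_power(f_tone - bw) + band_power(f_tone + bw))  # flanking bands
    return 10.0 * np.log10(p_mid / p_adj)
```

One source of the uncertainty discussed above is already visible here: because the result depends on summed band powers, a room mode that elevates the noise floor near the tone at one microphone position but not another shifts the measured ratio.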
SUNDAY AFTERNOON, 25 JUNE 2017
ROOM 202, 1:15 P.M. TO 5:40 P.M.
Session 1pNSb
Noise and Physical Acoustics: Session in Honor of Kenneth Plotkin
Ben H. Sharp, Cochair
Ben Sharp Acoustics, 7802 Trammell Rd., Annandale, VA 22003
Juliet Page, Cochair
Environmental Measurement and Modeling, Volpe National Transportation Systems Center, 55 Broadway, Cambridge,
MA 02142
Victor Sparrow, Cochair
Grad. Program in Acoustics, Penn State, 201 Applied Science Bldg., University Park, PA 16802
Philippe Blanc-Benon, Cochair
Centre acoustique, LMFA UMR CNRS 5509, Ecole Centrale de Lyon, 36 avenue Guy de Collongue, 69134 Ecully Cedex, France
Chair’s Introduction—1:15
Invited Paper
1:20
1pNSb1. Kenneth J. Plotkin—A most unforgettable character. Ben H. Sharp (Ben Sharp Acoust., LLC, 7802 Trammell Rd.,
Annandale, VA 22003, bhs940@yahoo.com)
In a career lasting over 45 years, Ken Plotkin established himself as a world leader in aeroacoustics, best known for his research studies of sonic boom. He developed the first practical method of predicting focused sonic booms and was a key participant in recent studies to mitigate sonic booms, work for which he received several NASA awards for excellence. But Ken’s contributions to the
field of acoustics have been much more diverse than many realize. They have included topics such as highway, vehicle and tire noise,
psychoacoustics, aircraft noise simulation modeling, soundscape analysis and monitoring, and community noise. Ken was one of the
most intelligent people the author has ever met, and one of the most inventive. He had the ability to break down complex problems into
simple components that could be understood and solved, often using limited available data and simple theoretical models. Refreshingly
modest with a self-deprecating and cynical sense of humor, Ken was one of those rare people who, once met, would never be forgotten.
As a long-term colleague and friend, the author will discuss some of his major contributions and share some of his experiences working
with him.
Contributed Papers
1:40
1pNSb2. Memorable interactions with Dr. Ken Plotkin during NASA sonic boom research. Peter Coen (NASA, NASA Langley Res. Ctr., MS 264, Hampton, VA 23681, peter.g.coen@nasa.gov)
Dr. Ken Plotkin was an active and vital contributor to numerous NASA research activities over the course of his career. His involvement clearly advanced NASA’s objectives and improved the state of knowledge of the prediction and reduction of sonic boom noise, but it also created many fond and amusing memories for those fortunate enough to work with him. This remembrance of Ken will recall some of his intellectual and comical contributions.
2:00
1pNSb3. Dr. Kenneth J. Plotkin: Friend and mentor. Joseph A. Salamone (P.O. Box 1372, Tybee Island, GA 31428, joesalamone3@gmail.com)
Dr. Kenneth J. Plotkin was the chief scientist at Wyle and was known in the acoustics community as an expert in many aspects of aircraft and transportation noise. He was also widely regarded as an authority on the subject of sonic booms. His enthusiasm for research and new discoveries was contagious—it was almost impossible not to be influenced by his example. He was very willing to share his expertise with those who also ventured into the realm of sonic boom propagation. It is an honor to be one of the people who benefited from his willingness to share his wisdom and to learn from his teaching. An initial discussion with Dr. Plotkin inquiring about his knowledge of the rise time of shocks in sonic booms developed into a friendship and mentorship that lasted almost 14 years. This presentation will highlight these early conversations and share how they evolved over time. Additionally, the impact he made on my engineering career and graduate school education will also be presented. [Acknowledgments to the National Aeronautics and Space Administration, Federal Aviation Administration, The Pennsylvania State University, Wyle, and Gulfstream Aerospace Corporation.]
2:20
1pNSb4. Dr. Kenneth Plotkin’s myriad contributions to the National Aeronautics and Space Administration’s supersonic mission. Edward A. Haering (Res. AeroDynam., NASA Armstrong, M.S. 2228, PO Box 273, Edwards, CA 93523, edward.a.haering@nasa.gov)
The world as a whole, and NASA in particular, owes a large debt of gratitude to Dr. Kenneth Plotkin for his decades of service in the field of sonic boom research and the advancement of quiet supersonic transportation. This presentation will highlight the contributions of Dr. Plotkin to a myriad of NASA projects. One of the largest efforts was the assembly and continual improvement of sonic boom propagation software tools, collectively called PCBoom, which allowed the analysis of real and imagined vehicles from Mach cutoff conditions to the hypersonic. He was a driving force behind reshaping aircraft to demonstrate quieter sonic booms, first with the plans for a modified Firebee drone and SR-71, and then with the highly successful Shaped Sonic Boom Demonstrator series of flights. Dr. Plotkin’s partnership with NASA Armstrong resulted in the development of the low boom dive maneuver to allow quiet sonic boom testing on structures and people using existing aircraft, as well as a sonic boom cockpit display that has recently been tested in flight. Dr. Plotkin was also instrumental in such research campaigns as SCAMP, WSPR, and FaINT. Throughout all, Dr. Plotkin’s phenomenal intellect, tireless dedication, and irreverent humor made working with him a joy.
2:40
1pNSb5. Recollections from four decades of sonic boom research with Dr. Kenneth J. Plotkin. Domenic J. Maglieri (Eagle Aeronautics, Inc., 732 Thimble Shoals Blvd., Bldg. C 204, Newport News, VA 23606, sonicboomexpert1@verizon.net)
Every so often the technical community is blessed to have a very special and talented individual join its fold. In the past half century, the issue of sonic boom needed such a talent, one who would provide leadership and pioneering contributions to the understanding of the sonic boom, its generation, propagation, prediction, and minimization, and its effect on people and structures. Ken was that person. This presentation is a personal note touching on a few recollections of working with Ken for over 45 years. It will begin with my introduction to Ken in 1970; highlight a key ingredient of his 1971 doctoral thesis and reflect on the significance of his findings; recall his initial reaction to the 1990 review of his AIAA Journal of Aircraft paper and the coauthoring of a chapter on sonic boom in 1991; present our views on the need to demonstrate the persistence of a shaped signature and on whether signature shaping will minimize the transition focus boom, and our discussions on whether booms are observed from subsonic flight; and list the boom efforts we enjoyed and the publications we coauthored. I miss Ken, his enthusiasm and entertaining ways, and his brilliance. I believe he deserves a place alongside G. B. Whitham, A. R. Seebass, and A. R. George.
3:00–3:20 Break
Invited Papers
3:20
1pNSb6. Kenneth Plotkin——Military noise and sonic boom. Micah Downing (Blue Ridge Res. and Consulting, 29 N. Market St.,
Ste. 700, Asheville, NC 28801, micah.downing@blueridgeresearch.com)
As a young researcher at the Air Force Research Laboratory, I was blessed with the good fortune of working with Ken on a variety of military sonic boom and aircraft noise projects. His mentorship on sonic boom theory required patience on his part but resulted in a successful collaboration, which included an improved PCBoom model and focused sonic boom measurements. From this work, we demonstrated how to make sonic booms louder, in contrast to his later efforts to help minimize the sonic booms of future aircraft. Ken also led major upgrades to the military’s aircraft noise model, NoiseMap. His work resulted in the development of the simulation noise model NMSim and its noise animations. Through Ken’s efforts, we now have tools to better explain sonic boom and aircraft noise. I hope to share some highlights of working with Ken, my mentor.
3:40
1pNSb7. Personal memories of Ken Plotkin. Nicholas P. Miller (Harris Miller Miller & Hanson Inc., 77 S. Bedford St., Burlington,
MA 01803, nmiller@hmmh.com)
Ken and I were competitors, co-participants in conferences and professional societies, and, occasionally, co-workers. Our times together at meetings or on projects covered about ten years starting in the late 1990s. We worked together on projects for the National Park Service and spent many meetings of SAE Committee A21 together. These times permitted us to become well acquainted with each other and to develop mutual respect for each other’s experience and knowledge. We were almost exact contemporaries, being the same age and starting our careers at almost the same time, Ken at Wyle and I at BBN. Our experiences were somewhat different, with Ken working mostly on military noise issues and I on general transportation noise. We each worked with expert mentors on our respective coasts: Ken with Lou Sutherland and others in California, I with Ted Schultz, Dick Bolt, and others in Massachusetts. From these times, we enjoyed each other’s company, learned a lot from each other, and made memories.
4:00
1pNSb8. Kenneth J. Plotkin’s contributions to acoustics and sonic boom research at NASA. Alexandra Loubeau (Structural
Acoust. Branch, NASA Langley Res. Ctr., MS 463, Hampton, VA 23681, a.loubeau@nasa.gov)
Dr. Kenneth J. Plotkin played a significant role in furthering knowledge related to outdoor sound propagation and, in particular, propagation of sonic booms from supersonic aircraft. This presentation focuses on Dr. Plotkin’s support of NASA’s research in these areas.
In recent years, he worked with NASA on expanding the capabilities of the modeling software PCBOOM for prediction of sonic boom
propagation for a variety of aircraft flight conditions in complex atmospheric environments. Dr. Plotkin was also instrumental in the
planning, execution, and analysis of several NASA supersonic flight test campaigns aimed at gathering data for development and validation of prediction models like PCBOOM. NASA’s sonic boom research has benefited greatly from these collaborations, and it is our
hope to honor his legacy by continuing work in this area.
4:20
1pNSb9. Overview of acoustic and sonic boom advancements during development of NASA launch vehicles. Janice Houston
(Marshall Space Flight Ctr., NASA Marshall Space Flight Ctr., Huntsville, AL 35812, janice.d.houston@nasa.gov), Jess Jones (AI
Signal Res. Inc., Huntsville, AL), R. Jeremy Kenny, Tomas Nesman, Darren Reed (Marshall Space Flight Ctr., Huntsville, AL), and
Bruce Vu (Kennedy Space Ctr., Melbourne, FL)
During the study and development of NASA space vehicles, acoustic environments have been a critical design input. This paper surveys some of the key challenges and focuses on the contributions and collaborations of Kenneth J. Plotkin/Wyle Laboratories with various NASA centers and personnel. In the mid-1960’s and early 1970’s, a method for predicting in-flight fluctuating environments for
vehicle systems was developed for the Saturn Development Programs at NASA Marshall Space Flight Center (MSFC). With the Space
Shuttle Vehicle development in the 1970’s, sonic boom became a concern and sonic boom focusing was studied. Attention was turned to
the Space Shuttle Orbiter entry maneuvers during the approach to the KSC landing site. In 1993, a PC version for sonic boom prediction
was developed for the National Launch System study. Near-field pressure data from computational fluid dynamics analyses were used to
develop the shape factors used in the X-33 sonic boom analyses. For the X-34 sonic boom analyses, the influence of plumes on the shape
factor was included. In the 2000s, rocket noise prediction software at the KSC launch platform was developed for the Constellation Program. All this acoustic work is being leveraged on NASA’s latest vehicle, the Space Launch System.
4:40
1pNSb10. Remembering Ken Plotkin: Colleague, mentor, and friend. Juliet Page (Environ. Measurement and Modeling, Volpe
National Transportation Systems Ctr., 55 Broadway, Cambridge, MA 02142, juliet.page@dot.gov)
Dr. Kenneth J. Plotkin’s contributions to the field of acoustics are numerous and cover a variety of areas such as sonic boom, jet
noise, community noise, and atmospheric propagation. I will review many of Ken’s contributions to the field of acoustics, including
shared experiences during unique and challenging projects and field measurement programs. I will also highlight Ken’s leading role in
the development of acoustic visualization techniques.
5:00–5:40 Panel Discussion
SUNDAY AFTERNOON, 25 JUNE 2017
ROOM 210, 1:20 P.M. TO 5:40 P.M.
Session 1pPA
Physical Acoustics and Biomedical Acoustics: Acoustofluidics II
Jürg Dual, Cochair
ETH Zurich, Tannenstr. 3, Zurich 8092, Switzerland
Charles Thompson, Cochair
ECE, UMASS, 1 Univ. Ave., Lowell, MA 01854
Max Denis, Cochair
U.S. Army Research Lab., 2800 Powder Mill Road, Adelphi, MD 20783-1197
Invited Papers
1:20
1pPA1. Macro-scale cell manipulation using bulk ultrasonic standing waves for biopharmacy and cellular therapy applications.
Bart Lipkens, Kedar C. Chitale, Benjamin P. Ross-Johnsrud, and Walter Presz (FloDesign Sonics, 1215 Wilbraham Rd., Box S-5024,
Springfield, MA 01119, blipkens@wne.edu)
Acoustic standing wave fields are widely used in MEMS applications to manipulate micron sized particles in fluids with typical fluid
channel dimensions of half a wavelength. This report presents three novel acoustofluidic platforms for particle separation and/or manipulation at macroscale, i.e., tens to hundreds of wavelengths. The first platform uses multidimensional standing waves which generate lateral radiation forces that trap and tightly cluster suspended fluid or particulate, enhancing the gravitational settling effect that results in
continuous, macroscale separation. The second platform employs acoustic radiation forces generated near the edge of an acoustic standing wave to hold back particles and generate a wall type separation effect. The third platform uses the acoustic radiation forces generated
by a macroscale, angled standing wave to deflect particles in a controlled fashion for particle manipulation and/or differentiation. Applications are focused in biopharmacy and cellular and gene therapy: mammalian cell clarification, continuous perfusion of bioreactors,
cell concentration and washing, cell sorting and differentiation, fractionation, microcarrier-cell separation, and affinity acoustic separation. A commercial cell clarification device has been introduced. The key physics principles related to acoustic radiation force and low
Reynolds number multi-phase flows are discussed. Experimental results of cell clarification, perfusion, and manipulation are shown.
1:40
1pPA2. Acoustofluidic manipulation of biological bodies: Generation, visualization, and stimulation of cellular constructs. Dario
Carugo, Björn Hammarström, Umesh Jonnalagadda, Junjun Lei, Filip Plazonic, Walid Messaoudi, Zaid Ibrahim Shaglwf, Peter Glynne-Jones, and Martyn Hill (Eng. Sci., Univ. of Southampton, University Rd., Southampton SO17 1BJ, United Kingdom, d.carugo@soton.ac.uk)
Ultrasound-based external manipulation of biological bodies in microfluidics has emerged as a contactless way of manipulating cells
and particles for a range of applications, including sample enrichment, filtration, and sorting. Furthermore, it has been recently utilized
to drive cells to form multi-cellular architectures, including clusters and planar sheets, by appropriately designing the resonant ultrasound field within the acoustofluidic device. In this presentation, we demonstrate the development of ultrasonic bioreactors for generating 3D, scaffold-free tissue constructs. We apply this technology to the generation of neocartilage grafts, examining their potential for repairing chondral defects, and to the generation of co-culture models of the mucosal airway. Furthermore, we illustrate how the ultrasonic
standing wave field can be designed to generate and modulate different stress regimes on suspended cells, for activating mechanotransductive pathways or for enhancing intracellular delivery of bioactive compounds. Integration of acoustofluidic systems with advanced
microscopy techniques for quantifying biophysical effects of ultrasound on single cells or cellular constructs is also discussed.
2:00
1pPA3. Acoustofluidic manipulation of biological bodies: Applications in medical and environmental diagnosis. Dario Carugo, Björn Hammarström, Umesh Jonnalagadda, Junjun Lei, Filip Plazonic, Walid Messaoudi, Zaid Ibrahim Shaglwf, Peter Glynne-Jones, and Martyn Hill (Eng. Sci., Univ. of Southampton, University Rd., Southampton SO17 1BJ, United Kingdom, J.Lei@soton.ac.uk)
Ultrasound-based external forcing of biological bodies in microfluidics has emerged as a contactless way of manipulating cells and particles for a range of applications, including sample enrichment, filtration, and sorting. Recently, acoustic radiation forces have shown potential for manipulating pathogenic organisms in biological assays. In this presentation, we demonstrate the development of acoustofluidic
systems designed for high-throughput manipulation and capturing of biological bodies in applications ranging from medical to environmental diagnosis. Specifically, we apply our acoustofluidic systems to the detection of (i) cancer and immune cells in the early-stage diagnosis
of blood malignancies and allergies, and (ii) bacterial microorganisms, spores, and planktonic cells for screening of environmental and
industrial samples.
2:20
1pPA4. Macroscale angled ultrasonic standing waves: A novel
approach for particle manipulation. Kedar C. Chitale, Walter Presz
(Flodesign Sonics, 380 Main St., Wilbraham, MA 01095, k.chitale@
fdsonics.com), Bart Lipkens (Flodesign Sonics, Springfield, MA), Benjamin
P. Ross-Johnsrud, Miles Hyman, and Marc Lamontagne (Flodesign Sonics,
Wilbraham, MA)
Macro scale acoustophoretic devices use radiation forces to trap particles inside a standing wave to separate them from a mixture in a continuous fashion. However, these devices are limited by factors such as flow
rates, residence times, and temperature rise which could be detrimental for
certain applications. A novel method of separating, sorting and differentiating various particles using bulk angled ultrasonic standing waves is presented. This technique offers very sensitive separation capability with
respect to size and acoustic contrast of particles. Universal curves are developed for particle deflection from the bulk flow direction at all wave angles
as a function of a non-dimensional parameter defined by the ratio of acoustic
radiation force to viscous drag force. Both CFD (Computational Fluid Dynamics) and model test data verify the analytical predictions. New macroscale, ultrasonic separator concepts are presented that use the angle wave
technology to effectively deflect and/or separate microcarrier beads from a
flowing mixture at high speeds when compared to conventional ultrasonic
separators. Model test data verify the ability to move, differentiate, separate,
or fractionate particles in suspension by size and acoustic contrast.
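The non-dimensional parameter described above can be made concrete with textbook small-particle expressions: the peak radiation force on a small sphere in a one-dimensional standing wave, F_rad = 4πΦka³E_ac in the Gor'kov/Bruus formulation, divided by the Stokes drag 6πμav. The sketch below is an editor's illustration with generic polystyrene-in-water values, not the authors' model or data:

```python
import math

def contrast_factor(kappa_p, kappa_f, rho_p, rho_f):
    """Acoustic contrast factor Phi = f1/3 + f2/2 (Bruus formulation)."""
    f1 = 1.0 - kappa_p / kappa_f                                    # compressibility term
    f2 = 2.0 * (rho_p / rho_f - 1.0) / (2.0 * rho_p / rho_f + 1.0)  # density term
    return f1 / 3.0 + f2 / 2.0

def force_ratio(a, f, E_ac, v, mu=1.0e-3, c=1480.0,
                kappa_p=1.7e-10, kappa_f=4.6e-10, rho_p=1050.0, rho_f=1000.0):
    """Peak standing-wave radiation force over Stokes drag, M = F_rad / F_drag,
    for a sphere of radius a (m) at frequency f (Hz), acoustic energy
    density E_ac (J/m^3), and relative flow speed v (m/s). Defaults are
    generic polystyrene-in-water values."""
    k = 2.0 * math.pi * f / c                        # wavenumber in the fluid
    phi = contrast_factor(kappa_p, kappa_f, rho_p, rho_f)
    F_rad = 4.0 * math.pi * phi * k * a**3 * E_ac    # peak radiation force
    F_drag = 6.0 * math.pi * mu * a * v              # Stokes drag at speed v
    return F_rad / F_drag
```

For a 150-µm-diameter bead at 1 MHz with a modest energy density and centimeter-per-second flow, the ratio comes out well above unity, consistent with radiation-force-dominated deflection of microcarriers; since F_rad scales as a³ while drag scales as a, the parameter is strongly size-selective, which is the basis of the sensitivity claimed above.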
2:40
1pPA5. Acoustic edge effect: Novel acoustophoretic cell retention to
enable continuous bioprocessing. Benjamin P. Ross-Johnsrud, Erik Miller,
Hayley Hicks, Kedar C. Chitale, Walter Presz (FloDesign Sonics, 380 Main
St., Wilbraham, MA 01095, b.johnsrud@fdsonics.com), and Bart Lipkens
(Mech. Eng., Western New England Univ., Springfield, MA)
There is currently a shift in Bioprocessing towards continuous manufacturing of monoclonal antibodies or recombinant proteins in perfusion mammalian cell cultures (Konstantinov & Cooney, Journal of Pharmaceutical
Sciences, 2015). A cell retention device is the key technology component
that enables the shift to continuous production. A novel acoustic cell retention device operates by continuously drawing off a harvest flow, equal to the
perfusion rate of the bioreactor, while recirculating the retained cells back
to the bioreactor. The harvest flow path is tangent and significantly smaller
than the recirculation rate. The device utilizes a novel acoustophoretic effect
known as an “acoustic edge/interface” effect in conjunction with a recirculating flow beneath the acoustic harvest chamber which collects and returns
cells to the bioreactor. This interface effect operates by creating a radiation
pressure/force field at the interface between cell-free harvest and cell-laden
circulating fluids. Numerical results show an insight into the mechanism of
the acoustic edge effect. Experimental results confirm the existence of this
novel acoustic edge effect. CHO cell perfusion cultures were operated continuously for >15 days. Unlike traditional hollow-fiber tangential-flow filtration, this technology delivers continuous cell retention and steady, unhindered product transmission, enabling continuous production of biopharmaceuticals.
3:00
1pPA6. Acoustophoresis mediated chromatography processing:
Capture of proteins from cell cultures. Thomas Kennedy, Malcolm
Pluskal, Rudolf Gilmanshin (FloDesign Sonics, 380 Main St., Wilbraham,
MA 01095, t.kennedy@fdsonics.com), and Bart Lipkens (Mech. Eng.,
Western New England Univ., Springfield, MA)
Chromatographic purification of target biomolecules is an important
downstream process step in the development of new therapeutic agents,
3505
J. Acoust. Soc. Am., Vol. 141, No. 5, Pt. 2, May 2017
such as antibodies. This step employs chromatographic processes, such as affinity separation utilizing Protein A, ion exchange, or mixed-mode chemistries. The workflow typically involves one or more packed columns and several chromatographic steps to achieve the desired level of purity. The steps can be
time-consuming and expensive. A new process will be described, employing
an acoustic standing wave in a fluid chamber to partition and maintain solid
phase beads in an acoustically fluidized bed format to capture, wash, and
elute the target biomolecule. Purification workflow(s) will be described for
the following applications: (1) capture of a monoclonal antibody by Protein
A beads from a crude cell culture system, (2) capture of recombinant Green
Fluorescent protein (GFP) by anion exchange from a crude cell lysate. The
workflow will include wash step(s) and recovery under specific elution conditions. This communication will clearly demonstrate that an acoustophoresis process can purify proteins without a packed chromatography column.
This new approach will minimize the time and cost involved in current purification workflows.
3:20–3:40 Break
3:40
1pPA7. Boundary interactions with vortical disturbance in an
acoustofluidic channel. Kavitha Chandra, Charles Thompson, and Vineet
Mehta (Univ. of Massachusetts Lowell, 1 University Ave., Lowell, MA
01854, kavitha_chandra@uml.edu)
In this work, mechanisms governing the generation of unstable vortical disturbances and their spatiotemporal characteristics are examined. Time-harmonic boundary- and pressure-driven flows are of particular interest. In the inner region near the solid-fluid interface, the vortical components of the particle velocity are taken to behave incompressibly. To accommodate time-dependent channel wall geometries, a curvilinear-coordinate-based pseudo-spectral method is developed. The method allows for the direct numerical solution of the three-dimensional, time-dependent Navier-Stokes equations at high streaming Reynolds numbers. The conditions for centrifugal destabilization and transition are examined.
4:00
1pPA8. Ultrasonic robotics in microfluidic cavities. Jürg Dual, Michael Gerlt, Philipp Hahn, Stefan Lakaemper, Ivo Leibacher (ETH Zurich, Tannenstr. 3, Zurich 8092, Switzerland, dual@imes.mavt.ethz.ch), Andreas Lamprecht, Peter Reichert, Nadia S. Vertti Quintero, Xavier Casadevall i Solvas, Rudiyanto Gunawan, and Andrew deMello (ETH Zurich, Zurich, Switzerland)
Ultrasonic standing waves are often used in biomedical applications. It
has become quite common to move beads, cells, droplets, and other particles
for sorting or biomedical analysis in microfluidic cavities by bulk acoustic
waves or by vibrations excited by piezoelectric transducers. The motion of
particles is determined by streaming and radiation forces. For the calculation
of the radiation forces acting on single particles, Gor’kov’s potential is considered to be the modeling tool of choice, once the acoustic field and the
properties of constituents (fluid and particle density and compressibility,
respectively) are known. For the acoustic streaming, predictions can be
made numerically. For both aspects large uncertainties exist, due to the
complexity of the system and fluid structure interaction at multiple levels.
In this paper, first, various characterization tools for the acoustic field in the cavity are described. They consist of the interplay between numerical modeling of the device, impedance analysis of the piezoelectric transducer used, interferometric analysis of surface displacements, and an optical trap to measure the forces on the particles directly. Second, a number of recent applications are shown, including the sorting and immobilization of C. elegans.
Furthermore, fascinating behavior of multiple interacting particles is
reported.
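For a particle much smaller than the wavelength in a one-dimensional standing wave, Gor’kov’s potential reduces to a closed-form axial force governed by the acoustophoretic contrast factor. A minimal sketch of that textbook result, using assumed water and polystyrene properties (none of the values below come from the abstracts on this page):

```python
import numpy as np

def gorkov_standing_wave_force(a, rho_p, kappa_p, rho_f, kappa_f,
                               p_amp, freq, x):
    """Axial radiation force F(x) on a small sphere (radius a << wavelength)
    in a 1-D standing wave p(x, t) = p_amp*cos(k x)*cos(w t), from Gor'kov's
    potential: F = 4*pi*Phi*k*a^3*E_ac*sin(2 k x)."""
    c_f = 1.0 / np.sqrt(rho_f * kappa_f)          # fluid sound speed
    k = 2.0 * np.pi * freq / c_f                  # wavenumber
    kappa_t = kappa_p / kappa_f                   # compressibility ratio
    rho_t = rho_p / rho_f                         # density ratio
    # acoustophoretic contrast factor (monopole + dipole contributions)
    phi = (5.0 * rho_t - 2.0) / (2.0 * rho_t + 1.0) / 3.0 - kappa_t / 3.0
    e_ac = p_amp**2 / (4.0 * rho_f * c_f**2)      # acoustic energy density
    return 4.0 * np.pi * phi * k * a**3 * e_ac * np.sin(2.0 * k * x)

# assumed example: 10-um polystyrene bead in water, 2 MHz, 0.5 MPa amplitude
rho_f, c_f = 998.0, 1483.0
kappa_f = 1.0 / (rho_f * c_f**2)
rho_p, c_p = 1050.0, 2350.0
kappa_p = 1.0 / (rho_p * c_p**2)
wavelength = c_f / 2e6
x = np.linspace(0.0, wavelength / 2.0, 200)
F = gorkov_standing_wave_force(5e-6, rho_p, kappa_p, rho_f, kappa_f,
                               5e5, 2e6, x)
# Phi > 0 for polystyrene in water, so the peak force (~1e-10 N here)
# drives the bead toward the pressure node at x = lambda/4
```

Particles with negative contrast (e.g., air bubbles or lipids) collect at the pressure antinode instead, which is the basis of the size/contrast fractionation mentioned in 1pPA4.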
Contributed Papers
4:20
1pPA9. Acoustic nonlinearity and the generation of large tensile
pressures to explain atomization in drop-chain acoustic fountains. Oleg
Sapozhnikov (Phys. Faculty, Moscow State Univ., and CIMU, Appl. Phys.
Lab., Univ. of Washington, Leninskie Gory, Moscow 119991, Russian
Federation, oleg@acs366.phys.msu.ru), Elena Annenkova (Phys. Faculty,
Moscow State Univ., Moscow, Russian Federation), Wayne Kreider (CIMU,
Appl. Phys. Lab., Univ. of Washington, Seattle, WA), and Julianna C. Simon
(Graduate Program in Acoust., Penn State Univ., University Park, PA)
An ultrasound beam propagating upward in a liquid creates an acoustic
fountain at a gas interface in the form of a drop chain. High-speed photography shows that one or several drops in such a fountain explode in less than a
millisecond, resulting in liquid atomization [Simon et al. J. Fluid Mech.,
2015, 766, pp. 129-146]. To explain this phenomenon, a nonlinear theory
involving an isolated spherical drop is developed. The model considers an
initial excitation in the form of a spherical standing acoustic wave at the
lowest resonance frequency, i.e., when the drop diameter coincides with a
wavelength. If higher harmonics are generated inside the drop due to acoustic nonlinearity, these harmonics will also have the form of standing spherical waves. At higher frequencies, more of the energy of each harmonic is
localized near the drop center. Calculations demonstrate that harmonic generation can lead to large increases in both peak positive and peak negative
acoustic pressure at the drop center. Such large tensile pressures may exceed
the intrinsic cavitation threshold, leading to the nucleation of a bubble at the
center and explosion of the drop as the bubble grows rapidly. [Work supported by RFBR 17-02-00261 and NIH R01EB007643.]
4:40
1pPA10. Acoustic characterization of microbubble clouds by
attenuation and celerity spectroscopy. Lilian D’Hondt (Nuclear Technol. Dept., French Atomic Energy Commission, CEA Cadarache, DEN/CAD/DTN/STCP/LIET - Bât. 202, Saint Paul lez Durance 13108, France, lilian.d’hondt@cea.fr), Cedric Payan, Serge Mensah (Aix Marseille Univ, CNRS, Centrale Marseille, LMA, Marseille, France), and Matthieu Cavaro (Nuclear Technol. Dept., French Atomic Energy Commission, Saint Paul lez Durance, France)
In 4th generation nuclear reactors cooled with liquid sodium, argon
microbubbles are present in the primary sodium. Due to the opacity of liquid
sodium, acoustic control methods are chosen for operational inspections, but the presence of these bubbles greatly affects the acoustical properties of the medium. It is therefore required to characterize the microbubble cloud, i.e., to provide the bubbles’ volume fraction and size distribution. Safety demands that the proposed method be robust and applicable with as few assumptions (about the bubble populations) as possible. The objective of this study is to evaluate the performance of spectroscopic methods (based on celerity and attenuation) in the presence of bubbles whose sizes and surface (or volume)
contributions are very different. Two methods of evaluating the histogram
and the void fraction are compared. The first is based on the inversion of the
integral equation of the complex wave number derived by Commander and
3506
J. Acoust. Soc. Am., Vol. 141, No. 5, Pt. 2, May 2017
Prosperetti. The second, which assumes the populations to follow log-normal distributions or sums of Gaussians, adjusts the distribution’s parameters to fit measured attenuation and celerity curves. These
methods are compared with experimental data obtained using ACWABUL
facilities at CEA Cadarache.
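The first inversion method rests on Commander and Prosperetti’s effective wavenumber for bubbly liquids. A monodisperse sketch (the integral over the size distribution reduces to a sum of such terms per size class), with simplified damping; the liquid properties, bubble radius, and void fraction below are illustrative assumptions, not values from this study:

```python
import numpy as np

def effective_wavenumber(freq, a, void_frac, rho=998.0, c=1483.0,
                         p0=101325.0, gamma=1.4, mu=1e-3):
    """Complex effective wavenumber k_m of a liquid containing bubbles of a
    single radius a (after Commander & Prosperetti, 1989):
        k_m^2 = w^2/c^2 + 4*pi*w^2*a*N / (w0^2 - w^2 + 2j*b*w),
    with Minnaert resonance w0 and a simplified damping b (viscous plus
    radiation terms only; the thermal contribution is omitted here)."""
    w = 2.0 * np.pi * np.asarray(freq, dtype=float)
    N = 3.0 * void_frac / (4.0 * np.pi * a**3)      # bubbles per m^3
    w0 = np.sqrt(3.0 * gamma * p0 / (rho * a**2))   # Minnaert resonance (rad/s)
    b = 2.0 * mu / (rho * a**2) + w**2 * a / (2.0 * c)
    return np.sqrt((w / c)**2
                   + 4.0 * np.pi * w**2 * a * N / (w0**2 - w**2 + 2j * b * w))

freq = np.array([1e4, 3.26e4])    # well below, and near, the ~33 kHz resonance
km = effective_wavenumber(freq, a=100e-6, void_frac=1e-4)
speed = 2.0 * np.pi * freq / km.real          # phase speed, i.e. celerity (m/s)
atten = 8.686 * np.abs(km.imag)               # attenuation (dB/m)
# even a 1e-4 void fraction pulls the low-frequency celerity far below the
# bubble-free 1483 m/s, while attenuation peaks sharply near resonance
```

Fitting such celerity and attenuation curves, summed over several size classes, to measured spectra is the essence of the histogram inversion the abstract describes.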
5:00
1pPA11. Experimental analysis of backscattering by cylindrical shell
with internal plate at oblique incidence. Yunzhe Tong, Bin Wang, and
Jun Fan (Shanghai Jiao Tong Univ., 800 Dongchuan Rd., Minhang District,
Shanghai 200240, China, tongyunzhe@sjtu.edu.cn)
Through an experimental approach, this paper studies flexural wave coupling on a cylindrical shell with an internal structure. Impulse-response
backscattering measurements are presented and interpreted for the scattering
of obliquely incident plane waves by a fluid-loaded stiffened cylindrical
shell and a corresponding empty cylindrical shell, respectively. The stiffened
cylindrical shell is reinforced by a thin internal plate which is diametrically
attached to the shell along its axial direction. The time-series data are fast Fourier transformed, and the modulus is normalized to the direct-wave spectrum. Results are plotted as frequency-angle spectra. Compared with the corresponding empty cylindrical shell, the subsonic flexural waves on the stiffened shell interact with the attachments, and some of their energy is converted into radiating waves.
5:20
1pPA12. T-matrix method implementation for acoustic Bessel beam
scattering from elastic solids and shells. Zhixiong Gong (School of Naval
Architecture and Ocean Eng., Huazhong Univ. of Sci. and Technol.,
Webster Physical Sci. 754, Pullman, WA 99164-2814, zhixiong.gong@wsu.edu), Philip L. Marston (Dept. of Phys. and Astronomy, Washington State
Univ., Pullman, WA), Yingbin Chai, and Wei Li (School of Naval
Architecture and Ocean Eng., Huazhong Univ. of Sci. and Technol.,
Wuhan, Hubei, China)
The T-matrix method (TMM) has been demonstrated to be an effective tool for computing acoustic Bessel beam (ABB) scattering from rigid shapes, owing to the fact that incident ABBs can be conveniently expanded in a basis of spherical harmonics [Gong et al., J. Sound Vibr. 383, 233-247 (2016)]. In this work, we extend the TMM to ABB scattering from complicated elastic shapes, for instance, spheroids and spheroidal shells. Several numerical techniques are implemented to overcome the instability of the matrix inversion procedure for nonspherical shapes. Resonance scattering theory and ray theory [Kargl and Marston, J. Acoust. Soc. Am. 88, 1103-1113 (1990)] are employed to explore and interpret several novel properties of scattering from elastic shapes illuminated by ABBs, thus revealing the corresponding scattering mechanisms. Furthermore, the present work serves as a foundation for extending the applicability of the TMM to acoustic radiation force and torque in ABBs in the future. [Work supported by NSFC.]
SUNDAY AFTERNOON, 25 JUNE 2017
ROOM 311, 1:35 P.M. TO 5:20 P.M.
Session 1pPPa
Charlotte M. Reed, Cochair
Research Laboratory of Electronics, Massachusetts Institute of Technology, Room 36-751, MIT, 77 Massachusetts Ave.,
Cambridge, MA 02139
William Rabinowitz, Cochair
Bose Corporation, The Mountain, Framingham, MA 01701
Chair’s Introduction—1:35
Invited Papers
1:40
1pPPa1. In “Lou” of the Lou you know. Constantine Trahiotis (Neurosci., UConn Health Ctr., 6 Lyme Pl., Avon, CT 06001, tino@uchc.edu)
Lou Braida has distinguished himself as a scientist, educator, and, to the dismay of many, critic-par excellence. Others will speak to
his wide variety of fundamental contributions to knowledge concerning several areas of auditory perception. I will confine my remarks
to Lou the person and friend. In my interactions with Lou, I discovered and have cherished an individual who for many of you has
remained “e-Lou-sive.” I will introduce you to Lou the cheapskate, Lou the inventor, Lou the scammer, Lou the smuggler, Lou the travel
agent, Lou the traveling chef, and Lou the practical engineer-observer. These other incarnations of Lou, taken together with the more
widely known Lou, reveal the truly multidimensional ways in which he has led a very special life, one in which all of us have been fortunate to share.
2:00
1pPPa2. Cochlear mechanisms underlying the sharp frequency selectivity of hearing. Dennis M. Freeman, Roozbeh Ghaffari,
Shirin Farrahi, and Jonathan B. Sellon (Res. Lab. of Electronics, Massachusetts Inst. of Technol., 77 Massachusetts Ave., MIT Rm.
7-133, Cambridge, MA 02139, freeman@mit.edu)
Sharp frequency selectivity, which is a hallmark of mammalian hearing, originates in the cochlea. However, the underlying mechanisms remain unclear. The pioneering work of von Békésy showed that sounds launch waves of motion along the spiraling basilar membrane, and subsequent hydrodynamic analysis has shown how mechanical properties of the cochlear partition can interact with fluid
forces to support sharp frequency tuning. These analyses have generally presumed (or even purported to prove) that longitudinal mechanical coupling through cochlear structures is negligible. Here, we demonstrate that the visco-elastic structure of the tectorial membrane (TM), a gelatinous structure that overlies the sensory receptor cells and plays a key role in stimulating them, also supports
traveling waves. The distance over which TM waves propagate provides a measure of mechanical coupling and, through the cochlear
map, determines a range of frequencies that correlates strikingly well with direct measurements of cochlear tuning in normal hearing
mice, in mice with genetic disorders of hearing, and in humans. These results demonstrate significant longitudinal coupling through the
TM and suggest that TM coupling plays an important role in determining the sharpness of cochlear frequency tuning.
2:20
1pPPa3. All I really need to know I learned from Lou and Nat: Lou Braida. Michael Picheny (Watson Multimodal, IBM TJ Watson
Res. Ctr., POB 218, Yorktown Heights, NY 10598, picheny@us.ibm.com)
Lou Braida was one of my two primary mentors in graduate school at MIT. Lou taught me innumerable things. In this talk, I will
only have time to focus on a few items. I will describe how I applied to Speech Recognition what I learned from him about the power of
a psychophysical approach to research problems, the importance of good data collection, and the value of long-term spectral characteristics in perception. Speech recognition by now is a relatively mature field, but at the time the field was relatively unexplored territory.
Lou’s training allowed me to see ways to make advances in speech recognition experimental design, speaker-independent speech recognition, and noise-immune features and processing. While many of these early ideas have been subsumed over the years by more sophisticated processing, many of them have their roots in techniques inspired by a combination of perceptual knowledge with principled
engineering design. Lou was, and is a master of both and I am forever grateful for his inspiration, mentoring, and friendship in shaping
my career.
Psychological and Physiological Acoustics and Speech Communication: Honoring the Contributions of Louis Braida to the Study of Auditory and Speech Perception
2:40
1pPPa4. Computational models of speech perception by cochlear implant users. Mario Svirsky and Elad Sagi (Otolaryngology-HNS, New York Univ., 550 First Ave., NBV-5E5, New York, NY 10010, mario.svirsky@nyumc.org)
Cochlear implant (CI) users have access to fewer acoustic cues than normal hearing listeners, resulting in less than perfect identification of phonemes (vowels and consonants), even in quiet. This makes it possible to develop models of phoneme identification based on
CI users’ ability to discriminate along a small set of linguistically-relevant continua. Vowel and consonant confusions made by CI users
provide a very rich platform to test such models. The preliminary implementation of these models used a single perceptual dimension
and was closely related to the model of intensity resolution developed jointly by Nat Durlach and Lou Braida. Extensions of this model
to multiple dimensions, incorporating aspects of Lou’s novel work on “crossmodal integration,” have successfully explained patterns of
vowel and consonant confusions; perception of “conflicting-cue” vowels; changes in vowel identification as a function of different intensity mapping curves and frequency-to-electrode maps; adaptation (or lack thereof) to changes in frequency-place functions; and some
aspects of speech perception in noise. Our latest studies predict that enhanced phoneme identification by cochlear implant users may
result from deactivation of a subset of electrodes in a patient’s map. All these results build upon, and were made possible by concepts
from Lou’s work.
3:00–3:20 Break
3:20
1pPPa5. Early acoustic hearing and spoken language skills of children with cochlear implants. Rosalie M. Uchanski and Lisa S.
Davidson (Otolaryngol., Washington Univ. in St Louis School of Medicine, 4523 Clayton Ave., Campus Box 8115, St. Louis, MO
63110, r.uchanski@wustl.edu)
Development of spoken language is difficult for children born with hearing loss. While most clinicians agree on the goal of improving the audibility of spoken language as early as possible, there is less agreement on the types of devices (bilateral cochlear implants vs.
bilateral hearing aids vs. one hearing aid with one cochlear implant) they would recommend to achieve improved audibility. Additionally, acoustic properties of speech are conveyed differently by hearing aids (HAs) and cochlear implants (CIs); voice-pitch and prosodic
properties, assumed critical for learning words from continuous speech, are conveyed better with HAs than CIs while broad spectral
properties of individual speech segments are conveyed better with CIs than HAs. The relation between a simple model of a child’s early
(birth to ~3 years old) acoustic hearing experience (includes HA use, CI surgery dates, severity of hearing loss, etc.) and eventual spoken
language skills (tested at later ages of 8-10 years old) will be examined, especially in the context of which devices might be best for spoken language development. This examination reflects Dr. Louis Braida’s long-standing interest in understanding the acoustic properties
of speech and its perception, especially for the benefit of those with hearing loss.
3:40
1pPPa6. Factors affecting accuracy and intelligibility of transliterators who use cued speech. Jean C. Krause (Commun. Sci. and
Disord., Univ. of South Florida, 4202 E Fowler Ave., PCD 1017, Tampa, FL 33620, jeankrause@usf.edu)
Some deaf individuals access spoken information via transliterators who use Cued Speech, a system of hand gestures that supplement information available through speechreading alone. In this presentation, the accuracy and intelligibility of 12 transliterators with
varying degrees of experience are examined. Accuracy, or the percentage of cues correctly produced, was evaluated at three different
speaking rates (slow, normal, and fast), and intelligibility, or the percentage of words correctly received, was measured by presenting
the materials that the transliterators produced to nine expert receivers of Cued Speech. Results show that speaking rate had a large negative effect on accuracy, caused primarily by an increase in omitted cues, while increased experience level was generally associated with
increased accuracy. Intelligibility was generally higher than accuracy, with accuracy accounting for roughly 25% of the variance in intelligibility scores. We conclude by discussing factors such as speechreadability that could explain additional portions of the variance.
4:00
1pPPa7. Automated extraction of information from Language ENvironment Analysis (LENA) home recordings of older children
with autism. Mark A. Clements, Rahul Pawar, Desmond Caulley (ECE, Georgia Inst. of Technol., School of ECE, Georgia Inst. of
Technol., Atlanta, GA 30332-0250, clements@gatech.edu), Rebecca Jones, and Catherine Lord (Psychiatry, Weill Cornell Medicine,
White Plains, NY)
It has been established that children and adolescents with Autism Spectrum Disorder show a wide range of abilities to use spoken
words and establish interactive conversations. Automatic measurement of such abilities in naturalistic environments would greatly facilitate assessment and monitoring of such individuals. If sufficient accuracy could be achieved, studies could be performed whose sample
sizes are large enough to draw meaningful conclusions. In the current study, 16-hour home audio recordings using the LENA (Language
ENvironment Analysis) device are examined for older children and adolescents. Specific enhancements to the existing LENA analysis
platform include the ability to diarize recordings for subjects aged 5 through 13, to detect non-verbal vocalizations such as laughter and
whining, to identify child-directed speech, and to determine when questions are posed. Other higher-level descriptors involve extraction
of affect, computation of conversational interaction measures, detection of cross-talk and interruption events, and identification of emotional
outbursts. Subject-specific diarization based on a small amount of hand-labeled data yields acceptable accuracy. However, a newly
developed system based on i-vectors, specifically designed for the environment at hand, requires no such labeling at the onset.
4:20
1pPPa8. Louis Braida’s influence on speech intelligibility research. Karen Payton (Elec. & Comput. Eng., Univ. of Massachusetts
Dartmouth, 285 Old Westport Rd., North Dartmouth, MA 02747-2300, kpayton@umassd.edu)
In the Sensory Communications Laboratory at MIT, several researchers have grappled with the question of how to predict listeners’
perceptual performance using quantitative metrics. Most of the work has been motivated by the goal of understanding the effect of
acoustic degradations on speech intelligibility for hearing impaired listeners and determining the best way to mitigate or counter those
degradations. Louis Braida has been interested in this topic for many years. He worked with Ken Grant to investigate augmentation of
the Articulation Index (AI) with visual information to obtain an audio-visual AI. He continued this work with other students modeling
perceptual integration across modalities. Lou and I worked together to investigate the ability of the Speech Transmission Index (STI) to
predict intelligibility for impaired listeners. Initially, we tried to use it to capture acoustic differences between conversational and clearly articulated speech. That evolved into the development of a speech-based STI. Ray Goldsworthy extended our work,
comparing several speech-based STI techniques and developing a new one to predict speech intelligibility for cochlear implant users.
This talk will review the work done in this area under Lou’s mentorship and recent advances.
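The band-importance logic shared by the AI, SII, and STI family of metrics can be illustrated in a few lines. This is a generic AI/SII-style simplification; the octave-band weights and the linear SNR mapping below are hypothetical textbook-style values, not the speech-based STI procedures developed in the work described here:

```python
import numpy as np

def articulation_index(band_snr_db, weights):
    """Simplified AI/SII-style metric: each band's SNR is clipped to
    [-15, +15] dB, mapped linearly to an audibility factor in [0, 1],
    and combined with band-importance weights (which must sum to 1)."""
    snr = np.clip(np.asarray(band_snr_db, dtype=float), -15.0, 15.0)
    audibility = (snr + 15.0) / 30.0
    return float(np.dot(weights, audibility))

# hypothetical octave-band SNRs (250 Hz ... 8 kHz) and importance weights
weights = np.array([0.07, 0.13, 0.21, 0.29, 0.20, 0.10])   # sums to 1
quiet = articulation_index([20, 18, 15, 12, 10, 8], weights)
noisy = articulation_index([0, -3, -6, -9, -12, -15], weights)
# the quieter condition yields the higher predicted intelligibility index
```

The speech-based STI variants discussed in the talk replace the static band SNR with measures derived from the modulation transfer of running speech, but the weighted-band aggregation is the same in spirit.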
4:40
1pPPa9. The auditory-visual Articulation Index. Ken W. Grant (National Military Audiol. and Speech-Lang. Pathol. Ctr., Walter
Reed National Military Medical Ctr., 301 Hamilton Ave., Silver Spring, MD 20901, ken.w.grant@gmail.com) and Joshua G. Bernstein
(National Military Audiol. and Speech-Lang. Pathol. Ctr., Walter Reed National Military Medical Ctr., Bethesda, MD)
Hearing aids (HAs) are the primary method for treating hearing impairment. However, in complex environments with competing
sound sources, HAs provide marginal benefits at best. Under these conditions, clinicians recommend facing the speaker to extract visual
speech information. Combined auditory-visual (AV) speech generally provides a signal that is much more resistant to noise and reverberation than an auditory-only signal. The Articulation Index (AI) established that different frequency regions of speech vary in their
degree of importance for intelligibility. However, frequencies most important for auditory-only speech intelligibility differ from those
that are most important for AV speech intelligibility. Thus, the optimal signal-processing solution may differ between AV and auditory-only conditions. Braida and colleagues sought to develop an AV version of the AI to enable HA signal-processing strategies to be compared without the time and expense required for behavioral testing. This presentation describes this and other work inspired by Braida’s
efforts to predict AV speech intelligibility using only auditory-only and visual-only information. [Work supported by a grant from
CDMRP, #DM130027. The views expressed in this abstract are those of the authors and do not reflect the official policy of the Department of Army/Navy/Air Force, Department of Defense, or U.S. Government.]
5:00
1pPPa10. Contributions of Louis Braida to improved signal processing for hearing aids: Addressing the problem of reduced
dynamic range in listeners with sensorineural hearing loss. Charlotte M. Reed, Joseph G. Desloge, and Laura A. D’Aquila (Res. Lab.
of Electronics, Massachusetts Inst. of Technol., Rm. 36-751, MIT, 77 Massachusetts Ave., Cambridge, MA 02139, cmreed@mit.edu)
Lou’s early work in the area of improved signal processing for hearing aids included his research on compression amplification to
combat the effects of loudness recruitment in listeners with sensorineural hearing loss. Working with his doctoral students (including
Rich Lippmann, Steve De Gennaro, and Diane Bustamante), Lou made major contributions towards an analytical understanding of the
benefits and limitations of compression amplification as a component of hearing aids. Recently, Lou has been involved in work on a new
signal-processing scheme which operates to equalize the energy in a speech signal over time. This energy-equalization (EEQ) scheme
shares a similar goal with compression amplification in that both attempt to map the range of speech levels into the reduced
dynamic range of a listener with sensorineural loss. Their operation, however, is different: while compression amplification is based on
the actual sound-pressure level of the signal, the EEQ scheme operates on relative energy calculations designed to reduce the variations
in overall signal level. In this talk, we will describe the EEQ processing system together with results obtained on its evaluation with
hearing-impaired listeners for speech reception in backgrounds of continuous and fluctuating noise.
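The abstract characterizes EEQ only as “relative energy calculations designed to reduce the variations in overall signal level.” The following is a loose, hypothetical sketch of that idea alone, not the published EEQ algorithm; the frame length, the exponent `alpha`, and the frame-wise gain rule are all assumptions:

```python
import numpy as np

def energy_equalize(signal, frame_len=512, alpha=1.0, eps=1e-12):
    """Loose sketch of energy equalization: scale each frame by the ratio
    of the signal's global RMS to the frame's RMS (alpha = 1 equalizes
    fully, alpha = 0 leaves the signal unchanged). Illustrative only."""
    x = np.asarray(signal, dtype=float)
    global_rms = np.sqrt(np.mean(x**2) + eps)
    out = np.copy(x)
    for start in range(0, len(x) - frame_len + 1, frame_len):
        frame = x[start:start + frame_len]
        frame_rms = np.sqrt(np.mean(frame**2) + eps)
        gain = (global_rms / frame_rms) ** alpha
        out[start:start + frame_len] = frame * gain
    return out

# a toy 'speech-like' signal: a loud noise burst followed by a soft one
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0, 1.0, 2048), rng.normal(0, 0.05, 2048)])
y = energy_equalize(x, frame_len=512)
# after equalization both halves sit near the same RMS level, compressing
# the level variation into a listener's reduced dynamic range
```

Unlike sound-pressure-level-triggered compression, the gain here depends only on energy relative to the signal’s own average, mirroring the contrast the abstract draws.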
SUNDAY AFTERNOON, 25 JUNE 2017
ROOM 304, 1:40 P.M. TO 5:20 P.M.
Session 1pPPb
Psychological and Physiological Acoustics: Perception of Synthetic Sound Fields II
Sascha Spors, Cochair
Institute of Communications Engineering, University of Rostock, Richard-Wagner-Strasse 31, Rostock 18119, Germany
Nils Peters, Cochair
Advanced Tech R&D, Qualcomm Technologies, Inc., 5775 Morehouse Drive, San Diego, CA 92121
Contributed Papers
1:40
1pPPb1. Measuring speech intelligibility with speech and noise
interferers in a loudspeaker-based virtual sound environment. Axel
Ahrens, Marton Marschall, and Torsten Dau (Dept. of Elec. Eng., Hearing
Systems group, Tech. Univ. of Denmark, Ørsteds Plads, Bldg. 352, Kgs.
Lyngby 2800, Denmark, aahr@elektro.dtu.dk)
Loudspeaker-based virtual sound environments (VSEs) are emerging as
a versatile tool for studying human auditory perception. In order to investigate the reproducibility of simple sound scenes, speech reception thresholds
(SRTs) were measured with two interferers and in two spatial conditions
(co-located and ±30° separated) using the Danish matrix sentence test Dantale II (Wagener et al., 2003). SRTs were measured in a typical listening
room and in a VSE consisting of a spherical 64-channel loudspeaker array
using simulated room acoustics with mixed-order-Ambisonics (MOA) playback. The speech maskers were taken from the same material as the target
(different talker, same sex). The noise maskers had the same long-term
spectrum and broadband envelope as the speech interferer but had random
phase (Best et al., 2013). The co-located conditions were reproduced comparably in the real room and in the VSE, with both speech and noise interferers. However, spatial separation led to a 3 dB higher benefit in the VSE
than in the real room in both interferer conditions. Previous studies using a
larger number of sound sources and more reverberation did not show such
systematic differences between virtual and reference conditions, suggesting
that reproduction errors may be masked in more complex scenes.
2:00
1pPPb2. Validating a perceptual distraction model in a personal two-zone sound system. Jussi Rämö, Lasse Christensen (Electron. Systems, Aalborg Univ., Fredrik Bajers Vej 7, Aalborg 9220, Denmark, jur@es.aau.dk), Søren Bech (Bang & Olufsen, Struer, Denmark), and Søren H. Jensen (Electron. Systems, Aalborg Univ., Aalborg, Denmark)
This paper focuses on validating a perceptual distraction model, which aims to predict a user’s perceived distraction caused by audio-on-audio interference, e.g., two competing audio sources within the same listening space.
Originally, the distraction model was trained with music-on-music stimuli
using a simple loudspeaker setup, consisting of only two loudspeakers, one
for the target sound source and the other for the interfering sound source.
Recently, the model was successfully validated in a complex personal sound-zone system with speech-on-music stimuli. A second round of validation was conducted by physically altering the sound-zone system and running a set of new listening experiments utilizing two sound zones within it, thus validating the model on a different sound-zone system with both speech-on-music and music-on-speech stimulus sets. Preliminary results show that the model performance is equally good in both zones, i.e., with both speech-on-music and music-on-speech stimuli, and
comparable to the previous validation round (RMSE approximately 10%).
The results further confirm that the distraction model can be used as a valuable tool in evaluating and optimizing the performance of personal sound-zone systems.
2:20
1pPPb3. The effect of reverberation and audio spatialization on
egocentric distance estimation of objects in stereoscopic virtual reality.
Will Bailey (Acoust. Res. Ctr., Univ. of Salford, Newton Bldg., Crescent,
Salford M5 4WT, United Kingdom, j.w.bailey@edu.salford.ac.uk) and
Bruno M. Fazenda (Acoust. Res. Ctr., Univ. of Salford, Manchester, United
Kingdom)
It has been reported by numerous studies on distance perception in VR
that a compression of visual space occurs in virtual environments presented
using stereoscopic techniques. Other studies have shown that modified environmental auditory cues can affect egocentric spatial perception and that
increased order of modality improved the experience of immersive media.
Work was conducted to measure the effect of spatialized acoustic cues on egocentric distance estimation in head-mounted-display VR. Results suggest that although early-reflection content was not found to have a significant effect on distance estimation, the presence of reverberation increases the perceived distance of objects farther than 5 m from the user and can compensate for the spatial compression observed in the use of stereoscopic VR.
Invited Papers
2:40
1pPPb4. Evaluation of techniques for navigation of higher-order ambisonics. Joseph G. Tylka and Edgar Choueiri (Mech. &
Aerosp. Eng., Princeton Univ., MAE Dept. E-Quad, Olden St., Princeton, NJ 08544, josephgt@princeton.edu)
Metrics are presented that assess spectral coloration and localization errors incurred by navigational techniques for higher-order
ambisonics. Previous studies on the coloration induced by such navigational techniques have been largely qualitative, and the accuracy
of previously-used localization models in this context is unclear. The presented metrics are applied in numerical simulations of navigation over a range of translation distances. Coloration is predicted using an auditory filter bank to compute the spectral energy differences
between the test and reference signals in critical bands, and localization is predicted using a precedence-effect-based localization model.
Coloration and localization errors are also measured through corresponding binaural-synthesis-based listening tests, wherein subjects are
first asked to rate the induced coloration relative to reference and low-pass-filtered “anchor” signals, and subsequently to judge source position. Relationships are drawn between the metrics and the results of the listening tests in order to validate the predictive capabilities of
the metrics.
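The critical-band coloration prediction described for 1pPPb4 might be sketched as follows. This is a simplified stand-in for the authors' auditory-filter-bank analysis: the ERB-rate band spacing, FFT-based band energies, and RMS aggregation across bands are illustrative assumptions, not their exact formulation.

```python
import numpy as np

def erb_band_edges(f_lo=50.0, f_hi=8000.0, n_bands=30):
    """Band edges equally spaced on the ERB-number scale (Glasberg & Moore)."""
    erb = lambda f: 21.4 * np.log10(1.0 + 0.00437 * f)
    erb_inv = lambda e: (10.0 ** (e / 21.4) - 1.0) / 0.00437
    return erb_inv(np.linspace(erb(f_lo), erb(f_hi), n_bands + 1))

def coloration_metric(test, ref, fs):
    """RMS across critical bands of the per-band energy difference (dB)
    between a test signal and a reference signal."""
    n = min(len(test), len(ref))
    f = np.fft.rfftfreq(n, 1.0 / fs)
    e_test = np.abs(np.fft.rfft(test[:n])) ** 2
    e_ref = np.abs(np.fft.rfft(ref[:n])) ** 2
    diffs = []
    edges = erb_band_edges()
    for lo, hi in zip(edges[:-1], edges[1:]):
        m = (f >= lo) & (f < hi)
        if m.any():
            diffs.append(10.0 * np.log10((e_test[m].sum() + 1e-12) /
                                         (e_ref[m].sum() + 1e-12)))
    return float(np.sqrt(np.mean(np.square(diffs))))
```

An unaltered signal scores zero by construction, while any spectral tilt or low-pass filtering inflates the metric, mirroring the coloration ratings sought in the listening tests.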
3:00
1pPPb5. Perceptual evaluation of multichannel synthesis of moving sounds as a function of rendering strategies and velocity.
Cedric Camier and Catherine Guastavino (Multimodal Interaction Lab, McGill Univ., 3661 McTavish St., Montreal, QC H3A 1X1,
Canada, cedric.camier@gmail.com)
Sound-field synthesis for static sound sources has been extensively studied. Recently, the synthesis of dynamic sound sources has garnered
increased attention. Classical sound-field rendering strategies discretize dynamic sound-fields as a sequence of stationary snapshots. The
use of discrete multichannel arrays can generate further artifacts with moving sounds. Depending on the technique used, this results in
an amplitude modulation due to successive loudspeaker contributions (VBAP) or in multiple comb-filtering (WFS) which could affect
localization cues, especially at off-centered listening positions. We first present a detailed description of these artifacts. We then introduce a hybrid rendering strategy combining propagation simulation and VBAP at audio rate. We used this rendering strategy and WFS
to synthesize white noise revolving around listeners on a circular 48-loudspeaker array. On each trial, participants had to identify the trajectory (circle, triangle, or square) for velocities ranging from 0.5 to 2 revolutions per second. Performance was well above chance level
in all conditions. While WFS outperformed the hybrid rendering strategy at low velocities, no significant differences were observed at
high velocities for which participants relied on temporal cues rather than spatial cues. The results highlight how artifacts of the rendering
strategies interfere with dynamic sound localization at different velocities.
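The amplitude-modulation artifact attributed to pairwise panning above can be seen directly in how VBAP gains vary with source angle. A minimal 2-D VBAP sketch for a circular array follows; the pair search, clipping, and constant-power normalization are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def vbap2d_gains(src_deg, spk_deg):
    """Pairwise 2-D VBAP: power-normalized gains for the loudspeaker pair
    bracketing a source on a circular array; all other gains are zero."""
    spk = np.sort(np.asarray(spk_deg, dtype=float))
    src = src_deg % 360.0
    # index of the adjacent pair bracketing the source, wrapping at 360°
    i = np.searchsorted(spk, src) % len(spk)
    j = (i - 1) % len(spk)
    base = np.deg2rad([spk[j], spk[i]])
    L = np.array([[np.cos(base[0]), np.cos(base[1])],
                  [np.sin(base[0]), np.sin(base[1])]])
    p = np.array([np.cos(np.deg2rad(src)), np.sin(np.deg2rad(src))])
    g2 = np.linalg.solve(L, p)            # gains for the active pair
    g2 = np.clip(g2, 0.0, None)
    g2 /= np.linalg.norm(g2) + 1e-12      # constant-power normalization
    gains = np.zeros(len(spk))
    gains[[j, i]] = g2
    return gains
```

As a moving source sweeps past a loudspeaker, the active pair switches and the per-speaker gains swing between 0 and 1, which is the source of the amplitude modulation the abstract describes.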
3:20–3:40 Break
Contributed Paper
3:40
1pPPb6. Determining the accuracy of sound field synthesis systems in reproducing subjective source locations. Anna C. Catton, Lily M. Wang (Durham School of Architectural Eng. and Construction, Univ. of Nebraska - Lincoln, 1110 S. 67th St., Omaha, NE 68182-0816, anna.catton@huskers.unl.edu), Adam K. Bosen, Timothy J. Vallier, and Douglas H. Keefe (Boys Town National Res. Hospital, Omaha, NE)
Sound field synthesis systems developed and applied to study human hearing perception differ in terms of the number and arrangement of loudspeakers in rooms of different sizes, and the methods used to generate virtual sound environments. Research has evaluated how well such systems physically reproduce room acoustic conditions, but limited subjective data are available. This paper seeks to develop a method for determining the accuracy of the perceived localization of a virtual sound source generated using a multichannel sound synthesis system. A test method is applied to the sound field synthesis facility at Boys Town National Research Hospital, a room (5.8 m x 5.2 m x 2.7 m) with a reverberation time of 0.16 s at 125 Hz and below, and 0.024 s at 250 Hz and above. Short bursts of broadband speech-shaped noise are presented at a number of virtual source locations under free-field and modeled reverberant-room conditions, and listeners are asked to point to the subjective source location. Subjective localization results are compared as functions of virtual sound location and of parameters of early reflections and reverberation in the modeled sound environment. Results are intended to guide future research on subjective room acoustics relevant to children’s communication needs. [Work supported by NIH GM109023.]
Invited Papers
4:00
1pPPb7. Contributions of head-related transfer function choice and head tracking to virtual loudspeaker binaural rendering. Brian F. Katz (Lutheries - Acoustique - Musique, Inst. d’Alembert, UPMC/CNRS, boîte 162, 4 Pl. Jussieu, 75252 Paris Cedex 05, France, brian.katz@upmc.fr), Peter Stitt, Laurent Simon (LIMSI, CNRS, Université Paris-Saclay, Orsay, France), Etienne Hendrickx (LABSTICC (Laboratoire des Sci. et Techniques de l’Information, de la Commun. et de la Connaissance), Université de Bretagne Occidentale, Brest, France), and Areti Andreopoulou (LIMSI, CNRS, Université Paris-Saclay, Orsay, France)
This presentation will provide an overview of recent and ongoing studies regarding evaluations of sound fields using virtual loudspeaker binaural synthesis. Of specific interest is an identification of perceptual attributes affected by Head-Related Transfer Function
(HRTF) choice beyond basic localization error and the sensitivity of listeners to head tracking with regards to latency and externalization
judgments. A list of perceptual attributes, created using a Consensus Vocabulary Protocol elicitation method, and validated through listening tests, resulted in eight valid perceptual attributes for describing the perceptual dimensions affected by HRTF set variations.
Employing prescribed head movements, sensitivity to head tracker latency showed small but significant differences between single and
multichannel audio source scenes. A similar protocol was employed to compare the sense of externalization as a function of head rotation with and without head tracking. In contrast to several previous studies, results showed that head movements can substantially enhance externalization, especially for frontal and rear sources, and that externalization can persist even after the subject has stopped moving his/her head. These works were carried out during the course of the French-funded BiLi (Binaural Listening) project (FUIAAP14).
4:20
1pPPb8. Auralization of acoustic spaces based on spherical microphone array recordings. Jens Ahrens (Chalmers Univ. of Technol., Sven Hultins gata 8A, Gothenburg 412 58, Sweden, jens.ahrens@chalmers.se), Christoph Hohnerlein (Technische Universität Berlin, Berlin, Germany), and Carl Andersson (Chalmers Univ. of Technol., Göteborg, Sweden)
Microphone arrays can capture the physical structure of a sound field. They are therefore potentially suited to capture and preserve
the sound of acoustic spaces within given physical limitations that are determined by the construction of the array. Spherical microphone arrays in particular have received considerable attention in this context. Superposed onto the limitations of the microphone array are the
limitations caused by the auralization system. We present results from user studies on the perceptual differences between spherical
microphone array recordings that are auralized with headphones as well as with a circular 56-channel loudspeaker array and headphone
auralization based on dummy head measurements of the same spaces. Head-tracking was applied in all cases in which headphones were
used.
4:40
1pPPb9. Sound environment and sound field reproduction using transducer arrays: Correlation between physical and
perceptual evaluations. Philippe-Aubert Gauthier and Alain Berry (Mech. Eng., Universite de Sherbrooke, 51, 8e Ave. Sud,
Sherbrooke, QC J1G 2P6, Canada, philippe_aubert_gauthier@hotmail.com)
Sound Environment Reproduction (SER) using Sound Field Reproduction (SFR) is aimed at the spatial reconstruction of a target
sound field captured using a microphone array. SFR has recently gained attention for SER in industrial or engineering contexts for sound
comfort or sound quality studies. The challenge is to create a reproduced sound field that first satisfies an assessment based on physical
evaluation, for example to satisfy any regulation based on physical quantities. However, the reproduced sound environment should also succeed in perceptual evaluations. In this work, SER was applied to spatial sound field simulation in a vehicle mock-up. Both physical and perceptual evaluations were completed. Physical metrics such as the frequency-dependent averaged reproduction error (both phase and magnitude) and the averaged magnitude error (ignoring phase) were measured. Perceptual evaluations were based on similarity listening tests comparing SER with an original reference (the target sound field) and were compiled as similarity scores. Correlating the similarity scores with the various physical evaluations suggests that the frequency-averaged and spatially averaged magnitude error is the physical metric most correlated with the results of the listening tests. This suggests that spatially reproducing the accurate frequency spectrum is the first criterion for immersive SER.
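An "averaged magnitude error (ignoring phase)" of the kind mentioned above could be computed as in the sketch below. The abstract does not specify the exact averaging or normalization, so this is one plausible definition rather than the authors' metric.

```python
import numpy as np

def avg_magnitude_error_db(p_rep, p_target):
    """Mean absolute level error (dB) between reproduced and target sound
    fields, ignoring phase.

    p_rep, p_target: complex pressure arrays of shape (n_mics, n_freqs),
    averaged over both microphone positions and frequencies."""
    level_err = 20.0 * np.log10((np.abs(p_rep) + 1e-12) /
                                (np.abs(p_target) + 1e-12))
    return float(np.mean(np.abs(level_err)))
```

Because only magnitudes enter, a reproduced field with the correct spectrum but arbitrary phase errors scores zero, which is exactly the property that lets such a metric track spectral fidelity independently of phase-sensitive reproduction error.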
Contributed Paper
5:00
1pPPb10. High-density data sonification of stock market information in
an immersive virtual environment. Samuel Chabot and Jonas Braasch
(School of Architecture, Rensselaer Polytechnic Inst., 40 3rd St., Troy, NY
12180, chabos2@rpi.edu)
Data sonification is an important tool to enhance a user’s ability to capture and process complex information. In this system, stock market data for
the top 128 publicly traded stock options are analyzed for the sonification.
This information includes the daily trading price and volume of each stock.
Audio streams for conveying the daily information of each stock are generated using sine tone click trains, pitch alterations, and noise bursts. Each
audio stream is mapped to an individual loudspeaker in the 128-loudspeaker
array of Rensselaer’s Collaborative-Research Augmented Immersive Virtual
Environment Laboratory (CRAIVE-Lab) to create a high-density spatialized
sonification within the immersive virtual environment. [Work supported by
NSF #1229391 and the Cognitive and Immersive Systems Laboratory
(CISL).]
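A mapping of the kind described (tones whose pitch follows daily price and whose level follows trading volume) might be sketched as below. The pitch range, burst length, and normalization are illustrative assumptions, not the CRAIVE-Lab implementation.

```python
import numpy as np

def stock_to_audio(prices, volumes, fs=44100, day_dur=0.25):
    """Map daily closing prices to sine-tone pitch and trading volume to
    burst amplitude; one short enveloped tone burst per trading day."""
    prices = np.asarray(prices, dtype=float)
    volumes = np.asarray(volumes, dtype=float)
    # normalize price to [0, 1] for pitch and volume to [0, 1] for level
    p = (prices - prices.min()) / (np.ptp(prices) + 1e-12)
    v = volumes / (volumes.max() + 1e-12)
    n = int(fs * day_dur)
    t = np.arange(n) / fs
    bursts = []
    for pi, vi in zip(p, v):
        freq = 220.0 * 2.0 ** (2.0 * pi)   # two octaves above 220 Hz
        env = np.hanning(n)                # click-free onset and offset
        bursts.append(vi * env * np.sin(2 * np.pi * freq * t))
    return np.concatenate(bursts)
```

In the array setting described, each stock's stream (one such signal per stock) would be routed to its own loudspeaker channel rather than mixed down.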
SUNDAY AFTERNOON, 25 JUNE 2017
ROOM 201, 1:20 P.M. TO 5:00 P.M.
Session 1pSA
Structural Acoustics and Vibration: General Topics in Structural Acoustics and Vibration I
Benjamin Shafer, Chair
Technical Services, PABCO Gypsum, 3905 N 10th St., Tacoma, WA 98406
Contributed Papers
1:20
1pSA1. A study of complex system dynamics: Metronome
synchronization. Noah A. Sonne and Teresa J. Ryan (Eng., East Carolina
Univ., 200 Blue Beech Dr., Greenville, NC 27858, noahasonne@gmail.
com)
This work investigates energy exchange within a complex vibrating system. The system is made up of a mass, called the primary oscillator, and a
number of attached smaller structures, called the subordinate oscillator
array. Specifically, a rectangular rigid foam base is the primary mass and
mechanical metronomes are used as the subordinate oscillators. This work
explores how the orientation and arrangement of the metronomes on the
master structure affects the time it takes for metronome synchronization as
well as the resulting amplitude of oscillation of the vibration of the primary
mass. A MATLAB-based image processing approach is used to measure these system parameters. [This work was supported by the Robert W. Young Award for Undergraduate Student Research.]
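Synchronization time in a system like this is commonly quantified with the Kuramoto order parameter of the extracted metronome phases. The sketch below assumes phases have already been obtained (e.g., from the video analysis); the order-parameter criterion and the 0.95 threshold are illustrative assumptions, not the study's stated method.

```python
import numpy as np

def sync_time(phases, dt, threshold=0.95):
    """Time at which the Kuramoto order parameter r(t) first exceeds
    `threshold` and remains above it until the end of the record.

    phases: array of shape (n_steps, n_metronomes), radians.
    Returns None if the ensemble never locks in."""
    r = np.abs(np.mean(np.exp(1j * np.asarray(phases)), axis=1))
    above = r >= threshold
    for k in range(len(above)):
        if above[k:].all():
            return k * dt
    return None
```

r(t) is 1 when all metronomes tick in phase and near 0 when phases are spread around the circle, so the first persistent crossing gives a single scalar "time to synchronize" for comparing metronome arrangements.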
1:40
1pSA2. Vibration and sound of a flapping airfoil: The limit of small
bending rigidity. Avshalom Manela and Michael Weidenfeld (Aerosp.
Eng., Technion, Technion City, Haifa 32000, Israel, amanela@technion.ac.
il)
We investigate the near and far fields of a thin elastic airfoil placed in a uniform low-Mach flow and subject to leading-edge heaving actuation. The airfoil is “hung” in the vertical direction and is free at its downstream end, so that “hanging chain” gravity-induced tension forces apply. The structure’s bending rigidity is assumed small, and we focus on analyzing the differences between a highly elastic airfoil and a membrane (where the bending rigidity vanishes). The near field is studied based on potential thin-airfoil
theory, whereas the acoustic field is investigated using the Powell-Howe
acoustic analogy. The results shed light on the specific effect of structure
bending stiffness on the dynamics and acoustic disturbance of an airfoil.
2:00
1pSA3. Structural-acoustic optimization based on Fast Multipole
Boundary Element Method sensitivity analysis of a coupled acoustic
fluid-structure system. Nian Yang (State Key Lab. of Ocean Eng.,
Shanghai Jiao Tong Univ., Dongchuan 800 Rd., Shanghai, Shanghai
200240, China, yangnian@sjtu.edu.cn), Leilei Chen (College of Civil Eng.,
Xinyang Normal Univ., Xinyang, Henan, China), Kheirollah Sepahvand
(Dept. of Mech. Eng.,Tech. University of Munich, Inst. of VibroAcoust. of
Vehicles and Machines, Garching bei Munich, Germany), Hong Yi (State
Key Lab. of Ocean Eng., Shanghai Jiao Tong Univ., Shanghai, China), and
Steffen Marburg (Dept. of Mech. Eng.,Tech. University of Munich, Inst. of
VibroAcoust. of Vehicles and Machines, Munich, Germany)
For light structures immersed in water, full fluid-structure interaction (FSI) has to be considered when the structural acoustics is analyzed. However, the computational cost of FSI prediction and optimization is typically very high. The Fast Multipole Boundary Element Method (FMBEM) is one of the most widely used methods to accelerate the computations.
Meanwhile, sensitivity analysis for FSI problems is the most time-consuming part of gradient-based optimization strategies. In this research, FMBEM-based sensitivity analysis is applied to the structural-acoustic optimization of a practical underwater model. An objective function representing the overall radiated sound power is investigated, with the damping-material thickness in specific areas chosen as the design parameters. An improvement of the objective function is found within a limited number of function evaluations. The FEM/FMBEM-based sensitivity analysis is used to calculate the sensitivity of the objective function to the design parameters. The method of moving asymptotes (MMA) is chosen as the optimization algorithm. The efficiency of this optimization strategy applied to a practical FSI model is investigated in detail.
2:20
1pSA4. Ball sensor rattle: Experimental and numerical sensitivity study
for seatbelt applications. Kai-Ulrich J. Machens, Jens Neumann, Jens
Scholz (Occupant Safety Systems, ZF TRW Active & Passive Safety
Technol., Industriestr. 20, Alfdorf D-73553, Germany, kai-ulrich.
machens@zf.com), and Marian Markiewicz (Novicos GmbH, Hamburg,
Germany)
Seatbelt systems are important elements of automotive safety systems.
Most seatbelt retractors are equipped with ball sensors, enabling retractors
to comply with mandatory vehicle sensitive locking requirements. The functional principle is based on ball inertia plus defined backlash of mating
surfaces, which renders seatbelt retractors susceptible to rattle. Sensor-ball rattle is considered the most persistent parasitic noise source in the occupant-safety industry. The vibration-induced noise behavior of the ball sensor is
analyzed, both experimentally and by numerical simulation, predicting the
sound pressure spectrum up to 5 kHz. Impact forces among mating surfaces
are computed in the time domain with flexible multibody system analysis,
employing Craig-Bampton modes to approximate the vibrations of acoustically relevant substructures. Acoustic radiation into the sound field is determined in the frequency domain using preprocessed acoustic transfer vectors
from boundary element method analysis, significantly reducing computation
time. A sensitivity study with variation of sensor mass, gap, and excitation
demonstrates excellent correlation between numerical model predictions and experimental results across a large variety of test cases. The presented methodology can predict rattle-induced noise and therefore delivers substantial
input for retractor design. Furthermore, other applications beyond seatbelt
retractors could equally benefit from using this approach.
2:40
1pSA5. A methodology to design multi-axis test rigs for vibration and
durability testing using frequency response functions. Polat Sendur
(Mech. Eng., Ozyegin Univ., Nisantepe Mahallesi, Orman Sokak,
Cekmekoy, Istanbul 34794, Turkey, polat.sendur@ozyegin.edu.tr), Umut
Ozcan, and Berk Ozoguz (Ford Otomotiv Sanayi A.S, Istanbul, Turkey)
Multi-axis simulators are designed for experimental verification of
the safe functioning of large components and subsystems under real world
customer usage in vibration and durability testing. Transformation of the
full vehicle conditions to mast rig testing with correct system dynamics and
vibration characteristics and boundary conditions is a key challenge in the
development of the experimental set-up. In this paper, a systematic methodology is formalized for designing the experimental set-up on a MAST rig to replicate the vehicle dynamics and vibration characteristics of in-vehicle conditions. System modes and frequency response functions are chosen as key performance metrics to compare the dynamics of the system to be tested for both the full vehicle and the rig design. Criteria on these metrics are defined to decide whether the test-rig design sufficiently replicates the in-vehicle conditions. The methodology is illustrated on a side skirt attached to a heavy-duty
truck chassis that demonstrates the application of the methodology in
practice.
3:00–3:20 Break

3:20

1pSA6. Structural health monitoring under random flow loading. Nicola Roveri, Silvia Milana, Antonio Culla, and Antonio Carcaterra (Dept. of Mech. and Aerosp. Eng., Univ. of Rome La Sapienza, via Eudossiana 18, Rome, Italy, nicola.roveri@gmail.com)

The aim of this work is the analysis of fluid-structure systems excited by a flow consisting of an incompressible potential fluid with embedded vortexes. In many problems of relevant applicative interest, the monitoring and potential detection of damage in structures undergoing loads in operative conditions is important. The present method tries to identify the load characteristics and the structural damage simultaneously. The flow is characterized by the average velocity of the fluid conveying the vortexes and by the position and intensity of the conveyed vortexes. A method for the identification of these flow parameters, based on vibration signals measured at the elastic fluid-structure interface, is proposed. Vibration signals are numerically generated and then processed with time-frequency techniques, such as the ensemble empirical mode decomposition and the normalized Hilbert transform. The sensitivity of the algorithm to the measurement position and to single- versus multi-point acquisitions is also investigated. A particular instantaneous frequency is first employed to estimate the load characteristics. The influence of the load is then removed from the instantaneous frequency, so that the damage position can be identified. The validity of the proposed method is analyzed by varying the flow parameters and the damage locations and depths; the effect of ambient noise is also taken into account.

3:40

1pSA7. Design of in-plane functionally graded material plates for improved vibration characteristics. Nabeel T. Alshabatat (Mech. Eng., Tafila Tech. Univ., Tafila, Jordan), Kyle R. Myers (Penn State Univ., Appl. Res. Lab., 3220B G Thomas Water Tunl, PO Box 30, State College, PA 16804, krm25@arl.psu.edu), and Koorosh Naghshineh (Mech. & Aerosp. Eng., Western Michigan Univ., Kalamazoo, MI)

A method for improving the vibration characteristics of plate structures is proposed. This method uses functionally graded material (FGM) instead of isotropic material to construct the plates. The volume fraction of each material constituent is defined in the plane of the plate by a 2D trigonometric law, while the material properties through the thickness are assumed constant. The finite element method is used for modal and harmonic analysis, and a genetic algorithm is utilized for optimization of the chosen objective function. The efficacy of the method is demonstrated by two design problems. In the first design problem, FGM is used to maximize the fundamental frequencies of plates with different boundary conditions. In the second design problem, the kinetic energy of a vibrating FGM plate is minimized at a specific excitation frequency. These example design problems show that material tailoring of plate structures using FGM can result in substantial improvements of their vibration characteristics. The results can be used to guide the practical design of FGM plates to enhance their dynamic properties.

4:00

1pSA8. Structural-acoustic optimization using cluster computing. Robert Campbell, Micah R. Shepherd, and Stephen Hambric (Appl. Res. Lab., The Penn State Univ., P.O. Box 30, State College, PA 16804, rlcampbell@psu.edu)

Structural-acoustic optimization using state-of-the-art evolutionary algorithms may require tens of thousands of system solutions, which can be time-limiting for full-scale systems. To reduce the time required for each function evaluation, parallel processing techniques are used to solve the system in a highly scalable fashion. The system acoustic radiation is modeled as a stochastic problem using finite elements for the structural vibration and boundary elements for the fluid loading and acoustic analysis. The approach is demonstrated by minimizing the sound radiated from a curved panel under the influence of a turbulent boundary layer in the presence of added point masses. Details of the point-mass magnitudes and distribution are outcomes of the optimization. Solver scaling information is provided that demonstrates the utility of the parallel processing approach.
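The parallel function evaluations central to 1pSA8 can be sketched with a worker pool. The toy objective below merely stands in for the FE/BE radiated-power solve, and a real run would distribute jobs across cluster nodes (e.g., via MPI) rather than threads; everything here is an illustrative assumption.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def radiated_power(masses):
    """Toy stand-in for an FE/BE radiated-power solve: penalizes total
    added mass and deviation from a nominal distribution (illustrative)."""
    m = np.asarray(masses, dtype=float)
    return float(np.sum((m - 0.5) ** 2) + 0.1 * m.sum())

def evaluate_population(population, workers=4):
    """Score every candidate point-mass layout of one evolutionary
    generation concurrently; each call is independent, so the evaluations
    scale with the number of workers."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(radiated_power, population))
```

Because evolutionary algorithms only need the objective values of a generation before selecting the next one, the evaluations are embarrassingly parallel, which is what makes the cluster approach effective.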
4:20
1pSA9. Acoustic testing techniques for replicating in-flight dynamic
loads. Kobi J. Cohen and Daniella Raveh (Aerosp. Eng., Technion, 3
Harottem St., Apt. 37, Haifa 3584706, Israel, kobic8@gmail.com)
Modern weapon systems used on combat aircraft have complex electronic assemblies that are required to operate in a challenging dynamic environment throughout their life cycle. Among the various sources of excitation, aerodynamic noise is considered the most significant. The paper
presents vibroacoustic measurements from captive flight, and attempts to
replicate them in acoustic laboratory testing. The question of interest is
which testing method, in terms of configuration and control scheme, is the
most adequate to accurately simulate the vibratory response of inner assemblies to flight loads. The paper examines acoustic test methods in a reverberant chamber. The tested article is a subsystem of a weapon system that
includes electrical assemblies, integrated inside a structural envelope. Two
test configurations are compared—“covered,” in which the subsystem is
tested inside its structural envelope, and “uncovered,” in which the subsystem is directly exposed to acoustic excitation. Acceleration measurements
show that when excited by in-flight acoustic levels, the acceleration responses of the uncovered subsystem are significantly lower than those
measured in flight. For the covered configuration, although the acoustic levels inside the envelope are attenuated by the structure, the resulting accelerations are significantly higher and closer to those of flight.
4:40
1pSA10. Dissipation as energy transport to molecular scale. Adnan Akay
(Bilkent Univ., Ankara 06800, Turkey, akay@cmu.edu)
Dissipation describes the irreversible transfer of ordered kinetic energy from larger scales to thermalized vibrations at the molecular scale. Considering dissipation as the transfer of energy from one form of vibration to another, models can be developed without the need for qualitative empirical constants. Examples of “lossless” damping mechanisms will be derived to illustrate energy conversion at the molecular scale.
SUNDAY AFTERNOON, 25 JUNE 2017
BALLROOM A, 1:20 P.M. TO 5:20 P.M.
Session 1pSC
Speech Communication: Non-Native Speech and Bilingualism (Poster Session)
Kristin Van Engen, Chair
Washington University in St. Louis, One Brookings Dr., Campus Box 1125, Saint Louis, MO 63130-4899
All posters will be on display from 1:20 p.m. to 5:20 p.m. To allow contributors in this session to see the other posters, authors of
odd-numbered papers will be at their posters from 1:20 p.m. to 3:20 p.m. and authors of even-numbered papers will be at their posters
from 3:20 p.m. to 5:20 p.m.
Contributed Papers
1pSC1. Acoustical analysis of English /r/ and /l/ by native Japanese
adults and children. Katsura Aoyama (Audiol. and Speech-Lang. Pathol.,
Univ. of North Texas, 1155 Union Circle #305010, Denton, TX 76209,
katsura.aoyama@unt.edu), James E. Flege (Univ. of Alabama at
Birmingham, Tuscania, Italy), Reiko Akahane-Yamada (ATR, Seika cho,
Kyoto, Japan), and Tsuneo Yamada (Open Univ. of Japan, Chiba,
Japan)
1pSC3. Individual variation in the perception of different types of
speech degradation. Drew J. McLaughlin, Melissa M. Baese-Berk
(Linguist, Univ. of Oregon, 1290, Eugene, OR 97403, dmclaug2@uoregon.
edu), Tessa Bent (Speech and Hearing Sci., Indiana Univ., Bloomington,
IN), Stephanie A. Borrie (Communicative Disord. and Deaf Education,
Utah State Univ., Logan, UT), and Kristin Van Engen (Psychol. and Brain
Sci., Washington Univ., Saint Louis, MO)
This study investigated the acoustic properties of American English /r/
and /l/ produced by native Japanese (NJ) and native English (NE) speakers.
The purpose of this study was to examine the differences in production
reported in Aoyama et al. (2004) acoustically. Aoyama et al. evaluated productions of /r/ and /l/ in 64 NE and NJ adults and children (16 participants
each in 4 groups) using intelligibility ratings. The data were collected twice
to study the acquisition of English by the NJ adults and children. In this
study, four acoustic parameters (duration, F1, F2, and F3) were measured in
256 tokens each of /r/ and /l/. The results showed that all of the acoustic parameters differed significantly between NJ and NE speakers at both times of
testing. Some aspects of acoustic parameters changed significantly over the
course of one year in NJ children’s productions. Lastly, the formant values in NJ speakers’ productions indicated that their productions of both /r/ and /l/ resembled NE speakers’ productions of /r/ more than NE speakers’ productions of /l/. This finding is consistent with Aoyama et al.’s claim that NJ speakers may have more difficulty producing English /l/ than English /r/.
Both environmental noise and talker-related variation (e.g., accented
speech) can create adverse listening conditions for speech communication.
Individuals recruit additional cognitive, linguistic, or perceptual resources
when faced with such challenges, and they vary in their ability to understand
degraded speech. However, it is unclear whether listeners employ the same
additional resources when encountering different types of challenging listening conditions. In the present study, we compare individuals’ ability on a variety of skills —including vocabulary, selective attention, rhythm perception,
and working memory—with transcription accuracy (i.e., intelligibility scores)
of speech degraded by the addition of speech-shaped noise or multi-talker
babble and/or talker variation (i.e., a non-native speaker). Initial analyses
show that intelligibility scores across degradations of the same class (i.e., either environmental or talker-related) significantly correlate, but correlations
of intelligibility scores across degradation classes are weaker. The relationship between intelligibility scores and cognitive-linguistic skills is similar,
showing that while vocabulary and working memory correlate with multiple
degradation types, rhythm perception only correlates with environmental degradations. Taken together, these results indicate that listeners may recruit different resources when faced with different classes of listening challenges.
1pSC2. Factors influencing intelligibility and fluency in non-native
speech. Melissa M. Baese-Berk (Dept. of Linguist, Univ. of Oregon, 1290
University of Oregon, Eugene, OR 97403, mbaesebe@uoregon.edu) and
Tuuli Morrill (George Mason Univ., Fairfax, VA)
Substantial research has examined the factors making non-native speech
more difficult to understand than native speech. Prior work has suggested
that speaking rate is one such factor, with slower speech being perceived as
less comprehensible and more accented. Further, non-native speech is produced with shorter utterances and more frequent pauses than native speech.
Recent work has suggested that in addition to non-native speech being produced more slowly than native speech, it is produced with a more variable
speaking rate. In the present study, we examine the relationship between
variability in speaking rate, pausing and utterance length, and intelligibility
and fluency ratings of non-native speech. We asked listeners to transcribe
sentences produced by non-native speakers and to rate the fluency of read
speech. Preliminary results suggest that rate variability does correlate with
intelligibility of non-native speech, but not of native speech, and rate variability does not correlate as strongly with fluency. In addition, pause duration may interact with sentence complexity, but appears independent of rate.
These results suggest that while fluency and intelligibility are abstract constructs, examining variability in non-native speech and its relationship to a
number of factors may help explain why non-native speech is difficult to
understand.
1pSC4. Effects of talker intelligibility and noise on judgments of
accentedness. Sarah Gittleman (Washington Univ. in St. Louis, 6023
Waterman Blvd., 2W, St. Louis, MO 63112, sgittleman@wustl.edu) and
Kristin Van Engen (Washington Univ. in St. Louis, Saint Louis, MO)
The damaging effect of background noise on the intelligibility of foreign-accented speech has been well documented, but little is known about the
effect of noise on listeners’ subjective judgments of accents. Noise adds distortion to speech, which may cause it to sound more “foreign.” On the other
hand, noise may reduce perceived foreignness by masking cues to the accent.
In this study, 40 native English speakers listened to 14 English-speaking
native Mandarin speakers in four levels of noise: −4 dB, 0 dB, +4 dB, and
quiet. Participants judged each speaker on a scale from 1 (native-like) to 9
(foreign). The results showed a significant decrease in perceived accentedness
as noise level increased. They also showed a significant interaction between
noise and intelligibility: intelligibility (which had been measured for the same
talkers in a previous study) had the greatest effect on perceived accentedness
in quiet, and a reduced effect with increasing noise levels. These findings
indicate that listeners’ decreased access to acoustic-phonetic cues in the presence of background noise also reduces their sensitivity to phonetic variation
arising from foreign accents. Furthermore, the link between intelligibility and
accentedness is weakened by the presence of noise.
1pSC5. Early and late Spanish-English bilingual adults’ identification
of American English vowels. Miriam Baigorri (Long Island Univ.
Brooklyn, 1 University Plaza, Brooklyn, NY 11201, miriam.baigorri@liu.
edu) and Erika S Levy (Teachers College, Columbia Univ., New York,
NY)
Increasing numbers of Hispanic immigrants are entering the US (US
Census Bureau, 2011) and are learning American English (AE) as a second
language (L2). Accurate perception of AE vowels is important because
vowels carry a large part of the speech signal (Kewley-Port, Burkle, & Lee,
2007). The present study examined the accuracy with which early and late
Spanish-English bilingual adults identify AE vowels. Listeners were presented with AE vowels (/i/, /ɪ/, /ɛ/, /ʌ/, /æ/, /ɑ/, and /o/) in a /gəbVpə/ context. They were instructed to click on the key word response from a choice
of nonsense words that contained the second vowel they heard. Findings
indicate that identification accuracy of L2 vowels was significantly higher
with early age of L2 acquisition. However, early bilingual listeners’ vowel
perception was not native-like, suggesting that the phonetic properties of
their native language influenced L2 speech perception. Additionally, identification accuracy varied as a function of the particular vowel, shedding light
on how the relationship between Spanish and AE vowel inventories might
explain the difficulties that arise in Spanish-English bilinguals’ identification of AE vowels. Findings are examined in relation to perceptual assimilation and discrimination of the stimuli by the same listeners.
1pSC6. Second language pronunciation training using acoustic-to-articulatory inversion. Jeffrey J. Berry, Abigail Stoll (Speech Pathol. &
Audiol., Marquette Univ., P.O. Box 1881, Milwaukee, WI 53201-1881,
jeffrey.berry@marquette.edu), Deriq Jones, Seyedramin Alikiaamiri (Elec.
and Comput. Eng., Marquette Univ., Milwaukee, WI), and Michael T.
Johnson (Elec. and Comput. Eng., Univ. of Kentucky, Lexington, KY)
The current work presents articulatory-kinematic, acoustic, and perceptual data characterizing how visual biofeedback derived from acoustic-to-articulatory inversion may influence vowel pronunciation training for native-Mandarin speakers of English. Ten participants were engaged
in a six-week pronunciation training program that included a focus on English vowel production. As an addition to traditional pronunciation training
techniques often used by speech-language pathologists, half of the participants were also provided with visual biofeedback displays detailing aspects
of their current tongue position as well as idealized positions for vowel targets. Visual displays were obtained using an acoustic-to-articulatory inversion model based on the Parallel Reference Speaker Weighting (PRSW)
method for model adaptation. Pre- and post-training changes in articulation
were compared between participants that used only traditional pronunciation
training methods and those who were given visual biofeedback based on
acoustic-to-articulatory inversion. Data analyses focus on articulatory-kinematic measures, obtained via electromagnetic articulography, measures of
vowel formant frequencies, and perceptual assessments based on phonetic
transcriptions from expert listeners. The results of the current work provide
insights regarding the value of PRSW-based adaptation of acoustic-to-articulatory inversion models and the resulting visual feedback displays as tools
in second-language learning of English pronunciation.
1pSC7. Perception of native language speech sounds does not predict
non-native speech sound learning. Pamela Fuhrmeister (Speech, Lang.,
and Hearing Sci., Univ. of Connecticut, 850 Bolton Rd., U-1085, Storrs, CT
06269, pamela.fuhrmeister@uconn.edu) and Emily Myers (Speech, Lang.,
and Hearing Sci., Univ. of Connecticut, Storrs Mansfield, CT)
Individual differences are often observed in laboratory studies of non-native speech sound learning. One possible explanation for this variability is
that better detection of fine-grained contrasts within native language categories might facilitate non-native learning. For instance, Diaz et al. (2008)
found larger MMN responses to both native and non-native speech contrasts
in good compared to poor perceivers of a non-native contrast, suggesting a
general speech-related skill. The current study explores whether the ability
to discriminate subtle differences in native language speech sounds correlates with non-native speech sound learning. To test this, we trained participants on a non-native, Hindi dental/retroflex contrast and assessed their
categorization and discrimination of a native /da/-/ta/ continuum. Additionally, participants completed a visual Flanker task in order to control for general motivation in the experimental setting. Neither native language
measures nor the Flanker task predicted non-native speech sound learning
abilities. Rather, using k-means cluster analysis, we found two distinct
groups of learners and non-learners that did not significantly differ on native
language or Flanker measures, suggesting that non-native speech sound
learning may be independent of those skills. Instead, non-native learning
success was best predicted by an ability to discriminate the non-native contrast at pretest.
1pSC8. Perception and production of American English consonants /v/
and /w/ by Hindi speakers of English. Vikas Grover (Commun. Disord.
and Deafness Dept., Kean Univ., NJ 07083, vgrover@kean.edu), Valerie
Shafer, D. H. Whalen (Speech-Language-Hearing Sci., The Graduate Ctr.,
CUNY, New York, NY), and Erika Levy (Commun. Sci. and Disord.,
Teachers College, Columbia Univ., New York, NY)
This study examined the ability of Hindi speakers of English to perceive
and produce American English (AE) consonants /v/ and /w/, which are difficult for Hindi speakers to distinguish (e.g., in “vest” and “west”). It also
examined whether the Hindi listeners’ length of residence (LOR) in the US
affected their performance. Two groups of Hindi speakers were included:
Hindi speakers who had been in the US for more than 5 years and Hindi
speakers who lived in India and used English as their second language. Participants performed perception and production tasks of naturally produced
tokens of word forms containing /v/ and /w/. Hindi listeners performed significantly less accurately than the English listeners on all tasks. The non-significant differences between the two Hindi groups indicated that the Hindi
US groups’ experience with the /v/-/w/ contrast in the US was insufficient to
allow for perceptual learning of this contrast. The findings shed light on
speech perception, production and comprehension (for lexical items that differ minimally, e.g., ‘viper vs. wiper’) challenges faced by native Hindi
speakers learning English. This information can also be helpful for designing perception and production training programs for this population.
1pSC9. Relation between acoustic-phonetic properties and speech
intelligibility in noise obtained with bilingual talkers. Sabine Hochmuth,
Tim Jürgens, Thomas Brand, and Birger Kollmeier (Medical Phys. and Cluster of Excellence Hearing4all, Universität Oldenburg, Carl-von-Ossietzky-Str. 9-11, Oldenburg 26129, Germany, sabine.hochmuth@uni-oldenburg.de)
An objective, language-independent way of predicting observed differences in speech intelligibility in noise across talkers based on their acoustic-phonetic properties was pursued by exploiting speech intelligibility data in
stationary speech-shaped noise uttered by bilingual talkers and comparing
inter-individual as well as intra-individual speech feature variations across
languages. Matrix sentence materials were used that were uttered by bilingual talkers of German/Spanish and of German/Russian and by the respective original matrix test talkers. Various acoustic-phonetic parameters
discussed in the literature as being related to speech intelligibility were
determined for each talker. Vowel space area, between-vowel category dispersion, and energy in the mid-frequency region represented by the speech
intelligibility index were found to be the language-independent acoustic-phonetic properties most strongly related to speech intelligibility, at least for
German, Russian and Spanish. Generally larger inter-individual variation
within languages than intra-individual variation across languages was found.
Hence, objective phonetic criteria like vowel space area may be used in the
future to objectively assess the potential of a talker to be easily understood
in a noisy background. One reason for the generally poorer intelligibility performance of Spanish compared to German or Russian may lie in the use of considerably smaller vowel space areas.
1pSC10. Interaction of drift and distinctiveness in L1 English-L2 Japanese learners. Marie K. Huffman (Dept. of Linguist, Stony Brook Univ., SBS S 201, Stony Brook, NY 11794-4376, marie.huffman@stonybrook.edu), Katharina Schuhmann (Dept. of Germanic and Slavic Lang. and Literatures, Penn State Univ., State College, PA), Kayla Keller, and Chanda Chen (Dept. of Linguist, Stony Brook Univ., Stony Brook, NY)
Dynamic L2 effects on L1 phonetics appear in experienced and novice
second language learners, raising the question of what linguistic and cognitive factors determine their occurrence, degree, and direction (assimilatory
versus dissimilatory). Unlike Chang (2012), our longitudinal data from
voiceless stops in early L1 English:L2 Japanese learners show primarily dissimilatory increase in English VOTs, an effect found most strongly early in
their first semester. Assimilatory L1 drift toward the lower VOTs of L2 Japanese may be disfavored because a decrease in English voiceless stop VOT
could threaten the L1 contrast between long and short lag stops. Furthermore, dissimilatory VOT increase on voiceless stops allows English speakers to distinguish phonetically similar L1 and L2 voiceless stops (e.g., Flege
and Eefting 1987). These two principles predict that our voiced stop data
could show increased L1 prevoicing (an assimilatory effect that would not
endanger the English voicing contrast), while also displaying non-identical
prevoicing/short-lag values for L1 and L2 voiced stops (separating the languages, as for Huffman and Schuhmann’s (2016) English-Spanish learners).
Overall, our data suggest that L1 changes in early L2 learning can be dissimilatory, and that phonetic properties of L1 and L2 contrasts affect how
L1 values restructure during early L2 acquisition.
1pSC11. The effect of second language orthographic input on the
learning of Mandarin words. Yen-Chen Hao (Modern Foreign Lang. and
Literatures, Univ. of Tennessee, Knoxville, TN) and Chung-Lin Yang
(Psychol. and Brain Sci., Indiana Univ., Memorial Hall 322, 1021 E 3rd St,
Bloomington, IN 47408, cy1@indiana.edu)
This study examines the effect of L2 orthography on Mandarin word
learning. English speakers at three proficiency levels participated in a Mandarin word-learning experiment. During the learning phase, half of the participants were provided with Pinyin (Chinese Romanization) and tone
marks (the PY group), while the other half were provided with characters
(the CH group). After learning, the participants judged the matching of
sound and meaning of 64 pairs, half of which were matches, while the other
half were either segmental or tonal-mismatch items. The results showed that
the Advanced and Intermediate learners in the CH group were more accurate than their counterparts in the PY group in the tonal-mismatch and match
conditions, respectively. In contrast, the naïve participants in the PY group
were more accurate with the matches than those in the CH group. Also, participants in the PY group were overall inaccurate with the tonal-mismatch
items regardless of their proficiency levels. However, in the CH group the
Advanced learners scored significantly higher with the tonal-mismatch
items than the other groups. This study suggests that characters are more
effective than Pinyin in helping L2 learners encode the sounds of new Mandarin words, especially in tone encoding.
1pSC12. Is it blow or below? Non-native listeners’ perception of words
that contrast in syllable count. Keiichi Tajima (Dept. of Psych., Hosei
Univ., 2-17-1 Fujimi, Chiyoda-ku, Tokyo 102-8160, Japan, tajima@hosei.
ac.jp) and Stefanie Shattuck-Hufnagel (Res. Lab. of Electronics,
Massachusetts Inst. of Technol., Cambridge, MA)
Non-native speakers often have difficulty accurately producing and perceiving the syllable structure of a second language. For example, Japanese
learners of English often insert epenthetic vowels when producing English
words, e.g., stress produced as /sutoresu/. Similarly, when asked to count
syllables in spoken English words, they frequently overestimate the number
of syllables, suggesting that they tend to perceptually insert epenthetic vowels between adjacent consonants. These tendencies suggest the possibility
that learners may have difficulty distinguishing between English words that
contrast in syllable count, i.e., words that differ in the presence/absence of a
vowel, e.g., blow-below, sport-support. Furthermore, if listeners perceptually insert epenthetic vowels, then they should misperceive blow as below
more often than below as blow. To test these predictions, Japanese listeners
participated in a 2AFC identification task, using 78 English minimal pairs
contrasting in syllable count such as blow-below. Results showed that Japanese listeners indeed had difficulty with this task. However, misidentification of blow-type words as below was less frequent than misidentification of
below-type words as blow, contrary to predictions based on perceptual epenthesis. These results suggest that simple comparison of syllable structure
between languages may not suffice to predict difficulties in L2 speech perception. [Work supported by JSPS.]
1pSC13. Costs and cues in code-switched lexical access. Alice Shen
(Dept. of Linguist, Univ. of California Berkeley, 1203 Dwinelle Hall,
Berkeley, CA 94704, azshen@berkeley.edu)
While perceiving code-switches incurs processing costs [Soares & Grosjean (1984, Memory & Cognition 12(4):380-386)], bilingual listeners use
acoustic cues to anticipate switches and facilitate processing [Fricke et al.
(2016, J. Memory Lang. 89:110-137)]. This study investigates the role of
anticipatory prosodic cues in the online processing of a code-switch from a
non-tonal to a tonal language. Experiment 1 compares reaction times for
perceiving Mandarin and English target words in English frames, to test
whether recognizing code-switches is costly. Although reaction times for
code-switched stimuli (475 ms) were not significantly slower than for monolingual stimuli (466 ms), all participants self-reported as Mandarin-dominant
speakers who frequently code-switch, suggesting that language dominance
and experience may modulate switch costs. Reaction times to naturally-produced code-switched stimuli (426 ms) were faster than to spliced code-switched stimuli (511 ms), suggesting that anticipatory cues facilitate the
perception of code-switches. Experiment 2 is a visual world eye-tracking
task. The proportion of looks toward images corresponding to the target word (e.g., [mɑʊ4tsɨ5] “hat”), a cross-language phonetic competitor (e.g., [maʊs] “mouse”), and a within-language phonetic competitor (e.g., [mɑʊ2tɕin1] “towel”) are compared to assess any influence of anticipatory cues on target and non-target language activation levels during auditory recognition of code-switches.
1pSC14. Assimilatory and dissimilatory L1 English vowel drift in early
learners of Japanese. Katharina Schuhmann (Dept. of Germanic and Slavic
Lang. and Literatures, Penn State Univ., State College, PA, Katharina.
Schuhmann@gmail.com) and Marie K. Huffman (Dept. of Linguist, Stony
Brook Univ., Stony Brook, NY)
In contrast to predictions of the Speech Learning Model, Chang’s (2012)
study of novice English-speaking learners of Korean finds no clear segment
level assimilation between phonetically similar English and Korean vowels.
The relative complexity of the Korean and English vowel systems may have
made L2 to L1 vowel association inconsistent across speakers, leading to
contradictory segment level effects. We examined vowels in L1 English:L2
Japanese learners, hypothesizing that the less dense vowel space in Japanese
would simplify L1:L2 segment associations. Specifically, English [ɑ] and
Japanese [a] should be associated by learners in a way that could lead to
segment level L1 drift effects, allowing us to determine whether assimilatory L1 drift would occur in English for novice learners of Japanese, as
Flege (1987, 1995) and Chang predict. Formant data for students in their
first and second semester of Japanese instruction show mostly L1 assimilatory drift in F1, and some L1 dissimilation in F2, with most speakers making
one change but not both. Overall, English [ɑ] variants stay within L1 norms,
highlighting the importance of L1 phonetic repertoire in constraining L1
drift effects. Results will be compared to data for [i] and [u] to determine
whether systemic level drift also occurs.
1pSC15. Top-down influence on phonetic categorization of native vs.
non-native speech. Jessamyn L. Schertz (Dept. of Lang. Studies, Univ. of
Toronto Mississauga, 1265 Military Trail, Humanities Wing, HW427,
Toronto, ON M1C 1A4, Canada, jessamyn.schertz@utoronto.ca) and Kara
E. Hawthorne (Dept. of Commun. Sci. and Disord., Univ. of MS,
Edmonton, AB, Canada)
Speech perception requires integration of multiple sources of information, including bottom-up acoustic information and top-down contextual information, and listeners may adjust their reliance on a given source of information depending on the communicative context. This work tests the
hypothesis that listeners increase reliance on contextual, relative to acoustic,
information when listening to a talker with a foreign accent, under the
assumption that the bottom-up information (non-native pronunciation) may
be less reliable. Native English listeners categorized an utterance-final target
word, where the initial consonant systematically varied in voice onset time
(VOT), as either “goat” or “coat.” Target words were embedded in carrier
sentences contextually biased towards one of the words (e.g., “The girl
milked the [coat/goat]” vs. “The girl put on her [coat/goat]”). Stimuli were
created from productions by two talkers: a native English talker and a native
Mandarin/L2 English talker with a discernible foreign accent. As expected,
acoustic information (VOT) was the primary cue for categorization, but sentence context also influenced perception in both talker conditions. Furthermore, preliminary results indicate that the semantic context effect is larger
in the Accented than in the Native condition, suggesting that listeners do
indeed increase reliance on contextual information when listening to foreign-accented speech.
1pSC16. Task differences do not impede overall learning when
adapting to a novel sound. David Saltzman and Emily B. Myers (Speech,
Lang., & Hearing Sci., Univ. of Connecticut, 850 Bolton Rd., Unit 1085,
Storrs, CT 06269, david.saltzman@uconn.edu)
Lexically guided perceptual learning (LGPL) and second-language
learning (L2) research seeks to understand how listeners adapt to novel
speech sounds. In LGPL, listeners shift their perception of a native phonetic
contrast in response to hearing an ambiguous token embedded in an unambiguous lexical context. L2 tasks often involve categorizing non-native
sounds, usually with explicit feedback. In general, L2 learning is seen as
more effortful and individually variable than LGPL. However, paradigms
differ in terms of stimuli (native vs. non-native phonetic contrasts) as well
as tasks (lexically-guided implicit feedback vs. explicit feedback). To test
whether L2-type vs. LGPL-type tasks yield differences in learning, participants were trained using an L2-style task to shift their native boundary along
an /s/ to /ʃ/ continuum. With feedback, participants categorized the midpoint token as one novel object, and an endpoint token (counterbalanced
across subjects) as another novel object. After training, a boundary shift
comparable to previous studies using the LGPL task was found, suggesting
that explicit and lexically-guided feedback produce a similar magnitude of
learning. Despite this, participants in the L2-style task experienced lower
accuracy and more variability in perception of the continuum endpoints,
suggesting that task differences partially explain less consistency across
individuals in L2 research.
1pSC17. Perception of Russian palatalization contrasts by English
listeners. Kevin Roon (CUNY Graduate Ctr., 365 Fifth Ave., Ste. 7107,
New York, NY 10013, kroon@gc.cuny.edu) and D. H. Whalen (Haskins
Labs., New Haven, CT)
Russian contrasts palatalized vs. non-palatalized consonants across primary oral articulator, manner, voicing, and word position, in both stressed
and unstressed syllables. This palatalization contrast is challenging for
native English speakers to master, possibly due to English speakers not
being able to discriminate the relevant linguistic contrast in all the environments in which it exists in Russian. Previous studies have shown that English listeners are good at discriminating this Russian contrast prevocalically, but there is no experimental evidence indicating how well they
discriminate this contrast in the wide variety of environments in which it is
used in Russian, and when produced by different talkers. The present study
tested how well English listeners could perceive the Russian palatalization
contrast across manner, word position, primary oral articulator, and talker.
24 listeners performed an AX discrimination task in which A and X were
always produced by two different speakers, one male and one female. Trials
on which A and X mismatched differed in palatalization (e.g., /p/-/pʲ/) or
manner (e.g., /t/ vs. /s/). The results show the combinations of environments
that present the greatest challenges for English listeners in discriminating
the Russian contrast, with syllable-final obstruents being especially hard.
1pSC18. Temporal-order processing of American-English vowel
sequences by native and non-native English-speaking listeners.
Catherine L. Rogers, Bogyeong Cheon, and Gail Donaldson (Dept. of
Commun. Sci. and Disord., Univ. of South Florida, USF, 4202 E. Fowler
Ave., PCD1017, Tampa, FL 33620, crogers2@usf.edu)
To understand the development of native-like proficiency in speech
processing, we must consider the apparent ease with which native speakers
process speech sounds under a variety of conditions. In the present study,
auditory temporal-order processing of American-English vowel sequences
was compared across three listener groups: monolingual English speakers
and relatively early vs. later learners of English as a second language. Using
the methods of Fogerty, Humes and Kewley-Port [2010, J. Acoust. Soc.
Am., 127, 2509-2520], 70-ms resynthesized versions of the syllables “pit,
pet, put,” and “pot” were presented in a two-syllable temporal-order processing task. Task difficulty was increased by decreasing syllable-onset asynchrony (SOA), i.e., the duration between syllable onsets. SOA thresholds
for accuracy of syllable-sequence identification were estimated using the
method of constant stimuli on each of four 72-trial blocks. Similar SOA
thresholds were obtained for native English speakers and early learners of
English, but SOA thresholds increased by a factor of two or more for later
learners of English. Furthermore, the average SOA threshold of the later
learners is similar to that of the older listeners in Fogerty et al. (2010), suggesting that increased processing time partially accounts for both groups’
increased difficulty in processing speech in noisy environments.
1pSC19. Perceptual similarity spaces of British English vowels by
speakers of Pakistani Urdu. Ishrat Rehman and Amalia Arvaniti (English
Lang. and Linguist, Univ. of Kent, SECL, Canterbury, Kent CT2 7NF,
United Kingdom, I.Rehman@kent.ac.uk)
Free classification was undertaken with 70 listeners from Lahore, Pakistan with Punjabi and Urdu as their first languages in order to shed light on
the English vowel features that have been most relevant in developing the
“new English” variety spoken by them (Pakistani English). The stimuli
were 19 hVd words carrying the Standard Southern British English (SSBE) vowels.
The responses were statistically analyzed using hierarchical clustering and
multidimensional scaling. Listeners were sensitive to both F1 and F2, but
could not distinguish the high-mid from the low-mid vowels. The central
vowels /ɜː/ and /ʌ/ were in different groups, indicating greater sensitivity to backness (F2): listeners classed /ʌ/ as back, but /ɜː/, which is more fronted in SSBE, as front. Diphthongs were grouped with monophthongs, sometimes based on the initial, sometimes on the final element; /aʊ/, /aɪ/, and /ɔɪ/, however, formed a separate group, possibly because their first and last elements are most distinct. Listeners were not sensitive to duration: /iː/~/ɪ/, /ʊ/~/uː/, and /ɔː/~/ɒ/ were grouped with each other. Overall, the results indicate
that Punjabi-Urdu listeners are sensitive to both height and backness, but
possibly prioritize the latter, while they lack an intermediate (central) space,
all features consistent with characteristics of Pakistani English.
1pSC20. Perceptual assimilation of Mandarin Chinese consonants by
native Danish listeners. Sidsel Rasmussen (English, Aarhus Univ., Jens
Chr. Skous Vej 4, Århus C 8000, Denmark, sira@cc.au.dk) and Ocke-Schwen Bohn (English, Aarhus Univ., Aarhus, Denmark)
A perceptual assimilation experiment examined the cross-linguistic mapping of Mandarin Chinese initial consonants to Danish consonants. 24 native
Danish listeners were auditorily presented with naturally produced CV syllables which consisted of the Mandarin initial consonants [p, pʰ, t, tʰ, k, kʰ, x, ts, tsʰ, s, tɕ, tɕʰ, ɕ, tʂ, tʂʰ, ʂ, w, j] and vowels from the set [a, u, i, y] so that
none of the CV syllables violated the permissible combinations of C and V in
Mandarin. The Danish listeners identified the initial consonant of the stimuli
with phonetically unambiguous Danish orthographic symbols and provided
goodness ratings for each match. We found that assimilation patterns for
Mandarin consonants differed greatly as a function of the following vowel,
suggesting that native expectations regarding coarticulatory effects of V on C
importantly affect perceptual assimilation. For example, Mandarin [tɕ, tɕʰ, ɕ]
were assimilated to Danish <dj, tj, sj>, respectively, when followed by [a,
y], but to Danish <t, t, s> when followed by [i]. The detailed results allow us
to generate precise predictions for the discriminability and thus learnability
of Mandarin consonants for native Danish listeners and learners.
The realization of interdental fricatives as coronal oral stops, referred to
as interdental-stopping, is often attributed to substrate effects of ethnicity
and immigrant-heritage. Michigan’s Upper Peninsula (UP) is an excellent
case to examine this feature’s complex variation within bilingual and monolingual older-aged cross-sections of a rural immigrant speech community.
To what degree, if any, is interdental-stopping occurring among Michigan’s
UP Finnish and Italian-heritage speech communities? Interdental-stopping
has been documented in UP English [3, 2], but only a more recent report has
provided any quantitative account that tracks this salient feature and its sociophonetic trends within the community [1]. The present study examines this
feature among 41 Finnish-Americans and 30 Italian-Americans, who are all older-aged residents of Michigan’s Marquette County. Both samples are stratified by gender, socioeconomic status, and lingua-dominance. All data are obtained from a reading-passage task. This study reveals interdental-stopping occurring most often among Italian working-class males and least among English-dominant bilinguals. Such findings go beyond the claim that this feature of UP English indexes working-class status [2]. Interdental-stopping indexes working-class masculinity, held with prestige and used by
older-aged, working-class males, primarily among Italians, as a linguistic
marker of local identity in the UP English speech community.
1pSC22. How within-category gradience in lexical tones influences native Chinese listeners’ and second-language Chinese learners’ word recognition: An eye-tracking study. Zhen Qin, Jie Zhang, and Annie
Tremblay (Linguist, Univ. of Kansas, 1541 Lilac Ln., Blake Hall, Rm. 427,
Lawrence, KS 66045, qinzhenquentin2@ku.edu)
This study investigates whether within-category gradience in lexical
tones influences native and non-native Chinese listeners’ word recognition.
Previous offline research found that Chinese listeners have a more categorical perception of lexical tones, and thus show less sensitivity to within-category variability in tones, than non-native listeners. However, it is unclear
whether native and non-native listeners have sensitivity to within-category
gradience during online word recognition. Native Chinese listeners and proficient adult English-speaking Chinese learners were tested in a visual-world
eye-tracking experiment. The target was a level tone and the competitor was
a high-rising tone, or vice versa. The auditory stimuli were manipulated
such that the target tone was either canonical in the standard condition, phonetically more distant from the competitor in the distant condition, or phonetically closer to the competitor in the close condition. Growth curve
analysis on fixations suggested that native listeners showed a gradient pattern of lexical competition, with less competition in the distant condition
and more competition in the close condition than in the standard condition;
learners, on the other hand, showed increased competition in both the distant and close conditions relative to the standard condition. This difference between the native and non-native listeners suggested the influence of their language
backgrounds.
1pSC23. Can second language suprasegmentals be learned? A study on
Japanese learners of Italian as foreign language. Elisa Pellegrino (Univ.
of Zurich, Plattenstrasse 54, Zuerich 8032, Switzerland, pellegrino.elisa.
1981@gmail.com)
One of the biggest challenges for L2 learners is to develop native-like
prosodic competence. There is evidence that computer-assisted pronunciation training helps learners perceive and produce L2 suprasegmentals. In this study we test the effectiveness of self-imitation, one of the techniques developed in the area of spoken language technology for education and language learning, in facilitating the acquisition of the rhythmic and prosodic characteristics of Italian. Seven Japanese learners of Italian as a foreign language (NNSs) and 2 Italian native speakers (NSs) were asked to read
aloud and record two sentences in Italian conveying different pragmatic
functions. The utterances of the NNSs were manipulated so that they received the
segmental durational characteristics as well as the f0 characteristics of the
corresponding NSs’ utterances. NNSs imitated their own voice previously
modified to match the reference NSs and recorded the new performance. To
quantify the degree of approximation to the native prosodic and rhythmic
pattern after self-imitation, pre- and post-training utterances were compared
to those of the native models by speech rate, f0, and segmental durational
characteristics. Preliminary results show that after the training utterance duration and vocalic intervals length better match the target duration.
1pSC24. Effects of experience on the production of Korean stop
consonant contrasts by Mandarin Chinese learners. Grace E. Oh
(English Lit. and Lang., Konkuk Univ., 120 Neungdong-ro, Gwangjin-gu,
Seoul 05029, Korea (the Republic of), gracey1980@yahoo.com)
The effect of L2 experience on the segmental and prosodic production
of a second language was investigated. Thirty-two Chinese learners of Korean
varying in the amount of experience (3 months vs. 2 years) were compared
to sixteen age-matched native Korean speakers in their production of the three-way stop contrast (aspirated, lenis, and fortis) in Korean. To examine
both segmental and prosodic aspects, Korean four-syllable phrases (i.e.,
Accentual Phrase) beginning with each stop type in word-initial position
were elicited. VOT, f0, and H1-H2 were analyzed and compared across groups
(native Korean, experienced, inexperienced groups). For further analyses of
the prosodic domain, f0 of the first two syllables (High-High for aspirated,
fortis and Low-High for lenis) was compared. The results revealed that the
experienced Chinese learners showed an early mastery of fortis stops, producing more native-like VOT and H1-H2 than the inexperienced group.
Also, both Chinese groups were able to produce an AP sequence with a native-like f0 pattern regardless of the amount of experience. The production of f0
for stop contrasts, on the other hand, was non-native like even after 2 years
of experience, supporting previous observations that segments require
greater L2 experience than intonation to be acquired in a native-like
manner.
1pSC25. Identification of vowels of two different varieties of English by
native speakers of Japanese and Korean. Takeshi Nozawa (Lang.
Education Ctr., Ritsumeikan Univ., 1-1-1 Nojihigashi, Kusatsu 525-8577,
Japan, t-nozawa@ec.ritsumei.ac.jp) and Sang Yee Cheon (Dept. of East
Asian Lang. and Literatures, Univ. of Hawaii, Honolulu, HI)
Native speakers of Japanese and Korean heard and identified /i, ɪ, eɪ, ɛ,
æ, ɑ, ʌ/ uttered in /bVd/, /dVd/, and /kVd/ frames by native speakers of
American and New Zealand English. New Zealand English has gone
through an idiosyncratic vowel shift; for instance, /æ/ and /ɛ/ are raised and
/ɪ/ is centralized. Overall, American English vowels were identified more
accurately by the two listener groups. Both listener groups identified American English /ɛ, æ/ better than their New Zealand English equivalents, whereas American English /ɑ/ was less accurately identified than New Zealand English /ɑ/ (or /ɒ/). Despite these similarities, some differences were
observed between the two listener groups. While Japanese listeners identified New Zealand English /i, ɪ/ less accurately than American English /i, ɪ/,
Korean listeners identified New Zealand English /ɪ/ more accurately than
American English /ɪ/. Japanese listeners outperformed Korean listeners in
identifying American English /eɪ/ and New Zealand English /ɑ/, but Korean
listeners identified American English /ʌ/ and New Zealand English /ɪ/, /æ/,
and /ʌ/ better than did Japanese listeners. The results point to the effect of
L1 phonology and to differences in what each listener group believes each
English vowel sounds like. For instance, Korean non-high front vowels are
acoustically more similar to the New Zealand English vowel /æ/ than to the corresponding American English vowel. [Work partially supported by Grant-in-Aid for Scientific Research (C)16K02650.]
1pSC26. Development of semantic context facilitation for nonnativeaccented speech. Katherine E. Miller, Rachael F. Holt (Dept. of Speech and
Hearing Sci., Ohio State Univ., 110 Pressey Hall, 1070 Carmack Rd.,
Columbus, OH 43210, miller.7940@osu.edu), Tessa Bent (Speech and
Hearing Sci., Indiana Univ., Bloomington, IN), and Andrew Blank (Dept. of
Speech and Hearing Sci., Ohio State Univ., Columbus, OH)
Children can use sentence context to facilitate their understanding of
both native and nonnative speakers, and this benefit increases with age. In
this study, 5- to 7-year-old children (n = 90) and adults (n = 30) were compared on their ability to benefit from sentence context. In addition, we examined whether receptive vocabulary accounts for variability in spoken word
recognition differently for words spoken in sentences versus in isolation.
Stimuli were produced by either native- or nonnative-accented (Japanese
and Spanish) speakers. Listeners first identified words in isolation that had
been extracted from meaningful sentences. Then they completed the NIH
Toolbox Picture Vocabulary Test. Finally, they identified the same spoken
words embedded in the original sentences. Children and adults showed significant word recognition advantages for the sentence condition compared
to the isolated word condition for both native and nonnative speakers. However, adults showed a much larger benefit from sentence context than the
children in the nonnative but not the native condition. Further, older children benefited from context more than younger children. Finally, receptive
vocabulary was positively correlated with nonnative recognition of words in
sentences but not in isolation for both children and adults, suggesting that
better receptive vocabulary enhances use of context for nonnative speech.
1pSC27. Improving speech recognition in noise through speaking style
modifications for native and non-native listeners. Kirsten Meemann and
Rajka Smiljanic (Linguist, Univ. of Texas at Austin, 305 E. 23rd St., Mail
Code B5100, Austin, TX 78712, kirsten.meemann@utexas.edu)
It is well established that native listeners outperform non-native listeners
on word recognition tasks involving both speech-shaped noise (SSN) and
competing speech (speech babble). The present study examined whether
this non-native disadvantage can be compensated for by speaking style
enhancements. We also explored how these acoustic-articulatory modifications interact with energetic and informational masking at different signal-to-noise ratios to determine intelligibility for the two listener groups. Native
and non-native participants heard noise-adapted (NAS) and clear speech
(CL) sentences mixed with either SSN, two-talker (2T), or six-talker (6T)
babble. CL and NAS significantly improved word recognition in noise, but
native listeners were better able to use the intelligibility-enhancing modifications. Results revealed an interaction between noise type and SNR such
that the intelligibility gain was larger for SSN at an easier SNR, but for 6T
babble at a harder SNR. The speaking style modifications enhanced intelligibility least in 2T babble for both listener groups. Speaking style adaptations improve word recognition under energetic masking (SSN), but are
most beneficial when informational and energetic masking are combined
(6T babble) and presented at a low SNR. The intelligibility benefit was
smallest in the listening condition with less energetic masking (2T babble),
a consequence of that masker's larger spectro-temporal dips.
1pSC28. Non-native word recognition in babble and white noise. Bin Li
(Linguist and Translation, City Univ. of Hong Kong, 83 Tat Chee Ave.,
Kowloon Tong 000, Hong Kong, binli2@cityu.edu.hk)
Noise causes degradation in speech signals, which poses difficulties for
non-native perception. In this study, we examined impacts of noisy conditions on word recognition in naturally produced sentences by Chinese
speakers of English (CE) and native speakers of English (NE). We also
manipulated the linguistic contexts where targeted words occur, in order to
assess how syntactic and semantic information may contribute to facilitate
speech perception in adverse conditions. We recorded and compared the
mean accuracy of keyword rewriting to examine the effects of noise and linguistic cues across listener groups. Results show that babble noise had a more
severe influence on the CE group's performance, an effect commensurate with their English proficiency. NE listeners, however, showed
different patterns in their tolerance of noise and in their use of linguistic cues.
1pSC29. Overweighting of pitch cues in stop identification by Korean
learners of Mandarin Chinese. Sang-Im Lee-Kim (Dept of Foreign Lang.
and Literatures, National Chiao Tung Univ., Hsinchu, Taiwan, sangim119@
gmail.com)
The present study reports on a novel case where learners of a tonal language not only develop keen sensitivity to F0 cues in general, but the
increased sensitivity may also foster perceptual reorganization of cue
weighting in stop identification. Korean learners of Mandarin and novice listeners participated in identification tasks for which pitch contours of Mandarin words containing unaspirated stops were digitally manipulated. In
word-initial position, learners showed considerably higher sensitivity to
onset pitch cues, showing a near-categorical shift from lenis to fortis
judgments as onset pitch increased. In word-medial position, tone contour emerged as a significant predictor of the stop judgments: predominantly fortis
judgments in the high-level tone vs. lenis judgments in the mid-rising tone. The novice listeners showed similar patterns in both cases, but to a much smaller
degree. The effect of tone contour is discussed with reference to an auditory contrast whereby the onset pitch of the mid-rising tone is perceptually lower,
providing positive evidence for lenis stops, due to the contrast with the following high pitch. Taken together, the learners’ differential behavior suggests substantial reorganization of perceptual cues whereby onset pitch is
promoted to a primary cue for the lenis-fortis contrast, concurrent with significant underweighting of VOT cues.
1pSC30. The effect of noise and second language on turn taking in task-oriented dialog. A. Josefine Sørensen, Michal Fereczkowski, and Ewen N.
MacDonald (Dept. of Elec. Eng., Tech. Univ. of Denmark, Ctr. for Hearing
and Speech Sci., Bldg. 352, Ørsteds Plads, Kgs Lyngby DK-2800,
Denmark, emcd@elektro.dtu.dk)
Previous studies of floor transfer offsets (FTO), the interval between one
talker stopping and the other starting, suggest that normal conversation
requires interlocutors to predict when each other will finish their turn. We
hypothesized that noise and/or speaking in a second language (L2) would
result in longer FTOs due to increased processing demands. Conversations
from 20 pairs of normal hearing, native-Danish talkers were elicited using
the Diapix task in four conditions consisting of combinations of language
(Danish vs. English) and noise background (quiet vs. ICRA 7 noise presented at 70 dBA). Overall, participants took longer to complete the task in
both noise and in L2, indicating that both factors reduced communication efficiency. However, L2 had very little effect beyond completion time, likely
because the participants were highly proficient in English. In contrast to our predictions, in the presence of noise, the median of the FTO distribution
decreased by approximately 30 ms and the standard deviation decreased by
approximately 10%. However, the average duration of interpausal units
(i.e., utterances of continuous speech) increased by 40% in noise. These
findings are consistent with talkers holding their turn for longer, allowing
more time for speech planning.
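The FTO measure defined above (the interval between one talker stopping and the other starting) is straightforward to compute from turn timings. A minimal sketch with invented turn data, not the study's Diapix corpus:

```python
# Floor-transfer offset (FTO): the interval between one talker stopping
# and the other starting. Negative values indicate overlapping turns.
import statistics

def floor_transfer_offsets(turns):
    """turns: list of (talker, start_s, end_s), sorted by start time.
    Returns the FTO (seconds) at each change of talker."""
    ftos = []
    for prev, nxt in zip(turns, turns[1:]):
        if prev[0] != nxt[0]:                        # floor actually transfers
            ftos.append(round(nxt[1] - prev[2], 3))  # next start minus previous end
    return ftos

# Invented two-party dialog: (talker, start, end) in seconds.
# Consecutive same-talker intervals (interpausal units) yield no FTO.
turns = [("A", 0.0, 1.2), ("B", 1.4, 2.0), ("A", 1.9, 3.0),
         ("A", 3.1, 3.5), ("B", 3.6, 4.1)]
print(floor_transfer_offsets(turns))                     # [0.2, -0.1, 0.1]
print(statistics.median(floor_transfer_offsets(turns)))  # 0.1
```

The median and spread of this list are the distributional statistics the abstract reports.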
1pSC31. The power of a unimodal distribution in cue reweighting:
Unimodality vs prediction error as signs of cue irrelevance. Zara
Harmon (Linguist, Univ. of Oregon, Eugene, OR), Kaori Idemaru (East
Asian Lang. & Literatures, Univ. of Oregon, Eugene, OR 97403, idemaru@
uoregon.edu), and Vsevolod Kapatsinski (Linguist, Univ. of Oregon,
Eugene, OR)
Maye & Gerken (2000) proposed that sound categories can be learned
from probability distributions: a unimodal distribution suggests a single category, while a bimodal one suggests two contrasting ones. Research on distributional learning has focused on developing a contrast through exposure
to a bimodal distribution. Here, we instead investigate how exposure to a
unimodal distribution affects perception of a pre-existing multidimensional
contrast (voicing, for which the primary cue is VOT). A total of 60 adult
native English speakers were exposed to either bimodal or unimodal VOT
distributions spanning the unaspirated/aspirated boundary (bear/pear).
However, we paired acoustic stimuli with pictures of bears and pears independently of VOT in training. For each stimulus, participants were asked to
guess the referent and received (random) feedback, generating an error signal suggesting that VOT is no longer informative and should be downweighted. In this design, the bimodal distribution suggests the existence of
two categories but also provides a clearer error signal: in the unimodal condition,
most training tokens have ambiguous VOT, preventing clear predictions of
voicing and thereby reducing prediction error. Nonetheless, participants downweighted VOT (and upweighted a secondary cue, F0) only with unimodal
training. We conclude that unimodality is a very strong cue to dimensional
irrelevance.
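Exposure distributions of the kind described above are commonly built by assigning token frequencies along a VOT continuum, following Maye and Gerken's paradigm. A schematic sketch; the continuum steps and frequencies below are invented for illustration, not the experiment's stimuli:

```python
# Build training sets over an 8-step VOT continuum (values in ms are
# illustrative, not the experiment's actual stimuli).
vot_steps = [0, 10, 20, 30, 40, 50, 60, 70]

# Token frequency per step: unimodal peaks at the category boundary,
# bimodal peaks near the continuum endpoints.
unimodal_freqs = [1, 2, 4, 8, 8, 4, 2, 1]
bimodal_freqs = [2, 8, 4, 1, 1, 4, 8, 2]

def training_tokens(steps, freqs):
    """Expand (step, frequency) pairs into a flat list of training tokens."""
    return [v for v, f in zip(steps, freqs) for _ in range(f)]

uni = training_tokens(vot_steps, unimodal_freqs)
bi = training_tokens(vot_steps, bimodal_freqs)

# Both conditions present the same number of tokens overall, but the
# unimodal set concentrates tokens at ambiguous mid-continuum VOTs,
# while the bimodal set concentrates them at unambiguous endpoints.
print(len(uni), len(bi))                          # 30 30
print(len([v for v in uni if 30 <= v <= 40]))     # 16 tokens at the boundary
```

This concentration of ambiguous tokens is what weakens the prediction-error signal in the unimodal condition.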
1pSC32. Do linguistic and cognitive processing differences impact
vocoded speech understanding? Arifi N. Waked and Matthew Goupell
(Hearing and Speech Sci., Univ. of Maryland, 0100 Lefrak Hall, College
Park, MD 20740, awaked@umd.edu)
Normal-hearing individuals presented with vocoded speech show great variability, even after explicit training. Assuming similar neural encoding, this
variability might be explained by linguistic and cognitive factors. Therefore,
we measured vocoded speech understanding in participants expected to
show a range of linguistic and cognitive abilities; namely, children (8-10
years) and adults (18 years), and monolingual English speakers and bilingual Spanish-English speakers. We hypothesized that bilingual adults have
a “cognitive advantage” in their understanding of vocoded speech, related to
reinforcement of executive function skills by using multiple languages. We
also hypothesized that children may rely relatively more on their linguistic
knowledge, as they may not have developed these same cognitive skills and
strategies. Participants were trained on speech understanding simulating
shallow cochlear implant insertion depth (6-mm frequency-to-place mismatch) with auditory and visual feedback. Between training trials, participants were tested without feedback on standard (0-mm) or shallow (6-mm)
simulated insertion depths. Participants were then tested on measures of
phonological awareness and vocabulary in English and/or Spanish and the
five cognitive measures of the NIH Cognitive Toolbox. Associations
between these measures and speech perception may allow us to better tailor
therapy methods to members of particular age/language groups. [Work supported by NIH grant R01AG051603.]
1pSC33. Vowel undershoot in the production of nonwords by English
and Mandarin speakers. Chung-Lin Yang (Psychol. & Brain Sci., Indiana
Univ., 1101 E 10th St., Bloomington, IN 47405, cy1@indiana.edu), YuJung Lin, and Kuan-Yi Chao (Linguist, Indiana Univ., Bloomington, IN)
In a previous study (Yang, 2011), it was found that both Mandarin and
English speakers showed vowel undershoot (characterized by decreased
vowel duration and lowered first formant) in the production of English vowels while reading a list of real words, but Mandarin speakers could not make
a clear tense-lax distinction. The current study aims at examining the degree
of undershoot and tense-lax distinction when speakers had to reproduce
what they heard without any visual input. In a nonword repetition task,
Mandarin and English speakers were auditorily presented with a list consisting of 8 English nonword triplets (monosyllables, disyllables, and trisyllables) with the target vowels /i, ɪ, eɪ, ɛ/ embedded. Each trial began with a
short training session in which participants listened to the nonwords in each triplet as many times as they wanted. After training, each nonword was
played three times and participants repeated after each token. Our preliminary data showed that English speakers, while maintaining the tense-lax distinction, showed undershoot when producing nonwords across the three
syllabic conditions. In contrast, Mandarin speakers showed very limited undershoot and an unclear tense-lax distinction across the three syllabic
conditions, especially between /eɪ/ and /ɛ/. [Yu-Jung Lin and Kuan-Yi Chao
contributed equally to this project.]
1pSC34. Effects of linguistic experience on brainstem encoding of speech
sounds. Tian Zhao and Patricia K. Kuhl (Inst. for Learning and Brain Sci.,
Univ. of Washington, Box 367988, Seattle, WA 98195, zhaotc@uw.edu)
Linguistic experience has been demonstrated repeatedly over the past
decades as an important factor influencing the perception of speech sounds.
At the behavioral level, speakers of different languages identify and discriminate speech sounds differentially (e.g., categorical perception). This
has been observed at the cortical level as a reduced Mismatch Response, a
measure that can reflect the sensitivity of the cortex to the differences
between sounds. Yet, the effect of linguistic experience at an earlier stage of
speech processing, namely, the brainstem, is scarcely studied. In the current
study, we therefore aimed to examine specifically whether linguistic experience affects speech sound processing at the brainstem level. Twenty native
Spanish speakers and twenty monolingual English speakers are being recruited,
and their brainstem responses to speech sounds will be measured with EEG.
Specifically, we will use synthesized speech sounds: /ba/ with a +10 ms and
a −40 ms voice onset time (VOT). We hypothesize that while the encoding
by the two groups will be similar for the +10 ms /ba/ (common to English
and Spanish), English speakers' encoding of the −40 ms prevoiced /ba/ (Spanish only) will be reduced relative to that of Spanish speakers. Data will be analyzed
and interpreted with regard to these hypotheses.
1pSC35. Acoustic comparison on syllabic rates between stress-timed and
syllable-timed language speakers. Myungsook Kim (English, Soongsil
Univ., Sangdo-ro 369, Seoul 06978, South Korea, kimm@ssu.ac.kr) and
Myungjin Bae (Dept. of IT Eng., Soongsil Univ., Seoul, South Korea)
Syllabic rate (i.e., the phonation rate for each syllable) provides a great deal of information about the language in question. Between a syllable-timed
language and a stress-timed language in particular, we may expect speakers to show a gap in syllabic rate, as each language has its own way of being
pronounced. Syllabic rate is closely related to information density in speech communication, although the Rosetta Project (2011) found a
negative correlation between the two, illustrating the existence of encoding
strategies for each of seven widely spoken languages. Since Korean was
not on the list for that project, this paper further investigates and compares
acoustic characteristics of syllabic rate between two speakers, one of English (a stress-timed language) and the other of Korean (a syllable-timed language), when speaking both Korean and English. The results show that the
English speaker speaks Korean 1.39 times faster than the Korean speaker in
syllabic rate. The English speaker also shows higher energy distribution in the
high-frequency range, while the Korean speaker shows a relatively even distribution across all frequency ranges. When speaking English, the Korean
speaker speaks more slowly than the English speaker, showing little difference in duration, pitch, and intensity between stressed and unstressed
syllables. The paper concludes that keeping a proper syllabic rate for a target language may be important in language acquisition.
1pSC36. Analysis of allophones based on audio signal recordings and
parameterization. Andrzej Czyżewski, Magdalena Piotrowska, and Bożena
Kostek (Gdansk Univ. of Technol., Narutowicza 11/12, Gdansk 80-233,
Poland, andczyz@gmail.com)
The aim of this study is to develop an allophonic description of English
plosive consonants based on recordings of 600 specially selected words.
Allophonic variations addressed in the study may have two sources: positional and contextual. The former depends on the syllabic or prosodic
position in which a particular phoneme occurs; contextual allophony is conditioned by the local phonetic environment. Because co-articulation overlaps in
time, allophonic pronunciation must be determined precisely in the
context of the phonemic transcription. The presented study focuses on the creation of speech recordings that may serve for the analysis of allophone variation. Two sets of recordings are prepared. The first consists of words
read by non-native speakers, with the reading tempo set by a teleprompter. In the second, every word is played back from the recordings of the
phonology expert, and then the speaker repeats the particular word. The last
stage is the assessment of the recordings by the same expert. The scores assigned by
the expert are included as a reference for signal analysis and parameterization. [Research sponsored by the Polish National Science Centre, Dec.
No. 2015/17/B/ST6/01874.]
SUNDAY AFTERNOON, 25 JUNE 2017
ROOM 302, 1:15 P.M. TO 5:00 P.M.
Session 1pSP
Signal Processing in Acoustics: Application of Bayesian Methods to Acoustic Model Identification and
Classification II
Edmund Sullivan, Cochair
Research, Prometheus, 46 Lawton Brook Lane, Portsmouth, RI 02871
Ning Xiang, Cochair
School of Architecture, Rensselaer Polytechnic Institute, Greene Building, 110 8th Street, Troy, NY 12180
Chair’s Introduction—1:15
Invited Papers
1:20
1pSP1. Bayesian classification of environmental noise sources. Edward T. Nykaza, Matthew G. Blevins (ERDC-CERL, 2902
Newmark Dr., Champaign, IL 61822, edward.t.nykaza@usace.army.mil), Carl R. Hart (ERDC-CRREL, Hanover, NH), and Anton
Netchaev (ERDC-ITL, Vicksburg, MS)
Classification algorithms are an essential component of continuously running environmental noise monitors. Without them, one does
not know which noise sources are responsible for the levels recorded by the monitor. This is problematic given that continuously recording monitors may accumulate millions of triggered events and terabytes of data. In this study, we look at the utility of Bayesian classification methods. We compare the performance of these methods to some of the top performing environmental noise classifiers (e.g.,
support vector machines, random forests, and bagged trees) and discuss the advantages and disadvantages of the Bayesian approach. In
particular, we compare the accuracy, number of observations needed to achieve an accurate classification, computation time, and feature
importance.
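Bayesian classification of monitor-triggered events can be illustrated at its simplest with a Gaussian naive Bayes model, which assigns each event to the source class maximizing the posterior given its features. The sketch below uses an invented one-dimensional "peak level" feature and toy classes; it is not the authors' classifier or data:

```python
# Gaussian naive Bayes over a single acoustic feature per triggered event.
import math
from collections import defaultdict

def fit(events):
    """events: list of (class_label, feature). Returns per-class
    prior, mean, and variance estimates."""
    by_class = defaultdict(list)
    for label, x in events:
        by_class[label].append(x)
    n = len(events)
    params = {}
    for label, xs in by_class.items():
        mu = sum(xs) / len(xs)
        var = sum((x - mu) ** 2 for x in xs) / len(xs) + 1e-6  # avoid zero variance
        params[label] = (len(xs) / n, mu, var)
    return params

def classify(params, x):
    """Pick the class maximizing log prior + Gaussian log likelihood."""
    def score(label):
        prior, mu, var = params[label]
        return (math.log(prior) - 0.5 * math.log(2 * math.pi * var)
                - (x - mu) ** 2 / (2 * var))
    return max(params, key=score)

# Toy training events: (hypothetical source class, peak level in dB).
train = [("aircraft", 78), ("aircraft", 82), ("aircraft", 80),
         ("blast", 105), ("blast", 110), ("blast", 108)]
model = fit(train)
print(classify(model, 79))   # aircraft
print(classify(model, 107))  # blast
```

Real monitor classifiers operate on many spectral and temporal features, but the posterior-maximization step is the same.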
1:40
1pSP2. Bayesian model selection for multilayer microperforated panel sound absorber design. Cameron J. Fackler, Yiqiao Hou,
and Ning Xiang (Graduate Program in Architectural Acoust., Rensselaer Polytechnic Inst., 110 8th St., Troy, NY, cfackler@gmail.com)
Microperforated panel (MPP) sound absorbers are capable of providing high sound absorption coefficients without the use of fibrous
materials; however, they typically function in narrow frequency ranges. By combining multiple MPPs into a multilayer absorber, the frequency bandwidth may be increased while maintaining a high absorption coefficient. Modeling the acoustic properties of an MPP
absorber requires four physical parameters per MPP layer. Since each additional MPP layer in a multilayer absorber increases the complexity of the acoustic model, Bayesian model selection is well-suited to the task of designing a multilayer MPP absorber. In such a
design, minimizing the number of layers used while still satisfying the design goals is desirable, in order to optimize material usage,
cost, and space required by the absorber. In a full Bayesian design framework, model selection determines the number of MPP layers
required, while parameter estimation determines the (physical) design parameters for each layer. In this work, an example design scheme
is specified to satisfy a practical need for acoustic absorption. The Bayesian framework produces a three-layer MPP design which meets
the target requirements. This absorber design is constructed, and impedance tube measurements are obtained to validate the acoustic
absorption properties.
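Bayesian model selection of the kind used above ranks candidate models by their evidence (marginal likelihood), which automatically penalizes superfluous complexity. The principle can be demonstrated on a textbook problem where the evidence has closed form; the coin-bias example below illustrates only the selection rule, not the MPP absorber model:

```python
# Bayesian model selection by evidence: simple model M0 (fair coin)
# vs. a more flexible M1 (unknown bias with a uniform prior).
from math import comb

def evidence_m0(k, n):
    # p(D | M0): binomial likelihood with the bias fixed at 0.5
    return comb(n, k) * 0.5 ** n

def evidence_m1(k, n):
    # p(D | M1): binomial likelihood integrated over a uniform prior
    # on the bias, which reduces exactly to 1 / (n + 1).
    return 1.0 / (n + 1)

def select(k, n):
    return "M1" if evidence_m1(k, n) > evidence_m0(k, n) else "M0"

# Near-even data: the simpler model wins (the Occam factor at work).
print(select(10, 20))  # M0
# Strongly skewed data: the extra flexibility of M1 is justified.
print(select(18, 20))  # M1
```

In the MPP design problem, each added layer plays the role of M1's extra flexibility: it is retained only when the evidence gain outweighs the complexity penalty.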
2:00
1pSP3. Minimum mean square error estimation of a sparse broadband acoustic response with a hierarchical mixture Gaussian
prior. Paul J. Gendron (ECE Dept., Univ. of Massachusetts Dartmouth, 285 Old Westport Rd., North Dartmouth, MA 02747,
pgendron@umassd.edu) and Jacob L. Silva (ECE Dept., Univ. of Massachusetts Dartmouth, Dartmouth, MA)
Considered here is a minimum mean square error (MMSE) estimator of a broadband acoustic response function over a vertical aperture based on an adaptive sparsity prior. The prior is a hierarchical Gaussian mixture distribution built on the assumption that acoustic
paths can be partitioned into a relatively coherent set of arrivals that on average exhibit Doppler spreading about a mean rate and a set of
incoherent paths that exhibit a flat Doppler spectrum. The hierarchy establishes constraints on the parameters of each of these Gaussian
models such that the coherent components of the response are both sparse and, in the ensemble, obey the Doppler spread profile. An empirical
Bayes approach is developed to estimate the latent parameters of the hierarchy, from which the shared time-varying dilation process can
be ameliorated, thereby enhancing coherent multi-path combining. The model is tested with acoustic communication recordings taken in
shallow water at very low signal-to-noise ratios.
2:20
1pSP4. Speaker localization in a reverberant environment using spherical statistical modeling. Boaz Rafaely (Elec. and Comput.
Eng., Ben-Gurion Univ. of the Negev, POB 653, Beer Sheva 84105, Israel, br@bgu.ac.il), Christopher Schymura, and Dorothea Kolossa
(Ruhr-Universität Bochum, Bochum, Germany)
Estimation of the direction of arrival (DoA) of speakers in reverberant environments is an important audio signal processing task in
a wide range of applications. Recently, a reverberation-robust method for DoA estimation has been developed. It is based on the identification of time-frequency bins that are dominated by the direct path from the source. As the DoA statistics were found to have a multimodal distribution, clustering using Gaussian mixture modeling improved localization accuracy. However, this method employed linear
statistics over the azimuth and elevation angles, thereby introducing errors at the discontinuity between 0 and 360 degrees azimuth. This work explores the use of
spherical statistics in the direct-path dominance approach to speaker localization in reverberant rooms, providing a more suitable angular
representation. Theoretical and experimental aspects of the proposed approach are investigated and validated using a computer simulation of a speaker in a room.
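The case for spherical statistics is visible even in one angular dimension: a linear average of azimuth estimates that straddle the 0/360 degree boundary is badly biased, whereas the circular mean respects the wrap-around. A minimal sketch of the generic circular mean (not the authors' estimator):

```python
# Linear vs. circular averaging of azimuth estimates (degrees).
import math

def linear_mean(deg):
    return sum(deg) / len(deg)

def circular_mean(deg):
    """Average unit vectors on the circle, then take the resulting angle."""
    s = sum(math.sin(math.radians(d)) for d in deg)
    c = sum(math.cos(math.radians(d)) for d in deg)
    return math.degrees(math.atan2(s, c)) % 360.0

# DoA estimates straddling the 0/360 degree boundary.
est = [350.0, 355.0, 5.0, 10.0]
print(linear_mean(est))                 # 180.0 -- a nonsense direction
print(round(circular_mean(est)) % 360)  # 0     -- correct wrap-around average
```

Gaussian mixture clustering over such linear azimuth values inherits exactly this bias, which is what the spherical (von Mises-Fisher style) statistics avoid.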
Contributed Papers
2:40
1pSP5. Inference on speed, depth, and range of a submerged object
from a limited vertical aperture under uncertain noise power. Abner C.
Barros (ECE Dept., Univ. of Massachusetts Dartmouth, 285 Old Westport
Rd., Dartmouth, MA 02747, abarros1@umassd.edu), David C. Anchieta
(Elec. and Comput., Universidade Federal do Para, Belem, Para, Brazil),
and Paul J. Gendron (ECE Dept., Univ. of Massachusetts Dartmouth, North
Dartmouth, MA)
Considered here is a narrowband directed source and hydrophone receiver arrangement employed to infer the depth, speed, and range of an oncoming submerged object. Tracking the scattering body by means of a continuous-wave transmission is challenging due to the difficulty of inferring the frequencies and angles of the two returned, closely spaced wave vectors. Computation of the posterior pdf of these two wave vectors is accomplished by a judicious Gibbs sampling scheme that accounts for the uncertainty in the ambient acoustic noise level. Computational improvements are accomplished by taking full advantage of the prior distribution of the wave vectors associated with the specific target scenario. Very short duration observations of approximately 10 milliseconds are considered, over which the Doppler rate of change of the two wave vectors can be considered negligible. This Bayesian scheme takes advantage of the analytic tractability of the conditional density of the received amplitudes and phases and of the noise powers. The conditional densities of the ordered wave vectors, however, are constructed numerically by two-dimensional inverse quantile sampling. The inferred joint density of the depth, range, and speed of the target is obtained by constructing an inverse transformation of the acoustic propagation model.

3:00

1pSP6. Machine learning applied to estimating broadband source signature characteristics in a shallow ocean environment. David P. Knobles (KSA, LLC, PO Box 27200, Austin, TX 78755, dpknobles@yahoo.com) and Mohsen Badiey (College of Earth, Ocean, and Environment, Univ. of Delaware, Newark, DE)

Probabilistic machine and deep learning methods are critical elements in the area of automation and artificial intelligence. The regression task considered here is to use supervised machine learning to predict a frequency-dependent acoustic source level from received broadband signals that have propagated through a shallow ocean waveguide possessing random features. The details are intimately linked to a previously introduced sequential maximum-entropy Bayesian approach employed to generate marginal probability distributions for both environmental and source parameter values. To meet the requirement of having both low training and generalization errors, several regularization and non-linear optimization methods, including convolutional networks, are considered to enhance performance. Optimization includes not only the issue of sampling but also finding an effective model capacity, which is one of the most significant challenges to successful machine learning applications. The methodology is applied to measured broadband data collected in about 75 m of water about 60 miles south of Cape Cod, Massachusetts, in an area called the mud pond. Both the training samples and the samples not used in training have known source levels with which to measure both the training error and the generalization error and thus quantify performance.

3:20–3:40 Break

3:40

1pSP7. Clustering technology for the analysis and classification of bioacoustic vocalizations. Ian Agranat (Wildlife Acoust., Inc., 3 Mill and Main Pl., Maynard, MA 01754, ian@wildlifeacoustics.com)

A new technique for clustering and classifying bioacoustic vocalizations in large audio recording datasets is presented. Over 30,000 terrestrial passive audio and ultrasonic recorders deployed worldwide for the monitoring of birds, frogs, and bats generate many petabytes of data annually. Our methods seek to efficiently and automatically detect candidate vocalizations from these massive datasets and sort them into clusters of similar vocalization patterns. Researchers can then quickly survey biodiversity and find species of interest by examining each of the resulting clusters. After clustering, species-specific classifiers can be built by labeling whole clusters or by training classifiers after labeling individual detections. Once a classifier is built, it can be used to search new data for species of interest. The algorithms detect vocalizations comprising one or more short syllables. A variable-length observation sequence is extracted, comprising spectral features changing through time. A Hidden Markov Model is trained on these sequences, and Fisher scores are calculated for each detection. The dot product of two Fisher scores provides a measure of similarity between two vocalizations and forms the basis for clustering. The algorithms are designed to run in parallel on multi-core processors and to stream the data to minimize memory requirements.

4:00

1pSP8. Information loss due to environmental variability and uncertainty in Bayesian localization of a narrowband source. Thomas J. Hayward (Naval Res. Lab., 4555 Overlook Ave. SW, Washington, DC 20375, thomas.hayward@nrl.navy.mil)
3523
J. Acoust. Soc. Am., Vol. 141, No. 5, Pt. 2, May 2017
Performance degradation of Bayesian localization of a low-frequency
narrowband acoustic source due to variability and imperfect knowledge of
the acoustic environment is investigated in a computational study. The environmental variability is modeled as arising from water column fluctuations
associated with a diffuse random linear internal wave field in a shallowwater ocean waveguide. The ambient noise spatial cross-spectrum is represented by a Kuperman-Ingenito model. For the case of complex Gaussian
internal wave spectral amplitudes, a closed-form expression is derived for
the conditional pdf, given source location, of the signal spectral values
received on an acoustic array. Examples computed for a vertical receiver
array quantify localization performance degradation as an increase in the entropy of the Bayesian source-location posterior. The effects of model bias,
model spectral uncertainty, and medium variability as determined by the internal wave power spectrum are quantified separately and jointly. Potential
extensions to more general models of medium variability are discussed.
[Work supported by ONR.]
Acoustics ’17 Boston
3523
4:20

1pSP9. Passive acoustic source localization with multiple horizontal arrays in shallow water. Dag Tollefsen (Norwegian Defence Res. Est. (FFI), Boks 115, Horten 3191, Norway, dag.tollefsen@ffi.no), Peter Gerstoft, and William S. Hodgkiss (Scripps Inst. of Oceanogr., Univ. of California San Diego, La Jolla, CA)

This paper considers concurrent matched-field processing of data from multiple, spatially separated acoustic arrays, with application to towed-source data received on two bottom-moored horizontal line arrays from the SWellEx-96 shallow-water experiment. Matched-field processors are derived for multiple arrays and multiple-snapshot data using maximum-likelihood estimates for unknown complex-valued source strengths and unknown error variances. Starting from a coherent processor where phase and amplitude are known between all arrays, likelihood expressions are derived for various assumptions on relative source spectral information (amplitude and phase at different frequencies) between arrays and from snapshot to snapshot. Processing the two arrays with a coherent-array processor (with inter-array amplitude and phase known) or with an incoherent-array processor (no inter-array spectral information) both yield improvements in localization over processing the arrays individually. The best results with this data set were obtained with a processor that exploits relative amplitude information but not relative phase between arrays. The localization performance improvement is retained when the multiple-array processors are applied to short arrays that individually yield poor performance.

4:40

1pSP10. Geophysical inversion by dictionary learning. Michael J. Bianco and Peter Gerstoft (Marine Physical Lab., Univ. of California San Diego, Scripps Inst. of Oceanogr., 9500 Gilman Dr., La Jolla, CA 92037, mbianco@ucsd.edu)
Dictionary learning, a form of unsupervised machine learning, has
recently been applied to ocean sound speed profile (SSP) data to obtain
compact dictionaries of shape functions which explain SSPs using as few as
one non-zero coefficient. In this presentation, the results of this analysis and
potential applications of dictionary learning techniques to the inversion of
real acoustic data are discussed. The estimation of true geophysical parameters from acoustic observations is often an ill-conditioned problem that is regularized by enforcing prior constraints such as sparsity or energy penalties, and by reducing the size of the parameter search. Traditionally, empirical orthogonal functions (EOFs) and overcomplete wavelet and curvelet
dictionaries have been used to represent complex geophysical structures
with few parameters. Using the K-SVD dictionary learning algorithm, the
representation of ocean SSP data is significantly compressed relative to
EOF analysis. The regularization performance of these learned dictionaries
is evaluated against EOFs in the estimation of ocean sound speed structure
from ocean acoustic observations and the limitations of such unsupervised
methods are considered.
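The one-nonzero-coefficient representation mentioned above can be sketched with a toy K-SVD-style loop. This is a simplified stand-in for the actual K-SVD algorithm, and the "profiles" here are synthetic, not ocean SSP data:

```python
import numpy as np

def learn_dictionary(X, n_atoms, n_iter=20, seed=0):
    """Toy K-SVD-style dictionary learning in which each column of X is
    approximated by ONE scaled atom (a single nonzero coefficient)."""
    rng = np.random.default_rng(seed)
    D = rng.normal(size=(X.shape[0], n_atoms))
    D /= np.linalg.norm(D, axis=0)
    cols = np.arange(X.shape[1])
    for _ in range(n_iter):
        # Sparse coding: the single best-correlated atom per signal.
        idx = np.argmax(np.abs(D.T @ X), axis=0)
        for k in range(n_atoms):
            S = X[:, idx == k]
            if S.size:
                # Dictionary update: leading left singular vector.
                U, _, _ = np.linalg.svd(S, full_matrices=False)
                D[:, k] = U[:, 0]
            else:
                # Re-seed an unused atom with the worst-fit signal.
                resid = X - D[:, idx] * (D.T @ X)[idx, cols]
                j = np.argmax(np.linalg.norm(resid, axis=0))
                D[:, k] = X[:, j] / np.linalg.norm(X[:, j])
    idx = np.argmax(np.abs(D.T @ X), axis=0)
    coef = (D.T @ X)[idx, cols]
    return D, idx, coef

# Synthetic "profiles": noisy scaled copies of two hidden shapes.
rng = np.random.default_rng(1)
shapes = rng.normal(size=(16, 2))
shapes /= np.linalg.norm(shapes, axis=0)
X = shapes[:, rng.integers(0, 2, 200)] * rng.uniform(1.0, 3.0, 200)
X = X + 0.01 * rng.normal(size=X.shape)
D, idx, coef = learn_dictionary(X, n_atoms=2)
rel_err = np.linalg.norm(X - D[:, idx] * coef) / np.linalg.norm(X)
print(rel_err)   # small: two atoms explain the data with one coefficient each
```

The sparsity-1 coding step is trivial here (pick the most correlated atom); the full K-SVD additionally handles multiple nonzero coefficients via orthogonal matching pursuit.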
SUNDAY AFTERNOON, 25 JUNE 2017
ROOM 306, 1:20 P.M. TO 5:00 P.M.
Session 1pUWa
Underwater Acoustics: Ambient Sound in the Ocean
Peter Gerstoft, Chair
SIO Marine Phys. Lab. MC0238, Univ. of California San Diego, 9500 Gillman Drive, La Jolla, CA 92093-0238
Contributed Papers
1:20

1pUWa1. Arctic soundscape measured with a drifting vertical line array. Emma Reeves, Peter Gerstoft, Peter F. Worcester, and Matthew Dzieciuch (Scripps Inst. of Oceanogr., Univ. of California San Diego, 9500 Gilman Dr., La Jolla, CA 92093, ecreeves@ucsd.edu)

The soundscape in the eastern Arctic was studied from April to September 2013 using a 22-element vertical hydrophone array as it drifted from near the North Pole (89° 23’N, 62° 35’W) to north of Fram Strait (83° 45’N, 4° 28’W). The hydrophones recorded for 108 minutes on six days per week with a sampling rate of 1953.125 Hz. After removal of data corrupted by nonacoustic flow-related noise, 19 days throughout the transit period were analyzed. Major contributors include broadband and tonal ice noises, seismic airgun surveys, and earthquake T-phase arrivals. Statistical spectral analyses show a broad peak in power at about 15 Hz, similar to that previously observed, and a mid-frequency fall-off of about 20 dB/decade. Observations of the median noise levels with depth demonstrate the change in dominant noise sources between high (200-500 Hz) and low (10-50 Hz) frequencies as the array transited southward. The median noise levels observed are among the lowest of the sparse observations in the eastern Arctic, but comparable to noise levels reported in the western Arctic.

1:40

1pUWa2. Acoustic measurements of a controlled gas seep. Kevin M. Rychert and Thomas C. Weber (Ocean Eng., Univ. of New Hampshire, 24 Colovos Rd., Durham, NH 03824, krychert@ccom.unh.edu)

To verify existing models for converting acoustic target strength into estimates of the total volume of methane gas released from the seafloor through the water column, a synthetic seep system was designed and fabricated. This system creates individual bubbles of the sizes most commonly found in gaseous methane seeps (1 to 5 mm radii), which can be released at any interval and at any water depth. The synthetic seep system was deployed off the coast of New Hampshire at an approximate depth of 50 m. Acoustic backscatter from 10 to 100 kHz was collected by steaming over the synthetic seep multiple times, each with a predetermined and calibrated bubble size created by the system at depth. These data represent a direct field measurement that tests models describing bubble size evolution during ascent through the water column, as well as models for acoustic scattering from bubbles of different sizes. Validating these models directly tests the ability of broadband sonar systems to acoustically monitor the transport of gas from the seabed to the atmosphere.
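For context, the acoustic resonance of bubbles in this size range is commonly estimated with the Minnaert formula. A minimal sketch under standard simplifying assumptions (surface tension and damping neglected; the parameter values are illustrative, not the experiment's):

```python
import math

def minnaert_frequency(radius_m, depth_m, gamma=1.4, rho=1025.0):
    """Minnaert resonance frequency (Hz) of a gas bubble of the given
    radius at the hydrostatic pressure for the given water depth.
    gamma: polytropic exponent of the gas; rho: seawater density (kg/m^3).
    Surface tension and thermal damping are neglected (a simplification)."""
    p0 = 101325.0 + rho * 9.81 * depth_m   # ambient pressure (Pa)
    return math.sqrt(3.0 * gamma * p0 / rho) / (2.0 * math.pi * radius_m)

# 1-5 mm radius bubbles near 50 m depth, as in the experiment described:
for r_mm in (1.0, 5.0):
    f = minnaert_frequency(r_mm * 1e-3, 50.0)
    print(f"radius {r_mm} mm -> ~{f / 1000:.1f} kHz")
```

At 50 m depth these bubbles resonate at roughly 1.6-8 kHz, so the 10-100 kHz backscatter band used in the experiment sits above resonance for the bubble sizes created.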
2:00

1pUWa3. Environmental indicators from underwater soundscapes and simultaneously collected non-acoustic data. Simon E. Freeman (Underwater Acoust. and Signal Processing, US Naval Res. Lab., 7038 Old Brentford Rd., Alexandria, VA 22310, simon.freeman@gmail.com) and Lauren A. Freeman (Remote Sensing Div., US Naval Res. Lab., Washington, DC)
Drawing environmental conclusions from underwater acoustic recordings alone can be challenging, as most received sounds, especially in shallow water, come from sources that cannot easily be classified. This presentation will discuss two ongoing projects in which soundscapes were recorded simultaneously with non-acoustic, validating environmental data. In one case, acoustic data were collected simultaneously with bottom composition, organism censuses, and night-time time-lapse imagery over shallow reefs in the Hawaiian Islands. Multivariate analysis showed that areas defined by a “cool tropics” oceanographic regime grouped along a principal component parallel to monotonically increasing acoustic frequency: protected or more remote sites produced soundscapes featuring greater levels of low-frequency (<2 kHz) biological sound, while degraded and/or algae-dominated sites produced soundscapes featuring higher levels of high-frequency (2-20 kHz) sound. The second case involves experimental work using hyperspectral optical data and simultaneously obtained acoustic recordings in shallow water, combined with in-situ spectrophotometer readings and visual benthic surveys. Optical reflectance and sound are produced through completely different mechanisms, yet the underlying environmental phenomena we wish to evaluate exhibit both. This presentation will discuss the potential of fusing these data in order to enhance our understanding of the shallow-water environment. [Work supported by ONR.]
2:20
1pUWa4. A wind-driven noise model in deep water. Fenghua Li, Dong
Xu, and Yonggang Guo (State Key Lab. of Acoust., Inst. of Acoust., CAS,
No. 21 Beisihuanxi Rd., Beijing 100190, China, lfh@mail.ioa.ac.cn)
Surface processes, including non-linear surface wave interactions and bubble oscillations, dominate deep-water ambient noise in the absence of shipping and wildlife. A single-parameter noise model, determined by the wind speed above the ocean surface, is proposed to interpret deep-ocean acoustics from 1 Hz to 4 kHz. Below a few hundred hertz, the sound is primarily generated by surface wave interactions, which are a function of the surface-wave spectrum. A theoretical expression for the wave spectrum is proposed that contains both gravity and capillary waves, in accord with the features of measured wave spectra. A noise spectrum for frequencies above a few hundred hertz, due to effective bubble oscillation within the bubble cloud, is also proposed. The noise spectrum has been estimated as a function of frequency and wind speed based on available information on the bubble distribution. The model/data comparison shows that the proposed single-parameter noise model is in reasonable agreement with the data. [Work supported by National Natural Science Foundation of China, Grant No. 11125420.]
2:40
1pUWa5. Simulations of the influence of sound speed profile and sensor
configuration in the measurement of radiated noise from ships in deep
and shallow waters. Christian Audoly (CEMIS/AC, DCNS Res., Le
Mourillon, BP 403, Toulon 83055, France, christian.audoly@wanadoo.fr)
The ISO TC43/SC3 standardization committee for underwater acoustics was launched a few years ago, one priority being internationally agreed procedures for measuring radiated noise from ships. A first step was achieved with the publication of a first standard applicable to deep waters. However, in order to study the impact of shipping noise on marine fauna over wide maritime areas, it is necessary to input sound source levels in the form of equivalent monopoles, instead of a “radiated noise level,” which is affected by the interaction with the sea surface and sea floor. Therefore, the committee is currently working on correction terms to remove these effects. In that context, the objective of the present study is to determine the influence of sound speed profile and sensor configuration on the sound source level estimation, using a numerical simulator. The first case study concerns the correction term in deep waters: here, we look at the influence of the sound speed profile, whereas previous studies generally assume a constant speed of sound. The second case study deals with shallow waters: here, the main purpose is to compare different sensor configurations (number and distribution in the water column).
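Why the sea-surface interaction matters can be sketched with a monopole plus its surface image (the Lloyd's mirror effect). The following is a minimal numpy illustration under idealized assumptions (a perfectly reflecting pressure-release surface, no bottom, and arbitrary geometry chosen for illustration):

```python
import numpy as np

def received_level_db(f, src_depth, rcv_depth, rng_m, c=1500.0):
    """Pressure level (dB re free field at 1 m) at a receiver from a unit
    monopole plus its surface image, for sound speed c. The pressure-release
    sea surface gives the image source a -1 reflection coefficient."""
    k = 2.0 * np.pi * f / c
    r1 = np.hypot(rng_m, rcv_depth - src_depth)   # direct path
    r2 = np.hypot(rng_m, rcv_depth + src_depth)   # surface-reflected path
    p = np.exp(1j * k * r1) / r1 - np.exp(1j * k * r2) / r2
    return float(20.0 * np.log10(np.abs(p) + 1e-300))

# A hydrophone 100 m away: the measured "radiated noise level" varies
# strongly with frequency even though the monopole source level is fixed.
for f in (10.0, 50.0, 250.0):
    print(f, received_level_db(f, src_depth=5.0, rcv_depth=30.0, rng_m=100.0))
```

At low frequency the image nearly cancels the direct arrival, which is one reason a "radiated noise level" measured at a hydrophone is not directly the equivalent monopole source level the committee seeks.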
3:00
1pUWa6. Underwater noise footprint measurements on a survey vessel.
Alex Brooker (Clarke Saunders Assoc. Acoust., Winchester, Hampshire,
United Kingdom) and Victor F. Humphrey (ISVR, Univ. Southampton,
Highfield, Southampton SO17 1BJ, United Kingdom, vh@isvr.soton.ac.
uk)
The potential impact of man-made underwater noise on the marine environment is receiving increased attention. Shipping is one of the main sources of such anthropogenic noise. In order to understand the underwater soundscape, considerable effort is being placed on generating underwater noise maps, based on AIS data that provide details of vessel locations and operational characteristics. A key input for noise-mapping models is adequate knowledge of the source strength and characteristics of each vessel. Currently, the sources are usually assumed to be omnidirectional, given the limited data on true vessel radiation patterns. As part of the EU SONIC (Suppression of underwater Noise Induced by Cavitation) project, measurements were undertaken on a small survey vessel operating under realistic conditions at sea in shallow water. An autonomous recorder was used to measure the sound pressure as a function of range and azimuth. The vessel made repeated runs past the autonomous recorder at a variety of ranges. This has enabled the vessel noise footprint to be measured as a function of frequency and vessel speed, showing how the azimuthal characteristics change with frequency.
3:20–3:40 Break
3:40
1pUWa7. Developing an essential ocean variable for the acoustic
environment. Peter L. Tyack (Biology, Univ. of St. Andrews, Sea Mammal
Res. Unit, Scottish Oceans Inst., St. Andrews, Fife KY16 8LB, United
Kingdom, plt@st-andrews.ac.uk) and A Partnership for Observation of the
Global Oceans International Quiet Ocean Experiment Working Group
(Plymouth Marine Lab., Plymouth, United Kingdom)
The ocean science community has invested heavily in coordinating systems developed to make ocean observations. The Global Ocean Observing
System (GOOS) is a research program developed to coordinate ocean
observing systems. Expert panels identify requirements for the systems in
terms of Essential Ocean Variables (EOVs). The absence of sound in the list
of EOVs should be striking to most ocean acousticians. Sound propagates so
well in the ocean that it is the best way to probe the marine environment
over long distances. Marine organisms have evolved ways to use the physics
of underwater sound for biosonar, to communicate, and to orient. During the
industrial age, humans have developed similar tools, and sources such as
commercial shipping have elevated ocean noise. A working group of the
Partnership for Observation of the Global Oceans, linked to the International Quiet Ocean Experiment, has developed a description of an Essential
Ocean Variable for the Acoustic Environment designed to facilitate monitoring of the sound field of the oceans globally over decades, to model how
human and natural sources create the sound field, and to define effects of
changes in sound fields on marine life at the individual, population, and ecosystem levels.
4:00
1pUWa8. Characteristics of snapping shrimp noise in the northeastern
East China Sea. Zhuqing Yuan (Scripps Inst. of Oceanogr., 9500 Gilman
Dr., La Jolla, CA 92093, z9yuan@ucsd.edu), Chomgun Cho (Scripps Inst.
of Oceanogr., San Diego, CA), Hee-Chun Song, and William S. Hodgkiss
(Scripps Inst. of Oceanogr., La Jolla, CA)
Snapping shrimp sounds are a dominant source of high-frequency ambient noise (e.g., >1 kHz) in temperate and tropical coastal waters at depths less than 60 m. Surprisingly, a recent shallow-water experiment conducted in the northeastern East China Sea (SAVEX15) revealed an abundance of snapping shrimp sounds in approximately 100-m-deep water, recorded on two 16-element vertical line arrays (VLAs) deployed over 10 days. Our preliminary analysis indicates that the pressure amplitude statistics fit a symmetric alpha-stable (SaS) distribution better than the commonly assumed Gaussian distribution, owing to a heavy tail, while the temporal statistics of shrimp snaps detected above a threshold appear to fit a non-homogeneous Poisson process. In addition, the VLAs allow for localization of individual snapping shrimp. In this paper, the temporal and spatial variability of the noise characteristics dominated by snapping shrimp sounds are investigated in the northeastern East China Sea.

4:20

1pUWa9. Tsunami excitation of the Ross Ice Shelf, Antarctica. Peter Gerstoft (SIO, UCSD, 9500 Gillman Dr., La Jolla, CA 92093-0238, gerstoft@ucsd.edu), Peter Bromirski (SIO, UCSD, San Diego, CA), Zhao Chen (SIO, UCSD, La Jolla, CA), Ralph A. Stephen (WHOI, Woods Hole, MA), Rick C. Aster (Dept. of GeoSci., Colorado State Univ., Fort Collins, CO), Doug A. Wiens (Dept. of Earth and Planetary Sci., Washington Univ. in St. Louis, St. Louis, MO), and A. Nyblade (Dept. of GeoSci., Pennsylvania State Univ., State College, PA)

The responses of the Ross Ice Shelf (RIS) to the September 16, 2015, 8.3 Mw Chilean earthquake tsunami (>75 s period) and infragravity (IG) waves (50-300 s period) were recorded by a 34-element broadband seismic array deployed on the RIS for one year from November 2014. Tsunami- and IG-generated signals travel from the RIS front as water-ice coupled flexural waves at gravity wave speeds (~70 m/s). Displacements across the RIS are affected by gravity wave incident direction, bathymetry under and north of the RIS, and water and ice shelf thickness/properties. Horizontal displacements are about 5 times larger than vertical, producing extensional motions that may facilitate expansion of existing fractures. Excitation is continuously observed throughout the year, with horizontal displacements highest during the austral winter (>20 cm). Because flexural waves exhibit weak attenuation, significant flexural wave energy reaches the grounding zone. Flexural waves provide year-round excitation of the RIS that likely promotes iceberg calving and thus ice shelf evolution. Understanding ocean-ice shelf mechanical interactions is important for reducing uncertainty in global sea level rise.

4:40

1pUWa10. Modeling on low-frequency underwater noise radiated from a typical fishing boat based on measurement in shallow water. Zilong Peng, Bin Wang, Jun Fan, and Kaiqi Zhao (Shanghai Jiao Tong Univ., 800 Dongchuan Rd., Minhang District, Shanghai 200240, China, zlp_just@sina.com)

Over the past decades, the impact of man-made underwater noise on the marine environment has attracted increasing interest from researchers worldwide. About ten models have been proposed to predict underwater radiated noise (URN) from ships, most of which are applicable above 100 Hz. This paper aims to model low-frequency URN. Extensive measurements were made of the URN of a small fishing boat (length 43 m, displacement 500 tons) in the South China Sea. The URN data show that the high-level noise below 100 Hz is mainly contributed by mechanical noise (e.g., main engine and service diesel generator) and propeller cavitation, and exhibits complex variation with speed. The effect of the sound-speed profile and bottom on the Transmission Loss (TL) has been analyzed and compared with an empirical function, showing that the estimated TL has an important influence on the spectral source levels (SSLs). Inspired by the method in the AQUO (Achieve QUieter Oceans) project, a predictive model for a typical fishing boat was built.
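The heavy-tailed SaS amplitude statistics reported for snapping shrimp noise in 1pUWa8 above can be illustrated with a toy simulation comparing a Gaussian with a Cauchy distribution (the alpha = 1 special case of SaS); this is an illustrative comparison, not the SAVEX15 data:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
gauss = rng.standard_normal(n)
cauchy = rng.standard_cauchy(n)     # SaS with alpha = 1: heavy tails

def tail_fraction(x, k=5.0):
    """Fraction of samples whose magnitude exceeds k (unit-scale data)."""
    return float(np.mean(np.abs(x) > k))

print(tail_fraction(gauss))     # essentially zero beyond 5 sigma
print(tail_fraction(cauchy))    # orders of magnitude larger (~0.13)
```

Large-amplitude snaps occur far more often than a Gaussian model predicts, which is why heavy-tailed families such as SaS fit impulsive snapping-shrimp noise better.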
SUNDAY AFTERNOON, 25 JUNE 2017
ROOM 309, 1:20 P.M. TO 5:40 P.M.
Session 1pUWb
Underwater Acoustics, Acoustical Oceanography, Signal Processing in Acoustics, Structural Acoustics and
Vibration, Physical Acoustics, and Biomedical Acoustics: Passive Sensing, Monitoring, and Imaging in
Wave Physics II
Karim G. Sabra, Cochair
Mechanical Engineering, Georgia Institute of Technology, 771 Ferst Drive, NW, Atlanta, GA 30332-0405
Philippe Roux, Cochair
ISTerre, University of Grenoble, CNRS, 1381 rue de la Piscine, Grenoble 38041, France
Invited Papers
1:20
1pUWb1. Estimation of the entropy of seismic ambient noise: Application to passive imaging. Leonard Seydoux, Nikolai Shapiro
(Institut de Physique du Globe de Paris, UMR CNRS 7154, Paris, France), and Julien de Rosny (ESPCI Paris, PSL Res. Univ., CNRS,
Institut Langevin, 1 rue Jussieu, Paris 75005, France, julien.derosny@espci.fr)
Recovering Green’s functions from diffuse ambient noise correlation is an efficient technique for passive seismic imaging. However, the field diffuseness is not completely fulfilled in practice, because the noise is generated in preferential areas and contaminated by highly coherent signals due, for instance, to earthquakes. Here we show that the noise entropy is a robust estimator of the diffuseness. In the first part, we use the entropy directly as a metric of seismic noise activity to detect tremors around Piton de la Fournaise Volcano or highly coherent noise sources at continental scale using data collected by the USArray seismic array. In the second part, we take advantage of the entropy to propose an original equalization process, based on analysis of the covariance matrix, that mitigates the effect of a poorly diffuse noise field. The efficiency of the method is validated with several numerical tests. We apply the method to data collected by the USArray when a strong earthquake occurred. The method shows a clear improvement over classical equalization in attenuating the highly energetic and coherent waves incoming from the earthquake, and allows reliable travel-time measurements.
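One common definition of such an entropy, assumed here for illustration (the abstract does not specify its exact form), is the Shannon entropy of the normalized eigenvalues of the array covariance matrix:

```python
import numpy as np

def spectral_entropy(cov):
    """Shannon entropy of the normalized eigenvalues of a covariance
    matrix: maximal for a fully diffuse field (equal eigenvalues),
    near zero when one coherent source dominates."""
    lam = np.linalg.eigvalsh(cov)
    lam = np.clip(lam, 1e-15, None)
    lam = lam / lam.sum()
    return float(-(lam * np.log(lam)).sum())

rng = np.random.default_rng(0)
n_sensors, n_snap = 8, 4000
diffuse = rng.standard_normal((n_sensors, n_snap))          # uncorrelated field
coherent = np.outer(np.ones(n_sensors), rng.standard_normal(n_snap))
coherent += 0.1 * rng.standard_normal((n_sensors, n_snap))  # one dominant source

ents = []
for X in (diffuse, coherent):
    ents.append(spectral_entropy(X @ X.T / n_snap))
print(ents)   # diffuse: near log(8); coherent: near 0
```

A low entropy flags snapshots dominated by a coherent arrival (e.g., an earthquake), which can then be down-weighted before noise correlation.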
1:40
1pUWb2. Environmental seismology: What can we learn from ambient noise? Eric Larose (ISTerre, CNRS & Univ. Grenoble-Alpes, CS 40700, GRENOBLE Cedex 9 38058, France, eric.larose@univ-grenoble-alpes.fr)

Environmental seismology consists in studying the coupling between the solid Earth and the cryosphere, the hydrosphere, or the anthroposphere. In practice, we monitor modifications of wave propagation due to environmental forcing such as temperature and hydrology, using ambient seismic noise, which constitutes a continuous, cheap, and relatively reproducible source of vibrations. Recent developments in data processing [1], together with increasing computational power and sensor concentration, have led to original observations that enable this new field of seismology. In this paper, we will review how we can track and interpret tiny changes in the subsurface of the Earth related to external changes, from modifications of seismic wave propagation, with applications to geomechanics, hydrology, and natural hazards [2]. We will demonstrate that, using ambient noise, we can track: thermal variations in the subsoil, in buildings, or in rock columns, with application to damage estimation; the temporal and spatial evolution of a water table; and the evolution of the rigidity of the soil constituting a landslide, especially the drop of rigidity preceding a failure event. [1] Shapiro, N. M., Campillo, M., 2004. Geophys. Res. Lett. 31, L07614. [2] E. Larose et al.: J. Appl. Geophys. 116, 62-74 (2015).
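One standard way to track such tiny subsurface changes from noise correlations is the "stretching" technique: grid-search the relative velocity change that best maps the current waveform onto a reference. A toy numpy sketch on synthetic data (illustrative, not the processing used in the cited studies):

```python
import numpy as np

def measure_dvv(ref, cur, t, trials=np.linspace(-0.01, 0.01, 201)):
    """Grid-search the relative velocity change dv/v that best maps the
    current waveform onto the reference: cur(t) ~ ref(t * (1 + dv/v))."""
    best, best_cc = 0.0, -np.inf
    for eps in trials:
        stretched = np.interp(t * (1.0 + eps), t, ref)
        cc = np.corrcoef(stretched, cur)[0, 1]
        if cc > best_cc:
            best, best_cc = eps, cc
    return best

t = np.linspace(0.0, 10.0, 5000)
ref = np.sin(2 * np.pi * 5 * t) * np.exp(-0.2 * t)   # synthetic coda waveform
true_eps = 0.004                                      # a 0.4% change
cur = np.interp(t * (1.0 + true_eps), t, ref)         # "current" stretched coda
print(measure_dvv(ref, cur, t))   # recovers ~0.004
```

Because the late coda accumulates many wave cycles, even sub-percent velocity changes produce a measurable stretch, which is what makes continuous noise-based monitoring feasible.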
2:00
1pUWb3. Extracting changes in wave velocity from chaotic data. Roel Snieder (Colorado School of Mines, 1500 Illinois Str.,
Golden, CO 80401, rsnieder@mines.edu)
Using seismic interferometry, it is possible to extract the Green’s function from recorded measurements of noise or other incoherent signals; it is a way of creating order out of chaos. Since the noise is present at all times, this provides a way to measure changes in seismic velocity over time. The seismic velocity in the subsurface, and in buildings, is not constant in time. I will present measurements taken during, and after, the 2011 Tohoku-Oki earthquake to show that deconvolution interferometry can be used to monitor the seismic velocity in the near surface with great temporal resolution. The recovery of the seismic velocity typically varies as log-time, which can be related to a spectrum of relaxation processes in the earth.
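The deconvolution step can be sketched on synthetic data: spectral division of one sensor's record by another's isolates the propagation between them, here recovering a known delay (a toy with water-level regularization; all parameters are illustrative):

```python
import numpy as np

def deconvolve(top, bottom, eps=1e-3):
    """Water-level spectral deconvolution D = TOP * conj(BOT) / (|BOT|^2 + eps),
    as used in deconvolution interferometry; the lag of its peak gives the
    travel time between the two sensors."""
    T, B = np.fft.rfft(top), np.fft.rfft(bottom)
    return np.fft.irfft(T * np.conj(B) / (np.abs(B) ** 2 + eps), n=len(top))

fs = 1000.0
rng = np.random.default_rng(0)
src = rng.standard_normal(4096)      # incoherent excitation (e.g., shaking)
delay = 25                           # 25 samples = 25 ms travel time
bottom = src
top = np.roll(src, delay)            # the upper sensor records it later
d = deconvolve(top, bottom)
print(np.argmax(d) / fs)             # ~0.025 s
```

Because the division cancels the (unknown) source spectrum, the same incoherent excitation can be reused continuously, which is what enables monitoring through time.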
2:20
1pUWb4. Imaging subsurface structures using reflections retrieved from seismic interferometry with sources of opportunity.
Deyan Draganov, Yohei Nishitsuji, Boris Boullenger, Shohei Minato, Kees Wapenaar, Jan Thorbecke (Dept. of GeoSci. and Eng., Delft
Univ. of Technol., Stevinweg 1, Delft 2628CN, Netherlands, d.s.draganov@tudelft.nl), Elmer Ruigrok (GeoSci., Utrecht Univ., Delft,
Netherlands), Charlotte Rowe (Geophys. Group, Los Alamos National Lab., Los Alamos, NM), Bob Paap, Arie Verdel (TNO, Utrecht,
Netherlands), and Martin Gomez (CNEA, Buenos Aires, Argentina)
The reflection seismic method is the most frequently used exploration method for imaging and monitoring subsurface structures with high resolution. It has proven its qualities from the scale of regional seismology to the scale of near-surface applications that look just a few meters below the surface. The reflection method uses controlled active sources at known positions to give rise to reflections recorded at known receiver positions. The reflections’ two-way travel time is used to extract the desired information about, and image, the subsurface structures. When active sources are unavailable or undesired, one can retrieve body-wave reflections by applying seismic interferometry (SI) to sources of opportunity: quakes, tremors, ambient noise, or even man-made sources not connected to the exploration campaign. We show examples of imaging of subsurface structures using reflections retrieved from quakes and ambient noise. We apply SI by autocorrelation to global earthquakes to image seismic and aseismic parts of the Nazca plate and the Moho at these places, SI by multidimensional deconvolution to P-wave coda from local earthquakes to image the Moho and the crust at the same places, and SI by autocorrelation to deep moonquakes to image the lunar Moho and to ambient noise to monitor CO2 sequestration.
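The principle of SI by autocorrelation can be sketched on synthetic data: a trace containing a transmitted wavefield plus its subsurface reflection autocorrelates to a peak at the two-way travel time (an illustrative toy, not the datasets in the abstract):

```python
import numpy as np

rng = np.random.default_rng(0)
n, fs = 8192, 500.0
noise = rng.standard_normal(n)            # transmitted ambient wavefield
twt = 60                                  # two-way travel time, in samples
R = 0.5                                   # reflector strength
trace = noise + R * np.roll(noise, twt)   # direct + reflected arrival

# Autocorrelation via FFT; inspect positive lags only.
spec = np.fft.rfft(trace)
ac = np.fft.irfft(spec * np.conj(spec), n=n)
ac[:5] = 0.0                              # suppress the zero-lag peak
lag = int(np.argmax(ac[: n // 2]))
print(lag / fs)                           # ~ twt / fs = 0.12 s
```

The autocorrelation turns the unknown transmitted wavefield into its own virtual source, which is why no active source at a known position is required.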
2:40
1pUWb5. Passive elastography: A shear wave tomography of the human body. Stefan Catheline (INSERM U1032, 151 cours albert
thomas, Lyon 69003, France, stefan.catheline@inserm.fr)
Elastography, sometimes referred to as seismology of the human body, is an imaging modality recently implemented on medical ultrasound systems. It measures shear waves within soft tissues and yields a tomographic reconstruction of the shear elasticity. This elasticity map is useful for early cancer detection. A general overview of this field is given in the first part of the presentation, along with the latest developments. The second part is devoted to the application of time-reversal and noise-correlation techniques in the field of elastography. The idea, as in seismology, is to take advantage of shear waves naturally present in the human body due to muscle activity to construct a shear elasticity map of soft tissues. It is thus a passive elastography approach, since no shear wave sources are used.
3:00
1pUWb6. Relocating drifting sensor networks with ambient noise correlations. Brendan Nichols, James S. Martin (Mech. Eng.,
Georgia Inst. of Technol., 801 Ferst Dr. NW, Atlanta, GA 30309, bnichols8@gatech.edu), Christopher M. Verlinden (Phys., U.S. Coast
Guard Acad., La Jolla, California), and Karim G. Sabra (Mech. Eng., Georgia Inst. of Technol., Atlanta, GA)
A network of drifting sensors, such as hydrophones mounted on freely drifting buoys, can be used as an array for locating underwater acoustic sources. However, for accurate localization of such a source using coherent processing, the sensor positions need to be known to a high degree of accuracy, typically more accurately than provided by dead reckoning or GPS alone. Past work has demonstrated that inter-sensor distances can be obtained from long-term ambient noise correlations on fixed arrays [Sabra et al., IEEE J. Ocean. Eng., 2005, 30]. Here, the approach is extended to tracking drifting sensor motion by combining a stochastic search algorithm with ambient noise correlation processing. Optimization of the stochastic search method was explored, and performance was compared using acoustic data collected from a volumetric hydrophone array and a vector sensor array deployed in Long Island Sound.
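The underlying idea, that long-term cross-correlation of ambient noise between two hydrophones peaks at the inter-sensor travel time, can be sketched as follows. This toy assumes a single distant noise source along the sensor axis and illustrative sample rate and spacing; it is not the Long Island Sound data or the authors' processing:

```python
import numpy as np

rng = np.random.default_rng(0)
fs, c = 2000.0, 1500.0                    # sample rate (Hz), sound speed (m/s)
n = 1 << 16
src = rng.standard_normal(n)              # distant noise along the sensor axis
true_dist = 45.0                          # meters between hydrophones
lag_true = int(round(true_dist / c * fs))
h1 = src + 0.5 * rng.standard_normal(n)   # each sensor adds local noise
h2 = np.roll(src, lag_true) + 0.5 * rng.standard_normal(n)

# Long-term cross-correlation; the peak lag gives the travel time.
xc = np.fft.irfft(np.fft.rfft(h2) * np.conj(np.fft.rfft(h1)), n=n)
lag = int(np.argmax(xc[: n // 2]))
print(lag / fs * c)                       # ~45 m
```

Averaging over long records suppresses the uncorrelated local noise, which is why the method works even when the instantaneous signal-to-noise ratio is poor.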
3:20–3:40 Break
3:40
1pUWb7. Passive sensing of head wave propagation in the ambient noise field and its implications for geoacoustic inversion. John
Gebbie (Metron, Inc., 1900 SW 4th Ave., Ste. 89-01, Portland, OR 97201, gebbie@metsci.com) and Martin Siderius (Portland State
Univ., Portland, OR)
Under certain conditions, the ambient noise field can produce and amplify head waves with unique propagation characteristics; these are detectable with a vertical line array and can be analyzed to extract geoacoustic information. Head-wave phenomena in the seabed can be observed using point sources, but these arrivals are usually very weak and difficult to detect ahead of the strong direct-path arrival. In contrast, surface-generated noise from wind and breaking waves is effectively a planar source that insonifies the entire seabed at every angle and from every direction. Ambient-noise head waves are conical waves generated when noise first reaches the seabed at the critical angle, and they are amplified upon each subsequent interaction with the seabed. This interaction splits the incident wave into water-borne and seabed-borne components, separated in time by a fixed lag. A vertical line array can observe this phenomenon by cross-correlating beams steered upward and downward at the critical angle of the seabed. Together, the steering angle and time lag depend on the seabed critical angle and the ray cycle time through the waveguide. Experimental results will be presented along with full-wave simulations that help illustrate the phenomena.
4:00
1pUWb8. A tomography experiment using ships as sources of opportunity. William A. Kuperman, Bruce Cornuelle, Kay L. Gemba,
William S. Hodgkiss, Jit Sarkar, Jeffery D. Tippmann, Christopher M. Verlinden (Scripps Inst. of Oceanogr., UCSD, Marine Physical
Lab., La Jolla, CA 92093-0238, wkuperman@ucsd.edu), and Karim G. Sabra (College of Mech. Eng., Georgia Inst. of Technol.,
Atlanta, GA)
An experiment was performed in the Santa Barbara Channel using four vertical acoustic receive arrays placed between the sea lanes
of in- and outgoing shipping traffic. The purpose of the experiment was to determine whether these sources of opportunity can be utilized for tomographic inversion of water column properties. The environment was continuously monitored throughout the duration of
the experiment. Ship tracks were obtained from the Automatic Identification System (AIS). Processing was developed to extract relative
time delays between the arrays from the ships’ random radiation fields. This information, together with the AIS constraints, was used for
inversion. Initial results are presented that also include an error analysis of the inversion.
4:20
1pUWb9. Deducing environmental properties from broadband matched-mode processing ambiguity surface striations generated
from baleen whale data. Aaron Thode (SIO, UCSD, 9500 Gilman Dr., MC 0238, La Jolla, CA 92093-0238, athode@ucsd.edu), Julien
Bonnel (ENSTA, Brest cedex 9, France), and Catherine L. Berchok (Marine Mammal Lab., Alaska Fisheries Sci. Ctr., Seattle, WA)
Numerous multi-year shallow-water recordings of several species of baleen whale have been obtained from shallow-water arctic
environments, including the Bering Sea. When non-linear time sampling is applied to the single-hydrophone data, these broadband signals yield individual normal mode arrivals, which in turn permit incoherent matched-mode processing (MMP) techniques to be applied
for source localization and geoacoustic inversion. When broadband MMP ambiguity surfaces are constructed from pairs of
modes and plotted as a function of range and frequency, both the mainlobe and sidelobes form striations that embed information about
the type and amount of environmental mismatch present between the modeled and true environment. These striations are useful for identifying bandwidths of inversion-quality data within whale calls. Acoustic invariant theory explains how mismatched waveguide replicas
from simple environmental models, when applied to sufficiently low-frequency data, produce ambiguity surface striations that reveal the
true bottom interface sound speed and bottom sound speed gradient. During summer, whenever highly downward-refracting sound speed
profiles exist, mismatched MMP striations also contain information about the sound speed gradient. Examples of this visual approach to
inversion are shown on endangered North Pacific Right whale “gunshot” signals. [Work sponsored by the North Pacific Research
Board.]
3528
J. Acoust. Soc. Am., Vol. 141, No. 5, Pt. 2, May 2017
Acoustics ’17 Boston
3528
4:40
1pUWb10. Environmental characterization in the Chukchi Sea using Bayesian inversion of bowhead whale calls. Graham A.
Warner (JASCO Appl. Sci., 2305-4464 Markham St., Victoria, BC V8Z 7X8, Canada, graham.warner@jasco.com), Stan E. Dosso
(School of Earth and Ocean Sci., Univ. of Victoria, Victoria, BC, Canada), and David E. Hannay (JASCO Appl. Sci., Victoria, BC,
Canada)
This paper estimates seabed and water-column properties of a shallow-water site in the Chukchi Sea using bowhead whale calls
recorded on asynchronous ocean-bottom hydrophones. Up- and down-swept bowhead whale calls were recorded on a cluster of seven
hydrophones within a 5 km radius. The calls excited multiple propagating modes, with modal dispersion controlled by environmental
properties and whale-recorder range. Frequency-dependent mode arrival times for nine whale calls are inverted using a trans-dimensional (trans-D) Bayesian approach that estimates the whale locations (easting and northing) and range-independent environmental properties (sound-speed profile, water depth, and seabed geoacoustic profiles). The trans-D inversion allows the data to determine the most
appropriate environmental model parameterization in terms of the number of sound-speed profile nodes and subbottom layers. The
inversion also estimates each whale-call instantaneous frequency function, relative recorder clock offsets, and residual-error standard
deviation, and provides uncertainty estimates for all model parameters and parameterizations. The sound-speed profile is found to be
poorly resolved, but water depth and upper sediment-layer thickness and sound speed are reasonably well resolved. Model estimates and
uncertainties are compared to those from separate inversions involving airgun dispersion and vessel noise data collected nearby (which
also represent sources of opportunity).
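The Bayesian machinery behind such inversions can be illustrated with a deliberately stripped-down, fixed-dimension sketch (the actual method is trans-dimensional and jointly estimates locations, clock offsets, and layered seabed models). Here a single range parameter is inferred from noisy frequency-dependent modal arrival times via a Metropolis sampler; the toy dispersion model and every number are invented:

```python
import numpy as np

rng = np.random.default_rng(1)
freqs = np.linspace(50.0, 200.0, 16)               # Hz
group_speed = lambda f: 1450.0 * (1.0 - 20.0 / f)  # toy modal group speed, m/s
r_true = 3000.0                                    # m, "whale-recorder" range
sigma = 0.02                                       # arrival-time noise std, s
t_obs = r_true / group_speed(freqs) + sigma * rng.standard_normal(freqs.size)

def log_like(r):
    resid = t_obs - r / group_speed(freqs)
    return -0.5 * np.sum((resid / sigma) ** 2)

r, samples = 2000.0, []
for _ in range(20000):
    r_prop = r + 20.0 * rng.standard_normal()       # random-walk proposal
    if 0 < r_prop < 5000 and np.log(rng.uniform()) < log_like(r_prop) - log_like(r):
        r = r_prop                                  # Metropolis accept
    samples.append(r)
r_mean = np.mean(samples[5000:])                    # discard burn-in
print(r_mean)  # posterior mean range; the sample spread gives the uncertainty
```

The trans-D extension additionally proposes birth/death moves that add or remove sound-speed nodes and sediment layers, letting the data select the parameterization.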
5:00
1pUWb11. Monitoring bubble production in a seagrass meadow using a source of opportunity. Paulo Felisberto (LARSyS, Univ.
of Algarve, Faro, Portugal), Orlando C. Rodríguez, João P. Silva, Sérgio Jesus (LARSyS, Univ. of Algarve, Campus de Gambelas
- Universidade do Algarve, Faro PT-8005-139, Portugal, orodrig@ualg.pt), Hugo Quental-Ferreira, Pedro Pousão-Ferreira, Maria
Emília Cunha (IPMA - Instituto Português do Mar e da Atmosfera, EPPO, Olhão, Portugal), Carmen B. de los Santos, Irene Olive, and
Rui Santos (Marine Plant Ecology Res. group, Ctr. of Marine Sci. of Univ. of Algarve, Faro, Portugal)
Under high irradiance, the photosynthetic activity of dense seagrass meadows saturates the water, forming oxygen bubbles. The diel
cycle of bubble production peaks at mid-day, following the light-intensity pattern. It is well known that bubbles strongly affect acoustic
propagation, increasing signal attenuation and decreasing the effective water sound speed, noticeably at low frequencies. Thus, the diurnal variability of bubbles may show up as an interference pattern in the spectrograms of low-frequency acoustic signals. In an experiment conducted in July 2016 at the Aquaculture Research Station of the Portuguese Institute for the Sea and Atmosphere in Olhão, Portugal, the
spectrograms of low-frequency (<20 kHz) broadband noise produced by water pumps in a 0.48 ha pond covered by the seagrass Cymodocea nodosa showed interference patterns that can be ascribed to the variability of the sound speed in the water. Preliminary analysis
suggests that the daily cycle of bubble concentration can be inferred from these interference patterns.
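The link between bubble-induced sound-speed changes and the spectrogram pattern can be sketched with a two-path (direct plus surface-reflected) toy model: the interference-fringe spacing in frequency is Δf = c/ΔL, so a mid-day drop in the effective sound speed c tightens the fringes. All numbers below (diel cycle shape, path-length difference) are invented, not measurements from the pond:

```python
import numpy as np

hours = np.arange(0.0, 24.0, 1.0)
# assumed diel sound-speed cycle: bubble production depresses c around hour 13
c = 1500.0 - 120.0 * np.exp(-(((hours - 13.0) / 3.0) ** 2))   # m/s
path_diff = 0.8   # m, assumed direct/surface-reflected path-length difference
fringe_hz = c / path_diff          # two-path interference-fringe spacing, Hz
peak_hour = hours[int(np.argmin(fringe_hz))]
print(peak_hour, fringe_hz.min())  # fringes are tightest when bubble load peaks
```

Tracking the fringe spacing over a day would then trace the bubble-concentration cycle, which is the essence of the inference suggested above.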
Contributed Paper
5:20
1pUWb12. Ambient noise correlations on a mobile, deformable array.
Perry Naughton (Elec. and Comput. Eng., Univ. of California, San Diego,
9500 Gilman Dr., San Diego, CA 92103, pnaughto@eng.ucsd.edu), Philippe
Roux (ISTerre, Universite de Grenoble Alpes, Grenoble, France), Riley
Yeakle (Elec. and Comput. Eng., Univ. of California, San Diego, San
Diego, CA), Curt Schurgers (Qualcomm Inst., Calit2, Univ. of California,
San Diego, San Diego, CA), Ryan Kastner (Comput. Sci. and Eng., Univ.
of California, San Diego, San Diego, CA), Jules Jaffe, and Paul Roberts
(Scripps Inst. of Oceanogr., Univ. of California San Diego, San Diego,
CA)
This presentation describes a demonstration of ambient acoustic noise
processing on a set of free floating oceanic receivers whose relative
positions vary with time. We show that we are able to retrieve information
that is relevant to the travel time between the receivers. With thousands of
short time cross-correlations of varying distance, we show that on average,
the decrease in amplitude of the noise correlation function with increased
separation follows a power law. This suggests that there may be amplitude
information that is embedded in the noise correlation function. We develop
an incoherent beamformer, which shows that it is possible to determine a
source direction using an array with moving elements and large element
separation. We show how the noise correlation function varies in the presence of a boat with a known GPS trajectory, and how this information can
be used to recover the relative geometry of the deformable array. This work
indicates that the relative geometry of an array can be estimated using only
passive signals and the automatic identification system already present in
many coastal communities.
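The power-law check described above amounts to a straight-line fit in log-log coordinates: if the correlation amplitude decays as A(r) = A0·r^(-b), then log A is linear in log r with slope -b. A minimal sketch with synthetic amplitudes (the exponent, separations, and noise level are invented):

```python
import numpy as np

rng = np.random.default_rng(2)
r = np.linspace(50.0, 500.0, 40)     # receiver separations, m (assumed)
# synthetic NCF peak amplitudes: true exponent 0.5, multiplicative scatter
amps = 3.0 * r ** -0.5 * np.exp(0.05 * rng.standard_normal(r.size))
slope, intercept = np.polyfit(np.log(r), np.log(amps), 1)
print(round(-slope, 2))  # estimated decay exponent, close to the true 0.5
```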
SUNDAY AFTERNOON, 25 JUNE 2017
BALLROOM A, 1:20 P.M. TO 4:20 P.M.
Session 1pUWc
Underwater Acoustics: Topics in Underwater Acoustics (Poster Session)
Vaibhav Chavali, Chair
Electrical Engineering, George Mason University, 4217 University Dr., Fairfax, VA 22030
All posters will be on display from 1:20 p.m. to 4:20 p.m. To allow contributors in this session to see the other posters, authors of odd-numbered papers will be at their posters from 1:20 p.m. to 2:50 p.m. and authors of even-numbered papers will be at their posters from
2:50 p.m. to 4:20 p.m.
Contributed Papers
1pUWc1. Determining the bottom surface in randomly
inhomogeneous media. Andrei Sushchenko (School of Natural Sci., Far
Eastern Federal Univ., Sukhanova 8, Vladivostok, Primorskii krai 690090,
Russian Federation, sushchenko.aa@dvfu.ru), Igor Prokhorov (Inst. of
Appl. Mathematics FEB RAS, Vladivostok, Russian Federation), and
Kristina Sushchenko (School of Natural Sci., Far Eastern Federal Univ.,
Vladivostok, Russian Federation)
The authors study the problem of determining the bottom topography of a
fluctuating ocean from side-scan sonar data. Based on a kinetic
model of acoustic radiative transfer, they obtain a formula for determining a function that describes small deviations of the bottom surface from a mean level. The finite pulse length of the source and the width of the directivity pattern
of the receiving antenna produce an unfocused seabottom image; to correct for this, the authors use an iterative algorithm that focuses objects on the seabottom. Numerical experiments on simulated data demonstrate the accuracy of the obtained formula. A numerical analysis of the
influence of volume scattering is also presented. A volume-scattering filter makes it possible to
reconstruct the sea bottom relief at long range; for example, a signal from 150 m
contains more than 50% volume scattering, so object recognition is
not possible without filtering. The width of the directivity pattern contributes to
object defocusing, and this effect increases with slant range.
Moreover, the authors designed an algorithm for determining shadowed areas on
the sea bottom, which makes it possible to identify each invisible point on the sea bottom
in the case of a non-static source. The influence of volume
scattering on seabottom relief reconstruction is thus characterized.
1pUWc2. Interferometric reconstruction of plate waves from cross
correlation of diffuse field on a thin aluminum plate. Aida Hejazi
Nooghabi (Univ. of Pierre and Marie Curie, 4, Pl. Jussieu Case 129, T.4600, Et.2, Paris 75252, France, aida.hejazi@gmail.com), Julien de Rosny
(Institut Langevin, Paris, France), Lapo Boschi (Univ. of Pierre and Marie
Curie, Paris, France), and Philippe Roux (Laboratoire ISTERRE, Grenoble,
France)
This study contributes to evaluating the robustness and accuracy of
Green’s function (GF) reconstruction by cross-correlation of noise, disentangling the respective roles of ballistic and reverberated (“coda”) signals.
We conduct a suite of experiments on a highly reverberating thin aluminum
plate, where we generate an approximately diffuse flexural wavefield. We
validate ambient-noise theory by comparing cross correlation to the directly
measured Green’s function. We develop analytically a theoretical model,
predicting the dependence of the symmetry of the cross correlations on the
number of sources and signal-to-noise ratio. We validate this model against
experimental results. We next study the effects of cross-correlating our data
over time windows of variable length, possibly very short, and taken at different points in the coda of recordings. We find that, even so, a relatively
dense/uniform source distribution could result in a good estimate of the GF;
we demonstrate that this window does not have to include the direct-arrival
signal for the estimated GF to be a good approximation of the exact one.
Afterwards, we explicitly study the role of non-deterministic noise on cross
correlations and establish a model which confirms that the relative effect of
noise is stronger when the late coda is cross-correlated.
1pUWc3. Reflecting boundary conditions for interferometry by
multidimensional deconvolution. Cornelis Weemstra, Kees Wapenaar
(Dept. of GeoSci. and Eng., Delft Univ. of Technol., Stevinweg 1, Delft
2628 CN, Netherlands, kweemstra@gmail.com), and Karel N. van Dalen
(Dept. of Structural Eng., Delft Univ. of Technol., Delft, Netherlands)
Seismic interferometry (SI) takes advantage of existing (ambient) wavefield recordings by turning receivers into so-called “virtual-sources.” The
medium’s response to these virtual sources can be harnessed to image that
medium. Applications of SI include surface-wave imaging of the Earth’s
shallow subsurface and medical imaging. Most interferometric applications,
however, suffer from the fact that the retrieved virtual-source responses
deviate from the true medium responses. The accrued artifacts are often predominantly due to a non-isotropic illumination of the medium of interest,
and prohibit accurate interferometric imaging. Recently, it has been shown
that illumination-related artifacts can be removed by means of a so-called
multidimensional deconvolution (MDD) process. However, the current
MDD formulation, and hence method, relies on separation of waves traveling inward and outward through the boundary of the medium of interest. As
a consequence, it is predominantly useful when receivers are illuminated
from one side only. This puts constraints on the applicability of the current
MDD formulation to omnidirectional wavefields. We present a modification
of the formulation of the theory underlying SI by MDD. This modification
eliminates the requirement to separate inward- and outward-propagating
wavefields and, consequently, holds promise for the application of MDD to
non-isotropic, omnidirectional wavefields.
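At a single frequency, the MDD idea can be sketched as a regularized matrix deconvolution: the observed correlation matrix C is the product of the sought virtual-source response G and an illumination matrix Γ, and G is recovered by inverting Γ in a least-squares (Tikhonov-regularized) sense, which removes the illumination bias. The matrices below are random stand-ins, not a physical model:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 6                                # number of receivers (assumed)
G_true = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
Gamma = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
C = G_true @ Gamma                   # correlation = response x illumination
eps = 1e-8                           # Tikhonov regularization parameter
G_est = C @ Gamma.conj().T @ np.linalg.inv(Gamma @ Gamma.conj().T + eps * np.eye(n))
print(np.max(np.abs(G_est - G_true)))  # small residual: illumination bias removed
```

Plain cross-correlation corresponds to using C itself as the response estimate, which is biased whenever Γ is far from a scaled identity, i.e., whenever the illumination is non-isotropic.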
1pUWc4. Status and results from cabled hydrophone arrays deployed
in deep sea off East Sicily (EMSO-ERIC node). Giorgio Riccobene,
Francesco Caruso (Laboratori Nazionali del Sud, Istituto Nazionale di Fisica
Nucleare, Catania, Italy), Salvatore Viola (Laboratori Nazionali del Sud,
Istituto Nazionale di Fisica Nucleare, Via S. Sofia 62, Catania, Italy,
sviola@lns.infn.it), Francesco Simeone (Sez. Roma 1, Istituto Nazionale di
Fisica Nucleare, Roma, Italy), Sara Pulvirenti, Virginia Sciacca (Laboratori
Nazionali del Sud, Istituto Nazionale di Fisica Nucleare, Catania, Italy),
Carmelo Pellegrino (Sez. Bologna, Istituto Nazionale di Fisica Nucleare,
Bologna, Italy), Fabrizio Speziale (Laboratori Nazionali del Sud, Istituto
Nazionale di Fisica Nucleare, Catania, Italy), Fabrizio Ameli (Sez. Roma 1,
Istituto Nazionale di Fisica Nucleare, Roma, Italy), Giuseppa Buscaino,
Salvatore Mazzola (CNR-IAMC, Capo Granitola (TP), Italy), Francesco
Filiciotto (CNR-IAMC, Messina, Italy), Rosario Grammauta (CNR-IAMC,
Capo Granitola (TP), Italy), Gaetano Licitra (ARPAT, Pisa, Italy), Giorgio
Bellia (Laboratori Nazionali del Sud, Istituto Nazionale di Fisica Nucleare,
Catania, Italy), Gianni Pavan (Univ. of Pavia, Pavia, Italy), Davide
Embriaco (INGV, La Spezia, Italy), Paolo Favali, Laura Beranzoli, Giuditta
Marinaro, Gabriele Giovanetti (INGV, Roma, Italy), Francesco Chierici
(IRA, INAF, Bologna, Italy), Giuseppina Larosa (Laboratori Nazionali del
Sud, Istituto Nazionale di Fisica Nucleare, Catania, Italy), Antonio
D’Amico (NIKHEF, Amsterdam, Netherlands), and Elena Papale (CNR-IAMC, Capo Granitola (TP), Italy)
Since 2005, a cabled deep-sea infrastructure has been operating at 2100 m water
depth, 25 km off the port of Catania (Sicily). The infrastructure, under continuous improvement, is the first operative cabled node of the EMSO-ERIC,
hosting several multidisciplinary observatories built in collaboration by
INFN, INGV, CNR, CIBRA, and other scientific partners. Hydrophone
antennas, sensitive in the frequency range from 1 Hz to 90 kHz,
have been installed on seafloor observatories. Acoustic data are continuously digitized in situ at very high resolution, time-stamped with absolute
GPS time, and sent to shore in real time through an optical fiber link. Together
with biological sounds, the study and monitoring of noise pollution were the main
goals of the research. Results of multi-year monitoring of anthropogenic
noise are discussed. The analysis focuses on the noise level in the octave
bands centered at 63 Hz and 125 Hz, in compliance with the EU Marine
Strategy Framework Directive. The contribution of ship noise was modeled,
based on ship data recorded via proprietary AIS antennas, and compared to the
measured data. Noise at higher frequencies was also investigated. Detection of airgun emissions and the recorded noise levels are reported. The status of and upcoming
activities at the infrastructure are also presented.
1pUWc5. Spatial distribution of sound field scattered from the rough
seafloor interface. Linhui Peng (Ocean Technol., Ocean Univ. of China,
238 Songling Rd., Information College, Qingdao, Shandong 266100,
China, penglh@ouc.edu.cn), Gaokun Yu, and Jianhui Lu (Ocean Technol.,
Ocean Univ. of China, Qingdao, Shandong, China)
The scattering coefficient of a rough seafloor interface is calculated using
first-order perturbation theory. The interface roughness considered here is
described by an isotropic power-law spectrum for the isotropic rough seafloor
interface and an anisotropic Gaussian spectrum for the rippled seafloor interface. The characteristics of the spatial distribution are shown by the scattering
coefficients of the forward- and backward-scattered fields, which depend on the frequency and grazing angle of the incident
wave and on the roughness of the interface. The dependence of the spatial distribution of the scattered field on these parameters is analyzed via Bragg scattering from a sinusoidal interface.
1pUWc6. Computation of acoustic wave responses due to moving
underwater acoustic sources in complex underwater environments
using a spectral element method. Stephen Lloyd, Chanseok Jeong (Civil
Eng., The Catholic Univ. of America, 620 Michigan Ave., N.E.,
Washington, DC 20064, 34lloyd@cua.edu), Hom Nath Gharti, and Jeroen
Tromp (GeoSci., Princeton Univ., Princeton, NJ)
This work presents a new numerical approach for computing underwater
acoustic wave responses due to moving underwater acoustic sources in complex underwater environments using a Spectral Element Method (SEM).
The SEM is similar to the Finite Element Method (FEM), but uses higher-order shape functions with Gauss-Lobatto-Legendre quadrature, naturally
creating a diagonal mass matrix. Thus, we can use fast explicit time integration, taking advantage of the diagonal mass matrix, without compromising accuracy. Therefore, the SEM is much more suitable for large-scale parallel
3D time-domain wave analyses than the conventional FEM. In our numerical experiments, we used a large-scale parallel SEM wave simulator, SPECFEM3D. We verified the SEM solution of acoustic (fluid pressure) waves in
a 3D acoustic fluid setting of infinite extent, induced by a moving point
source, against its analytical counterpart. Numerical experiments showed
that our tool accurately accommodates wave behavior at fluid-solid interfaces of complex geometry and infinite extents of water and solids, truncated
using absorbing boundary conditions. Due to such versatility, our tool can
be used for forward and inverse acoustic wave analyses in complex
underwater systems of large extent (e.g., shallow water and deep ocean).
1pUWc7. Robust multiple focusing with adaptive time-reversal mirror
using a genetic algorithm. Gi Hoon Byun and Jea Soo Kim (Korea
Maritime and Ocean Univ., Dongsam 2-dong,
Yeongdo-gu, Busan 606-791, South Korea, gihoonbyun77@gmail.com)
Kim and Shin [J. Acoust. Soc. Am. 115(2), 600-606 (2003)] suggested
an extension of the single-constraint adaptive time-reversal mirror (ATRM)
to simultaneous multiple focusing, by considering multiple constraints. In
their proposed method, the optimization is performed using the linearly constrained minimum variance (LCMV) method, a well-known optimization
method in the field of adaptive signal processing that allows multiple linear
constraints. However, highly correlated signal vectors from the probe source
locations cause prominent spatial sidelobes in multiple TR focusing. In this
study, a genetic algorithm is combined with the LCMV method to calculate the
backpropagation vector that satisfies new constraint responses. Numerical
simulations demonstrate that multiple TR focusing combined with a genetic
algorithm can significantly suppress sidelobes, especially when the focal
locations are close to each other.
1pUWc8. Modeling method on acoustic scattering from penetrable
objects using a hybrid Kirchhoff/ray approach. Bin Wang, Kaiqi Zhao,
Jun Fan (School of Naval Architecture, Ocean and Civil Eng., Shanghai
Jiao Tong Univ., Dongchuan Rd. 800, Shanghai, Shanghai 200240, China,
lanseyifan48@sjtu.edu.cn), and Guoyin Zheng (School of Naval
Architecture, Ocean and Civil Eng., Shanghai Jiao Tong Univ., Wuhan,
Hubei, China)
A method is put forward to investigate acoustic scattering from double-sided water-loaded targets that are penetrable even at high frequencies.
The present approach is an extension of the TriKirch method, which was elaborated for non-penetrable targets [J. Acoust. Soc. Am. 140(3), 1878-1886
(2016)] and in which complex targets are discretized into many triangular planar facets. The plane-facet reflection coefficient is introduced to calculate the scattering
amplitude of each non-rigid triangular facet with the TriKirch
method, and the total scattering amplitude of the target is obtained by coherently superposing the contributions of all facets. Double-bounce
(DB) contributions to the scattering are calculated by a ray-tracing method.
Computations are made for a double-sided water-loaded finite cylindrical
shell with perforated ring ribs, and the results show good agreement with experiments.
1pUWc9. Determination of eigenvalue in elastic multi-layered
waveguides. Wang Wei and Bin Wang (Shanghai Jiao Tong Univ., No. 800,
Dongchuan Rd., Minhang District, Shanghai 200240, China, wei_wang@sjtu.edu.cn)
With the extensive use of composites for vibration and noise reduction in
recent years, the properties of acoustic propagation in composite plates have
come to the foreground. However, traditional root-finding methods based on
contour integration and finite differences show low precision and root-missing problems when calculating the dispersion curve, especially when the loss factor
is large and cannot be ignored. In this paper, a multimodal
approach [Pagneux et al., Proc. R. Soc. A 462, 1315-1339 (2006)] is applied
to solve the eigenvalue problem for elastic multi-layered waveguides. The
longitudinal and transverse potential functions are expanded on a group of
orthogonal bases, respectively, and a matrix equation is then derived from the
boundary conditions, which can be solved after truncation at an adequate
number of modes. Dispersion curves of composite plates with different losses
are presented. The numerical simulations show that this approach is more effective than
traditional methods.
1pUWc10. Characterization of the Arctic ambient noise environment. Rui
Chen and Henrik Schmidt (Mech. Eng., Massachusetts Inst. of
Technol., 77 Massachusetts Ave., Rm. 5-223, Cambridge, MA 02139,
ruic@mit.edu)
Historically, ambient noise in the Arctic Ocean has been predominantly produced by
diffuse thermal ice-cracking events or ice-ridge grinding. Isotropic, range-distributed noise-source models are typically used to simulate this environment. However, the presence of the Beaufort Lens and changes in the
Arctic climate have altered the ambient noise environment. Specifically, the
new noise environment consists mostly of ice-cracking events that occur
at discrete ranges and bearings. As a result, these noise models may no longer be adequate. This study analyzes ambient noise data collected in the
Beaufort Sea during the 2016 ICEX US Navy exercise to characterize the
new Arctic ambient noise environment. Points of focus include determining
whether ice-cracking noises in the new environment are discrete in time or
continuous, as found in the analysis of the SIMI’94 Arctic ambient noise
data. Statistics on the ice-cracking events in the new noise environment,
such as the events’ amplitude distribution, are also presented, with the motivation of better describing the environment so that more precise models
may be created.
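The event-statistics step can be sketched as threshold detection on a smoothed envelope. The record below is synthetic (Gaussian background plus 30 injected decaying transients); the sample count, amplitudes, and threshold rule are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(4)
x = 0.1 * rng.standard_normal(100000)          # diffuse background noise
starts = np.arange(1000, 100000 - 60, 3300)[:30]
peaks = rng.uniform(2.0, 5.0, starts.size)     # transient peak amplitudes
for i, a in zip(starts, peaks):                # inject 30 discrete ice-crack-like events
    x[i:i + 50] += a * np.exp(-np.arange(50) / 10.0)
env = np.convolve(np.abs(x), np.ones(15) / 15.0, mode="same")  # smoothed envelope
thresh = 5.0 * np.median(env)                  # robust threshold above background
idx = np.flatnonzero(env > thresh)
n_events = 1 + int(np.sum(np.diff(idx) > 100)) # merge nearby crossings into events
print(n_events)                                # recovers the 30 injected events
```

With detections in hand, the peak amplitude of each above-threshold region can be histogrammed to estimate the amplitude distribution discussed above.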
1pUWc11. Performance of a ray-based blind deconvolution algorithm
for shipping sources in the Santa Barbara Channel. Nicholas C.
Durofchalk (Mech. Eng., Georgia Inst. of Technol., 101 N. College Ave.,
Lebanon Valley College 208 Ctr. Hall, Annville, PA 17003, ncd001@lvc.edu), Juan Yang (Inst. of Acoust., Chinese Acad. of Sci., Beijing, China),
and Karim G. Sabra (Mech. Eng., Georgia Inst. of Technol., Atlanta,
GA)
This paper investigates the performance of a ray-based blind deconvolution (RBD) algorithm for sources of opportunity, such as shipping noise, in
an ocean waveguide recorded on a vertical line array (VLA). The RBD
algorithm [Sabra et al., JASA, 2010, EL42-7] relies on estimating the
unknown phase of the source through wideband beamforming along a well-resolved ray path, in order to approximate the environment’s Green’s functions (or
channel impulse responses) between the source and the VLA elements, as
well as to recover the unknown radiated source signal. The RBD algorithm is tested here on shipping sources recorded in the Santa Barbara shipping channel (water depth ~550 m). Four VLAs, with short (~15
m) and long (~56 m) apertures, were deployed between the north-
and south-bound shipping lanes and collected acoustic data throughout a
week in mid-September 2016. Here, we discuss (1) the performance of conventional and adaptive (MVDR) beamforming when estimating ray arrivals, and (2) the ability of the RBD algorithm to deconvolve multiple
VLAs using only the source phase estimated from a single VLA.
The performance of the RBD results is discussed in terms of the accuracy
of travel times and of the estimated Green’s functions.
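The core RBD idea can be sketched in a few lines: the unknown source phase, estimated along one well-resolved path, is removed from every channel spectrum, leaving the channel impulse responses up to a common time shift. In this toy version the first channel stands in for the beamformer output, the channels are pure delays, and the source has unit magnitude; none of this reproduces the actual implementation:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 1024
f = np.fft.rfftfreq(n, d=1.0)                  # normalized frequency axis
S = np.exp(2j * np.pi * rng.uniform(size=f.size))   # unknown source phase, |S| = 1
delays = np.array([100, 130, 175])             # channel delays in samples (assumed)
R = S[None, :] * np.exp(-2j * np.pi * f[None, :] * delays[:, None])  # received spectra
# "beamforming" along the best-resolved ray: here simply the first channel
phi_hat = np.angle(R[0])                       # estimated source phase (+ ray travel time)
G_hat = R * np.exp(-1j * phi_hat)[None, :]     # remove source phase from all channels
cir = np.fft.irfft(G_hat, n)                   # estimated channel impulse responses
print(np.argmax(cir, axis=1))                  # delays relative to the resolved ray
```

The recovered impulse responses peak at the delays relative to the reference path (0, 30, and 75 samples here), mirroring how RBD yields Green's functions up to the travel time of the beamformed ray.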
1pUWc12. Reflection coefficient measurement using a finite-difference
injection technique. Nele Börsing, Carly Donahue, Dirk-Jan van Manen,
and Johan O. Robertsson (Inst. of Geophys., Dept. of Earth Sci., ETH
Zürich, Sonneggstrasse 5, NO H32, Zürich 8092, Switzerland,
nele.boersing@erdw.ethz.ch)
Experimentally determining the acoustic reflection properties of materials requires accurate knowledge of the incident and reflected wave field at
the reflecting interface. Consequently, a number of methods have been proposed for measuring the reflection coefficient, but most are limited to measuring only the normal incidence reflection coefficient or assume plane wave
conditions. Here, we derive the reflection coefficient from pressure measurements by using a finite-difference wave field injection technique, which is
applicable for a wide range of incidence angles and does not rely on the
common plane wave assumption. It requires measurements at three planes
parallel to the reflector and addresses two key objectives, namely, (1) the
recorded wave field is separated into its incident and reflected components
without the need of time-windowing, and (2) the separated components are
re-datumed to the reflecting interface. The latter step comprises a forward
propagation in time of the incident wave and a time-reversed backward
propagation of the reflected wave. We experimentally test the methodology
on laboratory data of the reflection from the free surface recorded in water
and demonstrate its applicability to accurately measure the reflection coefficient for incidence angles up to 60°.
1pUWc13. Roughness parameters imaging with a multibeam
echosounder. Samuel Pinson, Yann Stephan (SHOM, Brest 29200,
France, samuelpinson@yahoo.fr), and Charles W. Holland (Penn State
Univ., State College, PA)
The aim of this study is to perform quantitative imaging of random seafloor
parameters using a multibeam echosounder. More specifically, we
focus here on interface roughness with
controlled parameters, using 3D modeling of a layered medium with rough
interfaces. The model consists of a sum of integrals over each interface
of the layered medium, which implies a reasonable computational cost and
makes it possible to perform a large number of numerical experiments. Specular
reflection and backscattering are analyzed by estimating their means and
variances through imaging algorithms. [Research supported by the ONR
Ocean Acoustics Program.]
1pUWc14. Interface wave studies on a laboratory test facility. Gopu R.
Potty, Rendhy M. Sapiie, Chris J. Small, and James H. Miller (Dept. of
Ocean Eng., Univ. of Rhode Island, 115 Middleton Bldg., Narragansett, RI
02882, potty@egr.uri.edu)
Rayleigh wave measurements were made in the Interface Wave Test Facility at the University of Rhode Island to develop techniques to estimate
the shear wave properties of near-surface sediment. Repeating source events
at various ranges, spaced equally at 0.15 m from a fixed receiver (accelerometer), created a virtual source array. The source events consisted of dropping
a tennis ball, thereby exciting Rayleigh waves. A monitoring accelerometer
was used to record each source event at a fixed distance from the source
location in order to time it. This made it possible to calculate the travel time of the
Rayleigh waves from the monitoring receiver to the fixed receiver. The
phase-velocity dispersion is calculated using high-resolution frequency-wavenumber processing. The shear wave speed of the sediment layers in the
“sand tank” is estimated using an inversion scheme. The shear wave speed
estimates are compared to direct measurements using a calibrated bender element system at selected depths. The bender element system was placed in a
test tank initially filled with mason sand and the Rayleigh wave inversion
system was deployed on the surface of the sand in the tank. [Work supported
by Army Research Office and ONR.]
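A single-frequency toy version of the dispersion processing: a narrowband Rayleigh arrival crosses the virtual array, and the slope of its phase across range gives the wavenumber, hence the phase velocity c = 2πf/k. Only the 0.15 m spacing comes from the abstract; the frequency, velocity, and array size are invented:

```python
import numpy as np

fs, f0, c_true = 1000.0, 50.0, 120.0      # Hz, Hz, m/s (soft-sediment-like guess)
dx = 0.15                                 # m, source spacing quoted in the abstract
ranges = dx * np.arange(12)               # virtual-array receiver ranges
t = np.arange(0, 1.0, 1.0 / fs)
k_true = 2 * np.pi * f0 / c_true
sig = np.cos(2 * np.pi * f0 * t[None, :] - k_true * ranges[:, None])
# complex demodulation at f0, then fit the phase slope across range
z = (sig * np.exp(-2j * np.pi * f0 * t)[None, :]).mean(axis=1)
phase = np.unwrap(np.angle(z))
k_est = -np.polyfit(ranges, phase, 1)[0]  # wavenumber from phase-vs-range slope
c_est = 2 * np.pi * f0 / k_est
print(round(c_est, 1))                    # recovers the assumed 120.0 m/s
```

The full frequency-wavenumber processing repeats this idea over all frequencies at once via a 2D transform, producing the dispersion curve that feeds the shear-speed inversion.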
1pUWc15. Mode coherence in random matrix theory simulations.
Kathleen E. Wage (George Mason Univ., 4400 University Dr., MSN 1G5,
Fairfax, VA 22030, kwage@gmu.edu) and Lora J. Van Uffelen (Univ. of
Rhode Island, Narragansett, RI)
Random matrix theory (RMT) can be used to simulate the effect of internal waves on broadband acoustic mode time series in deep water, as
described by Hegewisch and Tomsovic in several papers [Europhys. Lett.
2012; J. Acoust. Soc. Am. 2013]. Using RMT, narrowband mode propagation consists of the multiplication of a series of propagator matrices
designed to model the mode coupling due to internal waves at a single frequency. In a recent paper, we varied the correlation of the mode coupling
matrices with frequency and examined the effect on the time series generated via Fourier synthesis [Van Uffelen & Wage, Inst. of Acoustics Conf.,
2016]. This talk focuses on another key aspect of the RMT model, i.e.,
mode-to-mode coherence at a single frequency. Modes decorrelate as they
propagate through internal waves. Building on Creamer’s work, Colosi and
Morozov use transport theory to analyze mode-to-mode coherence and
mode energy in deep water environments [J. Acoust. Soc. Am., 2009]. This
talk compares the energy and coherence results obtained with RMT methods
to those from transport theory. [Work sponsored by ONR.]
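The RMT construction can be sketched as a product of random, weakly coupling unitary propagator matrices applied to a mode-amplitude vector: energy is conserved exactly at each step, while the energy initially placed in one mode relaxes toward equipartition, mimicking mode decorrelation by internal waves. All sizes and coupling strengths below are invented:

```python
import numpy as np

rng = np.random.default_rng(7)
n_modes, n_steps, n_real = 8, 200, 100
e_final = []
for _ in range(n_real):
    a = np.zeros(n_modes, dtype=complex)
    a[0] = 1.0                                 # all energy starts in mode 1
    for _ in range(n_steps):
        h = 0.05 * (lambda g: g + g.T)(rng.standard_normal((n_modes, n_modes)))
        w, v = np.linalg.eigh(h)               # random symmetric coupling matrix
        a = (v * np.exp(1j * w)) @ v.T @ a     # unitary step propagator exp(i h)
    e_final.append(np.abs(a[0]) ** 2)
print(round(float(np.mean(e_final)), 3))       # initial mode's mean energy after mixing
```

Mode-to-mode coherences ⟨a_m a_n*⟩ can be accumulated across realizations in the same loop, which is the quantity compared against transport theory in the talk.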
Underwater low-frequency sound can travel great distances in the
oceans, and sound triggered in the sea by the mechanical energy transfer
from the Earth’s crust (e.g., earthquakes or volcanoes) and by the energy
transfer occurring at the water surface (e.g., wave storms or ice-quakes) can
be detected at thousands of kilometers from the source. However, source
characterization based on recorded sound data analysis involves significant
scientific challenges and uncertainties. A variety of geological and physical
oceanographic features can cause horizontal refraction, reflection, and diffraction in global-scale sound propagation. In this regard, three-dimensional
underwater sound models are required for accurately predicting global scale
sound propagation. In this work, based on a Southern Mid-Atlantic Ridge
earthquake event, we show the importance of geological and physical
oceanographic features in the long range propagation of oceanic sound. A
three-dimensional sound propagation model using the parabolic equation
(PE) approximation and the split-step Fourier (SSF) method is used. Numerical results are compared with field data recorded by hydrophones at great
distances from the source. Based on the case study, a discussion and recommendation on the global scale underwater sound modeling and data analysis
are presented.
1pUWc17. Investigation of error of propagated sound due to
bathymetric interpolation. Erin C. Hafla, Erick Johnson (Mech. Eng.,
Montana State Univ., 205 Cobleigh Hall, Bozeman, MT 59717-3900,
erinhafla@gmail.com), and Jesse Roberts (Sandia National Labs.,
Albuquerque, NM)
Paracoustic is a parallelized acoustic-wave propagation package developed by Sandia National Laboratories to model marine hydrokinetic (MHK)
devices in complex environments. It solves a linearized set of the velocitypressure partial differential equations using the finite-difference method and
3533
J. Acoust. Soc. Am., Vol. 141, No. 5, Pt. 2, May 2017
allows for 3D variations in medium densities, sound speeds, and bathymetry. In-situ measurements of these quantities, or the solution resolution from
hydrodynamic models, are sometimes resolved at a coarser spacing than is
required to accurately predict the propagated sound levels within Paracousti.
This will therefore require interpolation of these quantities onto a refined
grid, introducing model errors. The size of the refined grid is determined by
the maximum frequency of an MHK source profile and the underwater
sound speed. A single point source in a two-layer waveguide with realistic
bathymetry is used to compare the model error between three interpolation
schemes: nearest-neighbor, linear, and cubic. Preliminary results indicate
that there are significant differences in model error due to the particular
interpolation scheme used.
1pUWc18. 3D acoustic propagation modeling of the construction of the
Block Island Wind Farm. Anthony F. Ragusa, Gopu R. Potty, James H
Miller (Ocean Eng., Univ. of Rhode Island, 15 Receiving Rd., Narragansett,
RI 02882, afragusa@my.uri.edu), Ying-Tsong Lin, Arthur Newhall (Woods
Hole Oceanographic Inst., Woods Hole, MA), Kathleen J. Vigness-Raposa
(Marine Acoust., Inc., Middletown, RI), Jennifer Giard (Marine Acoust.,
Inc., Narragansett, RI), Michael Ross, and Jesse Roberts (Sandia National
Labs., Albuquerque, NM)
The Block Island Wind Farm (BIWF) consists of five turbines in water
depths of approximately 30m. The substructure for the BIWF turbines consists of jacket type construction with piles driven to the bottom pinning the
structure to the seabed. These jacket legs and foundation piles were driven
at a rake angle of approximately 13 from the vertical. This introduced
three-dimensional sound propagation effects as indicated by measurements
using a towed array during construction which showed azimuthal variability.
In order to model the complicated source, we will use finite element techniques (developed by Sandia National Laboratories) to provide the starting
field for a 3D parabolic equation model. Eventually the model predictions
will be compared to the actual measurements taken during wind turbine construction. This finite element model will be initially validated by modeling
the wave propagation in an instrumented sandbox at the University of Rhode
Island (URI) before applying the model to the Block Island modeling scenario. [Work supported by Bureau of Ocean Energy Management (BOEM).]
Acoustics ’17 Boston
3533
1p SUN. PM
1pUWc16. Global scale underwater sound modeling and data analysis.
Tiago Oliveira (Woods Hole Oceanographic Inst., Singapore, Singapore),
Ying-Tsong Lin, Arthur Newhall (Woods Hole Oceanographic Inst.,
Bigelow 213, MS#11, WHOI, Woods Hole, MA 02543, ytlin@whoi.edu),
Stephen Nichols (Penn State Univ., State College, PA), and Dave Bradley
(Penn State Univ., Woods Hole, Massachusetts)
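The interpolation-error comparison described in 1pUWc17 can be illustrated with a one-dimensional sketch; the smooth bathymetry profile below is hypothetical and is not the Paracousti test case.

```python
import numpy as np
from scipy.interpolate import interp1d

# Hypothetical smooth bathymetry profile: depth in m versus range in km.
def depth(x):
    return 100.0 + 20.0 * np.sin(0.5 * x)

x_coarse = np.linspace(0.0, 20.0, 21)   # coarse survey spacing (1 km)
x_fine = np.linspace(0.0, 20.0, 401)    # refined model grid

# Maximum absolute error of each interpolation scheme on the refined grid.
errors = {}
for kind in ("nearest", "linear", "cubic"):
    f = interp1d(x_coarse, depth(x_coarse), kind=kind)
    errors[kind] = np.max(np.abs(f(x_fine) - depth(x_fine)))

for kind, err in errors.items():
    print(f"{kind:8s} max error: {err:.3f} m")
```

For a smooth profile the error ordering is cubic < linear < nearest-neighbor, which is the kind of scheme-dependent difference the abstract investigates for the acoustic field itself.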
SUNDAY EVENING, 25 JUNE 2017
EXHIBIT HALL D, 5:30 P.M. TO 7:00 P.M.
Exhibit
Exhibit Opening Reception
The instrument and equipment exhibit is located near the registration area in Exhibit Hall D.
The Exhibit will include computer-based instrumentation, scientific books, sound level meters, sound
intensity systems, signal processing systems, devices for noise control and acoustical materials, active
noise control systems, and other exhibits on acoustics.
Exhibit hours are Sunday, 25 June, 5:30 p.m. to 7:00 p.m., Monday, 26 June, 9:00 a.m. to 5:00 p.m., and
Tuesday, 27 June, 9:00 a.m. to 12:00 noon.
Coffee breaks on Monday and Tuesday mornings, as well as an afternoon break on Monday, will be held
in the exhibit area.
SUNDAY EVENING, 25 JUNE 2017
ROOM 305, 5:00 P.M. TO 6:00 P.M.
Meeting of Accredited Standards Committee (ASC) S2 Mechanical Vibration and Shock
C. F. Gaumond, Chair ASC S2
14809 Reserve Road, Accokeek, MD 20607
J. T. Nelson, Vice Chair ASC S2
Wilson Ihrig & Associates, Inc., 6001 Shellmound St., Suite 400, Emeryville, CA 94608
Working group chairs will report on the status of various shock and vibration standards currently under development. Consideration will
be given to new standards that might be needed over the next few years. Open discussion of committee reports is encouraged.
People interested in attending the meeting of the TAG for ISO/TC 108, Mechanical vibration, shock and condition monitoring, and four
of its subcommittees, take note—that meeting will be held in conjunction with the Standards Plenary meeting at 9:15 a.m. on Monday,
26 June 2017.
Scope of S2: Standards, specification, methods of measurement and test, and terminology in the field of mechanical vibration and shock,
and condition monitoring and diagnostics of machines, including the effects of exposure to mechanical vibration and shock on humans,
including those aspects which pertain to biological safety, tolerance, and comfort.
J. Acoust. Soc. Am., Vol. 141, No. 5, Pt. 2, May 2017
MONDAY MORNING, 26 JUNE 2017
BALLROOM B, 8:00 A.M. TO 9:00 A.M.
Session 2aIDa
Interdisciplinary: Keynote Lecture
Keynote Introduction—8:00
2a MON. AM
Invited Paper
8:05
2aIDa1. Making, mapping, and using acoustic nanobubbles for therapy. Constantin Coussios (Inst. of Biomedical Eng., Dept. of
Eng. Sci., Univ. of Oxford, Old Rd. Campus Res. Bldg., Oxford OX3 7DQ, United Kingdom, constantin.coussios@eng.ox.ac.uk)
Acoustically driven bubbles continue to find new therapeutic uses, including drug delivery to tumors, opening the blood-brain barrier, and direct fractionation of tissues for surgical applications. Creating acoustic cavitation at length scales and pressure amplitudes
compatible with biology remains a major challenge and could be greatly facilitated by a new generation of nano-scale cavitation nuclei
that stretch our current understanding of bubble stability, acoustic microstreaming, and the interaction between cavitating bubbles and
biological media. Furthermore, in order to ensure patient safety and treatment efficacy, monitoring the location and activity of the nanobubbles during ultrasonic excitation is essential. It will be shown that Passive Acoustic Mapping (PAM) of sources of nonlinear acoustic
emissions enables real-time imaging and control of cavitation activity at depth within the body, thereby making it possible to monitor
therapy using solely acoustical means. Combined, these techniques for making, mapping, and using acoustic nanobubbles can enable
improved delivery of modern immuno-oncology agents to cells and tumors, needle-free transdermal drug delivery and vaccination, and
new spinal therapies to treat the spinal cord or repair the intervertebral disc.
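The passive-mapping idea, localizing cavitation by beamforming its broadband emissions, can be illustrated with a minimal frequency-domain delay-and-sum sketch. The geometry and synthetic signal below are hypothetical, and this is far simpler than the robust PAM algorithms used in practice.

```python
import numpy as np

c, fs, n = 1500.0, 4.0e6, 4096
t = np.arange(n) / fs
f0 = 500e3

# 8-element line array along x at z = 0; a cavitation-like source below it.
sensors = np.stack([np.linspace(-0.02, 0.02, 8), np.zeros(8)], axis=1)
src = np.array([0.0, 0.01])

# Synthetic broadband emission, received with propagation delay r/c per sensor.
sig = np.exp(-((t - 5e-5) * 2e5) ** 2) * np.sin(2 * np.pi * f0 * t)
w = 2 * np.pi * np.fft.rfftfreq(n, 1 / fs)
S = np.fft.rfft(sig)
rec = np.array([np.fft.irfft(S * np.exp(-1j * w * np.linalg.norm(s - src) / c), n)
                for s in sensors])

def pam_energy(p):
    """Delay-and-sum: advance each channel by its time of flight to p, sum, energy."""
    d = np.array([np.linalg.norm(s - p) for s in sensors])
    R = np.fft.rfft(rec, axis=1) * np.exp(1j * w[None, :] * d[:, None] / c)
    return np.sum(np.fft.irfft(R.sum(axis=0), n) ** 2)

zs = np.linspace(0.005, 0.03, 26)
energies = [pam_energy(np.array([0.0, z])) for z in zs]
best = zs[int(np.argmax(energies))]
print(best)  # the energy map peaks near the true source depth of 0.01 m
```

At the true source point all channels align coherently, so the summed energy is maximal there; mapping this energy over a grid is the essence of passive acoustic mapping.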
MONDAY MORNING, 26 JUNE 2017
ROOM 208, 9:15 A.M. TO 12:20 P.M.
Session 2aAAa
Architectural Acoustics: Sound Propagation Modeling and Spatial Audio for Virtual Reality II
Dinesh Manocha, Cochair
Computer Science, University of North Carolina at Chapel Hill, 250 Brooks Building, Columbia Street, Chapel Hill,
NC 27599-3175
Lauri Savioja, Cochair
Department of Media Technology, Aalto University, PO Box 15500, Aalto FI-00076, Finland
U. Peter Svensson, Cochair
Acoustics Research Centre, Department of Electronic Systems, Norwegian University of Science and Technology, Trondheim NO-7491, Norway
Chair’s Introduction—9:15
Invited Papers
9:20
2aAAa1. Fast multipole accelerated boundary element method for the Helmholtz equation in three dimensions on heterogeneous
architectures. Nail A. Gumerov (UMIACS, Univ. of Maryland, 115 A.V. Williams Bldg., College Park, MD 20742, gumerov@umiacs.
umd.edu) and Ramani Duraiswami (Comput. Sci. & UMIACS, Univ. of Maryland, College Park, MD)
Numerical simulations related to human hearing, architectural and underwater acoustics, and multiple scattering require computational solution of the Helmholtz equation in three dimensions with complex shaped boundaries. Boundary element methods (BEM) are
among the most accurate and efficient methods used for this purpose. However, solution of high-frequency/large-domain problems is challenging due to poor scaling of conventional solvers with the frequency and domain size. The use of the fast multipole method (FMM) resolves many problems related to the scalability [N. A. Gumerov and R. Duraiswami, J. Acoust. Soc. Am. 125(1), 191–205,
2009]. Additional acceleration is needed to solve practical problems over the entire range of human audible frequencies; it can be provided by graphics processors (GPUs) combined with multicore CPUs. In this work, we report the development and demonstration of the
FMM/GPU accelerated BEM for the Helmholtz equation in 3D designed for hybrid CPU/GPU architectures. Innovations related to
choices of preconditioners, parallelization strategies, and choice of optimal parameters will be presented. A single PC version of the
algorithm shows accelerations of the order of 10 times compared to the BEM accelerated with the FMM alone. Results for computing
head related transfer functions and other standard calculations will be provided.
9:40
2aAAa2. Geometry-based diffraction auralization for real-time applications in environmental noise. Jonas Stienen and Michael
Vorlaender (Inst. of Tech. Acoust., RWTH Aachen Univ., Kopernikusstr. 5, Aachen, NRW 52074, Germany, jst@akustik.rwth-aachen.de)
Geometrical acoustics has become the first choice in a variety of virtual acoustics applications, especially if physics-based sound field synthesis at real-time rates is desired. The procedures for interactive auralization on the one hand and for the calculation of room impulse responses and outdoor noise prediction on the other are converging, if they do not already differ merely in the resolution of parameters, i.e., the number of rays or the order of image sources. When it comes to diffraction effects at geometrical boundaries, however, the discrepancy between model accuracy and calculation efficiency is still high due to the great computational effort required for geometrical path analysis and propagation simulation. Ongoing research in this field aims at closing the gap between the two worlds: making diffraction effects audible in a virtual reality application with physically correct sound propagation and faithful reproduction, as required for serious applications in the acoustic consulting business and in research, primarily for outdoor noise situations.
10:00
2aAAa3. Plane-wave decomposition and ambisonics output from spherical and nonspherical microphone arrays. Nail A.
Gumerov, Dmitry N. Zotkin, and Ramani Duraiswami (VisiSonics Corp., A.V. Williams Bldg. #115, College Park, MD 20742,
ramani@umiacs.umd.edu)
Any acoustic sensor embedded in a baffle disturbs the spatial acoustic field to a certain extent, and the recorded field is different
from a field that would have existed if the baffle were absent. Recovery of the original (incident) field is a fundamental task in spatial
audio. For some sensor baffle geometries, such as the sphere, the disturbance of the field by the sensor can be characterized analytically
and its influence can be undone to recover the incident field. However, for arbitrarily shaped baffles, numerical methods have to be
employed. In the current work, the baffle influence on the field is characterized using boundary element methods, and a framework to
recover the incident field from measurements from sensors embedded on the baffle in the plane-wave function basis is developed. Field
recovery also allows generation of high-order ambisonics representations of the spatial audio scene. Experimental results using both
complex and spherical scatterers will be presented.
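For the spherical-baffle case mentioned in 2aAAa3, the disturbance is analytic: the pressure on a rigid sphere weights each spherical-harmonic order of the incident field by a mode strength b_n(ka), which can then be divided out to recover incident-field (plane-wave or ambisonics) coefficients. A brief numerical sketch of b_n follows, using the standard rigid-sphere expressions; sign conventions vary in the literature, and this is an illustration, not the authors' BEM framework.

```python
import numpy as np
from scipy.special import spherical_jn, spherical_yn

def mode_strength(n, ka):
    """Rigid-sphere mode strength b_n(ka): incident term minus the scattered
    term fixed by the zero-radial-velocity boundary condition."""
    jn = spherical_jn(n, ka)
    jnp = spherical_jn(n, ka, derivative=True)
    hn = jn + 1j * spherical_yn(n, ka)                      # spherical Hankel fn
    hnp = jnp + 1j * spherical_yn(n, ka, derivative=True)   # and its derivative
    return 4 * np.pi * (1j ** n) * (jn - (jnp / hnp) * hn)

ka = 2.0
for n in range(4):
    print(n, abs(mode_strength(n, ka)))
```

Dividing measured spherical-harmonic coefficients by b_n (with regularization where |b_n| is small) recovers the incident field; for non-spherical baffles this analytic step is replaced by the BEM-based characterization described in the abstract.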
10:20–10:40 Break
10:40
2aAAa4. Head-related transfer function personalization for the needs of spatial audio in mixed and virtual reality. Ivan J. Tashev
and Hannes Gamper (Microsoft Res. Labs, Microsoft Corp., One Microsoft Way, Redmond, WA 98052, ivantash@microsoft.com)
Virtual reality (VR) and mixed reality (MR) devices are typically head-mounted displays, aiming to project objects into an entire environment (VR) or in addition to the existing environment (MR). In both cases, these devices should have the means to create the spatial audio images of the projected objects. Considering the compact form factor of the devices, the most common approach is binaural audio reproduction through a pair of headphones, isolated (VR) or acoustically transparent (MR). One of the key factors for successful realization of such a spatial audio system is personalization of the user's head-related transfer functions (HRTFs). In this paper, we present and compare several approaches for doing so under the heavy constraints of VR/MR device design.
11:00
2aAAa5. Perceptual evaluation on the influence of individualized near-field head-related transfer functions on auditory distance
localization. Guangzheng Yu, Yuye Wu, and Bo-sun Xie (School of Phys. and Optoelectronics, South China Univ. of Technol., Wushan
Rd. 381#, Tianhe District, Guangzhou, Guangdong 510640, China, scgzyu@scut.edu.cn)
It is desired that spatial audio for virtual reality be able to reproduce auditory localization information so as to recreate virtual sources at different directions and distances. Distance-dependent binaural cues (such as the interaural level difference, ILD), loudness, and environmental reflections are considered auditory distance localization cues. In the free-field, near-field case, within about 1.0 m, the binaural cue encoded in near-field head-related transfer functions (HRTFs) is an absolute and dominant distance localization cue [D. S. Brungart, JASA, 1999]. However, non-individualized rather than individualized HRTFs are usually used in spatial audio synthesis. In the present work, the perceptual influence of individualized HRTFs on distance localization is evaluated by a psychoacoustic experiment. Binaural signals with various bandwidths are synthesized by filtering the input stimuli with individualized and non-individualized near-field HRTFs and then reproduced over headphones. Preliminary results of the virtual source localization experiment indicate that individualized HRTFs have little influence on distance localization at low frequencies but some influence at mid and high frequencies. [This work was supported by the Natural Science Foundation of China, Grant No. 11574090.]
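The distance dependence of the ILD cue discussed in 2aAAa5 can be illustrated with a deliberately crude model: pure spherical spreading to two ears with no head shadowing. The ear spacing is a nominal value, and real near-field HRTFs show a stronger effect than this sketch.

```python
import numpy as np

ear = 0.09  # nominal half head width in m (hypothetical)

def ild_db(r):
    """Crude ILD for a source on the interaural axis at distance r from the
    head center: 1/r spreading to each ear, no diffraction or shadowing."""
    return 20 * np.log10((r + ear) / (r - ear))

for r in (0.2, 0.5, 1.0):
    print(f"r = {r:.1f} m -> ILD ~ {ild_db(r):.1f} dB")
```

Even this toy model shows the ILD growing rapidly as the source approaches within about 1 m, which is why the near-field binaural cue is a dominant distance cue there.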
Contributed Papers
11:20
2aAAa6. Algebraic reflections in acoustic ray tracing. Erik Molin (Eng. Acoust., Div. of Eng. Acoust., Faculty of Eng., Lund Univ., P.O. Box 118, Lund 22100, Sweden, erik.molin@construction.lth.se)
Stochastic ray tracing is currently one of the most popular geometric acoustic algorithms. It is widely used, and it primarily excels at modeling the late room response at high frequencies. A significant bottleneck of the algorithm is the high computational cost of testing rays for intersection with the model geometry. Another is the high number of rays required for convergence. Several methods exist to reduce this cost by reducing geometric complexity. This paper proposes a method to algebraically compute transmission paths between receivers and sources by use of bidirectional reflectance distribution functions (BRDFs), thus decreasing the total number of rays needed. For each ray-geometry intersection, transmission paths are calculated recursively, and several transmission paths can thereby be considered for each intersection test, while allowing for point-like transmitters and receivers.
11:40
2aAAa7. Three-dimensional remote ensemble system using the immersive auditory display “Sound Cask.” Shiro Ise (School of Information Environment, Tokyo Denki Univ., 2-1200 Muzai Gakuendai, Inzai, Chiba 270-1382, Japan, iseshiro@mail.dendai.ac.jp), Yuko Watanabe (School of Information Environment, Tokyo Denki Univ., Inzai-shi, Japan), and Kanako Ueno (Meiji Univ., Tokyo, Japan)
A 3D sound field simulation system using the immersive auditory display system, the “sound cask,” has been developed to create a virtual environment that reproduces the 3D acoustics of concert halls for musicians. The simulation system is based on the boundary surface control principle. The original sound field was measured using a microphone array consisting of 80 omnidirectional microphones installed at the nodes of a C80 fullerene structure. The virtual sound field was then constructed in a cask-shaped space (approx. 2 × 2 m), with 96-channel full-range loudspeakers installed in the space. The 3D acoustic waves of music, including the acoustic conditions on stage, were created virtually inside the sound cask. For this, the first step was to design inverse filters of the MIMO system between the 96 loudspeakers and the 80 microphones located in the sound cask. Next, the inverse filters, the impulse responses measured in actual concert halls, and signals from instruments played by musicians were convolved in real time. With two sound casks connected to each other, a three-dimensional remote ensemble system can be realized. Room acoustic indices under the actual and virtual conditions were compared, and subjective experiments involving professional musicians were performed.
12:00–12:20 Panel Discussion
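The inverse-filter design step in 2aAAa7, for the MIMO system between loudspeakers and control microphones, is typically a per-frequency regularized pseudoinverse. A minimal sketch follows with a random plant matrix and hypothetical dimensions scaled down from the 96 × 80 system; the regularization constant is also hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)
L, M = 6, 4      # loudspeakers, control microphones (scaled-down dimensions)
beta = 1e-2      # Tikhonov regularization constant

# Hypothetical plant matrix at one frequency bin: mic m <- loudspeaker l.
H = rng.normal(size=(M, L)) + 1j * rng.normal(size=(M, L))

# Regularized right pseudoinverse: Hinv = H^H (H H^H + beta I)^(-1),
# so driving the speakers with Hinv @ (desired mic pressures) reproduces them.
Hinv = np.linalg.solve(H @ H.conj().T + beta * np.eye(M), H).conj().T

err = np.linalg.norm(H @ Hinv - np.eye(M))
print(err)  # small: H @ Hinv approximates the identity on the mic positions
```

In a full system this computation is repeated for every frequency bin of the measured transfer functions, with the regularization keeping the filters stable where the plant is poorly conditioned.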
MONDAY MORNING, 26 JUNE 2017
ROOM 207, 9:20 A.M. TO 12:20 P.M.
Session 2aAAb
Architectural Acoustics: Acoustic Regulations and Classification of New and Retrofitted Buildings I
Birgit Rasmussen, Cochair
SBi, Danish Building Research Institute, Aalborg University Copenhagen, A.C. Meyers Vænge 15, Copenhagen SV 2450,
Denmark
Jorge Patricio, Cochair
LNEC, Av. do Brasil, 101, Lisbon 1700-066, Portugal
David S. Woolworth, Cochair
Oxford Acoustics, 356 CR 102, Oxford, MS 38655
Invited Papers
9:20
2aAAb1. Update on the change in sound insulation requirements in Canada. Christoph Hoeller and Jeffrey Mahn (Construction,
National Res. Council Canada, 1200 Montreal Rd., Ottawa, ON K1A 0R6, Canada, christoph.hoeller@nrc.ca)
In the 2015 edition of the National Building Code of Canada (2015 NBCC, published in January 2016), sound insulation requirements between dwelling units are given in terms of Apparent Sound Transmission Class (ASTC). This is a significant change from the
requirements in previous editions of the NBCC which were given in terms of Sound Transmission Class (STC). While the STC rating
only accounts for sound transmission through the separating assembly, the ASTC rating also takes into account structural flanking transmission via adjoining building elements. An overview of the change in requirements and of NRC activities to support the code change
was given at the ASA meeting in Jacksonville (November 2015). This presentation will provide an update on the implementation of the
code change since it came into effect in federal regulations in early 2016. Tools and guidelines provided by the NRC such as Research
Report RR-331, “Guide to Calculating Airborne Sound Transmission in Buildings” and the revised version of soundPATHS, NRC’s
web application to calculate ASTC ratings, will be presented.
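The apparent-transmission calculation implemented by tools such as soundPATHS power-sums the direct and flanking transmission paths. A simplified single-number sketch follows; the path values and the helper function are hypothetical, and the actual RR-331 procedure works frequency band by frequency band before rating the result.

```python
import math

def apparent_tl(direct_db, flanking_db):
    """Power-sum a direct transmission loss with flanking path losses (dB).
    Simplified single-number version of the ISO 15712 / RR-331 approach."""
    total = 10 ** (-direct_db / 10) + sum(10 ** (-f / 10) for f in flanking_db)
    return -10 * math.log10(total)

# Hypothetical wall: STC 55 direct path, twelve flanking paths at 65 dB each.
print(round(apparent_tl(55, [65] * 12), 1))
```

The example shows why ASTC ratings sit below STC ratings: even flanking paths 10 dB better than the direct path drag the apparent value down by several decibels.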
9:40
2aAAb2. Accommodation for assemblies in widespread use: The STC 50-ish wall. Benjamin Markham (Acentech Inc., 33 Moulton
St., Cambridge, MA 02138, bmarkham@acentech.com)
The classroom acoustics standard ANSI/ASA S12.60 makes a compelling accommodation: a 20 cm (8 in.) concrete masonry unit
wall, properly detailed, “…is an acceptable alternate assembly that conforms to the intent of [the requirements].” The relevant requirement is for a Sound Transmission Class (STC) of 50. This wall type is in widespread and generally successful use in U.S. school buildings with concrete construction. An analogous wall in widespread use in steel buildings is a single row of 92 mm (3-5/8 in.) metal studs
with two layers of 16 mm (5/8 in.) gypsum board on each side and insulation in the stud cavity. Under certain stud configurations, this
wall exceeds STC 50; in the most common stud configurations, however, it falls short. ANSI/ASA S12.60 makes no accommodation for
this wall. With requirements for STC 50 walls in FGI guidelines for healthcare, the ANSI/ASA standard for schools, code requirements
for multifamily buildings, and in other standards and design guidelines, there is an increasing need to reconcile compliance requirements
with constructions commonplace in the United States. This presentation will examine first the acoustical data and then the design implications—both benefits and pitfalls—of accommodations in American standards and design guidelines such as the one made in ANSI/
ASA S12.60.
10:00
2aAAb3. Laminated gypsum wallboard in mid-rise wood construction. Zhiqiang Shi (CertainTeed Acoust. Res., Saint-Gobain
Northborough R&D Ctr., 9 Goddard Rd., Northborough, MA 01532, zhiqiang.shi@saint-gobain.com) and Robert Marshall (CertainTeed
Bldg. Sci., Saint-Gobain, Mississauga, ON, Canada)
Mid-rise wood construction is a cost-effective and sustainable choice to achieve high performance in commercial and multi-family
residential housing. It is gaining popularity in the industry as several building codes, including IBC (2009) and NBCC (2015), now allow
five- and six-story constructions, respectively. Acoustic comfort in such buildings is important to attract and retain occupants. Because of the lightweight nature of such construction, the new code changes the requirements for airborne sound insulation between dwellings from a Sound Transmission Class (STC) rating, which only describes the sound insulation of the common partition between rooms, to an ASTC rating, which includes contributions from all of the flanking paths. This paper shows, with examples, that laminated SilentFX/QC gypsum wallboard is a fast, economical, space-saving, and code-compliant solution for meeting the ASTC requirement of the new building codes.
The test methodology is based on the ISO 15712 and ISO 10848 standards and follows the procedure outlined in NRC publication RR-331. It consists of testing the STC and radiation efficiency of wall partitions and measuring the vibration reduction index of various junctions between the partitions and different flanking paths. The ASTC predictions were validated against prior results obtained from the NRC Flanking Facility.
10:20
2aAAb4. Comparison of Sound Transmission Class and Outdoor-Indoor Transmission Class for specification of exterior facade
assemblies. John LoVerde and David W. Dong (Veneklasen Assoc., 1711 16th St., Santa Monica, CA 90404, jloverde@veneklasen.
com)
Designing and specifying the acoustical performance of building exterior facades to protect occupants from transportation noise sources is a common acoustical task. Single-number quantities (SNQs) are useful to specify and communicate acoustical performance to other design professionals. Common SNQ values include Sound Transmission Class (STC) and Outdoor-Indoor Transmission Class (OITC). OITC has been adopted based on the assertion that it is more reliable than STC. This paper will compare the SNQs using a large dataset and evaluate whether there is any advantage to using OITC over STC as the method of specifying or communicating exterior facade performance.
10:40–11:00 Break
11:00
2aAAb5. Brazilian acoustics regulation: An overview and proposal of façade requirements. Carolina R. Monteiro, Marcel Borin (Res. and Development, Harmonia Acústica Davi Akkerman + Holtz, Av. Mofarrej 1200, São Paulo, São Paulo 053111000, Brazil, carolina.monteiro@harmoniaacustica.com.br), Mariana Shimote, Teddy Yanagiya (Project Management, Harmonia Acústica Davi Akkerman + Holtz, São Paulo, São Paulo, Brazil), and María Machimbarrena (Appl. Phys., Universidad de Valladolid, Valladolid, Spain)
The Brazilian acoustics regulation for new residential buildings entered into force in 2013, and in the years since, the construction and acoustic consultancy market has developed new procedures to comply with the façade requirements. In this paper, typical existing methods are presented, as well as new proposals to be incorporated in the future revision of the regulation. Furthermore, existing and possible future Brazilian requirements are translated to the suggested descriptor DnT,50 and compared within the acoustic classification scheme for dwellings proposed in ISO/CD 19488.
11:20
2aAAb6. Improvement of acoustic and thermal performances of façades on retrofit buildings. Giovanni Semprini (Dept. of
Industrial Eng., Univ. of Bologna, Viale Risorgimento 2, Bologna 40136, Italy, giovanni.semprini@unibo.it), Antonino Di Bella (Dept.
of Industrial Eng., Univ. of Padova, Padova, PD, Italy), Simone Secchi (Dept. of Industrial Eng., Univ. of Florence, Firenze, Italy), Luca
Barbaresi (Dept. of Industrial Eng., Univ. of Bologna, Bologna, Bologna, Italy), Nicola Granzotto (Dept. of Industrial Eng., Univ. of
Padova, Padova, Italy), and Anastasia Fotopoulou (Dept. of Architecture, Univ. of Bologna, Athens, Greece)
Deep renovation of existing buildings represents an important strategy to go beyond standard energy retrofit (such as increasing the thermal insulation of the building envelope) and to provide more benefits to residents in non-energy aspects, like improved architectural quality of the building and indoor quality of life. In this context, additional façade elements (balconies, solar greenhouses, etc.) can be effective solutions to increase thermal and acoustic performance as well as to provide more living space toward the outside. This paper analyzes and compares different technical construction systems for façade improvement in order to define robust solutions correlated both to the outdoor climate context (to increase energy performance towards nearly Zero Energy Buildings) and to outdoor noise (for better sound insulation). A case study is presented, and dynamic energy simulations, performed on a building made with materials and construction technology widespread in Italy in the ’60s, show the energy impact of different solutions. The evaluation of acoustic façade insulation highlights the need to consider not only the properties of materials but also the effects of flanking transmission, sometimes correlated with thermal bridges, and façade shapes, of which continuous-balcony solutions give an important increase in acoustic performance.
11:40
2aAAb7. Meta-analysis of subjective-objective sound insulation data. Jonas Brunskog (Acoust. Technol., DTU Elec. Eng.,
Elektrovej, Bldg. 352, Kgs. Lyngby DK-2800, Denmark, jbr@elektro.dtu.dk) and Birgit Rasmussen (Aalborg University Copenhagen
(AAU-CPH), SBi, Danish Bldg. Res. Inst., Copenhagen, Denmark)
A small meta-analysis of field data on subjective sound insulation in dwellings is carried out, using and analyzing data found in the literature. The investigated parameter is the correlation coefficient between the sound insulation metrics and the subjective annoyance (or similar). Both airborne and impact sound insulation are considered. One of the objectives is to see whether low-frequency adaptation terms according to ISO 717 yield an increased correlation. Other investigated aspects are the influence of lightweight versus heavyweight building elements and the question of the importance of vertical versus horizontal transmission.
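The investigated parameter in 2aAAb7, a correlation coefficient between a sound insulation metric and reported annoyance, can be computed directly; the following is a minimal sketch with entirely hypothetical data.

```python
import numpy as np

# Hypothetical field data: weighted sound insulation (dB) per building,
# with the mean annoyance score reported by its occupants.
r_w = np.array([48, 50, 52, 54, 55, 57, 58, 60, 62, 64])
annoyance = np.array([7.1, 6.8, 6.0, 5.9, 5.2, 4.8, 4.5, 3.9, 3.1, 2.8])

# Pearson correlation coefficient between metric and annoyance.
r = np.corrcoef(r_w, annoyance)[0, 1]
print(f"correlation coefficient: {r:.2f}")  # strongly negative, as expected
```

In a meta-analysis, coefficients like this one, reported per study, become the data points, and the question is whether alternative metrics (e.g., with ISO 717 low-frequency adaptation terms) systematically strengthen the correlation.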
Contributed Paper
12:00
2aAAb8. On the definition of acoustic comfort in residential buildings. Nikolaos-Georgios Vardaxis, Delphine Bard (Construction Sci., Lund Univ., John Ericssons väg 1, Lund 22100, Sweden, nikolas.vardaxis@construction.lth.se), and Kerstin Persson Waye (Occupational and Environ. Medicine, Sahlgrenska Acad., Gothenburg Univ., Gothenburg, Sweden)
The aim of this study is to explore acoustic comfort in family apartments in Scandinavia, not only with regard to standardized acoustic data but also including the users’ perception of their living sound environment. A first approach is made to the definition of the concept of acoustic comfort, which was rarely defined before and used to be interpreted mostly as the absence of noise, or low or acceptable noise levels, in a place. In this article, acoustic comfort is not only the lack of discomfort; it is explained as the ability to have proper overall sound conditions for a certain activity in a certain space, considering physical characteristics and the users’ demands. Then, a method is set up for the evaluation of comfort in dwellings, including acoustic measurements and social surveys in test buildings. A questionnaire for the collection of the subjective responses of the tenants, regarding noise annoyance, sound perception, and emotions, is presented and analyzed.
MONDAY MORNING, 26 JUNE 2017
ROOM 206, 9:20 A.M. TO 12:20 P.M.
Session 2aAAc
Architectural Acoustics: Teaching and Learning in Healthy and Comfortable Classrooms III
Arianna Astolfi, Cochair
Politecnico di Torino, Corso Duca degli Abruzzi, 24, Turin 10124, Italy
Viveka Lyberg-Åhlander, Cochair
Clinical Sciences, Lund, Logopedics, Phoniatrics and Audiology, Lund University, Scania University Hospital,
Lund S-221 85, Sweden
David S. Woolworth, Cochair
Oxford Acoustics, 356 CR 102, Oxford, MS 38655
Invited Papers
9:20
2aAAc1. Teachers’ voice parameters and classroom acoustics—A field study and online survey. Nick Durup, Bridget M. Shield,
Stephen Dance (London South Bank Univ., LSBU, 103 Borough Rd., London SE1 0AA, United Kingdom, nicksenate@hotmail.com),
and Rory Sullivan (Sharps Redmore Acoust. Consultants, Ipswich, United Kingdom)
Many studies have suggested that teachers have a significantly higher rate of voice problems than the general population. In order to
better understand the possible influences of room acoustics on different voice parameters, a study has been carried out by London South
Bank University which involved measurements of voice parameters for teachers working in classrooms with varying acoustic conditions.
Data relating to the voice, including the average speech sound level, fundamental frequency, and phonation percentage, were captured
using an Ambulatory Phonation Monitor (APM), which measures the voice directly from skin vibrations at the neck, thereby excluding the effects
of other noise sources in the environment. The measured voice parameters were compared with the room acoustic data for the classrooms involved, which were surveyed separately from the voice measurements. In addition to the field measurements, an online questionnaire was undertaken with the support of two UK teacher trade unions. This was designed to gain further information on teachers’
experiences of voice problems and school acoustics in general and indicated that over 66% of the surveyed teachers had experienced
voice problems during their career. This paper will present the results of the field measurements and questionnaire survey.
3540
J. Acoust. Soc. Am., Vol. 141, No. 5, Pt. 2, May 2017
Acoustics ’17 Boston
3540
9:40
2aAAc2. Long-term voice monitoring with smartphone applications and contact microphone. Arianna Astolfi (Dept. of Energy,
Politecnico di Torino, Corso Duca degli Abruzzi, 24, Turin 10124, Italy, arianna.astolfi@polito.it), Alessio Carullo, Simone Corbellini
(Dept. of Electronics and Telecommunications, Politecnico di Torino, Turin, Italy), Massimo Spadola, Anna Accornero (Dept. of
Surgical Sci., A.O.U. Citta della Salute e della Scienza di Torino, Turin, Italy), Giuseppina E. Puglisi (Dept. of Energy, Politecnico di
Torino, Torino, Italy), Antonella Castellana (Dept. of Electronics and Telecommunications, Politecnico di Torino, Turin, Italy), Louena
Shtrepi (Dept. of Energy, Politecnico di Torino, Torino, Italy), Gian Luca D’Antonio (Dept. of Management and Production Eng.,
Politecnico di Torino, Turin, Italy), Alessandro Peretti (School of Specialization in Occupational Medicine, Universita di Padova,
Padova, Italy), Giorgio Marcuzzo, Alberta Pierobon, and Giovanni B. Bartolucci (Dept. of Cardiologic, Thoracic and Vascular Sci.,
Universita di Padova, Turin, Italy)
In recent years, the growing interest in the recognition of voice disorders as occupational diseases has required screening methods
adaptable to clinical requirements and capable of extending the collection of baseline data. In this framework, the use of smartphones has
gained increasing interest, thanks to advancements in digital technology, which have made them suitable for recording and analyzing acoustic
signals. Two smartphone applications, based on the Voice Care® technology, have been developed for long-term monitoring of voice
activity when combined with a cheap contact microphone embedded in a collar. The applications have been tested in the laboratory and used
for the monitoring of teachers at kindergarten, primary school, and university. Vocal Holter App allows the selection of short- and long-term monitoring modes, and of three different clusters of vocal parameters related to intensity, intonation, and load, respectively. Most of
the results are based on the distributions of occurrences of vocal parameters. A warning light informs the person under monitoring of pathologic voice. Vocal Holter Rec allows data recording and a personalized analysis based on updated parameters. The equipment
allows downloading and saving data on a dedicated web site for further processing, comparisons over time, or sharing with physicians or
rehabilitators.
10:00
2aAAc3. Vocal fatigue in virtual acoustics scenarios. Pasquale Bottalico (Communicative Sci. and Disord., Michigan State Univ.,
1026 Red Cedar Rd., Lansing, MI 48910, pb@msu.edu), Lady Catherine Cantor Cutiva, and Eric J. Hunter (Communicative Sci. and
Disord., Michigan State Univ., East Lansing, MI)
The overuse of the voice by professional voice users, such as teachers, is known to cause physiological vocal fatigue. Vocal fatigue
is used to denote negative vocal adaptation that occurs as a consequence of prolonged voice use or vocal load. This study investigates
how self-reported vocal fatigue is related to voice parameters (sound pressure level SPL, fundamental frequency f0, and their standard
deviations) and the duration of the vocal load. Thirty-nine subjects were recorded while reading a text. Different acoustic scenarios
were artificially created to increase the variability in the speech produced (3 reverberation times, 2 noise conditions, and 3 auditory feedback levels), for a total of 18 tasks per subject presented in random order. For each scenario, the subjects answered questions addressing their perception of vocal fatigue on a visual analog scale. A model relating vocal fatigue to acoustic vocal parameters is proposed. The
duration of the vocal load contributed to 55% of the variance explained by the model, followed by the interaction between the standard
deviations of the SPL and f0 (24%). The results can be used to give a simple feedback during voice dosimetry measurements.
10:20
2aAAc4. Individual factors and its association with experienced noise annoyance in Swedish preschools. Fredrik Sjödin (Psych.,
Umeå Univ., Beteendevetarhuset, Umeå 90187, Sweden, fredrik.sjodin@umu.se)
Studies have shown that preschool teachers often report having a troublesome working environment in terms of high noise levels.
Noise annoyance is often reported from employees working under poor acoustical conditions. The aim of the study was to investigate
whether there is an association between rated noise annoyance and actual noise exposure for Swedish preschool teachers. Furthermore,
the study also aimed to investigate whether preschool teachers with different individual characteristics differ in their rated noise annoyance at work. The study included 90 preschool teachers in Sweden. Data were collected by means of personally carried noise dosimeters and
by questionnaires during one representative work week. The average equivalent noise exposure was 71 dBA and the average rated noise
annoyance was 65 on a 0-100 mm scale. Rated noise annoyance was not correlated to the sound exposure during a work week (r = 0.66,
P = 0.42). Analysis of differences in noise annoyance ratings between preschool teachers with different individual characteristics (hearing impairment, tinnitus, age, and gender) revealed no statistically significant group differences. Other factors need to be investigated to
better explain what affects differences in rated noise annoyance at work among preschool teachers in Sweden.
10:40
2aAAc5. One-year longitudinal study on teachers’ voice parameters in secondary-school classrooms: Relationships with voice
quality assessed by perceptual analysis and voice objective measures. Antonella Castellana (Dept. of Electronics and
Telecommunications, Politecnico di Torino, Corso Duca degli Abruzzi, 24, Turin, TO 10129, Italy, antonella.castellana@polito.it),
Giuseppina E. Puglisi, Giulia Calosso (Dept. of Energy, Politecnico di Torino, Torino, TO, Italy), Anna Accornero (Dept. of Surgical
Sci., Universita degli Studi di Torino, Turin, Italy), Lady Catherine Cantor Cutiva (Dept. of Communicative Sci. and Disord., Michigan
State Univ., East Lansing, MI), Fiammetta Fanari (Dept. of Surgical Sci., Universita degli Studi di Torino, Torino, TO, Italy), Franco
Pellerey (Dept. of Mathematical Sci., Politecnico di Torino, Torino, TO, Italy), Alessio Carullo (Dept. of Electronics and
Telecommunications, Politecnico di Torino, Turin, Italy), and Arianna Astolfi (Dept. of Energy, Politecnico di Torino, Turin, Italy)
This longitudinal work explores the relationships between three analyses used for assessing teachers’ voice use: the voice monitoring
during lessons that describes the teachers’ Vocal Behavior (VB), the perceptual assessment of voice by speech-language pathologists
and the estimation of objective parameters from vocalizations to define teachers’ Vocal Performance (VP). About 30 Italian teachers
from secondary schools were involved at the beginning and at the end of a school year. In each period, teachers’ vocal activity was
monitored using the Voice Care device, which acquires the voice signal through a contact microphone fixed at the neck to estimate
sound pressure level, fundamental frequency, and voicing time percentage. Once in each period, two speech-language pathologists performed a perceptual assessment of teachers’ voice using the GIRBAS-scale. On that occasion, teachers vocalized a sustained vowel
standing in front of a sound level meter in a quiet room. Jitter, Shimmer, and other parameters were extracted using Praat, while the new
Cepstral Peak Prominence Smoothed metric was estimated with a MATLAB script. Several relationships between the outcomes of
each analysis were investigated, e.g., statistical differences between the dimension “G” from GIRBAS-scale and objective measures for
VB and VP, and correlations between objective measures and perceptual ratings were assessed.
11:00
2aAAc6. Speech sound pressure level distributions and their descriptive statistics in successive readings for reliable voice
monitoring. Antonella Castellana, Alessio Carullo (Dept. of Electronics and Telecommunications, Politecnico di Torino, Corso Duca
degli Abruzzi, 24, Turin, TO 10129, Italy, antonella.castellana@polito.it), Umberto Fugiglando (Senseable City Lab, Massachusetts
Inst. of Technol., Cambridge, MA), Giuseppina E. Puglisi (Dept. of Energy, Politecnico di Torino, Torino, Italy), and Arianna Astolfi
(Dept. of Energy, Politecnico di Torino, Turin, Italy)
Due to the high prevalence of voice disorders among teachers, there is a growing interest in monitoring voice during lessons. However, the reliability of the results still needs to be examined in depth, especially in the case of repeated monitoring. The present study thus investigates
the variability of speech Sound Pressure Levels (SPL) under repeatability conditions, aiming to provide preliminary normative data for the
assessment of results. In a semi-anechoic chamber, 17 subjects read two phonetically balanced passages, each twice in succession, which
were simultaneously recorded with a sound level meter, a headworn microphone, and a portable vocal analyzer. Each speech sample
was characterized through the distribution of SPL occurrences and several descriptive statistics of SPL distribution were calculated. For
each subject, statistical differences between the two SPL distributions related to each passage were investigated using the Mann-Whitney
U-test. For each group of subjects using the same device, the Wilcoxon signed-rank test was applied to the paired lists of descriptive statistics related to each passage. For mean, mode, and equivalent SPL, the within-speaker and the within-group variability were assessed
for each device. For all the devices and SPL parameters, the within-speaker variability was not higher than 2 dB while the within-group
variability reached 5.3 dB.
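As an illustrative sketch (hypothetical SPL values and hand-rolled statistic, not the study's own code or data), the within-speaker comparison of two readings can be outlined as a Mann-Whitney U computation:

```python
# Illustrative sketch of the within-speaker comparison described above:
# a Mann-Whitney U statistic over SPL samples of two successive readings.
# All SPL values here are hypothetical, not the study's data.
def mann_whitney_u(x, y):
    """U = number of pairs (xi, yj) with xi > yj, counting ties as 0.5."""
    return sum(1.0 if xi > yj else 0.5 if xi == yj else 0.0
               for xi in x for yj in y)

# Hypothetical SPL occurrences (dB) from two readings of the same passage
reading_1 = [63.2, 64.8, 65.1, 66.0, 64.2]
reading_2 = [63.9, 64.1, 65.5, 65.8, 64.6]
u = mann_whitney_u(reading_1, reading_2)
# Under identical distributions, U is expected near len(x)*len(y)/2 = 12.5
```

A large deviation of U from its expected value flags a within-speaker difference between the two distributions; the within-group analysis in the abstract applies the analogous paired Wilcoxon signed-rank test to descriptive statistics such as the mean, mode, and equivalent SPL.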
Contributed Papers
11:20
2aAAc7. Vocal effort of teachers in different classrooms. Jamilla Balint,
Rafael P. Ludwig, and Gerhard Graber (Signal Processing and Speech
Commun. Lab., Graz Univ. of Technol., Inffeldgasse 16c, Graz, Styria
8010, Austria, balint@tugraz.at)
Classroom acoustics can be one of the causes of voice disorders among teachers. The purpose of this study was to investigate the vocal
effort of teachers and the noise level of students under different classroom
conditions (reverberant and less reverberant classrooms, junior and senior
level, different teaching types like teacher-centered teaching and team
work). We were able to carry out measurements over a period of two weeks,
where the same two classes were taught by the same teachers in a reverberant classroom during the first week and in a less reverberant classroom during the second week. The results indicate a much greater increase in the
vocal effort of the teachers over a period of 6 hours in a reverberant classroom compared to a less reverberant classroom. In addition, teaching
younger students requires a greater vocal effort since the noise level is much
higher. Also, extended breaks are crucial for the recovery of the voice in
reverberant spaces, whereas the voice needs less time to recover in less
reverberant spaces. A questionnaire filled out by the participating teachers
confirmed that the level of exhaustion at the end of the day is much greater
in reverberant spaces.
11:40
2aAAc8. Norwegian experiences of acoustics in classrooms. Anders
Homb (SINTEF Bldg. & Infrastructure, Høgskoleringen 7B, Trondheim
7465, Norway, anders.homb@sintef.no)
During the last decades, investment in new school buildings and in retrofitting existing buildings has been considerable in Norway, and it
will continue in the coming years. This period has also been characterized by a development of teaching methods and, as a consequence,
changes in area planning and room layout, initially without sufficient consideration of the acoustic challenges. The acoustical norms,
guidelines, and recommendations for schools were revised afterwards. Important issues in these documents were reverberation time, speech intelligibility, and the background noise levels necessary for sufficient conditions during
the educational process. The paper will present the development of the
acoustical requirements and recommendations in this period together with
some layout examples. The revised requirements and recommendations
have been based on international experiences and some Norwegian research
studies. The paper will present some results from these studies, both from
measurements of relevant parameters and evaluation of student perception.
The analysis will focus on the reverberation time, signal-to-noise ratio, and
how to prevent noise in the classroom area from adjacent spaces. Finally,
the paper will give some suggestions on how to improve the acoustical conditions in the spaces for the future.
12:00
2aAAc9. Acoustical measurement of modern Japanese libraries.
Kazuma Shamoto, Mai Ikawa, Hiroshi Itsumura (Univ. of Tsukuba, 1-2
Kasuga, Tsukuba, Ibaraki 305-8550, Japan, shamoto.kazuma.ru@alumni.
tsukuba.ac.jp), Koji Ishida (Ono Sokki, Yokohama, Japan), and Hiroko
Terasawa (Univ. of Tsukuba, Tsukuba, Ibaraki, Japan)
The function of modern libraries is transforming from a quiet reading
space to an active, social, and interactive learning place for old and young
people including children. We investigate the room acoustics of modern
libraries in order to examine whether they can accommodate both the need for silence and the need for bustling communication. We measured impulse responses and sound-decay distribution patterns at three libraries (two university libraries and a
public library) with different architectural styles. In addition, we measured
the noise level during the library opening hours with active visitors. Every
library showed different patterns of sound propagation and visitor activities.
Library spaces with densely installed shelves absorbed noise strongly, while
spaces with few shelves and a multiple-height structure were highly echoic.
Some carefully designed spaces showed clear acoustical zoning, such that
sound from the bustling area hardly reached the quiet reading area even
with a high ceiling and sparse shelves.
MONDAY MORNING, 26 JUNE 2017
ROOM 313, 9:20 A.M. TO 12:00 NOON
Session 2aAB
Animal Bioacoustics: Behavior/Comparative Studies
Peter L. Tyack, Chair
Biology, University of St. Andrews, Sea Mammal Research Unit, Scottish Oceans Institute, St. Andrews KY16 8LB, United Kingdom
Contributed Papers
9:20
2aAB1. Shipboard echosounders negatively affect acoustic detection
rates of beaked whales. Danielle Cholewiak, Annamaria Izzi DeAngelis,
Peter Corkeron, and Sofie M. Van Parijs (NOAA Northeast Fisheries Sci.
Ctr., 166 Water St., Woods Hole, MA 02543, danielle.cholewiak@noaa.
gov)
Beaked whales are cryptic, deep-diving odontocetes that are sensitive to
anthropogenic noise. While their behavioral responses to navy sonar have
been the subject of extensive study, little effort has been expended to evaluate their responses to other types of acoustic signals, such as fisheries
echosounders. From 1 July to 10 August 2013, the Northeast Fisheries Science Center conducted a shipboard cetacean assessment survey, combining
visual observation and passive acoustic data collection. Simrad EK60
echosounders were used to collect prey field data; echosounder use was
alternated on/off on a daily basis to test for an effect on beaked whale detection rates. The software package Pamguard was used to detect, classify, and
localize individual beaked whales. A GLM was used to test the relationship
between acoustic detections and covariates; echosounder use negatively
affected beaked whale acoustic detection rates, and acoustic event durations
were significantly shorter. These results suggest that beaked whales are
reacting to echosounders by either moving away or interrupting foraging activity. This decrease in detectability has implications for management and
mitigation activities. The use of scientific echosounders is rapidly increasing, thus leading to potentially broad ecological implications for disturbance
effects on these sensitive species as well.
9:40
2aAB2. Signature whistles facilitate reunions and/or advertise identity
in Bottlenose Dolphins. Nicholas Macfarlane (Int. Union for the
Conservation of Nature, Washington, DC), Vincent Janik (Biology, Univ. of
St. Andrews, St. Andrews, Fife, United Kingdom), Frants H. Jensen (Aarhus
Inst. of Adv. Studies, Aarhus Univ., Aarhus, Denmark), Katherine McHugh
(Chicago Zoological Society’s Sarasota Dolphin Res. Program, Mote
Marine Lab., Sarasota, FL), Laela Sayigh (Biology, Woods Hole
Oceanographic Inst., Woods Hole, MA), Randall Wells (Chicago
Zoological Society’s Sarasota Dolphin Res. Program, Mote Marine Lab.,
Sarasota, FL), and Peter L. Tyack (Biology, Univ. of St. Andrews, Sea
Mammal Res. Unit, Scottish Oceans Inst., East Sands, St. Andrews, Fife
KY16 8LB, United Kingdom, plt@st-andrews.ac.uk)
Animals with stable relationships need mechanisms to stay in touch
when separated. Five decades of research suggest that signature whistles are
likely candidates for serving this contact-calling purpose in bottlenose dolphins. However, difficulties identifying the vocalizing individual and measuring inter-animal distances have hindered tests of call functions among
wild dolphins. Moreover, signature whistles almost certainly serve a variety
of functions; to focus on contact calling, it is therefore useful to identify contexts where animals need to maintain cohesion. By simultaneously tagging
pairs of mothers and calves to look at instances when the animals are separating and reuniting, we focus on testing specific contact functions of signature whistles. Drawing from the literature, we define three potential contact
call functions for signature whistles, each with its own hypothetical signature whistle distribution during separations and reunions: location monitoring, reunion calls, and identity advertisement calls. To test these potential
functions, we estimated the probability of an animal producing a signature
whistle at different stages of temporary separation events. Using a binomial
logistic regression model, we found that the data are consistent with signature whistles functioning as reunion calls or identity advertisement calls,
but not as location-monitoring calls, during separations and reunions.
10:00
2aAB3. First harmonic shape analysis of Brazilian free-tailed bat calls
during emergence. Yanqing Fu and Laura Kloepper (Dept. of Biology,
Saint Mary’s College, 264 Sci. Hall, Notre Dame, IN 46556, yfu@
saintmarys.edu)
Echolocating bats can adapt calls when facing challenging echolocation
tasks. Previous studies have shown that bats can change their pulse duration,
pulse repetition rate, or vary their start/end/peak frequencies depending on
behavior. Even though this kind of signal investigation reveals important
findings, these approaches to analysis use bulk parameters that may hide
subtleties in the call structure that could be important to the bat. In some
cases, calls may have the same start and end frequencies but have different
FM shapes and meet different sensory needs. In the present study, we demonstrate an algorithm for extracting the first harmonics of the Brazilian free-tailed bat (Tadarida brasiliensis) to investigate how the shape of the call
changes. High pass filtering, power banded time-frequency analysis, and
search algorithms were used to isolate the first harmonics. By tracking the
first harmonics, the detailed frequency modulation shapes of different bat
group sizes were obtained, and the differences among those traces were measured. The detailed shape analysis will provide new insight into the adaptive call design of bats.
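Harmonic tracking of this kind can be sketched as follows; this is a schematic illustration on a synthetic FM sweep with an assumed sample rate, not the authors' algorithm:

```python
# Schematic sketch: track the first harmonic of an FM call by following
# the peak frequency of successive short-time spectra. The sample rate
# and the synthetic sweep below are illustrative assumptions.
import numpy as np

FS = 250_000  # assumed sample rate, Hz (bat calls are ultrasonic)

def synth_fm_call(f_start=50_000.0, f_end=25_000.0, dur=0.005):
    """Synthetic downward FM sweep standing in for a bat call."""
    t = np.arange(int(dur * FS)) / FS
    phase = 2 * np.pi * (f_start * t + (f_end - f_start) * t**2 / (2 * dur))
    return np.sin(phase)

def track_first_harmonic(x, win=256, hop=128):
    """Return the peak-frequency trace (Hz) of the short-time spectrum."""
    freqs = np.fft.rfftfreq(win, 1 / FS)
    trace = []
    for start in range(0, len(x) - win, hop):
        spec = np.abs(np.fft.rfft(x[start:start + win] * np.hanning(win)))
        trace.append(freqs[np.argmax(spec)])
    return np.array(trace)

trace = track_first_harmonic(synth_fm_call())
# The trace sweeps downward, mirroring the synthetic call's FM shape
```

A real pipeline, as the abstract notes, would precede this with high-pass filtering and a search step to isolate the first harmonic from higher ones before comparing trace shapes across group sizes.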
10:20–10:40 Break
10:40
2aAB4. Vocalization in clouded leopards and tigers: Further evidence
for a proximate common ancestor. Edward J. Walsh (Boys Town National
Res. Hospital, 555 North 30th St., Omaha, NE 68131, edward.walsh@
boystown.org), Heather Robertson, Sandy Skeba (Nashville Zoo at
Grassmere, Nashville, TN), and JoAnn McGee (Boys Town National Res.
Hospital, Omaha, NE)
Previously, we reported that clouded leopards (Neofelis nebulosa) share
an unusual auditory response timing characteristic with tigers (Panthera
tigris) and possibly other representatives of the genus Panthera. Peripheral
auditory response timing, or latency, and stimulus frequency relationships
obey a rule in these species that is more parabolic in nature than the commonly observed inverse rule. That is, neural response latencies to low frequencies are as short, or shorter, than those measured at higher frequencies
in the measurable response spectrum. In this report, we consider the possibility that clouded leopards may share other bioacoustic qualities that differentiate large roaring cats from smaller members of the cat family, Felidae.
In that context, vocalizations were recorded from young adult clouded leopards at the Nashville Zoo at Grassmere and distinctive similarities within a
subset of P. tigris calls were observed. The acoustic properties of the low
level, friendly greeting known as prusten, more commonly known as chuffing, were of particular interest. Although similar in nature, quantitative and
structural differences clearly differentiate the calls of the two species, and these differences may provide insight into production mechanisms. This observation is of
particular interest in an evolutionary context because N. nebulosa and P.
tigris shared a proximate common ancestor.
11:00
2aAB5. A first description of rhythmic song in Omura’s whale
(Balaenoptera omurai). Salvatore Cerchio (New England Aquarium,
Central Wharf, Boston, MA 02110, scerchio@gmail.com), Sandra Dorning
(Univ. of Oregon, Eugene, OR), Boris Andrianantenaina (Les Baleines
Asseau, Nosy Be, Madagascar), and Danielle Cholewiak (NOAA Northeast
Fisheries, Woods Hole, MA)
Omura’s whale is a recently described tropical Balaenopterid whale
with virtually nothing known about its acoustic behavior. Recordings
have revealed a stereotyped 15-50 Hz amplitude-modulated vocalization,
rhythmically repeated in a typical Balaenoptera song manner. In order to
describe the characteristics of the song, continuous recordings were made
using archival recorders during 21 days at 4 sites off the northwest coast of
Madagascar in documented Omura’s whale habitat. A total of 926 hours of
recordings were manually browsed to identify all occurrences of the song
vocalizations, logging 9117 individual song units. Occurrence varied among
sites spread across 40 km of shelf habitat, indicating heterogeneous distribution of whales and use of habitat over space and days. Diel variation indicated higher incidence of song during daylight hours, counter to trends
found in other Balaenopterid whales. A total of 215 different individual series were identified ranging from 3 to 252 consecutive song units. For 121
individuals with more than 20 consecutive song units, the interval ranged
from 147.4 s to 289.0 s with a mean of 200.3 s (s.d. 25.9), and recorded
song duration ranged up to 13.33 hr. This represents the first description of
singing behavior for this species, suggesting a time-intensive behavioral display likely related to breeding.
11:20
2aAB6. Two new whale calls in the southern Indian Ocean, and their
geographic and seasonal distribution over five sites and seven years.
Emmanuelle C. Leroy (Laboratoire GeoSci. Ocean, Univ. of Brest, IUEM
Technopole Brest Iroise, Rue Dumont d’Urville, Plouzane 29280, France,
emmanuelle.leroy@univ-brest.fr), Flore Samaran, Julien Bonnel (LabSTICC, ENSTA Bretagne, Brest cedex 9, France), and Jean-Yves Royer
(Laboratoire GeoSci. Ocean, Univ. of Brest, Plouzane, France)
Since passive acoustics is widely used to monitor cetaceans, unidentified
signals from biological sources are commonly reported. The signals' characteristics and emission patterns can provide keys to identifying the possible sources. Here, we report two previously unidentified signals found in acoustic
records from five widely spread sites in the southern Indian Ocean and spanning seven years (2007, 2010 to 2015). The first reported signal (M-call)
consists of a single tonal unit near 22 Hz and lasting about 10 s. The second
signal (P-call) is also a tonal unit lasting about 10 s, but at a frequency near
27 Hz. The latter has often been interpreted as an incomplete Antarctic blue
whale Z-call (Balaenoptera musculus intermedia). From a systematic analysis of our acoustic database, we show that both signals have characteristics similar to blue whale vocalizations, but with spatial and seasonal patterns
that do not resemble any of the known populations dwelling in the southern
Indian Ocean. M-calls are recorded only in 2007, while P-calls are present
every recording year, with increasing abundance over time. P-calls may
co-occur with but are clearly distinct from Z-calls. The sources of the two
new calls have yet to be visually identified.
11:40
2aAB7. The effects of wind turbine wake turbulence on bat lungs.
Dorien O. Villafranco, Sheryl Grace, and Ray Holt (Mech. Eng., Boston
Univ., 110 Cummington Mall, Boston, MA 02215, dvillafr@bu.edu)
Bat mortality is known to increase near wind turbines. Recent studies
are in disagreement as to the exact cause of death of these bats. Literature
suggests that they are either killed upon direct contact with the turbine
blades or by barotrauma. In barotrauma, a sudden change in the surrounding
air-pressure causes tissue damage in biological structures that contain air,
most notably the lungs. The present work develops a computational model
of the bat lung, in which the lung is modeled as a gas bubble with an elastic
shell immersed in a fluid, whose dynamics are governed by a Rayleigh-Plesset-like equation. Pressure gradients near the wind turbine are obtained
using computational fluid dynamics. The lung’s response to pressure
changes is attained by simulating the pressure’s effect on the gas bubble.
The study allows for a greater understanding of bat barotrauma and its
potential link to wind turbine pressure fields.
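The bubble response described above can be illustrated with a minimal Rayleigh-Plesset-type integration; all parameter values below are assumptions for illustration, and the elastic shell term of the authors' model is omitted for brevity:

```python
# Minimal sketch of a Rayleigh-Plesset-type bubble response to a sudden
# ambient pressure drop. Parameter values are illustrative assumptions,
# and the elastic shell of the full lung model is omitted.

RHO = 1000.0    # density of surrounding fluid, kg/m^3 (water-like tissue)
P0 = 101325.0   # equilibrium ambient pressure, Pa
R0 = 1e-3       # equilibrium bubble ("lung") radius, m (hypothetical)
GAMMA = 1.4     # polytropic exponent of the enclosed gas

def simulate(p_inf, t_end=1e-3, dt=1e-8):
    """Explicit-Euler integration of R*Rdd + 1.5*Rd^2 = (p_gas - p_inf)/rho."""
    r, rd = R0, 0.0
    for _ in range(int(t_end / dt)):
        p_gas = P0 * (R0 / r) ** (3 * GAMMA)  # adiabatic gas pressure
        rdd = ((p_gas - p_inf) / RHO - 1.5 * rd * rd) / r
        rd += rdd * dt
        r += rd * dt
    return r

# A drop to 90% of ambient pressure (as in a turbine wake) lets the
# bubble expand and oscillate about a slightly larger equilibrium radius
r_final = simulate(0.9 * P0)
```

In the study itself, the pressure history driving such a model comes from computational fluid dynamics of the turbine wake rather than a prescribed step change.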
MONDAY MORNING, 26 JUNE 2017
ROOM 310, 9:15 A.M. TO 12:20 P.M.
Session 2aAO
Acoustical Oceanography: Session in Honor of David Farmer I
Tim Leighton, Cochair
Institute of Sound and Vibration Research, University of Southampton, Southampton, United Kingdom
Andone C. Lavery, Cochair
Applied Ocean Physics and Engineering, Woods Hole Oceanographic Institution, 98 Water Street, MS 11, Bigelow 211,
Woods Hole, MA 02536
Grant B. Deane, Cochair
Marine Physical Lab., Univ. of California, San Diego, 13003 Slack St., La Jolla, CA 92093-0238
Chair’s Introduction—9:15
Invited Papers
9:20
2aAO1. What’s in the water? Remote sensing of currents and “stuff” in the ocean. Gregory B. Crawford (Faculty of Sci., Univ. of
ON Inst. of Technol., 2000 Simcoe St. N., Oshawa, ON L1H 7K4, Canada, greg.crawford@uoit.ca)
High frequency acoustic remote sensing of the ocean was still in its infancy in the early 1980s. This paper reviews some early collaborative work with, and subsequent studies informed and inspired by, Dr. David Farmer. We summarize early efforts to assess ocean currents, turbulence, and air-sea gas exchange. We will also review a few somewhat serendipitous research studies involving the acoustic
assessment of fish wakes and sand dollar populations. More recent results associated with the measurement of tsunami currents will be
presented, which provide a gentle reminder to those who would use acoustic velocity estimates to ask: what’s in the water?
9:40
2aAO2. Submesoscale dynamics in the coastal ocean. Burkard Baschek, Ingrid Benavides, Ryan P. North (Inst. of Coastal Res.,
Helmholtz-Zentrum Geesthacht, Max-Planck-Str. 1, Geesthacht 21502, Germany, Burkard.Baschek@hzg.de), Geoffrey Smith, and
Dave Miller (Naval Res. Lab., Washington, DC)
High-resolution observations reveal the fast dynamics of submesoscale eddies in the coastal ocean. The eddies seem to play an important role in the ocean energy cascade and are thought to be important drivers for phytoplankton production. The eddies are frequently
observed in the coastal and open ocean and are characterized by sharp gradients of 1 °C/m and high Rossby numbers >10. In order to
simultaneously resolve the short temporal and spatial scales of submesoscale eddies, an observational multi-platform approach with
planes, a zeppelin, several vessels, gliders, and floats was used yielding a horizontal and vertical resolution of <1 m with repeat observations every 1 to 15 min. The Submesoscale Experiments (SubEx) took place off Catalina Island, CA, and off Bornholm in the Baltic
Sea. Observations were carried out with aerial sea surface temperature and hyperspectral measurements, rapid in situ measurements with
a towed instrument array, gliders with turbulence probes, and surface and subsurface velocity measurements with drifters, as well
as radar and Acoustic Doppler Current Profilers. Additional SAR, SST, and ocean color satellite imagery is used to investigate the
occurrence of submesoscale eddies in the coastal ocean. The observations indicate intense mixing, turbulent dissipation, and subsequent
restratification. The temperature distribution is closely linked to phytoplankton concentrations suggesting a strong bio-physical
coupling.
10:00
2aAO3. Imaging and mapping water mass intrusions and internal waves. Timothy F. Duda, Andone C. Lavery, and Glen
Gawarkiewicz (Woods Hole Oceanographic Inst., WHOI AOPE Dept. MS 11, Woods Hole, MA 02543, tduda@whoi.edu)
Oceanic intrusions and internal waves can each be imaged with echo sounders for dynamics and flux studies. We have recently
investigated signatures of intrusions, while internal-wave imaging is long established, with Farmer as a leader. Intrusive flow at fronts
with counteracting temperature and salinity gradients along isopycnals is one of many mixing phenomena in the ocean. Advection within
the intrusions is accompanied by diapycnal mixing above and below that may provide buoyant forcing. Probable double-diffusive mixing processes can create sharp interfaces and microstructure that can provide intrusion-following echo patterns. This backscattering is
weaker than that from shear-generated turbulence microstructure, but it can be spatially coherent along intrusions and thus robustly detectable. Mapping of intrusion features with a shipboard system using this method may enable targeted physical measurements within
intrusions to advance our knowledge, and can provide 2D structure maps if augmenting dropped probes are used. Note that plankton
backscattering may mask these signals in some environments and seasons. Internal-wave imaging by recording the geometry of passive
tracers moved vertically by the wave motions can be accomplished with even modest narrowband systems in plankton-rich environments, as shown with Quantifying, Predicting, and Exploiting Uncertainty program data from the East China Sea.
10:20–10:40 Break
10:40
2aAO4. Recent development and application of inverted echo sounders in observational physical oceanography. Qiang Li
(Graduate School at Shenzhen, Tsinghua Univ., B101 Tsinghua Campus, Nanshan Xili University Town, Shenzhen 518055, China, li.qiang@sz.tsinghua.edu.cn), David Farmer (Inha Univ., Vancouver, Br. Columbia, Canada), Timothy F. Duda (Woods Hole
Oceanographic Inst., Woods Hole, MA), Steven R. Ramp (Monterey Bay Aquarium Res. Inst., Carmel Valley, CA), Xianzhong Mao
(Graduate School at Shenzhen, Tsinghua Univ., Shenzhen, China), Jae-Hun Park (Inha Univ., Seoul, South Korea), and Xiao-Hua Zhu
(The Second Inst. of Oceanogr., Hangzhou, China)
Since the first experiment carried out by Rossby in Bermuda, inverted echo sounders have been used in physical oceanography
observations for half a century. The inverted echo sounder measures the round-trip acoustic travel time from the sea floor to the sea surface, thus acquiring vertically integrated information on the thermal structure, from which the first baroclinic mode of thermocline
motion can be inferred. Arrays of inverted echo sounders have been deployed almost all over the global ocean to observe internal waves,
mesoscale eddies, western boundary currents, etc., providing valuable targeted data for physical oceanographers. Acoustic aspects of the
inverted echo sounder performance have been recently examined. Sources of error affecting instrument performance include tidal
effects, barotropic adjustment, ambient acoustic noise and sea surface roughness. The latter two effects are explored with a simulation
that includes surface wave reconstruction, acoustic scattering based on the Kirchhoff approximation, wind generated noise, sound propagation, and the instrument’s signal processing circuitry. Not only does the analysis enhance our understanding of the acoustic travel time
data but also suggests new approaches to extend the application of inverted echo sounders, for example, for sensing wind over the sea.
New deployments of inverted echo sounders and recent developments in hardware system and auxiliary accessories are also introduced.
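The travel-time principle behind the inverted echo sounder can be sketched numerically (an illustrative toy only; the profiles and coefficients below are schematic assumptions, not instrument values): a warmer upper ocean raises sound speed and shortens the round-trip time tau = 2 * integral dz / c(z), which is the quantity the instrument inverts for thermocline motion.

```python
import numpy as np

def round_trip_time(z, c):
    """Round-trip acoustic travel time tau = 2 * integral dz / c(z), trapezoid rule."""
    return 2.0 * np.trapz(1.0 / np.asarray(c), np.asarray(z))

z = np.linspace(0.0, 4000.0, 401)            # depth grid, m (schematic)
c_ref = 1500.0 + 0.017 * z                   # schematic background profile, m/s
c_warm = c_ref + 5.0 * np.exp(-z / 300.0)    # warmed upper ocean -> faster sound

tau_ref = round_trip_time(z, c_ref)
tau_warm = round_trip_time(z, c_warm)
# warming concentrated near the surface shortens tau; time series of such
# measurements are inverted for the first baroclinic mode of thermocline motion
```

A fleet of moored instruments recording tau over months thus yields the vertically integrated thermal signal described above.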
11:00
2aAO5. Observations of Langmuir circulation in the open ocean using acoustic instrumentation mounted on a subsurface
drifting package. Len Zedel (Phys. and Physical Oceanogr., Memorial Univ. of NF, Chemistry-Phys. Bldg., Memorial University, St.
John’s, NF A1B 3X7, Canada, zedel@mun.ca)
The air-sea interface of the ocean is an energetic and dynamic environment that poses challenges for the positioning of instrumentation. One way to avoid the complexities of placing instrumentation at the ocean surface is to position a platform at some distance below
the surface and sample surface processes remotely using acoustic systems. David Farmer recognized the value of such a sampling
approach and led the Ocean Acoustics group at the Institute of Ocean Sciences to develop the required capabilities in the late 1980s. We
report on observations made during a cruise to Ocean Station Papa in October 1987. Acoustic instruments revealed persistent bands of
subsurface bubble clouds spaced by 5 to 10 m and extending in length up to 100 m. The clouds were aligned with the prevailing wind
direction consistent with the organization expected from Langmuir circulation. Average downward velocities of 6 cm/s were observed
in the bubble plumes that extended to a depth of 15 m. These early observations of near-surface processes motivated a series of instrumentation developments that have helped to explore the richness and complexity of movements within the ocean mixed layer.
11:20
2aAO6. Susy, Seascan, and other acoustical contraptions. Mark V. Trevorrow (Defence R&D Canada - Atlantic, 9 Grove St., PO
Box 1012, Dartmouth, NS B2Y 3Z7, Canada, mark.trevorrow@drdc-rddc.gc.ca)
In the early 1990s, the acoustical oceanography research group, led by Dr. David Farmer, developed several high-frequency sonar
platforms for near-surface oceanography. These platforms, suspended approximately 25 m below the surface, typically supported six frequencies of upward-looking echo-sounders and up to four steerable 100 kHz sidescan sonars. Additionally, the SUSY platform had four
extensible arms, 4.5 m in length, each supporting a wideband hydrophone. The platforms had sufficient batteries and data recording for
up to 40 hours of autonomous operation, which could be extended through scheduled on/off periods. The intent of these platforms was
to investigate properties of near-surface bubbles generated by breaking waves, with additional Doppler velocity measurements of surface waves and convective circulations. These platforms were also used for other studies, such as tidal convergence zones and ship wakes. The SEASCAN platform was also used to assess zooplankton and fish populations in a lake in Japan. This presentation will review key field trials
and scientific achievements generated through use of these platforms.
11:40
2aAO7. Passing on the excitement of experimental oceanography. Craig L. McNeil, Eric D’Asaro, and Andrey Shcherbina (Appl.
Phys. Lab., Univ. of Washington, 1013 NE 40th St., Seattle, WA 98105, cmcneil@apl.washington.edu)
Having graduated from the University of Victoria in 1995 with David Farmer as my advisor, I learned how exciting it is to work at
the cutting-edge of experimental oceanography. During my Ph.D., I studied bubble mediated air-sea gas exchange which involved chasing winter storms in the NE Pacific by ship. That work inspired our study of the role of tropical cyclones in the global carbon cycle using
air-deployed gas-sensing floats equipped with ambient noise recorders to evaluate wave breaking effects. I was also inspired when David enthusiastically showed me, on the echosounder, internal waves passing under the boat in Knight Inlet. I will present our recent AUV
measurements made in prominent estuarine features (e.g., salt wedge and internal hydraulic jump) and discuss the impact of these features on suspended sediment distributions and underwater acoustic communications.
12:00
2aAO8. Thirty years of scintillating acoustic data in diverse ocean environments, thanks to David Farmer. Daniela Di Iorio (Dept.
of Marine Sci., Univ. of Georgia, 250 Marine Sci. Bldg., Athens, GA 30602, daniela@uga.edu)
The acoustic scintillation method was first applied in a coastal tidal channel in the early 1980s by David Farmer and this laid the
foundation for studies of estuarine channel flows, bottom boundary layer dynamics, deep-sea hydrothermal plumes, and now more
recently hydrocarbon seeps. Over short distances, using high frequencies and high transmission rates, amplitude and phase fluctuations
measured over transmitter and receiver arrays have been used to infer horizontal (or vertical) flows and turbulent motions, all averaged
along the acoustic path over the range separating transmitters and receivers. Autonomous and cabled instrumentation have provided
measurements of temporally and spatially averaged quantities, continuously in time. This ability to make long-term continuous measurements has yielded major advances in our understanding of acoustic forward scatter from velocity and temperature fluctuations in moving random media and in identifying strong turbulence levels in a variety of ocean settings. Many of the measurements described here
would not have been possible without the generous contributions of David Farmer.
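The path-averaged flow retrieval underlying scintillation methods can be illustrated with synthetic data (the geometry and numbers here are assumptions for illustration, not values from the experiments described): turbulence advected through two parallel acoustic paths separated by a distance d produces nearly identical scintillation records offset by a lag t = d / v, so cross-correlating the records recovers the flow speed.

```python
import numpy as np

rng = np.random.default_rng(1)
fs = 100.0        # transmission rate, Hz (assumed)
d = 0.5           # separation between parallel acoustic paths, m (assumed)
v_true = 1.25     # along-channel flow speed to recover, m/s
shift = int(round(d / v_true * fs))         # 40-sample lag between records

s = rng.standard_normal(5000)               # scintillation pattern frozen into the flow
path_a = s[shift:]                          # path the turbulence crosses first
path_b = s[:-shift]                         # same pattern, delayed by d / v

xc = np.correlate(path_a - path_a.mean(), path_b - path_b.mean(), mode="full")
lag = np.argmax(xc) - (len(path_b) - 1)     # sample offset at peak correlation
v_est = d * fs / abs(lag)                   # path-averaged flow speed estimate
```

In practice the estimate averages over the whole acoustic path, which is exactly the spatially averaged quantity the abstract describes.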
MONDAY MORNING, 26 JUNE 2017
ROOM 312, 9:15 A.M. TO 12:20 P.M.
Session 2aBAa
Biomedical Acoustics and Physical Acoustics: Impact of Soft Tissue Inhomogeneities and Bone/Air on
Ultrasound Propagation in the Body
Vera Khokhlova, Cochair
University of Washington, 1013 NE 40th Street, Seattle, WA 98105
Robin Cleveland, Cochair
Engineering Science, University of Oxford, Inst. Biomedical Engineering, Old Road Campus Research Building,
Oxford OX3 7DQ, United Kingdom
Chair’s Introduction—9:15
Invited Papers
9:20
2aBAa1. Fullwave simulations of ultrasound propagation in the human body: Applications to imaging and motion estimation.
Gianmarco Pinton (Biomedical Eng., Univ. of North Carolina at Chapel Hill and North Carolina State Univ., 109 Mason Farm Rd.,
Taylor Hall Rm. 348, CB7575, Chapel Hill, NC 27599, gfp@unc.edu)
The Fullwave simulation tool, which solves a modified Westervelt equation, can model 3D ultrasound propagation in the human
body. It includes the effects of nonlinearity, frequency-dependent attenuation, and scattering. This high-order finite-difference simulation has a high dynamic range and the ability to include sub-resolution scattering physics, enabling it to tackle the computationally challenging
problem of generating ultrasound images directly from the first principles of propagation and reflection. Three dimensional acoustical
maps of the human body are derived from the Visible Human project, which provides the high degree of anatomical fidelity required to
model sources of image degradation such as reverberation clutter and aberration. A three dimensional intercostal imaging scenario
shows how the ribs degrade the image quality via reverberation and yet can improve the beam shape via apodization. It is shown that the
in situ Mechanical Index (MI) in the human body differs significantly from derated estimates in a homogeneous medium. Due to tissue
heterogeneities, the peak MI is close to the transducer and far from the focal region. Finally, we present a numerical model of subresolution displacements by using an impedance flow model applied to a scatterer composed of two elements. This shows that the numerical
model can support displacements that are over three orders of magnitude smaller than the grid spacing. Applications to acoustic radiation
force, elastography, and shear shock wave tracking are discussed.
9:40
2aBAa2. Effects of soft tissue inhomogeneities on nonlinear propagation and shock formation in high intensity focused
ultrasound beams. Petr V. Yuldashev, Anastasia S. Bobina (Phys. Faculty, M.V. Lomonosov Moscow State Univ., 119991, Russian
Federation, Moscow, Leninskie Gory, Moscow 119991, Russian Federation, petr@acs366.phys.msu.ru), Tatiana D. Khokhlova (Dept.
of Medicine, Univ. of Washington, Seattle, WA), Adam D. Maxwell, Wayne Kreider (Appl. Phys. Lab., Ctr. for Industrial and Medical
Ultrasound, Univ. of Washington, Seattle, WA), George R. Schade (Dept. of Urology, Univ. of Washington, Seattle, WA), Oleg
Sapozhnikov, and Vera Khokhlova (Phys. Faculty, M.V. Lomonosov Moscow State Univ., Moscow, Russian Federation)
Recent pre-clinical studies on boiling histotripsy (BH) of kidney in vivo have shown that the presence of soft tissue inhomogeneities such as skin, fat, and muscle layers does not prevent shock formation required for treatment. However, the increase in source power
required to compensate for tissue aberrations may be impractically high and result in nearfield damage. Simulations can provide deeper
insight into the impact of aberrations on shock formation in tissue and mitigation methods. A previously developed numerical solver of
the Westervelt equation was refined to account for tissue inhomogeneities assuming that backward scattering effects can be neglected.
Irradiation of human kidney using a single-element 1-MHz transducer with 5-cm radius and 9-cm focal distance was considered. Spatial
distributions of the sound speed and mass density in tissue were reconstructed directly from a 3D CT scan. Values of nonlinear and
absorption coefficients at each spatial location were assigned by CT image segmentation, using known literature data for different tissue
types. Results are illustrated and discussed, including degradation of the focal maximum, shock formation, power compensation requirements, and the potential for modeling the impact of inhomogeneities through the use of an elevated equivalent attenuation. [Work supported by RSF-14-12-00974 and NIH R01EB7643.]
10:00
2aBAa3. Atlas-based simulations of high-intensity focused ultrasound. Bradley Treeby (Medical Phys. and Biomedical Eng., Univ.
College London, Biomedical Ultrasound Group, Wolfson House, 2-10 Stephenson Way, London NW1 2HE, United Kingdom, b.treeby@ucl.ac.uk) and Jiri Jaros (Faculty of Information Technol., Brno Univ. of Technol., Brno, Czech Republic)
In silico investigations of high-intensity focused ultrasound (HIFU) have many applications, including patient selection (determining
whether a patient is a good candidate for therapy), treatment verification (determining the cause of adverse events or therapy failures),
and treatment planning (determining the optimum sonication parameters before therapy). Here, we use a patient atlas to study the effects
of tissue heterogeneities on HIFU sonications in the kidney and liver. The patient atlas is derived from a segmentation of digital cryosection images from the Visible Human Project run by the U.S. National Library of Medicine. For each organ, simulations were repeated
under both linear and nonlinear conditions, for different driving frequencies, and for several artificial configurations. These included
using a constant sound speed, constant impedance, no absorption, and without particular anatomical features (e.g., the muscle, skin, and
ribs). The relative importance of absorption, reflection, refraction, nonlinearity, and frequency is described. The parallel computing
requirements needed to perform these large-scale full-wave ultrasound simulations in heterogeneous and absorbing media are also
discussed.
10:20
2aBAa4. Thermodynamically viable wave equations for power law attenuation in viscoelastic media. Sverre Holm (Univ. of Oslo,
Gaustadalleen 23B, N-0316 Oslo, Norway, sverre@ifi.uio.no)
Many complex media of great practical interest, such as in medical ultrasound, display an attenuation that increases with a power
law as a function of frequency. Usually, measurements can only be taken over a limited frequency range, while wave equations often
model attenuation over all frequencies. There is therefore some freedom in how the models behave outside of this limited interval, and
many different wave equations have been proposed, in particular, fractional ones. In addition, it is desirable that a wave equation models
physically viable media and for that two conditions have to be satisfied. The first is causality, and the second is a criterion that comes
from thermodynamic considerations and implies that the relaxation modulus is a completely monotonic function. The latter implies that
attenuation asymptotically cannot rise faster than frequency raised to the first power. These criteria will be explained and used to evaluate several of the ordinary and fractional wave equations that exist.
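The asymptotic criterion can be illustrated numerically (a minimal sketch with made-up coefficients, not a model of any specific medium): estimate the high-frequency exponent y of a candidate attenuation law from a log-log slope fit, and flag laws with y > 1, such as the classical viscous omega-squared law, as thermodynamically inadmissible at high frequency.

```python
import numpy as np

def asymptotic_exponent(alpha, w_lo=1e6, w_hi=1e9, n=200):
    """Estimate y in alpha(w) ~ w**y from a log-log least-squares slope."""
    w = np.logspace(np.log10(w_lo), np.log10(w_hi), n)
    slope, _ = np.polyfit(np.log(w), np.log(alpha(w)), 1)
    return slope

y_viscous = asymptotic_exponent(lambda w: 1e-15 * w**2)    # Kelvin-Voigt-like law
y_frac = asymptotic_exponent(lambda w: 1e-7 * w**0.8)      # fractional-type law

def admissible(y, tol=1e-6):
    """Complete monotonicity requires attenuation to grow no faster than w**1."""
    return y <= 1.0 + tol
```

Within a measured band both laws may fit the data equally well; the criterion constrains only the extrapolated high-frequency behavior.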
10:40
2aBAa5. Nonlinear ultrasound simulations using the discontinuous Galerkin method. James F. Kelly (Dept. of Statistics and
Probability, Michigan State Univ., East Lansing, MI 48824, kellyja8@stt.msu.edu), Xiaofeng Zhao, and Robert J. McGough (Dept. of
Elec. and Comput. Eng., Michigan State Univ., East Lansing, MI)
Histotripsy with nonlinear ultrasound is an emerging noninvasive therapeutic modality that generates cavitation with high-intensity
shock waves to precisely destroy diseased soft tissue. Numerical simulation of these shock waves in nonlinear, absorbing media is
needed to characterize histotripsy systems and optimize treatments. We are developing a discontinuous Galerkin code based on the
Westervelt equation to simulate transient wave propagation in the brain and skull. The discontinuous Galerkin method is a good choice
for this simulation problem since this approach has high-order accuracy, geometric flexibility, low dispersion error, and excellent scalability on massively parallel machines. The Westervelt equation is formulated in a first-order flux form and discretized using a strong
form of the discontinuous Galerkin method. Numerical results, both linear and nonlinear, from 1D and 2D discontinuous Galerkin codes,
are presented and compared to both analytical and numerical benchmark solutions. In particular, the discontinuous Galerkin method captures nonlinear steepening of a high-intensity pulse with minimal numerical artifacts. The development of a 3D massively parallel code
is also briefly discussed. [This work was supported in part by a grant from the Focused Ultrasound Foundation.]
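As a toy illustration of the nonlinear steepening described above (not the authors' solver: this is a one-dimensional inviscid Burgers model, and piecewise-constant discontinuous Galerkin reduces to a first-order Godunov finite-volume scheme):

```python
import numpy as np

def godunov_step(u, dx, dt):
    """One step of u_t + (u^2/2)_x = 0 with exact Riemann (Godunov) fluxes;
    equivalent to discontinuous Galerkin at polynomial order zero."""
    f = lambda a: 0.5 * a * a
    ul, ur = u, np.roll(u, -1)                       # interface left/right states
    flux = np.maximum(f(np.maximum(ul, 0.0)), f(np.minimum(ur, 0.0)))
    return u - dt / dx * (flux - np.roll(flux, 1))

n = 400
dx = 1.0 / n
x = (np.arange(n) + 0.5) * dx
u0 = 1.0 + 0.5 * np.sin(2.0 * np.pi * x)             # smooth periodic pulse
dt = 0.4 * dx / np.abs(u0).max()

u = u0.copy()
for _ in range(400):
    u = godunov_step(u, dx, dt)
# the smooth profile steepens toward a shock while the cell average is conserved
```

Higher-order DG replaces the piecewise-constant cells with polynomial elements and numerical fluxes at element faces, reducing the dissipation visible in this first-order sketch.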
11:00
2aBAa6. Focusing ultrasound through bones: Past, current, and future transcostal and transskull strategies. Jean-Francois Aubry
(Institut Langevin, CNRS, 17 rue Moreau, Paris 75012, France, jean-francois.aubry@espci.fr)
Bones reflect, refract, distort, and absorb ultrasonic waves. Most medical applications of ultrasound avoid bony structures. Nevertheless, for liver and brain therapy, the rib cage and the skull lie in the ultrasonic path. We will present non-invasive methods to detect the presence of the ribs and shape the beam in order to sonicate between the ribs. It is thus possible to non-invasively target a region shadowed by the rib cage while avoiding sonication through the bones. Such approaches take advantage of the ultrasonic imaging capabilities of multi-element arrays. Non-invasive brain therapy, however, requires sonicating through the skull. CT- and MR-based techniques have been developed to estimate the phase shifts induced by the skull, and dedicated multi-element arrays have been developed to generate the appropriate phase-corrected signals. The number of array elements has progressively increased in order to best shape the corrected beam (64 elements in 2000, 300 in 2003, and 1024 in 2012): clinical transcranial therapies will be presented for the treatment
of essential tremor, parkinsonian tremors, and tumors with a 1024 element array. We will also present preliminary results of a novel
game-changing transcranial focusing technique requiring a dramatically lower number of elements.
11:20
2aBAa7. Focusing ultrasound through the skull for neuromodulation.
Joseph Blackmore (Inst. of Biomedical Eng., Univ. of Oxford, University of
Oxford, Old Rd. Campus Res. Bldg., Oxford OX3 7DQ, United Kingdom,
joseph.blackmore@wadh.ox.ac.uk), Michele Veldsman, Christopher Butler
(Nuffield Dept. of Clinical NeuroSci., Univ. of Oxford, Oxford, United
Kingdom), and Robin Cleveland (Inst. of Biomedical Eng., Univ. of
Oxford, Oxford, United Kingdom)
Focused ultrasound for neuromodulation is emerging as a non-invasive
brain stimulation method, whereby low-intensity pulsed ultrasound is
focused through the skull to locations within the brain. The ultrasound
results in excitation of the targeted brain region, and stimulation in the
motor and visual centers has already been reported. One barrier is that the
strongly heterogeneous skull bone distorts, aberrates, and attenuates the
ultrasound beam leading to disruption and shifting of the focus. While transducer arrays can be used to correct for these aberrations, this equipment is
expensive and complex. Here, numerical modeling is used to determine the
optimal placement of a single element focused transducer to achieve the
required focusing. Numerical simulations, using a point source at target
locations in the visual cortex, are employed to determine the phase and amplitude on a spherical surface placed outside the head. The optimal placement of the transducer is determined by minimizing the weighted phase
error over the transducer surface. Appropriate focusing is then confirmed by
simulating the pressure field in the brain tissue for the optimal transducer
location. Both elastic and fluid-type models of the skull are considered to
assess the impact of shear waves on the targeting.
11:40
2aBAa8. Imaging cortical bone using the level-set method to regularize
travel-time and full waveform tomography techniques. Jonathan R.
Fincke (Mech. Eng., Massachusetts Inst. of Technol., 99 Hancock St., Apt.
10, Cambridge, MA 01239, jfincke@mit.edu)
Regularizing travel-time and full waveform tomographic techniques with
level set methods enables the recovery of cortical bone geometry. Bone
imaging and quantification is of general clinical utility. Applications include
prosthetic fitting, fracture detection, and the diagnosis of osteoporosis as well
as monitoring the disease’s progression. Recently, there has been significant interest
in imaging through the skull, imaging long bones as well as estimating their
material properties. In this case, a cylindrical acquisition geometry is used,
allowing the bone to be insonified from all angles. Frequencies between 200
kHz and 800 kHz are used to enable penetration into the bone region.
12:00
2aBAa9. Flash focus ultrasonic image sequences for shear shock wave
observation in the brain. David Espindola and Gianmarco Pinton (Dept. of
Biomedical Eng., Univ. of North Carolina at Chapel Hill, 109 Mason Farm
Rd., Taylor Hall Rm. 348, Chapel Hill, NC 27599, daanesro1@gmail.com)
Nonlinear shear waves have a cubic nonlinearity which results in the
generation of a unique characteristic odd harmonic signature. This behavior
was first observed in a homogeneous gelatin phantom using ultrafast plane
wave ultrasound imaging and a correlation-based tracking algorithm to
determine particle motion. However, in heterogeneous tissue, like brain, the
heterogeneities generate clutter that degrades motion tracking to the point
where the shock waves and their characteristic odd harmonics are no longer
observable. We present a high frame-rate ultrasound imaging sequence consisting of multiple focused emissions that improves the image quality and
reduces clutter to generate high quality motion estimates of shear shock
waves propagating in the brain. A point spread function analysis is used to
characterize the improvements of the proposed imaging sequence. It is
shown that the flash focus sequence reduces the side lobes by 20 dB while
retaining the same spatial resolution, translating to sensitivity up to the 11th harmonic. The flash focus sequence is then used to acquire high
frame-rate (6500 fps) ultrasound movies of an ex-vivo porcine brain in
which a shear wave propagates. Using an adaptive tracking algorithm, we
compute the particle velocity throughout a field of view spanning the depth of the brain. It is
therefore demonstrated that the proposed method can detect the nonlinear
elastic motion and the odd harmonics with sufficient sensitivity to observe
the development of a shear wave into a shock wave as it propagates in the
brain.
MONDAY MORNING, 26 JUNE 2017
BALLROOM B, 9:20 A.M. TO 12:20 P.M.
Session 2aBAb
Biomedical Acoustics: Beamforming and Image Guided Therapy III: Ablation and Histotripsy
Costas Arvanitis, Cochair
Mechanical Engineering and Biomedical Engineering, Georgia Institute of Technology, 901 Atlantic Dr. NW,
Room 4100Q, Atlanta, GA 30318
Constantin Coussios, Cochair
Department of Engineering Science, Institute of Biomedical Engineering, University of Oxford, Old Road Campus Research
Building, Oxford OX3 7DQ, United Kingdom
Invited Paper
9:20
2aBAb1. Real-time ablation monitoring and lesion quantification using harmonic motion imaging guided focused ultrasound
(HMIgFUS). Elisa Konofagou (Columbia Univ., 1210 Amsterdam Ave., ET351, New York, NY 10027, ek2191@columbia.edu)
High-Intensity Focused Ultrasound (HIFU) monitoring is currently hindered by time- and cost-inefficient or inconclusive methods, warranting an imaging technique for efficient and reliable guidance. Harmonic Motion Imaging (HMI) uses a focused ultrasound (FUS) beam
to generate an oscillatory acoustic radiation force for an internal, non-contact palpation to internally estimate relative tissue hardness.
HMI also uses ultrasound imaging with parallel beamforming and estimates and maps the tissue dynamic motion in response to the oscillatory force at the same frequency based on consecutive RF frames. HMI has already been shown feasible in simulations, phantoms, ex
vivo human and bovine tissues as well as animal tumor models in vivo. Using an FUS beam, HMI can also be seamlessly integrated with
thermal ablation using HIFU, which leads to changes in the tumor stiffness. In this paper, an overview of HMI will be provided, including the capability of HMI to characterize and image the tumor prior to ablation, localize the beam for treatment planning, as well as
monitor subsequent lesioning in real time. The findings demonstrate that HMI is capable of both detecting and characterizing the tumor
prior to HIFU ablation and of correctly depicting and quantifying the lesion during treatment. More importantly, HMI is shown capable of
distinguishing the tumor margins from those of the thermal lesion in vivo in order to accurately determine treatment success. HMI thus
constitutes an integrated, real-time method for efficient HIFU monitoring.
Contributed Papers
9:40
2aBAb2. Real-time feedback control of high-intensity focused
ultrasound thermal ablation using echo decorrelation imaging.
Mohamed A. Abbass, Jakob K. Killin, Neeraja Mahalingam, and T. Douglas
Mast (Biomedical Eng. Program, Univ. of Cincinnati, 231 Albert Sabin
Way, Cincinnati, OH 45267-0586, abbassma@mail.uc.edu)
The feasibility of controlling high-intensity focused ultrasound (HIFU)
thermal ablation in real time using echo decorrelation imaging feedback
was investigated in ex vivo bovine liver. Sonication cycles (5.0 MHz, 0.7 s
per HIFU pulse, 20-24% duty cycle, 879-1426 W/cm2 spatial-peak temporal-peak intensity) performed by a linear image-ablate array were repeated until
the minimum cumulative echo decorrelation within the focal region of interest exceeded a predefined threshold. Based on preliminary experiments
(N = 13), a threshold of 2.7 for the log10-scaled echo decorrelation per
millisecond was defined, corresponding to 90% specificity of local ablation
prediction. Controlled HIFU thermal ablation experiments (N = 10) were
compared with uncontrolled experiments employing 2, 5, or 9 sonication
cycles. Controlled trials showed significantly smaller average lesion area
(4.78 mm2), lesion width (1.29 mm), and treatment time (5.8 s) than 5-cycle
(7.02 mm2, 1.89 mm, 14.5 s) or 9-cycle (9.31 mm2, 2.4 mm, 26.1 s) uncontrolled trials. Prediction of local ablation using echo decorrelation was
assessed using receiver operator characteristic (ROC) curve analysis, in
which controlled trials showed significantly greater prediction capability
(area under the ROC curve AUC = 0.956) compared to 2-cycle uncontrolled
trials (AUC = 0.722). These results suggest that ablation control using echo
decorrelation may improve the precision and reliability, and shorten the duration, of ultrasound-guided HIFU treatments.
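The feedback loop can be sketched as follows (a simplified, hypothetical whole-frame version: the study itself uses per-pixel decorrelation maps over a focal region of interest and a log10-scaled per-millisecond threshold, none of which is reproduced here):

```python
import numpy as np

def echo_decorrelation(frame_a, frame_b):
    """1 - |correlation coefficient| between two consecutive RF frames."""
    a = frame_a - frame_a.mean()
    b = frame_b - frame_b.mean()
    return 1.0 - abs((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))

def treat_until_threshold(frames, threshold, max_cycles):
    """Apply sonication cycles until inter-frame decorrelation exceeds threshold."""
    for k in range(1, min(max_cycles + 1, len(frames))):
        if echo_decorrelation(frames[k - 1], frames[k]) >= threshold:
            return k                      # cycles delivered before stopping
    return max_cycles

# synthetic RF frames: echoes stay correlated until ablation scrambles the speckle
rng = np.random.default_rng(0)
base = rng.standard_normal((64, 64))
frames = [base + 0.01 * rng.standard_normal((64, 64)) for _ in range(3)]
frames += [rng.standard_normal((64, 64)) for _ in range(3)]
stopped_at = treat_until_threshold(frames, threshold=0.5, max_cycles=10)
```

Stopping at the first above-threshold cycle is what shrinks lesion size and treatment time relative to a fixed cycle count.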
10:00
2aBAb3. Sub-millimeter bistatic passive acoustic mapping. Delphine
Elbes, Catherine Paverd, Robin Cleveland, and Constantin Coussios (Eng.
Sci., Univ. of Oxford, Old Rd. Campus Res. Bldg., Headington, Oxford
OX3 7DQ, United Kingdom, delphine.elbes@eng.ox.ac.uk)
Passive acoustic mapping (PAM) is an emerging technique used to
image sources of non-linear acoustic emissions, such as inertially cavitating
bubbles, during ultrasound therapy. When using a conventional diagnostic
ultrasound array, the transverse resolution is typically an order of magnitude
better than the axial resolution, which may be inadequate for monitoring
treatment at acoustically large distances from the array. Here, we describe
an experimental technique that utilizes two orthogonal and coplanar 128-element linear arrays (6.25 MHz centre frequency) to overcome this limitation. The resolution of bistatic PAM was quantified by varying the distance
between sources, as well as the array-to-sources distance. The optimal number of elements required was identified by considering the resolution, the accuracy in quantifying the energy of acoustic emissions, and the
computational cost. The resulting resolution (achieved with 256 elements)
was close to the theoretical transverse resolution limit, on the order of hundreds
of microns, and the advantage of bistatic PAM over conventional
10:20
2aBAb4. Real-time acoustic-based feedback for histotripsy therapy.
Jonathan J. Macoskey, Jonathan R. Sukovich, Timothy L. Hall, Charles A.
Cain, and Zhen Xu (Biomedical Eng., Univ. of Michigan, Carl A.
Gerstacker Bldg., 2200 Bonisteel Blvd., Ann Arbor, MI 48109, macoskey@
umich.edu)
Histotripsy uses high-pressure microsecond ultrasound pulses to generate cavitation to fractionate cells in target tissues. Two acoustic-based feedback mechanisms are being investigated to monitor histotripsy therapy in
real-time. First, bubble-induced color Doppler (BICD) is received by an
ultrasound probe co-aligned with the histotripsy transducer to monitor the
cavitation-induced motion of residual cavitation nuclei in tissue throughout
treatment. Second, acoustic backscatter of the histotripsy pulse from the
cavitation bubbles is received by directly probing elements of the histotripsy
transducer to monitor acoustic emissions from the cavitation bubbles during
treatment. In these experiments, histotripsy was applied to agarose phantoms and ex vivo tissue by a 112-element, 500 kHz semi-hemispherical
ultrasound array with a 15 cm focal distance. The BICD signals were collected on a Verasonics system by an L7-4 probe. The BICD and backscatter
signals were compared to high-speed optical images of cavitation in phantoms and histology of tissue. A consistent trend was observed in both the
BICD and backscatter waveforms throughout treatment in both tissue and
agarose phantoms that correlated with high-speed imaging and histological
analysis. These results suggest that BICD and acoustic backscatter can provide non-invasive, real-time, quantitative feedback of tissue treatment progression during histotripsy, thus improving treatment efficiency and
accuracy.
10:40–11:00 Break
11:00
2aBAb5. Characterization of cavitation-radiated acoustic power using
single-element detectors. Kyle T. Rich and T. Douglas Mast (Biomedical
Eng. Program, Univ. of Cincinnati, 231 Albert Sabin Way, Cincinnati, OH
45267, richkylet@gmail.com)
Standard approaches to quantifying cavitation activity using emission
measurements made by single-element passive cavitation detectors (PCD)
would be facilitated by improved quantitative and system-independent characterization techniques. Although the strength of an individual emission
source can be determined from absolute pressure measurements by a calibrated PCD, this approach requires spatially resolved detection of single
bubbles at known locations. Here, a method is shown for characterizing an
ensemble of emission sources, quantified by their radiated acoustic power
per unit area or volume of a defined region of interest (ROI). An analytic
diffraction-correction factor relating frequency-dependent PCD-measured
pressures to cavitation-radiated acoustic power is derived using a spatial integral of the PCD sensitivity. This approach can be applied to measurements
made by any PCD without a priori knowledge of the number or spatiotemporal distribution of cavitation bubbles. Simulations show that cavitation-radiated acoustic power per unit ROI volume or area is accurately recovered
by compensation of emissions received by focused or unfocused PCDs.
Measurements from previous sonophoresis experiments are analyzed, showing that skin permeability changes from 0.41 or 2.0 MHz sonication are
comparably correlated to the radiated acoustic power of subharmonic emissions per unit area.
11:20
2aBAb6. Acoustic radiation force on a sphere in tissue due to the
irrotational component of the shear field body force. Benjamin C.
Treweek, Yurii A. Ilinskii, Evgenia A. Zabolotskaya, and Mark F. Hamilton
(Appl. Res. Labs., Univ. of Texas at Austin, 10000 Burnet Rd., Austin, TX
78758, btreweek@utexas.edu)
Acoustic radiation force on a sphere in soft tissue can be written as the
sum of four distinct contributions. Two arise from incident and scattered
compressional waves only, one from direct integration of the time-averaged
Piola-Kirchhoff stress tensor over the surface of the sphere, and one from
the irrotational component of the body force producing deformation of the
surrounding medium. The other two contributions also incorporate scattered
shear waves, and they are found by the same procedures. Three of these
terms are known analytically [Ilinskii et al., Proc. Meet. Acoust. 19, 045004
(2013)], but the contribution relating to the shear field body force must be
found numerically. Preliminary results for this term were obtained through
simplifying approximations and presented at the fall 2016 ASA meeting; the
present submission extends this work to cases where these approximations
do not hold. Helmholtz decomposition of the shear field body force is performed using 3D Fourier transforms, and the irrotational potential is then integrated over the surface of the sphere. Various sphere materials are
considered, and comparisons are made with known results for a sphere in an
ideal fluid. [Work supported by the ARL:UT McKinney Fellowship in
Acoustics.]
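The Fourier-transform route to a Helmholtz decomposition rests on a simple k-space fact: for each Fourier mode, the irrotational part of a vector amplitude is its projection onto the wavevector, and the solenoidal remainder is perpendicular to it. The snippet below verifies this for one mode with arbitrary example values; it is an illustration of the decomposition principle, not the authors' 3D code.

```python
# For a single Fourier mode F(x) = A sin(k.x), the irrotational component has
# amplitude (A.k_hat) k_hat and the solenoidal remainder is perpendicular to
# k (divergence-free). Example vectors below are arbitrary.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def decompose_mode(A, k):
    """Split amplitude A into parts parallel (irrotational) and
    perpendicular (solenoidal) to the wavevector k."""
    coef = dot(A, k) / dot(k, k)
    irr = tuple(coef * kc for kc in k)
    sol = tuple(a - i for a, i in zip(A, irr))
    return irr, sol

A = (1.0, 2.0, -0.5)   # example force amplitude
k = (3.0, 0.0, 4.0)    # example wavevector
irr, sol = decompose_mode(A, k)

# Solenoidal part is orthogonal to k, and the two parts sum back to A.
assert abs(dot(sol, k)) < 1e-12
assert all(abs(i + s - a) < 1e-12 for i, s, a in zip(irr, sol, A))
print(irr, sol)
```

A full 3D decomposition applies exactly this projection to every mode of the transformed field before inverting the transform.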
11:40
2aBAb7. Investigation of the source of histotripsy acoustic backscatter
signals. Jonathan R. Sukovich, Timothy L. Hall, Jonathan J. Macoskey,
Charles A. Cain, and Zhen Xu (Biomedical Eng., Univ. of Michigan, 1410
Traver Rd., Ann Arbor, MI 48105, jsukes@umich.edu)
Recent work has demonstrated that acoustic backscatter signals from histotripsy-generated bubble clouds may be used to localize the bubble clouds and to perform non-invasive aberration correction transcranially.
However, the primary source of the measured signals, whether from emissions generated during bubble expansion, or scattering of the incoming
pulses off of the incipient bubble clouds, remains to be determined and may
have important implications for how the acquired signals may be used.
Here, we present results from experiments comparing the acoustic emissions
and growth-collapse curves of single bubbles generated optically to those
generated via histotripsy. Histotripsy bubbles were generated using a 32-element, 1.5 MHz spherical transducer with pulse durations of less than 2 cycles; optical
bubbles were nucleated using a pulsed Nd:YAG laser focused at the center
of the histotripsy transducer. Optical imaging was used to capture the time
evolution of the generated bubbles from inception to collapse. Acoustic
emissions from the generated bubbles were captured using the receive-capable histotripsy transducer elements as well as with a commercial hydrophone mounted within. Imaging results indicated that optically nucleated
bubbles experienced more rapid growth than histotripsy generated bubbles.
Acoustic emissions from both sets of bubbles were comparable, however,
suggesting the primary component of the measured histotripsy “backscatter”
signal is an emission generated during bubble expansion.
12:00
2aBAb8. Cylindrically converging nonlinear shear waves. John M.
Cormack, Kyle S. Spratt, and Mark F. Hamilton (Appl. Res. Labs., Univ. of
Texas at Austin, 10000 Burnet Rd., Austin, TX 78758, jcormack@utexas.
edu)
The low shear moduli of soft elastic media permit the generation of
shear waves with large acoustic Mach numbers that can exhibit waveform
distortion and even shock formation over short distances. Waves that converge onto a cylindrical focus experience significant dispersion, causing
waveforms at the focus and in the post-focal region to differ significantly
from the source waveform even in the absence of nonlinear distortion. A
full-wave model for nonlinear shear waves in cylindrical coordinates that
accounts for both quadratic and cubic nonlinearity is developed from first
principles. For the special case of an infinite cylindrical source with particle
motion parallel to the axis, for which nonlinearity is purely cubic, the nonlinear wave equation is solved numerically with a finite-difference scheme.
The full-wave model is compared with a piecewise model based on a generalized Burgers equation for cylindrically converging waves outside of the focal region and linear diffraction theory in the focal region. For waveforms with wavelength much smaller than the source radius, conditions are explored for which the approximate piecewise model shows good agreement with the full-wave model. [Work supported by the ARL:UT McKinney Fellowship in Acoustics.]
monostatic PAM was illustrated in the context of non-invasive fractionation of the intervertebral disc. It is concluded that, at depth, bistatic PAM enables improved real-time treatment monitoring on biologically relevant length scales. [Work supported by UK Engineering and Physical Sciences Research Council (EP/K020757/1).]
MONDAY MORNING, 26 JUNE 2017
ROOM 205, 9:15 A.M. TO 12:20 P.M.
Session 2aEA
Engineering Acoustics: Ducts and Mufflers I
Mats Åbom, Cochair
The Marcus Wallenberg Laboratory, KTH-The Royal Inst. of Technology, Teknikringen 8, Stockholm 10044, Sweden
David Herrin, Cochair
Department of Mechanical Engineering, University of Kentucky, 151 Ralph G. Anderson Building, Lexington, KY 40506-0503
Chair’s Introduction—9:15
Invited Papers
9:20
2aEA1. Design, development, and implementation of low cost high performance mufflers for heavy duty diesel engines. Mohan D.
Rao (Mech. Eng., Tennessee Tech, Box 5014, Cookeville, TN 38505, mrao@tntech.edu)
In this paper, details on the design and fabrication of affordable high-performing passive exhaust mufflers and associated low volume
manufacturing technology for commercial heavy duty diesel engines are presented. The exhaust noise radiation to the atmosphere from
large diesel engines used in earth-moving, military, and other heavy equipment machines ranks as a major noise source of the urban
environment. A solution to this problem is the use of mufflers to significantly reduce noise pollution to the atmosphere. There are several
types of common mufflers, including reactive designs using resonators, expansion chambers, perforations, and dissipative configurations
using absorptive materials. The design and realization of mufflers for a particular engine is dependent on several design parameters,
such as internal combustion engine characteristics, acoustical requirements set by standards, production volumes, and cost, with the requirements varying from application to application. Particulars of successful muffler designs for three different machines—a commercial excavator, a telehandler, and a military ground vehicle—are presented in this paper. The designs include both box-type and round mufflers with Helmholtz resonators, reversible flow chambers, and perforated tubes, all capable of custom fabrication using conventional manufacturing processes in a local machine shop using commercially available materials.
9:40
2aEA2. On challenges and considerations in designing cold end exhaust systems for current and future automotive applications.
Raghavan Vasudevan (Exhaust R&D, Magneti Marelli, 3900 Automation Ave., Auburn Hills, MI 48326, raghavan.vasudevan@
magnetimarelli.com)
Efforts to meet ever more stringent fuel economy standards have led to an increased focus on lightweighting technologies and to the proliferation of alternative powertrains such as hybrid vehicles. This has significantly increased the complexity of designing exhaust systems to meet high-level NVH performance against tight powertrain targets. In this paper, some applications will be presented along with the latest efforts in addressing these challenges. (a) Novel methodologies for predicting high-frequency exhaust flow noise using transient CFD simulations will be discussed. (b) Customized tuning applications: case studies in the optimization, daisy-chaining, and positioning of narrow-frequency tuning devices (e.g., concentric-tube Helmholtz resonators with tuned orifices, pipe-length tuning) driven by tighter clearances and directed lightweighting efforts will be discussed. (c) Hybrid applications: hybrid powertrains pose a unique challenge because the engine can function both as a power plant and as a power generator, each of which has different NVH requirements. Efforts in addressing
these challenges with limited packaging space will be discussed.
10:00
2aEA3. Application of micro-perforate tubes in motorcycle mufflers. Henry C. Howell (ProDC Development Ctr., Harley-Davidson Motor Co., 11800 W. Capitol Dr., Wauwatosa, WI 53222, hank.howell@harley-davidson.com)
Regulatory noise requirements, torque and horsepower goals, and sound quality targets drive the acoustic performance expected from motorcycle mufflers, but exposed motorcycle mufflers also become important styling features on bikes. Weight, shape, temperature, rider leg position, coatings, and cosmetics all become factors which influence a muffler's final design. With these factors in mind, the use of micro-perforated metals in grazing-flow applications has allowed improvements in all of these areas.
10:20
2aEA4. Sensitivity study of exhaust system using the Moebius transformation. Yitian Zhang (W. L. Gore and Assoc., 1025
Christina Mill Dr., Newark, DE 19711, yitzhang@wlgore.com)
The performance of an exhaust system depends not only on the system itself but also on its boundary conditions, namely, the impedances at the inlet and outlet. In many cases, the exact values of these impedances are not known or easily measured. It is of interest to determine the range of performance variation, given the range of possible impedance values. An exhaustive method to determine the response variation can be used, but it is computationally expensive. However, it can be proved that the relationship between boundary impedance and response takes the form of a Moebius transformation, which is a conformal mapping. Taking advantage of this property, the computation can be greatly reduced. It is also shown that the sensitivity of this dependence can be studied visually using the
Moebius transformation.
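The computational saving hinges on a standard property of Moebius transformations: they map circles (and lines) to circles, so a circle of candidate boundary impedances maps to a circle of responses whose extremes can be read off directly. The sketch below demonstrates that property with arbitrary example coefficients; the transformation linking impedance to response in a real exhaust model would come from its transfer matrices.

```python
import cmath

# A Moebius transformation T(Z) = (a*Z + b)/(c*Z + d) maps a circle of
# candidate termination impedances to another circle in the response plane.
# Coefficients below are arbitrary examples with a*d - b*c != 0.

a, b, c, d = 1.0 + 0.5j, 2.0, 0.3j, 1.0

def moebius(z):
    return (a * z + b) / (c * z + d)

def circumcenter(z1, z2, z3):
    """Center of the circle through three complex points."""
    w = (z3 - z1) / (z2 - z1)
    return z1 + (z2 - z1) * (w - abs(w) ** 2) / (2j * w.imag)

# Sample a circle of impedances: center 5 + 2j, radius 1.
zs = [5 + 2j + cmath.exp(2j * cmath.pi * n / 12) for n in range(12)]
ws = [moebius(z) for z in zs]

# All mapped points lie on one circle: equal distance from its circumcenter.
ctr = circumcenter(ws[0], ws[4], ws[8])
radii = [abs(w - ctr) for w in ws]
print(min(radii), max(radii))
```

Since the image circle is fixed by any three mapped points, the whole range of responses over an impedance uncertainty disk is characterized with a handful of evaluations instead of an exhaustive sweep.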
10:40
2aEA5. Acoustic performance of an annular Helmholtz resonator and its application in exhaust system. Xin Hua and Jagdish
Dholaria (Faurecia Emissions Control Technologies, 950 West 450 South, Columbus, IN 47201, xin.hua@faurecia.com)
The Helmholtz resonator is a traditional acoustic tuning component consisting of an enclosed volume and a throat neck. It is tuning- and packaging-friendly and thus has been widely used in exhaust systems. An annular Helmholtz bottle resonator usually has a straight-through pipe with a certain length of perforation. A larger sleeve is mounted to cover the straight-through pipe, with one end open and the other end closed. The annular volume gap between the straight-through pipe and the sleeve becomes the neck of the Helmholtz resonator. In this research, an annular Helmholtz resonator is investigated. Numerical simulation and a nonlinear least-squares method are used to propose a correction to the empirical equation that estimates the resonator's target frequency. Afterward, this type of Helmholtz resonator is applied to a two-box exhaust system. Insertion loss is used to investigate the resonator's performance in the system.
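The empirical equation the abstract corrects builds on the textbook lumped-element estimate f = (c/2π)·√(A/(V·L)). A minimal sketch, with the annular gap between pipe and sleeve as the neck area; all dimensions below are illustrative, and end corrections (part of what such an empirical correction addresses) are omitted.

```python
import math

def helmholtz_frequency(c, neck_area, cavity_volume, neck_length):
    """Classic lumped-element Helmholtz resonance (no end corrections)."""
    return (c / (2 * math.pi)) * math.sqrt(neck_area / (cavity_volume * neck_length))

c = 343.0                                  # sound speed in air, m/s
r_pipe, r_sleeve = 0.025, 0.030            # illustrative pipe/sleeve radii, m
A = math.pi * (r_sleeve**2 - r_pipe**2)    # annular gap = neck area
V = 1.0e-3                                 # cavity volume, m^3
L = 0.05                                   # neck (overlap) length, m

f = helmholtz_frequency(c, A, V, L)
print(round(f, 1), "Hz")  # ~227 Hz for these dimensions
```

In hot exhaust gas c is much higher than 343 m/s, shifting the tuned frequency upward; that temperature dependence is one reason measured and lumped-element frequencies diverge in practice.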
11:00
2aEA6. Noise suppressors with engineered compliance in fluid hydraulic systems. Kenneth Cunefare, Elliott Gruber, and Nathaniel
Pedigo (Georgia Tech, Mech. Eng., Atlanta, GA 30332-0405, ken.cunefare@me.gatech.edu)
Fluid power hydraulic systems, common on a wide variety of industrial and construction equipment, frequently exhibit undesirable noise characteristics. The noise is primarily due to pump-induced pressure pulsation in the fluid. One means to control this fluid-borne noise is a suppressor integrating compliance, which may be introduced using a pressurized bladder or an elastic compliant liner exposed to the fluid. This compliance causes an impedance change at the inlet of the suppressor. Within the suppressor, the compliance leads to a reduced sound speed, which may then lead to fluid particle velocities high enough for damping to become effective. Classical dissipative noise control means, common in air or gas mufflers, are otherwise ineffective in fluid systems because of the low particle velocity. An engineered solid material, a syntactic foam, is under development for use in hydraulic systems. The syntactic foam of interest comprises microspheres dispersed in a polymer. Particular challenges include retaining functionality at high system pressures, which may be addressed by pressurizing the microspheres. The foam's performance increases with the volume fraction of microspheres and with internal pressurization. The material also enables other fluid noise control devices, including water hammer arrestors.
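The sound-speed reduction from a small fraction of compliant inclusions can be illustrated with Wood's fluid-mixture rule (compliances add in series, densities in parallel). This is only a trend illustration with assumed material values; it neglects the microsphere shell stiffness and static-pressure effects that are central to the actual foam design.

```python
import math

# Wood's equation for a two-phase mixture: effective bulk modulus is the
# harmonic (series) mean, effective density the arithmetic (parallel) mean.
# Material values below are illustrative assumptions, not measured data.

def wood_speed(phi, K1, rho1, K2, rho2):
    """Effective sound speed; phi = volume fraction of phase 1."""
    K_eff = 1.0 / (phi / K1 + (1 - phi) / K2)
    rho_eff = phi * rho1 + (1 - phi) * rho2
    return math.sqrt(K_eff / rho_eff)

K_gas, rho_gas = 1.4e5, 1.2        # gas-filled microsphere interior (~1 atm)
K_poly, rho_poly = 2.5e9, 1100.0   # stiff polymer matrix (assumed)

c_neat = wood_speed(0.00, K_gas, rho_gas, K_poly, rho_poly)
c_foam = wood_speed(0.05, K_gas, rho_gas, K_poly, rho_poly)
print(round(c_neat), round(c_foam))
```

Even 5% gas-filled volume collapses the effective speed by more than an order of magnitude, which is the mechanism by which the compliant liner raises particle velocities enough for dissipation to act.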
11:20
2aEA7. Proper integration of plane wave models into the design process. David Herrin (Dept. of Mech. Eng., Univ. of Kentucky,
151 Ralph G. Anderson Bldg., Lexington, KY 40506-0503, dherrin@engr.uky.edu) and Tamer Elnady (Ain Shams Univ., Cairo, Egypt)
Muffler and silencer design is primarily accomplished through cut-and-try approaches in many industries. Although test mufflers are inexpensive to manufacture and test, better designs can be arrived at more rapidly and less expensively using plane wave methodologies. Moreover, engineers develop intuition in the process. The current work is aimed at demonstrating how plane wave models can be integrated into the design process. It is first demonstrated that plane wave models can reliably determine the performance of complicated mufflers below the cutoff frequency. Tips for developing plane wave models are summarized.
11:40
2aEA8. Sound quality control of axial fan noise by using microperforated panel housing with a hollow tube. Yatsze Choy, Yan
Kei Chiang, and Qiang Xi (FG639 Dept. of ME, The Hong Kong Polytechnic Univ., Hong Kong 852, Hong Kong, mmyschoy@polyu.
edu.hk)
This study presents a novel passive noise control approach to directly suppress sound radiation from an axial-flow fan, involving micro-perforated panels (MPP) backed by cavities and a hollow tube. Apart from the sound suppression performance in terms of insertion loss, the sound quality of the axial fan, which has a dipole nature, is also investigated; it serves as a significant supplementary index for assessing the noise control device. The noise suppression is achieved by cancelation between the sound field of the dipole-like fan and the sound radiated from a vibrating panel via vibro-acoustic coupling and interference from the hollow tube boundaries,
as well as by sound absorption in the micro-perforations. A two-dimensional theoretical model, capable of dealing with the strong coupling among the vibrating micro-perforated panel, the sound radiation from the dipole source, and the sound fields inside the cavity and the duct, is developed. The theoretical results are validated by both finite element simulation and experiment. Results show that the addition of a hollow tube enhances the sound suppression performance in the passband region of the MPP housing device. The findings of the current research have the potential to control ducted-fan noise effectively and to enhance the quality of products with a ducted-fan system.
12:00
2aEA9. Revisiting the Cremer impedance. Raimo Kabral, Mats Åbom (The Marcus Wallenberg Lab., KTH-The Royal Inst. of
Technol., Stockholm 10044, Sweden, kabral@kth.se), and Börje Nilsson (Dept. of Mathematics, Linnaeus Univ., Växjö, Sweden)
In a classical paper (Acustica 3, 1953), Cremer demonstrated that in a rectangular duct with locally reacting walls there exists an impedance (“the Cremer impedance”) that maximizes the propagational damping of the lowest mode. Later (JSV 28, 1973), Tester extended the analysis to include a plug flow and ducts of both circular and rectangular cross-section. One limitation of Tester's work is that it simplified the analysis of the effect of flow by considering only high frequencies or well cut-on modes. This approximation is reasonable for large-duct applications, e.g., aeroengines, but not for many other cases of interest. Kabral et al. (Acta Acustica united with Acustica 102, 2016) removed this limitation and investigated the exact Cremer impedance including flow effects. As demonstrated in that paper, the exact solution exhibits some special properties at low frequencies, e.g., a negative real part of the wall impedance. In this paper, the exact Cremer impedance is further analyzed and discussed.
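For orientation, the no-flow Cremer optimum for a two-dimensional duct of height h is commonly quoted as Z_opt/(ρc) = (0.929 − 0.744j)·kh/π. The sketch below evaluates that rounded textbook form with illustrative numbers; it is not the exact flow-inclusive solution the abstract discusses, whose low-frequency behavior (including a negative real part) departs from this approximation.

```python
import math

# Commonly quoted no-flow Cremer optimum for a 2D duct of height h:
#   Z_opt = rho*c * (0.929 - 0.744j) * k*h / pi,   k = 2*pi*f / c.
# Constants are the rounded textbook values; air properties assumed.

def cremer_impedance(frequency, duct_height, c=343.0, rho=1.21):
    k = 2 * math.pi * frequency / c
    return rho * c * (0.929 - 0.744j) * k * duct_height / math.pi

Z = cremer_impedance(1000.0, 0.05)   # illustrative: 1 kHz, 50 mm duct
print(Z)
```

Note the positive real part and the linear growth with frequency; the exact solution's negative real part at low frequencies is precisely what makes it non-trivial to realize with a passive liner.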
MONDAY MORNING, 26 JUNE 2017
ROOM 304, 9:35 A.M. TO 12:20 P.M.
Session 2aEDa
Education in Acoustics, Public Relations Committee, and Student Council: Communicating Scientific
Research to Non-Scientists
Andrew A. Piacsek, Cochair
Physics, Central Washington University, 400 E. University Way, Ellensburg, WA 98926
Kerri Seger, Cochair
Scripps Institution of Oceanography, 9331 Discovery Way, Apt. C, La Jolla, CA 92037
Chair’s Introduction—9:35
Invited Papers
9:40
2aEDa1. Shouting across the chasm. Soren Wheeler (Radiolab, New York Public Radio, 25 N 6th St., Madison, WI 53704, swheeler@
wnyc.org)
As people interested in science, we assume science is interesting. But if you want to be an effective science communicator, you must assume the exact opposite. After more than 20 years working on the public understanding of science, the thing that continues to astonish and trouble me is how many people shudder at the word “science,” or at any sniff of the words, cadences, or even attitudes associated with science. And yet those are the people we most desperately need to reach. Thus, to communicate science is to cross a chasm of
interest, knowledge, inclination, and language. With a few examples, I will talk about the ways that voice, structure, and emotional experience can help us shout across that chasm, and the ways in which audio storytelling, in particular, can shine a light on how to get people
to listen to us when we talk about science.
10:00
2aEDa2. Talking science with non-scientists. Karin Heineman (Executive Producer, Inside Sci., American Inst. of Phys., One Phys.
Ellipse, College Park, MD 20740, kheinema@aip.org)
Can you explain your research to a non-scientist, or even a 7th grader in 90 seconds or less—and explain it in a way that’s engaging
and understandable? Communicating science to an audience of non-scientists requires a very different skill set than giving a successful talk at a conference or exchanging ideas with peers or colleagues. Effective science communication skills are important to have—you’re
providing lay audiences with accurate and reliable science information, protecting the public from misinformation, influencing public policy, and inspiring a sense of wonder in non-scientists. This talk is targeted at a broad audience of scientists who would like to learn how science communication skills can improve their ability to promote their research and get more media attention. A huge
communication tool is video. Don’t be afraid to be on camera—learn what to expect from an on-camera interview; learn tricks to get
your message across; learn what impact the right visuals can have in communicating your message and research. Learn the tools and
tricks of the trade of communicating science from speech, to tone, to what not to wear— and make a lasting impression on any
audience.
10:20
2aEDa3. Communicating science 101. Brad Lisle (Foxfire Interactive, 500 East Washington St., Ste. 30, North Attleboro, MA 02760,
brad@foxfireinteractive.com)
What’s the best way to communicate science to non-scientists? Learn how producer/director Brad Lisle tackles this question through
two different science media projects. The first, ZOOM into Engineering, targets engineers who are interested in doing hands-on engineering activities with K-6 students. The second, The Global Soundscapes Project, targets middle school students and the general public
and explores the emerging field of soundscape ecology.
10:40–11:00 Break
11:00
2aEDa4. Joe McMaster: “Telling your story and bringing abstract ideas to life on screen.” Joseph McMaster (Sci. Journalist &
Filmmaker, 700 Technol. Sq., Cambridge, MA 02139, joemcmaster1@gmail.com)
Mark Twain said, “I like a good story well told. That is the reason I am sometimes forced to tell them myself.” So how might you
tell the story of your scientific research in engaging ways? And what can you do if non-scientists find the subject difficult or hard to visualize? Award winning science journalist and filmmaker Joe McMaster shares concrete examples of what has worked (and sometimes not
worked) in his 25 years of making films about science and technology for television and the web—many of which have required bringing abstract ideas to life on screen.
11:20
2aEDa5. Integrating science communication into undergraduate and graduate curricula. Laura Kloepper (Biology, Saint Mary’s
College, 262 Sci. Hall, Saint Mary’s College, Notre Dame, IN 46556, lkloepper@saintmarys.edu)
Effective science communication (“SciComm”) to both technical and non-technical audiences is considered a key skill for contemporary scientists. Unfortunately, most SciComm education occurs after individuals receive their degrees, and few science students
receive formal training in communication. In this presentation I will describe a course that teaches SciComm to first-year undergraduate
science students and can be easily modified for graduate curricula. This course consists of a series of modules specific to various communication methods (oral presentations, formal writing, informal writing, and television/radio) and audiences. Students are exposed to
SciComm through group assignments, written assignments, community experiences, and virtual lectures. I will describe these modules
and give example assignments and assessments. By integrating SciComm early into science education, students can develop both the
technical and communication skills necessary to be effective researchers and communicators.
11:40
2aEDa6. Don’t cite it, write it. Raising awareness of acoustics through Wikipedia. Thais C. Morata (Div. of Appl. Res. and
Technol., National Inst. for Occupational Safety and Health, 1090 Tusculum Ave., M.S. C27, Cincinnati, OH 45226, tmorata@cdc.
gov), Max Lum (Office of the Director, National Inst. for Occupational Safety and Health, Cincinnati, OH), James Hare (Office of the
Director, National Inst. for Occupational Safety and Health, Washington, DC), and Leonardo Fuks (Escola de Musica, Universidade do
Rio de Janeiro, Rio de Janeiro, Rio de Janeiro, Brazil)
Wikipedia is accessed by hundreds of millions of people around the world, which makes it one of the most powerful platforms for
the dissemination of science information. While Wikipedia offers high-quality content about certain topics, a large proportion of articles
are insufficiently developed. The Wikimedia Foundation has engaged in partnerships with scientific and academic institutions to
improve the coverage and communication of science to the public. These efforts are beneficial to professional and academic associations
interested in sharing reliable, vetted information about their discipline with the world. The National Institute for Occupational Safety
and Health (NIOSH) is one of the agencies engaged in this effort. NIOSH developed and manages the WikiProject Occupational Safety
and Health. NIOSH also participated in a classroom program (where students write Wikipedia articles) to expand and improve Wikipedia’s content on acoustics, noise, and hearing loss prevention. Faculty and students from the University of Rio de Janeiro contributed
content on basic principles of acoustics. Metrics on these efforts are publicly available so reach can be evaluated by the number of views
and quality of entries. Through these initiatives, new scientific content related to acoustics was successfully added to Wikipedia, and the quality of the entries was improved.
12:00–12:20 Panel Discussion
MONDAY MORNING, 26 JUNE 2017
BALLROOM A, 10:20 A.M. TO 11:40 A.M.
Session 2aEDb
Education in Acoustics: Education in Acoustics Poster Session
Eoin A. King, Chair
Mechanical Engineering, University of Hartford, 200 Bloomfield Avenue, West Hartford, CT 06117
All posters will be on display from 10:20 a.m. to 11:40 a.m. To allow contributors in this session to see the other posters, authors of
odd-numbered papers will be at their posters from 10:20 a.m. to 11:00 a.m. and authors of even-numbered papers will be at their posters
from 11:00 a.m. to 11:40 a.m.
Contributed Papers
2aEDb1. A cross-university massive open online course on communication acoustics. Sebastian Möller (Quality and Usability Lab, TU Berlin, Sekr. TEL-18, Ernst-Reuter-Platz 7, Berlin 10587, Germany, sebastian.moeller@tu-berlin.de), Jens Ahrens (Audio Technol., Chalmers Univ. of Technol., Gothenburg, Sweden), M. Ercan Altinsoy (Inst. of Acoust. and Speech Commun., TU Dresden, Dresden, Germany), Martin Buchschmid (Chair for Structural Mech., TU München, München, Germany), Janina Fels (Inst. of Tech. Acoust., RWTH Aachen Univ., Aachen, Germany), Stefan Hillmann, Christoph Hohnerlein (Quality and Usability Lab, TU Berlin, Berlin, Germany), Gerhard Müller (Chair for Structural Mech., TU München, München, Germany), Bernhard U. Seeber (Audio Information Processing, TU München, Munich, Germany), Michael Vorlaender (Inst. of Tech. Acoust., RWTH Aachen Univ., Aachen, Germany), Stefan Weinzierl (Fachgebiet Audiokommunikation, TU Berlin, Berlin, Germany), Sebastian Knoth, and Wolfram Barodte (Serviceeinheit Medien für die Lehre, RWTH Aachen, Aachen, Germany)
Four of the nine big Technical Universities in Germany, together with
Chalmers University of Technology in Sweden, have developed a new Massive Open Online Course (MOOC) on the subject of Communication Acoustics. The idea is to foster education at the late Bachelor or early Master level by joining the expertise available at the individual universities and by creating an online course offered to both local and remote students. The course started in the winter term of 2016 and is hosted on the EdX platform. It is offered in English and roughly divided into two parts: The first part covers basics of acoustics, signal processing, human hearing, speech production, as well as electroacoustics and psychoacoustics. The second part introduces selected applications, such as sound recording and reproduction, sound fields and room acoustics, binaural technology, speech technology, as well as product sound design. The course material consists of explanatory videos and text as well as audiovisual material, exercises, and self-assessments. The final examination takes place as a written or online exam, with physical presence at the contributing sites. The talk will provide insights into the experiences we have gained and illustrate how we overcame the obstacles inherent in cross-university education.
2aEDb2. Acoustics in African Music Instrument Technology: Training
the Baton Bearer for Sustenance in Music Education in Nigeria.
Stephen G. Onwubiko (Music, Univ. of Nigeria, Nsukka Enugu State,
Enugu, Nsukka 234042, Nigeria, stephen.onwubiko@gmail.com)
With few musical acoustics programs located and taught throughout the country's institutions, the teaching of musical acoustics in higher education has received little attention and suffers from a shortage of trained teachers and educators; it has also become a controversial issue among scholars exploring this aspect of education. This paper presents ideological opinions on integrating acoustics-trained educators in African music instrument technology into the curriculum, since this technology invariably stresses acoustics, which is an integral part of music technology and African music instrument technology alike. Teachers possess the power to create conditions that can help students learn a great deal—or keep them from learning much at all; teaching is an intentional act of creating those conditions (Palmer, 1998). Acoustically speaking, the technology of any musical instrument is to satisfy the ear and the heart, for each musical instrument, from Western instruments to traditional African instruments, has its own technology, sound, pitch, frequency, and vibrations; these instruments likewise vary in pitch, sound, frequency, tonal reflection, texture, and color. This paper explores these issues through qualitative interviews and observations with teachers and scholars on training the baton bearer in African musical acoustics for sustenance in music education in Nigeria.
2aEDb3. Self-taught Finite Element Method implementation to the
assessing of natural frequencies and modal shapes in 2D rectangular
plates. Augusto R. Carvalho de Sousa (Lab. of Vibrations and Acoust., Dept. of Mech. Eng., Federal Univ. of Santa Catarina, Laboratório de Vibrações e Acústica, Universidade Federal de Santa Catarina, Florianópolis, Santa Catarina 88040-900, Brazil, augusto_carvalho@live.com) and Jeferson R. Bueno (Civil Eng. Academic Dept., Federal Technolog. Univ. of Parana – UTFPR, Campo Mourão, Paraná, Brazil)
This work presents a self-taught guide to the implementation of a Finite
Element Model (FEM) to assess the natural frequencies and modal shapes
of 2D rectangular plates. The study offers a practical way of understanding
the FEM processing performed by commercial software and is motivated by the Computer Vibro-acoustic module of the Graduate Program in Mechanical Engineering of the Federal University of Santa Catarina, Brazil. A steel plate with dimensions of 200 mm x 500 mm x 2 mm is implemented, and the results obtained from this algorithm are compared with results given by commercial software. Two configurations are tested and
validated: a free plate, i.e., no boundary conditions, and a door-like plate
whose boundary conditions are the door hinges and the door handle. The
implementation is made for the direct and superposition methods in FEM,
using both symbolic and numerical approaches in MATLAB software.
Results obtained present acceptable errors in most frequency bands, thus
validating the algorithm and enhancing the understanding of FEM modeling
for vibro-acoustic applications.
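At the heart of any such FEM modal analysis sits a generalized eigenvalue problem, det(K − ω²M) = 0, assembled from stiffness and mass matrices. As a self-contained toy version (not the plate model from the abstract), the snippet below solves the characteristic equation for a two-degree-of-freedom spring-mass chain in closed form; the plate code does the same with large banded K and M.

```python
import math

def two_dof_frequencies(k=1.0, m=1.0):
    """Natural frequencies of a fixed-free chain of two masses m joined by
    springs k: K = [[2k, -k], [-k, k]], M = m*I. The characteristic equation
    det(K - lam*M) = m^2*lam^2 - 3*k*m*lam + k^2 = 0 (lam = omega^2) is
    solved by the quadratic formula."""
    disc = math.sqrt((3 * k * m) ** 2 - 4 * (m * k) ** 2)
    lam1 = (3 * k * m - disc) / (2 * m * m)
    lam2 = (3 * k * m + disc) / (2 * m * m)
    return math.sqrt(lam1), math.sqrt(lam2)

w1, w2 = two_dof_frequencies()
print(w1, w2)  # for k = m = 1: sqrt((3 -/+ sqrt(5))/2)
```

For k = m = 1 the two frequencies come out as 1/φ and φ (the golden ratio), a classic check value; a plate model simply yields many more roots, one per mode shape.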
2aEDb4. K-12 students and freshmen college teaching pattern for science. Ambika Bhatta (Phys., Lawrence High School, 1 University Ave., Lowell, MA 01854, ambika_bhatta@student.uml.edu), Patrick Nsumei, and Rafael Cabanas (Phys., Lawrence High School, Lawrence, MA)
This paper presents a model of classroom physics instruction at Health & Human Service (HHS) high school, Lawrence, Massachusetts. Our focus is to create new approaches, procedures, and concepts used in our classrooms to reach this demographic. The challenge is to motivate the students to do and appreciate science, particularly physics and acoustics. The presented work utilizes audio, video, computer-aided, and theoretical modalities to help students access the fundamental concepts. It will also be shown that this approach compensates for deficiencies in math and higher-order thinking skills.
The use of ultrasonic sensors in laboratory exercises and robotics has become popular in recent years. That is, however, only one avenue for exploring the world of inaudible sound around us. This talk discusses exercises for interacting with the ultrasonic world using smartphones, computers, and tablets. Messages can be sent through analog or digital means, and the very concept of what is inaudible varies significantly from one person to another. The use of ultrasound is not limited to range finding but also spans communication and art.
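One way such a classroom exercise can send a digital message near the edge of audibility is simple on-off keying of a high-frequency tone. This is a minimal sketch (the 18 kHz carrier, bit rate, and decoding threshold are illustrative choices, not from the talk):

```python
import numpy as np

fs = 48_000          # sample rate [Hz], typical for phone/laptop audio
fc = 18_000          # carrier near the edge of audibility for many adults
bit_dur = 0.01       # 10 ms per bit
bits = [1, 0, 1, 1, 0, 0, 1, 0]

# Encode: on-off keying -- carrier on for a 1, silence for a 0.
t = np.arange(int(bit_dur * fs)) / fs
carrier = np.sin(2 * np.pi * fc * t)
signal = np.concatenate([carrier * b for b in bits])

# Decode: measure the energy in each bit interval and threshold it.
frames = signal.reshape(len(bits), -1)
energy = (frames ** 2).mean(axis=1)
decoded = (energy > energy.max() / 2).astype(int).tolist()

print(decoded == bits)  # True
```

In a live demonstration the signal would be played through a speaker and re-recorded through a microphone, so the decoder would also need synchronization and a noise floor estimate; the sketch above shows only the encoding/decoding idea.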
2aEDb6. Conceptual understanding and situational interest of middle-school-aged youth who participate in 4-day soundscape science camp. Maryam Ghadiri Khanaposhtani (Dept. of Forestry and Natural Resources, Purdue Univ., B066 Mann Hall, West Lafayette, IN 47906, ghadiry85@gmail.com), ChangChia James Liu (Educational Studies, Purdue Univ., West Lafayette, IN), Bryan C. Pijanowski (Forestry and Natural Resources, Purdue Univ., West Lafayette, IN), and Daniel Shepardson (Departments of Curriculum and Instruction and Earth and Atmospheric Sci., Purdue Univ., West Lafayette, IN)
The purpose of this study was to investigate how participation in an inquiry-based environmental camp contributed to the conceptual understanding of middle-school-aged youth and triggered their situational interest in a new field called "Soundscape Ecology." The focus of this study was to understand how participants were affected cognitively and affectively by the primary attributes of the immersive soundscape activities. We used descriptive interpretive approaches and several sources of data: drawing activities, pre-post questionnaires, interviews, observations, and participant
artifacts. Our study showed that participants’ conceptual understanding as
well as their interest were positively affected by variables such as direct
interaction with nature, access to authentic technology, collaborative teamwork, and having choice and control. Our results suggest that scientific field
work, combined with opportunities to engage youth in scientific education
through the use of authentic tools, has the potential to foster an environment
in which participants can better comprehend scientific principles.
2aEDb7. Modal Analysis of a Bamboo Composite I-Beam—Results of a
collaborative interdisciplinary project. Eoin A. King (Acoust. Program
and Lab, Univ. of Hartford, 200 Bloomfield Ave., West Hartford, CT
06117, eoking@hartford.edu), Sigridur Bjarnadottir, and Hernan Castaneda
(Civil Eng., Univ. of Hartford, West Hartford, CT)
Bamboo has the potential to be considered a sustainable alternative for
conventional construction materials. Traditionally, bamboo culms were used
for structural applications (buildings, bridges) in certain parts of the world.
The structural behavior of bamboo culms can be unpredictable due to material variations, making them unsuitable for structural applications in the United States, for example. In recent years, glue-laminated bamboo, constructed from bamboo culms that have been crushed and glued together to form boards, has been developed. These boards maintain the excellent mechanical properties of bamboo (high tensile and compressive strength, excellent ductility) while eliminating some material uncertainty. Research into the feasibility of glue-laminated bamboo for structural applications, such as I-beams, is quite novel, and there are many avenues that must be investigated and validated before the standardization of bamboo as a construction material can occur. One key feature that needs to be assessed is how the material properties will influence the dynamic response of an I-beam during excitation. This paper presents the results of a modal analysis of a bamboo composite I-beam conducted as a collaborative project between undergraduate civil engineering and acoustical engineering students.
MONDAY MORNING, 26 JUNE 2017
BALLROOM C, 9:15 A.M. TO 12:20 P.M.
Session 2aIDb
Interdisciplinary: Neuroimaging Techniques I
Martin S. Lawless, Cochair
Graduate Program in Acoustics, The Pennsylvania State University, 201 Applied Science Building, University Park,
PA 16802
Adrian KC Lee, Cochair
University of Washington, Box 357988, University of Washington, Seattle, WA 98195
Sophie Nolden, Cochair
RWTH Aachen University, Jaegerstrasse 17/19, Aachen 52066, Germany
Z. Ellen Peng, Cochair
Waisman Center, University of Wisconsin-Madison, 1500 Highland Avenue, Madison, WI 53711
G. Christopher Stecker, Cochair
Hearing and Speech Sciences, Vanderbilt University, 1215 21st Ave South, Room 8310, Nashville, TN 37232
Chair’s Introduction—9:15
Invited Papers
9:20
2aIDb1. Approaches to pushing the limits of human brain imaging. Bruce Rosen (Radiology, Massachusetts General Hospital,
Bldg.149, 13th St., Rm. 2301, Charlestown, MA 02129, bruce@nmr.mgh.harvard.edu)
By enabling visualization of physiological processes, “Functional imaging,” broadly defined, has dramatically enhanced our ability
to explore and better understand human neuroscience and human disease. fMRI has become the keystone of a broad array of functional
imaging methods that are revealing the links between brain and behavior in normal and pathological states. Very high strength magnets
and advanced large-N phased-array coils now enable ultra-high spatial and temporal resolution MRI and fMRI, while advances in MR gradient coil technology have improved our ability to assess tissue microstructure and connectivity by almost an order of magnitude. Beyond MRI, positron emission tomography (PET) imaging provides the means to map neurochemical events with exquisite sensitivity, and recent work suggests the potential to extend neurochemical mapping toward quantification of receptor trafficking and measurements of metabolism and neurotransmitter release dynamics on time frames of a few minutes; tomographic optical imaging allows for portable, bedside assessment of hemodynamics and oxidative metabolism; and densely sampled whole-head magnetoencephalography can, when combined with fMRI, permit high-temporal-resolution mapping of both cortical and now subcortical brain activity.
10:00
2aIDb2. Finding and understanding cortical maps using neuroimaging. Martin I. Sereno (Dept. of Psych., San Diego State Univ.,
5500 Campanile Dr., San Diego, CA 92182, msereno@sdsu.edu)
Much of the neocortex, as well as many parts of the brainstem, are divided into “areas” that contain internal topological maps of receptor surfaces. Previously, it was only possible to find the borders and internal organization of these areas using invasive microelectrode mapping and post-mortem architectonics studies in animals. Advances in non-invasive neuroimaging methods over the past two
decades have made it possible to extend these studies to the living human brain. This talk summarizes the development and current
state-of-the-art of computational methods for non-invasive neuroimaging of cortical maps and cortical areas in human brains, focusing
on cortical-surface-based functional MRI, structural MRI, and diffusion-MRI neuroimaging analysis methods originally introduced by
my laboratory more than two decades ago. Although topological maps of receptor sheets (e.g., retina, skin touch receptors, and cochlea)
are often associated with the earliest stages of sensory processing in the brain, topological maps have turned up in many “higher level”
areas. In describing these findings, we emphasize the fundamental architectural bifurcation between visual and somatosensory maps,
which are based on a 2D receptor sheet, and auditory maps, which are based on a 1D line of receptors.
10:40–11:00 Break
11:00
2aIDb3. Neuroimaging of the speech network. Frank Guenther (Boston Univ., 677 Beacon St., Boston, MA 02115, guenther@cns.bu.edu)
Historically, the study of the neural underpinnings of speech has suffered from the lack of an animal model whose brain activity
could be measured using invasive electrophysiological techniques. The development of non-invasive structural and functional neuroimaging techniques in the latter part of the 20th century has led to a dramatic improvement in our understanding of the network of brain
regions responsible for speech production. Techniques for measuring regional cerebral blood flow, including positron emission tomography (PET) and functional magnetic resonance imaging (fMRI), have illuminated the neural regions involved in various aspects of
speech, including feedforward control mechanisms as well as auditory and somatosensory feedback control circuits. More recently,
fMRI studies utilizing repetition suppression have been used to identify the neural representations used in different parts of the speech
network, including the identification of a syllable representation in left ventral premotor cortex. Magnetic resonance imaging has also
been used to investigate the anatomical structure of the speech network, providing crucial information regarding connectivity within the
network as well as identifying anomalies in the sizes of neural regions and/or white matter pathways in speech disorders.
11:40
2aIDb4. Using electroencephalography as a tool to understand auditory perception: Event-related and time-frequency analyses.
Laurel Trainor (Psych., Neuroscience & Behaviour, McMaster Univ., 1280 Main St. West, Hamilton, ON L8S4K1, Canada, ljt@
mcmaster.ca)
Electroencephalography (EEG) largely reflects postsynaptic field potentials summed over many (hundreds of thousands of) neurons
that are aligned in time and orientation. These electrical fields propagate in all directions such that determination of the sources of electrical fields measured at the surface of the head is much less accurate than localizations using fMRI. However, EEG can be measured
with sub-millisecond timing resolution, offering a great advantage for studies of hearing. EEG can measure activity from various nuclei
along the subcortical pathway, from primary and secondary auditory cortex and from cortical regions beyond. EEG can be particularly
useful for understanding preconscious processing stages, and auditory processing in infants and others who cannot make verbal
responses. Traditional methods of analysis relate peaks (“components”) in the EEG time waveform to stages of processing. However,
communication between brain circuits is reflected in neural oscillations, which can be measured through time-frequency analyses of
EEG recordings. Such approaches reveal, for example, how frequency is encoded in the brainstem, and how predictive timing and predictive coding are accomplished in the cortex. I will illustrate these points with example applications largely from our lab and argue that
EEG can greatly enhance our interpretation of psychophysical data.
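The time-frequency analyses described above can be sketched with a short-time Fourier transform on a synthetic EEG-like signal; this is a minimal illustration (the 250 Hz sample rate, the alpha/gamma test frequencies, and the window settings are assumptions, not details from the talk):

```python
import numpy as np
from scipy.signal import spectrogram

fs = 250                                  # assumed EEG sample rate [Hz]
t = np.arange(0, 4, 1 / fs)

# Synthetic "EEG": 10 Hz (alpha-band) for 2 s, then 40 Hz (gamma-band)
# for 2 s, plus broadband noise.
rng = np.random.default_rng(0)
x = np.where(t < 2, np.sin(2 * np.pi * 10 * t), np.sin(2 * np.pi * 40 * t))
x = x + 0.2 * rng.standard_normal(t.size)

# Time-frequency analysis: short-time Fourier transform (0.5 s windows,
# heavy overlap for smooth time resolution).
f, tt, Sxx = spectrogram(x, fs=fs, nperseg=125, noverlap=100)

# Dominant oscillation frequency in the first vs. second half.
early = f[Sxx[:, tt < 2].mean(axis=1).argmax()]
late = f[Sxx[:, tt >= 2].mean(axis=1).argmax()]
print(early, late)  # ~10 Hz early, ~40 Hz late
```

The same approach, with wavelets instead of a fixed window, is what typically reveals the oscillatory band power changes (e.g., induced gamma activity) that the event-related time waveform alone does not show.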
MONDAY MORNING, 26 JUNE 2017
ROOM 200, 9:15 A.M. TO 12:20 P.M.
Session 2aMU
Musical Acoustics and Psychological and Physiological Acoustics: Session in Memory of David Wessel
William M. Hartmann, Cochair
Physics and Astronomy, Michigan State University, Physics-Astronomy, 567 Wilson Rd., East Lansing, MI 48824
Andrew C. Morrison, Cochair
Natural Science Department, Joliet Junior College, 1215 Houbolt Rd., Joliet, IL 60431
Chair’s Introduction—9:15
Invited Papers
9:20
2aMU1. David Wessel—The Michigan State years. William M. Hartmann (Phys. and Astronomy, Michigan State Univ., Physics-Astronomy, 567 Wilson Rd., East Lansing, MI 48824, hartmann@pa.msu.edu)
After finishing his Ph.D. work with Estes at Stanford, David Wessel came to Michigan State as an assistant professor of psychology.
Dave arrived with a strong background in mathematical psychology and with an intense passion for making music with computers. In
the 1970s, Dave and I presented two electronic music concerts to standing-room-only audiences. The division of our labors was straightforward. I provided the venue and audio system. Dave provided all the music, which he was able to do because of his many connections
with electronic musicians around the world. Dave also organized the first computer music conference, held in East Lansing during one
of the worst snow storms of the year. This talk will also describe Dave’s initial studies of musical timbre and his early years at IRCAM,
as I observed them from East Lansing and in Paris.
9:40
2aMU2. David Wessel—The IRCAM years. Tod Machover (Media Lab, MIT, MIT Media Lab, 75 Amherst St., Cambridge, MA
02139, tod@media.mit.edu)
David Wessel was a musical visionary who combined scientific rigor, technological savvy, and sonic adventure to powerfully influence the formative years of Pierre Boulez's IRCAM in Paris. Wessel was trained in mathematics and percussion, receiving a Stanford
Ph.D. in musical psychoacoustics. He brought these specialties to computer music at the crucial moment when real-time digital synthesis
was being developed. His Antony (1977) was the first musical work to use Giuseppe di Giugno’s 4A machine, and his Timbre Maps
(1978) demonstrated for the first time that sonority alone could produce structural relationships. Wessel brought free-jazz principles to
live computer music performance, and was a pioneer in understanding and influencing the development of MIDI. Wessel became
IRCAM’s Director of Pedagogy in 1980, and in that role inspired a generation of international composers, technologists and scientists, a
veritable Who’s Who of today’s most prominent creators. As a member of IRCAM’s Artistic Committee, Wessel influenced the selection of artists for IRCAM residences and helped to invent a successful model for combining pedagogy, research and creation. Above all,
David Wessel’s omnivorous love of all kinds of music, and his deep generosity, brought an unequaled spark of humanity to the world of
man, music, and machines.
10:00
2aMU3. David Wessel’s Inventive Directorship of UC Berkeley’s Center for New Music and Audio Technologies (CNMAT).
Adrian Freed (1608 MLK Jr. Way, Berkeley, CA 94709, adrian@adrianfreed.com)
The main professional focus of David Wessel’s final 30 years was the development and nurturing of UC Berkeley’s Center for New
Music and Audio Technologies (CNMAT). Jean-Baptiste Barrière succinctly described David Wessel as bringing a scientific consciousness to music and a musical consciousness to science. This was manifest in CNMAT practice by building apparatus and instruments that served musical production AND research: apparatus concurrently validated as novel and significant in three communities:
music, science and engineering. I present the major achievements of CNMAT and the special transdisciplinary practices that made the
center so productive for its modest size in its 3 concurrent spheres: research, music creation, and education. This will include a brief
case study of CNMAT’s unique acoustics research apparatus, a 141-driver spherical speaker array. Larger institutions attempted unsuccessfully to create such a high resolution array. David Wessel led CNMAT’s success by attracting strong researchers and support engineers over an extended period, creatively finding funding from a diverse combination of extra-mural, government and industry sources,
bringing together experts from multiple institutions internationally and tapping the intellectual capital of the UC Berkeley academic
community. I conclude by pointing out recent initiatives of David’s mentees who carry CNMAT practices in their work.
10:20
2aMU4. David Wessel—A unique professor in Berkeley. Ervin Hafter (Psych., Univ. of California, Berkeley, 1854 San Lorenzo
Ave., Berkeley, CA 94707, hafter@berkeley.edu)
Among my best experiences during a half century in Berkeley was the opportunity to interact as friend and colleague of Professor
David Wessel. His remarkable blend of scientific brilliance and creativity allowed him to look deeply into questions, figure out the good
bits, and come up with new and exciting approaches to an answer. This connection between theory and solution defined his role on a student’s committees, and I found that both students and their advisors were grateful for the clarity of his advice. If you asked David for
help, you found a seemingly endless fount of generosity, a gift that made him special to everyone who worked with him. Today, I will
reminisce on an array of memories: his depth of knowledge in both the sciences and the arts, his skills as a chef, his willingness to provide technical help to everyone, his compulsion to get us interested in new forms of music, and even his charming Midwestern-American-accented French. Yes, Wessel was a brilliant guy, but we will also remember him as a particularly sweet person who left a mark on those fortunate enough to have known him.
10:40–11:00 Break
11:00
2aMU5. David Wessel: A few stories of an antidisciplinarian. Psyche Loui (Wesleyan Univ., 207 High St., Middletown, CT 06459,
ploui@wesleyan.edu)
Timbre, gesture, Open Sound Control, additive synthesis, parallel computing: These were just a few of David Wessel's many brainchildren. As a Professor of Music, a founding director of the Center for New Music and Audio Technologies (CNMAT), and an Affiliate
Professor of Psychology at UC Berkeley, David Wessel was a wise advisor and a wonderful scientist, musician, and friend. I will narrate
the exceptional experience of working with the creative force that was David Wessel, both from the perspective of a music perception
and cognition scientist, and through the lens of Berkeley’s CNMAT, which thrived under his leadership as a synergistic center for performers, researchers, and composers.
Contributed Paper
2aMU6. Pitching timbre analogies with David Wessel. Punita G. Singh
(Sound Sense, 16 Gauri Apartments, 3 Rajesh Pilot Ln., New Delhi 110011,
India, punita@gmail.com)
Contemporary thinking and research on timbre and its use as a dynamic,
structural component in music performance have been profoundly influenced by the insights and insounds of David Wessel. His intrepid and creative approach opened up vistas of timbre spaces navigable through
multidimensional trajectories. Wessel's experiments with timbre streaming [Computer Music J. 3, 45-52 (1979)] inspired my own work on perceptual organization of complex-tone sequences [Singh, J. Acoust. Soc. Am. 82, 886-899 (1987)]. The finding of a timbre "interval," akin to a pitch interval, as a threshold for streaming reinforced Wessel's notion of timbral analogies [Ehresman and Wessel, IRCAM Rep. 13/78 (1978)]. Later work on measuring timbre differences through F0 thresholds for streaming [Singh and Bregman, J. Acoust. Soc. Am. 102, 1943-1952 (1997)] also lent support to the idea of intervallic relationships between timbres. More recently, my
work relating Auditory Scene Analysis to Hindustani rhythms brought us together again, presenting and drumming in a multicultural percussion session
at the ASA meeting in San Francisco in 2013. For a person so into timing
and timbre, David’s untimely departure dealt a discordant blow that can be
partially assuaged through such tributes that review, extend, and honor his
work.
11:20
Invited Papers
11:40
2aMU7. David Wessel—A scholar and a performer. Andrew C. Morrison (Natural Sci. Dept., Joliet Junior College, 1215 Houbolt
Rd., Joliet, IL 60431, amorrison@jjc.edu)
In addition to his scholarly work in many fields related to music psychology and computer music, David Wessel was a
gifted composer and a talented percussionist. In this presentation, I will mention my brief interactions with David as we looked at the
acoustics of a unique musical instrument: the hand-played hang. The hang is an instrument inspired by the Caribbean steelpan which
was popular for a time with many percussionists. After a few comments, we will remember David’s musical legacy with some videos of
his performances.
12:00–12:20 Panel Discussion
MONDAY MORNING, 26 JUNE 2017
ROOM 203, 9:15 A.M. TO 12:20 P.M.
Session 2aNSa
Noise, Architectural Acoustics, and ASA Committee on Standards: Noise Impacts and Soundscapes on
Outdoor Gathering Spaces I
Brigitte Schulte-Fortkamp, Cochair
Institute of Fluid Mechanics and Engineering Acoustics, TU Berlin, Einsteinufer 25, Berlin 10587, Germany
K. Anthony Hoover, Cochair
McKay Conant Hoover, 5655 Lindero Canyon Road, Suite 325, Westlake Village, CA 91362
Chair’s Introduction—9:15
Invited Papers
9:20
2aNSa1. Tranquillity in the city—Building resilience through identifying, designing, promoting, and linking restorative outdoor
environments. Greg Watts (Eng. and Informatics, Univ. of Bradford, Chesham, Richmond Rd., Bradford, West Yorkshire BD7 1DP,
United Kingdom, g.r.watts@bradford.ac.uk)
Tranquil spaces can be found and made in the city, and their promotion and use by residents and visitors is an important means of building resilience. Studies have shown that spaces rated by visitors as tranquil are more likely to produce higher levels of relaxation and lower anxiety, which should ultimately result in health and well-being benefits. Such spaces can therefore be classed as restorative environments. Tranquil spaces are characterized by a soundscape dominated by natural sounds and low levels of man-made noise. In addition, the presence of vegetation and wildlife has been shown to be an important contributory factor. Levels of rated tranquillity can be reliably predicted using a previously developed model, TRAPT, and then used to design and identify tranquil spaces, improve existing green spaces, and develop Tranquillity Trails to encourage usage. Tranquillity Trails are walking routes designed to enable residents and visitors to reflect and recover from stress while receiving the benefits of healthy exercise. By way of example, three Tranquillity Trails designed for contrasting areas are described. Predictions of the rated tranquillity have been made along these widely contrasting routes. Feedback from users was elicited and used to gauge benefits.
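Models of the TRAPT type predict a tranquillity rating from the proportion of natural features in view and the man-made noise level. The sketch below shows the general linear form only; the coefficients are illustrative placeholders, not the published TRAPT values:

```python
def rated_tranquillity(natural_features_pct: float, noise_level_db: float) -> float:
    """Predict rated tranquillity on a 0-10 scale from the percentage of
    natural features in view and the man-made noise level (LAeq).

    TRAPT-style linear form; these coefficients are hypothetical
    placeholders chosen for illustration, not the published model.
    """
    tr = 9.7 + 0.04 * natural_features_pct - 0.15 * noise_level_db
    return min(10.0, max(0.0, tr))  # clamp to the rating scale

# A quiet green space scores higher than a busy roadside.
print(rated_tranquillity(80, 45) > rated_tranquillity(10, 70))  # True
```

The useful design insight is the trade-off the form makes explicit: a deficit in quietness can be partly offset by increasing visible natural features, which is how Tranquillity Trails can be routed through less-than-silent city fabric.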
9:40
2aNSa2. Acoustic renovation for an office courtyard near a busy highway. Bennett M. Brooks and Nathaniel Flanagin (Brooks
Acoust. Corp., 30 Lafayette Square - Ste. 103, Vernon, CT 06066, bbrooks@brooks-acoustics.com)
A complex of several office buildings utilizes a common courtyard as an outdoor gathering space. Regularly scheduled events and
celebrations occur in this space. These activities can be disrupted by the noise emitted by vehicular traffic on a nearby busy highway.
Field tests were conducted to quantify and characterize the background ambient sound due to the road traffic. The office complex, courtyard, and highway were modeled in a computer aided design system, and various renovation concepts to reduce the perception of highway noise at the venue were studied. The auralized results of design studies were used to generate a virtual reality presentation for
evaluation by office management. The reduction of highway noise by several treatment options was noticeable. Also, perceived voice
clarity (speech intelligibility) in the courtyard improved with reduced noise. The recommended design treatment provided a significant
calculated reduction in highway noise level, with an improved acoustic environment (as perceived) for comfort and speech clarity.
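For a first estimate of what a courtyard barrier or "sound wall" treatment can achieve, a standard single-number tool is Maekawa's chart fit for diffraction over a thin barrier. This sketch is generic screening math, not the modeling method used in the study above; the 0.5 m path difference is an arbitrary example:

```python
import math

def maekawa_attenuation(path_diff_m: float, freq_hz: float, c: float = 343.0) -> float:
    """Approximate barrier insertion loss [dB] using Maekawa's chart fit,
    10*log10(3 + 20*N), where N = 2*delta/lambda is the Fresnel number
    and delta is the source-receiver path-length difference over the barrier."""
    wavelength = c / freq_hz
    n = 2 * path_diff_m / wavelength
    return 10 * math.log10(3 + 20 * n)

# A 0.5 m path-length difference attenuates 1 kHz traffic noise by ~18 dB.
print(round(maekawa_attenuation(0.5, 1000.0), 1))
```

Because N scales with frequency, the same barrier attenuates the low-frequency rumble of traffic much less than its higher-frequency content, which is one reason perceived speech clarity improves more than the overall level suggests.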
10:00
2aNSa3. Attention focusing in complex acoustical environments. Sabrina Skoda (Inst. of Sound and Vib. Eng., Düsseldorf Univ. of Appl. Sci., Münsterstraße 156, Düsseldorf 40476, Germany, sabrina.skoda@hs-duesseldorf.de), Brigitte Schulte-Fortkamp (Inst. of Fluid Mech. and Eng. Acoust., TU Berlin, Berlin, Germany), Andre Fiebig (HEAD Acoust. GmbH, Herzogenrath, Germany), and Jörg Becker-Schweitzer (Inst. of Sound and Vib. Eng., Düsseldorf Univ. of Appl. Sci., Düsseldorf, Germany)
The prediction of the perceived overall sound quality of environments consisting of multiple sound sources poses a challenge. The
interaction of different sound events results in a great amount of sensory information to be processed but human cognitive capacity is
limited. Therefore, listeners tend to focus attention on specific events, the choice of which is influenced not only by the available environmental information but also by the current tasks being performed and individual conditions. To investigate how human attention can
be drawn to singular sound sources in complex environments and how this affects the overall evaluation, a series of listening experiments was carried out at Duesseldorf University of Applied Sciences. Participants were asked to evaluate the sound quality of different
acoustical environments which consisted of varying combinations of environmental sounds. The most noticeable sound events were
identified and were individually rated in the same evaluation. The results show that the overall pleasantness in a complex acoustical
environment can be well explained based on the ratings of singular environmental sounds.
10:20
2aNSa4. A methodology to awaken citizens' awareness of the effects of leisure noise. Luigi Maffei, Massimiliano Masullo, Giuseppe Ciaburro, and Luigi D'Onofrio (Dept. Architecture and Industrial Design, Università degli Studi della Campania "Luigi Vanvitelli," Via S. Lorenzo, Aversa 81031, Italy, luigi.maffei@unina2.it)
In recent years, several city administrations have proposed urban renewal projects for historical districts to address neglect and degradation. Frequently, these projects introduce new outdoor social activities, such as pubs, bistros, and shops along and/or on the streets, with substantial investment, and their success is then measured by the increase in local residents and tourists who frequent these sites during the day and at night. On the other hand, the more people frequent these sites, the more crowd noise, and the more complaints from the resident population, can be expected. Ultimately, the imposition of administrative restrictions can nullify the original aim of the project. Starting from these considerations, a methodology is proposed based on sound recordings of different outdoor gathering spaces, subjective questionnaires administered during listening tests, feature-extraction algorithms, and, finally, artificial neural networks. The methodology aims to improve citizens' awareness of the impacts that leisure noise may have on a specific urban project and, in general, on the quality of urban environments.
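The feature-extraction stage that feeds such a neural network can be sketched as follows; the specific features (RMS level, spectral centroid, zero-crossing rate) and the test signals are illustrative assumptions, not the features used by the authors:

```python
import numpy as np

def acoustic_features(x: np.ndarray, fs: int) -> dict:
    """Toy feature-extraction step of the kind fed to a classifier:
    level, spectral centroid, and zero-crossing rate (illustrative choices)."""
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(x.size, 1 / fs)
    return {
        "rms_db": 20 * np.log10(np.sqrt((x ** 2).mean()) + 1e-12),
        "spectral_centroid_hz": (freqs * spectrum).sum() / (spectrum.sum() + 1e-12),
        "zero_crossing_rate": np.mean(np.abs(np.diff(np.sign(x)))) / 2,
    }

fs = 16_000
t = np.arange(fs) / fs
quiet_hum = 0.05 * np.sin(2 * np.pi * 100 * t)                    # tonal background
crowd_like = 0.5 * np.random.default_rng(1).standard_normal(fs)   # broadband "crowd"

# A broadband crowd-like recording is both louder and spectrally brighter.
print(acoustic_features(crowd_like, fs)["rms_db"]
      > acoustic_features(quiet_hum, fs)["rms_db"])  # True
```

In the methodology described above, vectors of such features extracted from the gathering-space recordings would be the inputs on which the neural network learns to relate sound character to the questionnaire responses.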
10:40
2aNSa5. In-situ measurements of soundscapes at outdoor gathering spaces—How to consider reliability aspects. Andre Fiebig
(HEAD Acoust. GmbH, Ebertstr. 30a, Herzogenrath 52134, Germany, andre.fiebig@head-acoustics.de)
It is well acknowledged that soundscape investigations must be carried out in the original context. Original contexts like outdoor gathering spaces usually show strongly varying behavior and are highly uncontrolled. Therefore, if outdoor spaces with highly time-variant noise conditions are considered, it is difficult to measure the status quo of the site under scrutiny reliably while reflecting measurement uncertainties appropriately. When and for how long measurements must be made, and how residents or visitors must be interviewed to obtain reliable and valid data, are emerging questions in the context of soundscape investigations. To investigate the reliability of acoustic as well as perceptual measurement data gained by in-situ measurements at specific outdoor gathering spaces, consecutive acoustic and perceptual measurements were performed in Aachen and analyzed. Based on these measurements, some basic requirements regarding reliability and validity were derived and will be discussed. Such investigations are relevant for the preparation of the second part of the International Standard on Soundscape, ISO 12913-2, dealing with data collection.
11:00
2aNSa6. Nestled in nature and near to noise—The Ford Amphitheatre. K. Anthony Hoover (McKay Conant Hoover, 5655 Lindero
Canyon Rd., Ste. 325, Westlake Village, CA 91362, thoover@mchinc.com)
The historic Ford Amphitheatre was relocated in 1920, from what would later become the location of the Hollywood Bowl, across
the Hollywood Freeway, into an arroyo with a dramatic natural backdrop. Since then, the freeway has become increasingly noisy. The
original wood structure was destroyed by a brush fire in 1929, and rebuilt in concrete in 1931, with several subsequent modifications and
renovations. The current major renovation to this 1200-seat outdoor amphitheatre includes an expanded “sound wall” that will help to
mitigate freeway noise while providing optimal lighting and control positions. The remarkably uniform distribution of ambient noise
throughout the seating area and the apparent contributions by the surrounding arroyo will be discussed, along with some of the unique
design and construction opportunities.
11:20
2aNSa7. Recovering historic noise. Pamela Jordan (Eng. Acoust., Tech. Univ. Berlin, Technische Universität Berlin, Institut für Strömungsmechanik und Technische Akustik, Fachgebiete der Technischen Akustik, Sekr. TA7, Einsteinufer 25, Berlin 10587, Germany, pam.f.jordan@gmail.com)
It is easy to think of noise like refuse—an undesired byproduct of other activities to be minimized or eliminated. However, the richest source of information about a historic site is frequently the garbage that individuals and groups have left behind. Applying the same
logic, non-designed sound can also be perceived as an essential component of a heritage location, providing a sensorial understanding of
past realities as well as contemporary conditions. Could noise be approached as a resource rather than simply a dilemma, even in cities?
This paper seeks to reframe the concept of noise in urban environments with a focus on outdoor heritage sites. The Berlin Wall will be
presented as a case study where visitors’ perception of unintentional sound provides key information about the past. Inadvertent preservation of historic noise sources and patterns along the Wall, such as vehicular traffic, landscape maintenance, and visitor crowds, has
enabled visitors to experience the soundscape of the past in situ rather than through a recording or secondary source. By extending valuations beyond the present moment, it is possible to see the potential value in all sounds.
11:40
2aNSa8. Lessons learned from successful projects in soundscapes of outdoor spaces. Brigitte Schulte-Fortkamp (Inst. of Fluid Mech. and Eng. Acoust., TU Berlin, Einsteinufer 25, Berlin 10587, Germany, b.schulte-fortkamp@tu-berlin.de)
One of the most successful projects within soundscapes is the redevelopment of the Nauener Platz in Berlin. Integrating the soundscape approach from the beginning of the project enabled a horizontal, long-term dialog with the people in the area. The resulting project was effectively guided by many participants, producing a unique solution for mitigating noise and creating a much-needed "backyard" for the local residents through an improved soundscape. Follow-up evaluation confirms the long-term positive effect of the project. The temptation with this level of success is to apply the strategies from Nauener Platz wholesale to other locations and attempt to replicate its achievements. However, this would be a false promise, even at seemingly very similar sites. An instructive example is the Berlin Wall Memorial, which shares many physical attributes with the Nauener Platz. However, its political and historical layers provide a very different setting. The paper will discuss the similarities and differences with regard to the ISO standard.
12:00
2aNSa9. Soundscape analysis and modeling of outdoor gathering spaces. Gary W. Siebein, Hyun Paek, Marylin Roa, Gary Siebein,
and Keely Siebein (Siebein Assoc., Inc., 625 NW 60th St., Ste. C, Gainesville, FL 32607, gsiebein@siebeinacoustic.com)
Case studies of three outdoor gathering spaces will be presented to illustrate the soundscape concepts embodied in each. Case study
1 is an outdoor amphitheater that is located near a residential neighborhood. Concerts at the amphitheater raised concerns from residents
about acoustical measuring methods. Case study 2 is a lively restaurant with a large outdoor seating area where guests eat and listen to
music played by a single performer or small group. Case study 3 is a series of outdoor restaurants that do not have live entertainment.
The use of different types of soundwalks to capture the acoustical signature of these venues; the acoustical communities involved in
each situation; taxonomies of the sounds that occur at each; the specific acoustical events that comprise the ambient sound in each case;
acoustical measurements and modeling used in the analysis of each venue; the extent of the acoustical rooms for performing and listening; an acoustical calendar; the design of the acoustical interventions; and live experiments to document ranges of conditions to residents
are documented and discussed.
MONDAY MORNING, 26 JUNE 2017
ROOM 202, 9:15 A.M. TO 12:20 P.M.
Session 2aNSb
Noise, Physical Acoustics, ASA Committee on Standards, and Structural Acoustics and Vibration: Sonic
Boom Noise II: Mach Cutoff, Turbulence, Etc.
Philippe Blanc-Benon, Cochair
Centre acoustique, LMFA UMR CNRS 5509, Ecole Centrale de Lyon, 36 avenue Guy de Collongue, 69134 Ecully Cedex, France
Victor Sparrow, Cochair
Grad. Program in Acoustics, Penn State, 201 Applied Science Bldg., University Park, PA 16802
Chair’s Introduction—9:15
Invited Papers
9:20
2aNSb1. The Aerion AS2 and Mach cut-off. Jason Matisheck (Aerion Corp., 5190 Neil Rd., Ste. 500, Reno, NV 89502, jrmatisheck@
aerioncorp.com)
The Aerion supersonic business jet is intended to fly at Mach cut-off conditions over land. We will outline the history of investigations into Mach cut-off flight, present Aerion's concept of operations for Mach cut-off, and discuss the current state of the
technology, identifying areas where further research is required.
9:40
2aNSb2. Preliminary assessment and extension of an existing Mach cut-off model. Zhendong Huang and Victor Sparrow (Graduate
Program in Acoust., The Penn State Univ., 201 Appl. Sci. Bldg., University Park, PA 16802, zfh5044@psu.edu)
Nicholls’ Mach cut-off theory is re-examined for suitability and extensibility. Nicholls’ analytical predictions (from 1971) determined the cut-off Mach numbers based on the local speed of sound, along with wind speed and direction at flight elevation and locations
where the acoustic rays bend parallel to the ground’s surface. The current investigation seeks to understand the limitations of the original
formulation and to develop modified predictions to account for realistic and continuous atmospheric profiles. In this presentation, several
differences between Nicholls’ work and an extended ray tracing model are highlighted. The influence of atmospheric profiles and model
assumptions on cut-off Mach numbers is examined. [Work supported by the FAA. The opinions, findings, conclusions, and recommendations expressed in this material are those of the authors and do not necessarily reflect the views of ASCENT FAA Center of Excellence
sponsor organizations.]
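The ray-bending condition at the heart of this kind of cut-off analysis can be illustrated with a short sketch. In a stratified atmosphere, downward-going rays turn back upward wherever the effective sound speed below the aircraft reaches the boom's trace speed M·c(z_flight); flight is below cut-off when the maximum effective sound speed between the ground and the aircraft exceeds that trace speed. The toy Python below is not from the paper: the function name `cutoff_mach` and the simple lapse-rate atmosphere are illustrative assumptions.

```python
import numpy as np

def cutoff_mach(z_flight, c_profile, u_profile, z_grid):
    """Estimate the cut-off Mach number for level flight at z_flight.

    Rays refract upward (Snell's law) wherever the effective sound
    speed below the aircraft reaches the boom's trace speed
    M * c(z_flight); the largest effective speed below the aircraft
    therefore sets the cut-off Mach number.
    """
    c_eff = c_profile + u_profile              # effective speed (downwind)
    below = z_grid <= z_flight
    c_flight = np.interp(z_flight, z_grid, c_profile)
    return np.max(c_eff[below]) / c_flight

# Toy atmosphere: 6.5 K/km lapse rate up to 11 km, isothermal above
z = np.linspace(0.0, 15e3, 301)                # altitude, m
T = 288.15 - 6.5e-3 * np.minimum(z, 11e3)      # temperature, K
c = np.sqrt(1.4 * 287.0 * T)                   # sound speed, m/s
u = np.zeros_like(z)                           # no wind in this sketch

M_c = cutoff_mach(12e3, c, u, z)
print(f"cut-off Mach number at 12 km: {M_c:.3f}")
```

For this windless standard-atmosphere profile the estimate lands near the commonly quoted value of about Mach 1.15; realistic, continuous profiles with wind (the subject of the abstract) shift this number.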
10:00
2aNSb3. Sensitivity analysis of supersonic Mach cut-off flight. Gregory R. Busch, Jimmy Tai, Dimitri Mavris, Ruxandra Duca, and
Ratheesvar Mohan (Aerosp. Eng., The Georgia Inst. of Technol., 275 Ferst Dr. NW, Atlanta, GA 30332, gbusch3@gatech.edu)
Supersonic aircraft designers are pursuing various methods to help facilitate the re-introduction of overland supersonic flight operations, and a substantial amount of research has been conducted over recent years to demonstrate its feasibility. An alternative method for satisfying the noise standards for supersonic aircraft is more operation-oriented. Under Mach cut-off conditions, the vehicle still generates
sonic booms, but the acoustic waves refract in such a way that they do not reach the ground. To better understand the propagation of sonic
booms during Mach cut-off flight, Georgia Tech (GT) has conducted research under the FAA’s Aviation Sustainability Center
(ASCENT). An acoustical model for Mach cut-off flight was developed—GT leveraged this model for sensitivity analysis. The Mach
cut-off model allowed GT to vary both atmospheric and flight conditions to study how these dynamic parameters impact sonic boom signatures through the atmosphere. The results of these analyses provide greater insight into how Mach cut-off flight can be achieved and
highlight potential technologies to facilitate its re-introduction. [This work was supported by the FAA. The opinions, findings, conclusions, and recommendations expressed in this material are those of the authors and do not necessarily reflect the views of ASCENT
FAA Center of Excellence sponsor organizations.]
2a MON. AM
10:20–10:40 Break
10:40
2aNSb4. Subjective study on attributes related to Mach-cutoff sonic booms. Nicholas D. Ortega, Michelle C. Vigeant, and Victor
Sparrow (Acoust., The Penn State Univ., 201 Appl. Sci. Bldg., University Park, PA 16802, njo5068@psu.edu)
Mach cut-off occurs under certain flight conditions (aircraft speed, atmospheric conditions, etc.) in which sonic booms do not reach
the ground’s surface. The theory has been around for over 40 years, but no studies have investigated the perception of such sounds. The
goal of the present investigation is to develop a vocabulary related to the perception of these sounds, with the intent of using this vocabulary to study annoyance. The current study uses sounds recorded by NASA’s “Far-field Investigation of No-boom Thresholds” (FaINT)
tests. Subjects listened to sets of Mach cut-off audio recordings and were asked to provide descriptors to characterize the sounds. Subjects then listened to each recording and assigned descriptors from their individual set of attributes to each sound, effectively categorizing the sounds. The analysis was performed across subjects to refine definitions and identify commonalities. The vocabulary is used in a
follow-up study wherein participants rate their annoyance to a variety of Mach cut-off sounds. The subject test data and subsequent analyses are discussed. [Work supported by the FAA. The opinions, findings, conclusions, and recommendations expressed in this material
are those of the authors and do not necessarily reflect the views of ASCENT FAA Center of Excellence sponsor organizations.]
11:00
2aNSb5. Laboratory test bed for sonic boom propagation. Michael Bailey, Wayne Kreider, Barbrina Dunmire (Ctr. for Industrial and
Medical Ultrasound, Appl. Phys. Lab, Univ. of Washington, UW, 1013 NE 40th St., Seattle, WA 98105, mike.bailey.apl@gmail.com),
Vera A. Khokhlova, Oleg A. Sapozhnikov (Phys. Faculty, Moscow State Univ., CIMU, APL, Univ. of Washington, Seattle, WA),
Julianna C. Simon, and Victor W. Sparrow (Graduate Program in Acoust., Penn State Univ., University Park, PA)
Varying ethanol concentration with depth, Hobaek et al. [AIP Conf. Proc. 838 (2006)] simulated a scaled atmospheric sound speed profile
in a water tank and mimicked sonic boom propagation. Some limitations of their groundbreaking design included complexity, fragility, size, safety, one-dimensionality, tonal bursts instead of impulsive shocks, and the inability to scale density, absorption, turbulence,
or other nonlinearities. Recognizing the time and cost savings of scaled overflight experiments in a laboratory environment, this work is
meant to address certain limitations and investigate the Mach cut-off phenomenon. Our primary advances are to use gel layers instead of
fluid mixtures and to use lithotripter shock pulses. We defocused and modified a Dornier Compact S lithotripter to vary source angle in a 500-liter polycarbonate tank, measured shocks with a fiber optic hydrophone, and developed fluid-like gels with negligible shear-wave generation and robustness against cavitation. The sound speed, attenuation, inhomogeneity, and nonlinearity are adjustable within each gel
layer. The result is a more flexible, controllable, and durable atmospheric analogue. [Work supported by the FAA. The opinions, findings, conclusions, and recommendations expressed in this material are those of the authors and do not necessarily reflect the views of
ASCENT FAA Center of Excellence sponsor organizations.]
11:20
2aNSb6. Sonic boom near-field predictions and its impact on the accuracy of ground signature predictions. Hao Shen (Boeing
Res. & Technol., The Boeing Co., 325 James S Mcdonnell Blvd., MC S306-4030, Hazelwood, MO 63042, hao.shen@boeing.com)
The widely adopted sonic boom prediction process contains two steps: the near-field signature prediction from a 3D supersonic flow
solver, and the ground signature prediction from a quasi-one-dimensional far-field propagation model following the acoustic ray path.
The far-field propagation prediction has to start at a near-field distance sufficiently far away from the airplane flight path to ensure accuracy from the quasi-one-dimensional model. A recent study indicates that the widely adopted practice of using 2-3 body lengths from the
flight path is not sufficient for all airplane configurations. Using a larger near-field distance, however, creates a serious challenge to the
CFD-based near-field prediction method, which suffers either a degradation in solution accuracy or a sharp increase in computational cost, or both. A newly developed space-marching approach for high-fidelity, high-efficiency near-field prediction is revisited
here. It is capable of providing high accuracy solution at large near-field distance with a small fraction of computational cost required by
conventional CFD solvers. It is applied here to study the sensitivity of the ground signatures to the near-field signature distance for a
notional low sonic boom airplane configuration to find the optimal near-field signature distance.
11:40
2aNSb7. Numerical simulation of sonic boom focusing and its application to mission performance. Sriram Rallabhandi and James
W. Fenbert (AMA Inc., Rm. 190-25, Mailstop 442, NASA Langley Res. Ctr., Hampton, VA 23681, sriram.rallabhandi@nasa.gov)
This paper describes the effort undertaken to include numerical boom focusing in a lossy Burgers-equation-based sonic boom propagation tool called sBOOM. Traditional ray acoustics may break down during acceleration or other aircraft maneuvers that cause the ray
tube area to approach zero. The paper describes a way to numerically predict focused sonic boom signatures using both Gill-Seebass similitude and the solution to the non-linear Tricomi equation that models the physics of boom focusing. The paper then uses this
capability to determine focusing and non-focusing climb trajectories and their impact on mission performance.
12:00
2aNSb8. International Civil Aviation Organization Supersonic Task Group overview and status. Sandy R. Liu and Bao Tong
(Noise Div., Federal Aviation Administration, 800 Independence Ave., SW, Washington, DC 20591, sandy.liu@faa.gov)
A resurgence of interest in supersonic flight has emerged due to recent technological advances. Given the global impact of such aircraft, environmental standards and recommended practices (SARPs) are being developed under the International Civil Aviation Organization (ICAO) to ensure the mitigation of accompanying sonic booms prior to any re-introduction of civil supersonic flight operations. In
2004, ICAO’s Committee on Aviation Environmental Protection (CAEP) established the supersonics work program under Working Group 1 (WG1)
to develop SARPs. Current proposed concepts include (1) traditional sonic boom aircraft (like the Concorde), which operate supersonically over water; (2) low-boom designs with overland operation capabilities; and (3) traditional aircraft flying under Mach cut-off conditions. To date, WG1/SSTG continues to formulate an en route sonic boom SARP for civil supersonic airplanes. Metrics, test procedures,
and a data framework continue to be investigated for suitability and efficiency. A newly formed Landing and Take-Off (LTO) noise subgroup for supersonic aircraft has been tasked to define a second SARP for the terminal environment (i.e., subsonic operations). An overview of ICAO’s hierarchy supporting SARP development, the WG1/SSTG technical program, and a current status update is presented.
MONDAY MORNING, 26 JUNE 2017
ROOM 210, 9:20 A.M. TO 12:20 P.M.
Session 2aPAa
Physical Acoustics: Infrasound I
Roger M. Waxler, Cochair
NCPA, University of Mississippi, 1 Coliseum Dr., University, MS 38677
Pieter Smets, Cochair
R&D Department of Seismology and Acoustics, KNMI, PO Box 201, De Bilt 3730 AE, Netherlands
Invited Papers
9:20
2aPAa1. Seismo-acoustic wavefield of strombolian explosions at Yasur volcano, Vanuatu, using a broadband seismo-acoustic
network, infrasound arrays, and infrasonic sensors on tethered balloons. Robin S. Matoza (Dept. of Earth Sci., Univ. of California,
Santa Barbara, Webb Hall MC 9630, Santa Barbara, CA 93106, matoza@geol.ucsb.edu), Arthur Jolly (GNS Sci., Avalon, New
Zealand), David Fee (Univ. of Alaska, Fairbanks, Fairbanks, AK), Richard Johnson (GNS Sci., Avalon, New Zealand), Bernard Chouet,
Phillip Dawson (U.S. Geological Survey, Menlo Park, CA), Geoff Kilgour, Bruce Christenson (GNS Sci., Avalon, New Zealand), Esline
Garaebiti (Vanuatu Meteorol. and Geohazards Dept., Port Vila, Vanuatu), Alex Iezzi (Univ. of Alaska, Fairbanks, Fairbanks, AK),
Allison Austin (Dept. of Earth Sci., Univ. of California, Santa Barbara, Santa Barbara, CA), Ben Kennedy, Rebecca Fitzgerald, and
Nick Key (Univ. of Canterbury, Christchurch, New Zealand)
Seismo-acoustic wavefields at volcanoes contain rich information on shallow magma transport and subaerial eruption processes and
inform our understanding of how volcanoes work. Acoustic wavefields from eruptions are predicted to be directional, but sampling this
wavefield directivity is challenging because infrasound sensors are usually deployed on the ground surface. We attempt to overcome this
observational limitation using a novel deployment of infrasound sensors on tethered balloons in tandem with a suite of dense ground-based seismo-acoustic, geochemical, and eruption imaging instrumentation. We conducted a collaborative multiparametric field experiment at the active Yasur volcano, Tanna Island, Vanuatu, from 26 July to 2 August 2016. Our observations include data from a temporary network of 11 broadband seismometers, 6 single infrasonic microphones, 7 small-aperture 3-element infrasound arrays, 2
infrasound sensor packages on tethered balloons, an FTIR, a FLIR, 2 scanning Flyspecs, and various visual imaging data; scoria and ash
samples were collected for petrological analyses. This unprecedented dataset should provide a unique window into processes operating
in the shallow magma plumbing system and their relation to subaerial eruption dynamics.
9:40
2aPAa2. Acoustical localization, reconstruction, signal, and statistical analysis of storm electrical discharges from a two-month-long database in southern France. François Coulouvrat (Institut Jean Le Rond d’Alembert (UMR 7190), Universite Pierre et Marie
Curie & CNRS, Universite Pierre et Marie Curie, 4 Pl. Jussieu, Paris 75005, France, francois.coulouvrat@upmc.fr), Thomas Farges,
Louis Gallin (CEA, DAM, DIF, Arpajon, France), Arthur Lacroix (CEA, DAM, DIF, Paris, France), and Regis Marchiano (Institut Jean
Le Rond d’Alembert (UMR 7190), Universite Pierre et Marie Curie & CNRS, Paris, France)
Infrasound and low-frequency sounds are discussed as a method to characterize lightning flashes in a way complementary to electromagnetic (EM) observations. Thunder and EM data result mainly from a two-month-long observation campaign in southern France dedicated to monitoring atmospheric electricity as part of the Mediterranean hydrological cycle (HyMeX program). Possibilities and limitations
of following storms by sound or infrasound (in the 1 to 40 Hz frequency range) at various distances are outlined. The influence of distance,
wind, and ambient noise is examined. Several examples of acoustical reconstructions of individual lightning flashes are compared to EM reconstructions by means of a Lightning Mapping Array. Both intra-cloud and cloud-to-ground (CG) discharges are investigated. Special emphasis is placed on the lower part of CGs, as many acoustic signals are localized inside the lightning CG channel. A statistical comparison
between the acoustical and EM approaches is performed, thanks to a significant number of recorded discharges in a single storm. The performance of acoustical reconstruction is detailed as a function of observation range. Detailed signal analysis compared to a theoretical
model shows that the tortuous channel geometry explains at least partly the low-frequency content of our observations of thunder spectra.
10:00
2aPAa3. Infrasound and internal gravity waves generated by atmospheric storms. Sergey Kulichkov, Igor Chunchuzov, Oleg
Popov, Vitaly Perepelkin, and Elena Golikova (Obukhov Inst. of Atmospheric Phys., 3 Pyzhevsky Per., Moscow 119017, Russian
Federation, snk@ifaran.ru)
The recordings of infrasound and internal gravity waves (IGWs) obtained during 2015-2016 at infrasound station I43 IMS and at a network of microbarographs installed by the Obukhov Institute of Atmospheric Physics (OIAP) are presented. The OIAP network of microbarographs is capable of simultaneously detecting infrasound at frequencies below 3 Hz and IGWs with periods ranging from 5
min to 3 hr. It is shown that the low-frequency wave processes generated by atmospheric fronts retain high coherence (0.6-0.9) over
areas with horizontal dimensions of a few tens of km. It is found that after the passage of an atmospheric front, internal wave trains were
observed with amplitudes considerably exceeding those of the IGWs detected before the passage of the front. Discrete
periods of 35 min, 56 min, and 110 min were found in the frequency spectra of the observed wave trains. For these periods, the
coherence between atmospheric pressure variations measured at different points reaches local maxima, and the sum of the phase differences between selected three points tends to zero. The phase speeds of the observed IGWs are in the range of 10-50 m/s. Wave
“precursors” with amplitudes of 10 Pa and periods of 15-20 min were also detected 10 to 15 hr before the passage of an atmospheric front
through the network. Along with IGWs, infrasound associated with an atmospheric front was also detected (August 2016) by I43 in the
frequency range 0.1-0.4 Hz.
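The coherence analysis described here can be sketched in a few lines with synthetic data; the two-station layout, sample rate, noise level, and the choice of the 56-min period are illustrative assumptions, not values or methods taken from the study.

```python
import numpy as np
from scipy.signal import coherence

rng = np.random.default_rng(0)
fs = 2.0                                  # microbarograph sample rate, Hz
t = np.arange(0, 3 * 3600, 1 / fs)        # three hours of pressure data

# Shared IGW-like oscillation (56-min period) plus independent noise
# at two hypothetical stations of a microbarograph network.
igw = np.sin(2 * np.pi * t / (56 * 60))
p1 = igw + 0.5 * rng.standard_normal(t.size)
p2 = igw + 0.5 * rng.standard_normal(t.size)

# Magnitude-squared coherence between the two pressure records;
# a peak near 1/(56 min) marks the common wave train.
f, Cxy = coherence(p1, p2, fs=fs, nperseg=8192)
f_igw = 1.0 / (56 * 60)
print(f"coherence near the 56-min period: "
      f"{Cxy[np.argmin(np.abs(f - f_igw))]:.2f}")
```

With a long enough record, the coherence at the shared period approaches one while incoherent frequencies stay near the estimator's noise floor, which is how discrete IGW periods stand out against independent station noise.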
10:20–10:40 Break
10:40
2aPAa4. Infrasound and low frequency sound emitted from tornados. Carrick L. Talmadge (NCPA, Univ. of MS, 1 Coliseum Dr.,
University, MS 38655, clt@olemiss.edu)
The NCPA, in collaboration with the Hyperion Technology Group, has performed a series of measurements of infrasound and low-frequency
sound generated by tornadic thunderstorms near Oklahoma City, OK, during a large-scale outbreak on May 24, 2011. Ground truth for
tornado tracks as well as meteorological data were available for these storms. Infrasound and low-frequency sound were identified separately for a long-duration EF-5, an EF-4, and an EF-2 tornado. As reported by Frazier et al. [JASA 135, 1742 (2014)], infrasound in two
distinct bands was noted: an infrasound band between approximately 1-10 Hz and a low-frequency audible band located between
roughly 40-200 Hz (center frequency around 80-100 Hz). As part of the NOAA VORTEX-SE initiative, the NCPA will be collecting
additional infrasound data in the northern portion of Alabama, centered on Huntsville and the Sand Mountain region. Current plans are
to install 10 infrasound arrays, with seven elements per array. As part of the same VORTEX-SE initiative, seven additional arrays
will be deployed by the University of Alabama Huntsville. We will report here on the status of infrasound generated by tornadic thunderstorms, as well as discuss the status of modeling efforts to understand the origins of these emissions.
11:00
2aPAa5. Acoustic characterization of a portable infrasound source. Martin Barlett, Thomas G. Muir, Charles M. Slack, and Timothy
M. Hawkins (Appl. Res. Labs., Univ. of Texas at Austin, P.O. Box 8029, Austin, TX 78713, barlett@arlut.utexas.edu)
A trailer-able, pneumatic infrasound source is described that can produce tones with frequencies as low as 0.25 Hz. The device is
based on compressed-gas air flow which is released into the atmosphere from a pair of 500-gallon reservoirs, pressurized to 200 psi
(1377 kPa) and modulated by a pair of rotating ball valves with 2 in. (5 cm) diameter ports. Positive air flow is released twice per revolution, so the device is a siren. In addition to the fundamental frequency, the siren also produces a series of harmonics of the fundamental
tone, enabling simultaneous measurements to be made at multiple frequencies, up to 20 Hz. This instrument was developed to support
in-situ calibrations of infrasound sensors and arrays as well as to provide a controllable source for other infrasound studies, such as signal insertion loss measurements of wind noise suppression structures and characterization of infrasound array azimuthal directivity. The
physical attributes of the source are described and results of acoustic measurements of sound pressure levels and azimuthal directivity
are presented. The results are also compared to estimates made using improved versions of a previously presented aero-acoustic model
[“Pneumatic Infrasound Source: Theory and Experiment,” POMA 19, 045030 (2013), and papers 4pPA3 and 4pPA4, J. Acoust. Soc. Am. 134 (2013)]. [Work supported by the U.S. Army Space and Missile Defense Command/Army Forces Strategic Command (USASMDC/
134 [2013]). [Work supported by the U.S. Army Space and Missile Defense Command/Army Forces Strategic Command (USASMDC/
ARSTRAT).]
11:20
2aPAa6. Reciprocity calibration for infrasound sensors. Thomas B. Gabrielson (Penn State Univ., PO Box 30, State College, PA
16804, tbg3@psu.edu)
Frequency response is a key parameter in understanding the impact of a transducer on a measurement. A single volts-per-pascal
value is often cited as an acoustic transducer’s response; however, the magnitude and phase of this ratio over the entire frequency range
of interest is required to understand the effects of that transducer on input waveforms. While this ratio can be determined by comparison
to a reference transducer, any reference must itself be calibrated in some fashion. The National Center for Physical Acoustics has built
two calibration chambers designed for evaluation of infrasound microbarometers. The large interior volume (about 1.5 cubic meters)
allows simultaneous testing of several microbarometers and reference transducers. These chambers are also equipped with two drivers—
10-in. subwoofers—so that two-tone linearity testing can be done. The incorporation of two drivers opens the possibility for implementing reciprocity calibration, a well-established primary calibration methodology. This paper describes development, evaluation, and
uncertainties of a reciprocity-based calibration procedure designed expressly for measuring the complex frequency response of infrasound sensors in the 0.005 to 10 Hz frequency range.
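As a rough illustration of the comparison-calibration baseline this abstract contrasts with reciprocity, the sketch below recovers a device-under-test sensitivity from simultaneous chamber recordings against a reference of known sensitivity, via a cross-spectral transfer-function estimate. All sensitivities, noise levels, and signals are invented for the example; this is not the paper's reciprocity procedure.

```python
import numpy as np
from scipy.signal import csd, welch

rng = np.random.default_rng(1)
fs = 100.0
t = np.arange(0, 600, 1 / fs)

# Synthetic chamber pressure: broadband random drive (pascals)
p = rng.standard_normal(t.size)

M_ref = 0.02                    # known reference sensitivity, V/Pa (assumed)
M_dut_true = 0.05               # sensitivity we pretend not to know, V/Pa

# Both sensors see the same pressure, plus small independent noise
v_ref = M_ref * p + 1e-4 * rng.standard_normal(t.size)
v_dut = M_dut_true * p + 1e-4 * rng.standard_normal(t.size)

# Comparison calibration: complex response of DUT relative to the
# reference, H(f) = S_ref,dut / S_ref,ref, scaled by the known value.
f, S_rd = csd(v_ref, v_dut, fs=fs, nperseg=4096)
_, S_rr = welch(v_ref, fs=fs, nperseg=4096)
M_dut = M_ref * S_rd / S_rr

print(f"recovered DUT sensitivity ~ {np.abs(M_dut[10:100]).mean():.4f} V/Pa")
```

The point of reciprocity, as the abstract notes, is to remove the dependence on a pre-calibrated reference (`M_ref` here) by deriving absolute sensitivity from pairwise transfer measurements among the transducers themselves.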
11:40
2aPAa7. Wind noise reduction using a compact infrasound sensor array and a Kalman filter based on the Matérn Covariance
Function. William G. Frazier (Hyperion Technol. Group, Inc., 3248 West Jackson St., Tupelo, MS 38804, gfrazier@hyperiontg.com)
A method for real-time estimation of stationary infrasound signals such as microbaroms in wind noise at low signal-to-noise ratios
using a compact infrasound sensor array is presented. A compact array is defined as a sensor array that has an aperture that is much
smaller than the shortest infrasound wavelengths of interest and is unsuitable for estimation of direction-of-arrival. In this application,
the spacing between sensors results in the measured wind noise being highly correlated, and therefore, simple averaging cannot be used
to obtain a good estimate of the infrasound signal. However, by adequately modeling the spatiotemporal wind noise process, array gain
can be realized. The method is based on using a Kalman Filter that is designed with the assumption that the measured wind noise can be
adequately modeled as a dynamic Gaussian random field with a Matérn covariance function (demonstrated previously at the ASA Meeting in Salt Lake City, Utah, May 2016). The presentation describes how to design the Kalman Filter in order to estimate the infrasound
signal of interest, demonstrates the filter performance using synthetic and measured data from a compact array, and describes how to
extend the method to support changing wind conditions and non-stationary infrasound signals.
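The central point above, that a modeled spatial covariance yields array gain where plain averaging fails, can be sketched with a Matérn covariance and best linear unbiased (generalized least-squares) weights. The Kalman filter's temporal dynamics are omitted here, and the sensor spacing, correlation length, and the ν = 3/2 closed form are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

def matern32(d, sigma2=1.0, ell=10.0):
    """Matérn covariance, smoothness nu = 3/2 (closed form)."""
    a = np.sqrt(3.0) * d / ell
    return sigma2 * (1.0 + a) * np.exp(-a)

# Hypothetical compact array: 5 sensors a few meters apart, so the
# wind noise is strongly correlated between sensors.
x = np.array([0.0, 2.0, 4.0, 6.0, 8.0])
D = np.abs(x[:, None] - x[None, :])
C = matern32(D, sigma2=1.0, ell=10.0)          # spatial noise covariance

# Best linear unbiased estimate of a common signal s from y = s*1 + n:
# weights proportional to C^{-1} 1; estimator variance is 1/(1' C^{-1} 1).
ones = np.ones(len(x))
Cinv1 = np.linalg.solve(C, ones)
var_blue = 1.0 / (ones @ Cinv1)
var_avg = ones @ C @ ones / len(x) ** 2        # plain averaging

print(f"noise variance: averaging {var_avg:.3f}, modeled weights {var_blue:.3f}")
```

Because the noise is highly correlated across the small aperture, simple averaging barely reduces its variance, whereas covariance-aware weights achieve the best reduction any linear unbiased combination can; the Kalman filter in the abstract extends this idea to the joint space-time process.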
12:00
2aPAa8. Association of impulsive infrasonic events at medium ranges. W. C. Kirkpatrick Alberts and Stephen M. Tenney (U.S.
Army Res. Lab., 2800 Powder Mill Rd., Adelphi, MD 20783, william.c.alberts4.civ@mail.mil)
Multiple, widely spaced, infrasonic arrays are routinely used to detect and localize impulsive events of unknown origin at medium
ranges (<100 km). Event data are subsequently processed to yield line of bearing (LOB) information and localization is accomplished
manually. This method of analysis could significantly benefit from automatic association and localization. Because infrasound arrays are
often separated by many tens of kilometers and signals reaching the arrays can be significantly altered along the propagation path, the
task of associating signals is difficult and time consuming. Further, confidence in an event association is difficult to assign to a signal
due to arrival timing and local interferers. By using beamforming methods and coherence between signals, it is possible to automatically
associate a given recorded event. At each array, a delay and sum beamformer is used to calculate the LOB to an unknown source. The
delayed and summed beam at each array is then used to calculate the pairwise coherence between all beams. Impulsive events due to
sources recorded by widely spaced infrasound arrays often exhibit high coherence at many of the frequencies in the signal. Examples of
successful associations between widely spaced arrays will be discussed.
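The delay-and-sum step described above can be sketched as follows; the array geometry, sample rate, sound speed, and synthetic pulse are illustrative assumptions, not the authors' configuration. The subsequent association step would then compute pairwise coherence between the steered beams of two such arrays (e.g., with scipy.signal.coherence).

```python
import numpy as np

rng = np.random.default_rng(2)
fs = 50.0                                  # infrasound sample rate, Hz
t = np.arange(0, 120, 1 / fs)
c = 343.0                                  # nominal sound speed, m/s

def impulse(t, t0):
    """Synthetic impulsive event: decaying 4-Hz pulse arriving at t0."""
    s = np.zeros_like(t)
    m = t >= t0
    s[m] = np.exp(-(t[m] - t0) / 0.5) * np.sin(2 * np.pi * 4 * (t[m] - t0))
    return s

def delay_and_sum(x, sensors, az, fs, c):
    """Align channels for a plane wave from azimuth az (rad) and sum."""
    k = np.array([np.cos(az), np.sin(az)])
    out = np.zeros(x.shape[1])
    for pos, ch in zip(sensors, x):
        lag = int(round(fs * (pos @ k) / c))    # per-sensor sample delay
        out += np.roll(ch, -lag)
    return out / len(sensors)

# One hypothetical 4-element array; event arrives from 60 degrees.
sensors = np.array([[0, 0], [30, 0], [0, 30], [30, 30]], float)
az_true = np.deg2rad(60.0)
k = np.array([np.cos(az_true), np.sin(az_true)])
x = np.array([impulse(t, 40.0 + (p @ k) / c) for p in sensors])
x += 0.2 * rng.standard_normal(x.shape)

# Scan azimuths; the steered beam with maximum power gives the LOB.
azs = np.deg2rad(np.arange(0, 360, 2))
power = [np.sum(delay_and_sum(x, sensors, a, fs, c) ** 2) for a in azs]
print(f"estimated LOB: {np.rad2deg(azs[int(np.argmax(power))]):.0f} deg")
```

Beamforming before computing coherence is what makes the association robust: summing along the LOB suppresses local interferers at each array, so the surviving coherent energy across arrays is dominated by the common event.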
MONDAY MORNING, 26 JUNE 2017
ROOM 300, 10:20 A.M. TO 12:20 P.M.
Session 2aPAb
Physical Acoustics: General Topics in Physical Acoustics I
Alexey S. Titovich, Chair
Naval Surface Warfare Center, Carderock Division, 9500 MacArthur Blvd., West Bethesda, MD 20817
10:20
11:00
2aPAb1. Active diffracting gratings for vortex beam generation in air.
Ruben D. Muelas H., Jhon F. Pazos-Ospina, and Joao L. Ealo (School of
Mech. Eng., Universidad del Valle, Ciudad Universitaria Melendez. Bldg.
351., Cali 760032, Colombia, joao.ealo@correounivalle.edu.co)
2aPAb3. A three-dimensional simulation of vortex formation at the
open end of an acoustic waveguide. Carlos Malaga (School of Sci.,
Universidad Nacional Autonoma de Mexico, Ciudad Universitaria,AV.
UNIVERSIDAD N 3000, Facultad de Ciencias, Mexico City, Mexico City
04510, Mexico, cmi.ciencias@ciencias.unam.mx), Leon Martinez
(CCADET, Universidad Nacional Autonoma de Mexico, Mexico City,
Mexico), Roberto Zenit (IIM, Universidad Nacional Autonoma de Mexico,
Mexico City, Mexico), and Pablo L. Rendon (CCADET, Universidad
Nacional Autonoma de Mexico, Mexico City, Mexico)
Vortex beams VB have gained great attention because of their interesting features, e.g., autorreconstruction ability, capacity to transport and transfer angular momentum, among others. Different applications have been
proposed, e.g., particle manipulation and rotation control of particles/
objects. Recently, VB generated in water using passive structures have been
reported. In particular, multi-arm spiral slits are used attached to a radiating
source. In this work, we propose a new alternative to generate VB in air.
Specifically, we use active diffracting gratings easily fabricated by gluing a
ferroelectret film on a lower electrode, structured on a PCB, that perfectly
resembles the desired geometry. The active material is not cut to size and
shape. See [1]. Consequently, a transducer with the geometry of the
intended grating radiates the acoustic energy. Broadband spiral active gratings are employed to create VB in air at frequencies between 100 and 200
kHz. Numerical simulations are compared with experimental results. This
new class of transducers paves the way to the creation of complex radiation
fields in a rapid, cheap, and efficient manner. [1] J. Ealo, J. Camacho, and
C. Fritsch, “Airborne ultrasonic phased arrays using ferroelectrets: a new
fabrication approach,” IEEE Trans. Ultrason. Ferroelectr. Freq. Control
56(4), 848-858 (2009).
For high enough levels of acoustic pressure inside a tube, a nonlinear
mechanism is responsible for the formation of annular vortices at the open
end of the tube, which results in energy loss. Higher sound pressure levels
in the tube lead in turn to larger values of the acoustic velocity at the exit,
and thus to higher Reynolds numbers. It has been observed [Buick et al.,
2011] that two regimes are possible depending on whether the acoustic velocity is low, in which case vorticity appears in the immediate vicinity of
the tube, or high, in which case vortices are formed at the open end of the
tube and are advected outwards. We use a Lattice Boltzmann Method
(LBM) to simulate the velocity field at the exit of the tube in 3D, for both
cases. We plan to compare these numerical results with experimental results
obtained through particle image velocimetry (PIV). The effect of varying
both the geometry of the tube and the shape of the termination on the magnitude of the nonlinear losses at the exit is also examined.
10:40
2aPAb2. Experimental study of the angular momentum transfer capacity of grating vortex beams. Ruben D. Muelas H., Joao L. Ealo, and Jhon F. Pazos-Ospina (School of Mech. Eng., Universidad del Valle, Bldg. 351, Cali, Colombia, ruben.muelas@correounivalle.edu.co)
Acoustic vortices (AV) are special beams with a screw-type dislocation, a pressure null along their principal axis, and the ability to carry angular momentum. Mono-element sources, metamaterials, and phased arrays can be used to generate an AV. Among these, phased arrays are employed in most of the works that report particle manipulation with AV because of the ease with which they shape beams. However, special care must be taken at the design stage of the array because additional zones of constructive interference may appear, i.e., the so-called grating lobes surrounding the main beam. When generating vortex beams using phased arrays, we have experimentally verified that these grating lobes have characteristics similar to those of the main beam, so we call them grating vortices (GV). However, GV exhibit a certain level of distortion in both phase and magnitude. In this work, we present an experimental study of the angular momentum transfer capability of GV of different topological charge. We estimate the mechanical torque induced and the angular momentum transferred to disk-like samples of different sizes. The vortices are generated using a 30-element array operating at 40 kHz in air. A discussion of the potential use of GV in the manipulation of particles/objects is given.
11:20
2aPAb4. Three-dimensional model for acoustic field created by a piezoelectric plate in a resonator. Goutam Ghoshal, Benjamin P. Ross-Johnsrud, Kedar C. Chitale (FloDesign Sonics, 380 Main St., Wilbraham, MA 01095, g.ghoshal@fdsonics.com), Yurii A. Ilinskii, Evgenia A. Zabolotskaya (Appl. Res. Labs, Univ. of Texas at Austin, Austin, TX), and Bart Lipkens (FloDesign Sonics, Springfield, MA)
A three-dimensional model is developed to describe the acoustic field excited by a piezoelectric plate of finite size in a fluid-filled resonator. First, the eigenfunctions (modes) of a bare plate are derived using general piezoelectric equations that account for the elastic and electric properties of the plate. Then, the piezoelectric plate is placed in a fluid medium such that only one side of the plate is in the fluid, and the acoustic field generated by the plate in the fluid is estimated. Finally, a reflector is placed parallel to the piezoelectric plate and the acoustic field in the resonator is evaluated. The solution for a piezoelectric plate of finite size is obtained using the Singular Value Decomposition (SVD) method. Equations for the acoustic and electric variables are presented. The radiation force on spherical particles in the standing-wave field is derived and discussed. Numerical results are presented to show the three-dimensional modal displacement and the electrical characteristics of the plate at various frequencies and aspect ratios. Finally, the analytical results are compared with two- and three-dimensional finite element results obtained with the COMSOL Multiphysics commercial software.
3569
J. Acoust. Soc. Am., Vol. 141, No. 5, Pt. 2, May 2017
Acoustics ’17 Boston
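For the radiation force on small spherical particles in a standing wave (2aPAb4 above), the standard Gor'kov result for a one-dimensional standing plane wave is commonly used; the sketch below follows that textbook form with illustrative polystyrene-in-water parameters, and is not necessarily the exact expression derived in the talk:

```python
import numpy as np

# Gor'kov radiation force on a small (a << wavelength) sphere in a 1D standing wave:
#   F(z) = 4*pi * Phi * a^3 * k * E_ac * sin(2*k*z)
#   Phi  = (1 - kappa_t)/3 + (rho_t - 1)/(2*rho_t + 1)    (acoustic contrast factor)
# with kappa_t and rho_t the particle/fluid compressibility and density ratios,
# and E_ac = p_a^2 / (4*rho0*c0^2) the acoustic energy density.
# All material and drive values below are illustrative, not from the abstract.

rho0, c0 = 998.0, 1481.0       # water density (kg/m^3) and sound speed (m/s)
rho_p, c_p = 1050.0, 2350.0    # polystyrene particle (illustrative)
a = 5e-6                       # particle radius, m
f = 2e6                        # drive frequency, Hz
p_a = 1e5                      # pressure amplitude, Pa

k = 2 * np.pi * f / c0
kappa_t = (rho0 * c0**2) / (rho_p * c_p**2)   # compressibility ratio
rho_t = rho_p / rho0                          # density ratio
Phi = (1 - kappa_t) / 3 + (rho_t - 1) / (2 * rho_t + 1)
E_ac = p_a**2 / (4 * rho0 * c0**2)

z = np.linspace(0.0, np.pi / k, 201)          # one half wavelength
F = 4 * np.pi * Phi * a**3 * k * E_ac * np.sin(2 * k * z)
print(f"contrast factor Phi = {Phi:.3f}, peak force = {F.max():.3e} N")
```

A positive contrast factor, as here, drives the particles toward the pressure nodes of the standing wave.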
Contributed Papers
11:40
2aPAb5. Directed self-assembly of three-dimensional user-specified patterns of particles using ultrasound. Milo Prisbrey, John Greenhall (Mech. Eng., Univ. of Utah, 201 Presidents Circle, Salt Lake City, UT 84119, mprisim@gmail.com), Fernando Guevara Vasquez (Mathematics, Univ. of Utah, Salt Lake City, UT), and Bart Raeymaekers (Mech. Eng., Univ. of Utah, Salt Lake City, UT)
Particles dispersed in a fluid medium are organized into three-dimensional (3D) user-specified patterns using ultrasound directed self-assembly. The technique employs standing ultrasound wave fields created by ultrasound transducers that line the boundary of a fluid reservoir. The acoustic radiation force associated with the standing ultrasound wave field drives the particles into organized patterns, assuming that the particles are much smaller than the wavelength and do not interact with each other. A direct solution method is theoretically derived to compute the ultrasound transducer operating parameters required to assemble a user-specified pattern of particles in any 3D simple, closed reservoir geometry with any arrangement of ultrasound transducers. This method relates the ultrasound wave field and its associated radiation force to the ultrasound transducer operating parameters by solving a constrained optimization problem that reduces to an eigendecomposition. Experimental validation of the method is accomplished by assembling 3D patterns of carbon nanoparticles in a cubic water reservoir lined with four ultrasound transducers. Additionally, the versatility of the method is demonstrated by simulating ultrasound directed self-assembly of complex 3D patterns of particles in cubic and noncubic reservoir geometries lined with many ultrasound transducers. This method enables employing ultrasound directed self-assembly in a variety of engineering applications, including biomedical and materials fabrication processes.
12:00
2aPAb6. Waves in non-conducting continuum with frozen-in magnetization. Victor Sokolov (Dept. of Mathematics, Moscow Technolog. Univ., Av. Vernadskogo 78, Moscow 119454, Russian Federation, vvs195326@gmail.com)
The report reviews a new approach to ferrohydrodynamics and magnetoelasticity based on the concept of frozen-in magnetization. Until now, two frozen-in vector fields were well known. The first is the vorticity field, introduced by Helmholtz; the second is the magnetic field in a perfectly conducting fluid, introduced by Alfvén. The acoustic approximation of the ferrohydrodynamic equations with frozen-in magnetization allowed us to describe the experimental results on the anisotropy of the ultrasonic velocity in magnetized magnetic nanofluids and to predict new waves: an Alfvén-type hydrodynamic wave and a slow magnetosonic one. The Alfvén-type wave is accompanied by oscillations of the magnetization. It is shown that, in a non-conducting solid with frozen-in magnetization, three types of waves can propagate: a longitudinal wave, a pure shear wave, and a new wave that has a mixed Alfvén-type and shear character. The theoretical results are found to agree well with the experimental data on the dependence of the velocity of longitudinal and transverse waves in polycrystalline nickel on the magnetizing field strength. The set of dynamical equations that we have derived can be used to tackle many problems in ferrohydrodynamics and magnetoelasticity.
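The reduction of the transducer-parameter optimization to an eigendecomposition, as described in the directed self-assembly abstract (2aPAb5) above, can be illustrated schematically: maximizing a quadratic objective under a fixed-power constraint is solved by the principal eigenvector. The matrix below is a random Hermitian stand-in, not a real acoustic operator from the paper:

```python
import numpy as np

# Constrained optimization reduced to eigendecomposition (schematic):
#   maximize  v^H Q v   subject to  ||v|| = 1,
# where v holds complex transducer amplitudes/phases and Q would be a Hermitian
# matrix derived from the target particle pattern (random stand-in here).

rng = np.random.default_rng(0)
n = 8                                   # number of transducers (illustrative)
M = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
Q = M @ M.conj().T                      # Hermitian positive semidefinite stand-in

# The principal eigenvector maximizes the Rayleigh quotient v^H Q v / v^H v.
w, V = np.linalg.eigh(Q)                # eigenvalues in ascending order
v_opt = V[:, -1]                        # unit-norm optimizer

rayleigh = np.real(v_opt.conj() @ Q @ v_opt)
print(rayleigh, w[-1])                  # the two agree: the optimum equals lambda_max
```

This is the standard Rayleigh-quotient argument; the achieved objective value is exactly the largest eigenvalue of Q.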
MONDAY MORNING, 26 JUNE 2017
ROOM 311, 9:15 A.M. TO 11:20 A.M.
Session 2aPPa
Psychological and Physiological Acoustics and Speech Communication: Acoustics Outreach to Budding
Scientists: Planting Seeds for Future Clinical and Physiological Collaborations
Anna Diedesch, Cochair
Dept. of Otolaryngology/Head & Neck Surgery, Oregon Health & Science University, 3710 SW US Veterans Hospital Rd.,
Portland, OR 97239
Adrian KC Lee, Cochair
University of Washington, Box 357988, Seattle, WA 98195
Chair’s Introduction—9:15
Invited Papers
9:20
2aPPa1. Microsecond interaural time differences of acoustic transients are decoded by inhibitory-excitatory interactions in
neurons of the lateral superior olive. Tom P. Franken (KU Leuven; The Salk Inst. for Biological Studies, SNL-R, 10010 N Torrey
Pines Rd., La Jolla, CA 92037, tfranken@salk.edu), Philip H. Smith (Univ. of Wisconsin-Madison, Madison, WI), and Philip X. Joris
(KU Leuven, Leuven, Belgium)
The lateral superior olive (LSO) in the auditory brainstem generates sensitivity to interaural level differences (ILD), an important
cue for sound localization, by comparing excitatory (E) input from the ipsilateral ear with inhibitory (I) input from the contralateral ear.
Large axosomatic synapses (e.g., calyx of Held) point to the importance of precise temporal processing, but that is not easily reconciled
with ILD detection. We propose that the IE interaction allows detection of interaural time differences (ITD) of acoustic transients, to
which humans are exquisitely sensitive. We obtained in vivo whole-cell recordings of LSO and MSO neurons in the gerbil while presenting monaural and binaural clicks. We found that ITD functions for clicks in the LSO are surprisingly steep, in contrast to those of MSO neurons, which are considered the main ITD detectors. Intracellular LSO recordings show EPSPs generated by the ipsilateral click and IPSPs by the contralateral click, with the IPSPs often arriving earlier. Binaural spiking is maximally suppressed when the EPSP coincides with the falling slope rather than the peak of the IPSP. We conclude that LSO neurons are more sensitive to ITDs of transients than MSO neurons. This clarifies the importance of timing specializations in the LSO circuit.
9:40
2aPPa2. Quantifying connectivity to auditory cortex: Implications for crossmodal plasticity and hearing restoration. Blake E.
Butler and Stephen G. Lomber (Psych., Univ. of Western ON, Social Sci. Bldg., 11, London, ON N6A5C2, Canada, bbutler9@uwo.ca)
When one sensory modality is lost, compensatory advantages are observed in the remaining senses. There is evidence to suggest
these advantages reflect recruitment of cortical areas that normally process sound. In the cat, crossmodal reorganization of auditory cortex appears to be field specific. While little or no activity is evoked in primary cortical regions by visual and somatosensory stimulation,
higher-level fields confer increased peripheral acuity and improved visual motion detection. To better understand the changes in neural connections that underlie these functional adaptations, we have undertaken a series of detailed anatomical studies aimed at
quantifying and comparing the patterns of connectivity in hearing and deaf animals. A retrograde neuronal tracer was deposited into auditory cortical areas, coronal sections were taken, and neurons showing positive retrograde labeling were counted and assigned to cortical and thalamic areas. Projections within and between sensory modalities were quantified; while some small-scale differences emerge,
patterns of connectivity are overwhelmingly preserved across experimental groups within each cortical field examined. This structural
preservation has implications for our understanding of the mechanisms that underlie crossmodal reorganization; moreover, it suggests
that the connectivity necessary for resumption of auditory function may withstand even lengthy periods of deprivation.
10:00
2aPPa3. Dynamic emergence of categorical perception of voice-onset time in human speech cortex. Neal P. Fox (Dept. of
Neurological Surgery, Univ. of California, San Francisco, Sandler Neurosci. Bldg., UCSF Mission Bay, 675 Nelson Rising Ln., Rm.
510, San Francisco, CA 94143, neal.fox@ucsf.edu), Matthias J. Sjerps (Dept. of Linguist, Univ. of California, Berkeley, Nijmegen,
Netherlands), and Edward F. Chang (Departments of Neurological Surgery and Physiol., Univ. of California, San Francisco, San
Francisco, CA)
A fundamental challenge in speech perception involves the resolution of a many-to-one mapping from a highly variable, continuous
sensory signal onto discrete, perceptually stable categories that bear functional relevance. Recent work has identified signatures of invariance in early neural responses to speech, but the physiological mechanisms that give rise to these categorical representations remain
unclear. We employed intracranial recordings in human subjects listening to and categorizing speech stimuli to investigate the spatiotemporal cortical dynamics underlying categorical perception. Stimuli comprised a voice-onset time (VOT) continuum from /b/ (0 ms
VOT) to /p/ (50 ms VOT). Results revealed spatially distinct neural populations that respond selectively to tokens from one category (either /b/ or /p/). Within these subpopulations, response amplitude is modulated by stimulus prototypicality for within-category stimuli
(e.g., stronger response to 0 ms vs. 10 ms VOT in /b/-selective electrodes). Over the course of a trial, this initially graded encoding of
VOT rapidly evolves to reflect properties of the ultimate (categorical) behavioral response function. These same dynamics emerged in a
computational neural network model simulating neuronal populations as leaky integrators tuned to detect temporally distributed acoustic
cues at precise lags. Our results provide direct evidence that categorical perception of VOT arises dynamically within discrete, phonetically tuned neural populations.
10:20
2aPPa4. Disrupted auditory nerve activity limits peripheral but not central temporal acuity. Carol Q. Pham and Fan-Gang Zeng
(Ctr. for Hearing Res., Univ. of California Irvine, 110 Med Sci E, Irvine, CA 92697, carol.pham@uci.edu)
Auditory neuropathy affects synaptic encoding or neural conduction of signals in the cochlea or the auditory nerve. Subjects with auditory neuropathy recognize speech in noise poorly, which correlates with poor temporal processing. The integrity of temporal processing in the auditory system can be assessed by detecting just-noticeable gaps between sounds. Disorder in the auditory periphery appears to alter the precise timing or latency of the synchronous neural discharges important for temporal coding. However, the relative contribution of auditory nerve activity to central temporal processing is unknown. Auditory neuropathy produced significantly worse-than-normal gap detection within a frequency but normal gap detection between different frequencies. The lack of correlation between same- and different-frequency gap detection supports two temporal processes: a peripheral mechanism, dependent on overlapping nerve fibers, mediating same-frequency gaps, and a central mechanism, dependent on cross-correlated activity of non-overlapping fibers, mediating different-frequency gaps. The fast peripheral mechanism enables temporal acuity on the order of milliseconds and is likely limited by neural synchrony, the amount of total nerve activity, or both, whereas the sluggish central mechanism is likely limited by switching time between perceptual channels, on the order of a hundred milliseconds. The results demonstrate that auditory nerve activity limits peripheral but not central temporal acuity.
10:40
2aPPa5. Electro-oculography based horizontal gaze tracking: A perspective for attention-driven hearing aids. Lubos Hladek, W.
Owen Brimijoin (MRC / CSO Inst. of Hearing Res. (Scottish Section), Glasgow Royal Infirmary, 10-16 Alexandra Parade, New Lister
Bldg. 3L, Glasgow Royal Infirmary, Glasgow G31 2ER, United Kingdom, lubos.hladek@nottingham.ac.uk), and Bernd Porr (School of
Eng., Univ. of Glasgow, Glasgow, United Kingdom)
Users of hearing aids with standard directional microphones can have difficulties in complex listening situations, such as multi-talker environments, because they must turn their heads to follow conversations that move rapidly from one talker to the next. Gaze-pointed hearing aid directionality has been suggested as a way to alleviate this problem [Kidd et al., J. Acoust. Soc. Am. 133(3), EL202-EL207 (2013)]. However, to arrive at a practical and usable device, unobtrusive and mobile technology for gaze tracking is needed. Here, we propose and evaluate an algorithm for estimating eye-gaze angle based solely on the single-channel electro-oculogram (EOG), which can be obtained directly from the ear canal using conductive hearing aid molds. In contrast to conventional techniques, our algorithm calculates the absolute eye angle by statistical analysis of the saccades. This results in robust long-term performance in which predicted eye angles significantly correlate with actual eye angles, and it opens up the possibility of an attention-driven beamformer for hearing aids without the need for eye-tracking goggles. [This work was supported by the Medical Research Council [grant number U135097131], the Chief Scientist Office (Scotland), and the Oticon Foundation.]
11:00
2aPPa6. Selecting an acoustic correlate for automated measurement of /r/ production in children. Heather M. Campbell
(Communicative Sci. and Disord., New York Univ., 665 Broadway, 9th Fl., Fl. 6, New York, NY 10012, heather.campbell@nyu.edu),
Daphna Harel (Ctr. for the Promotion of Res. Involving Innovative Statistical Methodology, New York Univ., New York, NY), and Tara
McAllister Byun (Communicative Sci. and Disord., New York Univ., New York, NY)
A current need in the field of speech pathology is the development of reliable and efficient techniques for evaluating changes in speech production over the course of treatment. The industry standard for scoring speech is time-consuming and expensive, as it involves aggregating perceptual ratings across expert listeners. As techniques for automated measurement of speech improve, acoustic measures have the potential to play an expanded role in the clinical management of speech disorders. The current study asks which of several acoustic measures of children’s productions of English /r/ corresponds most closely with ratings given by trained listeners. The study fits a series of ordinal mixed-effects regression models to a large sample of children’s /r/ productions that had previously been rated by three trained listeners (speech-language pathologists). Controlling for age, sex, and allophonic contextual differences, the acoustic measure that accounted for the most variance in the speech ratings was F3-F2 distance, normalized relative to a sample of age- and gender-matched speakers. This acoustic measure is therefore recommended for use in future automated scoring of children’s productions of rhotic targets.
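The normalization described above (F3-F2 distance relative to age- and gender-matched speakers) amounts to a z-score. A minimal sketch follows; the formant values and reference statistics are made up for illustration and are not data from the study:

```python
import statistics

# Normalize a child's F3-F2 distance (Hz) against an age/gender-matched
# reference sample, as a z-score. A smaller F3-F2 distance generally indicates
# a more /r/-like (rhotic) production. All numbers below are hypothetical.

reference_f3_f2 = [620.0, 700.0, 580.0, 660.0, 710.0, 640.0, 690.0, 600.0]
mu = statistics.mean(reference_f3_f2)      # reference mean F3-F2, Hz
sigma = statistics.stdev(reference_f3_f2)  # reference standard deviation, Hz

def normalized_f3_f2(f3_hz, f2_hz):
    """z-scored F3-F2 distance relative to the matched reference sample."""
    return ((f3_hz - f2_hz) - mu) / sigma

# A production with a large F3-F2 gap (a derhotacized /r/) scores far above 0:
print(normalized_f3_f2(f3_hz=2900.0, f2_hz=1700.0))
```

A production whose F3-F2 distance matches the reference mean scores 0; large positive z-scores flag productions listeners would likely rate as incorrect.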
MONDAY MORNING, 26 JUNE 2017
ROOM 311, 11:40 A.M. TO 12:20 P.M.
Session 2aPPb
Psychological and Physiological Acoustics: Models and Reproducible Research I
Alan Kan, Cochair
University of Wisconsin-Madison, 1500 Highland Ave., Madison, WI 53705
Piotr Majdak, Cochair
Acoustics Research Institute, Austrian Academy of Sciences, Wohllebengasse 12-14, Wien 1040, Austria
Invited Papers
11:40
2aPPb1. The challenges in developing useful models. Barbara Shinn-Cunningham and Le Wang (Boston Univ., 677 Beacon St.,
Boston, MA 02215-3201, shinn@bu.edu)
Models can be extremely helpful in understanding the mechanisms governing hearing, providing insights into how information is processed and combined to enable perception and communication. Yet modeling is complicated and is more of an art than a science. Ideally, a model should include only those key components that are critical for describing known phenomena. Yet models are typically under-constrained, so modelers are constantly forced to make educated guesses based on limited data. Given this, developing a useful model requires good intuition and an eye for what is essential and what is superfluous: one must navigate a balance between realism and complexity on the one hand and tractability and interpretability on the other. Verification is often only indirect, by using the resulting model to generate testable predictions and comparing these predictions to new experimental data. Moreover, when a model fails (such as when predictions do not match empirical outcomes), the path to “fixing” the model is not always straightforward or clear. Some of these challenges will be illustrated by our own recent efforts to model envelope following responses in the brainstem.
12:00
2aPPb2. On the use of hypothesis-driven reduced models in auditory neuroscience. Dan F. Goodman (Elec. Eng., Imperial College London, London SW7 2AZ, United Kingdom, d.goodman@imperial.ac.uk)
There are a number of detailed models of auditory neurons that are able to reproduce a wide range of phenomena. However, using these models to test hypotheses can be challenging, as they have many parameters and complex interacting subsystems. This makes it difficult to investigate the function of a mechanism by varying just one parameter in isolation, or to assess the robustness of a model by systematically varying many parameters. In some cases, by limiting the scope of a model to testing a specific hypothesis using a particular set of stimuli, it is possible to create a reduced mathematical model with relatively few, independent parameters. This has considerable advantages with respect to the problems above. In particular, if a certain behavior is robust and does not depend on finely tuned parameters, then different implementations are more likely to produce the same results, a key property for reproducible research. In addition, the code for these models is typically simpler and therefore more readable, and can often run faster, enabling us to carry out systematic parameter exploration. I will illustrate these points with a reduced model of chopper cells in the ventral cochlear nucleus.
MONDAY MORNING, 26 JUNE 2017
ROOM 201, 9:15 A.M. TO 12:20 P.M.
Session 2aSAa
Structural Acoustics and Vibration, Physical Acoustics, and Engineering Acoustics:
Acoustic Metamaterials I
Christina J. Naify, Chair
Acoustics, Naval Research Lab, 4555 Overlook Ave. SW, Washington, DC 20375
Chair’s Introduction—9:15
Invited Papers
9:20
2aSAa1. Acoustic metasurfaces with rapid change profiles. Chengzhi Shi, Marc Dubois, Yuan Wang, and Xiang Zhang (Dept. of Mech. Eng., Univ. of California, Berkeley, 3112 Etcheverry Hall, Berkeley, CA 94720, chengzhi.shi@berkeley.edu)
Acoustic metasurfaces with subwavelength unit cells that modulate the phase of sound waves have been developed to realize beam forming, steering, focusing, and, recently, carpet cloaking. However, these metasurfaces have been used for small-phase-gradient applications, and the ability to cloak or mimic reflection surfaces with rapidly changing profiles has remained unexplored. Here, we analytically and experimentally investigate the effect of a rapidly changing in-plane phase gradient on the design of acoustic metasurfaces for carpet cloaking and diffusing applications. Helmholtz resonators with different neck radii are used to form the ultrathin metasurface in our study. Varying the neck radius of a Helmholtz resonator yields a different reflection phase of the acoustic wave. In both the analytical and experimental results, the large in-plane phase gradient does not affect the carpet cloak design, which depends solely on the phase correction of the local profile height. On the contrary, the impedance mismatch resulting from the rapidly changing profile yields additional scattering in the reflection from such profiles. Recreating the reflection pattern of a rapidly changing profile from a flat surface using a metasurface therefore requires accounting for this multiple scattering, which calls for additional amplitude modulation by the metasurface.
9:40
2aSAa2. Metamaterial-based manipulation of orbital angular momentum for sound. Bin Liang, Xue Jiang (Dept. of Phys., Inst. of Acoust., Nanjing Univ., P. R. China, 22 Hankou Rd., Nanjing, Jiangsu 210093, China, liangbin@nju.edu.cn), Jiajun Zhao (Dept. of Phys. and Ctr. for Nonlinear Dynam., Univ. of Texas at Austin, Austin, TX), Jianchun Cheng (Dept. of Phys., Inst. of Acoust., Nanjing Univ., P. R. China, Nanjing, China), Likun Zhang (Dept. of Phys. and Ctr. for Nonlinear Dynam., Univ. of Texas at Austin, Austin, TX), and Chengwei Qiu (Dept. of Elec. and Comput. Eng., National Univ. of Singapore, Singapore, Singapore)
Existing wave-steering devices generally control acoustic beams carrying only linear momentum. Acoustic vortices imprinted with orbital angular momentum (OAM), however, open a new degree of freedom for wavefront control, with wide applications such as the design of acoustic “screwdrivers” capable of generating a torque to rotate objects contactlessly. To overcome the drawbacks of traditional methods for producing acoustic OAM, we have designed and fabricated an acoustic vortex emitter with multi-arm coiling slits that exploits the diffraction effect to generate a vortex beam with broadband functionality and stable topological charge. Based on this, we further propose a new mechanism for producing acoustic OAM by converting acoustic resonances into OAM. As an implementation, we have designed and fabricated a thin planar device with high efficiency to verify our scheme. Compared with existing ways of producing OAM by phased spiral sources, which need sophisticated electronic control, and by physically spiral sources, which need screw profiles and may be bulky, our resonance-based OAM production offers high efficiency, a planar profile, compact size, and no spiral structure, and can be freely tuned to produce different orders of OAM. I will also discuss some potential applications of our proposed scheme for OAM manipulation by metamaterials.
Contributed Papers
10:00
2aSAa3. Demonstration of a broadband aqueous acoustic metasurface.
Matthew D. Guild, Charles Rohde, Theodore P. Martin, and Gregory Orris
(US Naval Res. Lab., 4555 Overlook Ave. SW, Washington, DC 20375,
mdguild@utexas.edu)
Acoustic metamaterials have been utilized in recent years to demonstrate
extreme acoustic properties, such as those with negative or near-zero
dynamic values. While effective, the use of acoustic metamaterials can lead
to voluminous structures that may not be practical for some applications.
Alternatively, ultrathin structures known as acoustic metasurfaces offer the same capability to achieve extreme properties as acoustic metamaterials, with the added benefit of negligibly small (i.e., subwavelength) thickness. In this work, we will discuss an aqueous acoustic metasurface that utilizes subwavelength structures designed to act acoustically in parallel, allowing a thin, modular structure to be realized while achieving
a broad range of effective surface properties. A theoretical formulation for
the design of the flexural elements is presented, accounting for the elastic
motion of the elements subject to fluid loading due to the water. Based on
this design, an aqueous acoustic metasurface was constructed from a brass
plate, which was machined to achieve the prescribed flexural elements on the
surface, and experimentally tested in water. The results of this analysis and
testing will be discussed. [Work supported by the Office of Naval Research.]
10:20
2aSAa4. Anomalous refraction and asymmetric transmission of SV-waves through elastic metasurfaces. Xiaoshi Su and Andrew Norris
(Mech. and Aerosp. Eng., Rutgers Univ., 98 Brett Rd., Piscataway, NJ
08854, xiaoshi.su@rutgers.edu)
Recent advances in acoustic metasurface design make it possible to manipulate sound waves in an almost arbitrary way. Here, we present several elastic metasurfaces comprising an array of subwavelength plates for controlling SV-waves in solids. The underlying physics is the coupling between the SV-wave in the elastic body and the flexural wave in the plates, and the coupling between the P-wave in the elastic body and the longitudinal wave in the plates. By varying the thicknesses of the plates, a wide range of phase delays for flexural waves can be obtained while keeping the phase delay for longitudinal waves constant. Anomalous refraction of SV-waves is achieved by selecting the thickness of each plate to engineer the phase change according to the generalized Snell’s law. Another feature of this metasurface is that it redirects only SV-waves, which enables it to be used to split SV- and P-wavefronts into different directions. In addition, the metasurface can be paired with a uniform metasurface to break spatial symmetry and achieve asymmetric transmission for normally incident SV-waves. Other potential applications, such as focusing and negative refraction, will also be discussed. [Work supported through ONR MURI.]
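The generalized Snell’s law invoked above relates the refracted angle to the phase gradient imposed along the interface. A minimal numeric sketch, with a hypothetical wave speed, frequency, and element pitch rather than the plate parameters of the abstract:

```python
import numpy as np

# Generalized Snell's law at a phase-gradient metasurface (same medium on
# both sides):  k*sin(theta_t) - k*sin(theta_i) = d(phi)/dx,
# where phi(x) is the phase profile imposed along the interface.

c = 3100.0             # assumed shear-wave speed in the substrate, m/s (hypothetical)
f = 100e3              # operating frequency, Hz (hypothetical)
k = 2 * np.pi * f / c  # SV wavenumber

theta_t_target = np.deg2rad(30.0)     # desired refraction angle, normal incidence
dphi_dx = k * np.sin(theta_t_target)  # required constant phase gradient

# Discretize into plate elements of pitch dx: per-element phase delay profile
dx = 2e-3                             # element pitch, m (subwavelength here)
n_elems = 10
phase_profile = (dphi_dx * dx * np.arange(n_elems)) % (2 * np.pi)

# Consistency check: recover the steering angle from the imposed gradient
theta_t = np.arcsin(dphi_dx / k)
print(round(np.rad2deg(theta_t), 3))  # -> 30.0 (degrees)
```

Each plate in the array would then be sized to realize its entry in `phase_profile` for the flexural wave while leaving the longitudinal phase delay unchanged.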
10:40
2aSAa5. Omnidirectional sound shielding with acoustic metacages.
Chen Shen, Yangbo Xie, Steven Cummer (Elec. and Comput. Eng., Duke
Univ., Durham, NC), and Yun Jing (Mech. and Aerosp. Eng., North
Carolina State Univ., 911 Oval Dr., EB III, Campus box 7910, Raleigh, NC
27695, yjing2@ncsu.edu)
Omnidirectional sound barriers are useful for various noise-reduction applications. Conventional sound-insulating structures such as micro-perforated plates or porous materials prevent the exchange of airflow. Here, we propose the design of an acoustic metacage that can shield acoustic waves from all directions while allowing air to pass through freely. The mechanism is that strong parallel momentum along the surface rejects sound regardless of the direction of the incident wave. Structures based on open channels and Helmholtz resonators are designed at an operating frequency of 2.49 kHz with a thickness of less than half a wavelength. A prototype is fabricated using 3D printing and verified experimentally in a waveguide. Simulation and measurement results clearly show that the proposed metacage can shield acoustic waves when sources are placed either inside or outside. An average energy decay of more than 10 dB is achieved within a certain frequency band when a loudspeaker is placed inside the metacage. The metacage can find applications where ventilation is required.
11:00
2aSAa6. Design of broadband acoustic metamaterials for low-frequency
noise insulation. Zibo Liu, Leping Feng, and Romain Rumpler (Dept. of
Aeronautical and Vehicle Eng., KTH Royal Inst. of Technol., Stockholm
SE-100 44, Sweden, zibo@kth.se)
An innovative configuration of an acoustic sandwich structure is proposed in this paper, which uses locally resonant structures to generate stopbands in desired frequency regions and hence increase the sound transmission loss of the panel. The effects of different types of resonators, including the mounting techniques, are investigated, and methods to broaden the effective stopbands are discussed. The acoustic properties of a sandwich panel with non-flat laminates are also studied. Numerical analyses show that good results can be obtained when combining the laminate modification with the locally resonant structure, especially when the stopbands are designed to compensate for the corresponding coincidence effects of the sandwich panel. The analysis is based on finite element models constructed in COMSOL. Bloch wave vectors are derived in the first Brillouin zone using a wave-expansion method, and the dispersion relation of the structure is discussed. Experimental validation is planned, and the results will be shown at the conference.
11:20
2aSAa7. Designing beampatterns with tapered leaky wave antennas.
Christina J. Naify (Jet Propulsion Lab., 4555 Overlook Ave. SW,
Washington, DC 20375, christina.naify@gmail.com), Katherine Woolfe
(Code 7160, National Res. Council Postdoctoral Associateship, Naval Res.
Lab, Washington, DC), Christopher N. Layman (ATS Consulting, Pasadena,
CA), Jeffrey S. Rogers, Matthew D. Guild, and Gregory Orris (Code 7160,
Naval Res. Lab, Washington, DC)
Leaky wave antennas (LWAs) have been shown to be an effective tool for frequency-steerable wave radiation in both the electromagnetic and acoustic regimes. LWAs operate by modifying the impedance of a waveguide such that refraction occurs out of the waveguide at an angle corresponding to Snell’s law. For an LWA with a uniform leakage parameter along the waveguide length, that leakage angle is constant. Using analytical techniques, and by careful geometric design of the waveguide impedance, the leaked beampattern can be tailored. The tapering process for an acoustic LWA is discussed here, and notional examples, including sidelobe reduction, are presented. [Work supported by the Office of Naval Research.]
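The frequency steering that makes LWAs useful follows from matching the guided wavenumber to the trace wavenumber of the radiated wave. The sketch below uses the standard relation sin(theta) = beta/k0 together with a hypothetical fast-wave dispersion with cutoff fc; neither the dispersion nor the numbers are from the abstract:

```python
import numpy as np

# Frequency steering of a (non-tapered) leaky wave antenna: the beam leaves
# the aperture where the guided wavenumber beta matches the free-space trace
# wavenumber, sin(theta) = beta / k0 (standard LWA relation). The dispersion
# below, beta/k0 = sqrt(1 - (fc/f)^2), is a hypothetical waveguide with
# cutoff frequency fc, used purely for illustration.

c0 = 343.0    # sound speed in air, m/s
fc = 2000.0   # assumed waveguide cutoff frequency, Hz (hypothetical)

def leakage_angle_deg(f):
    """Beam angle from broadside versus frequency for the assumed guide."""
    beta_over_k0 = np.sqrt(1.0 - (fc / f) ** 2)  # fast-wave dispersion
    return np.rad2deg(np.arcsin(beta_over_k0))

freqs = np.array([2100.0, 2500.0, 4000.0, 8000.0])
for f, th in zip(freqs, leakage_angle_deg(freqs)):
    print(f"{f:7.0f} Hz -> {th:5.1f} deg")
```

The beam sweeps from near broadside just above cutoff toward endfire at high frequency; tapering the leakage parameter along the aperture, as in the abstract, shapes the sidelobes of this frequency-scanned beam.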
11:40
2aSAa8. Low frequency bandgaps in lightweight metamaterial panels using rotation inertia multiplication. Tommaso Delpero (Mech. Integrity of Energy Systems, Empa, Dübendorf, Switzerland), Gwenael Hannema, Stefan Schoenwald, Armin Zemp (Acoustics/Noise Control, Empa, Dübendorf, Switzerland), Andrea Bergamini (Mech. Integrity of Energy Systems, Empa, Dübendorf, Switzerland), and Bart Van Damme (Acoustics/Noise Control, Empa, Ueberlandstrasse 129, Dübendorf 8600, Switzerland, bart.vandamme@empa.ch)
Of all possible features of structural metamaterials, the formation of
bandgaps is the most studied one due to its direct application for sound and
vibration isolation. While achieving low frequency values for the position
of the first bandgap is, in general terms, not an unsurmountable challenge,
the combination of material properties such as high stiffness, low density,
and reduced size of the unit cell, with low (in absolute terms) frequency
bandgaps, may well require some careful consideration. In previous work,
we designed panels with a 3D network of resonators, clearly improving the
vibration isolation compared to a homogeneous panel with the same weight.
Recently, we have devised a novel implementation of inertia amplification,
based on coupling the energy of longitudinal waves into the rotational oscillation of inertia elements within the unit cell. In this contribution, we present examples of phononic crystals based on this approach, and we discuss
the interaction of acoustic waves with the discussed lattices.
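The bandgap mechanism the abstract builds on can be illustrated with the simplest translational analogue: a 1D mass-in-mass chain, where an internal resonance makes the effective mass frequency dependent and opens a gap. This is a generic sketch only; the authors' design instead couples longitudinal motion into rotational inertia, which this toy model does not capture. All parameter values are arbitrary.

```python
import numpy as np

def in_bandgap(omega, m1=1.0, m2=1.0, k=1.0, k2=1.0):
    """1D mass-in-mass chain: outer masses m1 coupled by springs k, each
    carrying an internal resonator (m2, k2).  The frequency-dependent
    effective mass is
        m_eff = m1 + m2 * w2**2 / (w2**2 - omega**2),  w2 = sqrt(k2 / m2).
    The Bloch dispersion relation cos(q a) = 1 - omega**2 * m_eff / (2 k)
    has no real wavenumber q when |cos(q a)| > 1: that frequency is in a
    bandgap and the wave is evanescent."""
    w2_sq = k2 / m2
    m_eff = m1 + m2 * w2_sq / (w2_sq - omega**2)
    return abs(1.0 - omega**2 * m_eff / (2.0 * k)) > 1.0

# Low frequencies propagate; frequencies just above the internal
# resonance (omega = 1 for these units) fall inside the gap.
print(in_bandgap(0.5), in_bandgap(1.01))
```

The point of inertia amplification (rotational or otherwise) is that the apparent m2 can be made much larger than its physical mass, pushing this gap down in frequency without adding weight.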
2a MON. AM
Invited Paper
12:00
2aSAa9. Optimal sound-absorbing structures. Ping Sheng (Dept. of Phys., HK Univ. of Sci. & Technol., Clear Water Bay, Kowloon,
Hong Kong 000, China, sheng@ust.hk)
The causal nature of the acoustic response dictates an inequality that relates the absorption spectrum of a sample to its thickness. We
use the causal constraint to delineate what is ultimately possible for sound-absorbing structures, and denote those which can attain near-equality for the causal constraint as "optimal." By using acoustic metamaterials as backing to conventional porous absorbers, a design
strategy is presented for realizing structures with target-set absorption spectra and a sample thickness close to the minimum value dictated by causality. Using this approach, we have realized a 12-cm-thick structure that exhibits a broadband, near-perfect flat absorption
spectrum starting at around 400 Hz, while the minimum sample thickness calculated from the causal constraint is 11.5 cm. To illustrate the versatility of the approach, two additional optimal structures with different target absorption spectra are presented. This
“absorption by design” strategy enables the tailoring of customized solutions to difficult room acoustic and noise remediation problems.
[Work done in collaboration with Min Yang, Shuyu Chen, and Caixing Fu.]
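The causal inequality invoked here is commonly written (in the form I believe the authors use; treat the exact prefactor as an assumption) as d >= (B_eff/B0) / (4 pi^2) * Int ln[1/(1 - A(lambda))] d(lambda), integrating the target absorption spectrum A over wavelength. The sketch below evaluates that bound numerically for a hypothetical flat 99% absorption target from 400 Hz up; the resulting ~10 cm is the same order as the 11.5 cm quoted in the abstract.

```python
import numpy as np

C_AIR = 343.0  # speed of sound in air (m/s)

def causal_min_thickness(freq_hz, absorption, bulk_ratio=1.0):
    """Causal lower bound on absorber thickness (assumed form):
        d_min = (B_eff/B0) / (4*pi**2) * Int ln[1/(1 - A(lambda))] d(lambda),
    with lambda = c / f.  `bulk_ratio` is the effective-to-air
    bulk-modulus ratio (1.0 assumed here)."""
    lam = C_AIR / np.asarray(freq_hz, dtype=float)
    g = np.log(1.0 / (1.0 - np.asarray(absorption, dtype=float)))
    idx = np.argsort(lam)                 # integrate with lambda ascending
    lam, g = lam[idx], g[idx]
    integral = np.sum(0.5 * (g[1:] + g[:-1]) * np.diff(lam))  # trapezoid rule
    return bulk_ratio * integral / (4.0 * np.pi ** 2)

# Hypothetical target: flat 99% absorption from 400 Hz to 20 kHz.
f = np.linspace(400.0, 20000.0, 4000)
d_min = causal_min_thickness(f, np.full_like(f, 0.99))
print(f"causal minimum thickness: {100 * d_min:.1f} cm")
```

Because the integrand diverges as A approaches 1, perfect absorption over a band is only reachable with ever-thicker samples; "optimal" structures are those whose actual thickness sits close to this bound.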
MONDAY MORNING, 26 JUNE 2017
ROOM 204, 9:20 A.M. TO 12:20 P.M.
Session 2aSAb
Structural Acoustics and Vibration: General Topics in Structural Acoustics and Vibration II
Benjamin Shafer, Chair
Technical Services, PABCO Gypsum, 3905 N 10th St., Tacoma, WA 98406
Contributed Papers
9:20
2aSAb1. Modeling and estimation of focused ultrasound radiation force
for modal excitation. Songmao Chen, Christopher Niezrecki, Alessandro
Sabato, and Peter Avitabile (Mech. Eng., Univ. of Massachusetts Lowell,
One University Ave., Lowell, MA 01854, song.m.chen@gmail.com)
To date, conventional excitation techniques such as impact hammers or
mechanical shakers are commonly used in experimental modal
testing. However, these techniques require that the excitation device be in
direct contact with the test article, resulting in measurement distortions, particularly
for small structures like MEMS cantilevers. In addition, it is physically
difficult or even impossible to apply these contact-type excitations to certain
structures, for example, biological tissues and thumbnail-sized turbine
blades. Moreover, these conventional excitations have limited bandwidth,
usually below 10 kHz, and are thus not applicable to structures whose higher-frequency modes are of interest. Focused ultrasound radiation force, having a
much broader frequency bandwidth, has recently been used to excite structures with sizes ranging from micro to macro-scale. Therefore, it can potentially be used as an alternative non-contact excitation method for
experimental modal analysis. Yet, this force remains to be quantified in
order to obtain the force-response relationship, i.e., the frequency response
functions (FRFs) of test articles. The dynamic focused ultrasound radiation
force is modeled and estimated using the calibrated sound pressure fields
generated by a spherically focused ultrasonic transducer (UT) driven by amplitude modulated signals. Its application for modal excitation is to be
discussed.
9:40
2aSAb2. Investigation of various damping measurement techniques.
Christian A. Geweth (Chair of VibroAcoust. of Vehicles and Machines,
Tech. Univ. of Munich, Boltzmannstraße 15, Garching b. München 85748,
Germany, christian.geweth@tum.de), Patrick Langer (Chair of
VibroAcoust. of Vehicles and Machines, Tech. Univ. of Munich, Munich,
Bavaria, Germany), Kheirollah Sepahvand (Chair of VibroAcoust. of
Vehicles and Machines, Tech. Univ. of Munich, Garching bei Munich,
Germany), and Steffen Marburg (Chair of VibroAcoust. of Vehicles and
Machines, Tech. Univ. of Munich, Muenchen, Germany)
Comparing experimentally determined damping values with damping
values from simulations requires considerable effort. Precise modeling of the
boundary conditions, with respect to their impact on the damping, is
often difficult to realize and time consuming. Furthermore, the
measurement parameters used, such as sampling frequency or windowing, can have
a non-negligible influence on the experimentally determined damping
value. In order to observe the sensitivity of individual parameters during the
determination of damping, the dynamic behavior of virtual models with a
known excitation is investigated. The numerical methods used are validated
against analytical solutions for the model. The time data obtained from the
model are used to determine the damping with different methods, so that
the influence of each method on the model can be identified. The
virtual modeling opens up the opportunity to identify and quantify different
sources of error and disturbance.
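One of the classic time-domain damping-determination methods alluded to above is the logarithmic decrement. The sketch below follows the abstract's validation idea in miniature: generate a virtual free-decay response with a known damping ratio, then recover that ratio from successive peak amplitudes. The signal parameters are illustrative, not from the paper.

```python
import numpy as np

def log_decrement_zeta(x):
    """Estimate the damping ratio of a free-decay signal from the mean
    logarithmic decrement between successive positive peaks:
        delta = ln(x_k / x_{k+1}),  zeta = delta / sqrt(4*pi**2 + delta**2).
    """
    # indices of positive local maxima
    pk = np.where((x[1:-1] > x[:-2]) & (x[1:-1] > x[2:]) & (x[1:-1] > 0))[0] + 1
    peaks = x[pk]
    delta = np.mean(np.log(peaks[:-1] / peaks[1:]))
    return delta / np.sqrt(4.0 * np.pi ** 2 + delta ** 2)

# Virtual single-DOF model with known damping (hypothetical values).
fs, fn, zeta_true = 20000.0, 50.0, 0.02   # sample rate, natural freq, ratio
t = np.arange(0.0, 0.5, 1.0 / fs)
wn = 2.0 * np.pi * fn
wd = wn * np.sqrt(1.0 - zeta_true ** 2)   # damped natural frequency
x = np.exp(-zeta_true * wn * t) * np.cos(wd * t)
zeta_est = log_decrement_zeta(x)
```

Running the same synthetic signal through several estimators (half-power bandwidth, circle fit, curve fitting) and varying sampling frequency or windowing is precisely how the known-excitation virtual model isolates each method's error contribution.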
10:00
2aSAb3. Spatial distribution of acoustic radiation force modal
excitation from focused ultrasonic transducers in air. Thomas M. Huber,
Ian McKeag, William Riihiluoma (Phys., Gustavus Adolphus College, 800
W College Ave., Saint Peter, MN 56082, huber@gac.edu), Christopher
Niezrecki, Songmao Chen, and Peter Avitabile (Mech. Eng., Univ. of
Massachusetts Lowell, Lowell, MA)
Recent studies have utilized the acoustic radiation force for non-contact
modal excitation of structures in air. When two ultrasonic frequencies, for
example, f1 = 610 kHz and f2 = 600 kHz, are incident on an object, the
acoustic radiation force produces a driving force at the difference frequency
f1 − f2 = 10 kHz. The current study compared the spatial distribution of driving
force from a pair of co-focused transducers emitting f1 and f2 to that from a single
focused transducer emitting an amplitude-modulated signal containing both f1 and
f2. The difference frequency ranged from 400 Hz to 80 kHz. Ultrasonic
transducers with focal spot diameters of ~2 mm, mounted on translation
stages, could be directed at a 100 kHz PCB 378C01 microphone or a 19.8 ×
6.8 × 0.37 mm clamped-free brass cantilever monitored by a Polytec PSV-400
vibrometer. When mixing of frequencies f1 and f2 was solely due to the
acoustic radiation force, the driving force was localized to a region a few
mm in diameter. However, in other cases, very broad spatial distributions of
difference-frequency excitation were measured; this indicated non-acoustic-radiation-force
mixing of f1 and f2, such as within the transducer. The practical
implications for non-contact modal excitation using acoustic radiation
force will be discussed.
10:20
2aSAb4. Vibro-acoustic modeling of roof panels for analysis of sound
radiation from droplet impact. Sangmok Park, Yunsang Kwak, Deukha
Kim, Junhong Park (Hanyang Univ., 222, Wangsimni-ro, Seongdong-gu,
Eng. Center, 306, Seoul 04763, South Korea, tkdahr619@hanyang.ac.kr),
and Kyungsup Chun (Hyundai Motors, Hwaseong, South Korea)
Sound generated by impacts between raindrops and the roof panels of
vehicles is an important factor in perceived automotive quality when driving in rainy
conditions. An analytical method to control this phenomenon is therefore necessary.
In this research, a theoretical model for predicting the characteristics of
sound radiation by droplet impacts was proposed. An experiment for measuring
the forces generated by falling droplets was conducted, and the characteristics
of the measured forces were investigated in the frequency domain. A
measurement on a plate was performed to understand the sound radiation
produced by droplet impacts. Correlations between acoustic characteristics
and properties of the plate were identified. A vibro-acoustic model was
developed to analyze the experimental results. Assuming generation of
sound sources at each location due to the vibrating plate, the radiated sound
fields were theoretically calculated and verified by comparison with the measured
results. Under single- and multi-layered conditions, factors influencing the
acoustic properties were investigated based on the model. As a result, the
proposed model makes it possible to predict the acoustic response of
vehicles to raindrops and to tailor roof panels to specific designs.
10:40
2aSAb5. Clamping force diagnosis during bolting process using acoustic
signatures. Gyungmin Toh, Jaehong Lee (Mech. Eng., Hanyang Univ., 222.
Wangsimni-ro, Seongdong-gu, Seoul 04763, South Korea, avlrudals@
gmail.com), Jaesoo Gwon (Hyundai Motors, Seoul, South Korea), and
Junhong Park (Mech. Eng., Hanyang Univ., Seoul, South Korea)
A method of measuring the fastening force of a bolt in a non-contact
manner during fastening is of great value to industry. In
this study, the fastening force was estimated from the change in dynamic
characteristics as the bolts were fastened. Experiments were carried out by
measuring the vibration generated when bolts instrumented with load cells were fastened.
When a bolt is fastened, its vibration characteristics are measured
by an accelerometer attached to the joint structure of the bolt by the
fastener. The measured vibration signals are classified using the cepstrum of
the bolt vibration. Learning was performed by treating each axial force level
as a distinct "speaker," as in speaker recognition. The clamping force was
then predicted by determining which learned clamping force is most similar
to the present one. The proposed method is verified by applying it to an
actual bolted structure.
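The difference-frequency excitation exploited in 2aSAb3 above follows from the radiation force scaling with the square of the acoustic pressure: squaring a two-tone field produces a low-frequency component at f1 − f2. The sketch below verifies this numerically for the abstract's example tones; the sample rate and record length are arbitrary choices, not experimental parameters.

```python
import numpy as np

# Two ultrasonic tones, as in the abstract's example: f1 = 610 kHz, f2 = 600 kHz.
fs = 10e6                                  # sample rate (Hz), arbitrary
t = np.arange(0, 0.01, 1 / fs)             # 10 ms record
p = np.cos(2 * np.pi * 610e3 * t) + np.cos(2 * np.pi * 600e3 * t)

# Radiation force ~ squared pressure; the cross term 2*cos(w1 t)*cos(w2 t)
# contains cos((w1 - w2) t), a component at the 10 kHz difference frequency.
force = p ** 2
spec = np.abs(np.fft.rfft(force))
freqs = np.fft.rfftfreq(force.size, 1 / fs)

# The audio-range content is what actually drives the structure.
lo = (freqs > 0) & (freqs < 100e3)
f_drive = freqs[lo][np.argmax(spec[lo])]
print(f"dominant low-frequency drive: {f_drive / 1e3:.1f} kHz")
```

The remaining components of p**2 sit at DC and near 1.2 MHz, far above any structural mode of interest, which is why the object responds at the difference frequency alone when the mixing is purely due to radiation force.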
11:00
2aSAb6. A vibroacoustic analysis of pre-stressed saw blades to identify
instabilities considering gyroscopic effects and centrifugal forces
utilizing the finite element method. Marcus Guettler (Faculty of Mech.
Eng., Tech. Univ. of Munich, Boltzmannstr. 15, Munich 85748, Germany,
marcus.guettler@tum.de), Christopher Jelich (Faculty of Mech. Eng., Tech.
Univ. of Munich, Garching b. München, Germany), Steffen Marburg
(Faculty of Mech. Eng., Tech. Univ. of Munich, Muenchen, Germany),
Ettore Grasso, and Sergio De Rosa (Dipartimento di Ingegneria industriale,
Universita degli Studi di Napoli Federico II, Napoli, Italy)
In various engineering fields, large saw blades on heavy machines are
used for several tasks. In civil engineering, they typically cut large concrete
structures such as walls, for adding or changing doors and windows in buildings, and they are also used in road and bridge maintenance work.
In the cutting process, especially large blades tend to vibrate excessively.
The unstable behavior can lead to wide cutting lines, less productivity, or
even jamming of the saw blade resulting in an unsafe environment for the
workers on-site. For these productivity and security related issues, engineers
face the challenge to investigate the dynamic behavior of large saw blades
at early stages of product development. The finite element method has
emerged as a useful tool for investigating saw blade designs, since various effects such as pre-stress and gyroscopic and/or centrifugal forces can
be considered. In this work, the authors use the finite element method to
study the effect of gyroscopic and centrifugal forces on pre-stressed saw
blades to (i) identify unstable dynamic behavior and further (ii) optimize the
design to increase the vibroacoustic stability. In addition, the kinetic energy
values are used as a measure for potential sound radiation.
11:20
2aSAb7. An experimental investigation into the insertion loss from
subscale acoustic enclosures with geometric imperfections. Christopher
Beale, Murat Inalpolat, Christopher Niezrecki, and David J. Willis (Mech.
Eng., Univ. of Massachusetts Lowell, One University Ave., Lowell, MA
01854, Christopher_Beale@student.uml.edu)
Enclosures with different geometries constitute the internal sections of
various engineering applications, including the cabins of passenger cars, the fuselages and wings of aircraft, and the internal compartments of wind turbine blades.
Acoustic insertion loss from and to these enclosures affects certain objective
and subjective acoustic measures, along with the ability to detect damage.
This presentation describes a thoroughly executed test plan that identifies
the effect of geometric imperfections, such as holes, edge splits, and cracks
with different severity levels and locations, on the insertion loss from a
subscale acoustic enclosure. A composite rectangular-prism enclosure,
located inside an anechoic chamber, was internally ensonified using a loudspeaker,
and an externally located condenser microphone was used to measure the insertion
loss under different conditions. One of the faces of the enclosure possessed
imperfections of various sizes and locations simulating damage. Insertion-loss
deviations introduced through the prescribed damage cases were compared to a
baseline case with no prescribed imperfections. The results obtained from the
initial test campaign with healthy and damaged enclosure specimens were used
to draw several conclusions on the detectability and feature-extraction
capabilities required for damage detection from subscale enclosures.
11:40
2aSAb8. Simulation of coupled structural-acoustic response with
dynamic damage evolution. Jonathan Pitt (Appl. Res. Lab., The Penn State
Univ., PO Box 30, Mailstop 3320B, State College, PA 16804,
jonathan.pitt@psu.edu)
A novel time-domain method for simulating dynamic damage evolution
in a coupled structural-acoustic system is presented. The system is derived
via the theory of continuum damage mechanics and incorporates standard
damage evolution models, but is readily extensible to more exotic formulations.
The overall solution method is staggered, solving for the dynamic
damage evolution first with an explicit step, and then using the new values
in the coupled computation of the structural-acoustic system. The spatial
domain is discretized using a mixed finite element method, and the temporal
domain is discretized with a higher-order implicit time-discretization
scheme. Efforts toward fully coupled verification of the solution algorithm
are presented, as are validation studies for cases without evolving damage.
Applications with evolving damage are presented and constitute a novel first-principles
study of changes in the structural-acoustic response to
dynamically evolving damage in the structure. Special attention is given to
brittle fracture. Examples of downstream usage of the evolving structural
response are discussed in the concluding remarks.
12:00
2aSAb9. Using reciprocity principles and sensitivity functions for the
vibroacoustic response of panels under random excitations. Christophe
Marchetto, Laurent Maxit (Univ Lyon, INSA-Lyon, Laboratoire Vibrations
Acoustique, 25 bis av. Jean Capelle, Villeurbanne F-69621, France,
christophe.marchetto@usherbrooke.ca), Olivier Robin, and Alain Berry
(Groupe d’Acoustique de l’Universite de Sherbrooke, Universite de
Sherbrooke, Sherbrooke, QC, Canada)
The vibroacoustic characterization of panels subjected to random pressure
fields is of great interest to industry. The test means associated
with those excitations (i.e., wind tunnels, reverberant rooms) are expensive
and can hardly be controlled. An alternative method to experimentally characterize
the behavior of a panel under random pressure fields is therefore
proposed. The mathematical formulation of the problem describes
the vibroacoustic behavior of a panel as a function of the cross-spectral density
function of the considered excitation and so-called "sensitivity
functions." These functions can be estimated experimentally using the reciprocity
principle, which can be applied either for characterizing the structural
response, by exciting the panel with a normal force at the point of interest, or
for characterizing the acoustic response (radiated pressure, acoustic intensity),
by exciting the panel with a monopole and a dipole source. For validation
purposes, the method is applied numerically and experimentally to the
case of a diffuse acoustic field. Based on indicators such as the vibratory
response and the transmission-loss factor, the method is finally compared
with measurements performed in a coupled anechoic-reverberant room facility
following standards.
MONDAY MORNING, 26 JUNE 2017
BALLROOM A, 9:20 A.M. TO 12:20 P.M.
Session 2aSC
Speech Communication: Speech Production (Poster Session)
Melissa M. Baese-Berk, Chair
Department of Linguistics, Univ. of Oregon, 1290 University of Oregon, Eugene, OR 97403
All posters will be on display from 9:20 a.m. to 12:20 p.m. To allow contributors in this session to see the other posters, authors of
odd-numbered papers will be at their posters from 9:20 a.m. to 10:50 a.m. and authors of even-numbered papers will be at their posters
from 10:50 a.m. to 12:20 p.m.
Contributed Papers
2aSC1. Rhoticity in Cajun French. Katherine M. Blake and Kelly
Berkson (Linguist, Indiana Univ., 107 S Indiana Ave., Bloomington, IN
47405, kamblake@indiana.edu)
Previous studies of Cajun French (CF) report a shift in pre-rhotic vowel
quality (Conwell & Juilland 1963, Blainey 2015, Dubois & Noetzel 2005,
Salmon 2007, Lyche, Meisenburg & Gess 2012). This is unsurprising, in
that such behavior is reported for numerous Francophone varieties, but prior
work on rhotics in CF has been largely impressionistic. As such, the current
study presents an acoustic analysis of pre-rhotic vowels in CF. Formant
measurements were taken from data collected from three native speakers. The
question of whether rhotic deletion is present in CF is also addressed: /r/-dropping
is widely attested cross-linguistically, and has been reported in the
closely related Francophone variety of Acadian French (Cichocki 2012,
2006). Given the previous literature on /r/-dropping and rhoticity in CF and
other Francophone varieties, it was expected that these data would show
variable /r/ production, compensatory lengthening of vowels preceding a
dropped /r/, and a general trend of /r/-induced vowel lowering. Results of
this study confirm the presence of highly variable rhotic deletion, a lack of
compensatory lengthening triggered by this deletion, a centralization effect
of the rhotic on the quality of preceding vowels, and variable lowering of
the third formant.
2aSC2. Articulatory data for a five-way liquid contrast: 3D ultrasound
of Marathi. Kelly Berkson and Abigail H. Elston (Linguistics, Indiana
Univ., 1020 E. Kirkwood Ave., Ballantine Hall 844, Bloomington, IN
47405, kberkson@indiana.edu)
Lateral and rhotic consonants show great crosslinguistic variation, and are
traditionally described as articulatorily complex (Ladefoged & Maddieson
1996; Proctor 2011; Wiese 2001, 2011). A substantial body of work has investigated the characteristics of liquids in languages like English (Delattre & Freeman 1968; Guenther et al. 1998; Sproat & Fujimura 1993; Westbury, Hashi,
& Lindstrom 1998; many others), which contains a two-way contrast. What
of South Asian languages, however, which often contain a greater number of
liquids? Tamil liquids have been imaged using palatography and electropalatography (McDonough & Johnson 1997) as well as MRI (Narayanan et al.
1999), and Malayalam liquids have been imaged using mid-sagittal ultrasound (Scobbie, Punnoose, & Khattab 2013). Little has been done with Marathi, though. Like Tamil and Malayalam, Marathi—an Indic language spoken
in the Indian state of Maharashtra—contains a five-way liquid contrast. This
work utilizes recent advances in 3D ultrasonography to provide detailed articulatory data for Marathi’s five liquids (/l/, /l¨/, /r/, /r¨/, /ì/) (Dhongde & Wali
2009; Pandharipande 1997). Real-time images of tongue motion are combined with digitized impressions of the palate to provide new insights into the
complex articulatory gestures involved in production of these sounds.
2aSC3. Asymmetrical patterns of formant variability in English vowels.
Wei-rong Chen, Mark Tiede, and D. H. Whalen (Haskins Labs., 300 George
St., Ste. 900, New Haven, CT 06511, chenw@haskins.yale.edu)
Previous studies have claimed that lower formants should be weighted
more than higher formants in a perceptual model of vowel perception (e.g.,
Schwartz et al., 1997). Given this formant weighting hypothesis, and if vowels have acoustic targets, vowels should be more variable in higher formant
frequencies. Here, we examined within-speaker variability for five English
vowels /æ, ʌ, ɔ, ɛ, ɪ/ in various contexts as produced by 32 speakers in the
x-ray microbeam database (Westbury, 1994). For variabilities of the first
three formants, only /ɔ/ follows this prediction (i.e., variability: F3 > F2 >
F1), while /æ/ exhibits the opposite pattern; if we ignore F3 (as being less
reliably measured), most vowels conform to the prediction (i.e., variability:
F2 > F1), except for /æ/. Although the F2 variability is generally consistent
with the perceptual model of vowel perception, it is also consistent with a
possibly greater effect of consonant coarticulation (which is extensive here)
on F2 relative to F1; this requires more testing. Further, while these results
do not fully conform to the prediction made by the hypotheses, coproduction
effects arising from the diverse contexts likely interact with the expected
tendency. Correlation with observed kinematic variabilities will also be discussed. [Work supported by NIH grant DC-002717.]
2aSC4. Acoustic properties of Mexico City Spanish vowel weakening.
Meghan F. Dabkowski (Dept. of Spanish and Portuguese, The Ohio State
Univ., 298 Hagerty Hall, 1775 College Rd., Columbus, OH 43210,
dabkowski.5@osu.edu)
Mexico City Spanish exhibits weakened vowels that have been
described as reduced, relaxed, unstable, obscured, abbreviated, devoiced,
and “lost,” indicating likely reduction in duration, voicing, and/or quality.
The objective of this study is to precisely identify the acoustic nature of this
vowel weakening. To this end, recorded spontaneous speech was collected
from 20 speakers native to Mexico City. 3000 monophthong tokens
were analyzed acoustically in Praat (Boersma & Weenink 2016), and measurements were taken for F1, F2, vowel duration, and voicing duration. Findings show that vowel weakening in this variety consists primarily of
shortening and weakened voicing, but not raising or centralization. Instead
of simple presence or absence of voicing, many tokens show weak voicing,
characterized by a lower intensity in the waveform and a lighter voice bar,
or partial voicing that does not endure throughout the entire vowel. The
presence of frication distinguishes devoicing from weak voicing when other
aspects of the acoustic signal are not clear indicators. Many tokens exhibited
full voicing, but only consisted of 2-3 wave cycles, resulting in a severely
shortened vowel. Uncovering the acoustic properties of these weakened
vowels is crucial to understanding how this variety fits with cross-linguistic
vowel weakening trends.
2aSC5. Vowel acoustics in three dialects of Spanish: Iberian,
Dominican, and Mexican. Stephanie C. Fermin, Martha Tyrone (LIU–
Brooklyn and Haskins Labs, 1 University Plaza, Brooklyn, NY 10021,
Stephanie.c.fermin@gmail.com), Laura L. Koenig (Adelphi Univ. and
Haskins Labs., New Haven, CT), and Isabelle Barriere (LIU–Brooklyn,
Brooklyn, NY)
A growing number of studies have begun to investigate vowel variability
among Spanish speakers. The purpose of this study was to measure the
acoustics of vowels in three dialects of Spanish and to compare how these
dialects vary in their vowel production. This information is important for
speech and language clinicians working with dialectally diverse individuals
to recognize the difference between typical dialectal variation and a speech/
language disorder. We specifically examined dialects that have developed
separately from each other and also have large numbers of speakers. Data
were obtained from five female speakers in each of these groups: Iberian
speakers, Dominican speakers, and Mexican speakers (N = 15). To analyze
the effect of speaking task on production variation, we elicited a controlled
speaking task and a naturalistic speaking task. For the controlled task, the
participants were asked to read aloud randomized phrases on a computer
screen. For the naturalistic task, participants were presented with a simple
map from which to give navigation instructions. The results showed differences in average vowel placement and token-to-token variability. These data
disprove previous hypotheses that vowels are stable across Spanish dialects.
2aSC6. Phonetic variability in Moroccan Arabic rhotics. Aaron Freeman
(Dept. of Linguist, Univ. of Pennsylvania, 3401-C Walnut St., Ste. 300, C
Wing, Philadelphia, PA 19104, aaronfr@sas.upenn.edu)
Moroccan Arabic /r/ and its pharyngealized counterpart exhibit a wide
range of variability in their pronunciation, with reported articulations ranging from apical trills to uvular fricatives. Using a phonetic dataset elicited
from speakers of the dialect of Fès (reported to traditionally have a uvular
variant), I assess the distribution and acoustic properties of rhotic variants,
including coarticulatory effects. The data present three distinct rhotic articulations: (1) an apical trill or tap, (2) an apical continuant produced together
with high-frequency sibilant noise, and (3) a dorsal sonorant or rhotacized
vowel similar to English “burred /r/.” Despite claims in the literature, no
uvular fricative was observed in the data, and the dorsal sonorant was identified by speakers as the idiosyncratic local pronunciation. Variants (1) and
(2) further exhibit devoiced positional variants. Lowered F2 of adjacent
vowels and of sonorant portions of the rhotic signal differentiate the pharyngealized phoneme /rˤ/ from its plain counterpart /r/. However, both phonemes were found to exhibit the same range of variation in their primary
articulation. Two acoustic properties common to all variants were (a)
depression and attenuation of upper formants and (b) the presence of some
aperiodic noise above 5 kHz.
2aSC7. On the acoustic cues of unreleased stops. Ting Huang (Dept. of
Linguist and Philosophy, Massachusetts Inst. of Technol., Graduate Inst. of
Linguist, Rm. B306, HSS Bldg., No. 101, Section 2, Kuang-Fu Rd.,
Hsinchu City 30013, Taiwan, funting.huang@gmail.com) and Michael
Kenstowicz (Dept. of Linguist and Philosophy, Massachusetts Inst. of
Technol., Cambridge, MA)
Unreleased stops, lacking a burst, have been claimed to have low perceptibility and are more likely to neutralize place contrasts. While this proposition has been supported by examining no-burst VC fragments spliced from
released stops, little is known about the acoustic discriminability among true
unreleased stops. This study fills this gap by analyzing the acoustic correlates
of VC (where C = unreleased stops p̚, t̚, k̚, ʔ) in Cantonese and Taiwanese
Southern Min. Specifically, duration and formant transitions were estimated
across three distinct contexts: VC-V vs. VC#V vs. VC#C. The preliminary
results are (a) the formant transitions are effective cues to place contrasts of
the unreleased stops: the labial has low F2 offset frequency, the coronal has
high F2, and the dorsal has low F3, (b) the magnitude of transition cues
varies with different contexts: the cues are more significant when followed
by a vowel-initial, lexical morpheme (VC#V) than by a consonant-initial or
functional morpheme (VC#C, VC-V), and (c) vowel raising is resisted in the
dorsal-final environment. This finding may have implications for the phonotactic constraint *[ + high][high] in the two languages.
Diphthongs have formant transitions sensitive to speaking rate, stress,
vowel quality, and language ability. Tongue movement typically shows a
functional “pivot” where the palate to tongue distance is almost constant;
some combinations result in an “arch,” with only part of the tongue moving
[Iskarous, J. Phon. 33, 363-381, 2005]. Here, ultrasound images of the
tongue in Korean, Japanese, and Mandarin diphthongs and triphthongs
(Mandarin) (vowel sequences in Korean and Japanese) were analyzed. Five
repetitions of 6 nonwords were presented in Hangul, in Hiragana, or in both Pinyin and a character for a word with the same vowel on tone 1. Diphthongs
were [ai, ei, au], and triphthongs/three-vowel sequences were [iau, ieu,
uai]. All the ultrasound image frames for the target vowels were traced
and superimposed. Preliminary results indicate that most vowel quality pairs
resulted in a pivot pattern, with some arch patterns as well. Even in Mandarin triphthongs, there were generally two pivots, even though phonologically the sequence is considered a single vowel nucleus. Several approaches
to quantifying this effect will be presented. It is possible that the success in
producing a pivot could indicate mastery of production, both in development and in second language learning.
2aSC9. Exploring the acoustic characteristics of individual variation.
Benjamin V. Tucker and Daniel Brenner (Linguist, Univ. of AB, 4-32
Assiniboia Hall, Edmonton, AB T6G 2E7, Canada, bvtucker@ualberta.ca)
Studies of the acoustic properties of words often analyze a small subset
of words across a large population of speakers. Much of the previous
research has not investigated the individual variation produced by a single
speaker in large sets of words. The present study analyzes the individual
variation produced by a male Western Canadian English speaker, who produced 26,800 English words and 9,600 pseudo-words. All pseudo-words
were phonotactically licit and were generated using the software package
Wuggy (Keuleers & Brysbaert, 2010). Each word has been force-aligned
using the Penn Forced Aligner (Yuan & Liberman, 2008) and then hand corrected by trained phoneticians. We investigate the formant space, word pitch
contours, segmental duration, and other acoustic characteristics relevant to
classes of segments (such as center of gravity for fricatives). An acoustic
comparison is performed between the words and pseudo-words. We explore
the acoustic variation of the individual segments produced by this speaker
and investigate his individual speech patterns. Finally, we consider the value
of delving deeply into the productions of a single speaker rather than relying on
summaries averaged across a large sample.
2aSC10. Uncertainty of glottal airflow estimation during continuous
speech using impedance-based inverse filtering of the neck-surface
acceleration signal. Victor M. Espinoza (Dept. of Music and Sonology,
Universidad de Chile, Compañía 1264, 7th Fl., B Sector, Santiago 8340380,
Chile, vespinoza@uchile.cl), Daryush Mehta, Jarrad Van Stan, Robert E.
Hillman (Ctr. for Laryngeal Surgery and Voice Rehabilitation,
Massachusetts General Hospital, Boston, MA), and Matías Zañartu (Dept.
of Electron. Eng., Universidad Técnica Federico Santa María,
Valparaíso, Chile)
The aim of this work is to determine the uncertainty of non-invasive
glottal aerodynamic measures that are obtained using subglottal impedance-based inverse filtering (IBIF) of the signal from a neck-placed accelerometer
during continuous speech. Currently, we are studying the vocal behavior of
individuals with typical voices and voice disorders by analyzing weeklong
recordings using a smartphone-based ambulatory voice monitor. We extend
previously reported analyses of sustained vowel production using subglottal IBIF and move toward continuous-speech applications where IBIF
parameters are estimated in a frame-based approach. Selected voiced frames
of both oral-airflow (baseline) and acceleration signal from the Rainbow
Passage are used to build a probabilistic model of IBIF parameters to run
3579
J. Acoust. Soc. Am., Vol. 141, No. 5, Pt. 2, May 2017
multiple random realizations of the inverse-filtered neck-surface acceleration signal. Confidence intervals are estimated for both the glottal waveform
and derived features. The probabilistic model is tested using data from
patients with vocal hyperfunction and matched-control subjects with normal
voices at a comfortable pitch and loudness in an acoustically treated sound
booth. Results show that model parameters follow approximately normal distributions and that the confidence intervals for the estimated glottal aerodynamic measures are within 10%, in close agreement with previously
reported IBIF performance using sustained vowels.
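The style of uncertainty propagation described above, in which random parameter realizations are drawn from a fitted probabilistic model and confidence intervals are read off the resulting distribution of a derived measure, can be sketched generically. The ratio measure and the Gaussian parameters below are hypothetical stand-ins, not the IBIF model itself:

```python
import numpy as np

def monte_carlo_ci(estimate_fn, param_means, param_stds,
                   n_draws=2000, level=0.95, seed=0):
    """Propagate Gaussian parameter uncertainty through a derived measure
    by random realizations, returning a percentile confidence interval."""
    rng = np.random.default_rng(seed)
    draws = rng.normal(param_means, param_stds,
                       size=(n_draws, len(param_means)))
    values = np.array([estimate_fn(p) for p in draws])
    lo, hi = np.percentile(values, [100 * (1 - level) / 2,
                                    100 * (1 + level) / 2])
    return lo, hi

# Hypothetical derived measure: a ratio of two model parameters
# (a stand-in for a glottal aerodynamic feature, not the IBIF model).
measure = lambda p: p[0] / p[1]
lo, hi = monte_carlo_ci(measure, param_means=[8.0, 2.0],
                        param_stds=[0.4, 0.1])
```

With the (assumed) parameter means above, the interval brackets the nominal ratio of 4.0.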
2aSC11. Exertive modulation of coordinative structures in speech. Sam
Tilsen (Cornell Univ., 203 Morrill Hall, Ithaca, NY 14853, tilsen@cornell.
edu)
An articulography study was conducted to investigate variability in the
relative contributions of the upper lip, lower lip, and jaw to bilabial closure
and opening tasks in speech. We currently do not know the extent to which
the contributions of articulator subsystems may vary in the absence of linguistic contextual variation. One hypothesis is that variation in exertive
mechanisms (e.g., arousal, effort, and attention) differentially affects articulator subsystems; this predicts that variation in articulator contributions will
be nonstationary and will correlate with exertive variables. In this study,
head movement during responses is considered a proxy for exertive variation. Nine experimental sessions were conducted in which six participants
repeatedly produced the form [i.pa], instructed to do so as consistently as
possible throughout the session. It was observed that distributions of relative
articulator contributions differed substantially across participants and were
non-stationary for all participants. Head movement during response production accounted for a substantial amount of variation in relative articulator
contributions. These results show that interactions between subsystems in a
coordinative structure are nonstationary and differentially susceptible to exertive modulations. This suggests that experimental manipulation of exertion can be used to investigate the organization of articulatory control.
2aSC12. Aeroacoustic consequences of tongue troughs in labiodentals.
Christine H. Shadle (Haskins Labs., 300 George St., New Haven, CT 06511,
shadle@haskins.yale.edu), Hosung Nam (English Lang. and Lit., Korea
Univ., New Haven, CT), A. Katsika (Linguist, U.C. Santa Barbara, Santa
Barbara, CA), Mark Tiede, and D. H. Whalen (Haskins Labs., New Haven,
CT)
It has long been accepted that the main constriction for fricatives /f, v/ is
formed by the lower lip pressing against the upper teeth, thus allowing the
tongue to freely coarticulate with preceding and following segments. Here,
electromagnetic articulometry data were obtained from 5 subjects in a study
of tongue troughs, defined as a discontinuity in anticipatory coarticulation,
such as when the tongue drops during a bilabial consonant in /ii/ context.
The corpus included /f/ and /v/ in VC(C)V contexts, where V = /i/ for C = /v/, and V = {/i a u/} for C = /f/. The tongue moved down and back for /f/ in all
vowel contexts; in /iC(C)i/ context, the troughs were deeper for long
(VCCV) than short labial consonants, as predicted, and deeper for /f/ than
for /p, b, v, m/, which was unexpected. There seems to be a secondary gesture specifying that the tongue be down during labiodental fricatives. Our
hypothesis is that lowering the tongue for /f/ ensures that the airflow resistance will be due only to the labiodental constriction, thus providing an aeroacoustic advantage for the onset of turbulence. This is supported by
comparison of asymmetric vowel contexts (e.g., /uffi/-/iffu/ tokens); tongue
dorsum and blade sensors show the tongue moving quickly away from /i/ to
/f/ position, in contrast to movement from /u/ to /f/. Aerodynamic considerations thus appear to be actively incorporated into the speech motor plan.
[Work supported by NIH grant DC-002717.]
2aSC13. A comparison of lip positions for /ɨ/ and /ɯ/ in Bora. Steve
Parker (Appl. Linguist, Graduate Inst. of Appl. Linguist, Dallas, TX) and
Jeff Mielke (English, North Carolina State Univ., 221 Tompkins Hall,
Campus Box 8105, Raleigh, NC 27695-8105, jimielke@ncsu.edu)
Bora is a Witotoan language spoken by about 750 persons in the Amazon jungle of Peru and 100 in Colombia. Its phonemic vowels are /i e a o ɨ ɯ/ (Thiesen and Weber 2012). A contrast between a central and a back
vowel which are otherwise identical is theoretically significant, since it shows that a binary feature [±back] is too weak to encode all phonological contrasts along the front/back dimension. The three high vowels of Bora have been acoustically confirmed with measurements of F1-F3 (Parker 2001), but there has been no articulatory investigation of these vowels. Impressionistically, all of the Bora vowels except /o/ are articulated with unrounded lips. However, Ladefoged and Maddieson (1996) note that high back unrounded vowels in languages such as Japanese involve a gesture of lip compression or inrounding. Consequently, an important research question is whether the distinction between /ɨ/ and /ɯ/ in Bora can be attributed to a difference in lip position rather than to a primary contrast in tongue backness. To test this hypothesis, we obtained video recordings of native speakers on location in a Bora village (6 males and 7 females), and we report lip position data from these recordings.
Acoustics ’17 Boston
2a MON. AM
2aSC8. Ultrasound study for patterns of tongue movement in diphthongs, triphthongs, and vowel sequences in Korean, Japanese, and Mandarin. Boram Kim (Linguist, The Graduate Ctr., City Univ. of New York, 365 Fifth Ave., New York, NY 10016, bkim@gradcenter.cuny.edu), Ai Mizoguchi (Speech-Language-Hearing Sci., CUNY Graduate Ctr., New York, NY), and D. H. Whalen (Speech-Language-Hearing Sci., CUNY Graduate Ctr., New Haven, CT)
2aSC14. Gradient realization of Mandarin nasal codas. Yanyu Long
(Linguist, Cornell Univ., 203 Morrill Hall, Ithaca, NY 14853, yl2535@
cornell.edu)
Impressionistic studies have suggested that Mandarin nasal codas optionally delete before vowels (/dan/ + /ai/ → [dã.ai]) and assimilate in place to following stops (/dan/ + /pai/ → [dam.pai]). In this EMA study, we found that neither process is a categorical phonological change; both are gradient processes modulated by speech rate. Three native Mandarin speakers read disyllabic words with /n, ŋ/ codas before /a/ and /p/ in a carrier sentence at three speech rates. The tongue-tip trajectories show that both nasals retain a reduced tongue gesture before /a/. The reduction increases as speed increases and is more variable in faster speech. The lower-lip trajectories show that the labial gesture of /p/ occurs earlier when preceded by nasal codas. The time-normalized duration of this gestural advance decreases as speed increases (and decreases at a slower rate in slower speech), suggesting less gestural overlap at faster speeds. This relationship is opposite to the reduction-speed relationship. The articulatory evidence shows that, although not perceptually salient, there is a reduced tongue gesture for nasals before vowels and gestural overlap between stops and preceding nasals. The nonlinear effects of speed on gestural reduction and overlap suggest that both are gradient phonetic processes; their opposite relationships to speed, however, suggest different implementation mechanisms.
2aSC15. Articulatory differences between glides and vowels. Dan
Cameron Burgdorf and Sam Tilsen (Linguist, Cornell Univ., 28 Village
Circle Apt. 2, Ithaca, NY 14850, dcb275@cornell.edu)
Glides bear similarities to both consonants and vowels, and align with
different classes of phonological patterns in different languages. Limited
prior studies have suggested that glides are realized with a greater degree of
constriction and a shorter duration than vowels, but articulatory studies are
rare and we do not know the relative importance of these properties. To
determine how glides differ from vowels gesturally, an articulatory study
was conducted with EMA. A two-dimensional stimulus continuum was constructed by manipulating the intensity and duration of high vowels (i, u)
flanked by a low vowel (a), from a vowel-like extreme to a glide-like
extreme, and participants were asked to imitate these stimuli immediately
after hearing them. Results show interesting asymmetries and interactions;
while intensity was not directly imitated through degree of constriction, it
did affect intergestural timing, with low-intensity stimuli yielding shorter
durations in production. The relationship between glides and vowels is thus more nuanced than any single featural difference would capture.
2aSC16. Lemma frequency affects the duration of homographic noun/
verb conversion “homophones.” Arne Lohmann (Dept. of English,
Heinrich-Heine-Universität Düsseldorf, Universitätsstrasse 1, Düsseldorf 40225, Germany, arne.lohmann@hhu.de)
This paper reports empirical evidence for an effect of lemma frequency
on the duration of homographic Noun-Verb homophones in spontaneous
speech, e.g., cut(N) / cut(V). In previous research on effects of lemma frequency, these words were not investigated due to a focus on heterographic
homophones (e.g., Gahl 2008). However, testing the frequency hypothesis
on Noun/Verb homophones is of great theoretical relevance, as their representational status is especially controversial in both linguistic and psycholinguistic models of the mental lexicon. A mixed-effects analysis of
speech data from the Buckeye corpus yields the result that the more frequent
member of a Noun/Verb pair is pronounced with shorter duration, relative
to its low-frequency twin. Generally speaking, this finding supports models
of the mental lexicon in which entries are specified for syntactic category.
Furthermore, this outcome is at odds with an account of “complete frequency inheritance” across homophones, as predicted by the Levelt production model. A separate analysis of the subsample of low-frequency words
was carried out in order to further investigate possible frequency inheritance
effects, as under an assumption of “partial inheritance.” No such effects
were found. Taken together, the findings can be best accounted for in a
model that assumes completely separate lexical representations for homophonous words.
2aSC17. Quantifying kinematic aspects of reduction in a contrasting rate
production task. Mark Tiede (Haskins Labs., 300 George St., Ste. 900, New
Haven, CT 06511, tiede@haskins.yale.edu), Carol Y. Espy-Wilson (Elec. and
Comput. Eng., Univ. of Maryland, College Park, MD), Dolly Goldenberg
(Linguist, Yale Univ., New Haven, CT), Vikramjit Mitra (Speech Technol.
and Res. Lab., SRI Int., Menlo Park, CA), Hosung Nam (English Lang. and
Lit., Korea Univ., New Haven, CT), and Ganesh Sivaraman (Elec. and
Comput. Eng., Univ. of Maryland, Hyattsville, MD)
Electromagnetic articulometry (EMA) was used to record the 720 phonetically balanced Harvard sentences (IEEE, 1969) from multiple speakers
at normal and fast production rates. Participants produced each sentence
twice, first at their preferred “normal” speaking rate followed by a “fast”
production (for a subset of the sentences two normal rate productions were
elicited). They were instructed to produce the “fast” repetition as quickly as
possible without making errors. EMA trajectories were obtained at 100 Hz
from sensors placed on the tongue, lips, and mandible, corrected for head
movement and aligned to the occlusal plane. Synchronized audio was
recorded at 22050 Hz. Comparison of normal to fast acoustic durations for
paired utterances showed a mean 67% length reduction and, assessed using Mermelstein’s (1975) method, an average of two fewer syllables. A comparison of inflections in vertical jaw movement between paired utterances
showed an average of 2.3 fewer syllables. Cross-recurrence analysis of distance maps computed on paired sensor trajectories comparing corresponding
normal:normal to normal:fast utterances showed systematically lower determinism and entropy for the cross-rate comparisons, indicating that rate
effects on articulator trajectories are not uniform. Examples of rate-related
differences in gestural overlap that might account for these differences in
predictability will be presented. [Work supported by NSF.]
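Cross-recurrence analysis of the kind described above thresholds a distance map between two trajectories and then quantifies diagonal-line structure. The sketch below illustrates the idea on toy 2-D trajectories, with a simple run-length definition of determinism; it is an illustration of the general technique, not the authors' pipeline:

```python
import numpy as np

def cross_recurrence(x, y, radius):
    """Cross-recurrence matrix: 1 where trajectories x and y
    (n_samples x n_dims) come within `radius` of each other."""
    d = np.linalg.norm(x[:, None, :] - y[None, :, :], axis=-1)
    return (d < radius).astype(int)

def determinism(R, lmin=2):
    """Fraction of recurrent points lying on diagonal lines of length >= lmin."""
    n = R.shape[0]
    on_lines = 0
    for k in range(-n + 1, n):
        run = 0
        for v in list(np.diagonal(R, offset=k)) + [0]:  # sentinel 0 ends last run
            if v:
                run += 1
            else:
                if run >= lmin:
                    on_lines += run
                run = 0
    total = R.sum()
    return on_lines / total if total else 0.0

# Toy trajectories: a clean circular trajectory vs. a noisy copy.
t = np.linspace(0, 4 * np.pi, 200)
traj = np.column_stack([np.sin(t), np.cos(t)])
rng = np.random.default_rng(1)
noisy = traj + 0.3 * rng.standard_normal(traj.shape)

det_same = determinism(cross_recurrence(traj, traj, radius=0.2))
det_cross = determinism(cross_recurrence(traj, noisy, radius=0.2))
```

Comparing a trajectory against itself yields near-perfect determinism, while the noisy pairing yields a lower value; this mirrors the lower determinism reported for cross-rate comparisons.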
2aSC18. Physiological correlates of loud speech: Respiratory and
intraoral pressure data. Laura L. Koenig (Adelphi Univ. and Haskins
Labs., 300 George St., New Haven, CT 06511, koenig@haskins.yale.edu)
and Susanne Fuchs (Leibniz Ctr. for General Linguist, Berlin, Germany)
Many previous studies have investigated how increased loudness affects
speech production behavior, but authors have varied widely in the measures
they have used, and few studies have systematically assessed relationships
among respiratory, laryngeal, and acoustic measures. In this work, we present respiratory and aerodynamic (intraoral pressure) data on eleven German-speaking women who produced speech in regular and loud conditions
in three different tasks: Reading short sentences, responding to questions,
and producing spontaneous speech. Loudness variation was assessed naturalistically by varying speaker-experimenter distance. Respiratory behavior
was assessed using inductance plethysmography, and intraoral pressure was
obtained via a pressure transducer affixed to the hard palate. In the respiratory data, we measured inspiratory magnitude as well as the slope of the
inspiratory and expiratory phases. In the intraoral pressure data, we searched
automatically for the peak pressure value during plosives anterior to the
transducer (viz., bilabials and alveolars). These physiological data will be
related to previously-presented data on speech acoustics to begin to disentangle respiratory and supraglottal contributions to the characteristics of
loud speech.
2aSC19. Phonation threshold pressure and the properties of the vocal tract. Lewis Fulcher (Dept. of Phys. and Astronomy, Bowling Green State Univ., Bowling Green, OH 43403, fulcher@bgsu.edu), Alexander Lodemeyer (Processmachinery and Systems Eng., Friedrich-Alexander Univ. Erlangen-Nuernberg, Erlangen, Bavaria, Germany), Stefan Kniesburges (Phoniatrics and Pediatric Audiol., Univ. Hospital Erlangen, Erlangen, Germany), George Kaehler (Processmachinery and Systems Eng., Friedrich-Alexander Univ. Erlangen-Nuernberg, Erlangen, Bavaria, Germany), Michael Doellinger (Phoniatrics and Pediatric Audiol., Univ. Hospital Erlangen, Erlangen, Germany), and Stefan Becker (Processmachinery and Systems Eng., Friedrich-Alexander Univ. Erlangen-Nuernberg, Erlangen, Bavaria, Germany)
In a classic 1988 paper, Titze presented arguments based on the dynamics of the motion of the air through the glottis and its relation to the pressures there to describe how the presence of the vocal tract should affect the phonation threshold pressure. He argued that the action of the intraglottal pressures due to the vocal tract and the motion of the vocal folds would be in phase, and thus the presence of the vocal tract should lower the threshold pressure by an amount that depends upon its inertance. Since the inertance of the vocal tract depends directly upon its length and inversely upon its cross-sectional area, these arguments set the stage for quantitative studies of the connection of the geometry of the vocal tract with threshold pressure in both mathematical and physical models. To this end, two sets of experiments were carried out in Erlangen with a physical model of the vocal folds and a vocal tract whose dimensions could be varied. One set of experiments focused on the relationship of the threshold pressure and its frequency with the cross-sectional area (areas varied from about 2 cm² to 12 cm²), and the other addressed the relationship of threshold pressure and its frequency with the length of the vocal tract (lengths varied in increments of 5 cm from about 4 cm to 54 cm). These measurements are compared with calculations done with the surface wave model and those done with a two-mass model.
2aSC20. Sensorimotor adaptation to auditory perturbation of speech is facilitated by noninvasive brain stimulation. Laura Haenchen, Ayoub Daliri, Sara C. Dougherty, Emily J. Thurston, Julia Chartrove, Tyler K. Perrachione, and Frank H. Guenther (Boston Univ., 635 Commonwealth Ave., Boston, MA 02215, haenchen@bu.edu)
Repeated exposure to disparity between the motor plan and auditory feedback during speech production results in a proportionate change in the motor system’s response called auditory-motor adaptation. Artificially raising F1 in auditory feedback results in a concomitant decrease in F1 during speech production. Transcranial direct current stimulation (tDCS) can be used to alter neuronal excitability in focal areas of the brain. The present experiment explored the effect of noninvasive brain stimulation applied to the speech premotor cortex on the timing and magnitude of adaptation responses to artificially raised F1 in auditory feedback. Participants (N = 18) completed a speaking task in which they read target words aloud. Participants’ speech was processed to raise F1 by 30% and played back to them over headphones in real time. A within-subjects design compared acoustics of participants’ speech while receiving anodal (active) tDCS stimulation versus sham (control) stimulation. Participants’ speech showed an increasing magnitude of adaptation of F1 over time during anodal stimulation compared to sham. These results indicate that tDCS can affect behavioral response during auditory-motor adaptation, which may have translational implications for sensorimotor training in speech disorders.
2aSC21. Articulation and adaptation to altered auditory feedback. Sarah Bakst (Linguist, Univ. of California Berkeley, 1915 Bonita Ave., Studio A, Berkeley, CA 94704, bakst@berkeley.edu), John F. Houde (Otolaryngol., Univ. of California San Francisco, San Francisco, CA), and Keith Johnson (Linguist, Univ. of California Berkeley, Berkeley, CA)
Speakers listen to themselves while talking, and they use this auditory feedback to modify their speaking plans on-line [Houde and Jordan, Science 279(5354), 1213-1216 (1998)]. This altered auditory feedback experiment uses ultrasound tongue imaging to investigate individual differences in adaptation under three conditions: (1) raising F1 in /ɛ/, (2) raising F2 in /ʊ/, and (3) raising F3 in /r/. Pilot data suggest that speakers may change both F1 and F2 in response to an altered F1, replicating Katseff et al. (2010, JASA 127(3), 1955). Principal components analysis of the ultrasound data reveals that these two acoustic changes are independently controlled. We will also test the role of individual differences in vocal tract morphology. Hard palate curvature affects variability in both articulation and acoustics [Brunner et al. (2009, JASA 125(6), 3936-3949)]: flatter palates have less acoustic stability (and greater flexibility) [Bakst & Johnson (2016, JASA 140(4), 3223)], requiring greater articulatory precision to maintain acoustic consistency. We hypothesize that people with flatter palates will adapt to altered feedback faster and more completely because they (a) may have more detailed knowledge of their articulation-acoustics mapping and (b) have greater flexibility in their acoustic output.
2aSC22. Empirical eigenfunctions as a function of glottal adduction in excised hemilarynx experiments. David Berry (Surgery, UCLA, 31-24 Rehab, Los Angeles, CA 90095-1794, daberry@ucla.edu)
For three excised human male hemilarynxes, vocal fold fleshpoints and empirical eigenfunctions were computed along the vocal fold surface. For two larynges, an increase in adduction resulted in an increase in lateral and vertical oscillation amplitudes and an improved energy transfer from the airflow to the vocal fold tissues. In contrast, the third larynx exhibited a decrease in oscillation amplitudes. By evaluating the empirical eigenfunctions, this decrease in oscillation amplitudes was associated with an unbalanced oscillation pattern with predominantly lateral amplitudes. These results suggest that adduction facilitates the phonatory process by increasing vibrational amplitudes. However, this relationship holds only when a balanced ratio between the vertical and lateral displacements is maintained. Indeed, it appears that a balanced vertical-lateral oscillation pattern may be more beneficial to phonation than strong periodicity with predominantly lateral vibrations.
2aSC23. Non-linear dimensionality reduction for correlated tongue measurement points. Jaekoo Kang (Speech-Language-Hearing Sci. Program, CUNY Graduate Ctr., 3547 34th St., Apt. 1E, Long Island City, NY 11106, jkang@gradcenter.cuny.edu), D. H. Whalen (Speech-Language-Hearing Sci. Program, CUNY Graduate Ctr., New York, NY), and Hosung Nam (Haskins Labs., New Haven, CT)
The tongue surface is a good indicator of the main supralaryngeal articulation of speech, and quantifying it with more measurement points increases accuracy. However, unlike acoustic variables (e.g., formants), articulatory variables (e.g., flesh-point pellets or multiple measurement points on an ultrasound image) are highly correlated with one another. Projecting these correlated, high-dimensional variables onto a lower-dimensional space is therefore necessary. This study employs a nonlinear reduction method, the autoencoder (Hinton & Salakhutdinov, 2006), and compares its performance to that of Principal Component Analysis (PCA), which assumes orthogonality and linearity of dimensions. The two methods were applied to eight English vowels in clear speech from the Wisconsin X-ray Microbeam dataset (Westbury, 1990). Root-mean-squared errors measured after data reconstruction were analyzed by vowel type and pellet location. Preliminary results with one speaker showed slightly better performance for the nonlinear method, especially for some vowels (/i, u/). More speakers and time frames are predicted to lead to larger improvements of the nonlinear analysis over linear PCA. Similar tests of the two methods will be performed on the variabilities of the corresponding acoustic data. It is predicted that nonlinear analyses will make the variabilities in the acoustic and articulatory domains more comparable than previously assumed.
2aSC24. Sensitivity and specificity of auditory feedback driven articulatory learning in virtual speech. Jeffrey J. Berry, Ramie Bagin (Speech Pathol. & Audiol., Marquette Univ., P.O. Box 1881, Milwaukee, WI 53201-1881, jeffrey.berry@marquette.edu), James Schroeder (Elec. and Comput. Eng., Marquette Univ., Milwaukee, WI), and Michael T. Johnson (Elec. and Comput. Eng., Univ. of Kentucky, Lexington, KY)
The current work presents articulatory kinematic and acoustic data characterizing how the form of synthesized auditory feedback in virtual speech affects the sensitivity and specificity of articulatory learning. The term “virtual speech” refers to talker-manipulated synthesized speech controlled in real time using electromagnetic articulography (EMA). In the current work, 36 participants (4 with dysarthria) took part in a learning experiment requiring them to control an articulatory speech synthesizer using movements of the tongue, lips, and jaw. Participants were divided between two experimental conditions: (1) an “unmatched” condition, during which all participants received auditory feedback based on common articulatory synthesis settings (neither formant working space nor fundamental frequency was distinguishable between talkers); and (2) a “matched” condition, during which the articulatory synthesis parameters were adjusted to mimic the formant working space and average fundamental frequency of the learner. Analyses focused on the kinematic and acoustic differences in learning between the two conditions. Results suggest that the sensitivity and specificity of articulatory learning are affected by the extent to which the auditory feedback matches the learner’s familiar acoustic working space. Findings have implications regarding how the acoustic characteristics of perceived speech affect sensorimotor integration and learning in typical talkers and individuals with dysarthria.
2aSC25. Articulatory reuse in “good-enough” speech production
strategies. Matthew Faytak (Linguist, Univ. of California, Berkeley, 2632
San Pablo Ave. Apt. A, Berkeley, CA 94702, mf@berkeley.edu)
Given a novel speech motor task, a speaker may optimize execution of
the task for accuracy by taking feedback into account, or revert to more
habitually used productions which provide a precise output not entirely optimized for accuracy. An articulatory examination comparing L1 and L2 productions was carried out in part to assess the roles of optimization and
reversion to habit for individual speakers. Native speakers of American English learning French (n = 30, 9 males), with a variety of levels of exposure to the L2, were recorded producing the monophthongs of both languages and two English approximants (/r/ and /l/) using ultrasound tongue
imaging, video of lip shape, and audio. Principal component analyses run on
the tongue ultrasound and lip shapes of individual speakers reveal that producing L2 French largely within L1 English articulatory habits is typical; in several cases, the approximants /r/ and /l/ are essentially reused as French vowels (e.g., // and /u/). However, a slight optimization toward native-like
productions can be observed in speakers with longer exposure to the L2.
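Principal component analysis of contour data, as used in this and several of the preceding studies, reduces to a singular value decomposition of the mean-centered frame-by-point matrix. The sketch below uses synthetic stand-in contours (a random rank-2 factor model, not real ultrasound data) to show the projection and reconstruction steps:

```python
import numpy as np

# Hypothetical stand-in for tongue-contour data: each row is one frame's
# flattened surface points; real data would come from ultrasound imaging.
rng = np.random.default_rng(2)
n_frames, n_points = 300, 32
basis = rng.standard_normal((2, n_points))        # two latent articulatory factors
weights = rng.standard_normal((n_frames, 2))
contours = weights @ basis + 0.01 * rng.standard_normal((n_frames, n_points))

# PCA via SVD of the mean-centered data matrix.
centered = contours - contours.mean(axis=0)
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
explained = S**2 / np.sum(S**2)                   # variance explained per component

# Project onto the first two components and reconstruct.
scores = centered @ Vt[:2].T
reconstruction = scores @ Vt[:2] + contours.mean(axis=0)
rmse = np.sqrt(np.mean((contours - reconstruction) ** 2))
```

Because the toy data are generated from two latent factors, two components recover nearly all of the variance; real contour data typically require more.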
2aSC26. Amplitude envelope kinematics of speech: Parameter extraction
and applications. Lei He and Volker Dellwo (Phonet. Lab, Univ. of Zurich,
D€ubendorfstrasse 32, Zurich 8051, Switzerland, lei.he@uzh.ch)
We model the amplitude envelope of a speech signal as a kinematic system and calculate its basic parameters: displacement, velocity, and acceleration. Such a system captures the smoothed amplitude-fluctuation pattern over
time, illustrating how energy is distributed across the signal. Although the
pulmonic air pressure is the primary energy source of speech, the amplitude
modulation pattern is largely determined by articulatory behaviors, especially mandible and lip movements. Therefore, there should be a correspondence between signal envelope kinematics and articulator kinematics.
Previous research has shown that articulation exhibits substantial speaker idiosyncrasy. Such idiosyncrasies should therefore be reflected
in the envelope kinematics as well. From the signal envelope kinematics, it
may be possible to infer individual articulatory behaviors. This is particularly useful for forensic phoneticians who usually have no access to articulatory data, and clinical speech pathologists who usually find it difficult to
make articulatory measurements in clinical consultations.
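Treating the envelope as a kinematic system amounts to rectifying and smoothing the signal and then differentiating twice. A minimal sketch follows; the 50 ms smoothing window and the synthetic amplitude-modulated test signal are illustrative assumptions, not the authors' exact parameters:

```python
import numpy as np

def envelope_kinematics(signal, sample_rate, win_s=0.05):
    """Smoothed amplitude envelope and its first two time derivatives,
    treating the envelope as the 'displacement' of a kinematic system."""
    win = int(win_s * sample_rate)
    kernel = np.ones(win) / win
    displacement = np.convolve(np.abs(signal), kernel, mode="same")  # rectify + smooth
    velocity = np.gradient(displacement, 1.0 / sample_rate)
    acceleration = np.gradient(velocity, 1.0 / sample_rate)
    return displacement, velocity, acceleration

# Synthetic test signal: a 100 Hz carrier with 4 Hz amplitude modulation,
# loosely mimicking syllable-rate energy fluctuation.
fs = 1000
t = np.arange(0, 2.0, 1.0 / fs)
signal = (1.0 + 0.5 * np.sin(2 * np.pi * 4 * t)) * np.sin(2 * np.pi * 100 * t)
disp, vel, acc = envelope_kinematics(signal, fs)
```

For this test signal, the recovered displacement tracks the known 4 Hz modulator closely, which is the property the abstract exploits.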
2aSC27. Are idiosyncrasies in vowel production free or learned? A
study of variants of the French vowel system in biological brothers.
Lucile Rapin (Linguist, Université du Québec à Montréal, Montréal, QC, Canada), Jean-Luc Schwartz (GIPSA-Lab, Grenoble, France), and Lucie Ménard (Linguist, Université du Québec à Montréal, CP 8888, succ. Centre-Ville, Montréal, QC H3C 3P8, Canada, menard.lucie@uqam.ca)
Speech production displays a number of idiosyncrasies that are individual
variations in the way speakers achieve phonetic contrasts in their language. It
was shown previously [Ménard L. et al., Speech Commun. 2008, 50(1), 14-28] that idiosyncrasies in the production of the height contrast in oral vowels
in French are characterized by large variations in the distribution of F1 values,
which are associated with a stability of F1 for a given height degree, independent of the place of articulation (front vs. back) and rounding. The current
study aimed to assess whether these idiosyncrasies are random or induced by
the learning environment. Ten pairs of French Canadian adult male siblings
were recruited, and each individual was asked to produce ten repetitions of
the ten French oral vowels, which were recorded. F1 values were extracted
using linear predictive coding algorithms. There was a trend towards imposed
variations, since the distances between F1 values for brothers for a given
vowel were significantly smaller than the corresponding distances between
speakers who were not brothers. However, the F1 values within pairs of brothers were significantly correlated for only two of the six mid-high or mid-low
vowels. Thus, it appears that a large part of the idiosyncrasies in the pronunciation of vowels were random and differed between brothers.
2aSC28. Random effects and the evaluation of the uniform scaling
hypothesis for vowels. Terrance M. Nearey (Linguist, Univ. of AB, 4-32
Assiniboia Hall, University of AB, Edmonton, AB T6G 0A2, Canada,
tnearey@ualberta.ca), Santiago Barreda (Linguist, Univ. of California,
Davis, CA), Michael Kiefte (School of Commun. Disord., Dalhousie Univ.,
Halifax, NS, Canada), and Peter F. Assmann (School of Behavioral and
Brain Sci., Univ. of Texas at Dallas, Richardson, TX)
The uniform scaling hypothesis suggests that formant frequencies of
vowels of one speaker can accurately predict those of any other speaker of
the same dialect by applying a single multiplicative scale factor. Fant (STL
QPSR, 2-3, 1-19, 1975) questioned this assumption and presented graphical
evidence that scale factors vary between adult men and women for different
vowels in ways that are systematically related to their position in the vowel
space. However, the issue is complicated by the fact that average vowel
datasets may contain multiple sources of variation, including dialect mixture. Statistical evaluation of the uniform scaling hypothesis (or of any systematic deviations from it) needs to account for multiple sources of variation. In
preliminary random-effects analyses of formant data produced by individual
speakers from three geographically distinct dialect regions of American
English, we found that while relatively modest systematic trends in nonuniformity may exist, their magnitude may be smaller than suggested by earlier
work and their assessment may be strongly influenced by other sources of
variation within geographical dialect regions. We will present extensions of
our preliminary analyses that will include speech data from Texas, Alberta,
and Nova Scotia collected in our laboratories.
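Under uniform scaling, one speaker's formant frequencies are modeled as a single multiple of another's, so the least-squares scale factor and the per-formant deviation from it can be computed directly. The formant values below are illustrative round numbers, not data from the study:

```python
import numpy as np

# Hypothetical F1/F2 values (Hz) for five vowels from two speakers;
# speaker B is constructed as exactly a 1.15-times-scaled speaker A.
formants_a = np.array([
    [270, 2290],   # /i/
    [390, 1990],   # /ɪ/
    [530, 1840],   # /ɛ/
    [660, 1720],   # /æ/
    [730, 1090],   # /ɑ/
], dtype=float)
formants_b = 1.15 * formants_a

# Least-squares estimate of a single multiplicative scale factor k
# minimizing sum((F_b - k * F_a)^2) over all vowels and formants.
fa, fb = formants_a.ravel(), formants_b.ravel()
k = np.sum(fa * fb) / np.sum(fa * fa)

# Residual deviation from uniform scaling, per formant, in percent;
# systematic vowel-dependent residuals are what Fant's critique predicts.
residual_pct = 100 * np.abs(fb - k * fa) / fb
```

For real speakers the residuals would not vanish, and the question raised in the abstract is whether their structure survives once random between-speaker and between-dialect variation is modeled.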
2aSC29. Speech acoustics can be modulated by cognitive interference in
a vowel-modified Stroop task. Caroline A. Niziolek (Speech, Lang., and
Hearing Sci., Boston Univ., 635 Commonwealth Ave., SLHS, Rm. 351,
Boston, MA 02135, carrien@bu.edu), Kimberly R. Lin (Neurosci., Boston
Univ., Boston, MA), Sara D. Beach (Speech and Hearing BioSci. and
Technol., Harvard Med. School, Boston, MA), Ian A. Quillen (Neurosci.,
Boston Univ., Boston, MA), and Swathi Kiran (Speech, Lang., and Hearing
Sci., Boston Univ., Boston, MA)
How are speech acoustics influenced by cognitive processes? In the current study, we used a novel variant of the Stroop test to measure whether the
interference between color naming and reading could modulate vowel formant frequencies. Seventeen healthy participants named the color of words in
three categories: (1) congruent words (e.g., “red” written in red), (2) color-incongruent words (e.g., “green” written in red), and (3) vowel-incongruent
words with phonetic properties that partially matched their color (e.g., “rid”
written in red). We hypothesized that the cognitive effort needed to inhibit
reading—saying “red,” not “rid”—could affect vowel acoustics. For example, the correct spoken response (“red”) could acoustically resemble the inhibited word “rid” more closely; alternatively, the vowel could be influenced in the
opposite direction, resembling “rad,” which would serve to accentuate the
acoustic contrast between the spoken and inhibited words. As expected, participants were slower to produce words on color-incongruent trials than on
congruent trials. Interestingly, vowel-incongruent trials were not significantly slower than congruent trials, but preliminary acoustic analyses of the
first formant (F1) showed that some subjects systematically modulated their
productions in the presence of incongruent vowels. This finding lends
insight into how the brain integrates multiple pieces of information to produce speech.
Acoustics ’17 Boston
3582
2aSC30. Closed-syllable vowel laxing: A contrast enhancement strategy. Benjamin Storme (Linguist and Philosophy, MIT, 16 Wilson Ave., Somerville, MA 02145, bstorme@mit.edu)
Closed-syllable vowel laxing describes the cross-linguistic tendency for
high and mid vowels to have higher F1 values and more central F2 values in
closed than in open syllables. This pattern is often analyzed as resulting
from vowel shortening in closed syllables. However, vowel undershoot does
not generally result in an increase of F1 for mid vowels. This paper tests an
alternative hypothesis according to which laxing is a strategy to enhance
coda-consonant place contrasts, with lower and more central vowels providing more informative closure transitions than higher and more peripheral
vowels. Two native French speakers were recorded uttering C1VC2 nonce
words with C1, C2 = {p, t, k} and V = {i, y, u, e, ø, o, ɛ, œ, ɔ, a}. Eighty-five English and
French hearers were presented with the stimuli without word-final bursts
and were asked to identify the place of the word-final consonant. In accordance with the enhancement hypothesis, lowering accompanied by centralizing was found to improve [p]-[k] and [t]-[k] contrasts for front unrounded
vowels and [p]-[k], [t]-[k], and [p]-[t] contrasts for back vowels. These contrasts were not systematically more distinct after front rounded vowels than
after back and unrounded front vowels with similar F1 values (e.g., [y] vs.
[i]/[u]), suggesting that centralizing alone is not sufficient to enhance place
contrasts.
2aSC31. Quantifying sonority contour: A case study from American
English. Suyeon Yun (Ctr. for French and Linguist, Univ. of Toronto
Scarborough, 1265 Military Trail, Humanities Wing, H427, Toronto, ON
M1C 1A4, Canada, suyeon.yun@utoronto.ca)
Previous studies have argued that the most reliable phonetic correlate of
sonority is intensity (e.g., Parker 2002, 2008, Jany et al. 2007). However,
those studies have only considered intensity of a single segment. This paper
investigates the phonetic correlate of sonority contour in consonant clusters.
Ten native speakers of American English (five male, five female) read 33 monosyllabic English words that begin with a bi- or tri-consonantal cluster (e.g.,
play, stray) embedded in a frame sentence (“Father saw ‘____’ again,” used
in Parker 2008). First, (i) the average RMS level and (ii) the sound-level minimum of each consonant in the cluster C1C2 were measured, and the sonority contour was
quantified by subtracting the intensity value of C1 from the intensity value
of C2. Also, (iii) actual intensity slopes in the transition between the two
consonants were measured. Results show that the intensity contours calculated based on (i) and (ii) do not always correspond to the intensity slopes
(iii), while both of them are in general correlated with the sonority contour.
It will also be suggested that it is intensity slopes (iii) that play a crucial role
in consonant cluster perception and in phonological phenomena involving
consonant clusters.
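The intensity computations described in this abstract can be sketched in a few lines (a toy illustration with synthetic noise segments, not the authors' measurement pipeline; the frame length, segment boundaries, and dB reference are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def rms_db(x):
    """Average RMS level of a segment in dB (re full scale)."""
    return 20 * np.log10(np.sqrt(np.mean(x ** 2)))

def sonority_contour(c1, c2):
    """Intensity-based sonority contour: level of C2 minus level of C1.
    Positive values indicate a rising contour, as in /pl/ of 'play'."""
    return rms_db(c2) - rms_db(c1)

def transition_slope(x, fs, frame=0.005):
    """Least-squares slope (dB/s) of frame-by-frame RMS levels, a stand-in
    for the measured intensity slope across the C1-C2 transition."""
    n = int(frame * fs)
    levels = [rms_db(x[i:i + n]) for i in range(0, len(x) - n + 1, n)]
    t = np.arange(len(levels)) * frame
    return np.polyfit(t, levels, 1)[0]

# Toy signals: a quiet obstruent (C1) followed by a louder sonorant (C2)
fs = 16000
c1 = 0.01 * rng.standard_normal(800)
c2 = 0.10 * rng.standard_normal(800)
print(sonority_contour(c1, c2))                            # ~ +20 dB (rising)
print(transition_slope(np.concatenate([c1, c2]), fs) > 0)  # True
```

The point of the two functions mirrors the abstract's measures (i)/(ii) versus (iii): the segment-level difference and the actual frame-by-frame slope can in principle disagree.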
2aSC32. Phoneme distribution and phonological processes of
orthographic and pronounced phrasal words by syllable structure in
the Seoul Corpus. Byunggon Yang (English Education, Pusan National
Univ., 30 Changjundong Keumjunggu, Pusan 609-735, South Korea,
bgyang@pusan.ac.kr)
This study examined the phoneme distribution and phonological processes of orthographic and pronounced phrasal words according to syllable
structure in the Seoul Corpus of spontaneous speech produced by 40 Korean
speakers. To achieve the goal, the phrasal words were extracted from the
transcribed label scripts of the Seoul Corpus using Praat. Then, the onsets,
peaks, codas, and syllable types of the phrasal words were analyzed using
an R script. Results revealed that k0 was most frequently used as an onset in
both orthographic and pronounced phrasal words. Also, aa was the most
favored vowel in the Korean syllable peak with fewer phonological processes in its pronounced form. For the codas, nn accounted for 34.4% of the
total pronounced phrasal words and was the most varied form. From syllable type
classification of the Corpus, CV appeared to be the most frequent type followed by CVC, V, and VC from the orthographic forms. Overall, the onsets
were more prevalent in the pronunciation than the codas. From the results,
the author concludes that an analysis of phoneme distribution and phonological processes in light of syllable structure can contribute greatly to the
understanding of the phonology of spoken Korean.
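As a rough illustration of this kind of count (with invented toy triples, not the actual Seoul Corpus labels or its romanization conventions), the onset/peak/coda frequencies and CV-skeleton classification might be computed as:

```python
from collections import Counter

# Hypothetical (onset, peak, coda) triples in Seoul Corpus-style labels
# (k0, aa, nn, ... are corpus label names); "" marks an absent slot.
syllables = [("k0", "aa", ""), ("k0", "aa", "nn"), ("", "aa", ""),
             ("s0", "ii", "nn"), ("", "uu", "nn")]

onsets = Counter(on for on, _, _ in syllables if on)
peaks = Counter(pk for _, pk, _ in syllables)
codas = Counter(cd for _, _, cd in syllables if cd)

def syl_type(onset, peak, coda):
    """Collapse a triple to its CV skeleton (V, CV, VC, or CVC)."""
    return ("C" if onset else "") + "V" + ("C" if coda else "")

types = Counter(syl_type(*s) for s in syllables)
print(onsets.most_common(1))                               # [('k0', 2)]
print(types["CVC"], types["CV"], types["V"], types["VC"])  # 2 1 1 1
```

Running the same tallies over orthographic and pronounced forms separately, as the study does, then lets the two distributions be compared directly.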
3583
J. Acoust. Soc. Am., Vol. 141, No. 5, Pt. 2, May 2017
2aSC33. A landmark-based approach to transcribing systematic
variation in the implementation of /t, d/ flapping in American English.
Suyeon Yun (Ctr. for French and Linguist, Univ. of Toronto Scarborough,
1265 Military Trail, Humanities Wing, H427, Toronto, ON M1C 1A4,
Canada, suyeon.yun@utoronto.ca), Jeung-Yoon Choi, and Stefanie
Shattuck-Hufnagel (MIT, Cambridge, MA)
A model of human speech processing based on individual cues to distinctive features of phonemes, such as the acoustic landmarks (abrupt spectral changes) that signal manner features, is proposed to provide a more
accurate account of American English flapping of /t/ and /d/ than an allophonic or phone-based model. To test this hypothesis, this study analyzes
the phonetic realization of /t, d/ in the context of flapping using the acoustic
landmark cues of abrupt stop closure, abrupt stop release and glide-like amplitude minimum (Stevens 2002), in subsets of the TIMIT corpus. Results
show that the majority of flapped variants of /t, d/ preserve their stop closure
landmark and there are several cases where they preserve both of their stop
landmarks (stop closure and stop release), while exhibiting the landmark
modification for flapping (e.g., t-glide- + ). Additionally, flapped /t/ is more
likely to maintain stop landmarks than flapped /d/. This is unexpected from
the traditional view of the flap as a categorical phenomenon, and suggests
that acoustic landmarks are useful in capturing systematic phonetic variation
in flapping. It will be important to test whether this landmark-based analysis
yields a better result in automatic speech recognition than (allo)phone-based
approaches.
2aSC34. Detecting glides and their place of articulation using speech-related measurements in a feature-cue-based model. Adrian Y. Cho
(Harvard-MIT Program in Speech and Hearing BioSci. and Technol., 50
Vassar St., Rm. 56, Speech Commun. Group, Cambridge, MA 02139,
aycho@g.harvard.edu), Anita Y. Liu (Speech Commun. Group, Res. Lab. of
Electronics, MIT, Quincy, MA), Jeung-Yoon Choi, and Stefanie Shattuck-Hufnagel (Speech Commun. Group, Res. Lab. of Electronics, MIT,
Cambridge, MA)
An algorithm was developed for detecting glides (/w/, /j/, /r/, /l/, or /h/)
in spoken English and detecting their place of articulation using an analysis
of acoustic landmarks [Stevens 2002]. The system uses Gaussian mixture
models (GMMs) trained on a subset of the TIMIT speech database annotated with acoustic landmarks. To characterize the glide tokens extracted
from the speech samples, the following speech-related measurements were
calculated: energy in four spectral bands (E1-E4), formant frequencies (F1-F4), and the time derivatives of E1-E4 (E1’-E4’); the fundamental frequency (F0) and magnitude difference of harmonics (H1-H2, H1-H4) were
also included. GMMs were then trained on a subset of the tokens to learn
the characteristics of each category for two distinct tasks: distinguishing
glide landmarks from the set of all landmark types (identification task), and
determining the place of articulation given a glide landmark (categorization
task). The classifier used the maximum posterior probability of a speech
sample conditioned on each of the trained GMMs. The performance of the
algorithm was evaluated with median F-scores, and results suggest that the
measurements at acoustic landmarks provide salient cues to glide detection
and categorization.
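A minimal sketch of the maximum-posterior decision rule described above, substituting a single Gaussian per class for the trained GMMs and invented two-dimensional measurements for the landmark features (toy means and data, not the study's):

```python
import numpy as np

def fit_gaussian(X):
    """Fit a single Gaussian (mean, covariance) to the rows of X."""
    return X.mean(axis=0), np.cov(X.T) + 1e-6 * np.eye(X.shape[1])

def log_likelihood(x, mu, cov):
    d = x - mu
    _, logdet = np.linalg.slogdet(cov)
    return -0.5 * (d @ np.linalg.solve(cov, d) + logdet + len(x) * np.log(2 * np.pi))

def classify(x, models, priors):
    """Maximum-posterior decision: argmax of log-likelihood + log prior."""
    scores = [log_likelihood(x, mu, cov) + np.log(p)
              for (mu, cov), p in zip(models, priors)]
    return int(np.argmax(scores))

# Toy 2-D measurement vectors (e.g., a band-energy value and F1, in
# arbitrary units) for a "glide landmark" class and an "other" class
rng = np.random.default_rng(1)
glide = rng.normal([0.2, 450.0], [0.05, 30.0], size=(200, 2))
other = rng.normal([0.8, 700.0], [0.05, 30.0], size=(200, 2))
models = [fit_gaussian(glide), fit_gaussian(other)]

print(classify(np.array([0.25, 460.0]), models, [0.5, 0.5]))  # 0 -> glide
print(classify(np.array([0.75, 690.0]), models, [0.5, 0.5]))  # 1 -> other
```

A real GMM adds mixture weights and multiple components per class, but the decision rule (pick the model with the highest posterior) is the same.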
2aSC35. A new metric for calculating acoustic dispersion in stop
inventories. Ivy Hauser (Linguist, Univ. of Massachusetts Amherst, 650
North Pleasant St., Amherst, MA 01060, ihauser@linguist.umass.edu)
Dispersion Theory (DT; Liljencrants and Lindblom, 1972) claims that
acoustically dispersed vowel inventories should be typologically common.
Literature on DT has focused on vowels, where the predictions are robust,
and less work has been done on consonants. This paper uses vocal tract
model data of stops (as in Schwartz et al. 2012) to extend the predictions of
DT to consonants, revealing problems with the conventional method of calculating dispersion. Dispersion is often quantified using triangle area
between three category means as points in acoustic space. This approach
ignores distributions and reduces entire acoustic categories (which have
large variances and different distribution shapes) to single points. Within-category variance is a factor in DT (Lindblom, 1986), and experimental data
shows that it affects perception (Clayards 2008), yet conventional dispersion
metrics do not take it into account. Here, a new metric based on the Jeffries-Matusita distance is proposed and compared with the more conventional
mean to mean distance approach. The incorporation of covariance better
reflects human perception, which has implications for considering dispersion in any acoustic space. Nevertheless, this does not recover the predictions of DT, suggesting DT does not apply to consonants and vowels in the
same way.
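For two Gaussian categories, a Jeffries-Matusita-style separability measure can be sketched as follows (toy category means and covariances; the paper's exact formulation may differ). The contrast with the mean-to-mean distance is the point: only the JM distance reacts to within-category spread.

```python
import numpy as np

def bhattacharyya(mu1, cov1, mu2, cov2):
    """Bhattacharyya distance between two multivariate Gaussians."""
    cov = (cov1 + cov2) / 2
    d = mu1 - mu2
    _, ld = np.linalg.slogdet(cov)
    _, ld1 = np.linalg.slogdet(cov1)
    _, ld2 = np.linalg.slogdet(cov2)
    return 0.125 * d @ np.linalg.solve(cov, d) + 0.5 * (ld - 0.5 * (ld1 + ld2))

def jeffries_matusita(mu1, cov1, mu2, cov2):
    """JM distance, bounded by sqrt(2); saturates as categories separate."""
    return np.sqrt(2 * (1 - np.exp(-bhattacharyya(mu1, cov1, mu2, cov2))))

# Two stop categories with identical mean separation but different spreads
# (hypothetical spectral means in Hz)
mu_a, mu_b = np.array([1500.0, 500.0]), np.array([1900.0, 500.0])
tight = np.diag([100.0 ** 2, 50.0 ** 2])
wide = np.diag([400.0 ** 2, 50.0 ** 2])

print(np.linalg.norm(mu_a - mu_b))                  # 400.0 either way
print(jeffries_matusita(mu_a, tight, mu_b, tight))  # ~1.32: well separated
print(jeffries_matusita(mu_a, wide, mu_b, wide))    # ~0.48: heavy overlap
```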
2aSC36. Toward an analysis of Spanish glides in the acoustic landmark
framework. Violet Kozloff (Wellesley College, Unit 6018 21 Wellesley
College Rd., Wellesley, MA 02481, vkozloff@wellesley.edu), Stefanie
Shattuck-Hufnagel (MIT, Boston, MA), and Jeung-Yoon Choi (MIT,
Cambridge, MA)
Stevens (2002) proposes that the distinctive feature [glide] is signaled by
an acoustic landmark, i.e., an amplitude/F1 minimum, usually during a phonated region, but this hypothesis has not been tested extensively in languages other than American English. This study analyzes acoustic
realizations of tapped /ɾ/ and trilled /r/ sounds in European Spanish, identifying a range of acoustic realizations for these consonants, including glides,
and proposing criteria for identifying Spanish tap and trill landmarks in the
speech signal. The speech sample of 200 tokens was drawn from the Albayzin corpus, which includes recordings of read Castilian Spanish from male
and female speakers. Tokens were characterized by the number of amplitude
minima (or occlusions) as well as the presence or absence of vocal fold
vibration and noise. Additional factors analyzed include the rate of amplitude modification (tongue tip vibration), and contextual factors, including
word and syllable position, stress, and consonant clusters. These moments of
abrupt change (amplitude inflection points) provide cues to the manner features of the speaker’s intended words, and are hypothesized to play a significant role in perceptual processing and word recognition. These initial results
for /r/ provide the basis for extension of this analysis to other Spanish glides.
2aSC37. Similarity measurement based rest position re-initialization in
the MRI vocal tract image sequences. Xi Liu (ESPCI, 10 Rue Vauquelin,
ESPCI Paris, Paris 75005, France, 1992xi.liu@gmail.com) and Kele Xu
(College of Electron. Sci. and Eng., National Univ. of Defense Technol.,
Paris, France)
Magnetic resonance imaging is often used in speech production research. One important task is to segment the vocal tract in the image
sequence. However, during the deformation of the vocal tract, it is highly
possible that the landmark deviates from its correct position and is in need
of reinitialization. During the natural speech production of the subject, the
vocal tract may return to its initial position during the pause at the end of a
sentence. This could be used to reset the landmark to its correct position. In
order to determine the pause in the image sequence, we used similarity
based measurements to compare the similarity between the current frame
and the first frame, which is the beginning of a sentence and the vocal tract
is at the rest position. These measurements include Structural Similarity
(SSIM), Complex Wavelet Structural Similarity (CW-SSIM), Visual Information Fidelity in Pixel (VIFP), Peak Signal-to-Noise Ratio (PSNR), etc.
We found that CW-SSIM outperformed the other methods. The calculated similarity measurements varied periodically during speech. CW-SSIM returned to a maximum of around 0.9, indicating that the vocal tract had returned to its initial position, whereas the other similarity measurements returned to maxima that deviated greatly from 1, confirming that CW-SSIM was the best candidate.
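A toy sketch of the rest-position detection idea, using a simplified whole-frame SSIM in place of the windowed SSIM/CW-SSIM implementations actually compared in the paper (random arrays stand in for MRI frames; the 0.9 threshold follows the abstract):

```python
import numpy as np

def ssim_global(a, b, L=1.0):
    """Single-window SSIM over whole frames (a simplified stand-in for the
    windowed SSIM/CW-SSIM indices used in the paper)."""
    c1, c2 = (0.01 * L) ** 2, (0.03 * L) ** 2
    mu_a, mu_b = a.mean(), b.mean()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    lum = (2 * mu_a * mu_b + c1) / (mu_a ** 2 + mu_b ** 2 + c1)
    struct = (2 * cov + c2) / (a.var() + b.var() + c2)
    return lum * struct

def rest_frames(frames, threshold=0.9):
    """Indices of frames similar enough to the first (rest-position) frame
    to serve as landmark re-initialization points."""
    return [i for i, f in enumerate(frames)
            if ssim_global(frames[0], f) >= threshold]

rng = np.random.default_rng(0)
rest = rng.random((32, 32))             # frame 0: vocal tract at rest
frames = [rest,
          rng.random((32, 32)),         # mid-sentence: deformed tract
          rest + 0.01]                  # sentence-final pause: back near rest
print(rest_frames(frames))  # [0, 2]
```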
2aSC38. Uncontrolled manifold method to speech production. Hosung
Nam (Dept. of English Lang. and Lit., Korea Univ., 145 Anam-ro,
Seongbuk-gu, Seoul 02841, South Korea, hnam@korea.ac.kr), Jaekoo Kang
(Speech-Language-Hearing Sci. program, CUNY Graduate Ctr., Long
Island City, NY), and Elliot Saltzman (Dept. of Physical Therapy and
Athletic Training, Boston Univ., Boston, MA)
Speech production is a highly skilled sensorimotor activity defined by
articulatory or acoustic coordinates. To compare the variabilities of those
two conceptualizations, issues of dimension reduction, normalization, incompleteness of information, etc., need to be taken into account. The uncontrolled manifold (UCM) method analyzes a high-dimensional movement dataset with respect to the outcomes that count as successful task performance. It divides the
variability in the data into two parts: “bad” variability associated with
motion within the controlled manifold (CM) that would lead to an error in
the task and “good” variability within the uncontrolled manifold (UCM)
that does not harm the task. A smaller ratio of CM to UCM variability indicates both tighter control
(less variability in the CM) and greater flexibility (more variability in the
UCM). The UCM method is applied to the Wisconsin X-ray microbeam
data. We first constructed a neural-net-based forward mapping from articulators to acoustics. The inter-layer weight matrices and the outputs of each
layer in the trained forward model are then used to compute the elements of
this forward model’s Jacobian matrices; the Jacobians are then used to compute rCM/rUCM ratios. We further compare these ratios across data
obtained in various linguistic conditions.
2aSC39. Anatomically oriented Principal Components Analysis of
three-dimensional tongue surfaces. Steven M. Lulich (Speech and
Hearing Sci., Indiana Univ., Bloomington, IN), Max Nelson, Kenneth de
Jong, and Kelly Berkson (Linguist, Indiana Univ., 1021 East 3rd St., Memorial Hall 322 E, Bloomington, IN 47405, maxnelso@umail.iu.edu)
A procedure for carrying out Principal Components Analyses of three-dimensional tongue surfaces segmented from 3D/4D volumetric ultrasound
images is presented. The segmented surface is transformed to a spherical
coordinate system with the origin defined at the anterior visible extreme of
the tendon of the genioglossus (near the mandibular symphysis). Principal
Components Analyses of tongue surface shapes for monosyllabic real words
carried out in this spherical coordinate system are robust to variations in the
location of the origin, and show similarities across speakers, based on a corpus of 10 speakers.
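A minimal sketch of PCA over radius functions in a spherical coordinate system (synthetic one-parameter "surfaces" on a 1-D angular grid; the origin, grid, and data are invented for illustration):

```python
import numpy as np

def to_spherical(xyz, origin):
    """Cartesian surface points -> (r, theta, phi) about an anatomical
    origin (the genioglossus-tendon landmark in the paper)."""
    v = xyz - origin
    r = np.linalg.norm(v, axis=1)
    theta = np.arccos(v[:, 2] / r)       # polar angle
    phi = np.arctan2(v[:, 1], v[:, 0])   # azimuth
    return r, theta, phi

def surface_pca(R):
    """PCA (via SVD) of radius functions sampled on a shared angular grid.
    Rows of R are tongue surfaces; returns principal shapes and scores."""
    Rc = R - R.mean(axis=0)
    U, s, Vt = np.linalg.svd(Rc, full_matrices=False)
    return Vt, U * s

# Toy data: 20 "surfaces" differing only in the height of one bump
rng = np.random.default_rng(0)
grid = np.linspace(0, np.pi, 50)
heights = rng.uniform(0.5, 1.5, 20)
R = 10 + heights[:, None] * np.sin(grid)[None, :]

shapes, scores = surface_pca(R)
var = np.var(scores, axis=0)
print(var[0] / var.sum() > 0.99)  # True: a single component suffices
```

Because the radius is expressed relative to a fixed anatomical origin, small shifts of that origin perturb all surfaces coherently, which is one intuition for the robustness the abstract reports.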
2aSC40. High-resolution speech directivity measurements. Claire
Pincock, Timothy W. Leishman, and Jenny Whiting (Brigham Young
Univ., 159 E 300 S #3, Provo, UT 84606, mckellar.claire@gmail.com)
A measurement system has been developed at Brigham Young University to assess high-resolution directivity data produced by human subjects.
The system incorporates 2522 unique sampling positions over a sphere and
has been used to acquire directivity data of several female and male talkers
repeating phonetically balanced passages. Both polar and balloon plots of
these data have been generated, along with similar plots corresponding to
gender-specific averages and spherical-harmonic expansions. The results
will be used for speech radiation studies and architectural acoustics simulations. This presentation reports the results and compares the directivity averages for both genders.
MONDAY MORNING, 26 JUNE 2017
ROOM 302, 9:15 A.M. TO 10:40 A.M.
Session 2aSPa
Signal Processing in Acoustics: Topological Signal Processing
Jason E. Summers, Chair
Applied Research in Acoustics LLC, 1222 4th Street SW, Washington, DC 20024-2302
Chair’s Introduction—9:15
Invited Papers
9:20
2aSPa1. Topological features in signal processing using frame theory and persistent homology. Mijail Guillemard (Mathematics,
TU Hamburg, Hamburg Univ. of Technol. Inst. of Mathematics (E-10), Hamburg, Hamburg 21073, Germany, mguillemard@gmail.com)
We present some interactions between frame theory and persistent homology as a new way to construct classification mechanisms in
signal processing. On the one hand, frame theory generalizes basic ideas from time-frequency analysis including aspects of short term
Fourier transformations and wavelet theory. On the other hand, persistent homology provides new algorithms applying concepts from
algebraic topology to data analysis. The question of finding adequate sparse representations of data can be seen from several points of
view, including dimensionality reduction and modern developments in neural networks. Persistent homology, as a topic in topological
data analysis, presents alternative mechanisms for finding adequate sparse representations of data. We explain some interactions between
these tools with applications to the analysis of acoustic signals.
9:40
2aSPa2. The performance of topological classifiers on sonar data. Michael Robinson (Mathematics and Statistics, American Univ.,
4400 Massachusetts Ave. NW, 226 Gray Hall, Washington, DC 20016, michaelr@american.edu)
For various reasons, synthetic aperture sonar (SAS) target classification in various clutter contexts is usually done using a data-driven, machine learning approach. Unfortunately, the resulting feature set can be rather inscrutable—what features is it really using?
Topological methods are particularly well-aligned with the goal of gaining insight into physical processes, since they highlight symmetries which are driven by these physical processes. For instance, collating multiple image looks of a round object uncovers rotational
symmetries in an appropriate feature space derived from the images. The use of topological invariants allows one to infer that the object
is round by reasoning about its feature space. The fact that sonar target signatures are (mostly) translation invariant in range can also be
deduced from topological invariants. I will describe a principled, foundational analysis of target echo structure through the lens of topological signal processing, and then analyze the performance of this approach as compared to more traditional classification methods.
10:00
2aSPa3. Sliding windows and persistence. Jose Perea (Computational Mathematics, Sci. and Eng., Michigan State Univ., 1501 Eng.
Bldg., East Lansing, MI 48824, joperea@msu.edu) and Chris Traile (Elec. and Comput. Eng., Duke Univ., Durham, NC)
The use of geometric and topological ideas as a means to tackle problems in signal analysis has seen a sharp increase in the last few
years. The goal of this talk is to show how combining ideas from dynamical systems (e.g., time-delay embeddings) with tools from topological data analysis (e.g., persistent homology) allows one to extract highly non-trivial features from vector-valued time series data. As an example,
we describe a paradigm for quantifying (quasi)periodicity in video data and provide applications including the study of gene regulatory
networks in biology, biphonation in mammals, and speech pathologies in humans.
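The time-delay (sliding window) embedding at the heart of this approach can be sketched as follows: a periodic signal maps to a closed loop in the embedding space, and that loop is exactly the kind of feature persistent homology (a strong H1 class) detects. The signal and delay parameters below are invented for illustration.

```python
import numpy as np

def sliding_window(x, dim, tau):
    """Time-delay (sliding window) embedding: map x(t) to the point
    (x(t), x(t+tau), ..., x(t+(dim-1)*tau)) in R^dim."""
    n = len(x) - (dim - 1) * tau
    return np.stack([x[i: i + n] for i in range(0, dim * tau, tau)], axis=1)

x = np.cos(2 * np.pi * np.arange(600) / 100)    # period = 100 samples
cloud = sliding_window(x, dim=2, tau=25)[:500]  # tau = quarter period

# The point cloud lies on a circle: distances from the centroid are constant
r = np.linalg.norm(cloud - cloud.mean(axis=0), axis=1)
print(r.std() / r.mean() < 1e-6)  # True
```

For quasiperiodic data (the video application mentioned above) the embedding traces a torus rather than a circle, which higher-dimensional persistence classes can distinguish.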
10:20–10:40 Panel Discussion
MONDAY MORNING, 26 JUNE 2017
ROOM 302, 11:00 A.M. TO 12:20 P.M.
Session 2aSPb
Signal Processing in Acoustics, Engineering Acoustics, and Architectural Acoustics: Signal Processing for
Directional Sensors I
Kainam T. Wong, Chair
Dept. of Electronic & Information Engineering, Hong Kong Polytechnic University, DE 605, Hung Hom KLN, Hong Kong
Invited Paper
11:00
2aSPb1. On the general connection between wave impedance and sound intensity. Domenico Stanzial (Res. Section of Ferrara,
CNR - Inst. of Acoust. and Sensors “Corbino,” v. Saragat, 1, c/o Phys. and Earth Sci. Dept., Ferrara 44122, Italy, domenico.stanzial@
cnr.it) and Carlos E. Graffigna (Int. Doctorate Program, Universidad Nacional de Chilecito - Univ. of Ferrara - CNR-IDASC, Ferrara,
Italy)
This paper presents the generalization to non-monochromatic non-monodimensional fields of the equation linking the complex sound
intensity to the wave impedance/admittance already introduced with a different form in [D. Stanzial and C. E. Graffigna “On the connection between wave impedance, sound intensity and kinetic energy in monochromatic fields,” accepted for publication on POMA, Dec.
23, 2016]. Computer simulations have now been carried out, both for wave impedance and admittance, in quasi-stationary bi-dimensional wave fields with different reflection coefficients and spectral compositions. It turns out that the equation is validated at all spatial points of the sound field for each spectral component. This primarily allows the active intensity vector field to be calculated as the vector sum of all the vector fields obtained as spectral components of the active intensity, and therefore the reactive intensity magnitude to be determined by simply subtracting the modulus of the active intensity so obtained from the scalar field of the apparent intensity. This result will allow the development of a precision device for measuring 3D sound intensity and its active and reactive spectral components.
Contributed Paper
11:20
2aSPb2. Bias error comparison for plane-wave acoustic intensity using
cross-spectral and phase-and-amplitude-gradient-estimator methods.
Daxton Hawks (Brigham Young Univ. - Idaho, Rexburg, ID), Tracianne B.
Neilsen, Kent L. Gee, and Scott D. Sommerfeldt (Brigham Young Univ.,
N311 ESC, Provo, UT 84602, tbn@byu.edu)
Acoustic vector intensity relies on the product of the acoustic pressure
and particle velocity. The particle velocity is typically approximated via
Euler’s equation using the gradient of the complex pressure across closely
spaced microphones, which is traditionally found using the cross-spectral
density. In contrast, the phase-and-amplitude-gradient-estimator (PAGE)
method [Thomas et al., J. Acoust. Soc. Am., 137, 3366-3376 (2015)] relies
on gradients of pressure magnitude and phase. For a broadband source this
allows for the phase to be unwrapped, which extends the usable bandwidth
of the intensity calculation well above the spatial Nyquist frequency. The
benefits of the PAGE method are evident in plane wave tube measurements
in which microphones spaced 90 cm apart yield accurate intensity values at
frequencies at least ten times the spatial Nyquist frequency. This represents
an increase in bandwidth of 30 times over the traditional method. The bias
errors for the traditional method for calculating acoustic intensity are
reviewed and compared with the bias errors for the PAGE method for the
case of both two and three microphone intensity probes in a plane-wave
tube environment. [Work supported by the National Science Foundation.]
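The contrast between the wrapped (cross-spectral) and unwrapped (PAGE-style) phase gradients can be illustrated for an ideal plane wave (an idealized, noiseless sketch with assumed frequencies and spacing, not the cited implementation):

```python
import numpy as np

c, d = 343.0, 0.9              # sound speed (m/s); mic spacing (m): 90 cm
f = np.arange(20, 2000, 20.0)  # analysis frequencies (Hz)
k = 2 * np.pi * f / c          # true wavenumbers
# Complex pressures of a broadband plane wave at the two microphones
p1 = np.exp(-1j * k * 0.0)
p2 = np.exp(-1j * k * d)

# Traditional (cross-spectral) estimate: phase wrapped to (-pi, pi]
phase_wrapped = np.angle(np.conj(p1) * p2)
k_csd = -phase_wrapped / d

# PAGE-style estimate: unwrap the phase across frequency first
phase_unwrapped = np.unwrap(phase_wrapped)
k_page = -phase_unwrapped / d

f_nyq = c / (2 * d)            # spatial Nyquist: ~191 Hz for 90 cm spacing
lo, hi = f < f_nyq, f > f_nyq
print(np.allclose(k_csd[lo], k[lo]))  # True: both agree below f_nyq
print(np.allclose(k_page, k))         # True: unwrapping stays accurate above it
print(np.allclose(k_csd[hi], k[hi]))  # False: wrapping biases the estimate
```

The recovered wavenumber (phase gradient) is what enters the Euler-equation estimate of particle velocity, so a biased gradient biases the intensity.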
Invited Papers
11:40
2aSPb3. Precision device for measuring the three dimensional spectral intensity. Domenico Stanzial (Res. Section of Ferrara, CNR
- Inst. of Acoust. and Sensors “Corbino,” Ferrara, Italy) and Carlos E. Graffigna (Int. Doctorate Program, Universidad Nacional de
Chilecito - Univ. of Ferrara - CNR-IDASC, v. Saragat 1, c/o Phys. and Earth Sci. Dept., Ferrara I-44122, Italy, carlos.graffigna@idasc.
cnr.it)
On the basis of processing algorithms developed by the same authors in a companion paper [“On the general connection between
wave impedance and sound intensity”, 173rd Meeting of the Acoustical Society of America and the 8th Forum Acusticum, Boston MA,
25-29 June 2017] where the equation between the complex sound intensity and the specific acoustic impedance/admittance has been
stated and numerically validated in general form, the block diagram of a possible device for precision measurement of 3D spectral intensity is proposed here. In order to test its functionality, some measurements have been carried out inside a tube of 28 x 28 cm square
section and 4 m long, which was terminated with panels of different materials. Measurements have been carried out at different positions
along the tube’s axis by means of a 3D pressure-velocity probe, below and above the cutoff frequency of the tube, so as to include also the
effects of transversal modes. Preliminary results of such measurements are reported and discussed briefly here.
12:00
2aSPb4. Source localization with three-dimensional sound intensity probe with high precision. Jeong-Guon Ih, In-Jee Jung, and
Jung-Han Woo (Mech. Eng., KAIST, 373-1 Guseong-Dong, Yuseong-Gu, Daejeon 305-701, South Korea, J.G.Ih@kaist.ac.kr)
When an array module measuring three-dimensional sound intensity is employed for detecting a sound source, its compact spatial footprint and small number of sensors are advantageous compared with other source localization methods. However, because of severe bias errors, it has not been popular. We analyze the major sources of bias estimation error and seek a compensation method. Spectral bias error
is due to the reflected signal from the environment, which is proportional to the difference of distance between direct and reflective
paths. Spatial bias error is due to the inhomogeneous directivity of the intensity module stemming from discrete arrangement of sensors
on the hypothetical sphere surrounding the sensors. A simulation that changes the source direction in 1-deg. steps of spherical angle can generate
an error map for all incidence angles. A measurement is conducted using a tetrahedral intensity module with 30 mm spacing for the compensation of errors. Low pass filtering of the cross spectral density function is used for the spectral bias error, and spherical error map is
used for the directional bias error. By compensating for these bias errors, it is shown that the localization errors for all bearing angles are less than 1 deg. in an anechoic chamber when kd < 1.1.
MONDAY MORNING, 26 JUNE 2017
ROOM 306, 9:15 A.M. TO 12:20 P.M.
Session 2aUWa
Underwater Acoustics: Sound Propagation and Scattering in Three-Dimensional Environments I
Ying-Tsong Lin, Cochair
Applied Ocean Physics and Engineering, Woods Hole Oceanographic Institution, Bigelow 213, MS#11, WHOI,
Woods Hole, MA 02543
Frederic Sturm, Cochair
Acoustics, LMFA, Centre Acoustique, Ecole Centrale de Lyon, 36, avenue Guy de Collongue, Ecully 69134, France
Chair’s Introduction—9:15
Invited Papers
9:20
2aUWa1. Gaussian beam tracing for calculating the broadband field in three-dimensional environments. Michael B. Porter and
Laurel Henderson (HLS Res., 3366 N. Torrey Pines Ct., Ste. 310, La Jolla, CA 92037, mikeporter@hlsresearch.com)
Ray tracing methods have a long history in underwater acoustics going back to a paper by H. Lichte in 1919 which also predicted
the SOFAR channel. They remain extremely valuable today, partly because they present an intuitive view of sound propagation that
readily allows for many extensions. For instance, targets and boundaries with complicated scattering can be included in a natural way;
similarly, motion of boundaries, sources, receivers, and the ocean itself are easily treated. This talk will focus on the 3D extension with
particular emphasis on broadband waveforms such as chirps or waveforms due to acoustic modems.
9:40
2aUWa2. 3-D ocean acoustics with normal modes. David P. Knobles (Knobles Sci. and Anal., PO Box 27200, Austin, TX 78755,
dpknobles@yahoo.com)
A new derivation is presented for coupled mode equations applicable to 3-D ocean environments possessing strong horizontal variability. The 3-D acoustic field is represented by a bi-orthonormal expansion with both the depth- and azimuthal-dependent eigenfunctions. Two classes of coupling integrals emerge. One class is associated with azimuthal modes and their azimuthal derivatives and the
other class is associated with both the depth-dependent and the azimuth-dependent modes and their range and azimuth derivatives. The
coupled integral equations for the scattering amplitudes are solved using a method previously developed for a 2-D integral equation
coupled mode approach. The method is applied to several 3-D environments including single and multiple seamounts. The effect of
neglecting various types of coupling integrals is examined. [Work supported by ONR Code 322 OA.]
10:00
2aUWa3. Massively parallel structural acoustics for forward and inverse problems. Timothy F. Walsh (Computational Solid Mech.
and Structural Dynam., Sandia National Labs., PO Box 5800, MS 0380, Albuquerque, NM 87185, tfwalsh@sandia.gov) and Wilkins
Aquino (Civil and Environ. Eng., Duke Univ., Durham, NC)
Three-dimensional structural acoustic simulations on highly complex structural models that are immersed in infinite and semi-infinite acoustic domains typically lead to large numbers of degrees of freedom that cannot be solved with commercial software packages.
Typical applications include underwater acoustic simulation of submerged structures, and reverberation testing of aerospace structures.
Many of these applications of interest involve large acoustic domains and complex 3D structures, thus making a finite element solution
an attractive option. In addition, unknown parameters in the models can be estimated through the solution of inverse problems. In this talk,
we will discuss recent research efforts in Sierra-SD in the area of structural acoustics and will also present a partial differential equation
(PDE) constrained optimization approach for solving inverse problems in structural acoustics that uses Sierra-SD for solving the forward
and adjoint problems. Inverse problems are commonly encountered in structural acoustics, where accelerometer and/or microphone pressures are measured experimentally, and it is desired to characterize the acoustic sources, material parameters, and/or boundary conditions that produced the measurements. With a PDE-constrained optimization approach, the scalability of Sierra-SD can be leveraged for
solving inverse problems. We will present results from the application of Sierra-SD to several large-scale structural acoustic application examples of interest.
10:20
2aUWa4. Three-dimensional modeling in global acoustic propagation. Kevin D. Heaney (OASIS Inc., 11006 Clara Barton Dr.,
Fairfax Station, VA 22039, oceansound04@yahoo.com)
The ocean is nearly transparent for acoustic propagation at low frequencies (<100 Hz), leading to the detection of signals (seismic
events, volcanoes, and man-made signals) at distances as large as the ocean basin. Historically, basin acoustic modeling has neglected
out-of-plane effects and has been performed with the model computed in the range/depth plane for multiple radials following geodesics
(Nx2D). Both oceanographic and bathymetric features can lead to out-of-plane effects. In this paper, a summary of computational
approaches to this problem will be presented, including vertical-mode, horizontal ray hybrid approaches, and full-3D Parabolic Equation
modeling. Out-of-plane effects include refraction and diffraction, which have different effects and call for different modeling approaches. Experiments where 3D propagation effects were significant will be presented within this context, including Perth-Bermuda (1960),
the Heard Island Feasibility Test (1993), and a recent seismic tomography test off the coast of Japan (2015). Three physics mechanisms
will be addressed: horizontal deflection due to mesoscale eddies and fronts, reflection from islands (refraction), and diffraction behind
bathymetric edges.
10:40
2aUWa5. Broadband acoustic wave propagation in three-dimensional shallow waveguide with variable sound speed profile and
boundary roughness. Mohsen Badiey (College of Earth, Ocean, and Environment, Univ. of Delaware, 261 S. College Ave., Robinson
Hall, Newark, DE 19716, badiey@udel.edu)
Propagation of broadband acoustic signals in shallow water environment is a complex four-dimensional problem that needs to be
addressed with input from the spatial and temporal physical parameters of waveguides with rough boundaries. To construct a numerical
model of this four-dimensional problem, methods such as the Parabolic Equation (PE), Horizontal Rays and Vertical Modes, 3D Ray
Method, and Nx2D PE have been utilized in recent years. However, data/model comparison remains a challenge, and accurate comparison between measured and modeled acoustic fields in the waveguide is badly needed. Lack of environmental input for modeling is one reason, but with proper sampling of the environment it may be overcome by novel experimental design based on the knowledge
of waveguide physics. Acoustic frequency can also be utilized as one of the key parameters to simplify the problem and adopt strategies
in conducting calibrated experiments. In this paper, we provide a broad view of recent advancements in three-dimensional acoustic
wave propagation in shallow water waveguides in the presence of variable volumetric and boundary conditions. The effect of broadband
acoustic wave center frequency and bandwidth with respect to the scale of environmental variability is also discussed. [Work supported
by ONR 322 OA.]
Contributed Papers
11:00
2aUWa6. Propagation over a rigid sea ridge using a three-dimensional
finite element model. Fiona Cheung and Marcia J. Isakson (Appl. Res.
Labs., The Univ. of Texas at Austin, PO Box 8029, Austin, TX 78713,
fiona.cheung@utexas.edu)
A three-dimensional finite element model is developed to describe propagation over a rigid underwater sea ridge. Finite element models are attractive benchmark solutions since they include all orders of scattering as well
as refraction. In this case, the model is calculated using an out-of-plane
wavenumber decomposition technique which utilizes a series of two-dimensional models to calculate a fully three-dimensional model in longitudinally
invariant environments. [Isakson et al., J. Acoust. Soc. Am. Express Letters,
136:EL206-211, 2014.] The time-harmonic model is then extended to the
time domain via Fourier synthesis in order to more fully understand the dynamics of modal refraction and propagation over the ridge. The model is
compared with coupled mode solutions from the University of Rhode Island.
[Work supported by ONR, Ocean Acoustics.]
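The wavenumber-decomposition idea can be checked against the one case with a closed form, a point source in free space: the 3-D field is synthesized as an integral of 2-D Helmholtz solutions over the out-of-plane wavenumber. The sketch below (which requires scipy and is not the authors' code) verifies the synthesis against the exact point-source field:

```python
import numpy as np
from scipy.special import hankel1

# Out-of-plane wavenumber decomposition in a longitudinally invariant
# environment: p3D(x,y,z) = (1/2pi) Integral P2D(x,z; ky) exp(i ky y) dky.
# In free space, P2D = (i/4) H0^(1)(sqrt(k^2 - ky^2) rho), and the
# synthesis must reproduce exp(ikR)/(4 pi R).
k = 2 * np.pi                 # wavenumber (wavelength = 1)
rho, y = 3.0, 2.0             # in-plane range and out-of-plane offset
R = np.hypot(rho, y)

ky = np.linspace(-3 * k, 3 * k, 20001)
dky = ky[1] - ky[0]
krho = np.sqrt(k**2 - ky**2 + 0j)         # Im >= 0 branch (evanescent decay)
p2d = 0.25j * hankel1(0, krho * rho)      # family of 2-D solutions
p3d = np.sum(p2d * np.exp(1j * ky * y)) * dky / (2 * np.pi)

exact = np.exp(1j * k * R) / (4 * np.pi * R)
```

In the finite-element setting each `p2d` sample is replaced by a 2-D numerical solution at the reduced wavenumber, which is what makes a fully 3-D result affordable.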
11:20
2aUWa7. Time-domain reverberation modeling for rough bottom
consisting of polygon facets. Youngmin Choo (Defense System Eng.,
Sejong Univ., Seoul National University, 1, Gwanak-ro, Gwanak-gu, Seoul,
Seoul 151-744, South Korea, sks655@snu.ac.kr) and Woojae Seong
(Seoul National Univ., Seoul, South Korea)
In reverberation modeling for rough bottom, a surface integration is conducted along elemental scattering areas with repetitive uses of propagation
and scattering strength models and a summation of scattered signals from
the scattering areas provides a synthetic reverberation signal in time domain.
In particular, when roughness is on a flat or sloping bottom, numerical integration schemes including quadrature by parts can be used with elemental
scattering areas, which are small enough to obtain a converged reverberation
signal. However, this standard approach is unavailable for a bottom with irregular geometry, since such a bottom cannot be divided into small elemental scattering areas. To acquire a stable reverberation signal for an irregular bottom, we derive an analytic integration of the scattered signal over each polygon facet using Stokes’ theorem, while approximating the bottom with a combination of polygon facets. In this approach, the delay difference within an elemental scattering area is accounted for, whereas a single representative delay is used for each elemental scattering area in the standard approach. Results from the two reverberation models are compared, and the scheme using analytic integration yields a converged reverberation signal even with large elemental scattering areas.
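The standard elemental-area approach that the abstract contrasts against can be sketched as follows (monostatic geometry over a flat bottom, Lambert scattering law, all values illustrative and not from the paper):

```python
import numpy as np

# Standard elemental-area reverberation sum: the bottom is divided into
# annular elements; each contributes intensity at its two-way travel time,
# with two-way spherical spreading and a Lambert scattering law.
c, D = 1500.0, 100.0            # sound speed (m/s), source height above bottom (m)
mu = 10 ** (-27 / 10)           # Lambert coefficient (-27 dB, a common choice)
dt, tmax = 0.005, 2.0
t = np.arange(0.0, tmax, dt)
reverb = np.zeros_like(t)

dr = 1.0
for r in np.arange(dr / 2, 1200.0, dr):      # annulus mid-radii
    slant2 = r**2 + D**2
    delay = 2.0 * np.sqrt(slant2) / c        # one representative two-way delay
    sin_g = D / np.sqrt(slant2)              # sine of grazing angle
    area = 2.0 * np.pi * r * dr
    contrib = mu * sin_g**2 * area / slant2**2   # scattering * two-way spreading
    i = int(delay / dt)
    if i < len(t):
        reverb[i] += contrib

rl = 10 * np.log10(np.maximum(reverb, 1e-30))    # reverberation level, dB
```

Convergence of this sum requires the elements to be small; the paper's analytic facet integration removes that requirement by carrying the delay variation inside each facet.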
11:40
2aUWa8. Computation of sound field, reflected from ideal non-flat boundary, and reflected and refracted from non-flat boundary of two different media in parabolic approximation. Nikolai Maltsev (Lookomorie, 1467 Leaftree Cir, San Jose, CA 95131, nick_e_maltsev@yahoo.com)

The four Euler equations for a harmonic sound field, ρc²(∇·v) = iωp and ∇p = iωρv, can be reduced to two equations for the sound pressure p and the radial velocity v_r in the cylindrical coordinate system (r, φ, z): ∂p/∂r = iωρv_r and ∂v_r/∂r = (i/(ωρ))(k² + (1/r²)∂²/∂φ² + ∂²/∂z²)p − (1/r)v_r. Solutions for this system of equations in a set of 3-D local normal modes are constructed. A stable numerical method was created for computation of the sound field reflected from an ideal non-flat boundary, and reflected and refracted from a non-flat boundary between two different media, in the parabolic approximation. Solutions of different model problems are presented.

12:00
2aUWa9. Three dimensional acoustic parabolic equation based on Chisholm approximation with the splitting denominator. Keunhwa Lee (Defense Systems Eng., Sejong Univ., Neungdong-ro 209, Seoul 05006, South Korea, nasalkh2@sejong.ac.kr) and Woojae Seong (Ocean Eng., Seoul National Univ., Seoul, South Korea)

We propose a generalized form of the three-dimensional acoustic parabolic equation (3DPE) based on the Chisholm approximation [Chisholm, Math. Comp. 27, 841-848 (1973)] of a rational approximant for two variables, and on the splitting denominator assumption. The proposed form has wide-angle accuracy up to an inclination angle of ±62° from the range axis of the 3DPE at a bearing angle of 45°. Moreover, the splitting denominator makes the split-step algorithm with a finite-difference depth solver more efficient, in that the 3DPE can easily be transformed into a tridiagonal matrix system. One drawback of this method is an increase of the phase error in the evanescent region, but in practice this can be remedied by several techniques. In this study, a comparative study of other PE approximations will be conducted based on phase error analysis. Also, numerical examples with three-dimensional problems will be given for performance and benchmark tests.

MONDAY MORNING, 26 JUNE 2017
ROOM 309, 9:20 A.M. TO 11:40 A.M.
Session 2aUWb
Underwater Acoustics, Acoustical Oceanography, Signal Processing in Acoustics, Structural Acoustics and
Vibration, Physical Acoustics, and Biomedical Acoustics: Passive Sensing, Monitoring, and Imaging in
Wave Physics III
Karim G. Sabra, Cochair
Mechanical Engineering, Georgia Institute of Technology, 771 Ferst Drive, NW, Atlanta, GA 30332-0405
Philippe Roux, Cochair
ISTerre, University of Grenoble, CNRS, 1381 rue de la Piscine, Grenoble 38041, France
Invited Papers
9:20
2aUWb1. Dynamic imaging of a gravity wave caused by laser-induced breakdown in a fluid waveguide using acoustic waves. Tobias van Baarsel, Philippe Roux (Université Grenoble Alpes, ISTerre, Grenoble 38000, France, tobias.van-baarsel@univ-grenoble-alpes.fr), Barbara Nicolas (Creatis, Villeurbanne Cedex, France), Jérôme Mars (Université Grenoble Alpes, Grenoble, France), Julien Bonnel, and Michel Arrigoni (ENSTA, Brest Cedex 9, France)

The dynamic imaging of a gravity wave propagating at the air-water interface is a complex task that requires the sampling of every point at this interface. Using two source-receiver vertical arrays facing each other in a shallow water environment, we manage to isolate and identify each multi-reverberated eigenray that interacts with the air-water interface. The travel-time and amplitude variations of each eigenray are then measured during the crossing of the gravity wave. In this work, we present an ultrasonic experiment in a 1 m-long, 5 cm-deep waveguide. At frequencies in the MHz range, the waveguide transfer matrix is recorded 100 times per second between two source-receiver arrays while a low-amplitude gravity wave is generated by a laser-induced breakdown above the water surface. The breakdown causes a blast wave that interacts with the air-water interface and penetrates into the water, creating ripples at the surface. This event is easily controlled and therefore repeatable. The inversion performed from a few thousand eigenrays leads to accurate imaging of the dynamics of the air-water interface, using either the travel-time or the amplitude variation.
9:40
2aUWb2. Statistical inference for source localization using multi-frequency machine learning. Haiqiang Niu (Marine Physical
Laboratory, Scripps Inst. of Oceanogr., San Diego, CA), Peter Gerstoft, and Emma Reeves (Marine Physical Laboratory, Scripps Inst. of
Oceanogr., Univ. of California, La Jolla, CA, pgerstoft@ucsd.edu)
As a classification problem in machine learning, source localization is solved by training a feed-forward neural network (FNN) on
ocean acoustic data. The FNN is fed with normalized sample covariance matrices (SCMs). The output of the network, which represents the
probability for range, is used to determine the source ranges. Since it is a data-driven method, no acoustic propagation models are
needed. As shipping noise has a broad frequency band, an approach of statistical inference for source localization is presented by taking
advantage of multi-frequency information. It is demonstrated by the vertical array data from Noise09 experiment.
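The input preprocessing described above, normalized sample covariance matrices, can be sketched as follows (with synthetic snapshots standing in for DFT bins of the Noise09 vertical-array data):

```python
import numpy as np

# Preprocessing used as FNN input: the sample covariance matrix (SCM) of
# array snapshots at one frequency, snapshot-normalized and averaged.
rng = np.random.default_rng(0)
n_sensors, n_snapshots = 16, 50

# one plane-wave signal plus complex noise, per snapshot (synthetic)
steer = np.exp(1j * np.pi * np.arange(n_sensors) * np.sin(0.3))
snaps = (rng.standard_normal(n_snapshots)[:, None] * steer
         + 0.1 * (rng.standard_normal((n_snapshots, n_sensors))
                  + 1j * rng.standard_normal((n_snapshots, n_sensors))))
snaps /= np.linalg.norm(snaps, axis=1, keepdims=True)  # remove source level
scm = sum(np.outer(s, s.conj()) for s in snaps) / n_snapshots

# real/imag parts of the (Hermitian) SCM flattened as the network's input
x = np.concatenate([scm.real[np.triu_indices(n_sensors)],
                    scm.imag[np.triu_indices(n_sensors, 1)]])
```

Because each snapshot is normalized before averaging, the SCM has unit trace and is insensitive to the unknown source level, which is what lets the classifier be trained across data with different source spectra.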
Contributed Papers
10:00
2aUWb3. Effect of dispersion on the convergence rate for Green’s
function retrieval. John Y. Yoritomo and Richard Weaver (Phys., Univ. of
Illinois at Urbana-Champaign, 1110 West Green St., Urbana, IL 61801-3080, yoritom2@illinois.edu)
Much information about wave propagation in a variety of structures has
been obtained from Green’s function retrieval by noise correlation. Here we
examine how dispersion affects Green’s function retrieval and, in particular,
its signal-to-noise ratio (SNR). On recalling how the inherent spread of a
signal due to band limitation is augmented by spread due to dispersion and
propagation distance, and how both affect amplitude, we argue that SNR in
highly dispersive media can be substantially lowered.
We argue this is most relevant for gravity waves over large propagation distances in the ocean or atmosphere. In particular, we argue that dispersion
could explain recent retrieval failure from surface gravity wave noise in the
ocean. Lastly, we consider methods to ameliorate the poor SNR due to dispersion. We use numerical simulation to substantiate our analytic results.
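The non-dispersive baseline that dispersion then degrades can be demonstrated numerically: cross-correlating uncorrelated far-field noise sources at two receivers and stacking retrieves a peak at the inter-receiver travel time. A minimal sketch:

```python
import numpy as np

# Green's-function retrieval by noise correlation (non-dispersive toy):
# uncorrelated far-field sources on a circle; cross-correlations stacked
# over sources peak at the inter-receiver travel time r12/c.
rng = np.random.default_rng(0)
c, dt, n = 1.0, 0.01, 4096
r1, r2 = np.array([-0.5, 0.0]), np.array([0.5, 0.0])   # separation r12 = 1
stack = np.zeros(n)
for th in np.linspace(0.0, 2.0 * np.pi, 720, endpoint=False):
    src = 50.0 * np.array([np.cos(th), np.sin(th)])    # far-field source
    noise = rng.standard_normal(n)
    x1 = np.roll(noise, int(round(np.linalg.norm(src - r1) / c / dt)))
    x2 = np.roll(noise, int(round(np.linalg.norm(src - r2) / c / dt)))
    X1, X2 = np.fft.fft(x1), np.fft.fft(x2)
    stack += np.fft.ifft(X1 * X2.conj()).real          # circular correlation

lag = np.argmax(stack[1 : n // 2]) + 1                 # positive lags only
travel_time = lag * dt                                 # expect ~ r12/c = 1.0
```

The peak emerges because sources near the endfire directions contribute with nearly the same differential delay; dispersion spreads each contribution in time and lowers this peak relative to the correlation noise floor.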
10:20–10:40 Break
10:40
2aUWb4. Estimating the speed of poroelastic interface waves using ambient noise. David R. Barclay (Dept. of Oceanogr., Dalhousie Univ., PO Box 15000, Halifax, NS B3H 4R2, Canada, dbarclay@dal.ca), Len Zedel (Phys. and Physical Oceanogr., Memorial Univ. of NF, St. John's, NF, Canada), and Alex E. Hay (Oceanogr., Dalhousie Univ., Halifax, NS, Canada)

Pairs of hydrophones were buried at mid-tide height in a 1:10 sloped mixed gravel and coarse sand beach and used to make ambient noise recordings over a period of three weeks in Advocate Harbour, Nova Scotia, in the Bay of Fundy. The pairs were arranged in vertical and horizontal configurations and recorded pressure time series, power spectral density, and vertical and horizontal coherence measurements of the noise field in the seabed. A nearby suite of oceanographic instruments measured the water level, ocean wave properties, bed dynamics, and weather during the experiment. The measured noise between 1 Hz and 1 kHz was dominated by poroelastic interface waves generated by plunging surf on the beach face. The speed of the compressional component of the interface wave was estimated by cross-correlating the noise recorded on the across-shore oriented pair of sensors while the unconsolidated sediment was water-saturated as well as drained. Additionally, the increasing and decreasing overburden pressure due to the rising and falling 10-m tide was found to drive a respective increase and decrease in the poroelastic interface wave speed. A buried acoustic source was used to directly measure the compressional wave speed in the seabed on the across-shore array.

11:00
2aUWb5. Passive bottom reflection-loss estimation using ship noise and a vertical line array. Lanfranco Muzi and Martin Siderius (Elec. and Comput. Eng., Portland State Univ., 1900 SW 4th Ave., Ste. 160, Portland, OR 97201, muzi@pdx.edu)

An existing technique for passive bottom-loss estimation from natural marine surface noise (generated by waves and wind) is adapted to use ship-generated noise. The original approach (based on beamforming of the noise field recorded by a vertical line array of hydrophones) is retained. However, the field generated by a passing ship must be processed preliminarily, in order for it to show features that are similar to those of the natural surface-noise field and therefore become amenable to the technique. A necessary requisite is that the ship position, relative to the array, vary over as wide a range of steering angles as possible, ideally passing directly over the array to ensure coverage of the steepest angles. The methodology is illustrated through simulation and applied to experimental data.
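The up/down beamforming at the heart of such passive bottom-loss estimation can be sketched for an idealized field, one downgoing plane wave plus its bottom reflection, where R is an assumed reflection coefficient rather than a measured one:

```python
import numpy as np

# Vertical-line-array bottom-loss sketch: steer toward the surface and
# toward the bottom at the same grazing angle; the up/down power ratio
# gives |R|^2, hence the reflection loss in dB.
lam = 1.0
k = 2 * np.pi / lam
z = np.arange(32) * lam / 2                  # hydrophone depths (lambda/2)
grazing = np.deg2rad(30.0)
R = 0.3                                      # assumed reflection coefficient
kz = k * np.sin(grazing)
p = np.exp(-1j * kz * z) + R * np.exp(1j * kz * z)   # down + up going waves

w_down = np.exp(-1j * kz * z) / len(z)       # steer at downgoing energy
w_up = np.exp(1j * kz * z) / len(z)          # steer at upgoing energy
loss_db = -10 * np.log10(abs(np.vdot(w_up, p)) ** 2
                         / abs(np.vdot(w_down, p)) ** 2)
# expect loss_db ~ -20*log10(0.3), about 10.5 dB
```

With real ship noise the field contains many angles at once, which is why the preprocessing described in the abstract is needed before this ratio becomes meaningful.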
11:20
2aUWb6. Channel impulse response arrival uncertainty using source of
opportunity for tomography. Kay L. Gemba, Jit Sarkar, Jeffery D.
Tippmann, William S. Hodgkiss, Bruce Cornuelle, William A. Kuperman
(MPL/SIO, UCSD, Univ. of California, San Diego, 8820 Shellback Way,
Spiess Hall, Rm. 446, La Jolla, CA 92037, gemba@ucsd.edu), and Karim
G. Sabra (Mech. Eng., Georgia Inst. of Technol., Atlanta, GA)
Passive acoustic tomography exploits the acoustic energy generated by
sources with unknown spectral content such as sources of opportunity (e.g.,
cargo ships) to study the ocean. The recording at each sensor within a vertical line array (VLA) is the channel impulse response (CIR) convolved with
the noise generated by the moving random radiator. Using an incremental
approach, we estimate the source signal locally at three VLAs by beamforming on the direct ray-path to deconvolve each CIR. CIR arrival uncertainty
is inversely proportional to the bandwidth of the source and SNR, the latter
estimated from the deconvolved time-domain waveform. Over the 10-min
source track, we present the time evolution of CIR arrival uncertainty computed at three VLAs horizontally separated by 1.5 km and discuss constraints on integration time. Data are presented using the Noise Correlation
2009 Experiment and application to the Santa Barbara Channel Experiment
2016 is discussed.
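The deconvolution step can be sketched with a Wiener-regularized frequency-domain division; in the paper's approach the source estimate comes from beamforming on the direct ray-path, whereas here the source, channel, and regularization constant are all synthetic:

```python
import numpy as np

# Frequency-domain (Wiener-regularized) deconvolution of a channel impulse
# response (CIR) given an estimate of the source signal.
rng = np.random.default_rng(1)
n = 1024
s = rng.standard_normal(n)                       # estimated source signal
h = np.zeros(n)
h[[40, 90, 150]] = [1.0, 0.6, 0.3]               # multipath CIR (3 arrivals)
x = np.fft.ifft(np.fft.fft(h) * np.fft.fft(s)).real   # received = h * s

S, X = np.fft.fft(s), np.fft.fft(x)
eps = 1e-3 * np.mean(abs(S) ** 2)                # regularization constant
h_hat = np.fft.ifft(X * S.conj() / (abs(S) ** 2 + eps)).real
arrivals = np.sort(np.argsort(h_hat)[-3:])       # three strongest samples
```

The arrival-time uncertainty of the recovered peaks shrinks with source bandwidth and SNR, which is the dependence the abstract quantifies.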
MONDAY MORNING, 26 JUNE 2017
ROOM 104, 9:15 A.M. TO 10:45 A.M.
Meeting of the Standards Committee Plenary Group
to be held jointly with the meetings of the
ANSI-Accredited U.S. Technical Advisory Groups (TAGs) for:
ISO/TC 43, Acoustics,
ISO/TC 43/SC 1, Noise,
ISO/TC 43/SC 3, Underwater acoustics
ISO/TC 108, Mechanical vibration, shock, and condition monitoring,
ISO/TC 108/SC 2, Measurement and evaluation of mechanical vibration and shock as applied
to machines, vehicles, and structures,
ISO/TC 108/SC 4, Human exposure to mechanical vibration and shock,
ISO/TC 108/SC 5, Condition monitoring and diagnostics of machine systems,
and IEC/TC 29, Electroacoustics
R. D. Hellweg, Chair and P. D. Schomer, Vice Chair, U.S. Technical Advisory Group for ISO/TC 43
Acoustics and ISO/TC 43/SC 1 Noise
Hellweg Acoustics, 13 Pine Tree Road, Wellesley MA 02482
Schomer and Associates, 2117 Robert Drive, Champaign, IL 61821
M. A. Bahtiarian, Chair, U.S. Technical Advisory Group for ISO/TC 43/SC 3 Underwater acoustics
Noise Control Engineering, Inc., 799 Middlesex Turnpike, Billerica, MA 01821
W. Madigosky, Chair of the U.S. Technical Advisory Group for ISO/TC 108 Mechanical vibration,
shock, and condition monitoring
MTECH, 10754 Kinloch Road, Silver Spring, MD 20903
M. L’vov, Chair of the U.S. Technical Advisory Group for ISO/TC 108/SC 2 Measurement and evaluation
of mechanical vibration and shock as applied to machines, vehicles, and structures
Siemens Energy, Inc., 5101 Westinghouse Blvd., Charlotte, NC 28273
D. D. Reynolds, Chair, U.S. Technical Advisory Group for ISO/TC 108/SC 4 Human exposure to mechanical vibration and shock
3939 Briar Crest Court, Las Vegas, NV 89120
D. J. Vendittis, Chair of the U.S. Technical Advisory Group for ISO/TC 108/SC 5 Condition monitoring and
diagnostics of machine systems
701 Northeast Harbour Terrace, Boca Raton, FL 33431
D. A. Preves and C. Walber, U.S. Technical Co-advisors for IEC/TC 29, Electroacoustics
Starkey Hearing Technologies, 6600 Washington Ave., S., Eden Prairie, MN 55344 (D. Preves)
PCB Piezotronics, Inc., 3425 Walden Avenue, Depew, NY 14043-2495 (C. Walber)
The reports of the Chairs of these TAGs will not be presented at any other S Committee meeting.
The meeting of the Standards Committee Plenary Group will follow the meeting of Accredited Standards Committee S2, which will be held on Sunday,
25 June 2017 from 5:00 p.m. to 6:00 p.m.
The Standards Committee Plenary Group meeting will precede the meetings of the Accredited Standards Committees S1, S3, S3/SC 1, and S12, which are
scheduled to take place in the following sequence:
Monday, 26 June 2017
Monday, 26 June 2017
Monday, 26 June 2017
Monday, 26 June 2017
11:00 a.m. – 12:15 p.m.
2:00 p.m. – 3:00 p.m.
3:15 p.m. – 4:30 p.m.
4:45 p.m. – 5:45 p.m.
S12, Noise
ASC S3/SC 1, Animal Bioacoustics
ASC S3, Bioacoustics
ASC S1, Acoustics
Discussion at the Standards Committee Plenary Group meeting will consist of national items relevant to all S Committees and U.S. TAGs.
The U.S. Technical Advisory Group (TAG) Chairs for the various international Technical Committees and Subcommittees under ISO and IEC, which are
parallel to S1, S2, S3, and S12, are as follows:
ISO
ISO/TC 43, Acoustics. U.S. TAG: R. D. Hellweg, Jr., Chair; P. D. Schomer, Vice Chair. U.S. parallel committees: ASC S1 and ASC S3.
ISO/TC 43/SC 1, Noise. U.S. TAG: R. D. Hellweg, Jr., Chair; P. D. Schomer, Vice Chair. U.S. parallel committee: ASC S12.
ISO/TC 43/SC 3, Underwater acoustics. U.S. TAG: M. A. Bahtiarian, Chair. U.S. parallel committees: ASC S1, ASC S3/SC 1, and ASC S12.
ISO/TC 108, Mechanical vibration, shock, and condition monitoring. U.S. TAG: W. Madigosky, Chair. U.S. parallel committee: ASC S2.
ISO/TC 108/SC 2, Measurement and evaluation of mechanical vibration and shock as applied to machines, vehicles, and structures. U.S. TAG: M. L’vov, Chair. U.S. parallel committee: ASC S2.
ISO/TC 108/SC 3, Use and calibration of vibration and shock measuring instruments. U.S. TAG: D. J. Evans, Chair. U.S. parallel committee: ASC S2.
ISO/TC 108/SC 4, Human exposure to mechanical vibration and shock. U.S. TAG: D. D. Reynolds, Chair. U.S. parallel committees: ASC S2 and ASC S3.
ISO/TC 108/SC 5, Condition monitoring and diagnostics of machine systems. U.S. TAG: D. J. Vendittis, Chair. U.S. parallel committee: ASC S2.
IEC
IEC/TC 29, Electroacoustics. U.S. Technical Co-advisors: D. A. Preves and C. Walber. U.S. parallel committees: ASC S1 and ASC S3.
MONDAY MORNING, 26 JUNE 2017
ROOM 104, 11:00 A.M. TO 12:15 P.M.
Meeting of Accredited Standards Committee (ASC) S12 Noise
S. J. Lind, Vice Chair ASC S12
The Trane Co., 3600 Pammel Creek Road, Bldg. 12-1, La Crosse, WI 54601-7599
D. F. Winker, Vice Chair ASC S12
ETS-Lindgren Acoustic Systems, 1301 Arrow Point Drive, Cedar Park, TX 78613
Accredited Standards Committee S12 on Noise. Working group chairs will report on the status of noise standards currently under development. Consideration will be given to new standards that might be needed over the next few years. Open discussion of committee
reports is encouraged.
People interested in attending the meeting of the TAG for ISO/TC 43/SC 1, Noise, and ISO/TC 43/SC 3, Underwater acoustics, should take note that the meeting will be held in conjunction with the Standards Plenary meeting at 9:15 a.m. on Monday, 26 June 2017.
Scope of S12: Standards, specifications, and terminology in the field of acoustical noise pertaining to methods of measurement, evaluation, and control, including biological safety, tolerance and comfort, and physical acoustics as related to environmental and occupational
noise.
MONDAY MORNING, 26 JUNE 2017
EXHIBIT HALL D, 9:00 A.M. TO 5:00 P.M.
Exhibit
The instrument and equipment exhibit is located near the registration area in Exhibit Hall D.
The Exhibit will include computer-based instrumentation, scientific books, sound level meters, sound intensity systems, signal processing systems, devices for noise control and acoustical materials, active noise
control systems, and other exhibits on acoustics.
Exhibit hours are Sunday, 25 June, 5:30 p.m. to 7:00 p.m., Monday, 26 June, 9:00 a.m. to 5:00 p.m., and
Tuesday, 27 June, 9:00 a.m. to 12:00 noon.
Coffee breaks on Monday and Tuesday mornings, as well as an afternoon break on Monday, will be held
in the exhibit area.
MONDAY AFTERNOON, 26 JUNE 2017
ROOM 207, 1:15 P.M. TO 5:00 P.M.
2pAAa
Architectural Acoustics: New Measurement and Prediction Techniques at Low Frequencies in Buildings
James E. Phillips, Cochair
Wilson, Ihrig & Associates, Inc., 6001 Shellmound St., Suite 400, Emeryville, CA 94608
Bert Roozen, Cochair
Physics and Astronomy, KU Leuven, Leuven, Belgium
Herbert Muellner, Cochair
Acoustics and Building Physics, Federal Institute of Technology TGM Vienna, Wexstrasse 19-23, Vienna A-1200, Austria
Chair’s Introduction—1:15
Invited Papers
1:20
2pAAa1. An introduction to the new edition of AISC Design Guide 11 “Vibrations of Steel Framed Structural Systems due to
Human Activity.” Eric E. Ungar (Acentech, 33 Moulton St., Cambridge, MA 02138-1118, eungar@acentech.com)
The 2016 edition of Steel Design Guide 11, like the first (1997) edition, “Floor Vibrations due to Human Activity,” presents relatively easily used means for predicting the vibrations of rectangular bays of floors of steel-framed construction due to typical walking
and for assessing the acceptability of these vibrations to personnel. However, the new edition includes better representations of the
forces associated with footfalls and improved validated methods for prediction of the structural responses. In addition to floors of buildings, it also addresses footbridges and stairs. It distinguishes between “low frequency” and “high frequency” structures; the former tend
to vibrate nearly steadily at resonance due to typical walking, whereas the latter tend in essence to respond to a series of separate footfall
impulses. The new edition presents not only expressions for the expected peak velocities and accelerations, but also includes relations
for evaluating the acceptability of vibrations for equipment and activities whose criteria are expressed in terms of one-third-octave-band
or narrow-band values. It also notes how the different response measures are affected differently by changes in the structural mass and
stiffness. An extensive chapter provides advice on the application of finite-element analysis.
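The resonant build-up that distinguishes "low frequency" structures can be illustrated with a single-degree-of-freedom sketch, using values that are illustrative rather than taken from the Design Guide:

```python
import numpy as np

# A floor bay idealized as a damped SDOF oscillator driven by one walking
# harmonic: when the harmonic hits the natural frequency, the steady-state
# response is amplified by roughly 1/(2*zeta) over quasi-static loading.
fn = 5.0                   # floor natural frequency, Hz (illustrative)
zeta = 0.02                # damping ratio (illustrative)
wn = 2 * np.pi * fn

def steady_amp(f_force, F_over_m=1.0):
    """Steady-state displacement amplitude for a harmonic force."""
    w = 2 * np.pi * f_force
    return F_over_m / np.sqrt((wn**2 - w**2) ** 2 + (2 * zeta * wn * w) ** 2)

a_res = steady_amp(5.0)    # walking harmonic at resonance
a_off = steady_amp(2.5)    # same force amplitude, off resonance
amplification = a_res / a_off
```

High-frequency floors never reach this steady resonant state within one footfall, which is why the Guide treats their response as a series of impulses instead.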
1:40
2pAAa2. An alternative statistical method for characterizing low-frequency environments in sensitive laboratory settings. Byron
Davis (Vibrasure, 1015 Florida St., San Francisco, CA 94110, byron@vibrasure.com)
Many practitioners are familiar with the collection and expression of complex noise data as Ln statistical spectra. Given suitable
instrumentation, it is easy to develop Ln statistics for low-frequency sound and vibration environments, as well. However, the traditional
(instrument-generated) Ln datasets have some shortcomings when it comes to understanding some environments, especially in highly
sensitive settings like research laboratories. The emergence of mass data storage and manipulation tools has provided an opportunity to
employ a somewhat subtler method to characterize these environments. In this presentation, we will describe this analytical technique,
which provides statistical representations similar to the Ln system. However, the technique further allows an explicit invocation of timescale, allowing the user to choose an arbitrary analytical timescale relevant to sensitive operations or expected environmental transients
or other forcings. We will also show examples that demonstrate why the approach might find particular use in technical settings like
nanotechnology laboratories and other environments supporting research in the physical sciences.
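A block-based Ln computation with an explicit, user-chosen analysis timescale can be sketched as follows (the function name, defaults, and test signal are illustrative, not the presenter's implementation):

```python
import numpy as np

# Ln exceedance statistics from raw samples with an explicit timescale:
# cut the record into blocks of block_s seconds, compute a mean-square
# level per block, and take Ln as the level exceeded n% of the time.
def ln_levels(x, fs, block_s, percents=(10, 50, 90)):
    nb = int(fs * block_s)
    nblk = len(x) // nb
    blocks = x[: nblk * nb].reshape(nblk, nb)
    levels = 10 * np.log10(np.mean(blocks ** 2, axis=1))  # dB re 1
    return {p: float(np.percentile(levels, 100 - p)) for p in percents}

# quiet background (rms 0.1) with intermittent loud events (rms 1.0)
rng = np.random.default_rng(0)
fs = 1000
x = 0.1 * rng.standard_normal(100 * fs)
x[: 20 * fs] *= 10.0                    # 20 of 100 one-second blocks loud
stats = ln_levels(x, fs, block_s=1.0)
# L90 reflects the background (~ -20 dB); L10 reflects the events (~ 0 dB)
```

Rerunning with a different `block_s` is exactly the "explicit invocation of timescale" the abstract describes: the same record yields different Ln statistics depending on the averaging time relevant to the sensitive operation.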
2:00
2pAAa3. Micro-vibration design for a cutting edge cancer research laboratory at a high-vibration site. Ahmad Bayat, Thomas
Kaytt, and Alana DeLoach (Vibro-Acoust. Consultants, 490 Post St., Ste. 1427, San Francisco, CA 94102, tom@va-consult.com)
State-of-the-art medical & material science imaging technologies are pushing the limits of ground vibration tolerance. Modern technical instruments such as Transmission Electron Microscopes are calling for tighter functional vibration limits far below the threshold of human sensitivity, and at lower and lower frequencies, to facilitate analysis of details on the scale of individual atoms. In one particularly demanding case, an instrument proposed for a university research building called for vibration levels not to exceed 50 µin./s at frequencies as low as 1 Hz, and this instrument would be needed while heavy construction is ongoing at nearby surrounding properties. Meeting this strict criterion without limiting the use of the surrounding land required special low-noise instrumentation, an in-depth geotechnical
analysis of the site, and a comprehensive isolation scheme to achieve up to a 95% reduction in ground vibration. This paper will discuss
the steps taken to evaluate the site vibration conditions and the structural isolation & active isolation mounting measures enacted to
meet this stringent instrumentation requirement.
2:20
2pAAa4. Numerical analysis on applicability of measurement method according to ISO 16283 in small rooms at low frequencies.
Stefan Schoenwald and Armin Zemp (Lab. for Acoustics/Noise Control, Empa Swiss Federal Labs. for Mater. Sci. and Technol., Überlandstrasse 129, Dübendorf 8606, Switzerland, stefan.schoenwald@empa.ch)
The robustness of the new measurement method for sound pressure level in small rooms with less than 25 m3 room volume at low
frequencies from 50 Hz to 80 Hz that was recently introduced in the ISO 16283 series on sound insulation measurements in buildings
was investigated in an experimental study. This restricted study revealed some potential problems with the method, which have already been presented; unfortunately, further experimental investigation was not possible because of the time and labor intensity of the experiments. The sound level distribution in the room was therefore predicted with a simple analytical modal model, and excellent
agreement with the available experimental results was found. With the prediction model, it was possible to refine the results of the experimental study, to extend it to other room geometries, and to revisit and to analyze the identified potential problems on a much more
detailed and broader database. In the conference paper, the experimental study and its outcomes are briefly recapitulated, the prediction
model and its validation are presented, and new conclusions are drawn based on the findings of the original experimental and the
extended numerical study.
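The reason the 50 Hz to 80 Hz band is so problematic in such rooms can be sketched from the rigid-wall modal frequencies, f = (c/2)·sqrt(sum (n_i/L_i)²), for an illustrative room below 25 m³:

```python
import numpy as np
from itertools import product

# Rigid-wall rectangular-room modal frequencies: in a small room the band
# at and below 80 Hz contains very few modes, so the low-frequency sound
# field is strongly position-dependent.
c = 343.0
L = (2.5, 2.0, 2.0)                  # 10 m^3 room (illustrative dimensions)
modes = []
for n in product(range(6), repeat=3):
    if n == (0, 0, 0):
        continue
    f = (c / 2) * np.sqrt(sum((ni / Li) ** 2 for ni, Li in zip(n, L)))
    if f <= 80.0:
        modes.append((n, round(float(f), 1)))
# only the (1,0,0) axial mode, at 68.6 Hz, falls at or below 80 Hz
```

With essentially one mode governing the band, the pressure varies drastically between room corners and the central zone, which is exactly what the ISO 16283 low-frequency sampling procedure tries to capture.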
2:40
2pAAa5. Sound radiation efficiency of lightweight building constructions—Study on the influence of panel fastening by
numerical calculations and laser scanning vibrometry measurements. Maximilian Neusser and Thomas Bednar (Res. Ctr. for Bldg.
Phys. and Sound Protection, Technische Universität Wien, Karlsplatz 13/206/2, Vienna 1040, Austria, maximilian.neusser@tuwien.ac.at)
The aim of the presented work is to develop a calculation model for predicting the vibration characteristics, and hence the sound radiation efficiency, of lightweight building structures, including the influence of their fasteners. The calculation procedures currently covered in the relevant body of standards exclude these structures. Through analysis of the velocity distribution on the surface by laser vibrometry and simultaneous measurement of the introduced vibration energy, transfer functions could be identified. Within the scope of this work, different parameters influencing the formation of the connecting joint between wall components are identified and their effects on the vibration characteristics are quantified. The measurement results not only allowed the identification of these parameters but can also be used in the development and validation of the simulation model based on the finite element method. Good agreement between measurements and the results of the introduced numerical model could be achieved. The presented simulation model makes it possible to account for the identified parameters of the connecting joints, such as the dimension of the screws, the distance between screws, the tightening torque, and the position of the screws on supporting structures.
3:00–3:20 Break
3:20
2pAAa6. Measuring the sound insulation of an external thermal insulation composite system (ETICS) by means of vibrometry.
Daniel Urban (A&Z Acoust. s.r.o., S.H.Vajanskeho 43, Nove Zamky 94079, Slovakia, ing.daniel.urban@gmail.com), Bert Roozen
(Dep. of Phys. and Astronomy, Soft matter and Biophys., Lab. of Acoust., KU Leuven, Leuven, Belgium), Alexander Niemczanowski
(Versuchsanstalt TGM, Fachbereich Akustik und Bauphysik, Wien, Austria), Herbert Muellner (Versuchsanstalt TGM, Fachbereich
Akustik und Bauphysik, Vienna, Austria), and Peter Zat’ko (A&Z Acoust. s.r.o., Bratislava, Slovakia)
The impact of External Thermal Insulation Composite Systems (ETICS) on the acoustic properties of external walls has already been examined; probably the most fundamental research was carried out in Germany (Weber). The effect of the thickness and dynamic stiffness of ETICS, as well as the mass of the external plaster, on the decrease of wall sound insulation was demonstrated. In this paper, the mass-spring-mass (m-s-m) resonances were investigated for a massive external wall with ETICS. Application of ETICS increases the sound insulation of walls in the mass-law dominated frequency range by about 12 dB/oct. At low frequencies, however, ETICS decreases sound insulation due to resonant effects, which become very prominent in traffic noise situations. This contribution shows how vibrometry measurements can be useful for measuring the sound insulation properties of ETICS.
3:40
2pAAa7. Laser Doppler vibrometry measurement of the radiated sound power of a funicular floor system. Tomas Mendez
Echenagucia (Inst. of Technol. in Architecture, Block Res. Group, ETH Zurich, Stefano-Franscini-Platz 1, HIB E 46, Zurich, Zurich
8093, Switzerland, mendez@arch.ethz.ch), Bert Roozen (Dept. of Phys. and Astronomy, KU Leuven, Leuven, Belgium), and Philippe
Block (Inst. of Technol. in Architecture, Block Res. Group, ETH Zurich, Zurich, Switzerland)
Floor slabs represent a high percentage of the embedded energy in buildings. Lightweight floor systems, such as funicular shell structures, represent an important approach to reducing embedded energy in buildings by reducing material use in a significant way. As the amount of material is reduced, the sound insulation capabilities need to be studied in depth, particularly in the lower frequency range. The high stiffness of shell structures has been shown in numerical experiments to have great potential for sound insulation at low frequencies. This paper presents laboratory measurements of the radiated sound power of an unreinforced concrete funicular floor system at low frequencies by means of laser Doppler vibrometry (LDV). The presented experiments use the velocities captured by the LDV system, coupled with the Rayleigh integral method, to estimate the radiated sound power in an accurate way, without the known mode-density problems present in low-frequency microphone-based measurements. A flat concrete slab of the same mass and dimensions is also tested for comparison. The paper presents the results of the two measurements and outlines guidelines for the acoustic optimization of the funicular floor system.
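The Rayleigh-integral estimate of radiated power from measured surface velocities can be sketched numerically. The following is a minimal illustration only, not the authors' implementation: it assembles the discretized radiation-resistance matrix for a baffled planar radiator and evaluates the time-averaged power from complex normal-velocity amplitudes. The grid, frequency, and uniform piston velocity are assumptions of this sketch.

```python
import numpy as np

RHO, C = 1.21, 343.0  # air density (kg/m^3) and speed of sound (m/s), assumed values

def radiated_power(xy, v, dS, freq):
    """Time-averaged radiated sound power (W) of a baffled planar source,
    from complex normal-velocity (peak) amplitudes v, via the discretized
    Rayleigh integral in radiation-resistance-matrix form."""
    k = 2.0 * np.pi * freq / C
    omega = 2.0 * np.pi * freq
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)  # pairwise distances
    dsafe = np.where(d > 0.0, d, 1.0)
    kern = np.where(d > 0.0, np.sin(k * d) / dsafe, k)  # sin(kr)/r -> k as r -> 0
    R = (omega * RHO * dS**2 / (4.0 * np.pi)) * kern    # radiation-resistance matrix
    return 0.5 * np.real(np.conj(v) @ R @ v)

# Example: a 10 cm x 10 cm plate sampled on a 5 x 5 grid, moving as a rigid piston
n, a = 5, 0.1
g = (np.arange(n) + 0.5) * (a / n)
xy = np.array([(x, y) for x in g for y in g])
W = radiated_power(xy, np.full(n * n, 1e-3, dtype=complex), (a / n) ** 2, 500.0)
```

Since the power is a quadratic functional of the velocity, doubling the measured amplitudes quadruples the estimated power.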
3595
J. Acoust. Soc. Am., Vol. 141, No. 5, Pt. 2, May 2017
Acoustics ’17 Boston
3595
Contributed Papers
4:00
2pAAa8. An active control method for the spectral homogenization of
airborne sound insulation laboratories at low frequencies. Andrea Prato,
Alessio Atzori, and Alessandro Schiavi (INRIM, Strada Delle Cacce 91,
Torino 10135, Italy, a.prato@inrim.it)
Low frequency sound insulation measurements are affected by large
uncertainties and inaccuracies due to the low modal density of small laboratory rooms. For this reason, an automated measuring system for the active
spectral homogenization of enclosed spaces at low frequencies has been
developed. The aim is to achieve a quasi-diffuse field, reducing the amplitude of room modes, in order to apply standard procedures for sound insulation measurements at low frequencies. The homogenization method is based
on the active control of the interference spatial patterns of a system of two
loudspeakers using a phase steering technique. The room response spectrum, as a function of the frequency-dependent phase difference between the source signals, is measured at different positions in order to achieve the optimal spectral homogenization. This technique makes it possible to decrease the modal sound pressure level fluctuations in the source and receiving volumes in the
frequency range between 30 Hz and 120 Hz. Based on this, sound insulation
measurements at low frequencies on a high performance triple glazing and
steel structure are performed and compared with standard ones.
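The phase-steering search described above can be illustrated with a toy sketch (not the INRIM system: the 1-D hard-walled modal room model, damping, and source/microphone positions below are invented for the example). The loop sweeps the inter-source phase difference and keeps the value that minimizes the spatial spread of the level across measurement positions.

```python
import numpy as np

C = 343.0  # speed of sound (m/s)

def green_1d(x, x0, k, L=5.0, n_modes=40, eta=0.05):
    """Modal Green's function of a hard-walled 1-D room of length L (toy model)."""
    n = np.arange(n_modes)
    kn = n * np.pi / L
    scale = np.where(n == 0, 1.0, np.sqrt(2.0))             # mode normalization
    psi_x = scale * np.cos(np.outer(np.atleast_1d(x), kn))  # (M, n_modes)
    psi_0 = scale * np.cos(kn * x0)                         # (n_modes,)
    return psi_x @ (psi_0 / (kn**2 - k**2 * (1.0 + 1j * eta)))

def steer_phase(freq, mics, x1=0.3, x2=4.6, n_phi=72):
    """Sweep the inter-source phase difference; return the phase that minimizes
    the spatial standard deviation of the level across the mic positions."""
    k = 2.0 * np.pi * freq / C
    g1, g2 = green_1d(mics, x1, k), green_1d(mics, x2, k)
    phis = np.linspace(0.0, 2.0 * np.pi, n_phi, endpoint=False)
    p = np.abs(g1[None, :] + np.exp(1j * phis)[:, None] * g2[None, :])
    spread = np.std(20.0 * np.log10(np.maximum(p, 1e-12)), axis=1)
    best = int(np.argmin(spread))
    return phis[best], spread[best], spread[0]  # best phase, its spread, spread at 0

phi_best, s_best, s_zero = steer_phase(60.0, np.linspace(0.5, 4.5, 9))
```

In the real system the "response" would be measured, not modeled; the sketch only shows the structure of the optimization over the phase parameter.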
4:20
2pAAa9. Blocked pressure based Transfer Path Analysis (TPA) method
to diagnose airborne sound transfer through building partitions.
Nikhilesh Patil, Andy Moorhouse, and Andy S. Elliott (Acoust. Res. Ctr.,
Univ. of Salford, Acoust. Res. Ctr., Newton Bldg., Salford M5 4WT, United
Kingdom, n.patil@edu.salford.ac.uk)
Airborne sound transmission through building elements or the sound
insulation of the building element is usually rated by its Sound Reduction
Index (SRI) or the Sound Transmission Class (STC). SRI/STC quantifies the
overall sound transfer but gives no information about how the transfer takes
place and what the contributions of the different sound transfer paths involved are. Such problems are fairly common in the vehicle acoustics industry and are generally tackled by TPA techniques. This paper formulates an in-situ airborne TPA technique to quantify the contributions of different sound transfer paths to the transmitted pressure. The airborne source is characterized by its blocked pressure, and its direct measurement is discussed. Results
are presented for dual leaf partitions excited by an airborne source. The
method has been shown to be significantly faster than the blocked-force-based TPA method, which relies on inverse measurement methods. The accuracy of
the method is closely related to the wavelength of incident airborne waves.
4:40
2pAAa10. On the use of finite-element methods to minimize
uncertainties in airborne sound insulation measurements in the low
frequency range. Francesco Martellotta, Ubaldo Ayr (DICAR, Politecnico
di Bari, Via Orabona 4, Bari, Bari 70125, Italy, francesco.martellotta@
poliba.it), and Gianluca Rospi (Dipartimento delle Culture Europee e del
Mediterraneo, Universita della Basilicata, Matera, Italy)
Measuring airborne sound insulation at low frequencies (below 100 Hz)
may be very challenging. In fact, to cope with this problem, ISO 16283-1:2014 also included a dedicated procedure for smaller rooms, having a volume below 25 m3. However, even in significantly larger rooms, large spatial
variations of sound pressure levels may appear, resulting in a measure which
is largely affected by the choice of the measurement positions. Considering
that simple rooms may be easily modeled using finite-element tools, it may be advantageous to carry out a preliminary numerical analysis of the spatial distribution of the one-third-octave levels, so as to minimize measurement
uncertainty. The room space is first subdivided by means of a 3D grid where
sound pressure levels are determined using a finite element model. Then,
the receiver positions compatible with standard requirements are identified,
and, finally, a statistical analysis of the measurement uncertainties is carried
out. Comparisons between simulations and measurements are finally
illustrated.
MONDAY AFTERNOON, 26 JUNE 2017
ROOM 206, 1:20 P.M. TO 5:00 P.M.
2pAAb
Architectural Acoustics: Topics in Architectural Acoustics Related to Application
Kenneth W. Good, Cochair
Armstrong, 2500 Columbia Ave., Lancaster, PA 17601
David C. Swanson, Cochair
Penn State ARL, 222E ARL Bldg., PO Box 30, State College, PA 16804
1:20
2pAAb1. The acoustics of rooms for music rehearsal and
performance—The Norwegian approach. Jon G. Olsen (Akershus,
Norwegian Council for Music Organizations, Akershus musikkråd,
Trondheimsveien 50E, Kjeller 2007, Norway, jon.olsen@musikk.no) and
Jens Holger Rindel (Oslo, Multiconsult AS, Oslo, Norway)
Each week, local music groups in Norway use more than 10,000 rooms for
rehearsal and concert; many of the rooms are in schools. The size of the rooms varies from under 100 m3 to over 10,000 m3. The users cover a broad variety of
music ensembles, mostly wind bands, choirs, and other amateur ensembles.
Since 2009, the Norwegian Council for Music Organizations («Norsk musikkråd») has completed more than 500 room acoustical measurement reports
on rooms used for rehearsal and concert. The measurements include reverberation time, the strength parameter G, and background noise. All the reports are
made available online in a Google Map. The analysis shows that 85% of the
rooms do not comply with the Norwegian Standard NS 8178:2014 and are evaluated as more or less unsuitable for their purpose for acoustical reasons. The important criteria are volume, room dimensions, reverberation, acoustic treatment of surfaces, and background noise. In particular, the importance of volume is clearly documented. Analysis of room strength indicates that this is also an essential factor for this type of room. The systematic collection of
acoustic reports gives important background for recommendations on how to
build or refurbish rooms for music in schools and cultural buildings.
1:40
2pAAb2. Perception of acoustic comfort in large halls covered by
transparent structural skins. Monika Rychtarikova (Faculty of
Architecture, KU Leuven, Hoogstraat 51, Gent 9000, Belgium, Monika.
Rychtarikova@kuleuven.be), Daniel Urban (A&Z Acoust., Bratislava,
Slovakia), Magdalena Kassakova (Faculty of Civil Eng., Dept. of Bldg.
Construction, STU Bratislava, Bratislava, Slovakia), Carl Maywald (Vector
Foiltec, Bremen, Germany), and Christ Glorieux (Phys. and Astronomy,
Lab. of Acoust., KU Leuven, Leuven, Belgium)
Large halls, such as shopping malls, atria, or big entrance halls, often
suffer from various acoustic discomfort issues, which are not necessarily
caused by extremely high noise levels. Due to the large size of halls and
consequently the long trajectories that sound waves travel between the
source, interior surfaces, and the receiver, sound reflections arriving from
surrounding surfaces are not as strong as they would be in smaller rooms.
Reports in the literature and comments by users concerning acoustic discomfort in large halls refer mainly to continuous reverberation-related noise. Therefore, quantification of the acoustic comfort by the reverberation
time, which is related to the average absorption of interior surfaces and by
the equivalent sound pressure level, which in a large space is dominated by
direct sound, is not adequate to describe the global acoustic comfort or
soundscape. Based on statistical noise analysis on auralized soundscapes,
this article proposes a set of measurable monaural and binaural acoustic
parameters that adequately describe the acoustic comfort in large halls. The study focuses on rooms covered by traditional materials, such as glass,
plexiglass, etc., and ETFE (ethylene tetrafluoroethylene) foil structures.
2:00
2pAAb3. Acoustical design of diffusers in concert halls using scale
models. HyunIn Jo, Hyung Suk Jang, and Jin Yong Jeon (Dept. of
Architectural Eng., Hanyang Univ., Hanyang University, Seoul,
Seongdong-gu 133-791, South Korea, best2012@naver.com)
A design process for diffusing surfaces has been verified by comparing acoustic parameters in different scale models of a concert hall. A 1:50 scale model was used to determine the locations of the diffusers as the main diffusing surfaces, and a 1:25 scale model was used to determine the structural height and density of the stage diffuser by examining the amount of diffusion on the stage. In addition, a 1:10 scale model was used to design the exact shapes of the diffusers by measuring their scattering and diffusion coefficients. The diffusers were installed on the concert hall stage and
sidewalls of the auditorium; acoustic parameters, such as EDT, G, C80, and
Np were measured to examine the amount of diffusion in the hall. As a
result, the relative standard deviations of the parameters RT, EDT, and G
decreased, whereas Np values increased. In the case of Np, the change was
consistent with the variation of the diffusion amount. In conclusion, when
designing a diffusing surface in a concert hall, the scale-model measurement
is essential to determine the location and amount of the effective diffuser, as
well as the direction of the diffuser geometry.
2:20
2pAAb4. The acoustic design of the new University of Iowa Voxman
School of Music. Russell A. Cooper and Steven Schlaseman (Jaffe Holden
Acoust., Inc., 114A Washington St., Norwalk, CT 06854, rcooper@
jaffeholden.com)
The University of Iowa Voxman School of Music opened for students in
September 2016. Necessitated by a flood of the Iowa River in 2008 that condemned the original building, this ground-up new building in the center of
Iowa City features a 700 seat concert hall, a 200 seat recital hall, a 75 seat
organ recital hall; band, orchestra, chamber music, and choral rehearsal
rooms; an opera studio; recording, percussion, and electronic music suites;
teaching studios; practice rooms; library; classrooms and social spaces.
Designed on a single city block and stacked 6-1/2 floors high, the design
presented many sound isolation challenges. The varied musical pedagogy
also required that each performance and rehearsal space have adjustable
acoustics. The concert hall, the most public space in the facility, houses a brand-new 3,883-pipe Klais organ. The TheatroAcoustic ceiling in the hall
is a beautiful example of coordination of all design disciplines: acoustics,
rigging, lighting, sound, HVAC, fire suppression, recording, and aesthetics.
This paper presents the criteria and design of the facility, the innovations as well as the tried and true, and the results of acoustic measurements.
Contributed Papers
2:40
2pAAb5. Effect of orchestra absorption on the reverberation time in
concert halls. Sung Min Kim, Hyung Suk Jang, and Jin Yong Jeon (Dept.
of Architectural Eng., Hanyang Univ., Sung dong-gu Wangsimni-ro 222,
Seoul 133-791, South Korea, rainbear0622@gmail.com)
Based on 1:10 scale models and computer simulations performed in this
study, the number of players in an orchestra and their sound absorption are
suggested as important factors affecting the sound in the hall. For the computer simulation, scale models of players were constructed and the absorption rates were measured in the reverberation chamber. The simulation
results suggest that the sound absorption rate per person is affected by the density of the occupied area of the players, and the sound absorption coefficient
is determined for the space per player. When comparing two spaces of different sizes with the same number of players, the reverberation time of the
auditorium is largely affected by the players when the space is larger. If
more than 5% of the sound absorbing area of the hall is occupied by players,
the reverberation time is remarkably reduced because of the sound absorption of the players. Therefore, the results from this study can be utilized to
more accurately predict the reverberation and clarity in the acoustic design.
3:00
2pAAb6. Variable room design in office spaces by the use of sound
insulating curtains. Jonas Schira (Sales Manager Acoust., Gerriets GmbH,
Im Kirchenhürstle 5-7, Umkirch 79224, Germany, jschira@gerriets.
com)
Open space design has been widely used in offices over the last years. The idea of a collaborative working zone for more than just two employees has both advantages and disadvantages. One of the disadvantages is that private meeting zones or think tanks get lost when a simple open space architecture is used. This problem can be solved by using variable sound-insulating components to create variable zones inside the open space architecture. In the field of theater, such variable sound-insulating elements have been used for decades in the form of sound-insulating curtain systems. To create good sound insulation, more than just one layer of heavy and highly absorptive fabric is needed. By combining reflective and absorptive materials and using a special track system and ceiling connection, a sound insulation of up to R'w = 25 dB can be achieved. This lecture
will illustrate the use of sound insulation curtain systems as a part of an
innovative open space office design. Also, the technical aspects of building
and installing such a system will be discussed.
3:20–3:40 Break
3:40
2pAAb7. Sound masking in office environments—Trade-off between
masking effect and user acceptance. Noemi D. Martin and Andreas Liebl
(Acoust., Fraunhofer Inst. for Bldg. Phys. IBP, Nobelstrasse 12, Stuttgart
70596, Germany, noemi.martin@ibp.fraunhofer.de)
Solving cognitive tasks is adversely affected by speech sound (Irrelevant
Speech Effect). In particular, the interfering potential of speech sound is
high in open-plan office environments. Sound masking is one of the most
effective methods to reduce the disturbing speech sound by covering certain
fractions of it with a masking signal. Since the most effective masking signals are subjectively perceived as annoying themselves, the goal of this
work was to develop a masking signal that provides both a good masking effect and high user acceptance. To achieve this goal, qualitatively different masking signals with varying structural properties were developed. The
efficacy of these signals was tested in a laboratory experiment using a cognitive task (serial recall). In addition, a subjective evaluation of the signals
(loudness, annoyance, etc.) was carried out. In the second part of this work,
a questionnaire was developed, which can be used to evaluate the acceptance and the subjectively assessed effectiveness of newly developed masking signals by their potential users. Using this questionnaire, the newly
developed masking signals were evaluated by persons working in open-plan
offices. Findings and implications for practical usage are presented.
4:00
2pAAb8. Using acoustical modeling software to predict speech privacy
in open-plan offices. Valerie Smith (Charles M. Salter Assoc., 130 Sutter
St., San Francisco, CA 94104, valerie.smith@cmsalter.com)
Speech Privacy Index is one of the commonly used metrics to discuss an
occupant’s acoustical comfort in an open-plan office. In existing open-plan
offices, the Articulation Index can be measured using a qualified sound
source; the Privacy Index can then be calculated from the Articulation
Index. However, in the case of future office planning, the Privacy Index
must be estimated. Using the ODEON acoustical modeling software, we
estimated the Privacy Index in the open-plan section of our office using the
existing room dimensions, material finishes, and background noise levels.
The ODEON estimates were then compared with “real world” measurements of the same space. This paper summarizes the differences between
the estimated and measured speech privacy index levels.
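The relation between the two indices mentioned above is the standard one (as defined, e.g., in ASTM E1130): the Privacy Index is the complement of the Articulation Index, expressed as a percentage. A one-line helper illustrates the conversion:

```python
def privacy_index(ai: float) -> float:
    """Privacy Index (percent) from Articulation Index: PI = (1 - AI) * 100."""
    if not 0.0 <= ai <= 1.0:
        raise ValueError("Articulation Index must lie in [0, 1]")
    return (1.0 - ai) * 100.0
```

For example, a measured AI of 0.15 corresponds to PI = 85, in the range often cited for "normal" speech privacy in open-plan offices.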
4:20
2pAAb9. The acoustic design of a conference room: From sketches to
measurements. Fabio Sicurella (Planair SA, Crêt 108, La Sagne 2314, Switzerland, fabio.sicurella@planair.ch), Gino Iannace (Università della Campania “Luigi Vanvitelli”, Aversa, Italy), Perla Colamesta (Planair SA, La Sagne, Switzerland), and Matteo Gentilin (Stähelin architectes SA, Delémont, Switzerland)
This paper reports the multidisciplinary approach applied to the design of a new conference room in Switzerland. The conference room belongs to a large new educational building realized in Delémont (CH) in 2016. The architectural approach focused on the choice of the room's shape and materials (mainly timber), while the acoustic treatments had to provide good insulation from the exterior and adjacent rooms as well as good intelligibility, clarity, and speech definition. Simulations run with the software Odeon made it possible to predict the main acoustic indicators (T30, EDT, C80, D50, and STI) and therefore to optimize the dimensions and positions of the acoustic treatments for different occupancy rates. Moreover, the acoustic treatment reinforced the vocal emission of the speaker without amplification systems. A measurement campaign at the end of the building construction confirmed the good acoustic quality of the conference room, as well as some discrepancies with the forward analysis due to changes during construction. The results of a survey are also reported in this paper in order to better understand the actual acoustic experience of the users.
4:40
2pAAb10. Acoustic design of a new museum. Attila B. Nagy and Andras
Kotschy (Kotschy and Partners Ltd., Almos vezer u. 4., Torokbalint 2045,
Hungary, attila.nagy@kotschy.hu)
In this paper, we report on the acoustic design of a new museum.
Besides the general exhibition space, the museum building includes three
larger multipurpose halls that will give place for lectures and diverse events.
Defining the acoustic requirements of a multipurpose hall is always a hard-to-reach compromise. We demonstrate the results of the international cooperation of acoustic engineers and architects, and show how computer-aided modeling helped during all design phases to achieve the fine-tuned final
stage.
MONDAY AFTERNOON, 26 JUNE 2017
ROOM 208, 1:20 P.M. TO 3:00 P.M.
2pAAc
Architectural Acoustics: Perceptual Effects Related to Music Dynamics in Concert Halls
Tapio Lokki, Cochair
Computer Science, Aalto University, POBox 13000, Aalto 00076, Finland
Michelle C. Vigeant, Cochair
Graduate Program in Acoustics, The Pennsylvania State University, 201 Applied Science Bldg., University Park, PA 16802
Invited Papers
1:20
2pAAc1. How (and why) does every Concert Hall “wake up” differently? Eckhard Kahle, Evan Green, Fabian Knauber, Thomas
Wulfrank, and Yann Jurkiewicz (Kahle Acoust., Ave. Molière 188, Brussels 1050, Belgium, kahle@kahle.be)
Masking is a loudness-dependent, non-linear process. Conversely, the process of unmasking is significant when considering the perceptual effects of music dynamics: the audibility of reflections relative to the direct sound is loudness-dependent, as shown by Wettschurek in
the 1970s. To summarize his description of an orchestral crescendo: in pianissimo, all sources are clearly localized and the auditory image is fully frontal; then, as loudness increases, the room increasingly wakes up as reflections progressively become unmasked; and, finally, in full forte, the room is present and, ideally, the listener should be enveloped by sound. Wettschurek's description implies a linear increase
in spatial impression with loudness; however, the effect of musical dynamics is different in every hall, determined by the unique reflection
sequence at a particular listening position. As a consequence, different rooms can lose clarity at different loudness levels, not necessarily
only during saturation at fortissimo. Recent research by Lokki indicates that the signature of a room is determined by the details of the early
room response. The links between the signature of a room and the unmasking of reflections will be discussed.
1:40
2pAAc2. Links between spatial impulse response and binaural dynamic responsiveness in concert halls. Robert Essert (Sound
Space Vision, 2 Tay House, 23 Enterprise Way, London SW181FZ, United Kingdom, bob.essert@soundspacevision.com)
Recent research at Aalto University suggests that binaural dynamic responsiveness (BDR) in concert halls is (a) desirable and (b) a
result of interaction between the emphasis of higher overtones at stronger dynamic levels and the early response of the room; also that
tall, narrow, parallel-sided “shoebox” halls have a greater degree of BDR than wider non-shoebox geometries. We have been analyzing
the time evolution of spatial impulse responses and their connection to room geometry. In a recent paper, we looked at how wall tilt
affects the growth of lateral energy in the room. In this paper, we consider how the shape of forward integrated lateral energy in a spatial
impulse response may affect perceived strength, spaciousness, and binaural dynamic responsiveness.
2:00
2pAAc3. Towards a common parameter characterizing the dynamical responsiveness of a concert hall. Tapio Lokki (Comput. Sci.,
Aalto Univ., POBox 15500, Aalto 00076, Finland, Tapio.Lokki@aalto.fi) and Jukka Pätynen (Comput. Sci., Aalto Univ., Espoo, Finland)
Dynamic responsiveness is an important feature in room acoustics for making music more enjoyable. The concert hall should render
the most silent pianissimos audible everywhere, including the last row, yet support fully the loudest fortissimos. The realization of these
effects is impossible to quantify objectively by analyzing the measured impulse responses alone. Therefore, the analysis should be coupled with additional information related to music and dynamics. These factors can include the directivities of both sources and listeners, and the spectral changes in the source signals and in hearing sensitivity according to the sound level. With such effects combined with the information obtained from the conventional impulse response, the dynamic responsiveness could be objectively measured. This paper
presents a proposed analysis method which is then applied to a variety of measured concert halls. In addition, the results show what magnitude of differences in dynamic responsiveness could be found between concert halls.
2:20
2pAAc4. Effect of concert hall acoustics on tonal consonance of orchestra sound. Jukka Pätynen (Dept. of Comput. Sci., Aalto
Univ. School of Sci., Konemiehentie 2, Espoo FI02150, Finland, jukka.patynen@aalto.fi)
The sound of pitched orchestral instruments consists of harmonic frequencies which, in performances, are transmitted over room
acoustics. The amplitude relations of the harmonic peaks affect the timbre of one tone. When two or more notes are played together, the
effect of consonance and dissonance becomes prominent. The degree of consonance for intervals of musical pitches has been explained by the frequency separation of their harmonic components in relation to the width of critical bands in human hearing. When the playing dynamics are varied, the changes in the instruments' spectral envelopes are expected to also alter the consonance of simultaneous notes.
Furthermore, the room acoustics influence the overall harmonic spectra conveyed to the listeners. This paper presents experiments on
the tonal consonance of orchestra instruments at contrasting dynamic levels in various concert halls. By combining binaural hall measurements and anechoic instrument recordings with a consonance-estimating model, the following hypothesis is investigated: do the
acoustics of concert halls change the orchestra sound’s consonance in different music dynamics?
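The critical-band explanation of consonance referenced above can be illustrated with a Plomp-Levelt roughness model, here in Sethares' common parameterization. This is only a sketch of the general idea, not the paper's consonance-estimating model; the harmonic count, amplitude roll-off, and reference pitch are assumptions of the example.

```python
import numpy as np

def pl_dissonance(f, a):
    """Total Plomp-Levelt roughness of a set of partials (frequencies f,
    amplitudes a), using Sethares' parameterization of the dissonance curve."""
    f = np.asarray(f, float)
    a = np.asarray(a, float)
    i, j = np.triu_indices(len(f), k=1)          # all partial pairs
    fmin = np.minimum(f[i], f[j])
    df = np.abs(f[i] - f[j])
    s = 0.24 / (0.0207 * fmin + 18.96)           # critical-band frequency scaling
    return np.sum(a[i] * a[j] * (np.exp(-3.5 * s * df) - np.exp(-5.75 * s * df)))

def dyad_dissonance(f0, ratio, n_harm=6, rolloff=0.88):
    """Roughness of two complex tones, each with n_harm harmonics."""
    h = np.arange(1, n_harm + 1)
    f = np.concatenate([f0 * h, f0 * ratio * h])
    a = np.concatenate([rolloff**h, rolloff**h])
    return pl_dissonance(f, a)

semitone = dyad_dissonance(261.6, 2 ** (1 / 12))  # minor second on middle C
fifth = dyad_dissonance(261.6, 1.5)               # perfect fifth
```

With this model, a louder, brighter spectrum (slower roll-off) changes the pairwise partial amplitudes and hence the roughness, which is the mechanism the abstract invokes for dynamics-dependent consonance.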
2:40–3:00 Panel Discussion
MONDAY AFTERNOON, 26 JUNE 2017
ROOM 208, 3:20 P.M. TO 5:20 P.M.
2pAAd
Architectural Acoustics: Room Acoustics Design for Improved Behavior, Comfort, and Performance I
Nicola Prodi, Cochair
Dept. of Engineering, University of Ferrara, via Saragat 1, Ferrara 44122, Italy
Kenneth P. Roy, Cochair
Building Products Technology Lab, Armstrong World Industries, 2500 Columbia Ave., Lancaster, PA 17603
Invited Papers
3:20
2pAAd1. Acoustic comfort for hypermarket cashiers: Problems and solutions. Francesco Martellotta, Sabrina Della Crociata,
Antonio Simone, and Michele D’Alba (DICAR, Politecnico di Bari, Via Orabona 4, Bari, Bari 70125, Italy, francesco.martellotta@
poliba.it)
An acoustic investigation carried out in a large hypermarket pointed out that several acoustic “zones” could be identified. Among
them, the most critical was the checkout area, where cashiers were exposed to many noise sources like customers passing with shopping
trolleys, “beeps” from barcode readers, people’s voices, packaging and paging messages, and, above all, background music. In fact, as
the checkout position usually divides the hypermarket from the shopping arcade, workers in this area are further exposed to music (and
noise) from the nearby cafes, restaurants, and stores. As a result, the background noise level LA90 is about 66 dB, with peaks (LA10) of about 72 dB. Subjective analysis also reported the highest rate of complaints in this area. Starting from a detailed analysis of the different noise
sources and of the room acoustic conditions of the area (including reverberation time and speech transmission index measurements), a
set of mitigation actions is analyzed and discussed.
3:40
2pAAd2. Subjective and objective acoustical quality in healthcare office facilities. Murray Hodgson (UBC, SPPH - 2206 East Mall,
Vancouver, BC V6T1Z3, Canada, murray.hodgson@ubc.ca)
This paper discusses acoustical quality in 17 healthcare office facilities. A subjective survey assessed office worker perceptions of
their environments and satisfaction with the acoustics. Self-reported productivity, well-being, and health outcomes were also captured.
Satisfaction was lower with acoustics than with other aspects of IEQ. Satisfaction results were related to room type and the absence or
presence of a sound-masking system. Physical acoustical measurements were made in six types of rooms, some with sound-masking systems, to determine the acoustical characteristics, assess their quality and relate them to the building designs. Background-noise levels
were measured in the occupied buildings. In the unoccupied buildings, measurements were made of reverberation times, and “speech”
levels needed to calculate speech intelligibility indices for speech intelligibility and speech privacy. In open offices, sound-level reductions per distance doubling (DL2) were measured. The results are presented, and are related to room type and partition design. The knowledge gained from this study informs the decision-making of designers and facilities management for upgrades and future design projects.
4:00
2pAAd3. Advances in adjustable acoustics systems in large multi-use performing arts centers in the U.S. Mark Holden, Mathew
Nichols, and Carlos Rivera (Jaffe Holden Acoust., 114A Washington St., Norwalk, CT 06896, mholden@jaffeholden.com)
In the fall of 2016, Jaffe Holden opened three large (1800-2500 seat) multi-use halls with excellent acoustic reviews for all performances, from symphony to highly amplified popular music. New designs of halls in Salt Lake City, Little Rock, and South Texas prove that this uniquely American building type has dispelled the myth that multi-purpose is in fact “no purpose.” Acoustic modeling calculations will be compared and contrasted with completed-building measurements of EDT, RT30, C80, BR, and other criteria, proving that these halls are excellent quantitatively and qualitatively.
Contributed Papers
4:20
2pAAd4. Advances in room acoustics design for educational audio
studios. John Storyk (MPE, Berklee College of Music, 262 Martin Ave.,
Highland, NY 12528, john.storyk@wsdg.com)
Improved behavior, comfort, and performance are critical in creating
today’s audio recording and post production educational facilities. As these
facilities continue to grow and become more widespread in high school and
secondary learning institutions, the challenges that they present also continue to grow. Fundamental acoustic behavior remains paramount, but
unique considerations associated with the teaching aspect of these rooms
present some interesting issues. These include isolation issues, internal
room acoustic performance, and interfacing with teaching and industry commercial ergonomic needs (among others). This presentation will use the
recently completed Berklee College of Music Studio Complex (Boston,
MA) as its prime example and should be associated with a technical tour of
the studios provided during the conference.
4:40
2pAAd5. Acoustical design for a little big theater: Instituto Brincante.
Jose A. Nepomuceno (Acústica & Sônica, Rua Fradique Coutinho, 955 cjt 12, São Paulo, São Paulo 05433-000, Brazil, info@acusticaesonica.com.br)
The Brincante Institute, located in São Paulo, Brazil, is a space devoted to the study and re-creation of the multitudinous Brazilian artistic manifestations and cultural heritage. The space Brincante occupied from 1990 to 2014, with a theater, shops, and classrooms, was demolished to make way for a commercial building. A new building was designed and built for the “new” Brincante, thanks to donations from different groups and a crowdfunding campaign. The new house is a two-story construction that occupies a small lot of 200 m2. The design challenges were great: tight budget, quality expectations, noisy environment, broad use program, and close neighboring constructions. The range of performances includes acoustic and amplified music, dance, and choir. The challenges of the acoustic design were the small 100-seat theater and the rehearsal room, both requiring excellent acoustical conditions and high sound isolation due to the specifics of the location. The theater's reduced size helped to keep the audience close to the players and to preserve the sensation of envelopment and clarity. This paper describes the very simple, high-performance solutions used for sound isolation and acoustical conditioning. Artists and the public received the acoustics with great enthusiasm. The new home gives Brincante the opportunity to stay alive.
5:00
2pAAd6. Hospital noise mitigation. Felicia Doggett and Sooch San Souci
(Metropolitan Acoust., LLC, 40 W. Evergreen Ave., Ste. 108, Philadelphia,
PA 19118, f.doggett@metro-acoustics.com)
A review of the current literature concerning noise in hospitals, both public and peer-reviewed, points to a continual increase in noise complaints from medical staff and patients. One would expect things to be getting better by now, but this is not the case. Noise surveys that we have recently conducted suggest that although unsatisfactory conditions are widespread, a list of simple, cost-effective solutions has proven effective. This paper presents an array of actual projects detailing the development of remedies for noise annoyance in hospitals, in an effort to increase the comfort and performance of medical staff and patients.
MONDAY AFTERNOON, 26 JUNE 2017
BALLROOM A, 1:20 P.M. TO 4:20 P.M.
2pAAe
Architectural Acoustics and National Council of Acoustical Consultants:
Student Design Competition
Andrew N. Miller, Cochair
Bai, LLC, 4006 Speedway, Austin, TX 78758
David S. Woolworth, Cochair
Roland, Woolworth & Associates, LLC, 356 CR 102, Oxford, MS 38655
The Technical Committee on Architectural Acoustics of the Acoustical Society of America with support from the Robert Newman Student Award Fund, The Wenger Foundation, and the National Council of Acoustical Consultants is sponsoring the 2017 Student Design
Competition that will be professionally judged at this meeting.
Design Scenario: A small university has decided to open a multi-purpose facility. It will be located in a densely populated urban setting.
It is flanked on both long sides by neighboring buildings. It is to include an auditorium with a balcony, stage house, and orchestra pit.
The auditorium will be used as a meeting space and for the school’s drama and band programs as well as Broadway productions. The facility will also include a multipurpose rehearsal room which must have easy access to the stage. Music performed and rehearsed in this
facility will be chamber ensembles, soloists, jazz ensembles and concert band ensembles.
The submitted designs will be judged by a panel of professional architects and acoustical consultants. An award of USD$1250 will be
made to the submitter(s) of the design judged “first honors.” Four awards of USD$700 each will be made to the submitters of four entries
judged “commendation.”
MONDAY AFTERNOON, 26 JUNE 2017
ROOM 300, 1:15 P.M. TO 5:40 P.M.
2pABa
Animal Bioacoustics, Acoustical Oceanography, Education in Acoustics, and Underwater Acoustics:
Incorporating Underwater Acoustics Research into the Decision Making Process
Kathleen J. Vigness-Raposa, Cochair
Marine Acoustics, Inc., 2 Corporate Place, Suite 105, Middletown, RI 02842
Michael A. Ainslie, Cochair
Underwater Tech. Dept., TNO, P.O. Box 96864, The Hague 2509JG, Netherlands
Chair’s Introduction—1:15
Invited Papers
1:20
2pABa1. Incorporating basic underwater sound principles into the decision making process. Kathleen J. Vigness-Raposa (Marine
Acoust., Inc., 2 Corporate Pl., Ste. 105, Middletown, RI 02842, kathleen.vigness@marineacoustics.com), Gail Scowcroft, Christopher
Knowlton, and Holly Morin (Graduate School of Oceanogr., Univ. of Rhode Island, Narragansett, RI)
Research on underwater sound is continually advancing. New discoveries on sound in different environments and how sound exposure affects marine animals are just two examples of important ongoing research. To integrate underwater acoustics research into the
regulatory process, a fundamental understanding of basic sound principles is required for both producers and regulators. The University
of Rhode Island Graduate School of Oceanography teamed with Marine Acoustics, Inc., on the Discovery of Sound in the Sea (DOSITS)
project to provide scientifically accurate resources on the current knowledge about underwater sound. The project’s foundation is a comprehensive website (www.dosits.org). It synthesizes the latest peer-reviewed science on underwater sound in a form that is accessible to
a variety of audiences. The site has over 400 pages and is updated semi-annually with newly published information after a thorough
review by a panel of scientific experts. Based on the DOSITS website, this talk will provide background for the decision-making community on the characteristics of sound, underwater sound propagation, and appropriate measurement units. In addition, recent developments in the harmonization of sound modeling, measurement, and reporting will be discussed, highlighting the urgent need for
consistent metrics across all underwater sound disciplines.
1:40
2pABa2. Underwater ears and the physiology of impacts: Comparative liability for hearing loss in sea turtles, birds, and
mammals. Darlene R. Ketten (Biomedical Eng. and Otology and Laryngology, Boston Univ. and Harvard Med. School, CMST, GPO
Box 1987, Perth, Western Australia 6845, Australia, dketten@whoi.edu)
From human and laboratory animal studies, we have extensive knowledge about mechanisms of hearing loss from noise as well as
aging, trauma, genetic conditions, and disease. By contrast, little is known about causes of hearing loss in wild species. Although there
is great concern for anthropogenic acoustic impacts in marine species, especially for marine mammals, we are far from a clear understanding of the potential scope of impacts on the thousands of marine species. We have acquired significant data on hearing in pinnipeds,
cetaceans, and fish, but far less is known about hearing and possible impacts in turtles and seabirds and how or if they suffer irreparable
hearing loss. This talk will review what is known about hearing loss mechanisms from human and animal studies and, based on comparisons between land and marine ears, how the combined data can help us understand how hearing may be impaired in marine animals. It
will also examine the current evidence for natural loss processes (presbycusis, disease, and trauma) in marine animals and implications
for our ability to estimate and mitigate hearing loss from underwater sound.
2:00
2pABa3. Sound sources in the marine environment. Klaus Lucke (CMST, Curtin Univ., GPO Box U1987, Perth, WA 6845,
Australia, klaus.lucke@wur.nl)
Regulation of underwater sound requires a good understanding of the sound emitted from various sound sources into the marine
environment. Underwater sound sources can be subdivided into three main categories: geological, biological, and anthropogenic sources.
While regulation of underwater sound self-evidently applies solely to anthropogenic sound, contributions from the other categories need
to be taken into account when assessing the potential influence of man-made sound on the marine fauna. In addition to focusing on single
offshore operations, cumulative noise exposure and cumulative stressors are relevant aspects for the regulation of underwater sound and
need to be considered too. In this presentation, the main underwater sound contributors from each category will be identified, and the
related sounds characterized and compared in relation to the overall marine sound energy budget. Important implications with regard to
physical parameters of sound will be briefly discussed. Current approaches to measuring, monitoring, and modeling underwater sound
and how regulation can benefit from these different techniques will be reviewed.
2:20–3:20 Panel Discussion
3:20–3:40 Break
Contributed Papers
3:40
2pABa4. Implementing NOAA Fisheries’ 2016 Marine Mammal Acoustic Guidance: Challenges and lessons learned. Amy R. Scholik-Schlomer (NOAA Fisheries Service, 1315 East-West Hwy, SSMC3, Rm. 13605, Silver Spring, MD 20910, amy.scholik@noaa.gov)
The National Oceanic and Atmospheric Administration’s (NOAA) first
comprehensive Guidance addressing the effects of noise on marine mammal
hearing is intended for use by NOAA managers and applicants to better predict acoustic exposures that have the potential to trigger certain requirements
under various statutes (e.g., U.S. Marine Mammal Protection Act; Endangered Species Act). The Guidance was developed by compiling, interpreting,
and synthesizing scientific information on the effects of anthropogenic sound
on marine mammal hearing. The Guidance’s updated acoustic thresholds are
more sophisticated than our previous thresholds. This added complexity is an
important consideration for applicants who have formerly relied on simpler acoustic thresholds to evaluate potential impacts. Thus, the development
of user-friendly tools is a fundamental issue for the regulatory community
that is not often considered by most outside this group. As NOAA implements the Guidance, we have entered a new phase with its own inherent issues and challenges associated with the practicality of employing
more complex science to real-world applications. Throughout this process,
NOAA has learned several valuable lessons, which will help improve the process of updating this document as well as drafting future guidance (e.g., marine mammal behavioral guidance; guidance for other protected species).
4:00
2pABa5. Cumulative sound exposure levels—Insights from seismic survey
measurements. Bruce Martin, Jeff McDonnell (JASCO Appl. Sci., 32 Troop
Ave., Ste. 202, Dartmouth, NS B3B 1Z1, Canada, bruce.martin@jasco.com),
and Koen Broker (Shell Global Solutions, Rijswijk, Netherlands)
The weighted cumulative sound exposure level from man-made noise
sources is often recommended as a method of measuring possible injury to
marine life hearing. However, the behavior of this metric over large areas
and its evolution in time is poorly documented in the scientific literature.
Similarly, the differences in sound exposure levels as a result of changing
the frequency weighting functions are only vaguely discussed. In this presentation, we provide insights into the behavior of the SEL metric based on
measurements of several seismic surveys and simulations of seismic vessels
passing a recording location. Based on the real-world measurements, we
show how the range at which the surveys exceed the regulatory thresholds is
highly dependent on the weighting functions and threshold values.
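The cumulative SEL metric at the heart of this abstract can be illustrated with a minimal sketch. This is not the authors' code: the reference pressure, sampling rate, and test signal below are illustrative assumptions, and the frequency weighting the study emphasizes is omitted for brevity.

```python
import numpy as np

def cumulative_sel(pressure_pa, fs, p_ref=1e-6):
    """Running cumulative sound exposure level, dB re 1 uPa^2 s.

    SEL is 10*log10 of the time integral of squared pressure,
    referenced to p_ref^2 times 1 second."""
    dt = 1.0 / fs
    exposure = np.cumsum(pressure_pa ** 2) * dt   # running integral of p^2
    return 10.0 * np.log10(exposure / (p_ref ** 2 * 1.0))

# Synthetic check: a 1-s, 1 Pa RMS tone sampled at 10 kHz. Its total
# exposure is 1 Pa^2 s, i.e., 10*log10(1/1e-12) = 120 dB re 1 uPa^2 s.
fs = 10_000
t = np.arange(fs) / fs
p = np.sqrt(2.0) * np.cos(2 * np.pi * 100 * t)    # 1 Pa RMS tone
sel = cumulative_sel(p, fs)
```

The monotone growth of `sel` over a passing source is exactly the time-evolution behavior the abstract examines; a species weighting would simply filter `pressure_pa` before the integration.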
4:20
2pABa6. Taking into account uncertainties in environmental impact
assessment of underwater anthropogenic noise. Florent Le Courtois, G.
Bazile Kinda, and Yann Stephan (HOM, Shom, 13, rue du Chatellier BP
30316, Brest 29603, France, florent.le.courtois@shom.fr)
The management of underwater anthropogenic noise is becoming a
significant component of marine policies. Rules limiting noise exposure so that marine animals are not adversely affected are expected to be set up in the future. However, modeling or measuring underwater noise levels (or other suitable metrics) is still subject to considerable uncertainty, owing to the difficulty of estimating or measuring key parameters such as source pressure levels and waveguide features. On the other hand, the hearing threshold values inferred from bioacoustic studies may also lack statistical robustness and are difficult to generalize. The combination of these two major sources of uncertainty may lead to misestimation of the noise exposure risk and may hinder the efficiency of marine spatial planning. This work aims at developing a framework for impact studies that accounts for uncertainty in the acoustic metrics and confidence in the threshold values. It relies on a probabilistic description of the acoustic pressure, as proposed in several recent studies. The formulation allows results to be interpreted in terms of impact risk, considering exposure time and source distance. Taking the uncertainties into account then becomes a strong tool for decision support.
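The probabilistic framing described in this abstract can be sketched with a toy Monte Carlo calculation. Every distribution and level below is an illustrative assumption, not a value from the study; the point is only that uncertain source level, transmission loss, and effect threshold combine into a probability of exceedance rather than a yes/no verdict.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 100_000  # Monte Carlo draws

# Assumed (illustrative) distributions for the uncertain quantities:
sl = rng.normal(210.0, 3.0, N)         # source level, dB re 1 uPa @ 1 m
tl_at_r = rng.normal(60.0, 5.0, N)     # transmission loss at range r, dB
threshold = rng.normal(140.0, 4.0, N)  # uncertain effect threshold, dB

rl = sl - tl_at_r                      # received level for each draw
p_exceed = np.mean(rl > threshold)     # exceedance probability at range r
```

Repeating this over a grid of ranges (and integrating over exposure time) yields the kind of risk-versus-distance curve the abstract proposes for decision support.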
4:40–5:40 Panel Discussion
MONDAY AFTERNOON, 26 JUNE 2017
ROOM 313, 1:20 P.M. TO 5:20 P.M.
2pABb
Animal Bioacoustics: Data Management, Detection, Classification, and Localization
David K. Mellinger, Chair
Coop. Inst. for Marine Resources Studies, Oregon State University, 2030 SE Marine Science Dr., Newport, OR 97365
Contributed Papers
1:20
2pABb1. Organizing metadata from passive acoustic localizations of
marine animals. Marie A. Roch (Dept. of Comput. Sci., San Diego State
Univ., 5500 Campanile Dr, San Diego, CA 92182-7720, marie.roch@sdsu.
edu), Philip Miller (Scripps Inst. of Oceanogr., San Diego, CA), Tyler A.
Helble (Systems Ctr. Pacific, SPAWAR, San Diego, CA), Simone
Baumann-Pickering, and Ana Sirović
(Scripps Inst. of Oceanogr., La Jolla,
CA)
The Tethys system is a set of schemata designed to organize spatiotemporal data from detection, classification, and localization (DCL) tasks targeting sound producing marine animals. These metadata are based on the
analysis of recordings collected through passive acoustic monitoring. Tethys
is accompanied by a scientific workbench implementation of the schemata
rules. The system is designed to promote the retention and organization of
acoustic metadata in a manner that allows long-term retention of detailed
DCL outputs. We present recent work on the localization schemata showing
the ability to organize localizations from a large naval hydrophone range.
The Tethys implementation permits these data to be analyzed in the context
of Internet available data products containing oceanographic, atmospheric,
and ephemeris data. Moving beyond simple detections, this addition of
localizations to Tethys will allow even more powerful interpretation of the
spatiotemporal occurrence of marine animals in the context of their ecology.
1:40
2pABb2. Advanced methods for passive acoustic detection,
classification, and localization of marine mammals. David K. Mellinger,
Yang Lu, Curtis Lending (Coop. Inst. for Marine Resources Studies, Oregon
State Univ. and NOAA Pacific Marine Environ. Lab., 2030 SE Marine Sci.
Dr., Newport, OR 97365, David.Mellinger@oregonstate.edu), Jonathan
Klay (NOAA Pacific Marine Environ. Lab., Seattle, WA), David Moretti
(Naval Undersea Warfare Ctr. (NUWC) Div., Wakefield, RI), Susan M.
Jarvis (Naval Undersea Warfare Ctr. (NUWC) Div., Worcester, MA), Paul
M. Baggenstoss (Naval Undersea Warfare Ctr. (NUWC) Div., Newport,
RI), Stephen W. Martin (Environ. Dept., National Marine Mammal
Foundation, San Diego, CA), Marie A. Roch, Christopher A. Marsh
(Comput. Sci. Dept., San Diego State Univ., San Diego, CA), and Kaitlin E.
Frasier (Scripps Inst. of Oceanogr., Univ. of California, San Diego, La Jolla,
CA)
For effective long-term passive acoustic monitoring of large data sets,
automated algorithms provide the ability to detect, classify, and locate
(DCL) marine mammal vocalizations. Several DCL algorithms were developed and enhanced, with emphasis on methods robust to non-Gaussian and
non-stationary noise sources. (1) A subspace model was developed to separate odontocete clicks from noise sounds. (2) A multi-class support vector
machine was improved by resolving confusion among species’ overlapping-frequency clicks, and a beaked whale buzz class was developed. (3) For dolphin
whistles, shape-related features, extractable automatically, were shown to
carry species-specific information. (4) Equipment and site differences were
discovered to affect Gaussian mixture model classifiers, and methods were
developed to mitigate these differences. (5) A nearest-neighbor approach to
detection association and 3D localization across multiple phones with
multiple arrivals was developed (and applied to beaked whales) using time-difference-of-arrival (TDOA) hyperbolic methods, retaining TDOAs with
fewer than the usual three detections and using associations between a given
phone’s detections with nearest neighbors. (6) Minke whale “boing” frequency estimates were improved to differentiate individuals, and a kinematic tracking algorithm was developed. (7) A generalized-power-law
detector for humpback whales was improved. (8) A software interface for
detection was developed, then tested by sending data from Ishmael to a
detection process in MATLAB.
2:00
2pABb3. Assessing the effects of noise masking and transmission loss on
dolphin occupancy rates reported by echolocation click loggers
deployed on the eastern Scottish coast. Kaitlin Palmer (School of Biology,
Univ. of St. Andrews, Sir Harold Mitchell Bldg., St Andrews, Fife KY16
9TH, United Kingdom, kp37@st-andrews.ac.uk), Kate L. Brookes (Marine
Scotland Sci., Aberdeen, United Kingdom), and Luke Rendell (Univ. of St.
Andrews, St. Andrews, United Kingdom)
C-PODs are commercially available echolocation click loggers used to
monitor odontocete populations worldwide. Data from C-PODs have
directly contributed to high-profile conservation efforts as well as provided
major insights into cetacean behavior and habitat use. However, the “black-box” nature of the instruments poses a challenge to researchers seeking to
validate data from these instruments. In this study, we simulate how changes
in site-specific propagation conditions and ambient noise levels shift dolphin
occupancy rates as reported by the C-POD. As part of the ECoMASS array,
10 calibrated continuous recorders (SM2Ms) were co-deployed with C-PODs in the North Sea. Transmission loss profiles, assumed dolphin source levels, and published C-POD performance metrics were combined to estimate the relationship between detection probability and ambient noise level at the 10 study sites. Bayesian models were then used to estimate dolphin occupancy rates with and without accounting for differences in detection probability. While absolute occupancy rates differed when detection probability was accounted for, relative trends in occupancy were generally consistent between the two models. These data suggest that, within the scope of
the ECoMASS array, relative occupancy rates are somewhat robust to differences in transmission loss and ambient noise levels throughout the survey
period and location.
2:20
2pABb4. Variability in ground-truth data sets and the performance of
two automated detectors for Antarctic blue whale calls in different
soundscape conditions. Emmanuelle C. Leroy (Laboratoire GeoSci. Ocean,
Univ. of Brest, IUEM Technopole Brest Iroise, Rue Dumont d’Urville,
Plouzane 29280, France, emmanuelle.leroy@univ-brest.fr), Karolin
Thomisch, and Ilse Van Opzeeland (Ocean Acoust. Lab, Alfred Wegener
Institut, Bremerhaven, Germany)
Automated detectors are important tools for processing large passive
acoustic databases. Assessing the performance of a given method can be
challenging and needs to be interpreted in the light of the overall purpose of
analysis. Performance evaluation often involves comparison between the
2:40
2pABb5. Automatic classification of humpback whale social calls. Irina
Tolkova (Appl. Mathematics, Univ. of Washington, Durham, NH), Lisa
Bauer (Comput. Sci., Johns Hopkins, 401 N Coquillard Dr., South Bend, IN
46617, lbauer6@jhu.edu), Antonella Wilby, Ryan Kastner (Univ. of
California San Diego, San Diego, CA), and Kerri Seger (Univ. of New
Hampshire, La Jolla, CA)
Acoustic methods are an established technique to monitor marine mammal populations and behavior, but developments in computer science can
expand the current capabilities. A central aim of these methods is the automated detection and classification of marine mammal vocalizations. While
many studies have applied bioacoustic methods to cetacean calls, there has
been limited success with humpback whale (Megaptera novaeangliae)
social call classification, which has largely remained a manual task in the
bioacoustics community. In this project, we automated this process by analyzing spectrograms of calls using PCA-based and connected-component-based methods, and derived features from the relative power in the frequency
bins of these spectrograms. These features were used to train and test a
supervised Hidden Markov Model (HMM) algorithm to investigate classification feasibility. We varied the number of features used in this analysis by
varying the sizes of frequency bins. Generally, we saw an increase in precision, recall, and accuracy for all three classified groups, across the individual data sets, as the number of features decreased. We will present the
classification rates of our algorithm across multiple model parameters. Since
this method is not specific to humpback whale vocalizations, we hope it will
prove useful in other acoustic applications.
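The binned-spectrogram feature step described above can be sketched in a few lines. This is a stand-in for illustration only: the frame length, window, bin count, and time-pooling choice are assumptions, not the study's actual pipeline, and the HMM classifier that would consume these features is omitted.

```python
import numpy as np

def binned_spectral_features(x, fs, n_fft=256, n_bins=8):
    """Relative power in coarse frequency bins of a call's spectrogram.

    Frames the signal, takes a power spectrogram, pools over time,
    then sums power into n_bins frequency bands and normalizes."""
    hop = n_fft // 2
    frames = np.stack([x[i:i + n_fft] * np.hanning(n_fft)
                       for i in range(0, len(x) - n_fft + 1, hop)])
    spec = np.abs(np.fft.rfft(frames, axis=1)) ** 2   # power spectrogram
    pooled = spec.mean(axis=0)                        # pool over time
    edges = np.linspace(0, pooled.size, n_bins + 1).astype(int)
    feats = np.array([pooled[a:b].sum()
                      for a, b in zip(edges[:-1], edges[1:])])
    return feats / feats.sum()                        # relative power

# Sanity check: a 200-Hz test tone at fs = 8 kHz concentrates nearly
# all of its power in the lowest of the 8 bands.
fs = 8000
t = np.arange(fs) / fs
feats = binned_spectral_features(np.sin(2 * np.pi * 200 * t), fs)
```

Varying `n_bins` reproduces the experiment in the abstract of trading feature-vector length against classification performance.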
3:00–3:20 Break
3:20
2pABb6. Cepstral analysis of vocal tract resonance from Tree Swallow
(Tachycineta bicolor) songs during learning. Benjamin N. Taft (Landmark
Acoust. LLC, 1301 Cleveland Ave., Racine, WI 53405, ben.taft@
landmarkacoustics.com)
Bird vocalizations range in spectral composition from pure-tone whistles
through resonant bugles to noisy squawks. Birds differ from mammals in
the vibratory source of their vocalizations. Individuals in both groups use
muscular control of the resonances of their vocal tracts to affect the resulting
acoustic signal. Sensorimotor learning about this process plays an important
role in the learning of both human speech and avian song. It is hypothesized
that vocal tract resonances, as measured by cepstrum-based formant detectors, will show more variation both among and within calls early in the
learning process. This should be particularly true for birds whose songs consist of rapidly frequency-modulated pure tones. Field recordings of multiple
wild tree swallows (Tachycineta bicolor) are analyzed over the course of
the breeding season. The consistency of vocal tract resonances is compared
among birds of different ages and between newly- and previously-learned
song units.
3:40
2pABb7. Estimating geo-position and depth of echo-locating beaked
whales using an array of drifting recorders. Jay Barlow, Emily T.
Griffiths (Marine Mammal and Turtle Div., NOAA-NMFS-SWFSC, 8901
La Jolla Shores Dr., La Jolla, CA 92037, jay.barlow@noaa.gov), and Holger
Klinck (BioAcoust. Res. Program, Cornell Lab of Ornithology, Ithaca,
NY)
An array of 8 drifting recorders was deployed in the Catalina Basin off
Southern California to localize beaked whales. The drifting recorders with
hydrophone pairs at 90-135 m were deployed along two parallel lines with
~1 km separation between recorders. The array was re-deployed daily at
approximately the same location to maintain this array spacing. Cuvier’s
beaked whales (Ziphius cavirostris) were detected on 26 occasions from
their distinctive echo-location clicks. On 8 of these occasions, direct-path
and surface-reflected signals were received on four or more drifting recorders, which allowed us to estimate location and depth of the whales. Average
array tilt during these events was less than 0.2°, and always less than 0.6°.
The same echolocation clicks were seldom received on more than two
recorders, so we could not use methods that require TDOA measurements
between recorders. We developed a novel method of 3-D localization using the vertical bearing angles estimated from the direct- and surface-reflected signals and used optimization methods to find the unique location and depth
at which these angles converged. Detection ranges varied from 1.0 to 3.7
km (mean = 2.0, sd = 0.65), and depths of vocalizing animals varied from
696 to 1150 m (mean = 948, sd = 152).
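The direct/surface-reflected geometry behind this kind of localization can be sketched in a simplified two-dimensional form. This is an illustrative toy, not the authors' algorithm: the sound speed, recorder positions, grids, and the assumption of straight-line propagation with the reflected path modeled as an image source are all stand-ins.

```python
import numpy as np

C = 1490.0  # assumed nominal sound speed, m/s

def d_minus_r_delay(r, zs, zh):
    """Delay (s) between the surface-reflected and direct arrivals for a
    source at horizontal range r and depth zs, received at hydrophone
    depth zh. The reflection is modeled as an image source at -zs."""
    return (np.hypot(r, zs + zh) - np.hypot(r, zs - zh)) / C

def locate(recorder_x, hyd_depth, measured_delays, x_grid, z_grid):
    """Grid search for the source (x, depth) whose modeled delays best
    match the measured direct/reflected delays at every recorder."""
    best, best_err = None, np.inf
    for x in x_grid:
        for z in z_grid:
            model = [d_minus_r_delay(abs(x - xr), z, hyd_depth)
                     for xr in recorder_x]
            err = sum((m - d) ** 2 for m, d in zip(model, measured_delays))
            if err < best_err:
                best, best_err = (x, z), err
    return best

# Simulated check: whale at x = 1200 m, depth 900 m; four recorders on a
# line at 1-km spacing with hydrophones at 110 m depth.
recorders = [0.0, 1000.0, 2000.0, 3000.0]
true_x, true_z, zh = 1200.0, 900.0, 110.0
delays = [d_minus_r_delay(abs(true_x - xr), true_z, zh) for xr in recorders]
est = locate(recorders, zh, delays, np.arange(0.0, 3001.0, 50.0),
             np.arange(500.0, 1201.0, 50.0))
```

Each (delay, hydrophone-depth) pair constrains a curve in range-depth space, which is the 2-D analogue of the converging vertical-angle construction described in the abstract.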
4:00
2pABb8. Analysis of marine mammal bearing tracks from two-hydrophone recordings made with a glider. Elizabeth T. Küsel and Martin Siderius (Portland State Univ., 1900 SW 4th Ave., Portland, OR 97201, kusele@alum.rpi.edu)
An underwater glider fitted with two hydrophones recorded approximately 19 hours of data during an opportunistic sea experiment in the
summer of 2014. The acoustic data were collected with a sampling frequency of 96 kHz and 16-bit resolution in deep waters off the western coast
of the island of Sardinia, Mediterranean Sea. Detection and classification of
sounds by a trained human analyst indicated the presence of sperm whale
(Physeter macrocephalus) regular clicks as well as dolphin clicks and whistles. A period of 90 min during which the glider did not surface, and which
contained extensive sperm whale clicking activity, was chosen for analysis.
Cross-correlation of the data from both channels allowed the estimation of
the direction (bearing) of clicks, and realization of animal tracks. Several
bearing tracks were observed through this analysis, closely following the oscillatory pattern of the glider’s heading, suggesting that such information
has the potential to break the left-right ambiguity of the bearing estimates.
Results from the bearing tracking analysis, including accuracy and performance, will be shown, followed by a discussion of how they can aid population density estimation studies.
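The two-hydrophone bearing estimate rests on cross-correlating the channels for a time delay and converting it to a cone angle about the array axis; the left-right ambiguity the abstract discusses is inherent to that cone. A sketch, with an assumed spacing, sound speed, and synthetic impulsive click (none of these are the experiment's actual values):

```python
import numpy as np

def bearing_from_tdoa(x1, x2, fs, spacing_m, c=1500.0):
    """Conical bearing (degrees from the hydrophone-1 -> hydrophone-2
    axis) of a transient, from the cross-correlation delay."""
    xc = np.correlate(x1, x2, mode="full")
    # Peak lag, in samples, of channel 1 relative to channel 2.
    lag = np.argmax(np.abs(xc)) - (len(x2) - 1)
    tau = lag / fs                                  # delay t1 - t2, seconds
    cos_theta = np.clip(c * tau / spacing_m, -1.0, 1.0)
    return np.degrees(np.arccos(cos_theta))

# Synthetic endfire check: a click reaches hydrophone 2 first and
# hydrophone 1 later by spacing/c, so the bearing is 0 degrees (on-axis).
fs, spacing = 96_000, 1.0
delay = int(round(spacing / 1500.0 * fs))          # 64 samples
x1, x2 = np.zeros(1000), np.zeros(1000)
x2[100] = 1.0
x1[100 + delay] = 1.0
theta = bearing_from_tdoa(x1, x2, fs, spacing)
```

Tracking `theta` against the glider's oscillating heading is what lets successive cones intersect and, as the abstract suggests, break the left-right ambiguity.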
4:20
2pABb9. A comparison of whale tracking algorithms in the Indian
Ocean using the Comprehensive Test Ban Treaty Hydrophone Data.
David A. Lechner (Mech. Eng., The Catholic Univ. of America, 9404
Bethany Pl., Washington, DC 20064, 66Lechner@cua.edu), Shane Guan
(Mech. Eng., The Catholic Univ. of America, Silver Spring, MD), and
Joseph F. Vignola (Mech. Eng., The Catholic Univ. of America,
Washington, DC)
This paper will present results comparing several geo-location and tracking algorithms applied to whale signals as monitored by the Comprehensive
Test Ban Treaty (CTBT) sensor network. The performance of three tracking
algorithms is compared using several hours of acoustic data collected off
Cape Leeuwin, Australia, containing the apparent broadcast call of a blue
whale (Balaenoptera musculus). The first approach used cross-correlation in
detector output and a ground-truth data set, which often involves manual
analyses of the data. Such analyses may be subjective depending on, e.g.,
interfering background noise conditions. In this study, we investigated the
variability between two analysts in the detection of Antarctic blue whale Z-calls (Balaenoptera musculus intermedia), as well as the intra-analyst variability, in order to understand how this variability impacts the creation of a
ground-truth and the assessment of detector performances. Analyses were
conducted on two test datasets reflecting two basins and different situations
of call abundance and background noise conditions. Using a ground-truth
based on combined results of both analysts, we evaluated the performances
of two automated detectors, one using spectrogram correlation and the other
using a subspace-detection strategy. This evaluation makes it possible to understand
how recording sites, vocal activity, and interfering sounds affect the detector
performances and highlights the advantages and limitations of each of the
methods, and the possible solutions to overcome the main limitations.
time on the entire signal, while the second used a smaller time sample that
covers the leading or trailing edge of the signal and creates a motion vector
from the pair. The third approach used the Cross-Ambiguity-Function Mapping technique to generate 2-D energy maps on a geographic plot. Accuracy
and energy detection algorithms based on blind processing of the data are
compared.
4:40
2pABb10. Bat population censusing with passive acoustics. Laura
Kloepper, Yanqing Fu, and Joel Ralston (Biology, Saint Mary’s College,
262 Sci. Hall, Saint Mary’s College, Notre Dame, IN 46556, lkloepper@
saintmarys.edu)
Passive acoustic monitoring is a widely used method to identify bat species and determine spatial and temporal activity patterns. One area where
acoustic methods have not yet been successfully applied, however, is in
determining population counts, especially from roosts. Typically, most roost
counts are obtained with thermal imagery that may be prohibitively expensive for many natural resource managers or require complex computer programming. Here, we demonstrate a new acoustic technique to estimate
population size of Brazilian free-tailed bats (Tadarida brasiliensis) emerging from large cave colonies. Data were acquired across multiple nights and
at 9 cave locations with different roost structures and flight behavior profiles. We used a single microphone to monitor echolocation activity and
simultaneously recorded the emerging bats with thermal video. Bat abundance counts were determined from a single video frame analysis (every 10
s) and were compared to different acoustic energy measures of a 1-s long
acoustic sequence recorded at the time of the analyzed video frame. For
most cave locations, linear regression models successfully predicted bat
emergence count based on acoustic intensity of the emerging stream. Here,
we describe our method and report on its application for counting bats from
different roost locations.
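The regression-based census calibration described above can be sketched as follows. All numbers here are synthetic stand-ins: the real study pairs thermal-video frame counts with measured acoustic intensity, whereas this toy simply fits and applies the same linear model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic calibration data standing in for thermal-video bat counts
# paired with the acoustic intensity of 1-s echolocation sequences.
intensity_db = rng.uniform(70.0, 110.0, size=40)   # received level, dB
true_slope, true_intercept = 12.0, -700.0          # hypothetical values
counts = (true_slope * intensity_db + true_intercept
          + rng.normal(0.0, 20.0, size=40))        # noisy "video" counts

# Fit the linear model count = a * intensity + b, as in a simple
# regression-based census calibration.
a, b = np.polyfit(intensity_db, counts, deg=1)

def predict_count(level_db):
    """Predicted emergence count from acoustic intensity alone."""
    return a * level_db + b
```

Once calibrated at a roost, `predict_count` lets acoustic monitoring alone stand in for the more expensive thermal-video counts on subsequent nights.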
5:00
2pABb11. Evaluating autonomous underwater vehicles as platforms for
animal population density estimation. Danielle Harris (Ctr. for Res. into
Ecological and Environ. Modelling, Univ. of St. Andrews, The Observatory,
Buchanan Gardens, St. Andrews KY16 9LZ, United Kingdom, dh17@st-andrews.ac.uk), Selene Fregosi (Cooperative Inst. for Marine Resources
Studies, Oregon State Univ. and NOAA Pacific Marine Environ. Lab.,
Newport, OR), Holger Klinck (BioAcoust. Res. Program, Cornell Lab of
Ornithology, Cornell Univ., Ithaca, NY), David K. Mellinger (Cooperative
Inst. for Marine Resources Studies, Oregon State Univ. and NOAA Pacific
Marine Environ. Lab., Newport, OR), Jay Barlow (Marine Mammal and
Turtle Div., NOAA Southwest Fisheries Sci. Ctr., La Jolla, CA), and Len
Thomas (Ctr. for Res. into Ecological and Environ. Modelling, Univ. of St.
Andrews, St. Andrews, United Kingdom)
AFFOGATO (A Framework For Ocean Glider-based Acoustic density
estimation) is a multi-year project (2015-2018) funded by the Office of
Naval Research. Its main goal is to investigate the utility of slow-moving
marine vehicles, particularly ocean gliders and profiling floats, for animal
density or abundance estimation, using the passive acoustic data that these
vehicles can collect. In this presentation, we will (1) provide a project overview and (2) share results from the initial stages of the project. As part of
one task, existing deployments in the Gulf of Alaska, Hawaii, and the
Mariana Islands have been used to investigate the capability of gliders to
adhere to planned survey tracks. Simulations were also conducted to assess
whether realized glider survey track lines could produce unbiased density
estimates using two hypothetical animal distributions and assuming so-called design-based analysis methods (the standard, and also simplest,
approach). Five deployments were assessed, and deviations of up to 20 km
from the planned survey track line were found. Under the specific simulated
conditions, density estimates showed biases up to 9%. Next stages of the
project will also be discussed, including ongoing work to estimate the probability of detecting different cetacean species using AUVs.
Acoustics ’17 Boston
3606
MONDAY AFTERNOON, 26 JUNE 2017
ROOM 310, 1:20 P.M. TO 5:40 P.M.
2pAO
Acoustical Oceanography: Session in Honor of David Farmer II
Andone C. Lavery, Cochair
Applied Ocean Physics and Engineering, Woods Hole Oceanographic Institution, 98 Water Street, MS 11, Bigelow 211,
Woods Hole, MA 02536
Grant B. Deane, Cochair
Marine Physical Lab., Univ. of California, San Diego, 13003 Slack St., La Jolla, CA 92093-0238
Tim Leighton, Cochair
Institute of Sound and Vibration Research, University of Southampton, Southampton, United Kingdom
2p MON. PM
Invited Paper
1:20
2pAO1. Subtleties in the acoustics of marine sediments. Michael J. Buckingham (Scripps Inst. of Oceanogr., Univ. of California, San
Diego, 9500 Gilman Dr., La Jolla, CA 92093-0238, mbuckingham@ucsd.edu)
Based on the pioneering work of David Farmer, it is now well recognized that the layer immediately beneath the sea surface is a
highly dynamic, two-phase region, where bubbles created by wave breaking form an upward refracting sound speed profile that acts as
an acoustic waveguide. The bottom boundary, although less dynamic than the sea surface, possesses its own unique complexities that
are no less challenging to understand than those of the near-surface bubble layer. For instance, a growing body of experimental evidence
indicates that the acoustic attenuation in a marine sediment obeys a frequency power law, extending over a wide bandwidth, in which
the exponent takes a value close to unity. A long-standing problem has been to identify the frequency dispersion in the sound speed associated with such a power-law attenuation. Several solutions to this problem have been proposed over recent decades but are in fact
unphysical in that they fail to obey the Kramers-Kronig dispersion relations. An alternative solution that has recently been developed,
which overcomes the previous difficulties, contains a number of subtleties that, it is hoped, will appeal to David Farmer’s keen sense of
scientific curiosity. [Research supported by ONR.]
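The power-law attenuation and its causal dispersion partner can be stated compactly; the following is the standard nearly-local Kramers-Kronig result for an exponent of exactly unity (a textbook form, not the new solution described in the abstract):

```latex
\alpha(\omega) = \alpha_0\,\omega^{\,n}, \qquad n \approx 1,
\qquad\text{and, for } n = 1,\qquad
\frac{1}{c(\omega)} = \frac{1}{c(\omega_0)} - \frac{2\alpha_0}{\pi}\,\ln\frac{\omega}{\omega_0},
```

so any proposed sound-speed dispersion accompanying such an attenuation law must reduce to a causal pair of this kind to be physically admissible.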
Contributed Papers
1:40
2pAO2. A method of estimating the value of in situ surface tension on a bubble wall. Mengyang Zhu, Tim Leighton (Inst. of Sound and Vib. Res., Eng. and the Environment, Univ. of Southampton, University Rd., Southampton, Hampshire SO17 1BJ, United Kingdom, M.Zhu@soton.ac.uk), and Peter Birkin (Chemistry, Univ. of Southampton, Southampton, Hampshire, United Kingdom)
The surface tension of a liquid is an important parameter for estimating and analyzing the processes that happen at the air/liquid interface, such as air/sea gas exchange. Current methods of measuring surface tension concentrate on measuring its value at the top, flat air/liquid interface. However, in cases where bubbles mediate oceanic processes (such as their contributions to air-to-sea transfers of mass, energy, and momentum), the value of surface tension that is needed (e.g., for placement in models of the evolution and persistence of sub-surface bubble clouds) is the instantaneous value on the bubble wall, as the bubble moves through the ocean and potentially collects surface-active species onto its wall. This paper outlines a method of estimating the value of this in situ surface tension by insonifying a bubble and observing the onset of Faraday waves on the bubble wall. This new method was compared with a traditional ring method in various scenarios.
2:00
2pAO3. Experimental observations of acoustic backscattering from spherical and wobbly bubbles. Alexandra M. Padilla, Kevin M. Rychert, and Thomas C. Weber (Ctr. for Coastal and Ocean Mapping, 24 Colovos Rd., Durham, NH 03824, apadilla@ccom.unh.edu)
Methane bubbles released from the seafloor transport gas through the water column to the atmosphere. Direct or optical methods via underwater vehicles are often used for quantifying methane gas flux in the water column; however, these methods are time-consuming and expensive. Acoustic measurements, using split-beam and multibeam echo sounders that are readily available on most sea vessels, provide a more efficient method for determining methane gas flux. These acoustic methods typically convert acoustic backscatter measurements of bubbles to bubble size using analytical models of bubble target strength. These models assume that bubbles have uniform shape; however, it has been shown that bubbles with a radius greater than 1 mm, which have large Eötvös and/or large Reynolds numbers, are non-spherical. To investigate the error associated with assuming large bubbles are spherical, a 6 m deep tank experiment was conducted to compare calibrated target strength measurements of both small spherical and large wobbly bubbles to existing acoustic scattering models. Bubble sizes observed in this experiment ranged from a fraction of 1 mm to 6 mm in radius. This experiment used a broad range of frequencies (10-300 kHz) to cover typical echo sounder frequencies utilized in field measurements of natural methane seeps.
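The analytical target-strength models referred to above are typically built around the Minnaert resonance of a spherical bubble; a minimal sketch of that starting point (standard textbook formula with illustrative parameter values, not the models used in the experiment):

```python
import numpy as np

def minnaert_frequency(radius_m, depth_m=0.0, gamma=1.4,
                       rho=1025.0, p_atm=101.325e3, g=9.81):
    """Resonance frequency (Hz) of a spherical gas bubble (Minnaert, 1933).

    Standard textbook formula, neglecting surface tension and vapor
    pressure; the seawater density and other values are illustrative.
    """
    p0 = p_atm + rho * g * depth_m          # hydrostatic pressure at depth
    return np.sqrt(3.0 * gamma * p0 / rho) / (2.0 * np.pi * radius_m)

# A 1-mm-radius bubble near the surface resonates at roughly 3.2 kHz,
# well below the 10-300 kHz band quoted above, so those frequencies probe
# such a bubble in its geometric rather than resonant scattering regime.
print(minnaert_frequency(1e-3))
```

Resonance frequency rises with depth (through the hydrostatic term) and falls inversely with radius, which is why acoustic inversion from backscatter to bubble size is so sensitive to the assumed bubble shape and model.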
2:20
2pAO4. Low-frequency active acoustic response of underwater bubble plumes. Marcia J. Isakson (Appl. Res. Labs., The Univ. of Texas at Austin, 10000 Burnet Rd., Austin, TX 78713, misakson@arlut.utexas.edu), Zel J. Hurewitz (Dept. of Phys., The Univ. of Texas at Austin, Austin, TX), and Paul M. Abkowitz (George W. Woodruff School of Mech. Eng., Georgia Inst. of Technol., Atlanta, GA)
Active, low-frequency acoustics may provide a method of locating and identifying methane bubble plumes over a large area by exciting radial and column resonances. In this work, a proof-of-concept experiment was conducted by measuring the active, low-frequency response of an air bubble plume at the Lake Travis Test Station at the University of Texas at Austin. Several flow rates, bubble sizes, and plume widths were investigated. The test consisted of insonifying the plumes at two different standoff distances using a low-frequency chirp from 300-2500 Hz. The plume response was much longer in time than originally expected due to resonant ringing. The long-time response was modeled with a finite element model, which predicted the column resonance. This behavior may be exploitable for long-range bubble plume identification. [Work supported by ExxonMobil.]
2:40
2pAO5. Efficacy of up- and down-chirps for two-pulse sonar techniques. Nikhil Mistry, Paul White, and Tim Leighton (Inst. of Sound & Vib. Res., Southampton Univ., Bldg. 13, Rm. 3049, University Rd., Southampton, Hampshire SO17 1BJ, United Kingdom, nm6g09@soton.ac.uk)
Twin-Inverted Pulse Sonar (TWIPS) and Biased Pulse Summation Sonar (BiaPSS) were inspired by a video of dolphins using bubble nets to hunt. A number of animals have been observed to use the chirp (a sinusoidal sweep of shifting frequency) in echolocation. This talk will present the difference in efficacy of the sonar techniques mentioned above when using upwards- and downwards-sweeping chirps as sonar signals. These techniques have been simulated and experimentally tested with a number of bubble size distributions (BSDs), including simulations using the results of BSD measurements by Farmer and Vagle (1989). Further, another two-pulse technique will be outlined, proposed as a means to overcome the limitation on detection range owing to the inter-pulse delay in TWIPS and BiaPSS.
Invited Papers
3:00
2pAO6. An acoustic study of sea ice behavior in a shallow, Arctic bay. Oskar Glowacki (Inst. of Geophys., Polish Acad. of Sci.,
Ksiecia Janusza 64/413, Warsaw 01-452, Poland, oglowacki@igf.edu.pl)
Recent acceleration of sea ice decline observed in the Arctic Ocean draws attention to environmental factors driving this phenomenon. One of the main conclusions is a growing need for better understanding of sea ice drift, deformation, and fracturing. In response to
that call, several ambient noise recordings were carried out in the coastal zone of Hornsund Fjord, Spitsbergen, in spring 2015 to study
underwater acoustic signatures of sea ice behavior. The noise levels varied significantly with sea ice type and intensity of external forces.
Low-frequency signatures were strongly related to the tidal cycle, which manifested in much higher SPL values at low water. Compacted ice cover is periodically deformed and crushed, representing a significant contribution to the ambient noise field in the study site. Average noise levels at frequencies above 1 kHz are, in turn, considerably higher in front of a marine-terminating glacier than in the neighboring, non-glacial bay. These differences, which expand with rising water temperature, are associated with melting of the ice cliff and generally unaffected by the presence of sea ice. [Work funded by the Polish National Science Centre, Grant No. 2013/11/N/ST10/01729.]
3:20–3:40 Break
3:40
2pAO7. The masking of beluga whale (Delphinapterus leucas) sounds by icebreaker noise in the Arctic. Christine Erbe (Ctr. for
Marine Sci. & Technol., Curtin Univ., Kent St., Bentley, WA 6102, Australia, c.erbe@curtin.edu.au)
Beluga whales are an Arctic and subarctic cetacean, with an overall “near threatened” conservation status, yet some populations are
considered endangered. Apart from threats such as whaling, predation, contamination, and pathogens, underwater noise is of increasing
concern. In the early 1990s, Fisheries & Oceans Canada started to fund research on underwater noise emitted by icebreakers and its bioacoustic impacts. In collaboration with the Vancouver Aquarium, beluga whales were trained for masked hearing experiments. Apart
from measuring pure-tone audiograms in quiet conditions, animals were trained to listen for beluga vocalizations in different types of
noise, including artificially created white noise, naturally occurring thermal ice-cracking noise, and an icebreaker’s propeller cavitation
and bubbler system noise. Based on these data, software models for masking in beluga whales were developed. More than 20 years later,
this dataset remains the only one on masking in cetaceans using both complex signals (actual vocalizations) and complex noise (actual
recordings of Arctic ambient and anthropogenic noise)—highlighting both Dave Farmer’s foresight as well as perhaps his lesser-known
escapades into marine mammal bioacoustics.
4:00
2pAO8. The upper ocean ambient sound field as a tool to address significant scientific and societal questions. Svein Vagle
(Fisheries and Oceans Canada, Inst. of Ocean Sci., 9850 West Saanich Rd., Sydney, BC V8L4B2, Canada, Svein.Vagle@dfo-mpo.gc.
ca)
David Farmer’s keen interest in understanding the dynamics of the upper-ocean and air-sea interaction convinced him that studying
and using the naturally occurring high-frequency oceanic sound field would give additional insight into these processes. Breaking surface waves are important to ocean dynamics and were believed to be a significant source of the observed sound field. However, direct
measurements were lacking. In the mid-1980s and onwards, David Farmer and his students developed a range of new observational techniques and instrumentation which would significantly improve our understanding of the sound field itself and how it relates to
parameters such as wind speed and wave conditions, and as a tool to understanding the more fundamental physical processes of the
upper ocean. All this research resulted in significant progress in our understanding of wave breaking, air-sea interaction, air-sea gas
transfer, and indirectly to upper-ocean bubble distributions and dynamics, ice generated sound, and the role of anthropogenic noise on
marine fauna. Here, we review key outcomes of this research and discuss how several components of Farmer et al.'s work are now being used to address significant societal issues with regard to the impacts of increasing levels of man-made underwater noise on marine life.
4:20
2pAO9. Passive and active acoustical studies of ocean surface waves. Li Ding (Vitech Res. and Consulting, 6280 Doulton Ave.,
Richmond, BC V7C4Y4, Canada, lding2011@gmail.com)
This paper reviews previous work with Professor David Farmer, on the use of acoustical techniques to observe and measure ocean
surface waves. In passive acoustics, breaking surface waves in the open ocean were observed with a hydrophone array deployed close to
the surface to track individual breaking events. The spatial and temporal statistics of breaking events, such as velocity and breaking probability, were determined and compared with simultaneously measured directional wave spectra. The comparisons suggest that wave breaking occurs at multiple scales and that the mean scale of breaking is substantially smaller than that associated with the dominant wind wave component. In an active acoustical study, an incoherent bistatic sonar mounted on the seafloor was used to measure currents close to the ocean surface and within the crests of large, steep waves. Individual estimates of the currents at, and close to, the surface were made with sufficient temporal resolution to identify kinematics in the crests of large waves. Observations acquired in the North Sea are examined to evaluate both the potential merits and limitations of the measurement approach. The observations lead to some conclusions regarding wave kinematics during a storm in which the wind speed reached 17 m s⁻¹.
Contributed Papers
4:40
2pAO10. What can we learn from breaking wave noise? Grant B. Deane (Marine Physical Lab., Univ. of California, San Diego, 13003 Slack St., La Jolla, CA 92093-0238, gdeane@ucsd.edu)
Breaking waves are an important process at the air-sea interface: they limit the growth of waves, transfer momentum between the atmosphere and ocean, generate marine aerosols, change ocean albedo, and enhance the transport of greenhouse gases across the air-sea interface. However, they are a challenging phenomenon to study in the field, and much is yet to be learnt about the transient, two-phase flow inside whitecaps. Acoustical oceanography has much to offer as a remote sensing tool for studying wave breaking, and has been exploited to great effect by David Farmer and his colleagues over the past three decades. I will cover some of the highlights of this fascinating subject, including recent developments to study air entrainment and turbulence in breaking waves using radiated wave noise. [Work supported by ONR, Ocean Acoustics Division, and NSF.]
5:00
2pAO11. Insights from acoustical oceanography: A personal assessment. David Farmer (Inst. of Ocean Sci., Vancouver, BC, Canada, farmer.davidm@gmail.com)
New ways of looking at the natural environment often lead to new insights. This is true of the ocean, and especially so in the rapidly developing field of acoustical oceanography. This will be illustrated with examples drawn from personal experience over the past 3-4 decades, having application to a range of phenomena from coastal processes to air-sea interaction, and from stratified flow over topography to sea ice behavior. Much is owed to the Acoustical Society's welcoming approach to oceanographers, to the skill and enthusiasm of our students, colleagues, and staff, and to the invaluable support of our sponsors.
5:20–5:40 Panel Discussion
MONDAY AFTERNOON, 26 JUNE 2017
ROOM 312, 1:15 P.M. TO 6:00 P.M.
2pBA
Biomedical Acoustics and Physical Acoustics: Beamforming and Image Reconstruction
Martin D. Verweij, Cochair
Acoustical Wavefield Imaging, Delft University of Technology, Lorentzweg 1, Delft 2628CJ, Netherlands
Hendrik J. Vos, Cochair
Biomedical Engineering, Erasmus MC, Rotterdam, Netherlands
Chair’s Introduction—1:15
Invited Papers
1:20
2pBA1. Fast compressive pulse-echo ultrasound imaging using random incident sound fields. Martin F. Schiffner and Georg
Schmitz (Medical Eng., Ruhr-Univ. Bochum, Universitätsstr. 150, Bochum 44801, Germany, martin.schiffner@rub.de)
In fast pulse-echo ultrasound imaging (UI), the image quality is traded off against the image acquisition rate by reducing the number
of sequential wave emissions per image. To alleviate this tradeoff, the concept of compressed sensing (CS) was proposed by the authors
in previous studies. CS regularizes the linear inverse scattering problem (ISP) associated with fast pulse-echo UI by postulating the existence of a nearly-sparse representation of the object to be imaged. This representation is obtained by a known linear transform, e.g., the
Fourier or a wavelet transform. A central degree of freedom in the regularized ISP is the choice of the incident sound fields. Previous
studies focused exclusively on steered plane waves. In this study, we investigate the usage of random incident sound fields to improve
the relevant mathematical properties of the scattering operator governing the linear ISP. These sound fields are synthesized by a linear
transducer array whose physical elements are excited by applying combinations of random time delays and random apodization weights.
Using simulated and experimentally obtained radio frequency signals, we demonstrate that these sound fields significantly reduce the recovery errors and improve the rate of convergence for low signal-to-noise ratios.
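The sparsity-regularized inverse problem described above can be illustrated with a toy recovery; the following sketch uses a random Gaussian matrix as a stand-in for the scattering operator and plain ISTA as the solver (the dimensions and the solver choice are assumptions for illustration, not details from the abstract):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: a random "measurement" matrix A stands in for the scattering
# operator, and a k-sparse vector x for the transform coefficients.
m, n, k = 60, 200, 5
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = A @ x_true                              # noiseless measurements

def ista(A, y, lam=0.02, n_iter=500):
    """Iterative shrinkage-thresholding for min ||Ax - y||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2           # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = A.T @ (A @ x - y)               # gradient of the data-fit term
        z = x - g / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return x

x_hat = ista(A, y)
print(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))  # small relative error
```

The recovery quality hinges on the mathematical properties of `A` (e.g., incoherence of its columns), which is exactly what the randomized incident fields in the abstract are designed to improve relative to steered plane waves.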
1:40
2pBA2. Model-based image reconstruction for medical ultrasound. Pieter Kruizinga (Biomedical Eng., Erasmus MC, Westzeedijk
353, BME Ee3202, Rotterdam 3015AA, Netherlands, p.kruizinga@erasmusmc.nl), Pim van der Meulen (Circuits and Systems, Delft
Univ. of Technol., Delft, Netherlands), Frits Mastik, Nico de Jong, Johannes G. Bosch (Biomedical Eng., Erasmus MC, Rotterdam,
Netherlands), and Geert Leus (Circuits and Systems, Delft Univ. of Technol., Delft, Netherlands)
Most techniques that are used to reconstruct images from raw ultrasound signals are based on pre-defined geometrical processing.
This type of image reconstruction typically has a low computational complexity and allows for real-time visualization. Since these techniques do not account for situation-specific parameters such as transducer characteristics and medium inhomogeneities, they cannot
make proper use of the information that is contained in the raw ultrasound signals. In this paper, we explore the possibility of reconstructing images that best explain the measured ultrasound signals given the full ultrasound propagation model including all parameters.
We build this model by measuring the spatiotemporal impulse response of the imaging transducer and, using the angular spectrum
approach, estimate the ultrasound signal as it would originate from each individual image pixel position. An iterative search for the pixel
combination that best explains the recorded signals provides the final image. We discuss the details of this model, provide experimental
proof that this reconstruction allows for improved image quality, and extend our ideas to other imaging schemes such as compressive
imaging.
2:00
2pBA3. Beamforming methods for large aperture imaging. Gregg Trahey, Nick Bottenus (Biomedical Eng., Duke Univ., 136
Hudson Hall, Box 90281, Durham, NC 27708, gregg.trahey@duke.edu), and Gianmarco Pinton (Biomedical Eng., Univ. of North
Carolina, Chapel Hill, NC)
Maintaining image quality at large tissue depths remains a clinically significant and unmet challenge for ultrasonic scanners. For tissue structures beyond 10 cm, commonly encountered in obstetric and abdominal scans, diffraction and propagation through tissue can
limit azimuthal resolution to worse than 5 mm, while elevation resolution can exceed a few centimeters, making the evaluation of sub-centimeter fetal anatomical features or renal or hepatic lesions very difficult. We describe simulation and ex vivo human tissue studies
which evaluate the image quality achievable with large aperture arrays and with associated beamforming methods. We imaged through
human abdominal tissue layers and synthetically formed very large (2 cm × 10 cm) coherent apertures. We also performed
matched simulations using Visible Human Project-derived tissue models and full-wave simulation code. Using both datasets, we
assessed: (1) the image quality improvements attainable with large arrays, (2) the factors degrading the image quality of deep-lying tissues, and (3) the beamforming methods best suited for large array imaging of deep tissues. Our results indicate that large improvements
in resolution are obtainable for deep-lying tissues when imaging with large arrays. The major source of tissue-induced image degradation was observed to be clutter due to reverberation and beamforming limitations, rather than phase errors. Coherence-based beamforming methods were seen to be especially applicable in large array imaging.
2:20
2pBA4. Reverberation suppression and enhanced sensitivity by coherence-based beamforming. Jeremy J. Dahl (Radiology,
Stanford Univ., 3155 Porter Dr., Palo Alto, CA 94304, jeremy.dahl@stanford.edu)
Diffuse acoustic reverberation is often present in poor quality ultrasound images and can mask organs and anatomical structure. In
addition, imaging methods such as blood flow and targeted microbubble imaging can be problematic in the presence of reverberation
and thermal noise. We present a beamforming method that is based on the spatial coherence of backscattered ultrasound waves. The
method differentiates signal from noise based on the spatial coherence of the signal at small spatial differences (or lags), and is therefore called the Short-Lag Spatial Coherence (SLSC) beamformer. Because diffuse reverberation and thermal noise are spatially incoherent, they can easily be distinguished from tissue, blood, and other signals of interest. We present SLSC beamforming and its
applications to cardiac and other imaging targets, tissue harmonic imaging, flow imaging, and molecular imaging. We show that the
technique improves the visibility of organ structures and the sensitivity of flow and molecular imaging targets by suppression of noise
signals. We demonstrate a real-time SLSC beamforming prototype system that achieves upwards of 35 fps in cardiac imaging and 50 fps
in molecular imaging. The system demonstrates high-quality, stable images of in vivo organs and targets using the SLSC beamformer.
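The short-lag coherence sum at the heart of the SLSC beamformer can be sketched as follows; the toy channel data, kernel size, and lag count are illustrative assumptions, not the prototype system described in the abstract:

```python
import numpy as np

def slsc_value(channels, max_lag=8):
    """Short-lag spatial coherence for one axial kernel of focused channel data.

    channels: 2-D array (n_elements, n_samples) of delayed (focused) channel
    signals for one image point. Returns the sum over lags 1..max_lag of the
    lag-averaged normalized inter-element correlation (the SLSC pixel value).
    """
    n_el = channels.shape[0]
    total = 0.0
    for m in range(1, max_lag + 1):
        r = 0.0
        for i in range(n_el - m):           # all element pairs at lag m
            a, b = channels[i], channels[i + m]
            r += np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
        total += r / (n_el - m)             # average correlation at this lag
    return total

# A coherent signal (identical across elements) yields SLSC close to max_lag,
# while spatially incoherent noise yields a value near zero.
rng = np.random.default_rng(1)
sig = np.tile(np.sin(np.linspace(0, 4 * np.pi, 64)), (32, 1))
noise = rng.standard_normal((32, 64))
print(slsc_value(sig), slsc_value(noise))
```

This contrast between coherent-signal and incoherent-noise outputs is what lets the beamformer suppress diffuse reverberation and thermal noise, as the abstract describes.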
2:40
2pBA5. Ultrafast ultrasound imaging temporal resolution enhancement with filtered delay multiply and sum (FDMAS)
beamforming. Asraf Moubark, Zainab Alomari, Sevan Harput, David Cowell, and Steven Freear (School of Electron. Eng., Univ. of
Leeds, Leeds LS2 9JT, United Kingdom, s.freear@leeds.ac.uk)
The FDMAS beamforming technique has been employed with a low number of steering angles and smaller lateral steps in order to improve image quality for better cyst classification. Taking advantage of the autocorrelation process in FDMAS, lateral steps were reduced in order to calculate the time delay for the RF signal more accurately. The new beamforming technique has been tested on CIRS phantoms experimentally with the ultrasound array research platform version 2 (UARP II) using a 3-8 MHz 128-element clinical transducer. The point spread function (PSF) main-lobe lateral resolution measured at 20 dB shows an improvement of 65.8% when the lateral step is reduced from λ to λ/5. Meanwhile, the contrast ratios (CR) obtained for an anechoic cyst 1 mm in diameter located at 15 mm depth with lateral steps of λ and λ/5 are -11.7 dB and -18.36 dB, respectively. The contrast-to-noise ratio (CNR) also shows an improvement of 17.6% for the same lateral step reduction. In conclusion, reducing the lateral steps in the FDMAS beamforming technique with a low number of steering angles outperforms DAS with a high number of steering angles in laboratory experiments, narrowing the main lobes and increasing the image contrast, thus improving the temporal resolution.
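The pairwise-product (autocorrelation) step that distinguishes DMAS from conventional delay-and-sum can be sketched for a single focused time sample; the signed square root is the standard dimensionality fix from the DMAS literature, while the band-pass filter that makes it FDMAS is omitted from this toy code:

```python
import numpy as np

def dmas_sample(s):
    """Delay-multiply-and-sum output for one time sample.

    s: 1-D array of delayed (focused) channel samples. Each pairwise product
    is formed from signed-square-rooted samples so the result keeps the
    original signal's dimensionality; FDMAS would additionally band-pass the
    resulting time series around twice the transmit frequency.
    """
    v = np.sign(s) * np.sqrt(np.abs(s))
    # Sum over all distinct pairs i < j, computed without an explicit loop:
    # sum(v)^2 = sum_i v_i^2 + 2 * sum_{i<j} v_i v_j
    total = np.sum(v) ** 2 - np.sum(v ** 2)
    return total / 2.0

# Coherent channel samples reinforce each other across all pairs,
# while zero-mean incoherent samples largely cancel.
coherent = np.full(64, 0.5)
print(dmas_sample(coherent))
```

Because the output grows with the number of mutually coherent channel pairs, DMAS sharpens main lobes and suppresses incoherent clutter relative to DAS, which is the effect the abstract exploits.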
3:00
2pBA6. An adaptive mirage. Alfonso Rodriguez-Molares (Circulation and Medical Imaging, Norwegian Univ. of Sci. and Technol.,
Det medisinske fakultet, Institutt for sirkulasjon og bildediagnostikk, Postboks 8905, Trondheim 7491, Norway, alfonso.r.molares@
ntnu.no), Ole Marius H. Rindal (University of Oslo, Oslo, Norway), Ali Fatemi (Circulation and Medical Imaging, Norwegian Univ. of
Sci. and Technol., Trondheim, Norway), and Andreas Austeng (Univ. of Oslo, Oslo, Norway)
We are in the middle of a Cambrian explosion. Software beamforming has redefined what can be done with the signal. As a consequence, our field has become flooded with adaptive beamforming (AB) algorithms, methods that by clever manipulation of channel data
have exceeded our wildest expectations for the maximum achievable contrast and resolution. Or have they? If we define image quality
in terms of the contrast ratio (CR) and the full-width half-maximum (FWHM), there is another way of getting unprecedented image
quality. Dynamic range stretching, the kind of stretching one gets from squaring the beamformed signal amplitude, will also produce
higher CR and smaller FWHM. If AB alters the output dynamic range, then the reported CR and FWHM are invalid. No tools are available yet for researchers and reviewers to check this. Here we address this problem. We propose a phantom to measure the dynamic range
of AB. The phantom includes a speckle gradient band similar to those used in the calibration of monitors. The phantom allows us to confirm that AB algorithms can alter the dynamic range of the signal and produce incorrect CR and FWHM values. But it also makes it possible to compensate for that alteration and calibrate the algorithms. After calibration, AB still results in higher image quality than delay-and-sum, but the metrics are more reasonable. A debate must be opened on the significance of AB algorithms. The metrics used to assess image quality must be revised. Otherwise, we risk walking in circles, tricked by an illusion.
3:20–3:40 Break
3:40
2pBA7. A Fresnel-inspired approach for steering and focusing of pulsed transmit beams by matrix array transducers. Martin D.
Verweij (Acoust. Wavefield Imaging, Delft Univ. of Technol., Lorentzweg 1, Delft 2628CJ, Netherlands, m.d.verweij@tudelft.nl),
Michiel A. Pertijs (Electron. Instrumentation, Delft Univ. of Technol., Delft, Netherlands), Jos de Wit, Fabian Fool (Acoust. Wavefield
Imaging, Delft Univ. of Technol., Delft, Netherlands), Hendrik J. Vos (Biomedical Eng., Erasmus MC, Rotterdam, Netherlands), and
Nico de Jong (Acoust. Wavefield Imaging, Delft Univ. of Technol., Delft, Netherlands)
Matrix ultrasound transducers for medical diagnostic purposes have been commercially available for a decade. A typical matrix transducer
contains 1000 + elements, with a trend towards more and smaller elements. This number renders direct connection of each individual
element to an ultrasound machine impractical. Consequently, it is cumbersome to employ traditional focusing and beamforming
approaches that are based on transmit and receive signals having an individual time delay for each element. To reduce cable count
during receive, one approach is to apply sub-arrays that locally combine the element signals using programmable delay-and-sum hardware, resulting in a reduction by a factor of 10. In transmit, achieving cable count reduction while keeping focusing and steering capabilities becomes problematic once it is impossible to locally equip each element with its own high-voltage pulser. To overcome this bottleneck for decreasing element size, here we present a Fresnel-inspired hardware and beamforming approach that is based on transmit pulses consisting of several periods of an oscillating waveform. These will be derived from one oscillating high-voltage signal by using local switching and timing hardware. To demonstrate the feasibility of our approach, we will show beam profiles and images for a miniature matrix transducer that we are currently developing.
Contributed Papers
4:00
2pBA8. In vivo measurements of muscle elasticity applying shear waves excited with focused ultrasound. Timofey Krit, Valeriy Andreev (Phys., Moscow State Univ., Leninskie Gory, Bldg. 1/2, Moscow 119991, Russian Federation, timofey@acs366.phys.msu.ru), Igor Demin (Lobachevsky State Univ. of Nizhny Novgorod, Nizhny Novgorod, Russian Federation), Pavel Rykhtik, and Elena Ryabova (Federal Inst. of Health «Privolzhsky Regional Medical Ctr. Federal Medical-Biological Agency of Russia», Nizhny Novgorod, Russian Federation)
The common algorithm of shear wave excitation for diagnostic ultrasonic devices was modified for measurements in muscles. We measured the speed of shear waves excited by focused ultrasound at a frequency of 5 MHz in the muscles of volunteers. A Siemens Acuson S2000 was used for the in vivo measurements. The suggested algorithm was tested on muscle-mimicking phantoms. The shear wave velocities measured with the Siemens Acuson S2000 system in the same areas of the studied phantoms, at the same angles, corresponded to the values obtained with a Verasonics system, where the region of shear wave excitation had the form of a "blade" of thickness less than 0.5 mm and length and width of 1.5-2 mm. Due to this form of the region, the excited shear wave propagated codirectionally with the long side of the ultrasonic medical probe. Thus, the direction of propagation of the shear wave with respect to the phantom fibers became dependent on the position of the probe. [The reported study was funded by RFBR and the Moscow City Government according to research project No. 15-32-70016 «mol_a_mos», by RFBR according to research project No. 16-02-00719 a, and by the Program for Sponsorship of Leading Scientific Schools (Grant NSh-7062.2016.2).]
4:40
2pBA10. Cramer-Rao lower bound for two-dimensional elastography. Prashant Verma (Dept. of Phys. and Astronomy, Univ. of Rochester, 60 Crittenden Blvd., Apt. 230, Rochester, NY 14620, prashant.v.iitkgp@gmail.com) and Marvin M. Doyley (Dept. of Elec. and Comput. Eng., Univ. of Rochester, Rochester, NY)
In this study, we present a theoretical framework for characterizing the performance of two-dimensional displacement and strain estimators. Specifically, we derived the Cramer-Rao lower bound for axial and lateral displacements estimated from radio frequency echo data. The derived analytical expressions include the effects of signal decorrelation, electronic noise, point spread function (PSF), and signal processing parameters (window size and overlap between successive windows). We modeled the 2-D PSF of the pulse-echo imaging system as a sinc-modulated spatial sine pulse in the axial direction and as a sinc function in the lateral direction. For validation, we compared the variance in displacements and strains incurred when quasi-static elastography was performed using conventional linear array (CLA), plane wave (PW), and compounded plane wave (CPW) imaging techniques. We also extended the theory to assess the performance of vascular elastograms. The modified analytical expressions predicted that CLA and CPW should provide the worst and best elastographic performance, respectively, which was confirmed in both simulation and experimental studies. Additionally, our framework predicted that peak performance should occur when 2% strain is applied, the same order of magnitude as observed in simulations (1%) and experiments (1%-2%).
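The bound underlying this analysis is the generic Cramer-Rao inequality; a minimal statement (textbook form, with θ standing for an axial or lateral displacement, not the authors' full derived expressions):

```latex
\operatorname{var}\!\left(\hat{\theta}\right) \;\ge\; I(\theta)^{-1},
\qquad
I(\theta) = -\,\mathbb{E}\!\left[\frac{\partial^{2} \ln p(\mathbf{r};\theta)}{\partial \theta^{2}}\right],
```

where p(r; θ) is the likelihood of the received echo data; the effects listed above (decorrelation, electronic noise, PSF, window size and overlap) enter the bound through this likelihood model.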
4:20
2pBA9. Non-invasive carotid artery elastography using multi-element
synthetic aperture and plane wave imaging: Phantom and in vivo
evaluation. Rohit Nayak (Elec. and Comput. Eng., Univ. of Rochester, 205
Conant Rd., Apartment B, Rochester, NY 14623, rohitnayak@rochester.edu),
Giovanni Schifitto (Dept. of Neurology, Univ. of Rochester Medical
Ctr., Rochester, NY), and Marvin M. Doyley (Elec. and Comput. Eng.,
Univ. of Rochester, Rochester, NY)
5:00
2pBA11. Numerical simulation of transcranial ultrasound imaging
using a two-dimensional phased array. Petr V. Yuldashev, Sergey Tsysar,
Vera Khokhlova (Phys. Faculty, M. V. Lomonosov Moscow State Univ.,
Leninskie Gory, Moscow 119991,
Russian Federation, petr@acs366.phys.msu.ru), Victor D. Svet (N.N.
Andreyev Acoust. Inst., Russian Acad. of Sci., Moscow, Russian
Federation), and Oleg Sapozhnikov (Phys. Faculty, M. V. Lomonosov
Moscow State Univ., Moscow, Russian Federation)
Vascular elastography can visualize the strain distribution in the carotid
artery, which governs plaque rupture. In this study, we hypothesize that multi-element synthetic aperture (MSA) imaging, which produces divergent transmit beams, can overcome the grating lobe issues associated with compounded
plane wave (CPW) imaging and produce more reliable strain elastograms. To
corroborate this hypothesis, we conducted phantom and in vivo studies using
both techniques, and determined the optimal imaging configuration
for carotid elastography. The phantom studies were conducted using cryogel
vessel phantoms. We validated the phantom study results in vivo, on healthy
volunteers. These studies were performed using a commercial ultrasound
scanner (Sonix RP, Ultrasonix Medical Corp., Richmond, BC, Canada), operating at a transmit frequency of 5 MHz. The phantom results demonstrated
that the plaque was visible in elastograms from both techniques; however,
MSA elastograms had fewer artifacts, with a 12 dB improvement in elastographic contrast-to-noise ratio relative to CPW imaging. Further, the results
from the in vivo study agreed with the phantom results. These results suggest
that MSA imaging can produce useful strain elastograms. Our future work
will involve further development and more in vivo validation.
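The contrast-to-noise comparison quoted above can be reproduced for any pair of elastogram regions with a CNRe definition common in the elastography literature (a sketch; the region statistics below are invented for illustration, not the study's data):

```python
import numpy as np

def cnr_db(target, background):
    """Elastographic contrast-to-noise ratio in dB:
    CNRe = 10*log10( 2*(mu_t - mu_b)^2 / (var_t + var_b) )."""
    mt, mb = np.mean(target), np.mean(background)
    return 10.0 * np.log10(2.0 * (mt - mb) ** 2 / (np.var(target) + np.var(background)))

rng = np.random.default_rng(0)
plaque = rng.normal(0.5, 0.05, 10_000)   # hypothetical % strain in a stiff plaque
wall   = rng.normal(1.0, 0.05, 10_000)   # hypothetical % strain in the vessel wall
print(f"CNRe = {cnr_db(plaque, wall):.1f} dB")
```

A "12 dB improvement" then simply means the MSA elastogram's CNRe exceeds the CPW value by 12 on this scale.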
3612
J. Acoust. Soc. Am., Vol. 141, No. 5, Pt. 2, May 2017
Trans-skull ultrasound imaging of brain structures is a challenging problem because of strong attenuation in the skull and reflections from its boundaries. In addition, because the speed of sound in the skull is much higher than
in soft tissues, nonuniform thickness and heterogeneous bone structures
cause strong refraction and aberration effects. In this work, transcranial
ultrasound imaging of a 3D volume was simulated numerically. A linear
wave equation in an inhomogeneous medium was modeled using the k-space method. A phased array comprising 10,000 identically shaped square
elements distributed over a 70 × 70 mm^2 area was used to generate a quasi-plane 2-cycle pulsed wave at 1 MHz. A spherical 3-mm diameter scatterer
was placed 30 mm behind a cranial bone phantom with mass density 1900
kg/m^3, sound speed 2500 m/s, and an irregular thickness varying from 5 to 8
mm. First, two reflections from the front and back sides of the phantom were
used to determine its thickness. Then, a delay-and-sum algorithm was
applied to the received echo signals to compensate for aberrations. It was
shown that the scatterer was only visible when aberration compensation was
applied. [Work supported by RSF-14-15-00665.]
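The delay-and-sum step with aberration compensation can be sketched as follows (an illustrative implementation, not the authors' code; the array geometry, sampling rate, and sound speed are assumptions):

```python
import numpy as np

C = 1540.0  # assumed soft-tissue sound speed, m/s

def delay_and_sum(rf, fs, elem_xy, pixel, extra_delays=None):
    """Pulse-echo delay-and-sum for a single image point.
    rf: (n_elem, n_samp) echo traces; elem_xy: (n_elem, 2) element positions at z=0;
    pixel: (x, y, z) image point; extra_delays: optional per-element corrections (s),
    e.g. derived from the measured phantom thickness, to compensate aberration."""
    elems = np.column_stack([elem_xy, np.zeros(len(elem_xy))])
    tau = 2.0 * np.linalg.norm(elems - np.asarray(pixel), axis=1) / C  # two-way time
    if extra_delays is not None:
        tau = tau + extra_delays
    idx = np.rint(tau * fs).astype(int)
    keep = (idx >= 0) & (idx < rf.shape[1])
    return rf[np.flatnonzero(keep), idx[keep]].sum()

# Toy check: a unit echo placed at the exact round-trip sample for each element
fs = 40e6
elem_xy = np.array([[-0.02, 0.0], [-0.01, 0.0], [0.01, 0.0], [0.02, 0.0]])
pixel = (0.0, 0.0, 0.03)
rf = np.zeros((4, 4096))
t = 2.0 * np.linalg.norm(np.column_stack([elem_xy, np.zeros(4)]) - pixel, axis=1) / C
rf[np.arange(4), np.rint(t * fs).astype(int)] = 1.0
print(delay_and_sum(rf, fs, elem_xy, pixel))  # coherent sum -> 4.0
```

Skull-induced delay errors decohere this sum, which is why the scatterer only appears once the `extra_delays` correction is applied.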
Acoustics ’17 Boston
3612
5:40
2pBA12. Experimental implementation of a synthesized two-dimensional phased array for transcranial imaging with aberration
correction. Sergey Tsysar (Phys. Dept., Lomonosov Moscow State Univ.,
GSP-1, 1-2 Leninskie Gory, Moscow 119991, Russian Federation, sergey@acs366.phys.msu.ru), Victor D. Svet (N.N. Andreyev Acoust. Inst., Moscow,
Russian Federation), Petr V. Yuldashev, Vera Khokhlova, and Oleg
Sapozhnikov (Phys. Dept., Lomonosov Moscow State Univ., Moscow,
Russian Federation)
2pBA13. Using frequency-sum beamforming in passive cavitation
imaging. Shima Abadi (Eng. and Mathematics, Univ. of Washington,
18115 Campus Way NE, Box 358538, Bothell, WA 98011, abadi@uw.edu),
Kevin J. Haworth (Dept. of Internal Medicine, Univ. of Cincinnati,
Cincinnati, OH), Karla P. Mercado (Dept. of Internal Medicine, Univ. of
Cincinnati, Rochester, NY), and David R. Dowling (Mech. Eng., Univ. of
Michigan, Ann Arbor, MI)
Passive Cavitation Imaging (PCI) is a method for locating cavitation
emissions to study biological effects of ultrasound on tissues. In this
method, an image is formed by beamforming passively recorded acoustic
emissions with an array. The image resolution depends on the ultrasound
frequency and array geometry. Acoustic emissions can be scattered due to
tissue inhomogeneity, which may degrade the image resolution. Emissions
at higher frequencies are more susceptible to such degradation. Frequency-sum beamforming is a nonlinear technique that alters this sensitivity to scattering by manufacturing higher-frequency information from lower-frequency
components via a quadratic product of complex signal amplitudes. This presentation evaluates the performance of frequency-sum beamforming in a
scattering environment using simulations and experiments conducted in the
kHz and MHz frequency regimes. First, 50 and 100 kHz signals were broadcast from a single source to an array of 16 hydrophones in a water tank
with and without discrete scatterers. Second, a tissue-mimicking phantom
perfused with microbubbles was insonified at 2 MHz, and the emissions
were received by a 128-element linear array. The performance of frequency-sum beamforming was compared to conventional delay-and-sum
and minimum variance beamforming in mild and strong scattering environments. [Work partially supported by NAVSEA through the NEEC.]
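The quadratic-product step can be illustrated with two analytic tones at the 50 and 100 kHz frequencies mentioned above: their product oscillates at the sum frequency, which is the "manufactured" higher-frequency content that is then beamformed (a minimal sketch; in practice the analytic signals come from Hilbert transforms of the recorded traces, and the sample rate here is an assumption):

```python
import numpy as np

fs = 1.0e6                     # assumed sample rate, Hz
t = np.arange(4096) / fs
f1, f2 = 50e3, 100e3           # the two broadcast frequencies

z1 = np.exp(1j * 2 * np.pi * f1 * t)   # analytic signal at f1
z2 = np.exp(1j * 2 * np.pi * f2 * t)   # analytic signal at f2
z_sum = z1 * z2                        # quadratic product -> tone at f1 + f2

spec = np.abs(np.fft.rfft(z_sum.real))
f_peak = np.fft.rfftfreq(len(t), 1.0 / fs)[np.argmax(spec)]
print(f"peak at {f_peak / 1e3:.1f} kHz")   # close to 150 kHz
```

Beamforming the product signal at the sum frequency yields the narrower beamwidth of a higher-frequency array without transmitting at that frequency.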
Ultrasound (US) imaging of brain structures is a challenging but highly
promising diagnostic technology in medical ultrasound. Recent advances in
transcranial US therapy suggest the potential to implement diagnostic US at
higher frequencies, ideally for full brain imaging. In this work, we present
experimental results of ultrasound imaging of spherical and tubular scatterers placed behind a skull phantom. The phantom was produced from a casting compound with acoustic properties matching those of the skull. The phantom
shape was defined from CT data of a human skull and 3D printing of a
mold. A two-dimensional ultrasound array was simulated by mechanical
translation of the focal spot of a broadband single-element 2 MHz transducer over the phantom surface. This synthesized array mimicked a 2D flexible phased array placed on top of the patient's head. A pulse-echo
technique was used for reconstructing the thickness of the skull phantom
and detecting backscattered signals from the test objects. Transcranial image
reconstruction was performed using a delay-and-sum technique that
accounts for refraction and absorption inside the phantom. It was demonstrated that aberration correction using either straight rays or more accurate
refracted ray tracing yields significant improvement of image quality. [Work
supported by RSF-14-15-00665.]
MONDAY AFTERNOON, 26 JUNE 2017
ROOM 205, 1:20 P.M. TO 4:40 P.M.
2pEA
Engineering Acoustics: Ducts and Mufflers II
Mats Åbom, Cochair
The Marcus Wallenberg Laboratory, KTH-The Royal Inst. of Technology, Teknikringen 8, Stockholm 10044, Sweden
David Herrin, Cochair
Department of Mechanical Engineering, University of Kentucky, 151 Ralph G. Anderson Building, Lexington, KY 40506-0503
Invited Papers
1:20
2pEA1. Hybrid silencer transmission loss above a duct’s plane wave region. Paul T. Williams (AAF Ltd, Northumberland, United
Kingdom), Mats Åbom (The Marcus Wallenberg Lab., KTH-The Royal Inst of Technol., Teknikringen 8, Stockholm 10044, Sweden,
matsabom@kth.se), Ray Kirby (Mech. Eng., Brunel Univ., Middlesex, United Kingdom), and James Hill (AAF Ltd., Northumberland,
United Kingdom)
For large ducts, the removal of low frequency and tonal noise is normally achieved through the use of inefficient dissipative
silencers; however, a combination of dissipative and reactive solutions could be more effective. But reactive noise control solutions are
rarely applied to large diameter duct systems since it is commonly assumed that the low cut-on frequency of higher order modes severely
restricts their efficiency. However, it is possible for a reactive silencer to remain operational outside of the plane wave region, provided
the reactive elements are distributed across the cross-section of the duct. Of course, at higher frequencies, the sound field within a duct
will have nonplane wave modal content, and the transmission loss is expected to differ compared to the plane wave condition. This
effect is investigated here using numerical (FEM) predictions for hybrid dissipative-reactive parallel baffle silencers and the performance
of the reactive elements is explored under different excitations. The effects of non-planar fields and individually excited modes are analyzed, and it is found that the frequency range over which quarter wave resonators contribute to transmission loss can be extended above
the cut-on frequency of the duct by increasing the number of baffles.
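The two competing frequencies in this argument are straightforward to compute (a sketch with illustrative dimensions, not values from the paper):

```python
C = 343.0  # speed of sound in air at ~20 C, m/s

def quarter_wave_resonances(length_m, n=0):
    """Resonances of a closed quarter-wave tube: f_n = (2n + 1) * c / (4 L)."""
    return (2 * n + 1) * C / (4.0 * length_m)

def rect_duct_first_cut_on(width_m):
    """Cut-on of the first higher-order mode in a rectangular duct: f = c / (2 a)."""
    return C / (2.0 * width_m)

# A 1 m wide duct already supports non-plane modes above ~172 Hz, right in the
# working range of a 0.5 m quarter-wave resonator tuned to the same ~172 Hz,
# hence the interest in resonators that keep working above cut-on.
print(rect_duct_first_cut_on(1.0), quarter_wave_resonances(0.5))
```

Adding baffles subdivides the cross-section, raising the effective cut-on frequency seen by each resonator, consistent with the finding above.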
2p MON. PM
1:40
2pEA2. Comparison of an integral and a collocation based impedance-to-scattering matrix methods for large silencers analysis.
Peng Wang (Mech. Eng., Univ. of Kentucky, 151 Ralph G. Anderson Bldg., Lexington, KY 40506, pwa229@g.uky.edu), Limin Zhou
(Akustica Inc., Pittsburgh, PA), and T. W. Wu (Mech. Eng., Univ. of Kentucky, Lexington, KY)
Large silencers used in the power generation industry usually have a large cross section at the inlet/outlet. The plane-wave cutoff frequency of the inlet/outlet duct can be as low as a few hundred Hz. To evaluate the acoustical performance of large silencers above
the inlet/outlet cutoff, either an integral based or a point-collocation based impedance-to-scattering matrix method may be applied to
convert the BEM impedance matrix to the scattering matrix with the higher-order modes at the inlet/outlet. In this presentation, these
two impedance-to-scattering matrix methods are introduced first, and then several test cases are used to compare the computational accuracy, efficiency, and stability of the two methods.
2:00
2pEA3. Investigation of the effect of neglecting reflections in Power-based sound in ducts. Mina W. Nashed, Tamer Elnady (Group
for Adv. Res. in Dynamic Systems (ASU-GARDS), Ain Shams Univ., 1 Elsarayat St., Cairo, Abbaseya 11517, Egypt, mina.wagih@eng.asu.edu.eg), and Mats Åbom (The Marcus Wallenberg Lab., KTH-The Royal Inst. of Technol., Stockholm, Sweden)
In high-frequency sound propagation inside ducts, the modal density is so high that the sound effectively propagates as rays. The
acoustic performance of a duct network can then be simulated using power-based models. Typical applications are HVAC
systems and large silencers for power generation. Several standards, such as those from ASHRAE and VDI, are available for the analysis of HVAC systems using this technique. For each element, the flow-generated sound power inside the element is added to the input sound
power, and the output sound power is calculated by subtracting the insertion loss of the element. The attenuated sound energy can be either dissipated inside the element or reflected back into the system. The standards always assume that no energy is reflected and that all the attenuation happens inside the element. This assumption is investigated in this paper. Several standard HVAC elements are considered,
calculating the amount of energy dissipated inside each element and the amount reflected back. It was found that accounting for all the reflected energy affects the output power of the system, especially for highly reflective elements. The investigation was done using
the Finite Element Method with the ray tracing technique.
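The element-by-element power bookkeeping described above (subtract the insertion loss, then add the flow-generated power, with reflections neglected) reduces to simple dB arithmetic; a sketch with invented levels:

```python
import math

def db_sum(*levels_db):
    """Add sound power levels (dB re 1 pW) on a linear power basis."""
    return 10.0 * math.log10(sum(10.0 ** (lw / 10.0) for lw in levels_db))

def element_output(lw_in_db, insertion_loss_db, lw_generated_db):
    """ASHRAE/VDI-style element model: attenuate the input power by the
    insertion loss, then add the element's own flow-generated power.
    All reflected energy is ignored, the assumption examined in the paper."""
    return db_sum(lw_in_db - insertion_loss_db, lw_generated_db)

# Fan at 85 dB feeding a silencer with IL = 20 dB that self-generates 60 dB:
print(f"{element_output(85.0, 20.0, 60.0):.1f} dB")  # 66.2 dB
```

Allowing part of the attenuated energy to reflect back upstream would modify the input levels of preceding elements, which is exactly the effect the paper quantifies.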
2:20
2pEA4. Eigenvalue analysis for acoustic multi-ports. Stefan Sack and Mats Åbom (The Marcus Wallenberg Lab., The Royal Inst.
of Technol., Teknikringen 8, Stockholm 100 44, Sweden, ssack@kth.se)
Acoustic multi-ports are commonly used to describe the scattering (the transmission and reflection) and the source characteristics of aero-acoustic
components in duct and pipe systems. The components are thereby modeled as "black boxes," assuming linear and time-invariant systems. Using linear network theory, two components can be combined into a cascade for which the scattering and sources are predicted.
This step, however, requires decoupled components; the flow disturbances downstream of an aero-acoustic source can be large, and turbulence impinging on the downstream component may change its acoustic properties. In this presentation, we show how to use eigenvalue equations to investigate this so-called "installation effect" on both the scattering and the source of in-duct components.
The theoretical results are compared with measurements in order to draw conclusions about the changing source and scattering mechanisms.
2:40
2pEA5. Linear sound amplification and absorption in a corrugated pipe. Xiwen Dai (LAUM, UMR CNRS 6613, Ave. O.
Messiaen, F-72085 Le Mans Cedex 9, France, xiwen.dai@univ-lemans.fr), Joachim Golliard (TNO, Delft,
Netherlands), and Yves Auregan (CNRS, Le Mans, France)
Linear sound propagation in an axisymmetric corrugated pipe with shear flow is studied numerically and experimentally. The acoustic and hydrodynamic perturbations are described by the linearized Euler equations (LEEs) in a parallel shear flow. Wave propagation
and scattering are computed by means of a multimodal method where the disturbances are expressed as a linear combination of acoustic
modes and hydrodynamic modes. The Floquet-Bloch approach is used to calculate the wavenumber in the periodic system. Both sound
amplification and absorption, depending on the Strouhal number, are well predicted compared to experiments, which means that the
flow-acoustic coupling in the system is effectively described by the present model. It is also shown that the corrugated pipe can amplify
the sound even if the shear layer over the cavities is stable everywhere.
3:00–3:20 Break
3:20
2pEA6. Stop whistling! A note on fluid driven whistles in flow ducts. Mikael Karlsson (MWL, KTH, Teknikringen 8, Stockholm
10044, Sweden, kmk@kth.se), Magnus Knutsson (Volvo Cars, Göteborg, Sweden), and Mats Åbom (MWL, KTH, Stockholm, Sweden)
The generation mechanism and possible countermeasures for fluid-driven whistles in low Mach number flow duct networks are discussed. The vortex sound model, where unstable shear layers interact with the acoustic field and act as amplifiers under certain boundary
conditions, is shown to capture the physics well. Further, for the system to actually whistle, acoustic feedback to the amplifying shear
layer is also needed. The demonstration example in this study is a generalized resonator configuration with annular volumes attached to
a straight flow duct via a number of small holes, or perforations, around the duct's circumference. At each hole, a shear layer is formed, and
the acoustic reflections from the resonator volumes and from the upstream and downstream sides provide possible feedback to them. The attenuation properties as well as the whistling frequencies at varying inlet mean flow velocities are studied for this system both numerically and
experimentally, showing that good-quality predictive simulations are possible using vortex sound theory. Finally, a few countermeasures against whistling are tested, manipulating both the feedback and the shear layers. The best effect was found by disturbing the shear
layers, covering the holes with a coarse mesh.
3:40
2pEA7. A numerical investigation of Helmholtz resonators in the presence of grazing flow by means of the lattice Boltzmann
method. Andre M. Spillere (Dept. of Mech. Eng., Federal Univ. of Santa Catarina, Campus Reitor Joao David Ferreira Lima,
Florianópolis, Santa Catarina 88040-900, Brazil, andre.spillere@lva.ufsc.br), Jose P. de Santana Neto, Andrey R. da Silva (Dept. of
Mech. Eng., Federal Univ. of Santa Catarina, Florianópolis, Santa Catarina, Brazil), and Julio A. Cordioli (Dept. of Mech. Eng., Federal
Univ. of Santa Catarina, Florianópolis, SC, Brazil)
Helmholtz resonators remain widely used in noise control. In applications such as aircraft engines and exhaust systems, the presence
of a grazing flow significantly changes their behavior, and a correct prediction of their acoustic properties is essential to improve noise
reduction. With the purpose of understanding the physical phenomena associated with the acoustic-flow interaction, the simulation of a
single 2D Helmholtz resonator was considered by means of an in-house numerical code based on the lattice Boltzmann method (LBM).
The results have been validated against published data based on both experimental results and direct numerical simulation (DNS) for
normally incident acoustic waves in the absence of flow. The investigations will proceed by taking into account grazing acoustic waves
in the presence of a grazing flow, similarly to the conditions found in a liner test rig. Efforts will be focused on typical aircraft engine
inlet conditions, i.e., high Mach numbers and SPL. Both experimental and numerical results will be compared in terms of absorption
coefficient and impedance.
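For reference, the lumped-element resonance frequency such a simulation should recover at low amplitude and without flow is given by the textbook formula below (the dimensions are illustrative, and the end-correction factor is an approximation):

```python
import math

C = 343.0  # speed of sound in air, m/s

def helmholtz_frequency(neck_area_m2, neck_length_m, cavity_volume_m3):
    """Lumped-element Helmholtz resonance: f0 = (c / (2*pi)) * sqrt(S / (V * L_eff)),
    with an approximate end correction of ~0.85 * radius per flanged neck end."""
    radius = math.sqrt(neck_area_m2 / math.pi)
    l_eff = neck_length_m + 1.7 * radius
    return (C / (2.0 * math.pi)) * math.sqrt(neck_area_m2 / (cavity_volume_m3 * l_eff))

# A 1 cm^2 neck, 1 cm long, on a 1 L cavity resonates near 123 Hz.
print(f"{helmholtz_frequency(1e-4, 0.01, 1e-3):.1f} Hz")
```

Grazing flow shifts this resonance and adds resistance at the neck, which is the deviation from the no-flow baseline that the LBM study sets out to capture.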
4:00
2pEA8. Flow noise generation in a pipe bend. Magnus Knutsson, Simone Vizzini (Noise and Vib. Ctr., Volvo Car Group, Dept 91620,
Göteborg 40531, Sweden, magnus.knutsson@volvocars.com), Maria Dybeck (Powertrain Eng., Volvo Car Group, Göteborg, Sweden),
and Mats Åbom (The Marcus Wallenberg Lab., KTH, Stockholm, Sweden)
Noise generated by low Mach number flow in duct networks is important in many industrial applications. In the automotive industry,
the two most important are the ventilation duct network and the engine exhaust system. Traditionally, design is based on rules of
thumb or, slightly better, on simple semi-empirical scaling laws for flow noise. In many cases, strong curvatures and local deviations
from circular cross-sections are created due to outer geometry restrictions. This can result in locally high flow velocities and
complex flow separation patterns; as a result, rule-of-thumb and scaling-law methods can become highly inaccurate and uncertain.
More advanced techniques based on time-domain modelling of the fluid dynamics equations together with acoustic analogies can offer a
better understanding of the local noise generation, the propagation, and the interaction with the rest of the system. This investigation contains
a study of flow noise generation in a circular duct with a 90-degree bend carrying a low Mach number flow. Experimental results are
presented and compared to numerical simulations based on a combination of computational fluid dynamics and the acoustic analogies
of Lighthill and Möhring, as well as semi-empirical models.
Contributed Paper
4:20
2pEA9. Attenuation measurements inside and at the output of a passive
silencer equipped with parallel absorbing baffles. Xavier Kaiser (CEDIA,
Univ. of Liege, Liege, Belgium), Sebastien Brandt (Eng. studies, Haute
ecole de la Province de Liege, Liege, Belgium), Benoit Meys (Test cells,
Safran Aero Boosters, Herstal, Belgium), Nicolas Plom (Bureau
d’acoustique BANP, Liege, Belgium), and Jean-Jacques Embrechts (Elec.
Eng. and Comput. Sci., Univ. of Liege, Campus du Sart-Tilman B28,
Quartier Polytech 1, 10 Allee de la decouverte, Liege 4000, Belgium,
jjembrechts@ulg.ac.be)
The mock-up of a large passive silencer used for noise attenuation in
industrial applications has been designed and tested in the laboratory. This
mock-up consists of three metal casings containing the noise source and
several removable rails and supports, allowing the test of different configurations of parallel absorbing baffles. The output of the silencer radiates in an
anechoic chamber in order to simulate free-field conditions. The acoustic
attenuation has been measured not only at the output, but also inside the silencer, with a mobile microphone located at several positions along the axis.
Also, the tested configurations include three types of absorbing “cushions”
and several geometrical arrangements. The results show that the maximum
insertion loss (dB) measured at the output of the silencer corresponds to frequencies between 800 Hz and 2.5 kHz and its value strongly depends on the
air gaps between baffles. The measurements with the mobile microphone
show a linear decrease of the sound pressure level with the distance along
the axis (mainly between 100 Hz and 1 kHz), which corresponds to the
depth of absorbing material involved in the attenuation. Finally, a comparison with a real-scale silencer is discussed.
MONDAY AFTERNOON, 26 JUNE 2017
BALLROOM C, 1:20 P.M. TO 3:20 P.M.
2pID
Interdisciplinary: Neuroimaging Techniques II
Martin S. Lawless, Cochair
Graduate Program in Acoustics, The Pennsylvania State University, 201 Applied Science Building, University Park, PA
16802
Adrian KC Lee, Cochair
University of Washington, Box 357988, University of Washington, Seattle, WA 98195
Sophie Nolden, Cochair
RWTH Aachen University, Jaegerstrasse 17/19, Aachen 52066, Germany
Z. Ellen Peng, Cochair
Waisman Center, University of Wisconsin-Madison, 1500 Highland Avenue, Madison, WI 53711
G. Christopher Stecker, Cochair
Hearing and Speech Sciences, Vanderbilt University, 1215 21st Ave. South, Room 8310, Nashville, TN 37232
Invited Papers
1:20
2pID1. Breaking the barriers of temporal and spatial resolutions for ultrasound neuroimaging. Mickael Tanter (Langevin Inst.
(ESPCI Paris, CNRS, Inserm), Inserm, 17 rue Moreau, Paris 75012, France, mickael.tanter@gmail.com)
The introduction of plane or diverging wave transmissions, rather than line-by-line scanning of focused beams, has broken the conventional barriers of ultrasound imaging. The frame rate reaches the theoretical limit dictated by the speed of ultrasound, and an ultrasonic map can be provided typically in tens of microseconds (thousands of frames per second). Interestingly, this leap in frame rate
is not only a technological breakthrough; it permits the advent of completely new ultrasound imaging modes, in particular, functional ultrasound imaging of brain activity (fUltrasound). Indeed, ultrafast Doppler gives ultrasound the ability to detect very subtle
blood flow in small vessels and paves the way for fUltrasound of brain activity through neurovascular coupling. It provides the first
modality for whole-brain imaging of awake and freely moving animals with unprecedented resolutions [1-3] compared to fMRI. Finally,
we demonstrated that it can be combined with injections of 3-µm diameter microbubbles to provide a first in vivo and non-invasive imaging
modality at microscopic scales deep into organs, by localizing the positions of millions of microbubbles at ultrafast frame rates. This ultrasound localization microscopy technique solves for the first time the problem of imaging the whole brain microvasculature in vivo [4]. [1]
Mace et al., Nature Methods 2011. [2] Osmanski et al., Nature Commun. 2014. [3] Sieu et al., Nature Methods 2015. [4] Errico et al.,
Nature 2015.
2:00
2pID2. The ins and outs of capturing brain activities associated with auditory perception and cognition. Adrian K. C. Lee (Dept.
of Speech and Hearing and Inst. for Learning and Brain Sci. (I-LABS), Univ. of Washington, Box 357988, Seattle, WA 98195, akclee@uw.edu) and G. Christopher Stecker (Hearing and Speech Sci., Vanderbilt Univ., Nashville, TN)
Magnetoencephalography, electroencephalography, and functional magnetic resonance imaging (MEG, EEG, and fMRI) have been
used extensively to study human auditory perception and cognition. Due to the different temporal and spatial resolutions associated with
each of these neuroimaging modalities, each technique offers a unique window into how our cortex participates in auditory tasks.
In this talk, a number of classical paradigms will be presented and their relative strengths and shortcomings will be discussed. Other
methodological advances and challenges particularly relevant to experiments in auditory perception and cognition will also be reviewed.
[Work supported by NIH R01 DC013260 (AKCL) and R01 DC011548 (GCS).]
2:20
2pID3. Oscillatory brain activity in response to emotional sounds in musicians and non-musicians. Sophie Nolden (RWTH Aachen
Univ., Jaegerstrasse 17/19, Aachen 52066, Germany, nolden@psych.rwth-aachen.de), Simon Rigoulot (McGill Univ., Montreal, QC,
Canada), Pierre Jolicoeur (Univ. of Montreal, Montreal, QC, Canada), and Jorge L. Armony (McGill Univ., Montreal, QC, Canada)
Emotions can be conveyed through a variety of channels in the auditory domain, such as the human voice or music. Recent studies
suggest that expertise in one sound category can impact the processing of emotional sounds in other sound categories. We focused here
on how the neural processing of emotional information varies as a function of sound category and expertise of participants. The electroencephalogram (EEG) of 20 non-musicians and 17 musicians was recorded while they listened to speech prosody, vocalizations (such as
screams and laughter), and musical sounds. The amplitude of EEG oscillatory activity in the theta, alpha, beta, and gamma bands was
quantified, and independent component analysis (ICA) was used to identify underlying components of brain activity in each band.
Sound-category-dependent activations were found in frontal theta and alpha, as well as greater activation for musicians than for non-musicians. Differences in the beta band were mainly due to differential processing of speech. The results reflect musicians' expertise in
recognizing emotion-conveying music, which seems to generalize to emotional expressions conveyed by the human voice, in line
with previous accounts of effects of expertise on the processing of musical and vocal sounds.
2:40
2pID4. Using functional magnetic resonance imaging to assess the emotional response to room acoustics. Martin S. Lawless and
Michelle C. Vigeant (Graduate Program in Acoust., The Penn State Univ., 201 Appl. Sci. Bldg., University Park, PA 16802, msl224@psu.edu)
A previous pilot study by the authors demonstrated the potential of using neuroimaging techniques to investigate a listener’s emotional response to room acoustic conditions of varying preference. The hypothesis of the pilot study and the present work is that regions
associated with reward and pleasure will activate when an individual listens to pleasing room acoustics contrasted with listening to
unpleasant room acoustics. In this study, auralizations were generated in simulated room conditions ranging from anechoic to extremely
reverberant with the expectation that the most-liked stimuli would have reverberation times between 1.0 and 2.8 s. Participants were
screened based on their ability to discern differences in preference across the stimuli. Following the screening, eligible participants rated
the stimuli according to overall preference in a mock MRI machine. The results from the mock MRI testing were used to identify each
participant’s most-liked and most-disliked stimuli, and to familiarize the participants with the MRI environment. In a second session,
this pair of stimuli, along with anechoic and scrambled-music stimuli, was presented to the subjects in an MRI machine. Contrasts
between these conditions were analyzed to investigate if activations were present in regions associated with reward processing, including
the nucleus accumbens, caudate nucleus, and orbitofrontal cortex.
Contributed Paper
3:00
2pID5. Reduced vessel tone leads to vasodilation and decreased
cerebral rigidity. Katharina Schregel, Miklos Palotai (Radiology, Brigham
and Women’s Hospital, 221 Longwood Ave., Boston, MA 02115,
kschregel@bwh.harvard.edu), Navid Nazari (Biomedical Eng., Boston
Univ., Boston, MA), Paul E. Barbone (Mech. Eng., Boston Univ., Boston,
MA), Ralph Sinkus (Biomedical Eng., Kings College London, London,
United Kingdom), and Samuel Patz (Radiology, Brigham and Women’s
Hospital, Boston, MA)
Magnetic Resonance Elastography (MRE) measures elastic shear wave
propagation in vivo to infer biomechanical properties non-invasively
[Muthupillai, et al. (1995) Science;269:1854-1857.]. Models predict that tissue stiffness is influenced by changes of vascular properties [Parker, et al.
(2016) PhysMedBiol;61:4890-4903.]. Cerebral blood supply is closely regulated by diameter changes of blood vessels. Here, we investigated the influence of vasodilation on cerebral stiffness with MRE. A healthy
C57BL/6 mouse was anesthetized with isoflurane mixed in 100% O2. Vasodilation was induced by a hypercapnic challenge (isoflurane mixed in 95%
O2 + 5% CO2). Brain stiffness was measured with a 3D spin-echo MRE
sequence in a 7T animal MRI scanner under normocapnic and hypercapnic
conditions. The vibration frequency was 1 kHz. Wavelength and wave speed
were observed to decrease significantly under hypercapnic conditions across
the whole brain when compared to baseline. These changes correspond to a
decrease in tissue rigidity. We conclude, therefore, that vasodilation resulting from
reduced vessel tone leads to a significant decrease in brain rigidity. Potential changes in cerebral blood flow due to physiological or pathological conditions should therefore be considered when studying brain rigidity.
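The link between the measured wave quantities and rigidity follows from the elastic (lossless) relation commonly used to interpret MRE wave images; a sketch with illustrative numbers, not the study's measurements:

```python
RHO = 1000.0  # assumed brain tissue density, kg/m^3

def shear_modulus_pa(freq_hz, wavelength_m, density=RHO):
    """Elastic estimate: shear wave speed c = f * lambda, rigidity mu = rho * c^2,
    so a shorter wavelength at fixed vibration frequency means softer tissue."""
    c = freq_hz * wavelength_m
    return density * c ** 2

# At the 1 kHz vibration frequency, a hypothetical drop in wavelength from
# 2.6 mm to 2.4 mm corresponds to a ~15% drop in rigidity:
print(shear_modulus_pa(1000.0, 2.6e-3), shear_modulus_pa(1000.0, 2.4e-3))
```

This is why a significant decrease in wavelength and wave speed at fixed frequency reads directly as decreased cerebral rigidity.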
MONDAY AFTERNOON, 26 JUNE 2017
ROOM 200, 1:15 P.M. TO 5:00 P.M.
2pMU
Musical Acoustics: Electronically-Augmented Instruments
Edgar J. Berdahl, Cochair
Music, Louisiana State University, 102 New Music Building, Baton Rouge, LA 70803
Adrien Mamou-Mani, Cochair
IRCAM, 1 place Stravinsky, Paris 75004, France
Chair’s Introduction—1:15
Invited Papers
1:20
2pMU1. Trekking around ancestors of smart instruments. Charles Besnainou (Lutheries - Acoustique - Musique, Universite Pierre
et Marie Curie, chez Baudry, Saint Eugène 17520, France, charles.besnainou@upmc.fr)
Today, smart instruments are an ongoing revolution tying together acoustical instruments and computers, with electronic active control of structures at the heart of the process. The aim of this paper is to survey more than a century of such attempts. Who remembers Richard Eisenmann's "infinite sound" piano, driven by electromagnets and tried by Hermann von Helmholtz at the Industrial Art Exhibition in Munich in 1888, or, even earlier, Pape's piano enhanced with blown air? Later
experiments revived the idea, such as the E-bow for the steel-string guitar; then, in the 1990s, came student exercises teaching feedback with analog Proportional-Integral-Derivative heating regulators, and positive feedback
applied to a xylophone bar, among other steps leading up to today's digital systems.
1:40
2pMU2. Bela: An embedded platform for low-latency feedback control of sound. Andrew McPherson (School of Electron. Eng. and
Comput. Sci., Ctr. for Digital Music, Queen Mary Univ. of London, Mile End Rd., London E1 4NS, United Kingdom,
andrewmcphersonasa@gmail.com)
Bela is an open-source embedded platform for audio and sensor processing. It uses a BeagleBone Black single-board computer with
a custom hard-real time audio environment based on Xenomai Linux, which is capable of submillisecond round-trip audio latencies (as
low as 80 microseconds in certain configurations). The Bela hardware features stereo audio input and output, 8 channels each of 16-bit
analog input and output, and 16 digital I/Os, all sampled at audio rates with nearly jitter-free alignment to the audio clock. This paper
will present the hardware, software, and selected applications of Bela. Bela is suitable for creating digital musical instruments and interactive audio systems, and its low latency makes it especially well adapted for real-time feedback control over acoustic systems. It has
been used in feedback control experiments with wind and string instruments and used as the basis for a study of the performer’s experience of latency on percussion instruments.
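Round-trip figures of this order follow from simple buffer arithmetic. The function below is a generic sketch; the buffer counts and sample rate are illustrative assumptions, not Bela's exact internal configuration.

```python
# Generic audio round-trip latency estimate: frames buffered on the input
# side plus the output side, divided by the sample rate. Buffer sizes and
# rates below are illustrative, not a specific platform's configuration.

def round_trip_latency_s(buffer_frames, sample_rate_hz, n_buffers=2):
    """Seconds of latency for n_buffers buffers of buffer_frames each."""
    return n_buffers * buffer_frames / sample_rate_hz

# A 2-frame buffer at 44.1 kHz, one buffer each way: tens of microseconds.
lat_us = round_trip_latency_s(2, 44100.0) * 1e6
```

This shows why per-sample or few-sample buffering is the prerequisite for the sub-100-microsecond figures quoted above, whereas a typical 128-frame desktop buffer already costs several milliseconds.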
2:00
2pMU3. Modal active control: A tool to finely adjust the sound of string instruments. Simon Benacchio (IRSST, 505 Boulevard de
Maisonneuve O, Montreal, QC H3A 3C2, Canada, Simon.Benacchio@irsst.qc.ca)
Controlling the vibratory properties of musical instruments is an important challenge for musical acousticians, musicians, and instrument makers. The latter try to control these properties by modifying mechanical parameters of instruments, such as their shape or materials, to obtain expected sound attributes. Musicians also modify the vibratory properties of their instruments to change their sound, using mutes, for example. Musical acousticians try to modify these properties because it is an intuitive way to investigate the relationship between instrument mechanisms and sound attributes. Inspired by industrial techniques, active control has proven a convenient way to address this last goal. Moreover, modal active control is a preferred method for application to musical instruments, since their modal parameters are believed to be good descriptors of their vibratory properties. This study aims at applying modal active control to string instruments. First, the possibilities offered by this technique are presented and tested on several instruments. Then, examples of the control of specific phenomena are given. Couplings between the soundboard and strings of both a cello and a guitar are controlled, to cancel the well-known wolf-note phenomenon for the former and to switch from strong to weak coupling for the latter.
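The essence of modal active control can be summarized by how modal state feedback relocates one mode's frequency and damping. The following is a minimal textbook sketch (single mode, ideal collocated feedback, hypothetical gains), not the controller used in the study above.

```python
import math

# One structural mode: q'' + 2*xi*w0*q' + w0**2 * q = f_ext + f_ctrl.
# Modal state feedback f_ctrl = -g_d*q - g_v*q' shifts the closed-loop
# frequency and damping ratio. All numerical values are hypothetical.

def closed_loop_modal_params(f0_hz, xi, g_d, g_v):
    """Return (controlled frequency in Hz, controlled damping ratio)."""
    w0 = 2 * math.pi * f0_hz
    wc = math.sqrt(w0 ** 2 + g_d)           # displacement gain shifts pitch
    xic = (2 * xi * w0 + g_v) / (2 * wc)    # velocity gain adds damping
    return wc / (2 * math.pi), xic

# Raise the damping of a 196 Hz string mode without shifting its pitch:
f_c, xi_c = closed_loop_modal_params(196.0, 0.001, g_d=0.0, g_v=50.0)
```

Displacement feedback alone retunes a mode while velocity feedback alone damps it, which is why modal parameters are such convenient handles for finely adjusting an instrument's sound.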
3618
J. Acoust. Soc. Am., Vol. 141, No. 5, Pt. 2, May 2017
Acoustics ’17 Boston
3618
2:20
2pMU4. Development of a hybrid wind instrument—Some key findings. Kurijn Buys, David Sharp, and Robin Laney (Faculty of
Mathematics, Computing and Technol., Walton Hall, Milton Keynes, Buckinghamshire MK7 6AA, United Kingdom, kurijn.buys@open.ac.uk)
A hybrid wind instrument is constructed by putting a theoretical excitation model (such as a real-time computed physical model of a
clarinet embouchure) in interaction with a real wind instrument resonator. In previous work, the successful construction of a hybrid
wind instrument has been demonstrated, with the interaction facilitated by a loudspeaker and a microphone placed at the entrance of a
clarinet-like tube. The present paper focuses on some key findings, concentrating particularly on the “musical instrument” and “research
tool” perspectives. The limitations of the hybrid set-up are considered. In particular, the choice of the loudspeaker used in the set-up is explained, and the occurrence (and prevention) of instabilities during the operation of the hybrid instrument is discussed. For the design of excitation models used to drive the hybrid instrument, the usefulness of dimensionless and reduced parameter forms is outlined. In contrast to previously reported physically based excitation models, it is demonstrated that a purely mathematical “polynomial model” enables independent control of separate sound features. For all excitation models, the sounds produced with the hybrid instrument are shown to match those predicted by simulation. However, the hybrid instrument is more easily destabilized for certain extreme parameter states.
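A purely mathematical excitation characteristic of the kind described can be sketched as a low-order polynomial mapping dimensionless mouthpiece pressure to volume flow. The coefficients below are hypothetical placeholders for illustration, not the values used by the authors.

```python
# Sketch of a polynomial excitation model for a hybrid wind instrument:
# dimensionless volume flow u as a polynomial in dimensionless mouthpiece
# pressure p. In a hybrid set-up, u would drive the loudspeaker at the
# tube entrance. Coefficients a1..a3 are hypothetical illustrations.

def polynomial_excitation(p, a1, a2, a3):
    """u(p) = a1*p + a2*p**2 + a3*p**3 (dimensionless)."""
    return a1 * p + a2 * p * p + a3 * p ** 3

# a1 sets the linear gain around equilibrium (growth or decay of the
# oscillation), while a2 and a3 shape even and odd harmonic content.
u = polynomial_excitation(0.1, a1=1.2, a2=-0.5, a3=-2.0)
```

The appeal noted in the abstract is that each coefficient acts on a separate sound feature, unlike physical reed parameters, which tend to affect several features at once.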
2:40
2pMU5. Traveling wave control of stringed musical instruments. Liam Donovan and Andrew McPherson (Queen Mary Univ. of
London, Queen Mary Univ., Mile End Rd., London E1 4NS, United Kingdom, l.b.donovan@qmul.ac.uk)
Traveling wave control is a technique in which the energy propagating around a structure in the form of waves can be manipulated
directly in order to change the overall dynamic behaviour of the structure. In this research, traveling wave control is applied to a musical
instrument string with a view to affecting the timbre of the sounds produced by the string when vibrating. A highly linear custom optical
sensor is built which is capable of detecting a wave traveling on a string in a single direction, and a piezo stack actuator is situated under
the termination point of the string allowing the reflection of the wave to be manipulated directly. Various controllers are analyzed theoretically in terms of their performance, stability, and musical usefulness. They are then implemented and evaluated in terms of their relevance to the design of new musical instruments.
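The effect of manipulating the reflection at a string termination can be illustrated with a digital-waveguide toy model: two delay lines carry the right- and left-going waves, one end is rigid, and the controlled end returns a fraction r of the incident wave. This is a schematic of the general technique with hypothetical values, not the controller implemented in the study.

```python
# Digital-waveguide sketch of traveling-wave control at a string
# termination. |r| < 1 removes energy at each reflection; r is the knob
# a piezo-actuated termination would provide (all values hypothetical).
from collections import deque

def decay_per_pass(r, passes, n=50):
    right = deque([0.0] * n)   # right-going delay line
    left = deque([0.0] * n)    # left-going delay line
    right[0] = 1.0             # pluck: unit right-going pulse
    peak = []
    for _ in range(passes * 2 * n):
        out_r = right.pop()            # wave arriving at controlled end
        out_l = left.pop()             # wave arriving at rigid end
        right.appendleft(-out_l)       # rigid end: reflection -1
        left.appendleft(r * out_r)     # controlled end: reflection r
        peak.append(abs(out_r))
    return max(peak[-2 * n:])          # amplitude in the final period

amp = decay_per_pass(r=-0.5, passes=4)
```

Each round trip multiplies the circulating pulse by the product of the two reflection coefficients, so tuning r directly sets the decay rate (and, with a frequency-dependent r, the timbre) of the string.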
3:00
2pMU6. Astounding sounds, amazing music—At the crossroads of audio control. Joseph A. Paradiso (Media Lab, MIT, MIT Media
Lab, E14-548P, Cambridge, MA 02139, joep@media.mit.edu)
The ways in which we produce, compose, discover, and consume music have changed profoundly in just the last few decades as the conveyances of these capabilities have digitally converged. What does it mean to “play” a musical instrument, and what will a musical performance become? In this presentation, I will explore these fringes via recent projects from my research team at the MIT Media Lab.
This includes frameworks to enable composers to exploit sources of “big” data to realize their music (ranging from physics detectors at
the Large Hadron Collider to a sensor-laden former cranberry bog turning into a wetland). In a related vein, I will introduce an interactive sonification framework we have devised to dynamically rotate complex data from visual to audio, with the goal of optimally engaging eyes and ears. At the other extreme, I will describe a set of new physical instruments for musical control, such as stretchable fabric
keyboards, instruments designed to be breakable during performance, and collaborative instruments that leverage social media and the
Internet of Things. Finally, I will give my perspective on the recent resurgence of modular synthesizers, grounded in having built and
designed perhaps the world’s largest homemade modular system between 30 and 40 years ago.
3:20–3:40 Break
Contributed Papers
3:40
2pMU7. Monitoring saxophone reed vibrations using a piezoelectric sensor. Alex Hofmann, Vasileios Chatziioannou, Alexander Mayer (Music Acoust. (IWK), Univ. of Music and Performing Arts Vienna, Anton von Webern Platz 1, Vienna 1030, Austria, hofmann-alex@mdw.ac.at), and Harry Hartmann (Fiberreed, Leinfelden-Echterdingen, Germany)
In sound production on single-reed woodwind instruments, the reed oscillates at a frequency related to the length of the resonator. Strain gauge sensors attached to single reeds have been used to capture the vibrations of the reed in order to investigate articulation techniques on saxophone and clarinet. Reeds can be made from natural cane or from synthetic materials such as oriented polymers or layers of fiber-reinforced polymers. Such synthetic reeds allow sensors to be integrated inside the reed during manufacture. However, integrated strain gauge sensors produced signals with high noise, which have been shown not to be ideal for amplification purposes. Replacing the integrated strain gauge with a piezo film sensor greatly enhanced the sound quality of the sensor reeds. With this procedure, electronically augmented woodwind instruments may be constructed for performance, acoustic measurements, and music pedagogy feedback systems.
4:00
2pMU8. Active control of Chinese gongs. Marguerite Jossic (Institut d’Alembert, UPMC CNRS UMR 7190, Tour 55-65, 4 Pl. Jussieu, Paris Cedex 05 75252, France, marguerite.jossic@upmc.fr), Vivien Denis, Olivier Thomas (Arts et Metiers ParisTech, LSIS UMR CNRS 7296, 8 bd. Louis XIV, Lille, France), Adrien Mamou-Mani (IRCAM CNRS UPMC UMR 9912, 1 pl. Stravinsky 75004, Paris, France), Baptiste Chomette (Institut d’Alembert, UPMC CNRS UMR 7190, 4 Pl. Jussieu 75005, Paris, France), and David Roze (IRCAM CNRS UPMC UMR 9912, 1 pl. Stravinsky 75004, Paris, France)
Active control provides one of the most promising ways of modifying instruments’ sound. However, among the various control techniques covered by this discipline, most experimental applications have so far been limited to instruments that retain a linear behavior under normal playing conditions. This study explores the possibility of applying active control to Chinese gongs. These instruments exhibit geometric nonlinearities in their dynamical behavior, such as the very characteristic pitch glide of the fundamental mode. The implementation of a nonlinear control of this pitch glide is introduced following a two-step process. First, a reduced nonlinear model of the instrument dynamics is determined from the von Karman and normal form theories. This formulation allows the nonlinear dynamics of the fundamental mode to be described by a single nonlinear mode governed by a Duffing equation. The experimental identification of the Duffing model parameters is performed by measuring the backbone curve of the fundamental mode in forced vibrations. Second, the methodology for the nonlinear control is developed. In particular, the determination of the control law and the overall stability of the control system are discussed. Control simulation results as well as perspectives for experimental applications are finally presented.
4:20
2pMU9. No-latency feedback controller coupled to real-time physical models using programmable hardware. Florian Pfeifle (Univ. of Hamburg, Neue Rabenstrasse 13, Hamburg 20354, Germany, Florian.Pfeifle@uni-hamburg.de)
Over the last years, advances in technology and methodology have made it possible to simulate and synthesize highly realistic finite difference (FD) models in real time or close to real time. Still, most conventional processing platforms introduce latency into the signal processing chain due to sequential processing and/or communication protocol timing and throughput restrictions. This can be a severe penalty when developing expressive controller interfaces for large-geometry physical models. Using field programmable gate array (FPGA) hardware enables highly customized interface and FD model designs that are able to meet hard real-time requirements even for large physical models. In this work, a modified five-string banjo coupled to a real-time physical modeling synthesis application running on an FPGA development board is presented. The proposed methodology is an extension of an existing PCIe-enabled interface which was primarily developed for research applications. The new interface is aimed at facilitating expressive interaction for musical purposes. The bi-directional excitation and feedback is realized using standard electro-mechanical sensors and actuators, making it possible to connect the string vibration of the modified banjo to arbitrary geometries as well as to capture the response of the modeled structure and feed it back to the acoustic instrument. FD models of different shapes and materials are implemented, resulting in physically impossible instrument configurations that yield unique tone production capabilities.
4:40
2pMU10. Audio-haptic interaction with modal synthesis models. Edgar J. Berdahl (Music, Louisiana State Univ., 102 New Music Bldg., Baton Rouge, LA 70803, eberdahl@ccrma.stanford.edu)
Using a musical instrument that is augmented with haptic feedback, a performer can be enabled to haptically interact with a modal synthesis-based sound synthesizer. This subject is explored using the resonators object in Synth-A-Modeler. For synthesizing a single mode of vibration, the resonators object should be configured with a single frequency in Hertz, a single T60 exponential decay time in seconds, and a single equivalent mass in kg. Changing the mass not only changes the level of the output sound, it also changes how the mode feels when touched using haptic feedback. The resonators object can further be configured to represent a driving-point admittance corresponding to arbitrarily many modes of vibration. In this case, each mode of vibration is specified by its own frequency, decay time, and equivalent mass. Since the modal parameters can be determined using an automated procedure, it is possible (within limits) to approximately calibrate modal models using recordings of sounds that decay approximately exponentially. Various model structures incorporating the resonators object are presented in a variety of contexts. The musical application of these models is demonstrated alongside presentation of compositions that use them.
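The modal parameterization described in the audio-haptic abstract above (a frequency in Hz, a T60 decay time in s, and an equivalent mass in kg) maps onto a damped sinusoid in a few lines. This is an illustrative sketch of the standard modal-synthesis recipe, not Synth-A-Modeler's actual resonators implementation.

```python
import math

# One mode parameterized by frequency (Hz), T60 decay time (s), and
# equivalent mass (kg). Halving the mass doubles the output level for the
# same input force, matching the role of equivalent mass described above.

def mode_impulse_response(freq_hz, t60_s, mass_kg, fs=44100, dur_s=0.1):
    """Displacement response of one mode to a unit impulse of force."""
    w = 2 * math.pi * freq_hz
    sigma = 3 * math.log(10) / t60_s  # decay rate giving -60 dB at t60
    out = []
    for n in range(int(fs * dur_s)):
        t = n / fs
        # lightly damped impulse response of m*x'' + c*x' + k*x = f
        out.append(math.exp(-sigma * t) * math.sin(w * t) / (mass_kg * w))
    return out

y = mode_impulse_response(440.0, t60_s=0.5, mass_kg=0.01)
```

Summing several such modes, each with its own (frequency, T60, mass) triple, gives the driving-point behavior that the resonators object represents for arbitrarily many modes.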
MONDAY AFTERNOON, 26 JUNE 2017
ROOM 203, 1:15 P.M. TO 5:40 P.M.
2pNSa
Noise, Architectural Acoustics and ASA Committee on Standards: Noise Impacts and Soundscapes on
Outdoor Gathering Spaces II
Brigitte Schulte-Fortkamp, Cochair
Institute of Fluid Mechanics and Engineering Acoustics, TU Berlin, Einsteinufer 25, Berlin 101789, Germany
K. Anthony Hoover, Cochair
McKay Conant Hoover, 5655 Lindero Canyon Road, Suite 325, Westlake Village, CA 91362
Chair’s Introduction—1:15
Invited Papers
1:20
2pNSa1. Soundscape design of an open-air concert venue using virtual reality technologies. Andy Chung (Smart City Maker, Hong
Kong Plaza, Hong Kong HKSAR, Hong Kong, ac@smartcitymaker.com), W. M. To (Macao Polytechnic Inst., Macao, Macao), and
Brigitte Schulte-Fortkamp (TU Berlin, Berlin, Germany)
Hong Kong has two mega-projects underway that include open-air concert venues. One of them is called the Kai Tak Sports Park
and the other is the West Kowloon Cultural District. As an open-air concert venue will produce amplified music and noise from the audience, as well as vehicular traffic noise and crowd noise before and after the concerts, an a priori analysis of the soundscape and noisescape in the district due to musical events is highly desirable. This paper reviews some noise incidents due to concerts at the Hong Kong Stadium and Chinese operas at other public places. We suggest that virtual reality technologies should be utilized so that stakeholders can hear the possible soundscape and noisescape at different periods of a musical event during public engagement.
1:40
2pNSa2. How does activity affect soundscape assessments? Insights from an urban soundscape intervention with music. Daniel
Steele, Cynthia Tarlao (MIL & CIRMMT, McGill Univ., 3661 Peel St, Montreal, QC H3A 1X1, Canada, daniel.steele@mail.mcgill.ca),
Edda Bild (Universiteit van Amsterdam, Amsterdam, Netherlands), Julian Rice, and Catherine Guastavino (MIL & CIRMMT, McGill
Univ., Montreal, QC, Canada)
The relationship between activity and soundscape has recently garnered research attention, particularly in public spaces. In the
summer of 2015, we installed an interactive sound system (Musikiosk) in a busy public park allowing users to play their own content
over high-quality speakers. Questionnaires (N = 197) were administered over 3 conditions: pre-installation with park users, during the
installation phase with Musikiosk users, and during the installation phase with park users not using Musikiosk. For users and observers
of Musikiosk, a separate evaluation of the Musikiosk intervention was also included. The questionnaire included quantitative evaluations
(soundscapes scale from Swedish Soundscape Quality Protocol, restorativeness, mood, noise sensitivity), free response data (soundscape
description, self-reported activity, sound source identification, reasons for park visit), and demographic info (age, interaction with others,
proximity of residence). The qualitative descriptions of activity and sound sources were categorized into emergent themes. Presented
here is the analysis of the interaction between activity and soundscape assessment in terms of quantitative variables and qualitative
descriptions.
2:00
2pNSa3. A perception-based protocol for the systematic selection of urban sites with specific soundscapes. Bert De Coensel and Dick Botteldooren (Information Technol., Ghent Univ., iGent, Technologiepark-Zwijnaarde 15, Ghent B-9052, Belgium, bert.decoensel@ugent.be)
The Urban Soundscapes of the World project aims to set the stage for a standard on recording and reproducing urban acoustic environments with soundscape in mind. Immersive audiovisual recordings, which combine high-quality spatial (binaural) audio with 360-degree video, are valuable as an ecologically valid baseline for assessing the perceptual influence of noise control and soundscaping measures through auralization. As architects and designers commonly work by example, one of the goals of this project is to compile a comprehensive reference database of well-documented exemplars. These are to be recorded at a range of urban sites with a wide variety of soundscapes, in order to achieve good statistical power in any subsequent analysis. For this purpose, a protocol for selecting recording locations and time periods in a systematic way is developed, based on a common questionnaire conducted among panels of local experts in each selected city. The questionnaire contains open questions that look for public spaces inside the city that are perceived in various ways, regarding the presence of sound sources, the perceived affective quality, and the appropriateness of the sound environment.
2:20
2pNSa4. A trial investigation to understand the characteristics of soundscape in a busy town from the viewpoint of sound quality. Takeshi Akita (Dept. of Sci. and Technol. for Future Life, Tokyo Denki Univ., 5 Senju-Asahi-cho Adachi-ku, Tokyo 1208551, Japan, akita@cck.dendai.ac.jp)
To reveal the characteristics of the soundscape of a town with an open and busy place such as a shopping avenue, an investigation was carried out in which subjects were asked to identify sounds that attracted their attention and to evaluate their sound quality while walking around the town. Eight subjects participated in the investigation, which was carried out in the Kita-Senju district of Tokyo. Each subject was required to walk around an assigned area containing a busy shopping avenue and road, to write down the name of each sound, to photograph the sound source, and to evaluate its sound quality whenever his or her attention was attracted by a sound. Sound quality was evaluated on five-step scales for strength, fineness, and acuteness. Results show that, apart from road traffic noise, there are many kinds of sounds that give little impression of strength. On the other hand, the evaluated results for fineness and acuteness differ among sounds that were evaluated as not so loud. They also show that there are many meaningless sounds and machine-originated noises. It is suggested that this method of identifying and evaluating attention-attracting sounds in busy areas can clarify the characteristics of the soundscape and contribute to creating a fine sonic environment.
2:40
2pNSa5. Measuring sounds with a grid method in examining public spaces. Yalcin Yildirim (Urban Planning and Public Policy, Univ. of Texas at Arlington, 601 W Nedderman Dr. #203, Arlington, TX 76019, yalcin.yildirim@mavs.uta.edu)
This study provides an outlook on the association between sound and public space through sound level measurements in the Dallas-Fort Worth metropolitan area. The main research question is whether the characteristics of public open spaces (program elements, position of public spaces and roads, and public space usage in different time intervals) are related to sound levels. To investigate this, sound level meters were applied to the sound environment in public open spaces using a grid method. This study finds that time intervals affect sound levels, and practice should concentrate on this essential element. At present, sound is a forgotten component that receives little attention from urban planners, architects, landscape architects, civil engineers, and many other disciplines, and there are only a few studies of such sound relationships in the United States. Hence, this research illustrates a point of view for sound research in rapidly growing urban areas.
3:00
2pNSa6. Soundscape of Arctic settlements: Longyearbyen and
Pyramiden. Dorota Czopek, Pawel Malecki, Janusz Piechowicz, and Jerzy
Wiciak (AGH Univ. of Sci. and Technol., al. Mickiewicza 30, Kraków 30-059, Poland, dorota.czopek@agh.edu.pl)
This paper presents a soundscape analysis of two settlements on Spitsbergen in the Svalbard archipelago. The first, Longyearbyen, is the largest settlement on Spitsbergen, with a population of about 2000. It is the administrative center of Svalbard, with an airport and the seat of the Governor of Svalbard. The second, Pyramiden, is a Russian coal-mining settlement that closed in 1998. Since 2007, Pyramiden has become a tourist attraction with a hotel and a small museum; only a few workers live there permanently. Two one-week research expeditions were organized to perform preliminary Arctic soundscape measurements: a summer expedition during the polar day, and a winter-spring expedition during the transition period between the polar night and polar day. Long- and short-term sound pressure level measurements, together with ambisonic recordings of unique and typical sounds, were made. Both qualitative assessment and quantitative analysis of the results were carried out. The identification and classification of the existing sound sources were conducted. Furthermore, noise maps of both places were produced, together with a comparative analysis.
3:20–3:40 Break
Contributed Papers
3:40
2pNSa7. Introduction and management of noise low emission zones:
LIFE MONZA project. Raffaella Bellomini (Universita’ Di Firenze, Via
Stradivari 19, Firenze 50127, Italy, raffaella.bellomini@vienrose.it),
Rosalba Silvaggio (ISPRA, Rome, Italy), Sergio Luzzi (VIE EN.RO.SE.
Ingegneria, Firenze, Italy), and Francesco Borchi (Universita’ Di Firenze,
Firenze, Italy)
The introduction of Low Emission Zones, urban areas subject to road traffic restrictions in order to ensure compliance with the air pollutant limit values set by the European Directive on ambient air quality (2008/50/EC), is a common and well-established action in the administrative government of cities, and the impacts on air quality improvement are widely analyzed, while the effects and benefits concerning noise have not been addressed in a comprehensive manner. The definition, the criteria for analysis, and the management methods of a Noise Low Emission Zone are not yet clearly expressed and shared. The LIFE MONZA project (Methodologies for Noise low emission Zones introduction and management, LIFE15 ENV/IT/000586) addresses these issues. The first objective of the project, co-funded by the European Commission, is to introduce an easily replicable method for the identification and management of Noise Low Emission Zones, urban areas subject to traffic restrictions whose impacts and benefits regarding noise issues will be analyzed and tested in a pilot area of the city of Monza, in northern Italy. Background conditions, structure, and objectives of the project will be discussed in this paper.
4:00
2pNSa8. Beyond the Noise: Open Source Soundscapes. A mixed methodology to analyze and plan small, quiet areas on the local scale, applying the soundscape approach, the citizen science paradigm, and open source technology. Antonella Radicchi (Institut für Stadt- und Regionalplanung, Technische Universität Berlin, Hardenbergstraße 40a, Sekr. B 4, Berlin 10623, Germany, antonella.radicchi@tu-berlin.de)
Today, cities have become increasingly noisy. In Europe, over 125 million people are affected by noise pollution from traffic every year, and quietness appears to be becoming a luxury available only to elites. There is growing interest in protecting and planning quiet areas, which have been recognized as a valid tool to reduce noise pollution. However, developing a common methodology to define and plan quiet areas in cities is still challenging. The “Beyond the Noise: Open Source Soundscapes” project aims to fill this gap in knowledge by applying the soundscape approach, the citizen science paradigm, and open source technology, with the ultimate goal of making quietness a commons. Accordingly, a new mixed methodology to analyze and plan small, quiet areas on the local scale has been tested through a pilot study in a Berlin neighborhood affected by environmental injustice and noise pollution. In this pilot study, a number of citizens have been involved in crowdsourcing data related to “everyday quiet areas” by using novel mobile technologies. This contribution illustrates the project’s theoretical background, the methods applied, the first findings of the study, and its potential impact.
4:20
2pNSa9. Assessment of the relation between psychoacoustic parameters and the subjective perception of urban soundscapes. Daniel de la Prida, Antonio Pedrero, César Díaz, and María Ángeles Navacerrada (Grupo de Investigación en Acústica Arquitectónica, Tech. Univ. of Madrid, UPM: Escuela Técnica Superior de Arquitectura, Avenida Juan de Herrera 4, Madrid 28040, Spain, d.delaprida@alumnos.upm.es)
Since soundscapes are strongly related to human perception, the sound pressure level alone does not seem sufficient to represent a soundscape. Therefore, the characterization of a soundscape might be improved by using psychoacoustic parameters. A study has been conducted in which the relationship between psychoacoustic parameters and subjective perception has been analyzed for a collection of urban spaces. For that purpose, several locations in the city of Madrid were selected based on their main use and geometrical features. Binaural recordings were then made over several days and in different seasons, which allows observation of behavioral differences at the same locations under different conditions. Psychoacoustic parameters, as well as the sound pressure level, have been calculated for both the complete recordings and selected parts of them. A semantic differential listening test has been carried out to look for correlations between the calculated parameters and the subjective perception of a panel of participants. Finally, an automatic clustering is presented for the collection of locations. The adequacy of the proposed clustering method is evaluated by comparing the clusters to the psychoacoustic parameters and the subjective responses from the listening test.
4:40
2pNSa10. Music to some, noise to others; reducing outdoor music
festivals’ sonic impact on surrounding communities. Case study:
KAABOO 2016. Pantelis Vassilakis (AcousticsLab - Acoust. Consulting,
616 W Imperial Ave., #4, El Segundo, CA 90245, pantelis@acousticslab.
org) and Aaron Davis (Audio, ECTO Productions Inc., Bensenville, IL)
For music scenes to coexist and thrive alongside residential communities, approaches to the problem of music as noise must acknowledge the
impact of noise signal type on listener annoyance levels. The challenge has
yet to be properly addressed by the environmental acoustics community,
which focuses on measurement standards and mitigation techniques applicable to mechanical noise but unfit to address noise issues related to music.
Differences include short versus long range contexts, health versus annoyance considerations, and continuous/unintelligible versus time-variant/intelligible source signals. Noise ordinances often introduce further
complications, requiring disambiguation to provide valid/assessable expectations. The presentation outlines how the problem was successfully tackled
for KAABOO 2016, a large-scale open-air music festival involving over
100 acts, over 75,000 patrons, and multiple outdoor stages. We discuss (a)
working with the city and venue to fine-tune noise ordinance expectations
and support valid compliance assessment; (b) designing and deploying sophisticated sound systems, powerful enough to fulfill audience expectations
and focused enough to effectively reduce noise impact on the surrounding
communities; (c) cooperating with the artists’ teams to appropriately reduce
on-site levels; and (d) obtaining relevant noise data prior to and during the
event to validly capture the event’s noise impact and formally assess
compliance.
5:00
2pNSa11. Soundscapes, social media, and big data: The next step in
strategic noise mapping. Eoin A. King, She’ifa Punla-Green, and Samuel
Genovese (Acoust. Program and Lab, Univ. of Hartford, 200 Bloomfield
Ave., West Hartford, CT 06117, eoking@hartford.edu)
The current state of the art in noise assessment involves the development of a strategic noise map to identify areas with excessive noise levels,
expressed in terms of a single time-averaged noise indicator. While noise
maps yield important information regarding sound pressure levels in a particular space, they do not give any representation of the overall sound quality in that space. A more human-centered approach to noise assessment
could be achieved by developing soundscapes as a complementary tool to
noise mapping. However, most soundscape studies traditionally use surveys
or interviews to assess general sentiment toward the acoustic environment,
and as such are generally restricted to small geographic areas, compared to
the entire cities considered in noise mapping. Instead of using traditional
assessment techniques, this project aims to harness the potential of big data,
including, for example noise complaint data or social media chatter related
to noise, to better assess public sentiments towards soundscapes. This would
yield an unparalleled dataset of public opinions and perceptions of the
acoustic environment. Initial results based on an analysis of NYC311 complaints and geo-localized data mined from Twitter are presented.
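The single time-averaged indicator that strategic noise maps report is an equivalent continuous level, i.e., an energy mean of sound pressure levels. A minimal sketch (the sample levels below are made up for illustration):

```python
import math

# Equivalent continuous sound level Leq: the energy mean of a series of
# sound pressure levels, the kind of single time-averaged indicator that
# strategic noise maps express. Sample values are hypothetical.

def leq_db(levels_db):
    """Energy-average a list of levels in dB into a single Leq in dB."""
    mean_energy = sum(10 ** (l / 10) for l in levels_db) / len(levels_db)
    return 10 * math.log10(mean_energy)

samples = [62.0, 65.0, 71.0, 58.0]  # hypothetical 1-s LAeq samples, dB(A)
overall = leq_db(samples)
```

Because the average is taken on energies, a few loud events dominate the indicator, which is one reason a single Leq says little about the perceived quality of a soundscape, as the abstract argues.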
5:20–5:40 Panel Discussion
MONDAY AFTERNOON, 26 JUNE 2017
ROOM 202, 1:20 P.M. TO 5:20 P.M.
2pNSb
Noise, Physical Acoustics, ASA Committee on Standards, and Structural Acoustics and Vibration:
Sonic Boom Noise III: Community Exposure and Metrics
Philippe Blanc-Benon, Cochair
Centre acoustique, LMFA UMR CNRS 5509, Ecole Centrale de Lyon, 36 avenue Guy de Collongue,
Ecully 69134 Ecully Cedex, France
Victor Sparrow, Cochair
Grad. Program in Acoustics, Penn State, 201 Applied Science Bldg., University Park, PA 16802
Invited Papers
1:20
2pNSb1. NASA’s Low Boom Flight Demonstration: Assessing community response to supersonic overflight of quiet supersonic
aircraft. Peter Coen, Alexandra Loubeau (NASA, NASA Langley Res. Ctr., MS 264, Hampton, VA 23681, peter.g.coen@nasa.gov),
and Brett Pauer (NASA, Edwards, CA)
Innovation in Commercial Supersonic Technology is one of six thrusts that guide NASA’s Aeronautics Research. The near term
objective of this activity is establishment of standards for acceptable overland supersonic flight, in cooperation with international standards organizations. To accomplish this objective, NASA believes the next step is to conduct a flight demonstration using a research aircraft designed to produce not a sonic boom, but a quieter “thump” sound. Based on the success of recent research, NASA has initiated
design studies on a Quiet Supersonic Technology (QueSST) Aircraft. The Flight Demonstration will culminate in a series of campaigns
in which the QueSST aircraft will be flown over communities. Surveys will be conducted to develop a database of public response to the
sounds. This data will support ongoing international efforts to develop the noise certification standard. While the first flight of the aircraft
is still a few years away, NASA recognizes there is much to be done to prepare for the community response test phase. This includes
community identification and engagement, survey and instrumentation design, and local and federal government approvals. The paper
will present background on NASA’s planning to date and solicit input from the research community on next steps.
1:40
2pNSb2. An examination of the variations in estimated models for predicting annoyance due to supersonic aircraft noise. Daniel
J. Carr and Patricia Davies (Ray W. Herrick Labs., School of Mech. Eng., Purdue Univ., 177 South Russell St., West Lafayette, IN
47907-2099, daviesp@purdue.edu)
There is a need for good criteria to evaluate the acoustic outcomes of designs of future commercial supersonic aircraft. Such criteria
could be used with sound predictions to assess impact on communities under flight paths of supersonic aircraft. While surveys of communities exposed to supersonic aircraft noise should be part of the criteria development and validation, some candidate models of people’s responses need to be developed to help focus the design of the community tests. While several tests have been conducted, the
models proposed to predict annoyance differ. Analysis of response data from several sonic boom subjective tests is presented. Either
indoor or outdoor sounds have been used in the tests, and the models are based on metrics from indoor and from outdoor sounds. The
effects of the environment in which people hear the sounds, the signal sets used in the tests and in the analysis, and the metrics and types
of models used are discussed.
2:00
2pNSb3. Dose-response model comparison of recent sonic boom community annoyance data. Jonathan Rathsam (NASA Langley
Res. Ctr., MS 463, Hampton, VA 23681, jonathan.rathsam@nasa.gov) and Laure-Anne Gille (Shanghai, China)
To enable quiet supersonic passenger flight overland, NASA is providing national and international noise regulators with a low-noise
sonic boom database. The database will consist of dose-response curves, which quantify the relationship between low-noise sonic boom
exposure and community annoyance. The recently updated international standard for environmental noise assessment, ISO 1996-1:2016, references two fitting methods for dose-response analysis. Fidell’s community tolerance method is based on theoretical assumptions that fix the slope of the curve, allowing only the intercept to vary. By contrast, Miedema and Oudshoorn’s method is based on multilevel grouped regression. These fitting methods are applied to an existing pilot sonic boom community annoyance data set from 2011
with a small sample size. The purpose of this exercise is to develop data collection and analysis recommendations for future sonic boom
community annoyance surveys.
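The two fitting approaches differ mainly in how many parameters are left free. A minimal sketch of the fixed-slope idea, in the spirit of the community tolerance method (the logistic form, slope value, and search grid below are illustrative assumptions, not the ISO 1996-1:2016 formulation):

```python
import math

def fit_intercept_fixed_slope(levels, p_ha, slope=0.3, grid=None):
    """Fit only the intercept a of a logistic dose-response curve
    p = 1 / (1 + exp(-(a + slope * L))), with the slope held fixed.

    Returns the grid value of a minimizing squared error. This mimics
    the one-free-parameter spirit of community-tolerance fitting; it is
    not the exact ISO 1996-1:2016 formulation.
    """
    if grid is None:
        grid = [a / 100.0 for a in range(-4000, 0)]  # a in [-40.00, -0.01]

    def sse(a):
        return sum((p - 1.0 / (1.0 + math.exp(-(a + slope * L)))) ** 2
                   for L, p in zip(levels, p_ha))

    return min(grid, key=sse)

# Synthetic "percent highly annoyed" data generated with a known intercept.
truth = -25.0
levels = [70, 75, 80, 85, 90, 95]  # illustrative boom exposure levels (dB)
p_ha = [1.0 / (1.0 + math.exp(-(truth + 0.3 * L))) for L in levels]
a_hat = fit_intercept_fixed_slope(levels, p_ha)
print(a_hat)  # recovers approximately -25.0
```

With only the intercept free, even a small pilot data set constrains the curve, which is the practical motivation for this family of fits when sample sizes are small.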
2:20
2pNSb4. A multiple-criteria decision analysis to evaluate sonic boom noise metrics. Joe DeGolia (Elec. Eng., Univ. at Buffalo, 12
Capen Hall, Buffalo, NY 14260, jedegoli@buffalo.edu) and Alexandra Loubeau (Structural Acoust., NASA Langley Res. Ctr.,
Hampton, VA)
A statistical analysis was performed to select which noise metrics are best at explaining human annoyance to sonic boom noise. This
follows previous work to downselect these metrics, but offers a more robust argument. The analysis began with the calculation of a set of
thirteen noise metrics and the collection of information about their explanatory power (r² between annoyance rating and noise metric) in five
laboratory dose-response studies performed at different sonic boom simulation facilities. In these studies, indoor and outdoor human
responses were gathered under various experimental conditions—booms alone, booms with rattle indoors, and booms with indoor vibration. This input data was then passed through two stages of multiple-criteria decision-making algorithms. In the first step, a Pareto efficiency analysis was conducted to objectively group metrics and eliminate poor contenders. The second step involved an Analytic
Hierarchy Process to rank the remaining metrics, with an option to subjectively weight the studies by perceived importance. The result
of this downselection is the ranking of five metrics that were selected as top contenders: BSEL, ESEL, ISBAP, PL, and DSEL. [ISBAP
= PL + 0.4201(CSEL-ASEL)].
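The bracketed ISBAP definition is a direct linear combination of standard metrics and can be computed as follows (the metric values in the example are illustrative, not from the study):

```python
def isbap(pl, csel, asel):
    """Indoor Sonic Boom Annoyance Predictor: PL + 0.4201 * (CSEL - ASEL).

    pl:   Stevens Perceived Level (dB)
    csel: C-weighted sound exposure level (dB)
    asel: A-weighted sound exposure level (dB)
    """
    return pl + 0.4201 * (csel - asel)

# Illustrative values only (dB): 80 + 0.4201 * (95 - 75) = 88.402
print(isbap(80.0, 95.0, 75.0))
```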
2:40
2pNSb5. Some practical difficulties in assessing community response to low-amplitude sonic booms. Sanford Fidell (Fidell Assoc.,
Inc., 23139 Erwin St, Woodland Hills, CA 91367, sf@fidellassociates.com), Richard Horonjeff (Consultant in Acoust. and Noise
Control, Boxborough, MA), and Vincent Mestre (Landrum and Brown, Irvine, CA)
NASA Langley Research Center has been engaged for several years in planning tests of public acceptance of exposure to low-amplitude sonic booms that will be created by its Quiet Supersonic Transport (QueSST) X-plane design. Estimation of a dosage-response relationship for the prevalence of high annoyance with sonic booms is a key part of this testing. The need to credibly assess prompt, single-event responses within carpet boom corridors extending along hundreds of miles of flight tracks and their linkage to sonic boom sound
levels at respondents’ homes is a large part of the challenge of establishing such a relationship. As many as tens of thousands of contact
attempts and thousands of completed interviews must be achieved within short time periods. Conventional social survey approaches to
measuring cumulative noise exposure and the prevalence of high annoyance in airport environs are ill-suited to such purposes. ADS-B-based, Internet-enabled flight tracking and impulse noise measurement, as well as high-speed interviewing methods, are currently under
investigation as potential solutions to difficulties in synchronizing interviews with arrival times of shock waves at residences of exposed
populations.
3:00
2pNSb6. Development of a metric utilizing outdoor sonic boom for correlating indoor human annoyance responses. John M.
Morgenstern (Lockheed Martin Aeronautics, 1011 Lockheed Way, Palmdale, CA 93599, john.morgenstern@lmco.com)
In 2015, the CAEP (Committee on Aviation Environmental Protection) SSTG (SuperSonic Task Group) assessed over 70 metrics
for resolving acceptability of sonic booms. A ground rule was that metrics be applied to the outdoor sonic boom signature. Studies indicate
people spend 90% of their time indoors, and sonic booms are expected to be as annoying or more annoying indoors. NASA developed an IER
(Indoor Environment Room) facility to simulate indoor sonic booms and collected 30 humans’ responses to 140 representative booms.
Because all other metrics were based on human perception of loudness without indoor effects, a new metric was developed. Previous
work suggested indoor annoyance from sonic booms was predominantly based upon indoor loudness, building response and associated
rattle. This new metric combines one metric for indoor loudness and one metric for building response and rattle. Indoor loudness [PL(i)]
is based upon the highest correlated outdoor metric, PL, but calculated after adjusting 1/3 octave levels for transmission loss. Building
response (BR) averages the 1/3 octave levels in the 10-40 Hz range. The combined metric correlated with the NASA indoor data with an R² of
0.939, higher than the next best metrics: ISBAP 0.921, PL(i) 0.910, and PL 0.881.
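As a sketch of the BR term only (the band levels in the example are illustrative, and the abstract does not specify how BR and PL(i) are combined):

```python
# Nominal 1/3-octave band center frequencies (Hz) covering 10-40 Hz.
THIRD_OCTAVE_CENTERS = [10.0, 12.5, 16.0, 20.0, 25.0, 31.5, 40.0]

def building_response(band_levels):
    """Average the 1/3-octave band levels (dB) in the 10-40 Hz range.

    band_levels: dict mapping band center frequency (Hz) -> level (dB).
    Bands outside the 10-40 Hz range are ignored.
    """
    levels = [band_levels[f] for f in THIRD_OCTAVE_CENTERS if f in band_levels]
    return sum(levels) / len(levels)

# Illustrative band levels only, not data from the study.
example = {10.0: 70.0, 12.5: 72.0, 16.0: 74.0, 20.0: 73.0,
           25.0: 71.0, 31.5: 69.0, 40.0: 66.0}
print(round(building_response(example), 2))
```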
3:20–3:40 Break
3:40
2pNSb7. Effects of model fidelity on indoor sonic boom exposure estimates. Jacob Klos (Structural Acoust. Branch, NASA Langley
Res. Ctr., 2 N. Dryden St., MS 463, Hampton, VA 23681, j.klos@nasa.gov)
Commercial supersonic flight is prohibited over land, but this may change in the near future with the introduction of supersonic aircraft that produce a substantially quieter sonic boom. A transient modal interaction model is used to simulate the acoustic and vibration
environment inside a large ensemble of residential homes to estimate the range in levels to which residents may be exposed during overflight of low-boom supersonic aircraft. However, the choice of fidelity used in the finite element models of the house structure (e.g.,
walls, floors, roofs, etc.) may have an impact on these exposure estimates. This presentation documents a recent study in which the fidelity of the structural finite element models was varied. Model fidelity was either an orthotropic panel approximation, in which the stiffening effects of studs were smeared over the entire panel, or a model that explicitly modeled the sheathing and studs. For sonic boom
noise, it was found that the orthotropic panel approximation performs well for partitions that have through-the-thickness geometric symmetry, for example walls with two sheathing surfaces. However, the orthotropic approximation does not perform as well for panels with
only one sheathing surface, which is typical of floors, ceilings, and roofs.
4:00
2pNSb8. Sonic boom weather analysis of the F-18 low boom dive maneuver. Juliet Page (Environ. Measurement and Modeling,
Volpe National Transportation Systems Ctr., 55 Broadway, Cambridge, MA 02142, juliet.page@dot.gov)
In support of community low boom test planning, a sonic boom analysis of ten years of weather data was conducted at multiple
coastal regions for an F-18 conducting the NASA low boom dive maneuver. The low boom dive maneuver involves an inverted dive
where the aircraft accelerates supersonically and then pulls out above 30,000 ft. During the dive maneuver, the sonic booms arrive on
both egg- and crescent-shaped isopemps. Due to the supersonic flight conditions and the propagation paths, booms from the earlier parts
of the trajectory arrive before those from the later parts of the flight path. The influence of the local meteorological conditions on this maneuver has
a striking effect on the sonic boom footprints, including the shape and location of the focal zone and the extent of the low-amplitude
sonic boom carpet region. The paper will describe the PCBoom sonic boom propagation results and interpretive techniques for assessing
potential coastal sites for conducting dose-response testing using the F-18 dive maneuver.
4:20
2pNSb9. A study of reflected sonic booms using airborne measurements. Samuel R. Kantor and Larry J. Cliatt (AeroDynam. and
Propulsion, NASA Armstrong Flight Res. Ctr., 4800 Lily Ave., Edwards, CA 93523, samuel.r.kantor@nasa.gov)
In support of ongoing efforts to bring commercial supersonic flight to the public, the Sonic Booms in Atmospheric Turbulence (SonicBAT) flight test was conducted at NASA Armstrong Flight Research Center. During this test, airborne sonic boom measurements were
made using an instrumented TG-14 motor glider, called the Airborne Acoustic Measurement Platform (AAMP). During the flight program, the AAMP was consistently able to measure the sonic boom wave that was reflected off the ground, in addition to the incident
wave, resulting in the creation of a completely unique data set of airborne sonic boom reflection measurements. This paper focuses on
using this unique data set to investigate the ability of sonic boom modelling software to calculate sonic boom reflections. Because the
algorithms used to model sonic boom reflections are also used to model the secondary carpet and over the top booms, the use of actual
flight data is vital to improving the understanding of the effects of sonic booms outside of the primary carpet. Understanding these
effects becomes especially important as the return of commercial supersonic flight approaches, and is also key to ensuring the accuracy of mission
planning for future experiments.
4:40
2pNSb10. The minimum number of ground measurements required for narrow sonic boom metric confidence intervals. William
Doebler and Victor Sparrow (The Penn State Univ., 201 Appl. Sci. Bldg., University Park, PA 16802, wfd5057@psu.edu)
In subsonic civilian flight, the FAA’s permissible noise standard sets a limit on average aircraft loudness and requires that the loudness be known to within an adequately narrow 90% confidence interval. The FAA and international partners are developing a certification
standard for the enroute regime of overland civil supersonic flight. In support of developing this standard, it may be useful to identify
the number, location, and frequency of ground measurements to ensure that metrics’ 90% confidence intervals for a supersonic flyover
are acceptably narrow. Using NASA’s Superboom Caustic Analysis and Measurement Program (SCAMP) database where an F-18 jet
flew above a linear 3048 m long 81-microphone array, confidence intervals of array-averaged metric values were calculated for six
steady speed, level altitude flights. Microphones were selectively removed from the metric averaging process using various techniques
to identify the effect of microphone number on confidence interval size. Preliminary results indicate ten measurements yield sufficiently
narrow confidence intervals compared to a large number of measurements. [Work supported by the FAA. The opinions, findings, conclusions, and recommendations expressed in this material are those of the authors and do not necessarily reflect the views of ASCENT
FAA Center of Excellence sponsor organizations.]
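The dependence of confidence-interval width on microphone count can be illustrated with a normal-approximation 90% interval for the array-averaged metric (the values below are illustrative, not SCAMP data):

```python
import statistics

def ci90_halfwidth(samples):
    """Half-width of a 90% normal-approximation confidence interval
    for the mean of a metric (dB) measured across microphones."""
    z = statistics.NormalDist().inv_cdf(0.95)  # two-sided 90% -> about 1.645
    s = statistics.stdev(samples)
    return z * s / len(samples) ** 0.5

# Illustrative metric values (dB) at microphones along an array:
few = [78.2, 79.1, 77.5, 80.0, 78.8]
many = few * 4  # 20 values with the same sample spread

# With the same spread, more microphones give a narrower interval,
# shrinking roughly as 1/sqrt(N).
print(ci90_halfwidth(few), ci90_halfwidth(many))
```

This 1/sqrt(N) shrinkage is why a modest number of well-placed measurements can already bound the array-averaged metric acceptably tightly.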
5:00
2pNSb11. Commercial space operations noise and sonic boom issues. Natalia Sizov (Office of Environment and Energy, Federal
Aviation Administration, 800 Independence Ave., SW, Washington, DC 20591, natalia.sizov@faa.gov)
Commercial space transportation is a rapidly developing industry worldwide. The expansion of the space transportation infrastructure creates additional challenges for the National Airspace System. New launch facilities have been and are being developed, some of which
are co-located with commercial airports. Rocket launch community noise impact requires adequate assessment and mitigation. New space
vehicles encompass a wide range of designs, geometries, and flight parameters. A database of operational launch profiles for rockets does not yet exist. The acoustical characteristics of these new vehicles may also differ from those of existing rockets or conventional aircraft. Spacecraft ascent sonic boom signatures have higher overpressure, longer duration, and a bow shock. There are no
standard methodologies for the environmental assessment of launch vehicles and sites. Developing and validating these models is an
emerging field. In addition, the metrics used for community noise assessment for commercial aircraft are being used for commercial
space operations and may not be appropriate for such use. The existing gaps in rocket launch community noise assessment, technical
and regulatory requirements, and current steps the FAA is undertaking to establish an environmental regulatory framework for the commercial space operations will be discussed.
MONDAY AFTERNOON, 26 JUNE 2017
ROOM 210, 1:20 P.M. TO 5:40 P.M.
2pPA
Physical Acoustics: Infrasound II
Roger M. Waxler, Cochair
NCPA, University of Mississippi, 1 Coliseum Dr., University, MS 38677
Pieter Smets, Cochair
R&D Department of Seismology and Acoustics, KNMI, PO Box 201, De Bilt 3730 AE, Netherlands
Invited Papers
1:20
2pPA1. Mapping nonlinear infrasound penetration into a shadow zone: Results from rocket motor blasts at the Utah Test and
Training Range. Catherine de Groot-Hedlin (Scripps Inst. of Oceanogr., Univ. of California at San Diego, 9500 Gilman Dr., La Jolla,
CA 92037-0225, chedlin@ucsd.edu)
Each summer, large-scale detonations are carried out at the Utah Test and Training Range (UTTR) west of Salt Lake City. In 2016,
acoustic sensors were placed at ranges up to 90 km east of the UTTR test site. The frequencies of the recorded signals indicate that they
are infrasonic waves. The travel times imply that they are direct arrivals, although the sensors lie within a shadow zone. Finally, the signal amplitudes indicate that acoustic propagation is nonlinear near the source. Consequently, numerical simulations that account for diffraction and the effects of strong shocks are required to accurately map infrasound propagation into this region. A finite-difference time-domain infrasound propagation method is applied to these signals. The algorithm relies on the assumption that the environmental model
is azimuthally symmetric about the source location, allowing for efficient numerical computation of acoustic propagation from a spherical source. For each detonation, numerical computations are performed along a series of azimuths from the shot position, using accurate
weather and topography along each path. The results show that infrasound penetration into the shadow zone is accurately predicted. The
synthesized over-pressures are positively correlated with observed pressure amplitudes, to an accuracy of 8 dB.
1:40
2pPA2. A numerical study of infrasound scattering from atmospheric inhomogeneities based on the 3-D unsteady compressible
Navier-Stokes equations. Roberto Sabatini (Ctr. for Acoust., Ecole Centrale de Lyon, 36 Ave. Guy de Collongue, Ecully cedex 69134,
France, roberto.sabatini@doctorant.ec-lyon.fr), Olivier Marsden (ECMWF, Reading, United Kingdom), Christophe Bailly (Ctr. for
Acoust., Ecole Centrale de Lyon, Ecully, France), and Olaf Gainville (CEA/DAM/DIF, Arpajon, France)
A direct numerical simulation of the compressible unsteady Navier-Stokes equations is performed to investigate the 3-D nonlinear acoustic field generated by a high-amplitude infrasonic source placed at ground level in a realistic atmosphere. High-order finite differences and a
Runge-Kutta time integration scheme originally developed for aeroacoustic applications are employed. The atmosphere is parametrized as a
stationary and vertically stratified medium, constructed by specifying sound speed and mean wind profiles that mimic the main
trends observed during the Misty-Picture experiment. In the present talk, after a general description of the acoustic field observed up to 140
km altitude and 450 km range, the scattering from stratospheric inhomogeneities is investigated. The spectrum of the scattered wave recorded
at ground level is discussed in particular, and its dependence on the spectral properties of the inhomogeneities is highlighted. A fast
method for computing the scattered field, based on a wavelet representation of the temperature and wind fluctuations, is finally presented.
2:00
2pPA3. Local infrasound propagation in three dimensions simulated with in situ atmospheric measurements. Keehoon Kim,
Arthur Rodgers (Geophysical Monitoring Program, Lawrence Livermore National Lab., 7000 East Ave., L-103, Livermore, CA 94550,
kim84@llnl.gov), and Douglas Seastrand (Remote Sensing Lab., National Security Technologies, Las Vegas, NV)
Local infrasound propagation is influenced by atmospheric conditions. The vertical gradients of local ambient temperatures and winds
can alter the effective sound speed profiles in the atmosphere and dramatically change the focusing and defocusing behaviors of acoustic
waves at local distances. Accurate prediction of local infrasound amplitude is critical to estimating explosion energies of natural and/or
man-made explosions, and physics-based numerical simulation that accounts for three-dimensional propagation effects is required
for that purpose. The accuracy of numerical modeling is, however, often compromised by the uncertainty of the atmospheric parameters that
are used for the modeling. In this study, we investigate the impacts of local atmospheric conditions on infrasound propagation using the
data from chemical explosion experiments. In situ atmospheric conditions during the experiments are measured by a combination of (1)
local radiosonde soundings, (2) Atmospheric Sounder Spectrometer for Infrared Spectral Technology (ASSIST), (3) surface weather stations, and (4) a wind LIDAR profiler, which can complement atmospheric profiles for numerical simulations and capture local atmospheric
variability. We simulate three-dimensional local infrasound propagation using a finite-difference method with the local atmospheric measurements, and the accuracy of the numerical simulations is evaluated by comparison with the field observations.
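The effective sound speed mentioned above is conventionally the adiabatic sound speed plus the wind component along the propagation direction. A minimal sketch (dry-air constants assumed; the input values are illustrative, not measurements from the study):

```python
import math

def effective_sound_speed(temperature_k, wind_east, wind_north, azimuth_deg):
    """Effective sound speed (m/s): adiabatic sound speed plus the wind
    component along the propagation azimuth (degrees clockwise from north).

    Uses the standard approximation c_eff = sqrt(gamma * R * T) + u . n_hat
    for dry air (gamma = 1.4, R = 287.05 J/(kg K)).
    """
    c = math.sqrt(1.4 * 287.05 * temperature_k)
    az = math.radians(azimuth_deg)
    # Wind component along the unit vector toward the azimuth (east, north).
    along_wind = wind_east * math.sin(az) + wind_north * math.cos(az)
    return c + along_wind

# 288.15 K with a 10 m/s wind blowing toward the east, propagating east:
# downwind propagation is faster than upwind.
print(round(effective_sound_speed(288.15, 10.0, 0.0, 90.0), 1))
```

Vertical gradients of this quantity are what create the ducting, focusing, and defocusing behavior the abstract describes.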
2:20
2pPA4. Estimating the effects of an ellipsoidal earth and topography on infrasonic propagation. Philip Blom (Los Alamos
National Lab., Los Alamos National Lab., PO Box 1663, Los Alamos, NM 87545, pblom@lanl.gov)
Infrasonic signals often propagate significant horizontal distances so that predictions obtained using a flat ground approximation can
introduce inaccuracies. Simulations of propagation in an atmospheric layer around a spherical globe have shown non-negligible deviations from flat-ground predictions for both arrival locations and propagation times. Simulation predictions for flat-ground and spherical
earth models will be discussed, along with the additional challenges of implementing a non-spherical globe model and including topography, using the approximation of geometric acoustics. A non-spherical globe model, such as the WGS84 ellipsoid, is
found to produce range dependence in the propagation medium, even when a stratified local atmosphere is assumed. Further,
although scattering and diffraction effects are not included in the geometric limit, variations in the ground surface level can be included
in ray path computations to more accurately model propagation of infrasound. Propagation effects will be detailed for a tropospheric waveguide, for which interaction with the ground surface is significant, as well as for cases where source and receiver locations
differ in elevation.
2:40
2pPA5. Local-distance acoustic propagation from explosions. Stephen Arrowsmith, Nathan Downey, Leiph Preston, and Daniel C.
Bowman (Sandia National Labs., PO Box 5800, Albuquerque, NM 87185-0404, sjarrow@sandia.gov)
We study the effect of acoustic propagation from explosions on full waveforms using both empirical and numerical approaches.
Empirically, we explore the effects of meteorology, terrain, etc., on explosion signatures by exploiting a rich dataset of explosion measurements in different regions to relate specific path effects to second-order effects in the waveforms. Numerically, we explore the effects
using different full wave codes to understand observations from a unique experiment with both ground and air waveform and 3D wind
field measurements. We discuss implications for explosion yield estimation for surface explosions and for underground events.
Contributed Papers
3:00
2pPA6. The study of sudden stratospheric warmings using infrasound.
Pieter Smets, Jelle Assink, and Läslo Evers (R&D Dept. of Seismology and
Acoust., KNMI, PO Box 201, De Bilt 3730 AE, Netherlands, smets@knmi.nl)
Infrasound has a long history in monitoring sudden stratospheric
warmings (SSWs). Several pioneering studies have focused on the various
effects of a major warming on the propagation of infrasound, described
throughout this presentation. A clear transition can be noted from observing
anomalous signatures towards using these signals to study anomalies in
upper atmospheric specifications. The first studies describe the various
infrasonic signatures of a major warming. In general, the significant change
in observed infrasound characteristics corresponds to summer-like conditions
in midwinter. More subtle changes are noted during a minor warming,
recognizable by the presence of a bidirectional stratospheric duct. A
combined analysis of all signal characteristics reveals the general
stratospheric structure throughout the life cycle of the warming. From then
on, infrasound observations are used to evaluate the state of the atmosphere
as represented by various NWP models. A new methodology, comparing
regional volcano infrasound with simulations using various forecast steps,
indicates interesting variations in stratospheric forecast skill.
3:20–3:40 Break
3:40
2pPA7. NCPAprop—A software package for infrasound propagation
modeling. Roger M. Waxler (National Ctr. for Physical Acoust., Univ. of
MS, University, MS), Jelle D. Assink (The Royal Netherlands
Meteorological Inst., De Bilt, Netherlands), Claus Hetzer (National Ctr. for
Physical Acoust., Univ. of MS, University, MS), and Doru Velea (Leidos,
14668 Lee Rd., Chantilly, VA 20151, doru.velea@leidos.com)
Developed by the infrasound group at the National Center of Physical
Acoustics, University of Mississippi, and a few collaborators, NCPAprop is
an open-source software package that aims to provide a comprehensive set
of tested and validated numerical models for simulating the long-range propagation of infrasonic signals through the earth’s atmosphere. The algorithms
implemented in NCPAprop are designed for frequencies large enough that
the effects of buoyancy can be neglected and small enough that propagation
to ranges of hundreds to thousands of kilometers is possible without significant signal attenuation. Nominally, NCPAprop can, without modification,
be used to efficiently model narrowband propagation from 0.1 to 10 Hz and
broadband propagation from 0.05 Hz to 2 or 3 Hz. NCPAprop provides both
geometric acoustics and full-wave models, which will be presented. The geometric acoustics part consists of 2-D and 3-D ray tracing programs as well
as a nonlinear ray theory model. The full-wave models consist of a suite of
normal mode models of increasing complexity and a Parabolic Equation
(PE) model.
4:00
2pPA8. Acoustic/Infrasonic analysis and modeling of thunder from a
long-term recording in Southern France. Arthur Lacroix (Institut Jean Le
Rond d’Alembert (UMR 7190), Université Pierre et Marie Curie - Paris 6,
4 Pl. Jussieu, Paris 75005, France, arthur.lacroix@upmc.fr), Thomas
Farges (CEA, DAM, DIF, Arpajon, France),
Régis Marchiano, and François Coulouvrat (Institut Jean Le Rond
d’Alembert (UMR 7190), Univ. Pierre and Marie Curie - Paris 6, Paris,
France)
Thunder produces complex signals with a rich infrasonic and audible
frequency spectrum. These signals depend both on the source and on the
propagation to the observer. However, there is no general agreement on the
link between the observed spectral content and the generation mechanisms.
The objective of this study is to provide additional experimental and
theoretical investigations, especially on the return stroke, based on a
database of several thousand acoustic and electromagnetic signals recorded
in Southern France during autumn 2012 (HyMeX campaign). It contains a
sufficient number of events close to the source (<1 km) to minimize
propagation effects and to focus on the source effects. Source localization
and lightning acoustical reconstruction indicate that the infrasonic and
low-frequency audible parts (1-40 Hz) of the spectrum show no clear
differences between the return stroke and the intracloud discharges. These
observations are compatible with a source mechanism due to the thermal
expansion associated with the sudden heating of the air in the lightning
channel. An original model inspired by Few’s string-of-pearls theory has
been developed. It shows that the tortuous channel geometry explains, at
least partly, the low-frequency content of the observed thunder spectrum.
4:20
5:00
2pPA9. Infrasound scattering from stochastic gravity wave packets.
Christophe MILLET (CEA, DAM, DIF, CEA, DAM, DIF, Arpajon 91297,
France, christophe.millet@cea.fr), Bruno RIBSTEIN (LMD, ENS, Cachan,
France), and Francois LOTT (CMLA, ENS, Paris, France)
2pPA11. Simulating global atmospheric microbaroms from 2010
onward. Pieter Smets, Jelle Assink, and L€aslo Evers (R&D Dept. of
Seismology and Acoust., KNMI, PO Box 201, De Bilt 3730 AE,
Netherlands, smets@knmi.nl)
Long-range infrasound propagation problems are characterized by a
large number of length scales and a large number of propagating modes. In
the atmosphere, these modes are confined within waveguides causing the
sound to propagate through multiple paths to the receiver. In most infrasound modeling studies, the small scale fluctuations are represented as a
“frozen” gravity wave field that is superimposed on a given average background state, and the normal modes are obtained using a single calculation.
Direct observations in the lower stratosphere show, however, that the gravity wave field is very intermittent, and is often dominated by rather well
defined large-amplitude wave packets. In the present work, we use a few
proper modes to describe both the gravity wave field and the acoustic field.
Owing to the disparity of the gravity and acoustic length scales, the acoustic
field can be constructed in terms of asymptotic expansions using the method
of multiple scales. The amplitude evolution equation involves random terms
that can be related to vertically distributed gravity wave sources. To test the
validity of the theory, numerical results are compared with recorded signals.
It is shown that the present stochastic theory offers significant improvements
over current semi-empirical approaches.
Microbaroms are atmospheric pressure oscillations radiated from nonlinear ocean surface wave interactions. Large regions of interacting highenergetic ocean waves, e.g., ocean swell and marine storms, radiate almost
continuously acoustic energy. Microbaroms dominate the infrasound ambient noise field, which makes them a preferred source for passive atmospheric probing. Microbarom are simulated using a two-fluid model,
representing an atmosphere over a finite-depth ocean and a coupled oceanwave model providing the sea state. Air-sea coupling is crucial due to the
two-way interaction between surface winds and ocean waves. In this study,
a detailed overview is given on how global microbarom simulations are
obtained, including a sensitivity analysis of the various model input data
and parameterizations. Simulations are validated by infrasound array
observations of the International Monitoring Systems (IMS) of the Comprehensive Nuclear-Test-Ban Treaty Organization (CTBTO). An brief demonstration is given on the added value of global microbarom simulations for
infrasound studies and how to obtain these source simulations.
4:40
2pPA10. Spectral broadening of infrasound tones in mountain wave
fields. Florentin DAMIENS (LMD, ENS, Paris, France), Christophe
MILLET (CEA, DAM, DIF, CEA, DAM, DIF, Arpajon 91297, France,
christophe.millet@cea.fr), and Francois LOTT (LMD, ENS, Paris, France)
Linear theory of acoustic propagation is used to analyze how infrasounds trapped within the lower tropospheric waveguide propagate across
mountain waves. The atmospheric disturbances produced by the mountains
are predicted by a semi-theoretical mountain gravity wave model. For the
infrasounds, we solve the wave equation under the effective sound speed
approximation both using a spectral collocation method and a WKB
approach. It is shown that in realistic configurations, the mountain waves
can deeply perturb the low level waveguide, which leads to significant
acoustic dispersion. To interpret these results, we follow each acoustic
mode separately and show which mode is impacted and how. We show that
during statically stable situations, roughly representative of winter or night
situations, the mountain waves induce a strong Foehn effect downstream
which significantly shrinks the waveguide. This yields a new form of infrasound absorption, one that can largely outweigh the direct effect of the mask the mountains induce on the low-level waveguide. In contrast, when the low-level flow is less statically stable (summer or daytime situations), the mountain wave dynamics does not produce dramatic responses downstream; it can even favor the passage of infrasound waves, somewhat mitigating the direct effect of the obstacle.
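The effective sound speed approximation used above can be illustrated with a short sketch; the atmospheric profile, the constants, and the simple ray-trapping criterion below are illustrative assumptions, not the authors' modal computation.

```python
import numpy as np

def effective_sound_speed(T, u):
    """c_eff = c(T) + u: adiabatic sound speed for dry air plus the
    wind component along the propagation direction."""
    gamma, R = 1.4, 287.06  # heat-capacity ratio; gas constant, J/(kg K)
    return np.sqrt(gamma * R * T) + u

def duct_top(z, T, u):
    """Height of the first level aloft where c_eff again exceeds its
    surface value -- a schematic ray-trapping estimate of the top of a
    low-level waveguide (not the paper's modal computation)."""
    c = effective_sound_speed(T, u)
    above = np.nonzero(c[1:] >= c[0])[0]
    return z[1:][above[0]] if above.size else None
```

With a standard temperature lapse rate and a low-level wind jet, `duct_top` returns the height at which the effective sound speed first recovers its surface value; a Foehn-like warming or wind change downstream shifts or removes this level, which is the kind of waveguide perturbation the abstract describes.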
3628
J. Acoust. Soc. Am., Vol. 141, No. 5, Pt. 2, May 2017
5:20
2pPA12. Infrasound sensing on Mars: Wind noise predictions for a
porous dome geometry. Kevin Pitre and Andi Petculescu (Univ. of
Louisiana at Lafayette, Lafayette, LA 70503, kmp7935@gmail.com)
Infrasound sensing will play an important role in future Mars exploration. Applications include quantifying bolide impacts, monitoring subsurface activity and storm and dust-devil dynamics, and characterizing the
planetary boundary layer. As on Earth, infrasonic measurements are likely
to be hampered by wind-generated noise at the frequencies of interest.
Instead of the rosette-type filter geometry commonly used at Earth monitoring stations, porous hemispherical domes could be more easily deployed in
an ‘‘inverted-umbrella’’ configuration, with the sensor at the apex. By
adapting previous work (Noble et al., Proc. Meet. Acoust. 21, 045005
(2014)) to the conditions in the Martian surface layer, we predict the infrasonic wind noise at the center of a porous dome placed at the locations of
various Mars landers (Viking 1 and 2, Pathfinder, Mars Science Laboratory,
and Phoenix). The predictions include the turbulence-turbulence, turbulence-shear, and stagnation-pressure contributions, obtained for different
dome porosities and Martian wind speeds. Measurements at Mars’ surface
as well as interpolated data from the Mars Climate Database (www-mars.lmd.jussieu.fr), a detailed Mars circulation model, are used as inputs to
the model. The work was funded by a grant from the Louisiana Space
Consortium.
Acoustics ’17 Boston
MONDAY AFTERNOON, 26 JUNE 2017
ROOM 311, 1:20 P.M. TO 6:00 P.M.
2pPPa
Psychological and Physiological Acoustics: Models and Reproducible Research II
Alan Kan, Cochair
University of Wisconsin-Madison, 1500 Highland Ave., Madison, WI 53705
Piotr Majdak, Cochair
Acoustics Research Institute, Austrian Academy of Sciences, Wohllebengasse 12-14, Wien 1040, Austria
Invited Papers
2p MON. PM
1:20
2pPPa1. Development and dissemination of computational models for physiology and psychophysical predictions. Laurel H.
Carney (Univ. of Rochester, 601 Elmwood Ave., Box 603, Rochester, NY 14642, Laurel.Carney@Rochester.edu)
This talk will present lessons from our experience in publishing and sharing computational models for physiological and psychophysical responses. Our physiological models have included rather comprehensive and nonlinear phenomenological descriptions of auditory-nerve responses to complex sounds, and simpler linear models for brainstem and midbrain responses. Using ensembles of single-neuron models to estimate population responses enables psychophysical predictions based on different aspects of sub-cortical representations. Examples of psychophysical models that we have pursued include level discrimination and diotic and dichotic masked detection
of tones. Some of the challenges inherent in this type of work will be discussed. [Work supported by NIH-R01-001641 & -010813.]
1:40
2pPPa2. From physiology to functional auditory-nerve models: Challenges and approaches. Sarah Verhulst (Ghent Univ.,
Technologiepark 15, Zwijnaarde 9052, Belgium, s.verhulst@ugent.be)
A variety of auditory-nerve models, as well as a vast amount of animal single-unit and population response data that can be used to
set the parameters of such models, exists. However, it is hard to evaluate different model implementations from published data to decide
whether the specific implementation is appropriate for your envisioned application. In this presentation, I will give an experience-based
overview of the challenges faced when evaluating the model parameters in auditory-nerve models. The adopted approach uses the
available computer code of the different models under test, and compares their responses to the same input. This method is very efficient
in testing the influence of changing one specific part of the model while leaving the rest unchanged, and can ultimately yield improved
functional models of the auditory periphery.
2:00
2pPPa3. Simulation model for interaural time difference discrimination for tones. William M. Hartmann (Phys. and Astronomy,
Michigan State Univ., Physics-Astronomy, 567 Wilson Rd., East Lansing, MI 48824, hartmann@pa.msu.edu) and Andrew Brughera
(Hearing Res. Ctr., Boston Univ., Boston, MA)
Difference limens for the interaural time difference (ITD) can be measured with a 2-interval, 2-alternative forced-choice staircase
using the ITD as the staircase variable. Experimental results can be predicted by a computational model that simulates the experiment
protocol in every important detail, as applied to human listeners, but the computer can tolerate hundreds of times more runs. At the core
of the simulation is a decision process based on an opponency model for the two medial superior olives (MSO). MSO firing rates as
functions of ITD are initially determined by a stochastically driven Hodgkin-Huxley cell model and represented in the simulation by a
four-parameter fitted function. A corresponding noise function is estimated from multiple runs of the cell model. Left-right symmetry in
both the model and the experiment protocol simplifies the calculations. Simulations have practical value in relating staircase thresholds
to the underlying parameterized firing rate functions for given staircase variables, especially the initial ITD and the step size. Understanding this relationship is critical for the design and evaluation of experiments at low tone frequencies where thresholds grow to span
a wide range. [Work supported by the AFOSR and ARCLP.]
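An experiment-level simulation of this kind can be sketched generically as a 2-down/1-up staircase run against a simulated observer; the Gaussian psychometric function, step rule, and parameter values below are illustrative stand-ins, not the MSO opponency model of the abstract.

```python
import math
import random

def simulate_staircase(sigma, start_itd=200.0, step=20.0, n_reversals=8, seed=0):
    """Run one simulated 2-interval 2AFC staircase on interaural time
    difference (ITD, in microseconds).

    The simulated listener answers correctly with probability
    p(ITD) = 0.5 + 0.5*erf(ITD / (sigma*sqrt(2))); the 2-down/1-up rule
    tracks ~70.7% correct. Returns the mean of the late reversal points
    as the threshold estimate.
    """
    rng = random.Random(seed)
    itd, n_correct, going_down, reversals = start_itd, 0, True, []
    while len(reversals) < n_reversals:
        p = 0.5 + 0.5 * math.erf(itd / (sigma * math.sqrt(2)))
        if rng.random() < p:
            n_correct += 1
            if n_correct == 2:              # two correct in a row: harder
                n_correct = 0
                if not going_down:
                    reversals.append(itd)   # direction change: reversal
                going_down = True
                itd = max(itd - step, 0.0)
        else:
            n_correct = 0                   # one wrong: easier
            if going_down:
                reversals.append(itd)
            going_down = False
            itd += step
    return sum(reversals[2:]) / (n_reversals - 2)
```

Running many such simulated staircases relates the distribution of staircase thresholds to the assumed underlying function, the initial ITD, and the step size, which is the relationship the abstract highlights.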
2:20
2pPPa4. Reproducing response characteristics of electrically-stimulated auditory nerve fibers with a phenomenological model.
Marko Takanen and Bernhard U. Seeber (Audio Information Processing, Tech. Univ. of Munich, Arcisstrasse 21, Munich 80333,
Germany, marko.takanen@tum.de)
Electrical stimulation of the auditory nerve fibers (ANFs) by a cochlear implant (CI) restores hearing for profoundly deaf people.
Modern CIs use sequences of amplitude-modulated charge-balanced pulses to encode the spectro-temporal information of the sound
reaching the ears of the listener. In such a pulsatile stimulation, several temporal phenomena related to inter-pulse interactions affect the
responsiveness of the ANF during the course of the stimulation. Specifically, refractoriness, facilitation, accommodation, and spike-rate
adaptation affect whether a given pulse evokes an action potential or not, and these phenomena continue to provide challenges for computational models. Here, we present a model that builds on the recent biphasic leaky integrate-and-fire model by Horne et al. (Front.
Comput. Neurosci. 2016), which we have extended to include elements that simulate refractoriness and facilitation/accommodation by
affecting the threshold value of the model momentarily after supra- and subthreshold stimulation, respectively. We show that the revised
model can reproduce neurophysiological data from single neuron recordings considering the aforementioned phenomena. By accurate
modeling of temporal aspects of inter-pulse interactions, the model is shown to account also for effects of pulse rate on the synchrony
between the pulsatile input and the spike-train output. [Work supported by BMBF 01 GQ 1004B.]
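The threshold-modulation mechanism described above can be caricatured with a toy point-process sketch; the constants, time scales, and update rules are illustrative, not those of the extended Horne et al. model.

```python
import math

def simulate_anf(pulse_times, pulse_amps, base_threshold=1.0,
                 refractory_tau=0.5e-3, facilitation_tau=0.25e-3,
                 facilitation_gain=0.3):
    """Toy sketch of pulse-train responses of an electrically
    stimulated auditory nerve fiber.

    Each pulse fires iff its amplitude exceeds a momentary threshold:
    a spike transiently raises the threshold (refractoriness), while a
    subthreshold pulse transiently lowers it (facilitation); both
    effects decay exponentially between pulses.
    """
    spikes = []
    thr_offset = 0.0   # refractory threshold elevation
    fac = 0.0          # facilitatory threshold decrease
    last_t = None
    for t, a in zip(pulse_times, pulse_amps):
        if last_t is not None:
            dt = t - last_t
            thr_offset *= math.exp(-dt / refractory_tau)
            fac *= math.exp(-dt / facilitation_tau)
        last_t = t
        if a >= base_threshold + thr_offset - fac:
            spikes.append(t)
            thr_offset += 2.0 * base_threshold  # refractory elevation
            fac = 0.0
        else:
            fac += facilitation_gain * a        # subthreshold facilitation
    return spikes
```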
2:40
2pPPa5. Modeling sound externalization based on listener-specific spectral cues. Robert Baumgartner (Acoust. Res. Inst., Austrian
Acad. of Sci., Wohllebengasse 12-14, Vienna 1040, Austria, robert.baumgartner@oeaw.ac.at), Piotr Majdak (Acoust. Res. Inst.,
Austrian Acad. of Sci., Wien, Austria), H. Steven Colburn, and Barbara Shinn-Cunningham (Dept. of Biomedical Eng., Boston Univ.,
Boston, MA)
Sound sources in natural environments are usually perceived as externalized auditory objects located outside the head. In contrast,
when listening via headphones or hearing-assistive devices, sounds are often heard inside the head, presumably because they are filtered
in a way inconsistent with normal experience. Previous results suggest that high-frequency spectral cues arising from the listener-specific filtering by the pinnae are particularly important for sound externalization, but this has not been confirmed in a quantitative perceptual model yet. Here, we present a model designed to predict sound externalization related to the spectral-cue salience in free field. The
modeling results are compared to results from various behavioral studies testing the effect of low-pass filtering, non-individualized
head-related transfer functions, and behind-the-ear microphone casing in hearing-assistive devices. We will discuss the limitations of
previous experimental designs and existing modeling approaches, including fundamental issues of model fitting.
3:00
2pPPa6. Predicting binaural lateralization and discrimination using the position-variable model. Richard M. Stern (Elec. and
Comput. Eng., Carnegie Mellon Univ., 5000 Forbes Ave., Pittsburgh, PA 15213, rms@cs.cmu.edu)
The position-variable model was developed as a means to characterize and predict a variety of binaural lateralization, discrimination,
and detection phenomena. The model was motivated by a desire for a more complete understanding of the putative mechanisms by
which interaural time and intensity differences were combined, as well as the extent to which results in interaural discrimination and binaural detection experiments are mediated by cues based on subjective lateral position. This paper will describe recent efforts to unify
and extend the predictions of the model, as well as to develop a publicly accessible version of the model within the framework for comparing and evaluating binaural models described by Dietz et al. in this session. Predictions of the model are based on computation of the
centroid along the internal-delay axis of the patterns of activity of the display of information proposed earlier by Jeffress and Colburn,
derived from the auditory-nerve response to the experimental stimuli. Some of the issues to be discussed include comparisons to other
proposed methods of developing lateralization predictions, the impact of internal versus external noise in the model’s predictions, and
specific issues involved with modifying the model to render it compatible with the common framework for model comparison.
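The centroid computation can be illustrated with a bare-bones sketch of a Jeffress-style cross-correlation display; the centrality weighting and its width are illustrative assumptions, and the auditory-nerve front end of the actual model is omitted.

```python
import numpy as np

def lateral_position(left, right, fs, max_itd=1e-3):
    """Centroid along the internal-delay axis of an interaural
    cross-correlation pattern, in seconds (negative when the right
    channel lags, i.e., the image sits toward the left)."""
    max_lag = int(max_itd * fs)
    lags = np.arange(-max_lag, max_lag + 1)
    # circular cross-correlation over candidate internal delays
    cc = np.array([np.dot(left, np.roll(right, lag)) for lag in lags])
    act = np.clip(cc, 0.0, None)                       # excitatory activity only
    act *= np.exp(-0.5 * (lags / (0.6e-3 * fs)) ** 2)  # centrality weighting
    return float(np.sum(lags * act) / np.sum(act)) / fs
```

For a noise token with a pure interaural delay, the centroid lands on the side of the leading ear and shrinks toward the midline as the centrality weighting narrows, which is the kind of position cue the model maps onto lateralization judgments.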
3:20–3:40 Break
3:40
2pPPa7. Reproducible psychoacoustic experiments and computational perception models in a modular software framework.
Stephan D. Ewert (Medizinische Physik and Cluster of Excellence Hearing4All, Universität Oldenburg, Carl-von-Ossietzky Str. 9-11,
Oldenburg 26129, Germany, Stephan.ewert@uni-oldenburg.de) and Torsten Dau (Hearing Systems Group, Dept. of Elec. Eng., Tech.
Univ. of Denmark, Lyngby, Denmark)
Psychoacoustic experiments and auditory models are fundamental elements of hearing research helping to understand human auditory perception. One successful way to apply models has been to use the model as an artificial observer, performing exactly the same psychoacoustic experiment as human subjects [e.g., Jepsen et al., J. Acoust. Soc. Am. 124, 422 (2008)]. While the signal processing parts of
this and other models are publicly available, reproducible research requires availability of the complete framework including stimulus
generation, experimental procedure, and interface to the model. For this, AFC for Matlab/Octave [www.aforcedchoice.com] provides a
free and highly flexible framework to design and run psychoacoustic measurements with subjects and computer models. Previous versions of AFC have been used for nearly two decades in several highly ranked psychoacoustic research sites. To foster reproducible
research, AFC offers full downward compatibility with the very first version, and the ability to easily overload or add measurement procedures, audio drivers, and models/model interfaces. Here, a new version is presented with the above model as a use case. A database of psychoacoustic experiments from numerous publications is established, providing the stimulus generation, methods, and models needed to exactly reproduce the original work for teaching and research.
4:00
2pPPa8. An initiative for testability and comparability of binaural models. Mathias Dietz (National Ctr. for Audiol., Western Univ.,
1201 Western Rd., London, ON N6G 1H1, Canada, mdietz@uwo.ca), Torsten Marquardt (UCL Ear Inst., London, United Kingdom),
Piotr Majdak (Oesterreichische Akademie der Wissenschaften, Wien, Austria), Richard M. Stern (Carnegie Mellon Univ., Pittsburgh,
PA), William M. Hartmann (Michigan State Univ., East Lansing, MI), Dan F. Goodman (Imperial College, London, United Kingdom),
and Stephan D. Ewert (Universitaet Oldenburg, Oldenburg, Germany)
A framework aimed at improving the testability and comparability of binaural models will be presented. The framework consists of
two key elements: (1) a repository of testing software that evaluates the models against published data and (2) a model repository. While
the framework is also intended for physiological data, the planned initial contribution will be psychoacoustical data together with their
psychoacoustical testing protocols, as well as existing binaural models from available auditory toolboxes. Researchers will be invited to
provide their established as well as newly developed models in whatever programming language they prefer, provided the models are compatible with the proposed interface to the testing software. This entails that the models act as artificial observers, testable with exactly the same procedure as the human subjects. A simple communication protocol based on wav and txt files is proposed because these are supported by every programming environment and can connect models and testing software written in any programming language. Examples will illustrate the principle of testing models with unaltered signal processing stages on various seminal data sets such as tone detection in so-called double-delayed masking noise, or lateralization of 3/4-period delayed noise and sounds with temporally asymmetric
envelopes.
4:20
2pPPa9. Resource sharing in a collaborative study on cochlear synaptopathy and suprathreshold-hearing deficits. Hari M.
Bharadwaj, Jennifer M. Simpson, and Michael G. Heinz (Speech, Lang., and Hearing Sci. & Biomedical Eng., Purdue Univ., 715 Clinic
Dr., Lyles-Porter Hall, West Lafayette, IN 47907, hbharadwaj@purdue.edu)
Evidence from animal models of substantial noise-induced cochlear synaptopathy even in the absence of measurable audiometric
changes has led to an active debate over whether such damage occurs in humans, and whether it contributes to suprathreshold-hearing
deficits. Addressing these fundamental and translational questions requires multi-disciplinary approaches that integrate widely ranging
forms of data and analyses, e.g., animal/human/model, evoked/single-unit/behavioral, and lab/clinical. Furthermore, connecting results
across research groups around the world working on various species requires a systematic approach to resource sharing that will promote
rigor and reproducibility. Here, we describe our efforts and plans to share resources from a large-scale collaborative project on noise-induced synaptopathy that links single-unit, evoked, and behavioral data from chinchillas with evoked, behavioral, and imaging data
from humans studied in the laboratory and in the clinic. In addition to using modular implementations of stimulation paradigms, computational models, and analysis tools in high-level languages, we adopt open-access resource repositories and integrated platform-independent tools for version control, distributed development, documentation, and testing. Such resource sharing will help expedite
answering the question of whether the anatomical/physiological effects seen in smaller animal models are present and perceptually significant in humans.
4:40
2pPPa10. Open community platform for hearing aid algorithm research. Hendrik Kayser (Medizinische Physik and Cluster of
Excellence H4a, Carl von Ossietzky Universität Oldenburg, Ammerlaender Heerstrasse 114-118, Oldenburg D-26111, Germany, hendrik.kayser@uol.de), Tobias Herzke, Frasher Loshaj (HörTech gGmbH, Oldenburg, Germany), Giso Grimm, and Volker Hohmann (Medizinische Physik and Cluster of Excellence H4a, Carl von Ossietzky Universität Oldenburg, Oldenburg, Germany)
The project “Open community platform for hearing aid algorithm research” funded by the National Institutes of Health (NIH Grant
1R01DC015429-01) aims at sustainable, focused research toward improvement and new types of assistive hearing systems. To this end,
an open-source software platform for real-time audio signal processing will be developed and made available to the research community
including a standard set of reference algorithms. Furthermore, novel algorithms for dynamic and frequency compression, auditory-scene-analysis-based noise suppression and speech enhancement, and feedback management will be investigated. For a realistic assessment of the benefits of hearing aid algorithms and combinations thereof, instrumental measures of performance in virtual acoustic environments of varying complexity will be included in the algorithm design and optimization. With such a quasi-standard set of benchmarks and the means to develop and integrate one's own signal-processing methods and measures in the same framework, the platform enables reproducible, comparative studies and collaborative research efforts. Beyond an implementation for PC hardware, the system will also be made usable for ARM-processor-based hardware to allow pre-development of wearable audio devices, so-called "hearables." This contribution will present the underlying previous work as well as the goals and plans of the project, which started in mid-2016. www.openMHA.org.
5:00
2pPPa11. Open science in the Two!Ears project—Experiences and best practices. Hagen Wierstorf (Audiovisual Technol. Group,
Technische Universität Ilmenau, Ehrenbergstraße 29, Ilmenau 98693, Germany, hagen.wierstorf@posteo.de), Fiete Winter, and Sascha
Spors (Inst. of Communications Eng., Univ. Rostock, Rostock, Germany)
Two!Ears was an EU-funded project for binaural auditory modeling with ten international partners involved. One of the project goals
was to follow an Open Science approach in all stages. This turned out to be a challenging task as the project involved huge amounts of
software, acoustical measurements, and data from listening tests. On the other hand, it was obvious from the positive experience with
the Auditory Modelling Toolbox that an Open Science approach would have a positive impact and foster progression afterwards. As
there existed no ready solution to achieve this goal at the beginning of the project, different paths for data management were tested. It
was especially challenging to provide a solution for data storage. Here, the goal was not only the long-term accessibility of the data, but
also the revision control of public and private data for the development inside the project. In the end, the project was able to make most
of its software and data publicly available, but struggled to apply the reproducible research principle to most of its papers. This contribution will discuss best practices to actively support reproducible research in large-scale projects in the acoustics community and point out problems and solutions.
5:20
2pPPa12. A library of real-world reverberation and a toolbox for its analysis and measurement. James Traer and Josh McDermott
(Brain and Cognit. Sci., MIT, 77 Massachusetts Ave., Cambridge, MA 02139, jtraer@mit.edu)
Reverberation distorts the sounds produced in the world, but in doing so provides information about the environment. This distortion
is characterized by the Impulse Response (IR) and depends upon the material and geometry composing the environment. For some tasks
(voice recognition, source localization, acoustic tomography, etc.), reverberation must be discounted, but for others (room identification,
distance estimation, etc.), it must be analyzed. Our recent work on the perception of reverberation has leveraged measurements of real-world IRs, which exhibit a number of regularities that the brain appears to have internalized for auditory scene analysis. Here, we present
a library of these measurements and a toolbox for measuring and analyzing additional IRs. The library contains 271 IRs from spaces
encountered by 7 volunteers over 2 weeks of daily life, and thus reflects the distribution of typical reverberation experienced by humans.
The toolbox includes procedures for measuring IRs with low-cost, portable equipment and a low-volume broadcast, which allows measurement in both public and outdoor spaces. Both the library and toolbox are publicly available.
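A common procedure for low-volume, low-cost IR measurement of this kind is exponential-sweep deconvolution (Farina's method); the sketch below is a generic version and not necessarily what the toolbox implements.

```python
import numpy as np

def exp_sweep(f1, f2, duration, fs):
    """Exponential sine sweep from f1 to f2 Hz plus its inverse filter
    (time-reversed sweep with the sweep's energy tilt compensated)."""
    t = np.arange(int(duration * fs)) / fs
    r = np.log(f2 / f1)
    sweep = np.sin(2 * np.pi * f1 * duration / r
                   * (np.exp(t * r / duration) - 1.0))
    inverse = sweep[::-1] * np.exp(-t * r / duration)
    return sweep, inverse

def deconvolve_ir(recording, inverse):
    """Convolving the room recording of the sweep with the inverse
    filter collapses the sweep to an impulse: the room IR (normalized
    to unit peak)."""
    ir = np.convolve(recording, inverse)
    return ir / np.max(np.abs(ir))
```

A useful property of the method is that harmonic distortion products separate out ahead of the linear impulse, so a clean IR can be excised even with imperfect loudspeakers.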
Contributed Paper
5:40
2pPPa13. The open virtual auditory-localization environment: Towards
a common methodology for objectively evaluating head-related transfer
function personalization methods. Griffin D. Romigh (Air Force Res.
Labs, 2610 Seventh St., Area B, Bldg. 441, Wright Patterson AFB, OH
45433, griffin.romigh@us.af.mil) and Jason Ayers (Ball Aerosp., Dayton,
OH)
Despite the fact that individualized head-related transfer functions
(HRTFs) are critical for achieving high-fidelity virtual audio representation,
the techniques for measuring them, which have been around for decades,
are too costly for most potential users. As such, many strategies have been
proposed that aim to improve virtual audio fidelity by personalizing a non-individualized HRTF based on input (or anthropometric information) from
the user. Unfortunately, evaluations of these methodologies have varied
widely from purely computational to purely subjective, making comparisons
across studies or to objective behavioral performance metrics difficult. The
current work presents the Open Virtual Auditory-Localization Environment
(OpenVALE), a software toolkit for providing a common, objective, HRTF-based auditory localization task via new, relatively low-cost, commercial
VR headsets. The heart of OpenVALE is a server application that allows
researchers to dynamically load custom HRTFs, present spatialized auditory
stimuli, collect hand- or head-slaved, cursor-based localization responses,
and provide visual feedback, all through simple string-based IP socket messages from any compatible client application (e.g., Matlab, Java, Python,
etc.). An initial validation of the task environment, based on individualized
HRTF measurements, will be described along with a discussion of the
remaining challenges for creating an accepted standard methodology.
MONDAY AFTERNOON, 26 JUNE 2017
BALLROOM B, 1:20 P.M. TO 4:20 P.M.
2pPPb
Psychological and Physiological Acoustics: Hearing Aiding, Protection, and Speech Perception
Valeriy Shafiro, Chair
Communication Disorders & Sciences, Rush University Medical Center, 600 S. Paulina Str., AAC 1012, Chicago, IL 60612
Contributed Papers
1:20

2pPPb1. An investigation of passive type hearing protection in humans and animals by their auricles by diverting natural drainage of rain water along facial features into the ear canal. Amitava Biswas (Speech and Hearing Sci., Univ. of Southern MS, 118 College Dr. #5092, USM-CHS-SHS, Hattiesburg, MS 39406-0001, Amitava.Biswas@usm.edu)

Many textbooks on the human auditory system begin with the external ear and basically describe the auricle as a collector of sound. In this study, we have explored the utility of the auricle in protecting the external ear canal from environmental factors such as rain showers or sand storms. A model of a human head of typical dimensions was held in an upright position inside a bathtub. A shower head was positioned directly above the model head. One ear of the model was sliced off. Each ear canal was internally provided with a tube of about 8 mm diameter connected to a collecting bottle of about 500 cc capacity. The shower head output was about eight liters of water per minute. The results suggest that the impaired ear is more vulnerable to water entry compared to the unimpaired ear of the model. Therefore, another vital necessity in the evolution of the external ear may have been environmental protection, in addition to the collection of sound energy. Generalization of the data to other common animals will be discussed.

1:40

2pPPb2. Auditory protection, observation vs standards. Gerald Fleischer (Justus-Liebig-Univ., Hoehenstr. 18, Giessen 35466, Germany, gerald.fleischer@gmx.net)

A summary of more than two decades of research on the relationship between the acoustic environment and the auditory threshold. Special groups have been examined: professional musicians, dentists, fans and avoiders of discotheques, office personnel, nomadic people, etc. Persons who suffered from noise-induced damage have been examined, and the acoustic conditions, mostly impulses, reenacted and analyzed. Depending on the pressure-time history, impulses show several types of characteristic damage (footprints). These types of damage can be determined automatically, using pattern recognition, if more frequencies are used for audiometry. Evidence for training of the auditory system is widespread. A cochlear mechanism for reducing sensitivity very rapidly appears likely and is presented and discussed. Middle-ear muscles are responsible for auditory accommodation: listening to events nearby while suppressing noise from a distance. They are not helpful for avoiding the typical damage caused by noise. Using these parameters gives a much better understanding of what is harmful to hearing. This is helpful for avoiding such conditions. Comparing the auditory threshold between various groups, independent of age, reveals what is good for hearing.
2:00

2pPPb3. Attenuation of dual hearing protection: Measurements and finite-element modeling. Hugues Nelisse, Franck C. Sgard (IRSST, 505 Blvd. De Maisonneuve Ouest, Montreal, QC H3A 3C2, Canada, hugues.nelisse@irsst.qc.ca), Marc-Andre Gaudreau (Cegep de Drummondville, Montreal, QC, Canada), and Thomas Padois (Mech. Eng., Ecole de Technologie Superieure (ETS), Montreal, QC, Canada)
In extremely noisy environments, it is normally recommended to use a
combination of earplug and earmuff, denoted here as a dual protection, to
protect workers from the excessive noise. Unfortunately, it has been shown
repeatedly that the attenuation values obtained with the dual protection are
generally less than the sum of the individual earplug and earmuff attenuation values. In the literature, this is generally explained by the bone conduction path and the coupling between the earplug and the earmuff. However,
there is much less work devoted to examining in detail the coupling between
the earplug and the earmuff. In this work, experimental results on human
subjects and on an artificial test fixture are collected using REAT and MIRE
procedures for different combinations of earmuffs and earplugs. Additionally, a finite-element model is used to investigate the physics of the problem
and to better understand the nature of the coupling between the earplug and
the earmuff when used in a dual configuration. Results from the experimental procedures as well as from the FE model are presented and discussed.
These results clearly illustrate the importance of the coupling between the
earplug and the earmuff when used in combination.
2:20
2pPPb4. Correlation of flow field and acoustic output from hearing aids
under influence of wind. Florian Zenger, Linda Gerstner (Inst. of Process
Machinery and Systems Eng., Friedrich-Alexander Univ. Erlangen-Nürnberg, Cauerstr. 4, Erlangen 91058, Germany, ze@ipat.uni-erlangen.de), Alexander Lodermeyer, and Stefan Becker (Inst. of Process Machinery and Systems Eng., Friedrich-Alexander Univ. Erlangen-Nürnberg, Erlangen, Bavaria, Germany)
Wind noise in hearing aids occurs even at low wind speeds and is a confounding factor for hearing aid wearers, leading to a reduction of speech intelligibility. In this submission, the flow field around a hearing aid is correlated with its acoustic output. The BTE
(behind the ear) hearing aid is mounted on an artificial head with three different ear geometries. The flow field is captured using a two component PIV
(particle image velocimetry) system. For exposing critical flow phenomena,
a POD (proper orthogonal decomposition) of the PIV measurement data is
made. The hearing aid output is measured with a microphone inside the artificial head. On the one hand, wind noise in hearing aids is generated by the
fluctuating velocity field of the boundary layer on the hearing aid. On the
other hand, based on the PIV data and the POD results, flow patterns around
the artificial head and the hearing aid are detected, which cause further noise that is captured by the hearing aid microphones. With these findings, modifications to the hearing aid geometry are deduced that lead to a decrease in wind noise and hence to better speech intelligibility.
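In its simplest snapshot form, the POD used here reduces to a singular value decomposition of the mean-subtracted PIV snapshot matrix; the sketch below shows that generic method, not the authors' processing chain.

```python
import numpy as np

def pod_modes(snapshots, n_modes):
    """Proper orthogonal decomposition of PIV snapshots via SVD.

    `snapshots` is (n_points, n_snapshots): each column is one velocity
    field. The temporal mean is removed, and the left singular vectors
    are the spatial POD modes, ranked by their relative energy."""
    fluct = snapshots - snapshots.mean(axis=1, keepdims=True)
    U, s, _ = np.linalg.svd(fluct, full_matrices=False)
    energy = s ** 2 / np.sum(s ** 2)   # relative modal energy
    return U[:, :n_modes], energy[:n_modes]
```

Inspecting the few most energetic modes is what exposes the dominant, coherent flow patterns around the head and the hearing aid.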
2:40
2pPPb5. Measurements of the acoustic feedback path of hearing aids on
human subjects. Tobias Sankowsky-Rothe and Matthias Blau (Inst. of
Hearing Technol. and Audiol., Jade Hochschule WOE, Ofener Straße 16-19, Oldenburg, Niedersachsen 26121, Germany, Tobias.Sankowsky@jade-hs.de)
Feedback is a problem in hearing aids that causes signal degradation and reduces the maximum applicable gain. More specifically, the advantages of open fittings (e.g., minimizing the occlusion effect) are limited by
acoustic feedback. Feedback cancelation algorithms are used to overcome
these limitations. For the development of such algorithms, the acoustic feedback path of the hearing aid must be known. The acoustic feedback path is
not only affected by the outer sound field but by the individual anatomy and
physiology as well. In order to quantify these different influences, feedback
path measurements were performed on 20 human subjects. The measurements included different static conditions as well as dynamic ones (i.e.,
repetitive movements were performed during the measurement). Since the
sound pressure level must be limited in measurements on human subjects, a
valid identification of the feedback path is difficult in many cases, due to a
low signal to noise ratio. Therefore, all measurements were performed reciprocally in addition to the direct measurements. Results show that yaw movements only have a small influence on the acoustic feedback path, whereas
changes of the outer sound field can have a substantial impact on the feedback path.
3:00
2pPPb6. How to improve a hearing aid fitting based on idiosyncratic
consonant errors. Ali Abavisani and Jont B. Allen (ECE, Univ. of Illinois
at Urbana-Champaign, 405 N Mathews Ave., Rm. 2137, Urbana, IL 61801,
aliabavi@illinois.edu)
The goal of this study is to quantify a given hearing aid insertion gain
using a consonant recognition based measure, for ears having sensorineural
hearing loss. The basic question addressed is how a treatment impacts phone
recognition, relative to a normal-hearing insertion gain. These tests are
directed at (1) fine-tuning a treatment, with the ultimate goal of improving speech perception, and (2) identifying when a hearing-level-based treatment
degrades speech recognition. Eight subjects with hearing loss were tested
under two conditions: Flat-gain and a Treatment insertion gain based on
subject’s hearing level. The speech corpus consisted of consonant-vowel
tokens at different signal-to-speech-weighted-noise ratio (SNR) conditions, presented at subjects’ most comfortable level. The tokens used in this study
were selected from those having less than 3% error at -2 [dB] SNR, from 30
Normal Hearing subjects. The Treatment caused token scores to improve for 31% of the trials and to decrease for 12%. An analysis method was devised to
identify degraded tokens for individual hearing impaired ears, based on sorting the tokens according to their error. By comparing sorted errors across
experiments, the effect of the treatment could be accurately evaluated, providing precise characterization of idiosyncratic phone recognition.
3:20
2pPPb7. Consistency and variation in recognition of text and speech
interrupted at variable rates. Valeriy Shafiro (Commun. Disord. & Sci.,
Rush Univ. Medical Ctr., 600 S. Paulina Str., AAC 1012, Chicago, IL
60612, valeriy_shafiro@rush.edu), Daniel Fogerty (Commun. Sci. and
Disord., Univ. of South Carolina, Columbia, SC), and Kimberly Smith
(Speech Pathol. and Audiol., Univ. of South Alabama, Mobile, AL)
Recent research indicates that recognition of interrupted text can predict
speech intelligibility under adverse listening conditions. However, factors
underlying the relationship between perceptual processing of speech and
text are not fully understood. We examined contributions of underlying linguistic and perceptual structure by comparing recognition of printed and
spoken sentences interrupted at different rates (0.5–64 Hz) in 14 normal-hearing adults. The interruption method approximated deletion and retention of rate-specific linguistic information across the two modalities by substituting white space for silent intervals. Results indicate a remarkably similar U-shaped pattern of cross-rate variation for both modalities, with minima at 2
Hz. Nevertheless, at high and low interruption rates text recognition
exceeded speech recognition, while the reverse trend was observed at middle rates. Surprisingly, no significant correlations were obtained in recognition accuracy between text and speech conditions. These findings indicate a
high degree of perceptual constancy in recognition of interrupted text and
speech, which may rely on rate-specific linguistic and perceptual information retained after the interruptions. On the other hand, results
also indicate rate-specific variation in perceptual processing of text and
speech, which may potentially affect the degree to which recognition accuracy in one modality is predictive of the other.
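The interruption method (alternating retained and silenced intervals, with white space substituted for silence in text) can be made concrete with a short sketch. This is a toy illustration, not the authors' stimulus code; the 50% duty cycle and the character rate used to map text onto time are assumed parameters.

```python
def interrupt_text(text, rate_hz, chars_per_sec=15.0, duty=0.5):
    """Square-wave interruption of printed text: characters falling in the
    'off' phase of each cycle are replaced with white space, mirroring the
    silent intervals used for interrupted speech."""
    out = []
    for i, ch in enumerate(text):
        t = i / chars_per_sec            # nominal time of this character
        phase = (t * rate_hz) % 1.0      # position within the current cycle
        out.append(ch if phase < duty else " ")
    return "".join(out)

def interrupt_signal(samples, rate_hz, fs, duty=0.5):
    """The same gating applied to a sampled waveform: 'off' samples become silence."""
    return [s if ((n / fs) * rate_hz) % 1.0 < duty else 0.0
            for n, s in enumerate(samples)]
```

At a 1-Hz interruption rate and an assumed 10 characters per second, each one-second cycle keeps the first five characters and blanks the next five.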
3:40
2pPPb8. Predicting consonant recognition and confusions using a
microscopic speech perception model. Johannes Zaar and Torsten Dau
(Dept. of Elec. Eng., Tech. Univ. of Denmark, Ørsteds Plads, Bldg. 352,
Kgs. Lyngby 2800, Denmark, jzaar@elektro.dtu.dk)
The perception of consonants has been investigated in various studies
and shown to critically depend on fine details in the stimuli. The present
study proposes a microscopic speech perception model that combines an auditory processing front end with a correlation-based template-matching back
end to predict consonant recognition and confusions. The model represents
an extension of the auditory signal processing model by Dau et al. [(1997),
J. Acoust. Soc. Am. 102, 2892-2905] toward predicting microscopic speech
perception data. Model predictions were computed for the extensive consonant perception data set provided by Zaar and Dau [(2015), J. Acoust. Soc.
Am. 138, 1253-1267], obtained with consonant-vowels (CVs) in white
noise. The predictions were in good agreement with the perceptual data both
in terms of consonant recognition and confusions. The model was further
evaluated with respect to perceptual artifacts induced by (i) different hearing-aid signal processing strategies and (ii) simulated cochlear-implant
processing, based on data from DiNino et al. [(2016), J. Acoust. Soc. Am.,
140, 4404-4418]. The model successfully predicted the strong consonant
confusions measured in these conditions. Overall, the results suggest that
the proposed model may provide a valuable framework for assessing acoustic transmission channels and hearing-instrument signal processing.
4:00
2pPPb9. A fast test for determining the edge frequency of a dead
region. Josef Schlittenlacher (Experimental Psych., Univ. of Cambridge,
Downing Site, Cambridge CB2 3EB, United Kingdom, js2251@cam.ac.uk),
Richard E. Turner (Eng., Univ. of Cambridge, Cambridge, United
Kingdom), and Brian C. Moore (Experimental Psych., Univ. of Cambridge,
Cambridge, United Kingdom)
The presence and edge frequency, fe, of a dead region in the cochlea can
be diagnosed using psychophysical tuning curves (PTCs). When the signal
frequency, fs, falls in a dead region, the tip of the PTC lies close to fe, rather
than close to fs. However, measurement of PTCs is time consuming, limiting their application in clinical practice. We have developed a fast test based
on Bayesian active learning. Instead of estimating an entire PTC, we estimate parameters of an individual hearing model, including fe. The task is to
detect a fixed signal in the presence of a masker whose level and frequency
vary across trials. After each trial, the next masker level and frequency are
chosen to produce maximum reduction of the uncertainty about the parameters. The results for four participants tested so far were close to those
obtained using “fast” PTCs. The Bayesian procedure has two advantages
compared to PTCs: it allows quantification of the reliability of the subjects,
estimated from the standard deviation of a cumulative Gaussian fitted to the
psychometric function; and masker levels and frequencies can be restricted
to being close to the estimated minimum of the PTC, avoiding unnecessary
presentation of high level sounds.
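The trial-selection rule at the heart of such a procedure can be illustrated with a toy model. Everything below is an assumption for demonstration only: a one-parameter grid of candidate edge frequencies fe and a crude sigmoid likelihood stand in for the authors' multi-parameter hearing model. The point is the Bayesian active-learning criterion: choose the next masker that minimizes the expected entropy of the posterior.

```python
import math

def entropy(p):
    """Shannon entropy (nats) of a discrete distribution."""
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

def likelihood(resp, fm, fe, slope=0.02):
    """Toy psychometric model (an assumption, not the authors' model):
    the probability that the masker is effective (resp = 1) rises smoothly
    as the masker frequency fm falls below the edge frequency fe."""
    p_masked = 1.0 / (1.0 + math.exp(-slope * (fe - fm)))
    return p_masked if resp == 1 else 1.0 - p_masked

def posterior_update(prior, grid, fm, resp):
    """Bayes rule on the fe grid after one trial."""
    post = [p * likelihood(resp, fm, fe) for p, fe in zip(prior, grid)]
    z = sum(post)
    return [p / z for p in post]

def next_masker(prior, grid, candidates):
    """Pick the masker frequency whose expected posterior entropy is lowest
    (maximum expected reduction of uncertainty about the parameters)."""
    best, best_h = None, float("inf")
    for fm in candidates:
        h = 0.0
        for resp in (0, 1):
            p_resp = sum(p * likelihood(resp, fm, fe)
                         for p, fe in zip(prior, grid))
            if p_resp > 0:
                h += p_resp * entropy(posterior_update(prior, grid, fm, resp))
        if h < best_h:
            best, best_h = fm, h
    return best
```

Feeding the loop simulated responses from a listener with a true edge near 1500 Hz concentrates the posterior near that value within a few tens of trials.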
MONDAY AFTERNOON, 26 JUNE 2017
BALLROOM A, 1:20 P.M. TO 5:20 P.M.
2pPPc
Psychological and Physiological Acoustics: Localization, Binaural Hearing, and Cocktail Party
(Poster Session)
Eugene Brandewie, Chair
Research, GN ReSound, Department of Psychology, 75 E. River Rd., Minneapolis, MN 55455
All posters will be on display from 1:20 p.m. to 5:20 p.m. To allow contributors in this session to see the other posters, authors of
odd-numbered papers will be at their posters from 1:20 p.m. to 3:20 p.m. and authors of even-numbered papers will be at their posters
from 3:20 p.m. to 5:20 p.m.
Contributed Papers
2pPPc1. Investigating processing delay in interaural time difference
discrimination by normal-hearing children. Z. Ellen Peng, Taylor Fields,
and Ruth Litovsky (Waisman Ctr., Univ. of Wisconsin-Madison, 1500
Highland Ave., Madison, WI 53711, zpeng49@wisc.edu)
Recent work suggests that interaural time difference (ITD) thresholds
are adult-like by 8-10 years of age in children with normal hearing (NH). If
processing time is considered, however, we hypothesize that the ability to
successfully extract ITDs is not fully mature in children. A novel paradigm
was designed to simultaneously measure eye gaze with an eye tracker during an ITD discrimination task with mouse-click. Stimuli were 4 kHz transposed tones modulated at 128 Hz, tested with a 3-interval, 2-alternative
forced choice task (left- or right-leading ITDs) with feedback. During each
trial, gaze position on the computer screen was simultaneously recorded
from stimulus onset to the time when a mouse click indicated either a left or
right response. Processing times were extracted from the eye gaze data and
compared with those from young NH adults. This presentation will focus on
the developmental differences observed when threshold estimation is used
versus when processing time is assessed via eye gaze measures. Results
from this study will provide a better understanding of the developmental trajectory of binaural hearing abilities in NH children. [Work supported by
NIDCD (R01DC003083 and R01DC008365 to Ruth Litovsky).]
2pPPc2. Influence of spatial and non-spatial feature continuity on
cortical alpha oscillations. Golbarg Mehraei (Elec. Eng., Tech. Univ. of
Denmark, Reverdilsgade 10 3 TH, Copenhagen 1701, Denmark, gmehraei@
gmail.com), Barbara Shinn-Cunningham (Ctr. for Computational Neurosci.
and Neural Technol., Boston Univ., Boston, MA), and Torsten Dau (Elec.
Eng., Tech. Univ. of Denmark, Lyngby, Denmark)
In everyday environments, listeners face the challenge of parsing the
sound mixture reaching their ears into individual sources. Perceptual continuity of acoustic features, such as pitch or location, has an obligatory effect
on auditory attention even when a feature is not task relevant. Cortical alpha
oscillations (8-12 Hz) are thought to functionally inhibit the processing of
task-irrelevant information. Here, we hypothesize that discontinuities in a
task-irrelevant feature disrupt the attentional modulation of alpha rhythms.
Using electroencephalography in humans, we compare physiological measures during a selective auditory attention task where listeners were asked to
attend to either talker, based on gender (male or female) or location (left or
right). On half of the trials, a discontinuity was introduced in the task-irrelevant acoustic feature. When listeners attended to the talker, there was no
evidence of alpha power lateralization, and no effect of a discontinuity in
location. In contrast, when listeners attended to location, parieto-occipital
alpha power increased ipsilateral to the attended location; moreover, a
discontinuity in talker reduced alpha power and disrupted alpha lateralization. Our findings support the importance of parieto-occipital alpha in
suppressing sources when listeners focus spatial, but not non-spatial, attention, and show that task-irrelevant discontinuities affect these alpha
rhythms.
2pPPc3. Sound source localization as a multisensory process: The Wallach azimuth illusion. M. Torben Pastore and William Yost (Speech and Hearing, Arizona State Univ., 975 South Myrtle Ave., Tempe, AZ 85287, m.torben.pastore@gmail.com)

An auditory spatial illusion, introduced by Wallach (1939, 1940) and recently revisited by Brimijoin and Akeroyd (2012), occurs when both listeners and sounds rotate. Rotating a sound around the listener in the azimuth plane at twice the rate of listeners’ head turns can elicit the sensation of a static sound source located either in front of the listener when the sound is originally presented from behind, or behind the listener when the sound is originally presented in front. We investigated this auditory illusion when listeners were rotated at constant velocity in a rotating chair with eyes open, for bandpass noises presented from an azimuthal ring of 24 loudspeakers. For noises that were likely to generate front-back confusions, the illusion of a stationary sound was robust, especially for low-frequency sounds. On the contrary, for noises that were unlikely to produce front-back confusions, listeners reported the sound rotating around them on the azimuth plane. These observations are predicted by a simple model, based on Wallach’s original multisensory explanation, combined with estimates of the availability of spectral cues that may be used to disambiguate front/back confusions. [Partially supported by a grant from the National Institute on Deafness and Other Communication Disorders, NIDCD.]

2pPPc4. Identifying a perceptually relevant estimation method of the inter-aural time delay. Areti Andreopoulou (LIMSI, CNRS, Universite Paris-Saclay, Rue John von Neumann, Campus Universitaire d’Orsay, Bât 508, Orsay 91403, France, areti.andreopoulou@gmail.com) and Brian F. Katz (Lutheries - Acoustique - Musique, Inst. d’Alembert, UPMC/CNRS, Paris, France)

The Inter-aural Time Difference (ITD) is a fundamental cue for human sound localization. Over the past decades, several methods have been proposed for its estimation from measured Head-Related Impulse Response (HRIR) data. Nevertheless, inter-method variations in ITD calculation have been found to exceed the known Just Noticeable Differences (JNDs), hence leading to possible perceptible artifacts in virtual binaural auditory scenes, even for cases when personalized HRIRs are being used. In the absence of an objective means for validating ITD estimations, this paper evaluates which methods lead to the most perceptually relevant results. A subjective lateralization study compared objective ITDs to perceptually driven interaural pure delay offsets. Results clearly indicate the first-onset threshold detection method, using a low relative threshold of -30 dB applied to 3 kHz low-pass filtered HRIRs, to be the most perceptually relevant procedure across various metrics. Alternative threshold values and methods based on the maximum or centroid of the Inter-Aural Cross Correlation of similarly filtered HRIRs or HRIR envelopes also provided reasonable results. On the contrary, phase-based methods employing the Integrated Relative Group Delay were not found to perform as well.

2pPPc5. Sound source localization identification procedures: Accuracy, precision, confusions, and misses. M. Torben Pastore and William Yost (Speech and Hearing, Arizona State Univ., 975 South Myrtle Ave., Tempe, AZ 85287, m.torben.pastore@gmail.com)

Rakerd and Hartmann (1987) provided a useful set of equations that can describe listener performance in sound source localization identification tasks requiring listeners to identify which loudspeaker presented a sound. The data from such identification tasks can be presented in confusion matrices in which one dimension is the actual sound source locations and the other dimension is the reported/perceived sound source locations. This presentation describes how Rakerd and Hartmann’s measures relate to estimates of sound source localization accuracy, precision, confusions, and misses. We will describe some of the advantages and limitations of these measures of performance in sound source localization identification tasks, especially in conditions involving sound sources located around an entire azimuth circle. [Partially supported by a grant from the National Institute on Deafness and Other Communication Disorders, NIDCD.]

2pPPc6. The impact of asymmetric rates on interaural time difference lateralization and auditory object formation in bilateral cochlear implant and normal hearing listeners. Tanvi D. Thakkar, Alan Kan (Waisman Ctr., Univ. of Wisconsin-Madison, 934B Eagle Heights Dr., Madison, WI 53705, tthakkar@wisc.edu), and Ruth Litovsky (Commun. Sci. and Disord., Univ. of Wisconsin-Madison, Madison, WI)

Normal hearing (NH) listeners are able to accurately identify and locate sound sources using auditory object formation (AOF) and interaural time differences (ITDs). Temporal cues can further facilitate AOF and ITD sensitivity: these include within- and across-ear stimulus rate, envelope symmetry, and onset symmetry. Bilateral cochlear implant (BiCI) listeners are not guaranteed to receive symmetrical or complementary temporal information across ears. Cochlear-Nucleus devices undergo “peak-picking,” where stimulation of electrodes can yield asymmetric rates across the ears, disrupting good AOF and ITD sensitivity. We investigated the impact of asymmetric rates and ITDs on AOF and ITD lateralization. BiCI and NH listeners were presented with diotic and dichotic pulsatile stimulus rates, combined with an ITD. Rate was fixed in one ear, and varied in the contralateral ear. In a single-interval, six-alternative forced-choice task, listeners reported where and how many sounds they heard. We hypothesized that with interaural asymmetries in rate, NH listeners would exhibit minimal AOF and poor lateralization, while BiCI listeners may exhibit AOF and lateralization independent of interaural rate asymmetry. The contribution of having matched rates to good AOF and ITD sensitivity helps explain the lack of successful stream segregation in BiCI listeners, whose devices do not deliver temporally symmetric information.

2pPPc7. Vertical sound source localization when listeners and sounds rotate: The Wallach vertical illusion. M. Torben Pastore and William Yost (Speech and Hearing, Arizona State Univ., 975 South Myrtle Ave., Tempe, AZ 85287, m.torben.pastore@gmail.com)

In addition to testing his prediction that listeners would use changes in binaural cues relative to listeners’ self-induced head movements to disambiguate front-back confusions, Wallach (1939) also tested his calculations that the relative rate at which these binaural cues change could be used by listeners to determine the elevation of the sound source. Wallach was able to induce the illusion that a sound source rotating along the azimuth plane was perceived as though it were above the listener. We sought to replicate and expand upon Wallach’s study. We rotated listeners in a specialized chair at constant velocity. We presented filtered Gaussian noises at bandwidths of one-tenth octave, two octaves, and broadband, using center frequencies of 500 Hz and 4 kHz from a ring of 24 azimuthal loudspeakers located at pinna height. Sounds could also be presented from loudspeakers elevated relative to pinna level. The relative rates of sound and listener rotation around the azimuth plane were varied according to the relationship established by Wallach (1939), and listeners made judgments about the perceived elevation and rotation of the sounds. [Partially supported by a grant from the National Institute on Deafness and Other Communication Disorders, NIDCD.]
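The first-onset threshold method that 2pPPc4 identifies as the most perceptually relevant procedure reduces to a few lines. The sketch below is a minimal illustration under assumed conventions (HRIRs as sample lists, threshold defined relative to the absolute peak); the 3-kHz low-pass prefiltering step is omitted, and the function names are hypothetical.

```python
def first_onset(h, rel_thresh_db=-30.0):
    """Index of the first sample whose magnitude exceeds the given level
    relative to the impulse response's absolute peak."""
    peak = max(abs(x) for x in h)
    thr = peak * 10 ** (rel_thresh_db / 20.0)
    for n, x in enumerate(h):
        if abs(x) >= thr:
            return n
    raise ValueError("no sample exceeds threshold")

def onset_itd(h_left, h_right, fs, rel_thresh_db=-30.0):
    """ITD in seconds as the difference between the left- and right-ear
    first-onset times (positive when the left ear lags)."""
    return (first_onset(h_left, rel_thresh_db) -
            first_onset(h_right, rel_thresh_db)) / fs
```

For a synthetic pair with onsets 24 samples apart at 48 kHz, the estimate is 500 microseconds; leading ripple below the -30 dB threshold is ignored, which is the point of the relative threshold.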
2pPPc8. The roles of inhibition and adaptation for spatial hearing in
difficult listening conditions. Jean-Hugues Lestang and Dan F. Goodman
(Elec. and Electron. Eng., Imperial College London, 2 Philchurch Pl.,
London E11PG, United Kingdom, jl10015@ic.ac.uk)
The computation of binaural cues such as the Interaural Time Difference
(ITD) and Interaural Level Difference (ILD) by the auditory system is
known to play an important role in spatial hearing. It is not yet understood
how such computations are performed in realistic acoustic environments
where noise and reverberation are present. It has been hypothesized that robust sound localization is achieved through the extraction of ITD information in the rising part of amplitude-modulated (AM) sounds. Dietz et al.
(2013) tested this hypothesis using psychoacoustics and MEG experiments.
They presented AM sounds with ITDs varying during the course of one AM
cycle. Their results showed that participants preferentially extracted the ITD
information in the rising portion of the AM cycle. We designed a computational model of the auditory pathway to investigate the neural mechanisms
involved in this process. Two mechanisms were tested. The first one corresponds to the adaptation in the auditory nerve fibers. The second mechanism
occurs after coincidence detection and involves a winner-take-all network
of ITD sensitive neurons. Both mechanisms qualitatively accounted for the
data, consequently we suggest further experiments based on similar stimuli
to distinguish between the two mechanisms. Dietz et al. (2013), “Emphasis
of spatial cues in the temporal fine structure during the rising segments of
amplitude-modulated sounds,” Proc. Natl. Acad. Sci. 110(37), 15151-15156.
2pPPc9. Perceived amount of reverberation consistent with binaural
summation model. Gregory M. Ellis (Dept. of Psychol. and Brain Sci.,
Univ. of Louisville, Louisville, KY 40292, g.ellis@louisville.edu) and Pavel
Zahorik (Dept. of Otolaryngol. and Communicative Disord. and Dept. of
Psychol. and Brain Sci., Univ. of Louisville, Louisville, KY)
Although perceived amount of reverberation (PAR) is known to be dependent on the physical amount of reverberation present at the two ears, the
extent to which this information may be combined across the ears is not
well understood. Previous work using virtual auditory space techniques has
demonstrated that when physical reverberation is reduced in the ear nearest
a sound source while the contralateral ear is left unchanged, listeners do not
report a change in PAR. Reducing physical reverberation equally in both
ears, however, elicits a decrease in PAR. To better understand this phenomenon, the present study examines how PAR is affected by three additional
listening conditions: scaling the contralateral ear while leaving ipsilateral
fixed, scaling the contralateral ear only with no ipsilateral signal (monaural),
and scaling the ipsilateral ear only with no contralateral signal (monaural).
This study also examines how PAR is affected by an increase in physical
reverberation present at the ears in the two listening conditions from the
original study. Behavioral results are consistent with a binaural summation
model that combines reverberant sound power from the two ears.
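A binaural summation model of this kind can be stated very compactly. The sketch below is the simplest possible reading of the result (perceived amount of reverberation tracks the sum of reverberant power at the two ears); the function names are hypothetical and nothing here is the authors' fitted model.

```python
import math

def reverberant_power(samples):
    """Mean power of the reverberant portion of one ear's signal."""
    return sum(x * x for x in samples) / len(samples)

def par_estimate_db(rev_left, rev_right):
    """Binaural-summation sketch: the PAR predictor follows the SUM of
    reverberant power at the two ears, so attenuating one ear moves the
    estimate far less than attenuating both ears equally."""
    return 10.0 * math.log10(reverberant_power(rev_left) +
                             reverberant_power(rev_right))
```

Halving the amplitude in one ear lowers the estimate by about 2 dB, while halving both ears lowers it by about 6 dB, consistent with the asymmetry the abstract describes.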
2pPPc10. Effect of frequency region on binaural interference for
interaural level differences. Beth Rosen and Matthew Goupell (Hearing
and Speech Sci., Univ. of Maryland, College Park, 0119E Lefrak Hall,
College Park, MD 20742, brosen95@gmail.com)
Binaural interference occurs when a spectrally remote diotic interferer
affects the ability to detect changes in interaural differences in the target.
For interaural time differences, the magnitude of binaural interference
depends on the target and interferer locations. For interaural level differences (ILDs), it is unclear if the magnitude of binaural interference depends
on target and interferer location. ILDs are highly frequency dependent;
ILDs become larger for increasing frequency for sources in the free field.
Therefore, we hypothesized that binaural interference for ILDs would be
frequency dependent, and that both target and interferer frequency would
affect thresholds. In ten young normal-hearing listeners, we measured ILD
discrimination thresholds using single tones, once with no interferer and
then in the presence of a diotic interferer. We tested five frequencies: 500,
1000, 2000, 4000, and 8000 Hz. Each combination of target and interferer
frequency was tested, resulting in 25 conditions. ILD thresholds for targets
alone were frequency dependent. Binaural interference occurred for ILDs,
and the magnitude changed depending on the target and interferer location.
These data will contribute to better understanding of across-frequency ILD
processing, which is important for bilateral cochlear-implant users who rely
on ILDs to localize sounds. [Work supported by NIH R01-DC014948
(M.J.G.).]
2pPPc11. Free-field sound localization on the horizontal plane as a
function of stimulus level with an electronic, level-dependent hearing
protection device. Eric R. Thompson and Zachariah N. Ennis (711th
Human Performance Wing, Air Force Res. Lab, 2610 Seventh St, B441,
Wright-Patterson AFB, OH 45433, eric.thompson.28@us.af.mil)
Electronic, level-dependent hearing protection devices (HPD) provide
different levels of attenuation as a function of the input sound pressure level,
so that loud sounds are attenuated for protection, but soft and moderate
sound levels may be presented with little or no attenuation, or even with a
small positive gain. While previous experiments have investigated sound
localization with HPDs and moderate stimulus levels, it is important to
understand the impact of level-dependent HPDs on the localization of both
low- and high-level sounds. In this experiment, horizontal-plane sound
localization judgments were obtained from human listeners with and without an earplug-type electronic, level-dependent HPD at several stimulus levels from 20 to 80 dB SPL. The data were analyzed in terms of the
proportion of front/back reversals, and the mean absolute lateral error after
correcting for front/back reversals. There were very few front/back reversals
in the open-ear conditions at any level, and the mean absolute lateral error
was less than 5 degrees. With the HPD, there were more front/back reversals
than with open ear and the mean lateral error was greater, particularly for
the loudest sounds where amplitude compression may have had an influence
on performance.
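The two reported measures (proportion of front/back reversals, and mean absolute lateral error after correcting for reversals) can be computed as sketched below. The folding convention (azimuth in degrees, 0 = front, ±90 = the interaural axis) is an assumption, not necessarily the authors' exact definition.

```python
def lateral(az_deg):
    """Fold an azimuth (degrees, 0 = front) onto the front hemifield;
    front/back mirror pairs share the same lateral angle."""
    az = (az_deg + 180.0) % 360.0 - 180.0
    if az > 90.0:
        return 180.0 - az
    if az < -90.0:
        return -180.0 - az
    return az

def is_front_back_reversal(target_az, resp_az):
    """True when target and response lie in opposite front/back hemifields."""
    t = abs((target_az + 180.0) % 360.0 - 180.0)
    r = abs((resp_az + 180.0) % 360.0 - 180.0)
    return (t < 90.0 and r > 90.0) or (t > 90.0 and r < 90.0)

def mean_abs_lateral_error(targets, responses):
    """Mean absolute lateral-angle error, i.e., the error remaining after
    front/back reversals are 'corrected' by folding both angles."""
    return sum(abs(lateral(t) - lateral(r))
               for t, r in zip(targets, responses)) / len(targets)
```

A response at 150 degrees to a target at 30 degrees counts as a reversal but contributes zero lateral error, which is exactly the separation of the two measures described above.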
2pPPc12. Source-blind localization and segregation model featuring
head movement and reflection removal. Nikhil Deshpande and Jonas
Braasch (Architecture, Rensselaer Polytechnic Inst., 220 3rd St., Troy, NY
12180, deshpn@rpi.edu)
This model takes two simultaneous speech signals, spatialized to unique
azimuth positions and convolved with a simple multi-tap stereo impulse
response. The model first identifies reflections and generates an inversion filter for the left and right channels. It then localizes the sources and virtually
rotates its head to a known orientation for the best resulting segregation of
the sources. Next, the model segments the input signals in time and frequency, applies the inverse filter, and searches for residual energy in each
bin to separate the target signal from the mixture. From the residual non-canceled energy, it generates a binary masking map and overlays this on the
mixed signal’s spectrogram to extract only the target signal. Improvement
in SNR from head rotation can exceed 30 dB.
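The final masking stage lends itself to a compact sketch. This toy version assumes energies stored as 2-D lists indexed [frame][band] and a keep/discard threshold defined relative to the map's peak; it is not the authors' implementation.

```python
def binary_mask(residual_energy, rel_threshold=0.1):
    """Binary time-frequency mask: a bin is kept (1) when its residual
    (non-canceled) energy exceeds rel_threshold times the map's peak."""
    peak = max(max(row) for row in residual_energy)
    thr = rel_threshold * peak
    return [[1 if e >= thr else 0 for e in row] for row in residual_energy]

def apply_mask(spectrogram, mask):
    """Overlay the mask on the mixture spectrogram to extract the target."""
    return [[s * m for s, m in zip(s_row, m_row)]
            for s_row, m_row in zip(spectrogram, mask)]
```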
2pPPc13. Adaptation in distance perception induced by audio-visual
stimuli with spatial disparity. Lubos Hladek (Inst. of Comput. Sci., P. J.
Safarik Univ. in Kosice, 10-16 Alexandra Parade, New Lister Bldg. 3L,
Glasgow Royal Infirmary, Glasgow G31 2ER, United Kingdom, lubos.hladek@nottingham.ac.uk), Aaron Seitz (Dept. of Psych., Univ. of
California, Riverside, CA), and Norbert Kopco (Inst. of Comput. Sci., P. J.
Safarik Univ. in Kosice, Kosice, Slovakia)
Simultaneous presentation of audio-visual stimuli with disparity leads to
perceptual shifts in judgments of the stimulus auditory components (ventriloquism effect, VE). The shifts can persist even to auditory stimuli presented
alone (ventriloquism aftereffect, VA). A previous study showed asymmetrical VE and VA for visual adaptors presented closer vs. further than the auditory components [Hladek et al. (2014), Visual calibration of auditory
distance perception, ARO #37 Abstract PS-614]. In that study, a brief flash
of light (visual adaptor) or noise burst (auditory component) were presented
in front of the listener at distances 0.4-2.6 m in a small reverberant room,
with visual adaptor 30% closer or further than the auditory component.
Here, a new analysis of the results is presented, showing that much of the
previously observed asymmetries between the two directions of shift (visual-closer vs. visual-farther) can be accounted for by referencing responses
to the pre-adaptation baseline, and by scaling the data with respect to V-aligned responses using the actual physical disparity of the auditory and
2pPPc14. The effects of diffuse noise and artificial reverberation on
listener weighting of interaural cues in sound localization. Tran M.
Nguyen (Health and Rehabilitation Sci. Graduate Program, Western Univ.,
London, ON, Canada) and Ewan A. Macpherson (National Ctr. for Audiol.,
Western Univ., 1201 Western Rd, Elborn College 2262, London, ON N6G
1H1, Canada, ewan.macpherson@nca.uwo.ca)
The reliability of interaural time and level difference (ITD, ILD) sound
location cues can be degraded by noise or reverberation. In this study, we
determined how weighting of ITD and ILD varied with signal-to-noise ratio
(SNR) in the presence of interaurally uncorrelated background noise and
with direct-to-reverberant ratio (DRR) in the presence of artificial reverberation (generated by convolving the target signal with an interaurally uncorrelated pair of impulse responses created by multiplying Gaussian noise with
a decaying exponential, RT60 = 500 ms). Wideband (0.5-16 kHz) 100-ms
noise-burst targets were presented over headphones using individual head-related transfer functions. ITD and ILD were manipulated by attenuating or delaying the sound at one ear (by up to 300 μs or 10 dB), and cue weighting
was computed by comparing localization response bias to imposed cue bias.
Wideband (0.5-16 kHz) and low-pass (0.5-2 kHz) noise and reverberation
and SNRs and DRRs from -5 to +20 dB were used. ITD dominated in quiet,
anechoic conditions. ITD was downweighted and ILD upweighted with
decreasing SNR for both noises. Only downweighting of ITD and upweighting of ILD were associated with decreasing DRR in wideband and low-pass
reverberation, respectively. In general, listeners increased the relative
weighting of ILD in more adverse listening conditions.
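The artificial reverberation described above (interaurally uncorrelated impulse responses made by multiplying Gaussian noise with a decaying exponential) is straightforward to generate. The sketch assumes the standard relation that the amplitude envelope falls by 60 dB, i.e., a factor of 10^(-3t/RT60), over RT60 seconds; using two different seeds yields an uncorrelated left/right pair.

```python
import random

def exp_decay_ir(fs, rt60=0.5, dur=0.6, seed=None):
    """Artificial reverberation tail: Gaussian noise multiplied by a
    decaying exponential whose level drops 60 dB in rt60 seconds."""
    rng = random.Random(seed)
    n = int(fs * dur)
    return [rng.gauss(0.0, 1.0) * 10.0 ** (-3.0 * (k / fs) / rt60)
            for k in range(n)]

def envelope(t, rt60=0.5):
    """Deterministic amplitude envelope of the tail at time t (seconds)."""
    return 10.0 ** (-3.0 * t / rt60)
```

Convolving the target with one such response per ear (different seeds) gives the interaurally uncorrelated pair used in the study; varying the direct-sound level against this tail sets the DRR.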
2pPPc15. Preserving spatial perception in reverberant environments
using direct-sound driven dynamic range compression. Henrik G.
Hassager, Tobias May, Alan Wiinberg, and Torsten Dau (Hearing Systems
Group, Dept. of Elec. Eng., Tech. Univ. of Denmark, Ørsteds Plads 352,
Kgs. Lyngby DK-2800, Denmark, hgha@elektro.dtu.dk)
Fast-acting hearing-aid compression typically distorts the auditory cues
involved in the spatial perception of sounds in rooms, due to the enhancement of low-level reverberant energy portions of the sound relative to the
direct sound. The present study investigated the benefit of a novel direct-sound driven compression scheme that adaptively selects appropriate time
constants to preserve the listener’s spatial impression. Specifically, fast-acting compression was maintained for time-frequency (T-F) units dominated
by the direct sound while the compressor was linearized via longer time
constants for T-F units dominated by reverberation. This novel compression
scheme was evaluated with normal-hearing listeners who indicated their
perceived location and distribution of virtualized speech in the horizontal
plane. The results confirmed that both independent compression at each ear
and linked compression across ears resulted in more diffuse and broader,
sometimes internalized, sound images as well as image splits. In contrast,
the novel linked direct-sound driven compressor provided the listeners with
a similar spatial perception obtained with linear processing that served as a
reference. Independent direct-sound driven compression created a sense of
movement of the sound between the two ears, suggesting that preserving the
interaural level differences via linked compression is advantageous with the
proposed direct-driven compression scheme.
Environment for Auditory Research of the Army Research Laboratory was
used to create ambient masking sounds. In general, masking sounds that
were similar to the vehicle sounds in spectral and envelope shape were more
effective than dissimilar sounds in reducing auditory perception abilities of
the listener. However, the complexity of most vehicle sounds enabled the listeners to correctly identify and localize them in many masking
conditions. The results of these experiments will be discussed in terms of
the aural abilities of human subjects to identify and localize vehicle sounds
in various ambient masking conditions.
2pPPc17. Interaural level difference-based model of speech localization
in multi-talker environment. Peter Toth and Norbert Kopco (Inst. of
Comput. Sci., Pavol Jozef Safarik Univ. in Kosice, Srobarova 2, Kosice
04154, Slovakia, peter.toth@upjs.sk)
Horizontal localization is based on extraction of the interaural time and
level differences (ITD, ILD). Even in complex scenes with multiple talkers
and reverberation, the auditory system is remarkably good at estimating the
individual talker locations. Several previous models proposed mechanisms
that stressed the importance of ITD in the localization process. Here, we
examine whether azimuth estimation in complex scenarios can be based
solely on ILD. We implemented a model (based on Faller and Merimaa,
2004) in which azimuth estimation was based on ILDs of signal parts with
high interaural correlation and with spectral profile matching that of the target. Comparison with experimental data (Kopco et al., 2010) showed that
highly correlated parts of the signal, if available, provide reliable ILD estimates sufficient for precise target localization. However, for lateral target
positions, at which the target dominates one ear but not the other, interaural
correlation was too low to guide ILD extraction. In such cases, a new model
based on finding maximum ILD provided good estimates even if maskers
dominated in the worse ear. The combined model predictions matched the
experimental data with target locations between -50 and 50 and for 4
maskers in reverberation. [Work supported by APVV-0452-12, H2020MSCA-RISE-2015 #691229.]
2pPPc18. The source and effects of binaural cue ambiguity in free-field
stereo sound localization—Behavioral testing. Colton Clayton , Leslie
Balderas, and Yi Zhou (Speech and Hearing Sci., Arizona State Univ., 975
S. Myrle Ave, SHS, Tempe, AZ 85287, ctclayto@asu.edu)
Horizontal sound localization in free field requires integration of interaural time (ITD) and level (ILD) differences, in making accurate spatial judgments. Recently, we showed that listeners demonstrated great variability in
localizing a stereo sound source (Montagne and Zhou, JASA, 2016). We
hypothesized that this variability might arise from conflicting sidedness
between ITDs and ILDs within and/or across frequency bands. To test this
hypothesis, here we generated a new set of stimuli with variable spatial congruence between ITDs and ILDs by adding a constant inter-channel level
cue ( + /- 5 dB) either aligned with or opposed to the inter-channel timing
cue (from -1 to 1 msec). In Experiment 1 listeners responded to 15-ms
broadband noise bursts. Response variability decreased when the inter-channel timing and level cues were spatially congruent and increased when they
were not. In Experiment 2 listeners responded to low- and high-pass filtered
noise (1.5 kHz cutoff) for the spatially incongruent stimuli only. Response
variability was much reduced but the perceived source location consistently
pointed to the “wrong side,” favoring the level cue. Together, the new
results suggest a significant weighting role for ILDs (generated from levelbased stereophony) in determining the lateral position of a stereo image.
2pPPc16. Effects of masker similarity on the identification and
localization of vehicular sounds. Mark A. Ericson and Rachel Weatherless
(Human Res., Army Res. Lab., 520 Mulberry Point Rd., Aberdeen Proving
Ground, MD 21005, mark.a.ericson.civ@mail.mil)
2pPPc19. The source and effects of binaural cue ambiguity in free-field
stereo sound localization—Modeling simulation. Yi Zhou and
Christopher Montagne (Speech and Hearing Sci., Arizona State Univ., 975
S. Myrtle Ave., Coor 3470, Tempe, AZ 85287, yizhou@asu.edu)
Several experiments were conducted to determine the effects of masking
sound similarity on the identification and localization of airborne and
ground vehicle sounds. In some experiments, natural masking sounds were
played in the presence of target vehicle sounds. In other experiments, spectral and temporal cues were artificially manipulated to determine their individual effects on identification and localization performance. The
Our recent study (Montagne and Zhou, JASA 2016) showed that binaural localization cues—interaural time (ITD) and level (ILD) differences—
were more variably distributed when stimuli were presented stereophonically instead of from single speakers. We hypothesized that variability in listeners’ responses is directly related to variability in the binaural cues
imposed by the stimulus. Here, we investigate the validity of this hypothesis
3637
J. Acoust. Soc. Am., Vol. 141, No. 5, Pt. 2, May 2017
Acoustics ’17 Boston
3637
2p MON. PM
visual component. However, asymmetry still persists for the VA data and
VA buildup, suggesting that different neural substrates underlie VA and VE
in the distance dimension. [Work supported by APVV-0452-12 and EU
H2020-MSCA-RISE-2015 grant 691229.]
by examining the distribution of ITDs and ILDs using a simulated binaural
neural network. The peripheral component of this model includes an
updated auditory-nerve model (Zilany et al., 2014) and the central component of this model incorporated binaural correlation and level difference
analysis. The decision variable was made based on combined ITD and ILD
distributions across frequencies. The modeled data were analyzed and interpreted with regard to results from a parallel behavioral test. The model
results suggest that low-frequency ITDs are a major cue for sound source
localization even they are ambiguously distributed with multiple peaks. On
the other hand, the ILD cue can strongly modulate which ITD peak dominates the perceived source location. The network analysis further investigates potential neural mechanisms by examining decision-making based on
an ILD-modulated ITD network vs. separate ITD and ILD networks.
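The interaural-correlation gating described in 2pPPc17 can be sketched in a few lines. This is a schematic broadband version under invented parameters; the actual model operates per auditory-filterbank channel and adds the spectral-profile matching step:

```python
import numpy as np

def ild_estimates(left, right, frame=512, ic_threshold=0.95):
    """Frame-wise ILD estimates gated by interaural correlation (IC):
    only frames whose zero-lag IC exceeds the threshold contribute,
    echoing the selection rule in 2pPPc17. Broadband sketch only."""
    ilds = []
    for start in range(0, len(left) - frame, frame):
        l = left[start:start + frame]
        r = right[start:start + frame]
        denom = np.sqrt(np.sum(l ** 2) * np.sum(r ** 2))
        if denom == 0:
            continue
        ic = np.sum(l * r) / denom        # normalized zero-lag correlation
        if ic >= ic_threshold:
            ilds.append(10 * np.log10(np.sum(l ** 2) / np.sum(r ** 2)))
    return np.array(ilds)
```

For a perfectly correlated pair with the right channel attenuated, every frame passes the gate and all estimates equal the imposed ILD.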
2pPPc20. Binaural detection of a Gaussian noise burst target in the
presence of a lead/lag masker. Jonas Braasch, M. Torben Pastore (School
of Architecture, Rensselaer Polytechnic Inst., 110 8th St., Troy, NY 12180,
braasj@rpi.edu), and Yi Zhou (Speech and Hearing Sci., Arizona State
Univ., Tempe, AZ)
When a leading stimulus is followed shortly thereafter by another similar stimulus coming from a different direction, listeners often report hearing
a single auditory event at or near the location of the leading stimulus. This
is called the precedence effect (PE). We measured masked detection thresholds for a noise target in the presence of a masker composed of (1) a lead/
lag noise pair with the lead ITD set the same or opposite to the target, (2) a
diotic masker, and (3) a dichotic pair of decorrelated noises. If the PE results
in actual elimination of the lag stimulus, we would expect lower masked
thresholds when the lead ITD is opposite to that of the target, as predicted
by spatial release from masking. Results show that for small lead/lag delays,
detection thresholds were similar to those for the diotic masker, regardless
of whether the lead ITD was the same or opposite to that of the target. For
longer lead/lag delays, which are unlikely to elicit the PE, thresholds
approached those measured for dichotic maskers composed of two decorrelated noises. An extended EC model is used to simulate the psychophysical
results. [Work supported by NSF BCS-1539276 and NSF BCS-1539376.]
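The equalization-cancellation idea invoked above can be caricatured as: equalize the masker across the ears, subtract, and see how much target survives. A toy version under stated assumptions, without the internal delay/gain jitter that a full EC model includes (all names hypothetical):

```python
import numpy as np

def ec_residual_snr(target_l, target_r, masker_l, masker_r, fs, max_delay=0.001):
    """Toy equalization-cancellation (EC) step: pick the interaural
    delay that best cancels the masker, subtract the ear signals, and
    return the target-to-masker power ratio (dB) of the residual."""
    max_lag = int(max_delay * fs)

    def residual(x_l, x_r, lag):
        # circular shift stands in for a true delay line in this sketch
        return np.roll(x_l, lag) - x_r

    best = min(range(-max_lag, max_lag + 1),
               key=lambda k: np.sum(residual(masker_l, masker_r, k) ** 2))
    t_res = residual(target_l, target_r, best)
    m_res = residual(masker_l, masker_r, best)
    return 10 * np.log10(np.sum(t_res ** 2) / (np.sum(m_res ** 2) + 1e-12))
```

For a diotic masker and an antiphasic target this returns a large positive value, mirroring the BMLD: cancellation removes the masker while the interaurally different target survives.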
2pPPc21. How accurate should dereverberation be? Effects of
underestimating and overestimating binaural room impulse responses.
Nirmal Kumar Srinivasan, Alexis Staudenmeier, and Kelli Clark (Dept. of
Audiol., Speech-Lang. Pathol. and Deaf Studies, Towson Univ., 8000 York
Rd., Towson, MD 21252, nsrinivasan@towson.edu)
The sound arriving at the listeners’ ears is influenced by the binaural
room impulse response (BRIR) of the listening environment. Previous
research (Srinivasan et al., 2017) has suggested that removing late reflections in BRIR improves speech understanding most when spatial cues are
absent. However, in real world listening scenarios, it is difficult to differentiate between the effects of early reflections and late reverberation. Here, we
present data from an experiment evaluating the effects of a simple static dereverberation technique on speech understanding for two reverberant environments (T60 = 1 and 2 s). The input speech signal was dereverberated by
deconvolving the speech signal under three different conditions: (1) underestimating the effects of reverberation, (2) overestimating the effects of
reverberation, and (3) correct estimation of the effects of reverberation.
Effects of the three deconvolving techniques on identification thresholds
and amount of release from masking will be discussed.
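The deconvolution step described above can be sketched with a regularized inverse filter. This is a generic frequency-domain approach, not necessarily the authors' method; using a wrong `rir_estimate` would mimic the under-/over-estimation conditions:

```python
import numpy as np

def deconvolve(reverberant, rir_estimate, eps=1e-3):
    """Regularized frequency-domain deconvolution of a reverberant
    signal by an estimated room impulse response (generic sketch)."""
    n = len(reverberant)
    R = np.fft.rfft(reverberant, n)
    H = np.fft.rfft(rir_estimate, n)
    # Tikhonov-style term eps keeps near-zero bins of H from blowing up
    inverse = np.conj(H) / (np.abs(H) ** 2 + eps)
    return np.fft.irfft(R * inverse, n)
```

With a correctly estimated impulse response the dry signal is recovered up to a small regularization bias.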
2pPPc22. Modelling the frequency dependency of binaural masking
level difference and its role for binaural unmasking of speech in normal
hearing and hearing impaired listeners. Christopher F. Hauth, Thomas
Brand, and Birger Kollmeier (Carl von Ossietzky (CvO) Universität
Oldenburg, Medizinische Physik and Cluster of Excellence Hearing4All,
Oldenburg D-26129, Germany, christopher.hauth@uni-oldenburg.de)
In binaural tone-in-noise detection experiments humans can achieve
substantially lower thresholds if either the noise or the tone has interaural time or
phase differences (ITDs/IPDs) compared to diotic presentation of tone and
noise. This effect is named binaural masking level difference (BMLD) and
was mainly investigated at 500 Hz. The results obtained in these experiments were used to fit binaural processing errors in the equalization-cancelation (EC) mechanism, which is an effective model of binaural processing. In
this study, the frequency dependency of BMLDs is investigated for listeners
with normal hearing and high frequency hearing loss. Binaural tone-in-noise
detection experiments are conducted for tone frequencies of 250, 500, 750,
1000, 1500, and 2000 Hz, and ITDs of the noise up to 5 times the period of
the tested frequency. Furthermore, diotic and dichotic speech reception
thresholds for low-pass-filtered speech in noise are measured with the same
listeners. The EC mechanism with processing errors derived at 500 Hz can
predict the BMLDs for the remaining frequencies, except for the 250 Hz
condition. Moreover, some hearing-impaired (HI) listeners show a reduced BMLD in both
tone-in-noise and speech intelligibility experiments, which is not covered by
the EC model with normal-hearing processing errors.
2pPPc23. Effects of clinical hearing aid settings on sound localization
cues. Anna C. Diedesch (Dept. of Otolaryngology/Head & Neck Surgery,
Oregon Health & Sci. Univ., 7012 Sonya Dr., Nashville, Tennessee 37209,
anna.c.diedesch@vanderbilt.edu), Frederick J. Gallun (Dept. of
Otolaryngology/Head & Neck Surgery, Oregon Health & Sci. Univ.,
Portland, OR), and G. Christopher Stecker (Hearing & Speech Sci.,
Vanderbilt Univ., Nashville, TN)
Sound localization cues, particularly interaural level difference (ILD)
cues, are known to be affected by hearing aid processing such as wide-dynamic-range compression and strong directional microphones. These distorted cues may negatively impact spatial awareness and communication in
complex environments, two areas of challenge for new and experienced
hearing aid users. Previously, we investigated frequency-specific alterations
to ILD and interaural time difference (ITD) cues using linear amplification
in simulated reverberant rooms and with gaged vent sizes. In reverberation,
ITDs became erratic and ILDs were reduced; minimal effects of hearing aid venting
were observed (Diedesch and Stecker, Am. Aud. Soc. 2016). Here, we
applied that approach to hearing aid settings more typically encountered by
clinical patients. Phonak Audeo receiver-in-the-canal (RIC) hearing aids
were programmed for typical open-fit hearing losses using clinically
standard settings for compression algorithms and directional microphones.
Recordings, collected using an acoustic manikin, were compared across
hearing aid coupling (standard open and closed domes) in anechoic and
simulated rooms. Frequency-specific ITD and ILD were quantified across
coupling, room, and hearing aid settings. [Work supported by the F. V. Hunt
Postdoctoral Research Fellowship, NIH R01-DC011548 (GCS), R01-DC011828 (FJG), and the VA RR&D NCRAR.]
2pPPc24. Individual differences in cocktail party listening: The relative
role of decision weights and internal noise. Robert Lutfi, Alison Tan, and
Jungmee Lee (Commun. Sci. and Disord., Univ. of Wisconsin - Madison,
1410 E. Skyline Dr., Madison, WI 53705, ralutfi@wisc.edu)
A simulated “cocktail-party” listening experiment was conducted to
determine the relative role of decision weights and internal noise in accounting for the large individual differences in performance typically observed in
these experiments. Listeners heard, over headphones, interleaved sequences of random vowels and were asked to judge on each trial whether the
vowels were spoken by the same (AAA) or different (ABA) talkers. The A and
B vowels had nominally different F0 and spatial position (simulated using
Kemar HRTFs), but were randomly perturbed around these values on each
presentation. Decision weights for each dimension, internal noise, and efficiency measures were estimated using COSS analysis [Berg (1990). J.
Acoust. Soc. Am. 88, 149-158]. Decision weights were nonoptimal and differed across listeners, but weighting efficiency across individuals was quite
similar. Individual differences in performance accuracy ranging over 40 percentage points were largely related to differences in internal noise. The
results are discussed in terms of their implications for the relative role of
sensory and attentional factors affecting individual performance differences
in simulated cocktail party listening. [Work supported by NIDCD
5R01DC001262-24.]
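The logic of separating decision weights from internal noise can be illustrated with a simulated observer: regress trial-by-trial responses on the stimulus perturbations to recover the relative weights. This is a simplified stand-in for the COSS analysis cited above, with invented numbers throughout:

```python
import numpy as np

def estimate_weights(perturbations, responses):
    """Relative decision weights from regression of binary responses
    on trial-by-trial cue perturbations (simplified COSS stand-in)."""
    X = perturbations - perturbations.mean(axis=0)
    y = responses - responses.mean()
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w / np.sum(np.abs(w))

# simulated observer weighting the F0 cue twice as heavily as the
# spatial cue, plus additive internal noise
rng = np.random.default_rng(7)
perturb = rng.standard_normal((5000, 2))        # columns: F0, position
decision = 2.0 * perturb[:, 0] + 1.0 * perturb[:, 1]
responses = (decision + 0.5 * rng.standard_normal(5000) > 0).astype(float)
print(estimate_weights(perturb, responses))     # roughly [0.67, 0.33]
```

Because the internal noise attenuates both regression coefficients equally, the recovered relative weights stay near the true two-to-one ratio; only overall efficiency drops.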
2pPPc25. Rapid learning of sound schema for the cocktail party
problem. Kevin Woods (Dept. of Brain and Cognit. Sci., Massachusetts Inst. of
Technol., 43 Vassar St., 46-4078, Cambridge, MA 02139, kwoods@mit.edu)

Auditory scene analysis depends on knowledge of natural sound structure,
but little is known about how source-specific structures might be
learned and applied. We explored whether listeners internalize “schemas”—
the abstract structure shared by different occurrences of the same type of
sound source—during cocktail-party listening. We measured the ability to
detect one of two concurrent “melodies” that did not differ in mean pitch
(or in timbre), ensuring that only the structure of these melodies over time
could be used to distinguish them. Target melodies were cued by presenting
them in isolation before each mixture, transposed to avoid exact repetition.
The task was to determine if the cued melody was present in the subsequent
mixture. Listeners performed above chance despite transposition between
cue and target. Particular melodic schemas could recur across a subset of
trials within a block, as well as across blocks separated by epochs in which
the schema was absent. Recurrence across trials within a block facilitated
target detection, and the advantage grew over the experiment despite
intervening blocks, suggestive of learning. The results indicate that rapid and
persistent internalization of source schemas can promote accurate perceptual
organization of sound sources that recur intermittently in the auditory
environment.

2pPPc26. The effect of sound intensity on lateralization with interaural
time differences. Nima Alamatsaz (Biomedical Eng., New Jersey Inst. of
Technol., 323 Martin Luther King Blvd., Newark, NJ 07102,
nima.alamatsaz@njit.edu), Robert M. Shapley (Ctr. for Neural Sci., New York
Univ., New York, NY), and Antje Ihlefeld (Biomedical Eng., New Jersey
Inst. of Technol., Newark, NJ)

Previous studies examining the effect of sound intensity on ITD
lateralization disagree on whether ITD lateralization changes with increasing
sound level. We tested how sound intensity affects lateralization in three
experiments. In all experiments, normal-hearing listeners judged the
lateralization of band-limited target noise tokens (300 to 1200 Hz, 1 s
duration, 10-ms cos-squared ramp, presented with insert earphones). For
each ear and target noise, sensation level (SL) was estimated using two-down
one-up adaptive tracking. Each target stimulus contained an ITD of 0, 75,
150, 225, 300, or 375 µs and was presented at 10, 25, or 40 dB SL. In
experiment 1, listeners matched the ITD of a variable-ITD pointer (25 dB
SL, 300-1200 Hz, 1 s duration, 10-ms cos-squared ramp) to each of the
target tokens. In experiment 2, in each two-interval trial of a 2-AFC
paradigm, the standard stimulus consisted of the same noise token as in
experiment 1, and the signal stimulus had a randomly chosen ITD of ±0, 25,
50, or 75 µs relative to the target ITD. Listeners reported whether the sound
moved to the left or to the right, and thresholds were estimated at the
“50%-right” point. In experiment 3, listeners indicated the perceived
laterality by pointing on a graphical user interface. Preliminary data suggest
that sound level affects lateralization, but that individual differences require
testing of a greater number of listeners than have historically been assessed.

2pPPc27. Influence of source location and temporal structure on spatial
auditory saliency. Zuzanna M. Podwinska (School of Computing, Sci. and
Eng., Univ. of Salford, The Crescent, Salford M5 4WT, United Kingdom,
z.podwinska@edu.salford.ac.uk), Bruno M. Fazenda (School of Computing,
Sci. and Eng., Univ. of Salford, Manchester, United Kingdom), and William
J. Davies (School of Computing, Sci. and Eng., Univ. of Salford, Salford,
United Kingdom)

Few studies to date have dealt with spatial auditory saliency. Auditory
attention studies concerned with spatial aspects generally concentrate
on top-down selective or divided attention, e.g., where subjects are asked to
attend to one source at a specific location whilst being distracted by sources
from different directions. The work presented here reports on experiments in
which bottom-up spatial auditory attention, or saliency, was tested. The tests
were run using a fully immersive 3D audio-visual reproduction system, in
which interactions between auditory and visual modalities have been
included. We tested how the temporal structure and absolute location of
sound sources around the listener influence saliency and attention.

MONDAY AFTERNOON, 26 JUNE 2017
ROOM 201, 1:15 P.M. TO 6:00 P.M.
2pSAa
Structural Acoustics and Vibration, Physical Acoustics, and Engineering Acoustics:
Acoustic Metamaterials II
Christina J. Naify, Chair
Acoustics, Naval Research Lab, 4555 Overlook Ave. SW, Washington, DC 20375
Chair’s Introduction—1:15
Invited Papers
1:20

2pSAa1. Highly directional source radiation using isotropic transformation acoustics. Andrew Norris and Xiaoshi Su (Mech. and
Aerosp. Eng., Rutgers Univ., 98 Brett Rd., Piscataway, NJ 08854, norris@rutgers.edu)

Recent developments in transformation acoustics (TA) have taken advantage of the isotropic nature of conformal mappings to form
gradient index lens devices, such as a two-dimensional monopole-to-quadrupole lens. While this TA precisely maintains the wave
equation solution within the lens, the radiated field is still multi-directional and not fully efficient due to impedance mismatch and
non-planar radiation. A three-fold strategy is outlined here to achieve highly directional and impedance-matched devices. First, most of
the rays leaving the original circular region are mapped to a single face of a polygon. Second, the center of the radiating face is
impedance matched by simply scaling the size up or down. Finally, the polygon is replaced by a two-sided crescent-moon mapping
which optimizes the radiation across the face of higher curvature, allowing near-field focusing and quasi-planar far-field radiation.
These ideas are illustrated by example simulations. Practical design methods, including water-matrix and solid-matrix devices, will be
discussed. [Work supported through ONR MURI.]
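The two-down one-up adaptive tracking named in 2pPPc26 above converges near the 70.7%-correct point of the psychometric function. A minimal generic implementation, not the authors' code:

```python
def two_down_one_up(respond, start_level, step, n_reversals=8):
    """Two-down one-up adaptive track: step down after two correct
    responses in a row, step up after any error, and average the
    levels at track reversals. `respond(level)` returns True if
    the (simulated or real) listener answered correctly."""
    level, correct_in_row, direction = start_level, 0, 0
    reversals = []
    while len(reversals) < n_reversals:
        if respond(level):
            correct_in_row += 1
            if correct_in_row == 2:          # two correct in a row: go down
                correct_in_row = 0
                if direction == +1:
                    reversals.append(level)  # track turned: record reversal
                direction = -1
                level -= step
        else:                                # any error: go up
            correct_in_row = 0
            if direction == -1:
                reversals.append(level)
            direction = +1
            level += step
    return sum(reversals) / len(reversals)   # threshold = mean of reversals
```

With a deterministic listener who is correct above level 5, a track started at 20 with step 2 oscillates between 4 and 6 and returns 5.0.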
1:40
2pSAa2. Perfect and broadband acoustic absorption in deep sub-wavelength structures for the reflection and transmission
problems. Vicente Romero-García, Noé Jiménez, Vincent Pagneux, and Jean-Philippe Groby (LAUM, UMR CNRS 6613, Av.
Messiaen, Le Mans 72085, France, virogar1@gmail.com)
The mechanisms to achieve perfect acoustic absorption by sub-wavelength structures in both reflection and transmission problems
are reported. While the mechanism consists of critically coupling a single resonance, independently of its nature, in the reflection
problem, the mechanism becomes more complicated in the transmission problem. To tackle these issues, we use asymmetric interacting
resonators, whose interaction leads to the perfect absorption condition. The analyzed system consists of a panel with a periodic
distribution of thin slits, the upper wall of which is loaded by Helmholtz resonators. The propagation in the slit is highly dispersive due
to the presence of the resonators, producing slow-sound conditions and down-shifting the slit resonance to low frequencies. By
controlling the geometry of the resonators, the visco-thermal losses are tuned to compensate the leakage of the system and fulfill the
perfect absorption condition. In the case of the reflection problem, a single resonator is enough to obtain perfect absorption. However,
in the case of transmission, only quasi-perfect absorption can be obtained using an array of identical Helmholtz resonators. A possible
solution is the use of double interacting resonators, one acting as a reflecting wall for the previous one. This procedure can be
iteratively repeated, and one can design perfect and broadband acoustic absorbers based on the rainbow-trapping mechanism.
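The critical-coupling condition described above, with visco-thermal losses tuned to balance the leakage, can be stated compactly for the reflection problem. This is a textbook sketch, not the authors' derivation, with $Z_s$ the surface impedance of the slit-resonator panel and $Z_0$ the characteristic impedance of air:

```latex
\alpha = 1 - |R|^{2}, \qquad R = \frac{Z_s - Z_0}{Z_s + Z_0},
```

so that perfect absorption, $\alpha = 1$, occurs exactly when $Z_s = Z_0$, i.e., when the internal loss rate of the resonance equals its leakage (radiation) rate.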
Contributed Papers
2:00
2pSAa3. Redirection and splitting of sound waves by a periodic chain of
thin perforated cylindrical shells. Andrii Bozhko (Univ. of North Texas,
316 Fry St. Apt. 156, Denton, TX 76201, andriibozhko@my.unt.edu), Jose
Sanchez-Dehesa (Universidad Politecnica de Valencia, Valencia, Valencia,
Spain), and Arkadii Krokhin (Univ. of North Texas, Denton, TX)
A line of perforated cylindrical shells in air is practically transparent for
sound since each individual unit is a weak scatterer. However, strong scattering occurs due to the coupling to the acoustic eigenmodes of the chain.
Here we develop an analytical theory of sound transmission and scattering
at a linear chain of perforated shells and predict a strong anomalous effect for
oblique incidence. The chain eigenmodes are weakly decaying, with symmetric
profile and anomalous dispersion or with antisymmetric profile and
normal dispersion, and their excitation leads to deep minima in the transmission
and 90°-redirection of the external sound. At normal incidence, only
the symmetric eigenmode can be excited; otherwise, both modes are excited
at close frequencies. Moreover, the wave which resonates with the
normal-dispersion mode is redirected along the “right” direction, whereas the wave
resonating with the anomalous-dispersion mode is redirected in the “wrong”
direction. Thus, a periodic chain of perforated shells may serve not only as a
90°-redirecting antenna but also as a splitter of sound waves with close
frequencies. For example, an acoustic signal containing two frequencies
around 3 kHz can be split into monochromatic components propagating in
opposite directions along the chain, if the beat frequency is 500 Hz.
2:20

2pSAa4. Nonuniform distribution of point defects in ferroelectric
phononic crystal. Chandrima Chatterjee (Phys. and Astronomy, The Univ.
of MS, Oxford, MS) and Igor Ostrovskii (Phys. and Astronomy, The Univ.
of MS, Lewis Hall, Rm. 108, University, MS 38677, iostrov@phy.olemiss.edu)

The photoluminescence (PL) from point defects in a ferroelectric
phononic crystal (FPC) is investigated at room temperature. The FPC consists
of periodically poled domains, 0.45 mm long each along the x-axis, in a
0.5-mm-thick z-cut LiNbO3 wafer. The spectra of PL are excited by 310-nm
ultraviolet light and registered in the range of 350 to 900 nm. The PL spectra
reveal different point defects including F-center, Ba, Ar, Ne, Cr, K, Fe+, Xe,
and others. The electrically active defects such as F-center and Fe+ are
expected to be sensitive to a local electric polarization. In an FPC, the
neighboring ferroelectric domains are inversely poled and have an opposite
electric polarization. The change from polarization “up” to polarization
“down” occurs across a so-called interdomain wall. The point defect
concentrations along the neighboring domains are investigated by PL
scanning, which consists of taking PL spectra from narrow zones across the
domain structure along the x-axis. This scanning reveals a nonuniform
distribution of defects along the FPC. The striking result is that some of the
defects, such as F-center and Fe+, have relatively narrow extrema in PL
intensity right at the interdomain wall location. An engineering application of
these findings may be a new non-destructive characterization method for
ferroelectric phononic crystals.

2:40

2pSAa5. An improved helical-structured acoustic metamaterial design
for broadband applications. Jie Zhu (Hong Kong Polytechnic Univ.,
Kowloon 00000, Hong Kong, jiezhu@polyu.edu.hk)

Helical-structured acoustic metamaterials have been proposed to provide
non-dispersive sound wave slowdown, an interesting topic that not only
bears on fundamental explorations of slow-wave physics but also will
benefit many applications. Although its effect of delaying the acoustic signal
through sound wave phase modulation is obvious, the helical-structured
acoustic metamaterial provides satisfying transmission performance only
over a narrow frequency range. In this presentation, I will introduce an
improved design developed from the original helical-structured acoustic
metamaterial. The improved design can slow down acoustic wave
propagation through refractive index tuning and wave-front revolution over
a much wider spectrum.

3:00

2pSAa6. A discussion of macroscopic properties for acoustic
metamaterials: Models and measurements. Caleb F. Sieck, Andrea Alù
(Dept. of Elec. & Comput. Eng. and Appl. Res. Labs., The Univ. of Texas at
Austin, 1616 Guadalupe St., Austin, TX 78712, cfsieck@utexas.edu), and
Michael R. Haberman (Dept. of Mech. Eng. and Appl. Res. Labs., The
Univ. of Texas at Austin, Austin, TX)

Macroscopic material properties are useful to describe the long-wavelength
dynamics of inhomogeneous media. Analytically, these properties are often
determined by weighted field averages, which define the effective fields of a
representative unit cell. The relations between these effective fields provide
macroscopic properties. In addition to traditional properties (wavenumber,
impedance, density, and compressibility), recent research has shown that
inhomogeneous media require coupling parameters between effective
volume-strain and momentum fields, known as Willis coupling or bianisotropy.
However, in the absence of embedded sources, metamaterial properties are
3:20
2pSAa7. Development of a multi-material underwater anisotropic
acoustic metamaterial. Peter Kerrian (Penn State Appl. Res. Lab,
University Park, PA), Amanda Hanford, Robert W. Smith, Benjamin Beck,
and Dean Capone (Penn State Appl. Res. Lab, Appl. Res. Lab, PO Box 30 MS 3230D, State College, PA 16804, ald227@psu.edu)
Previous work in the open literature has described three potential ways
to create an acoustic metamaterial with anisotropic mass density and isotropic bulk modulus: (1) alternating layers of homogeneous isotropic materials, (2) perforated plates, and (3) solid inclusions. The primary focus of this
work will be to experimentally demonstrate the anisotropic behavior of a
metamaterial composed of a multi-solid-inclusion unit cell in water. The
two-material design of the unit cell consists of one material more dense and
one less dense than the background fluid, which results in an effective mass
density tensor for the unit cell in which one component is more dense and one
component is less dense than the background fluid. Successful demonstration of an anisotropic metamaterial with these effective parameters is an important step in the development of structures based on transformational
acoustics.
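For route (1), alternating fluid layers, the long-wavelength effective parameters follow from classic homogenization: arithmetic-mean density along the stacking direction, harmonic means parallel to the interfaces and for the bulk modulus. This is background to the abstract, not the authors' multi-solid design, and the material values below are invented:

```python
import numpy as np

def layered_effective(rho, bulk, fractions):
    """Effective parameters of alternating fluid layers stacked along z
    (classic long-wavelength homogenization result): density is the
    arithmetic mean normal to the interfaces, the harmonic mean
    parallel to them; bulk modulus is the harmonic mean."""
    f = np.asarray(fractions, float)
    rho = np.asarray(rho, float)
    bulk = np.asarray(bulk, float)
    rho_zz = np.sum(f * rho)           # layers in series
    rho_xx = 1.0 / np.sum(f / rho)     # layers in parallel
    b_eff = 1.0 / np.sum(f / bulk)
    return rho_zz, rho_xx, b_eff

# half dense, half light layers: strongly anisotropic effective density
print(layered_effective([1000.0, 1.2], [2.2e9, 1.4e5], [0.5, 0.5]))
```

The light layers dominate the parallel (harmonic) component, so the two density components differ by orders of magnitude even though the bulk modulus remains scalar.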
3:40–4:00 Break
4:00
2pSAa8. Acoustic wave phenomena in Willis metamaterials. Benjamin
M. Goldsberry and Michael R. Haberman (Appl. Res. Labs., The Univ. of
Texas at Austin, 10000 Burnet Rd., Austin, TX 78758, bgoldsberry@utexas.edu)
The design of acoustic metamaterials requires accurate modeling of the
dynamics present at all relevant time and length scales. Recent work has
shown that the effective dynamic properties of inhomogeneous elastic materials can result in constitutive relations that couple strain to momentum and
velocity to stress, which is often referred to as Willis coupling [Willis,
Wave Motion, 3(1), 1-11, (1981)]. The current work will examine macroscale acoustic propagation for waves on different time scales in a material
by solving the coupled first-order equations of motion with the constitutive
relations that account for Willis coupling. Specifically, second-order
perturbation theory will be used to examine the classic problem of a
high-frequency, low-amplitude “signal” wave superposed on a low-frequency,
high-amplitude “pump” wave. Of particular interest is the slowly changing
momentum bias generated by the pump wave and its implications for the
dynamic control of signal wave propagation. Analysis and discussion will be
restricted to one-dimensional wave motion.
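In one dimension the coupled constitutive relations referred to above take the schematic form below. Sign conventions and the symmetry relation between the two coupling terms vary across the literature, so this is a generic form rather than necessarily the one used in the talk. With volume strain $\varepsilon = \partial\xi/\partial x$, particle velocity $v$, and momentum density $\mu$:

```latex
-p = B\,\varepsilon + S\,v, \qquad \mu = \tilde{S}\,\varepsilon + \rho\,v,
```

so the pressure acquires a velocity-dependent term and the momentum a strain-dependent term; setting $S = \tilde{S} = 0$ recovers the ordinary fluid relations $-p = B\varepsilon$ and $\mu = \rho v$.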
4:20
2pSAa9. Periodic resonance effect for the design of low frequency
acoustic absorbers. Thomas Dupont, Philippe Leclaire (Dr. EA1859, Univ.
Bourgogne Franche-Comté, BP 31 - 49 rue Mlle Bourgeois, Nevers 58027,
France, thomas.dupont@u-bourgogne.fr), Raymond Panneton (GAUS,
Département de Génie Mécanique, Université de Sherbrooke, Sherbrooke,
QC, Canada), and Olga Umnova (Acoust. Res. Ctr., Univ. of Salford,
Salford, United Kingdom)
This presentation examines a perforated resonant material, in which the
principal perforations comprise a network of periodically spaced dead-end
pores. This material can show good sound absorption at low frequencies,
particularly given its relatively small thickness. In a recent study, this kind
of material was modeled by an effective fluid approach which allowed low
frequency approximations. At low frequency, it was shown that the periodic
array of dead-end pores increases the effective compressibility without modifying the effective dynamic density. The resonance frequency of
the material is thereby significantly reduced, as is the frequency of the first
sound absorption peak. Moreover, a bandgap effect occurs at high frequency
for the sound transmission problem. This study suggested a new concept of
micro-structure for designing low-frequency resonant acoustic absorbers. A
transfer matrix approach is now proposed to model and optimize such a concept. Prototypes have been made with 3D printing and tested in an acoustic
tube for sound absorption and sound transmission loss. The resonant periodicity effects have been observed, and the measurements compare well with
the predictions of the transfer matrix model. Finally, an optimization of the
microstructure is proposed.
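The transfer matrix approach proposed above can be sketched for normal-incidence plane waves: multiply 2×2 layer matrices, terminate with a rigid wall, and read off absorption from the surface impedance. A generic TMM skeleton under invented values; the dead-end-pore material would enter through effective wavenumbers and impedances per segment:

```python
import numpy as np

def layer_matrix(k, Z, d):
    """Transfer matrix of a fluid layer: wavenumber k, characteristic
    impedance Z, thickness d (standard plane-wave TMM element)."""
    kd = k * d
    return np.array([[np.cos(kd), 1j * Z * np.sin(kd)],
                     [1j * np.sin(kd) / Z, np.cos(kd)]])

def absorption_rigid_backed(layers, Z0):
    """Normal-incidence absorption coefficient of a layer stack
    terminated by a rigid wall. `layers` is a list of (k, Z, d);
    complex k models losses (here with the exp(-iwt) convention)."""
    T = np.eye(2, dtype=complex)
    for k, Z, d in layers:
        T = T @ layer_matrix(k, Z, d)
    Zs = T[0, 0] / T[1, 0]          # surface impedance with rigid backing
    R = (Zs - Z0) / (Zs + Z0)
    return 1.0 - abs(R) ** 2
```

A lossless layer gives zero absorption (purely reactive surface impedance), while adding a small imaginary part to the wavenumber yields a nonzero absorption coefficient between 0 and 1.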
4:40
2pSAa10. Negative refraction experiments in soft 3D metafluids.
Thomas Brunet (I2M, Université de Bordeaux, 351, cours de la Libération,
Bâtiment A4 - I2M/APY, Talence 33405, France,
thomas.brunet@u-bordeaux.fr), Artem Kovalenko (CRPP, Université de
Bordeaux, Pessac, France), Benoit Tallon (I2M, Université de Bordeaux,
Talence, France), Olivier Mondain-Monval (CRPP, Université de Bordeaux,
Pessac, France), Christophe Aristégui, and Olivier Poncelet (I2M,
Université de Bordeaux, Talence, France)
Physics of negative refraction has been intensively studied since the
2000s. Negative refraction is usually evidenced by a Snell’s law experiment
using a prism-shaped negative-index metamaterial wedge. The first
experimental verification of a negative index of refraction was reported in 2D
resonant structures at microwave frequencies [1]. A few years later, negative
refraction was demonstrated in 2D and 3D resonant optical metamaterials
[2]. In acoustics, the first experimental demonstration of negative refraction
was reported in 3D (non-resonant) phononic crystals at ultrasonic frequencies
[3]. However, 3D acoustic (random) metamaterials should also offer
the possibility to explore this exotic phenomenon, since the first 3D locally
resonant metamaterial with a negative index has recently been demonstrated
[4]. In this talk, we will report on negative refraction experiments performed
in these soft 3D metafluids composed of macro-porous micro-beads
randomly dispersed in a water-based gel matrix. Negative refraction will be
demonstrated by the negative deflection of an ultrasonic beam emerging
from a prism-shaped metafluid in a water-tank experiment. [1] Shelby et al.,
Science 292, 77 (2001). [2] Valentine et al., Nature 455, 376 (2008). [3]
Yang et al., Phys. Rev. Lett. 93, 024301 (2004). [4] Brunet et al., Nat.
Mater. 14, 384 (2015).
5:00
2pSAa11. Scattering from a fluid cylinder with strain-momentum
coupled constitutive relations. Michael B. Muhlestein (Signature Phys.
Branch, ERDC-CRREL, 3201 Duval Rd. #928, Austin, TX 78759,
mimuhle@gmail.com), Benjamin M. Goldsberry, and Michael R.
Haberman (Mech. Eng., Univ. of Texas at Austin, Austin, TX)
Many important applications of acoustics rely on the principle of scattering. For example, biomedical ultrasound and sonar both make use of acoustic field scattering for localization, imaging, and identification of objects. While the theory of acoustic scattering from fluid and elastic materials is well established and has been validated with numerical and physical experiments, no work has been published describing scattering from a more general class of acoustic materials known as Willis materials. Willis materials are characterized by a bulk modulus and mass density as well as a vector that couples the pressure-strain relationship with the momentum density-particle velocity relationship. The coupling vector is the result of microstructural asymmetry. We present a theoretical description of acoustic scattering of a plane wave incident upon a cylinder exhibiting weak Willis coupling, using a perturbation approach. The scattered field depends upon the orientation of the Willis coupling vector and is therefore anisotropic despite the symmetry of the geometry. The analytical model is validated through comparison with a finite-element-based numerical experiment.
Acoustics ’17 Boston
2p MON. PM
non-unique, allowing for macroscopic descriptions which include only traditional properties, or traditional properties plus coupling parameters. Many acoustic metamaterial measurements extract macroscopic properties from reflection and transmission coefficients of finite samples. Unfortunately, this widely used technique returns properties which relate boundary fields, not effective fields as usually assumed. Even though boundary fields may well approximate effective fields at very long wavelengths, the extracted properties are at best Bloch properties of a periodic medium and in general apply only to the exact measurement setup. The aim of this talk is to discuss the issues of non-uniqueness and the measurement of macroscopic properties in light of the importance of physically meaningful properties.
5:20
2pSAa12. Analysis of one-dimensional wave phenomena in Willis materials. Michael B. Muhlestein (Signature Phys. Branch, ERDC-CRREL, 3201 Duval Rd. #928, Austin, TX 78759, mimuhle@gmail.com) and Michael R. Haberman (Mech. Eng., Univ. of Texas at Austin, Austin, TX)
The primary benefit of treating a complicated acoustic system as an acoustic metamaterial (AMM) is that once the effective material properties are determined, well-established mathematical analyses may be used to describe wave propagation within the system. However, many standard analyses are not well understood for the class of materials known as Willis materials. Willis materials are characterized by constitutive relations that couple both the pressure and momentum density to both the particle velocity and the volume strain. This work presents the mathematical analysis of the propagation of a velocity pulse of finite duration within a one-dimensional Willis material in the time domain. In particular, the propagation of the pulse is described in the context of (i) an infinite Willis material, (ii) two half-spaces where one or both display Willis coupling, and (iii) a thin coupled partition in an uncoupled background. Willis coupling is shown to affect the relationship between incident and scattered waves via a convolution rather than the simple multiplication that occurs with uncoupled media.
5:40
2pSAa13. Acoustic valley states and acoustic valley transport in sonic crystals. Jiuyang Lu (Dept. of Phys., South China Univ. of Technol., Guangzhou, Guangdong 510641, China, phjylu@scut.edu.cn)
We report the discovery of acoustic valley states in sonic crystals and the observation of valley transport along domain walls. The concept of valley pseudospin, labeling quantum states at energy extrema in momentum space, is attracting attention for its potential as a new type of information carrier. Inspired by recent valley-related phenomena in electron systems, the acoustic version of valley states in sonic crystals is studied and the vortex nature of such states is revealed. The extraordinary chirality of valley vortex states may provide new possibilities for sound manipulation and is particularly appealing in scalar acoustics, given the absence of a spin degree of freedom. We further experimentally observe topological valley transport of sound in sonic crystals. The acoustic valley transport is confined to the domain walls of the sonic crystals and exhibits negligible reflection at interface corners. The acoustic valley transport of sound, strikingly different from that in traditional sound waveguides, may serve as the basis for designing devices with unconventional functions.
MONDAY AFTERNOON, 26 JUNE 2017
ROOM 204, 1:20 P.M. TO 5:40 P.M.
2pSAb
Structural Acoustics and Vibration and ASA Committee on Standards: Novel Treatments in Vibration Damping
Kenneth Cunefare, Cochair
Georgia Tech, Mechanical Engineering, Atlanta, GA 30332-0405
Manuel Collet, Cochair
Dynamic of Complex systems, CNRS LTDS, Ecole Centrale de Lyon, 36 av G. de Collongue, Ecully 69131, France
Invited Papers
1:20
2pSAb1. Vibration damping materials with enhanced loss due to microstructural nonlinearity. Stephanie G. Konarski, Mark F.
Hamilton, and Michael R. Haberman (Dept. of Mech. Eng. and Appl. Res. Labs., The Univ. of Texas at Austin, 10000 Burnet Rd.,
Austin, TX 78758, haberman@arlut.utexas.edu)
One conventional approach to attenuating structure-borne waves or reducing the ringdown time of modes in structural elements is to attach damping layers or patches to vibrating components. Common patch and layer materials are specially formulated polymers or engineered polymeric composites that exhibit elevated viscoelastic loss factors in the frequency range of interest. Recent research has shown that small volume fractions of negative-stiffness inclusions embedded in a lossy material generate effective loss factors that exceed that of the host material. The ability to generate negative-stiffness behavior, however, is often the result of nonlinear inclusion material response. Presented here is a multiscale model of a particulate composite material consisting of a nearly incompressible host material containing small-scale heterogeneities with a nonlinear elastic stress-strain response. We investigate the nonlinear dynamic behavior of the heterogeneous medium for small harmonic perturbations about several pre-strain states to demonstrate the influence of the microscale dynamic response on the macroscopically observable mechanical loss. Of primary interest are the energy dissipation capabilities of the composite, which can be tuned using inclusion pre-strain. Loss for composites with nonlinear inclusions is compared to that of conventional composites containing air voids or steel inclusions. [Work supported by ONR.]
1:40
2pSAb2. Versatile hybrid sandwich composite combining large stiffness and high damping: spatial patterning of the viscoelastic core layer. Marta Gallo, Renaud G. Rinaldi, Laurent Chazeau, Jean-Marc Chenal (MATEIS CNRS UMR 5510 (Materials: Eng. and Sciences), Universite de Lyon, INSA-Lyon, Universite Lyon 1, 7 Ave. Jean Capelle, Villeurbanne 69621, France, marta.gallo@insa-lyon.fr), François Ganachaud (IMP, CNRS UMR 5223 (Polymer Mater. Eng. Lab.), Universite de Lyon, INSA-Lyon, Universite Lyon 1, Villeurbanne, France), Quentin Leclerc, Kerem Ege, and Nicolas Totaro (LVA (Acoust. and Vibrations Lab.), Universite de Lyon, INSA-Lyon, Villeurbanne, France)
With the aim of decreasing CO2 emissions, car manufacturers' efforts are focused, among other things, on reducing the weight of vehicles while preserving overall vibrational comfort. To do so, new lightweight materials combining high stiffness and high (passive) damping are sought. For panels loaded essentially in bending, sandwich composites made of two stiff external metallic layers and an inner polymeric (i.e., absorbing) core are widely used. In the present work, the performance of such sandwich structures is enhanced by optimizing their damping behavior according to their use. More precisely, spatial patterning of the viscoelastic properties of the silicone elastomer layer is obtained through selective UV irradiation, based on a recently published technique [1]. Initially developed to modulate the elastic property gradient in Liquid Silicone Rubber (LSR) membranes, the procedure is now generalized to control the viscoelastic behavior of Room Temperature Vulcanization (RTV) silicone. Since the Young's modulus and damping factor of the polymeric material are set by the UV irradiation dose, the resulting vibration response of the sandwich composite, made of aluminum skins and an RTV silicone core, can be tuned accordingly. [1] Stricher et al. (2016). "Light-Induced Bulk Architecturation of PDMS Membranes," Macromol. Mater. Eng. 301(10), 1151-1157.
2:00
2pSAb3. Extreme impact mitigation by critical point constraints on elastic metamaterials. Ryan L. Harne, Justin Bishop, Daniel C.
Urbanek, Quanqi Dai, and Yu Song (Mech. and Aerosp. Eng., The Ohio State Univ., 201 W 19th Ave., E540 Scott Lab, Columbus, OH
43210, harne.3@osu.edu)
A critical transition occurs between the pre- and post-buckled configurations of lightly damped structures, where the relative proportions of dissipative and elastic forces may reverse, theoretically giving rise to large effective damping properties. This paper describes computational and experimental studies that investigate this fundamental theory and its principles. It is found that the impact mitigation capabilities of elastic metamaterials are significantly enhanced by critical point constraints. The results of one-dimensional drop experiments reveal that constrained metamaterials reduce impact force and suppress rebound effects more dramatically than conventional damping methods, while constraints nearer to critical points magnify the advantages. When embedded into distributed structures, as in conventional applications, constrained metamaterials provide impact mitigation capabilities superior to those of solid dampers applied at the same locations. Altogether, the results show that critical point constraints on elastic metamaterials provide new directions for the control and suppression of impact energy, with effectiveness exceeding that achieved by solid dampers having the same bulk geometry.
2:20
2pSAb4. Evidence of multimodal vibration damping using a single piezoelectric patch with a negative capacitance shunt. Vicente Romero-García, Charlie Bricault, Charles Pezerat (LAUM, UMR CNRS 6613, Av. Messiaen, Le Mans 72085, France, virogar1@gmail.com), Manuel Collet (CNRS, LTDS UMR 5513, Ecully, France), Adrien Pyskir, Patrick Perrard (LTDS UMR 5513, Ecully, France), and Gaël Matten (Dept. of Appl. Mech., Institut FEMTO-ST, Besançon, France)
In this work, a piezoelectric patch shunted with a negative capacitance circuit has been used to simultaneously damp several modes of a square aluminum plate at low frequencies. The active nature of such an electromechanical system leads to regions of instability, and the highest vibration attenuation performance appears in the softening region. Once the geometry is fixed, the system has two degrees of freedom, dominated by the electrical parameters of the circuit: the resistance and the negative capacitance. We tune both the value of the negative capacitance, in order to place the structure close to the instability in the softening region, and the resistance of the circuit, so as to control the losses of the system. This work shows an optimal design to simultaneously damp several modes with nonzero electromechanical coupling factors using a single shunted patch at low frequencies. The problem is solved numerically and tested experimentally with good agreement. The results show the possibility of controlling the modal response of the system, opening prospects for improving acoustic comfort with piezoelectric shunt damping circuits that add little mass and offer high tunability simply by adjusting the properties of the shunt.
2:40
2pSAb5. Design and assessment of a distributed active acoustic liner concept for application to aircraft engine noise reduction.
Herve Lissek, Romain Boulandet (EPFL, EPFL STI IEL LEMA, Station 11, Lausanne 1015, Switzerland, herve.lissek@epfl.ch), Sami
Karkar (Ecole Centrale de Lyon, Ecully, France), Ga€el Matten (FEMTO-ST, Besançon, France), Manuel Collet (Ecole Centrale de
Lyon, Ecully, France), and Morvan Ouisse (FEMTO-ST, Besançon, France)
Acoustic liners are a widespread solution for reducing turbofan noise in aircraft nacelles, owing to their light weight and the relatively small dimensions required for integration within nacelles. Although conventional liners can be designed to target multiple tonal frequencies, their passive principle prevents adaptation to varying engine speeds and therefore lowers their performance during flight, especially in the take-off and landing phases. This paper presents a novel concept of active acoustic liner based on an engineered arrangement of microphones and loudspeakers, aiming at absorbing noise over a broad frequency bandwidth. Integration issues have been taken into account to fit the targeted application to aircraft engines, yielding thickness minimization, with a view to challenging existing passive, narrowband liners. The sound absorption performance of the proposed active lining concept is evaluated, using commercially available finite-element software, in a configuration mimicking an aeronautical insertion-loss measurement setup, and is then tested in the corresponding experimental facility in the presence of flow. The results show that such a concept readily surpasses conventional passive liners, both in insertion loss and in frequency bandwidth.
Contributed Papers
3:00
2pSAb6. Damping treatment of optical breadboards. Vyacheslav Ryaboy (Light and Motion Div., MKS, 1791 Deere Ave., Irvine, CA 92606, vyacheslav.ryaboy@newport.com)
Vibration damping of honeycomb optical tables has been the subject of significant research and development efforts over several decades. These efforts have resulted in tuned and broadband damping treatments, as well as active damping systems, that have found extensive practical application. Smaller optical platforms (breadboards) are often used for vibration-sensitive applications due to a dearth of laboratory space and the increased sensitivity of small optical instruments. Relatively massive damping devices that work for large tables are not applicable to smaller (usually 5 to 10 cm thick) breadboards. This paper describes a new method for damping resonance vibration of optical breadboards with honeycomb cores. The method exploits the typical modal geometry of orthotropic laminated plates by integrating a layer of highly damped material between two surfaces, making it work in shear. The physical principle, optimization procedure, practical designs, implementation, and test results of damped breadboards are presented.
3:20–3:40 Break
3:40
2pSAb7. Efficiency of an absorber involving a nonlinear membrane driven by an electro-acoustic device. Pierre-Yvon Bryk, Sergio Bellizzi, and Renaud Côte (LMA, Aix-Marseille Univ, CNRS, Centrale Marseille, LMA, 4 impasse Nikola Tesla, Marseille 13453, France, bryk@lma.cnrs-mrs.fr)
Great attention has recently been paid to employing nonlinear energy sinks (NES) as essentially nonlinear acoustic absorbers rather than Helmholtz absorbers. NES are based on the principle of "Targeted Energy Transfer" (TET), which allows energy to be transferred from a primary acoustic field to the NES. In this paper, a hybrid electro-acoustic NES (hNES) is described. It is composed of a latex membrane with one face (exterior) coupled to the acoustic field to be reduced and the other face enclosed. The enclosure includes a feedback loop, composed of a microphone and a loudspeaker, that controls the pressure difference across the membrane. Due to the hardening behavior of the membrane under nonlinear deformation, the hNES can synchronize its resonance with any resonance of the acoustic field above the linear resonance of the hNES. This brings out the TET toward the hNES and thus reduces noise. The feedback loop tunes the linear resonance frequency of the hNES at low level, which is a key factor for the triggering threshold of the TET. An experimental study of the hNES will be presented, including TET regime characterization and an analysis of the influence of the feedback gain.
4:00
2pSAb8. Vibration damping using a spiral acoustic black hole. Wonju Jeon and Jae Yeon Lee (KAIST, 291 Daehak-ro, Yuseong-gu, Daejeon 34141, South Korea, wonju.jeon@kaist.ac.kr)
This study starts with a simple question: can we efficiently reduce the vibration of plates or beams using a lightweight structure that occupies a small space? As an efficient technique to damp vibration, we adopted the concept of an Acoustic Black Hole (ABH) with a simple modification of its geometry. The original shape of an ABH is a straight wedge-type profile with power-law thickness, and the reduction of vibration in beams or plates increases as the length of the ABH increases. In real-world applications, however, there is an upper bound on the length of an ABH due to space limitations. Therefore, in this study, the authors propose a curvilinear ABH based on the simple mathematical geometry of an Archimedean spiral, which allows a uniform gap distance between adjacent baselines of the spiral. In numerical simulations, the damping performance increases as the arc length of the Archimedean spiral increases, regardless of the curvature of the spiral, in the mid- and high-frequency ranges. Adding damping material to an ABH can also strongly enhance the damping performance without significantly increasing the weight. In addition, the radiated sound power of a spiral ABH is similar to that of a standard ABH.
4:20
2pSAb9. Vibroacoustics behavior of self-similarly loaded sandwich structures. Jeremie Derre and Frank Simon (DMAE, ONERA, BP 74025, 2 Ave. Edouard Belin, Toulouse CEDEX 4 31055, France, jeremie.derre@onera.fr)
The reduction of structural vibrations and acoustic radiation is increasingly challenging due to the use of lightweight materials driven by mass reduction, particularly in the aerospace industry. Trim panels are well known for high acoustic transmission and radiation. This study combines semi-analytical modeling, numerical simulations, and experiments on the vibroacoustic behavior of sandwich constructions locally overloaded by a pre-fractal mass distribution. The originality of the research lies in the overload distribution (a self-similar pattern inspired by the Cantor set), which is slotted directly within the honeycomb core so that the load-bearing capability of the structure is not altered. As the mass effect spatially localizes the vibrations, the pre-fractal pattern focuses this localization on specific frequency band gaps, which reduces the total vibrational energy and thus the acoustic radiation. Before extension to composite plates, simulations of a homogenized beam model have been computed to form a numerical modal database. Experiments have been conducted to compare with and tune the mechanical model parameters. Satisfying agreement has been obtained between simulations and experiments. Finally, acoustic radiation simulations of self-similarly loaded structures have been conducted, and radiation coefficients are expected to be weaker than for classical bending beams.
4:40
2pSAb10. Effects of error in a subordinate oscillator array. John A.
Sterling, Joseph F. Vignola (Mech. Eng., Catholic Univ. of America, 620
Michigan Ave., Washington, DC 20064, jsterling@gmail.com), Teresa J.
Ryan, William Miller (Dept. of Eng., East Carolina Univ., Greenville, NC), and
Aldo A. Glean (Mech. Eng., Catholic Univ. of America, Washington, DC)
Existing research has shown that the response of a single degree of freedom resonant system can be modified by the attachment of sets of substantially smaller resonators. Such arrays of attachments, known as subordinate
oscillator arrays, can increase the apparent damping of the primary structure, and the property distributions can be selected such that the collective
effects result in a response of the primary resonator that is similar to an electrical band-rejection filter. Other prior work with this system has indicated
high sensitivity to disorder in the individual attachment properties such that
even 0.1% variation is likely to cause undesirable effects in the overall system response. Such levels of variation well below 1% are easily attributable
to typical manufacturing tolerances, environmental influences, and degradation factors. This work presents experimental results of a set of prototype
subordinate oscillator arrays produced with high precision additive manufacturing techniques so as to prescribe different levels of variation.
5:00
2pSAb11. Design and closed-loop control of an electroactive polymer-based tunable Helmholtz resonator. Ahmed Abbad (Appl. Mech., FEMTO-ST Institut / GAUS, 2500 Boulevard de l'Universite, departement de Genie, Sherbrooke, QC J1K2R1, Canada, ahmed.abbad@usherbrooke.ca), Kanty Rabenorosoa (MN2S, FEMTO-ST Institut, Besançon, Doubs, France), Morvan Ouisse (Appl. Mech., FEMTO-ST Institut / GAUS, Besançon, France), and Noureddine Atalla (Appl. Mech., Sherbrooke Univ., Sherbrooke, QC, Canada)
A Helmholtz resonator is a passive acoustic resonator used to control a single frequency determined by the cavity volume and the resonator neck size. The aim of the proposed study is to present a new concept and strategy for a tunable Helmholtz resonator in order to enhance acoustic performance at lower frequencies (<500 Hz). The proposed concept consists in replacing the rigid front plate of the resonator with an electroactive polymer (EAP) membrane. When an electric field is applied, the mechanical properties of the EAP membrane change, which induces a resonance frequency shift. A closed-loop control algorithm is developed to allow real-time adaptability. Experimental measurements are performed on the developed prototype to determine the potential of this concept in terms of acoustic absorption and transmission loss at low frequencies.
Invited Paper
5:20
2pSAb12. Resonance of damping. Yuri Bobrovnitskii (Theor. and Appl. Acoust., Blagonravov Mech. Eng. Res. Inst., 4, Griboedov
Str., Moscow 101990, Russian Federation, yuri@imash.ac.ru)
Considered is a linear forced vibrating structure (the primary structure) to which another linear passive structure (the absorber) is attached at a number of points. It is shown analytically that the vibration power flow from the primary structure to the absorber reaches its absolute maximum if two conditions are met simultaneously. The first condition is well known: there must be a frequency resonance, when the forcing frequency coincides with one of the eigenfrequencies of the combined primary structure/absorber system. The second condition is also of a resonant type: in the resonance vibration mode, the amount of damping in the absorber must equal the amount of damping in the primary structure. This can be called the resonance of damping. A vibration or sound absorber that satisfies both resonance conditions can be called the best, or perfect, absorber. The theoretical result has been verified in a laboratory experiment with a simple primary structure and a dynamic vibration absorber, as well as in an impedance tube with a resonant sound absorber. Relation to results known from the literature and possible applications using metamaterials are discussed.
MONDAY AFTERNOON, 26 JUNE 2017
ROOM 304, 1:15 P.M. TO 5:20 P.M.
2pSC
Speech Communication: New Trends in Imaging for Speech Production
Jennell Vick, Cochair
Psychological sciences, Case Western Reserve University, 11635 Euclid Ave., Cleveland, OH 44106
Maureen Stone, Cochair
University of Maryland Dental School, 650 W. Baltimore St., Rm. 8207, Baltimore, MD 21201
Chair’s Introduction—1:15
Invited Papers
1:20
2pSC1. Integrating optical coherence tomography with laryngeal videostroboscopy. Daryush Mehta (Ctr. for Laryngeal Surgery
and Voice Rehabilitation, Massachusetts General Hospital, One Bowdoin Square, 11th Fl., Boston, MA 02114, daryush.mehta@alum.
mit.edu), Gopi Maguluri, Jesung Park (Physical Sci., Inc., Andover, MA), James B. Kobler (Ctr. for Laryngeal Surgery and Voice
Rehabilitation, Massachusetts General Hospital, Boston, MA), Ernest Chang, and Nicusor Iftimia (Physical Sci., Inc., Andover, MA)
During clinical voice assessment, laryngologists and speech-language pathologists rely heavily on laryngeal endoscopy with videostroboscopy to evaluate pathology and dysfunction of the vocal folds. The cost effectiveness, ease of use, and synchronized audio and
visual feedback provided by videostroboscopic assessment serve to maintain its predominant clinical role in laryngeal imaging.
However, significant drawbacks include only two-dimensional spatial imaging and the lack of subsurface morphological information. A novel endoscope will be presented that integrates optical coherence tomography, spatially and temporally co-registered with laryngeal videoendoscopy through a common-path probe. Optical coherence tomography is a non-contact, micron-resolution imaging technology, acting as a visual ultrasound, that employs a scanning laser to measure reflectance properties at air-tissue and tissue-tissue boundaries. Results obtained from excised larynx experiments demonstrate enhanced visualization of three-dimensional vocal fold tissue kinematics and of subsurface morphological changes during phonation. Real-time, calibrated three-dimensional imaging of the mucosal wave and of the subsurface layered microstructure of vocal fold tissue is expected to benefit in-office evaluation of benign and malignant tissue lesions. Future work calls for in vivo evaluation of the technology in patients before and after surgical management of these types of lesions.
1:40
2pSC2. New trends in visualizing speech production. Jamie Perry (East Carolina Univ., College of Allied Health Sci., Greenville, NC
27858, perryja@ecu.edu)
Most dynamic magnetic resonance imaging of the levator veli palatini (levator) muscle has been limited to studies using sustained
phonation (Yamawaki et al. 1996; Ettema et al. 2002; Tian et al., 2010a, 2010b; Kollara and Perry, 2013). Dynamic MRI sequences at
faster imaging speeds using word-level productions have been described along the midsagittal image plane (Sutton et al., 2009; Bae et
al., 2011; Scott et al., 2013) and oblique coronal image plane (Perry et al., 2013b, 2013c; Scott et al., 2013). The purpose of this presentation is to describe methods for analyzing the velopharyngeal mechanism and velar muscles during rest and speech production using
MRI. This presentation will demonstrate a potentially useful technique in dynamic MRI that does not rely on cyclic repetitions or sustained phonation and can provide dynamic information related to muscle function. Innovative techniques in imaging and interpreting
dynamic MRI of velopharyngeal function during speech will be described. It is expected that these developments will provide insights that will improve clinical methods for resonance assessment among children born with cleft lip and palate.
2:00
2pSC3. Three dimensional MRI analyses of tongue muscle behavior. Maureen Stone (Univ. of Maryland Dental School, 650 W. Baltimore St., Rm. 8207, Baltimore, MD 21201, mstone@umaryland.edu), Jiachen Zhuo, Nahla ElSaid (Radiology, Univ. of Maryland Med. School, Baltimore, MD), Jonghye Woo, Fangxu Xing (Massachusetts General Hospital, Harvard, Boston, MA), and Jerry Prince (Elec. and Comput. Eng., Johns Hopkins Univ., Baltimore, MD)
3D and 4D MRI provide unique, high-dimensional data for use in clinical applications and scientific models. Our group is currently using diffusion tensor MRI (DTI) and tagged-cine MRI (tMRI) to explore the fiber direction of muscles in the tongue and the shortening patterns of those fibers during speech. tMRI uses "magnetic tags" to mark and track tissue points, so that when the tissue deforms, the tags reflect these deformations. Soft-tissue motion patterns, such as those within the tongue during speech, provide a link between muscle activation and tongue surface shape. Tissue points also demarcate the endpoints and internal points of muscles, and can identify tongue muscle position and shortening during speech. Tracking tongue muscle motion is unique to tMRI because the muscles' interdigitated fibers make them fairly opaque to EMG. Disambiguating muscles and identifying their shortening patterns is a first step toward relating muscle action to tongue deformation. The second tool, DTI, identifies the location and orientation of "muscle fibers" within a muscle volume. We will use these tools to track the shortening of several tongue muscles during speech.
2:20
2pSC4. Multimodal investigation of speech production featuring real-time three-dimensional ultrasound. Steven M. Lulich
(Speech and Hearing Sci., Indiana Univ., 4789 N White River Dr., Bloomington, IN 47404, slulich@indiana.edu)
Real-time three-dimensional ultrasound offers new opportunities to investigate the spatial and temporal complexities of speech production, but ultrasound by itself continues to be limited in significant ways because (typically) only one surface of the vocal tract can be
imaged: the tongue. Nevertheless, many of these limitations can be overcome by appropriately time-aligning and spatially registering
ultrasound volumetric images with additional data streams, such as lateral webcam video, acoustic recordings, and digitized palate
impressions. One method by which this may be accomplished makes use of a new open-source toolbox for MATLAB, called “WASL,”
which was originally designed specifically for real-time three-dimensional ultrasound, but is now extensible to other biomedical imaging
modalities, including MRI, CT, and two-dimensional ultrasound. Examples from a study of rhotic sound production illustrate how multimodal investigations of speech production featuring real-time three-dimensional ultrasound may be carried out using WASL.
2:40
2pSC5. Recent results in silent speech interfaces. Bruce Denby (Tianjin Univ., Institut Langevin, 1 rue Cuvier, Paris 75005, France,
bruce.denby@gmail.com), Shicheng Chen, Yifeng Zheng (Tianjin Univ., Tianjin, China), Kele Xu (Universite Pierre et Marie Curie,
Paris, France), Yin Yang (Univ. of New Mexico, Albuquerque, NM), Clemence Leboullenger (Universite Pierre et Marie Curie, Paris,
France), and Pierre Roussel (ESPCI, Paris, France)
Silent Speech Interfaces (SSI) are sensor-based communication systems in which a speaker articulates normally but does not activate
their vocal cords, creating a natural user interface that does not disturb the ambient audio environment or compromise private content, and may also be used in noisy environments where a clean audio signal is not available. The SSI field was launched in 2010 with a special issue of Speech Communication, where systems based on ultrasound tongue imaging, electromyography, and electromagnetic articulography were proposed. Today, although ultrasound-based SSIs can achieve Word Error Rate scores rivaling those of acoustic speech
recognition, they have yet to reach the marketplace due to performance stability problems. In recent years, numerous approaches have
been proposed to address this issue, including better acquisition hardware; improved tongue contour tracking; deep learning analysis; and the association of ultrasound data with a real-time 3D model of the tongue. After outlining the history and basics of SSIs, the talk
will present a summary of recent advances aimed at bringing SSIs out of the laboratory and into real world applications.
3646
J. Acoust. Soc. Am., Vol. 141, No. 5, Pt. 2, May 2017
Acoustics ’17 Boston
3646
3:00
2pSC6. Discovering functional units of the human tongue during speech from cine- and tagged-MRI. Jonghye Woo (MGH/
Harvard, 55 Fruit St., White 427, Boston, MA 02114, jwoo@mgh.harvard.edu), Maureen Stone (Univ. of Maryland, Baltimore,
Baltimore, MD), Fangxu Xing (MGH/Harvard, Baltimore, MD), Jordan Green (MGH Inst. of Health Professions, Boston, MA), Arnold
Gomez (Johns Hopkins Univ., Baltimore, MD), Van Wedeen (MGH/Harvard, Boston, MA), Jerry L. Prince (Johns Hopkins Univ.,
Baltimore, MD), and Georges El Fakhri (MGH/Harvard, Boston, MA)
Tongue motion during speech or other lingual behaviors involves synergies of locally deforming regions, or functional units. Determining functional units will provide insight into the mechanisms of normal and pathological muscle coordination, leading to
improvement in understanding of speech production and treatment or rehabilitation procedures. In this work, we present an approach to
determining functional units using cine- and tagged-MRI. Functional units are estimated using a sparse non-negative matrix factorization
(NMF) framework, learning latent building blocks and a weighting map from motion features derived from displacements and strain. Current
models of gesture production suggest that, during speech planning, talkers select temporal frames prior to specifying the appropriate spatially fixed clusters. Our analysis is intended to parallel this process by using NMF to first identify temporal frames in the data based on
change points in motion features, and then to identify the spatially fixed clusters for all the input quantities in each time frame. A spectral
clustering is performed on the weighting map of each time interval to define the coherent sub-motions, revealing temporally varying
tongue synergies. Synthetic and human tongue data including both controls and patients with glossectomy and amyotrophic lateral sclerosis are used to define subject/task-specific functional units of the tongue in localized regions.
3:20–3:40 Break
3:40
2pSC7. Seeing is treating: 3D electromagnetic midsagittal articulography (EMA) visual biofeedback for the remediation of
residual speech errors. Jennell Vick (Psychol. Sci., Case Western Reserve Univ., 11635 Euclid Ave., Cleveland, OH 44106, jennell@
case.edu), Rebecca Mental (Psychol. Sci., Case Western Reserve Univ., Cleveland Heights, OH), Holle Carey (Vulintus, Dallas, TX),
and Gregory S. Lee (Elec. Eng. and Comput. Sci., Case Western Reserve Univ., Cleveland, OH)
Production distortions or errors on the sounds /s/ and /r/ are among the most resistant to remediation with traditional speech therapies, even after years of weekly treatment sessions (e.g., Gibbon et al., 1996; McAuliffe & Cornwell, 2008; McLeod, Roberts, & Sita,
2006). In this study, we report on the results of treating residual speech errors in older children and adults with a new visual biofeedback
treatment called Opti-Speech. Opti-Speech uses streaming positional data from the Wave EMA device to animate real-time motion of a
tongue avatar on a screen. Both the clinician and the client can visualize movements as they occur relative to target shapes, set by the clinician, intended to guide the client to produce distortion-free and accurate speech sounds. Analyses of positional data and associated
kinematics were completed during baseline, treatment, and follow-up phases for four participants, two who produced pre-treatment
residual errors on /s/, and two with residual errors on /r/. Measures included absolute position of the 5 tongue sensors, variability of
position, perceptual quality ratings, and acoustic measures of the consonants. Results indicate that Opti-Speech effectively remediated
the residual speech errors with corresponding evidence of generalization to untreated contexts and maintenance of improvements at
follow-up.
4:00
2pSC8. Computer simulation of the vocal tract in speech production. Ian Stavness, Erik Widing, Francois Roewer-Despres
(Comput. Sci., Univ. of SK, 110 Sci. Pl., Saskatoon, SK S7N5C9, Canada, ian.stavness@usask.ca), and Bryan Gick (Linguist, Univ. of
Br. Columbia, Vancouver, BC, Canada)
Neuro-musculoskeletal function can be well characterized for most human movements, such as walking, through a combination of
experimental measurements, including motion capture, electromyography, and force sensing. Movements of the vocal tract, however,
pose a considerable challenge in terms of holistically measuring and visualizing the neuro-musculoskeletal processes underlying speech
production. This is due to the inaccessibility of the vocal tract, the large number of small, deep muscles involved, the fast speed of movements, and the inherent three-dimensional nature of vocal tract shape during speech production. An array of measurement and imaging
modalities have been tailored to vocal tract measurement, such as MRI, EMA, EPG, and ultrasound, each providing valuable, but incomplete information regarding vocal tract movements. Computer simulation of the vocal tract can play an important role in complementing
these sparse experimental measurements by aiding in the fusion of multiple imaging modalities, helping to describe and visualize the 3D
structures of interest, filling in the gaps in experimental measurements (both spatial and temporal), and extrapolating from the small sample sizes commonly found in speech production studies. We will discuss two types of computer simulations that are emerging as important complementary forms of speech production investigation: forward biomechanical simulation of the vocal tract articulators and
probabilistic simulation of neuro-musculoskeletal parameters.
Contributed Papers
4:20
2pSC9. Maintaining vowel distinctiveness in children with Steinert
Myotonic Dystrophy: An ultrasound study. Marie Bellavance-Courtemanche, Pamela Trudeau-Fisette, Amélie Prémont, Christine
Turgeon (Linguist, Université du Québec à Montréal, CP 8888, succ. Centre-Ville, Montréal, QC H3C 3P8, Canada), and Lucie Ménard
(Linguist, Université du Québec à Montréal, Montréal, QC H3C 3P8, Canada, menard.lucie@uqam.ca)
Speech production entails appropriately timed contractions of many
muscles. Steinert myotonic dystrophy, a neurodegenerative disease that
causes muscle weakness and difficulties in muscle relaxation after muscle
contraction, frequently affects orofacial articulatory dynamics leading to
decreased speech intelligibility. We aimed to investigate the articulatory
and acoustic characteristics of cardinal vowels produced by children with
Steinert disease. We recruited fourteen 6- to 14-year-old French-speaking
children diagnosed with Steinert disease and 14 age-matched typically developing children. They were asked to produce repetitions of the vowels /i a u/ in consonant-vowel (CV) contexts. Synchronized ultrasound, Optotrak motion tracking, and audio recording systems were used to track
lip and jaw displacement as well as tongue shape and position. Duration and
formant values were also extracted. The Euclidean distance between vowels,
in the formant space, was reduced in children with Steinert disease compared to the control children. Different patterns of articulatory contrasts
were observed among the children, with some of them using more tongue
contrasts than lip contrasts. Intelligibility tests conducted with adult listeners
on a subset of the data show that some patterns are related to higher intelligibility scores than others.
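The between-vowel contrast measure used here, Euclidean distance in formant space, is easy to make concrete. The formant values below are rough textbook figures for /i a u/, not data from this study.

```python
# Illustrative computation of between-vowel Euclidean distance in (F1, F2)
# space, the acoustic contrast measure referred to in 2pSC9.
import itertools
import math

formants_hz = {"i": (300, 2300), "a": (750, 1300), "u": (350, 800)}

def vowel_distance(v1, v2):
    """Euclidean distance between two vowels in F1-F2 space (Hz)."""
    (f1a, f2a), (f1b, f2b) = formants_hz[v1], formants_hz[v2]
    return math.hypot(f1a - f1b, f2a - f2b)

for v1, v2 in itertools.combinations(formants_hz, 2):
    print(f"/{v1}/-/{v2}/: {vowel_distance(v1, v2):.0f} Hz")
```

A reduced vowel space, as reported for the children with Steinert disease, corresponds to uniformly smaller pairwise distances of this kind.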
4:40
2pSC10. Predicting velum movement from acoustic features—A
regression approach. Hong Zhang (Linguist, Univ. of Pennsylvania, 619
Williams Hall, 255 S 36th St., Philadelphia, PA 19104, zhangho@sas.
upenn.edu)
Although acoustical methods have been widely used in the nasality literature, a direct link between acoustical measurements and velum movement during speech production is yet to be established. In this study, we propose a model through which the vertical movements of the velum are inferred from an acoustic feature set. An X-ray Microbeam data set collected at the University of Tokyo is used for the modeling. The data recorded the vertical movements of the velum of 11 American English speakers saying both isolated words and sentences. Velum positions are recorded by tracing a metal pellet placed on top of the velum. Forty MFCC (Mel-frequency cepstral coefficient) features are extracted from the accompanying acoustic signal at each time frame. The MFCCs of the ten frames preceding the current frame, together with those of the current frame, constitute the feature vector for predicting the velum movement of the current frame. Elastic-net regression is used to reduce the dimensionality of the feature vector. In general, MFCCs from higher frequencies are penalized during model selection. The selected features are then fitted to a stepwise logistic model. For each individual speaker, the inferred velum movements in the validation set are a good fit to the actual observations, as judged by the high accuracy in identifying locations of peaks and valleys and the small deviance from the response. However, there is large inter-speaker variation in terms of both movement pattern and model performance.
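The feature construction and elastic-net stage described above can be sketched as follows. Synthetic data stands in for the X-ray Microbeam corpus, the regularization values are illustrative, and the final stepwise logistic stage is omitted.

```python
# Sketch of the 2pSC10 feature pipeline: the feature vector for frame t
# stacks the 40 MFCCs of frames t-10 .. t (440 dimensions), and elastic-net
# regression prunes the dimensionality while predicting velum height.
import numpy as np
from sklearn.linear_model import ElasticNet

rng = np.random.default_rng(0)
n_frames, n_mfcc, context = 300, 40, 10

mfcc = rng.standard_normal((n_frames, n_mfcc))  # per-frame MFCCs (synthetic)
velum = rng.standard_normal(n_frames)           # vertical velum position (synthetic)

# Stack frames t-context .. t into one flattened feature vector per frame.
X = np.stack([mfcc[t - context:t + 1].ravel()
              for t in range(context, n_frames)])
y = velum[context:]

model = ElasticNet(alpha=0.1, l1_ratio=0.5).fit(X, y)
kept = np.count_nonzero(model.coef_)  # features surviving the l1 penalty
print(f"feature dim = {X.shape[1]}, features kept = {kept}")
```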
5:00
2pSC11. Measuring regional displacements of tongue parts on
ultrasound during /ɹ/ articulation. Sarah M. Hamilton, Suzanne Boyce
(Commun. Sci. and Disord., Univ. of Cincinnati, 3433 Clifton Ave.,
Cincinnati, OH 45220, hamilsm@mail.uc.edu), Neeraja Mahalingam,
Allison Garbo (Biomedical, Chemical, and Environ. Eng., Univ. of
Cincinnati, Cincinnati, OH), Ashley Walton, Michael A. Riley (Psych.,
Univ. of Cincinnati, Cincinnati, OH), and T. Doug Mast (Biomedical,
Chemical, and Environ. Eng., Univ. of Cincinnati, Cincinnati, OH)
The ability to differentiate movements of the front, back, and root portions of the tongue is important to the development of mature speech coordination. This ability is particularly relevant for sounds with complex tongue shapes, such as the American English rhotic approximant /ɹ/ (“r”), but speakers also have a wider scope of coarticulatory opportunities if able to control tongue parts independently [Zharkova, 2012]. In addition, lack of independence in tongue part movement is associated with speech sound disorders [Gibbon, 1999; Gick et al., 2008]. For this study, relative displacements of tongue blade, dorsum, and root were analyzed using MATLAB-based image processing. Regions of interest (ROIs) were drawn for these three areas on ultrasound images during production of /ɑ/ by 25 adults. Displacements of each region were measured by tracking local brightness maxima in images representing /ɑ/ and /ɹ/ production, resulting in ranges of relative blade, dorsum, and root displacement associated with normal /ɑ/ production. These ranges are compared to data from a sample of ultrasound images of speech from children with persistent speech errors. Preliminary results suggest that children with persistent sound errors have a smaller range of tongue part displacement when compared to typical adults.
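The ROI-based displacement measure described in 2pSC11 can be sketched in a few lines. The frames below are synthetic stand-ins for B-mode ultrasound images, and the single-peak tracker is a simplification of the MATLAB-based processing the authors used.

```python
# Minimal sketch: within a region of interest, track the local brightness
# maximum between two frames and report its displacement in pixels.
import numpy as np

def roi_peak(frame, roi):
    """Return (row, col) of the brightest pixel within roi = (r0, r1, c0, c1)."""
    r0, r1, c0, c1 = roi
    sub = frame[r0:r1, c0:c1]
    r, c = np.unravel_index(np.argmax(sub), sub.shape)
    return int(r + r0), int(c + c0)

def roi_displacement(frame_a, frame_b, roi):
    """Displacement (drow, dcol) of the ROI brightness maximum between frames."""
    (ra, ca), (rb, cb) = roi_peak(frame_a, roi), roi_peak(frame_b, roi)
    return rb - ra, cb - ca

# Two toy frames with a bright spot that moves 3 rows down, 2 cols right.
a = np.zeros((64, 64)); a[20, 30] = 1.0
b = np.zeros((64, 64)); b[23, 32] = 1.0
print(roi_displacement(a, b, (10, 40, 20, 50)))  # → (3, 2)
```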
MONDAY AFTERNOON, 26 JUNE 2017
ROOM 302, 1:20 P.M. TO 5:40 P.M.
2pSP
Signal Processing in Acoustics, Engineering Acoustics, and Architectural Acoustics:
Signal Processing for Directional Sensors II
Kainam T. Wong, Chair
Dept. of Electronic & Information Engineering, Hong Kong Polytechnic University, DE 605, Hung Hom KLN, Hong Kong
Invited Papers
1:20
2pSP1. Circular arrays of directive sensors for improved direction-of-arrival estimation. Houcem Gazzah (Elec. and Comput.
Eng., Univ. of Sharjah, M9-223, University City, Sharjah, Sharjah 27272, United Arab Emirates, hgazzah@sharjah.ac.ae)
In direction finding, it is highly desirable to design antenna arrays with constant accuracy across the whole field-of-view. This so-called array isotropy is achieved, most notably, by means of the uniform circular array. On the other hand, sensors with directive response improve performance, but unequally, even when deployed in circular configurations. It follows that arbitrarily oriented arrays can suffer severe loss in performance. We show (i) what condition is needed in order to preserve array isotropy while using directive sensors, and (ii) what can be done to compensate for the loss of isotropy when this condition is not met. For instance, we show that there is a critical array size below which the Cramér-Rao bound, our performance measure, is not direction-independent. In some cases, the direction-dependent expression can be simplified to the point where a closed-form array-adaptation algorithm is obtained. It calculates the optimal array orientation as a function of the available statistical information about the source. Depending on how much is known about
the source, the amount of improvement may be very significant compared to cases where the array is arbitrarily oriented, up to 50% for
arrays of cardioid sensors.
1:40
2pSP2. A biomimetic coupled circuit based microphone array inspired by the fly Ormia ochracea. Xiangyuan Xu, Ming Bao, and
Han Jia (Key Lab. of Noise and Vib. Res., Inst. of Acoust., Chinese Acad. of Sci., No. 21 Beisihuan West Rd., Haidian District,
Beijing 100190, China, xiangyxu699@gmail.com)
Miniaturization of microphone arrays poses challenges for accurate localization performance due to the shrinking aperture. The fly Ormia ochracea is able to determine the direction of a sound source with an astonishing degree of precision, even though its ears are extremely small. Inspired by this, an equivalent analog circuit is designed to mimic the coupled-ear system of the fly for sound source localization. This coupled circuit receives two signals with a tiny phase difference from a closely spaced two-microphone array and produces two signals with a pronounced intensity difference. The response sensitivity can be adjusted through the coupled-circuit parameters.
The directional characteristics of the coupled circuit have been demonstrated in the experiment. The designed miniature microphone
array can localize the sound source with low computational burden by using the intensity difference. This system has significant advantages in various applications where the array size is limited.
Contributed Paper
2:00
2pSP3. LWD-DATC: Logging-While-Drilling Dipole Shear Anisotropy
estimation from Two-Component waveform rotation. Pu Wang
(Mitsubishi Elec. Res. Lab., Cambridge, MA), Sandip Bose, Bikash K.
Sinha, Ting Lei (Mathematics Modeling, Schlumberger-Doll Res.,
Cambridge, MA), and Matthew Blyth (Schlumberger, 300 Schlumberger
Dr., Sugar Land, TX 77478, MBlyth@exchange.slb.com)
Sonic dipole shear anisotropy orientation in subsurface formations is
key information for a complete characterization of rock mechanical models
for well planning. In wireline logging, it is estimated with the well-known
Alford rotation on four-component inline and crossline waveforms from two
orthogonal dipole firings. In contrast, shear orientation estimation is more
challenging in Logging-While-Drilling (LWD) operations for the following reasons: no exactly orthogonal cross-dipole firings exist in LWD tools, due to tool rotation while drilling; the formation flexural mode couples with the strong collar flexural mode propagating through the stiff drill collar; tool eccentering is unavoidable; and strong drilling noise arises from vibration, shock, and the turbulent mud flow around the tool. To overcome these challenges, this paper describes a new technique to estimate the LWD dipole shear orientation with two-component waveforms obtained from a single dipole firing. It is accomplished by maximizing the projected energy of the two-component waveforms onto a subspace defined by two eigenfunctions (Bessel functions of the first and second kinds) accounting for the propagation of the two coupled flexural modes
over multiple frequency points. By resorting to the subspace estimation, the
new technique has been successfully validated on synthetic data and tested
on field data sets.
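The core idea of 2pSP3, picking the rotation that maximizes energy projected onto a signal subspace, can be illustrated generically. The orthonormal basis below is an arbitrary pair, not the Bessel-function eigenfunctions of the actual method, and the single-frequency toy setup is an assumption.

```python
# Generic sketch: rotate two-component waveforms by a trial angle and keep
# the angle that maximizes energy projected onto a fixed subspace.
import numpy as np

def best_rotation(xy, basis, n_angles=360):
    """xy: (2, N) inline/crossline waveforms; basis: (N, k) orthonormal columns.
    Returns the angle (radians) maximizing the energy of the rotated first
    component projected onto the subspace."""
    angles = np.linspace(0.0, np.pi, n_angles, endpoint=False)
    energies = []
    for th in angles:
        rotated = np.cos(th) * xy[0] + np.sin(th) * xy[1]  # rotate into one axis
        coeffs = basis.T @ rotated                          # subspace coordinates
        energies.append(np.dot(coeffs, coeffs))             # projected energy
    return angles[int(np.argmax(energies))]

# Toy check: a signal lying in the subspace, split between components at 30°.
N = 256
t = np.linspace(0, 1, N)
basis, _ = np.linalg.qr(np.column_stack([np.sin(7 * t), np.cos(7 * t)]))
s = basis[:, 0]
true = np.deg2rad(30)
xy = np.vstack([np.cos(true) * s, np.sin(true) * s])
print(np.rad2deg(best_rotation(xy, basis)))  # ≈ 30
```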
Invited Paper
2:20
2pSP4. Far field directivity of an omnidirectional parametric loudspeaker consisting of piezoelectric ultrasonic transducers set
on a sphere. Oriol Guasch, Patrícia Sánchez-Martín, and Marc Arnela (GTM-Grup de recerca en Tecnologies Mèdia, Dept. of Eng., La
Salle, Universitat Ramon Llull, C/ Quatre Camins 30, Barcelona, Catalonia 08022, Spain, oguasch@salleurl.edu)
In this work, it is proposed to extend the convolution model for the far-field directivity of a parametric loudspeaker array (PLA) in
[C. Shi and Y. Kajikawa, J. Acoust. Soc. Am. 137 (2) (2015)] to predict the far-field sound field generated by an omnidirectional parametric loudspeaker (OPL). The original two-dimensional model, intended for flat PLAs, relies on convolving the directivity of the primary waves with the Westervelt one, rather than on performing their product. This allows one to deal with piezoelectric ultrasonic
transducers (PZTs) having large beam widths, typical of PLA applications. The model is herein enhanced to three dimensions, to predict
the far-field pressure level of the difference wave at any observation point in space, for a PZT located and pointing anywhere. This
makes the model amenable to compute the far-field directivity of PLAs on curved surfaces. In particular, it is shown how it can be
applied to compute the far-field pressure of an OPL consisting of a spherical surface with hundreds of PZTs placed on it.
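The distinction the model above builds on, convolving the primary-beam directivity with the Westervelt directivity over angle rather than multiplying them pointwise, can be illustrated with stand-in beam shapes. The Gaussian directivities below are illustrative assumptions, not the paper's model.

```python
# Illustrative comparison: convolution vs. pointwise product of a wide
# primary-beam directivity and a narrow Westervelt directivity (2pSP4).
import numpy as np

theta = np.linspace(-90, 90, 361)            # observation angle, degrees
primary = np.exp(-(theta / 30.0) ** 2)       # wide primary beam (stand-in PZT)
westervelt = np.exp(-(theta / 5.0) ** 2)     # narrow Westervelt directivity

product = primary * westervelt
convolution = np.convolve(primary, westervelt, mode="same")
convolution /= convolution.max()             # normalize for comparison

def width_deg(d):
    """Full width (degrees) of the region above the -3 dB level."""
    above = theta[d >= 10 ** (-3 / 20)]
    return above[-1] - above[0]

# The convolution model predicts a wider difference-frequency beam than the
# pointwise product when the primary beam is wide.
print(width_deg(product), width_deg(convolution))
```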
Contributed Papers
2:40
2pSP5. Passive detection of low frequency sources using vector sensor channel cross correlation. Thomas J. Deal (Naval Undersea
Warfare Ctr., 1176 Howell St., Newport, RI 02841, thomas.deal@navy.mil)
Acoustic vector sensors consisting of an omnidirectional hydrophone and three orthogonal velocity sensors offer new opportunities for passive detection of acoustic sources at low frequencies. In the frequency range from 10 to 300 Hz, the predominant noise sources in the ocean are surface wave action and distant shipping. Omnidirectional hydrophones are affected by both noise types, whereas vertically oriented velocity sensors are more affected by surface wave action and horizontally oriented velocity sensors are more affected by distant shipping. Under these ambient conditions, when no source is present, the vector sensor channels have low cross-correlation levels. When a source is present, propagation conditions cause an increase in correlation between all vector channels. This increase is measurable in the magnitude and phase of the correlation coefficients between each channel pair, which forms the basis of an algorithm for detecting acoustic sources. This paper presents the response of an acoustic vector sensor to both types of ambient noise and demonstrates the change in correlation when a source is present. These noise and signal responses determine a detection threshold for a single-frequency source for desired probabilities of detection and false alarm.
3:00–3:20 Break
3:20
2pSP6. Directive and focused acoustic wave radiation by tessellated transducers with folded curvatures. Ryan L. Harne, Danielle
T. Lynd, Chengzhe Zou, and Joseph Crump (Mech. and Aerosp. Eng., The Ohio State Univ., 201 W 19th Ave., E540 Scott Lab,
Columbus, OH 43210, harne.3@osu.edu)
Guiding radiated acoustic energy from transducer arrays to arbitrary points in space requires fine control over the contribution of sound provided to the point from each transducer constituent. Recent research has revealed advantages of mechanically reconfiguring array constituents along the folding patterns of an origami-inspired tessellation, in comparison to digitally processing signals sent to each element in a fixed configuration. To date, this concept of acoustic beamfolding has demonstrated that far-field wave radiation to a point may be adapted by orders of magnitude according to the folding of a Miura-ori tessellated array when the array constituents are driven by the same signal. This research investigates a new level of adaptive acoustic energy delivery from foldable arrays through study of tessellated transducers that adopt folded curvatures, thus introducing opportunity for near-field energy focusing alongside far-field directionality. The outcomes of these computational and experimental efforts plainly reveal that foldable, tessellated transducers that curve upon folding empower straightforward means for the fine, real-time control needed to beam and focus sound to points in space. Discussions are provided on the potentials and limitations of the current tessellations considered, and means for future studies to build upon the new understanding.
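The channel cross-correlation detection statistic described in 2pSP5 can be sketched numerically. The channel gains, source level, and thresholds below are illustrative, not values from the paper.

```python
# Sketch of the 2pSP5 idea: with noise only, the four vector-sensor channels
# (pressure + three velocities) are weakly correlated; a source raises the
# pairwise correlations, giving a detection statistic.
import itertools
import numpy as np

rng = np.random.default_rng(1)

def max_pair_correlation(channels):
    """Largest |correlation coefficient| over all channel pairs."""
    c = np.corrcoef(channels)
    return max(abs(c[i, j]) for i, j in
               itertools.combinations(range(len(channels)), 2))

N = 4096
noise = rng.standard_normal((4, N))                  # independent channel noise
tone = np.sin(2 * np.pi * 50 * np.arange(N) / 1000)  # 50 Hz source waveform
steering = np.array([1.0, 0.6, 0.3, 0.1])            # illustrative channel gains
signal = noise + 3.0 * np.outer(steering, tone)

print(max_pair_correlation(noise))    # noise only: near zero
print(max_pair_correlation(signal))   # source present: clearly elevated
```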
3:40
2pSP7. Cramér-Rao bound for direction finding at a tri-axial velocity-sensor of an acoustic event having an AR(1) temporal auto-correlation.
Dominic M. Kitavi, Kainam T. Wong (Dept. of Electron. and Information
Eng., Hong Kong Polytechnic Univ., Hong Kong Polytechnic University,
Yuk Choi Rd., Kowloon, HungHom 0852, Hong Kong, dominic.kitavi@
connect.polyu.hk), Lina YEH (Dept. of Mathematics, Soochow Univ.,
Taipei, Taiwan), and Tsair-Chuan Lin (Dept. of Statistics, National Taipei
Univ., New Taipei City, Taiwan)
Various acoustical events are well described by an order-1 autoregressive temporal autocorrelation model. This investigation derives the Cramér-Rao bound (CRB) for direction finding of such an AR(1) acoustical event when the data are collected by a tri-axial velocity-sensor.
Invited Paper
4:00
2pSP8. An array of two biaxial velocity-sensors of non-identical orientation—Their “spatial matched filter” beam-pattern’s
pointing error. Chibuzo J. Nnonyelu (Electron. and Information Eng., The Hong Kong Polytechnic Univ., BC606 (Table 3), Hong
Kong Polytechnic University, 11 Yuk Choi Rd., Hung Hom, Hung Hom, Kowloon 999903, Hong Kong, joseph.nnonyelu@connect.
polyu.hk), Charles H. Lee (Dept. of Mathematics, California State Univ., Fullerton, Fullerton, CA), and Kainam T. Wong (Electron. and
Information Eng., The Hong Kong Polytechnic Univ., Hong Kong, Kowloon, Hong Kong)
A biaxial velocity-sensor (a.k.a. a v-v probe) measures two Cartesian components of an incident acoustical wavefield’s particle velocity vector. Such biaxial velocity-sensors are sometimes used as elements in an array. If these elements are not identically oriented,
what would happen to the array’s “spatial matched filter” beam-pattern? This paper investigates such a beam-pattern’s pointing error for a pair of non-identically oriented biaxial velocity-sensors. This pointing error is investigated here in terms of (i) the biaxial velocity-sensor’s misorientation skew angle, (ii) the biaxial velocity-sensor’s spatial separation, (iii) the incident signal’s wavelength, and (iv) the beamformer’s nominal look direction.
4:20
2pSP9. Higher-order estimation of acoustic intensity. Joseph S.
Lawrence, Kent L. Gee, Tracianne B. Neilsen, and Scott D. Sommerfeldt
(Dept. of Phys. and Astronomy, Brigham Young Univ., N283 ESC, Provo,
UT 84602, joseph-lawrence@hotmail.com)
The phase and amplitude gradient estimator (PAGE) method [D. C.
Thomas et al., J. Acoust. Soc. Am. 137, 3366-3376 (2015)] can be used to
increase the bandwidth of complex acoustic intensity estimates obtained
with multi-microphone probes. Despite the increased bandwidth, errors arise
when using this method, which is based on linear least-squares gradients, in
non-planar fields. Examples of non-planar fields include the acoustic near
field of a radiating source or near a null in a standing-wave field. The PAGE
method can be improved by using a Taylor expansion to obtain higher-order
estimates of center pressure, pressure amplitude gradient, and phase gradient. With a sufficient number of microphones, the higher-order method is
shown to improve the bandwidth of both the active and reactive intensity
estimates. Additionally, this method can be used to estimate the spatial dependence of intensity across the extent of the probe. [Work supported by the
National Science Foundation.]
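The first-order least-squares gradient underlying the PAGE method can be sketched as a linear fit to the probe's microphone pressures; a higher-order variant would add quadratic Taylor terms to the design matrix. The probe geometry and the test field below are illustrative assumptions.

```python
# Sketch of a least-squares gradient estimate at a multi-microphone probe
# (cf. 2pSP9): fit p(x) ≈ p0 + g·x to the measured pressures.
import numpy as np

def ls_gradient(positions, pressures):
    """First-order fit: returns (center pressure p0, gradient vector g)."""
    A = np.column_stack([np.ones(len(positions)), positions])
    coef, *_ = np.linalg.lstsq(A, pressures, rcond=None)
    return coef[0], coef[1:]

# Four-microphone 2-D probe sampling a linear field p = 2 + 3x - y.
mics = np.array([[0.01, 0.0], [-0.01, 0.0], [0.0, 0.01], [0.0, -0.01]])
p = 2.0 + 3.0 * mics[:, 0] - 1.0 * mics[:, 1]
p0, g = ls_gradient(mics, p)
print(p0, g)  # p0 ≈ 2, g ≈ [3, -1]
```

Because the fit is exactly first-order, it recovers a linear field perfectly; the errors discussed in the abstract arise when the true field has curvature the linear model cannot represent.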
4:40
2pSP10. Direction finding using a “p-v probe” of higher order.
Muhammad Muaz (Dept. of Electron. and Information Eng., The Hong
Kong Polytechnic Univ., CD-515, Hong Kong 999077, Hong Kong, m.
muaz@connect.polyu.hk), Yue I. Wu (College of Comput. Sci., Sichuan
Univ., Chengdu, China), Da Su, and Kainam T. Wong (Dept. of Electron.
and Information Eng., The Hong Kong Polytechnic Univ., Hong Kong,
Kowloon, Hong Kong)
The “p-v probe” comprises one isotropic pressure-sensor and one uniaxial velocity-sensor. The “p-v probe” has been used for acoustic direction
finding for a few decades. This work generalizes the “p-v probe,” by allowing a higher-order “figure-8” directional sensor to substitute for the uni-axial
velocity-sensor, which represents a first-order “figure-8” directional sensor.
This work presents new eigen-based azimuth-elevation direction-finding
algorithms in closed forms, as well as the corresponding Cramér-Rao lower
bounds.
5:00
2pSP11. A distributed network of compact microphone arrays for drone detection and tracking. Aro Ramamonjy (Acoust. and
Protection of the Combattant, French-German Res. Inst. of Saint-Louis, LMSSC, 2 rue Conte, Paris 75003, France, aroramamonjy@gmail.com), Eric Bavu, Alexandre Garcia (Cnam, Laboratoire de Mécanique des Structures et des Systèmes Couplés (EA3196),
Conservatoire National des Arts et Métiers, Paris, France), and Sébastien Hengy (Acoust. and Protection of the Combattant,
French-German Res. Inst. of Saint-Louis, Saint-Louis, France)
This work focuses on the development of a distributed network of compact microphone arrays for unmanned aerial vehicle (UAV) detection and tracking. Each compact microphone array extends over a 10 cm aperture and consists of an arrangement of digital MEMS microphones. Several arrays are connected to a computing substation using the I2S, ADAT, and MADI protocols over optical fiber. Together, these protocols allow collecting the signals from hundreds of microphones spread over a distance of up to 10 km. Sound source localization is performed on each array using measured pressure and particle velocities. The pressure is estimated by averaging the signals of multiple microphones, and the particle velocity is estimated with high-order finite differences of the microphone signals. Multiple calibration procedures are compared experimentally. Results in sound source localization, noise reduction by spatial filtering, and UAV recognition using machine learning are presented.
5:20
2pSP12. Dual accelerometer vector sensor mounted on an autonomous underwater vehicle (AUV)—Experimental results. Paulo J.
Santos, Paulo Felisberto, Frederich Zabel, Sergio Jesus (University of Algarve, LARSyS, Campus da Penha, Faro 8005-139, Portugal,
pjsantos@ualg.pt), and Luís Sebastião (IST/ISR, University of Lisbon, LARSyS, Lisbon, Portugal)
In seismic geo-acoustic exploration, the use of ships equipped with long streamers is of major concern due to the complexity of the operations. The European project WiMUST aims to improve the efficacy of current geo-acoustic surveys through the use of AUVs towing short streamers. A Dual Accelerometer Vector Sensor (DAVS) was developed to complement the streamer data, allowing for a reduction of streamer size and facilitating the operation of the WiMUST distributed configuration. Each DAVS consists of two tri-axial accelerometers and one hydrophone aligned along a vertical axis. This configuration has the ability to cancel or significantly attenuate the direct and surface-reflection paths, which are undesirable in seismic imaging. Calibration tests with the DAVS have already been performed; this paper presents experimental results on the estimation of azimuthal directions when the DAVS is in motion. Signals in the 1–2 kHz band were emitted by a source deployed in a shallow pond at 1.5 m depth and acquired by the DAVS mounted on a MEDUSA-class AUV following a pre-programmed path at a 0.26 m/s nominal speed. The azimuth estimates are coherent with the MEDUSA trajectories even in curved paths where the thruster noise increases.
MONDAY AFTERNOON, 26 JUNE 2017
ROOM 309, 1:15 P.M. TO 5:40 P.M.
2pUWa
Underwater Acoustics and Acoustical Oceanography: In Honor of Ira Dyer, 60 Years as an Innovator,
Entrepreneur, and Visionary for Ocean Engineering
Arthur B. Baggeroer, Cochair
Mechanical and Electrical Engineering, Massachusetts Inst. of Technology, Room 5-206, Cambridge, MA 02139
Peter Mikhalevsky, Cochair
Leidos, 4001 N. Fairfax St., Arlington, VA 22207
Philip Abbot, Cochair
OASIS, Inc., 5 Militia Drive, Lexington, MA 02421
Chair’s Introduction—1:15
Invited Papers
1:20
2pUWa1. Ira Dyer and the BBN Applied Research Division. James Barger (Raytheon BBN Technologies, Raytheon BBN
Technologies, 10 Moulton St., Cambridge, MA 02138, docobra1@aol.com)
Ira Dyer was one of the first hires by Leo Beranek for the fledgling company Bolt, Beranek and Newman (BBN). Ira in turn hired
many of the now-recognized pioneers in structural and ocean acoustics and formed the BBN Applied Research Division. Members investigated all aspects of sound and vibration in ships, submarines, aircraft, and spacecraft. For example, during the mid-1950s, Ira helped design the US Navy X-1 Submarine, a small four-man quiet diesel-electric submarine for ADM Rickover. BBN
designed an innovative engine mounting system that quieted the vehicle and led the way for ultra-quiet submarines in the future Navy.
This submarine is now on display at the Navy Submarine Museum at Groton, CT. Other contributions by members include SEA (statistical energy analysis) and criteria for modeling flow-excitation forcing functions to structural radiation. Ira created a golden era for acoustics at BBN.
1:40
2pUWa2. The contribution of Ira Dyer to understanding hydro-structural acoustics. William Blake (Mech. Eng., Adjunct
Professor, The Johns Hopkins Univ., 6905 Hillmead Rd., Bethesda, MD 20817, hydroacoustics@aol.com)
Ira Dyer’s work in structural acoustics is well known, but what I think is little known is his start in aeroacoustics followed by his
early work in the response of structures to pressure fields of a kind that is typical of hydroacoustics. Lighthill’s seminal paper on aerodynamic noise in 1952 is generally regarded as the dawn of modern flow acoustics. Immediately after this, in 1953 and continuing into the
early 1960s, Ira began his career by examining scaling laws for the sound and the structural vibration associated with jets and high speed
flow over plate and shell structures. In this work he examined scaling laws for sound power, the importance of wave coincidence in
fluid-structure interaction, and the effects of space-time decorrelation of excitation pressure fields. This paper will examine the conclusions of these early papers of Ira’s and trace their relevance through to modern times and across the combined fields of hydro-structural
acoustics. Ira’s professional path led him away from these areas of interest but his early work nonetheless laid foundations for work of
the future.
2:00
2pUWa3. Ira Dyer at MIT—Professor, Department Head, and Arctic pioneer. Arthur B. Baggeroer (Mech. and Elec. Eng.,
Massachusetts Inst. of Technol., Rm. 5-206, MIT, Cambridge, MA 02139, abb@boreas.mit.edu) and Peter Mikhalevsky (Leidos,
Arlington, VA)
Ira arrived at MIT in 1970 from BBN to take a professorship offered by Alfred Kyle, Head of the Department of Naval Architecture.
He had said yes; however, Kyle surprised him and asked him to also head the department, renamed Ocean Engineering. With some trepidation he accepted. He started new research programs in ocean acoustics, ambient noise, reverberation and propagation while making
seminal contributions. He became the director of the MIT Sea Grant Program and soon MIT was one of the first Sea Grant colleges. In
1977 Ira had the idea to research basin-scale reverberation. He thought the Mediterranean would be the ideal enclosed basin including
opportunities for post-experiment R&R! He approached the Office of Naval Research, they enthusiastically agreed, and sent him to the
Arctic! That detour north turned into a superhighway of decades of Arctic acoustics research, from a controversial seamount discovery
in his first reverberation experiment, to the detailed morphology of ice-cracking noise, propagation, and ice scattering, to global climate change.
Ira will have a lasting and enduring impact in acoustics through his contributions and through his many students and colleagues who had
the great privilege and joy to learn from, know, and work with Ira.
2:20
2pUWa4. Ira Dyer and MIT Arctic acoustics research. Gregory L. Duckworth (Raytheon BBN Technologies, 10 Moulton St.,
Cambridge, MA 02138, gregory.duckworth@raytheon.com)
Ira Dyer and his colleague Art Baggeroer began MIT’s Arctic Acoustics research efforts in the latter half of the 1970s. In this pre-GPS and newly digital era, Ira helped define an ambitious program for the Office of Naval Research that brought large-aperture, high-quality acoustic array data acquisition to bear on the problems of Arctic acoustic science. Supporting a generation of graduate students,
this work explored ambient noise generation mechanisms in the ice, Arctic basin reverberation, seismic reflection and refraction studies
to understand the underlying crustal structure, and long-range propagation that is now exploited for acoustic thermometry to study global
climate change. Advanced signal processing, combined with fundamental physical understanding and modeling of the ice, water column,
and underlying crust vastly increased our understanding of these phenomena in the Arctic. I will show some of the results of this work,
and how it has been extended for use in active sonars and measurements of sea-ice dynamics over large areas of the Arctic.
2:40
2pUWa5. The role of fluctuations in the interpretation of sonar detection performance. Philip Abbot, Vincent E. Premus, Mark N.
Helfrick, Charles J. Gedney, and Chris J. Emerson (OASIS, Inc., 5 Militia Dr., Lexington, MA 02421, abbot@oasislex.com)
The predictive probability of detection (PPD) is a metric that accounts for uncertainty in sonar detection performance due to random
fluctuations in transmission loss, noise level, and source level [P. Abbot and I. Dyer, Impact of Littoral Environmental Variability on
Acoustic Predictions and Sonar Performance, 2002]. It is well known that a significant portion of Ira’s career was dedicated to understanding the role fluctuations play in interpreting acoustic measurements. Building on this foundation, we now embrace the notion that a
useful statement of sonar system performance is one linked with a probabilistic description of the acoustic environment’s intrinsic variability. In this paper, we discuss the impact of fluctuations on passive sonar performance, including how PPD facilitates the interpretation of sonar system recognition differential. Data from a recent field test conducted in August 2011 on the New Jersey continental shelf will be used to illustrate the methodology and interpret measured detection performance in the presence of a cold pool duct and variable ambient noise conditions.
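As a toy illustration of the idea (not the authors’ PPD formulation), one can fold Gaussian fluctuations in transmission loss and noise level into a simplified passive sonar equation by Monte Carlo; every level, sigma, and threshold below is invented for the sketch:

```python
import random

# Generic sketch (not the authors' PPD formulation): Gaussian fluctuations
# in transmission loss (TL) and noise level (NL) are folded into the
# probability that signal excess exceeds a detection threshold. All
# numbers (source level, mean TL/NL, sigmas, threshold) are illustrative.

random.seed(1)

def predictive_prob_detection(sl, tl_mean, tl_sigma, nl_mean, nl_sigma,
                              det_threshold, n_trials=100_000):
    """Fraction of Monte Carlo trials in which SNR exceeds the threshold."""
    hits = 0
    for _ in range(n_trials):
        tl = random.gauss(tl_mean, tl_sigma)   # fluctuating TL, dB
        nl = random.gauss(nl_mean, nl_sigma)   # fluctuating NL, dB
        snr = sl - tl - nl                     # simplified sonar equation
        if snr >= det_threshold:
            hits += 1
    return hits / n_trials

ppd = predictive_prob_detection(sl=140.0, tl_mean=80.0, tl_sigma=5.0,
                                nl_mean=55.0, nl_sigma=3.0,
                                det_threshold=3.0)
print(f"predictive probability of detection: {ppd:.2f}")
```

A deterministic threshold comparison would return 0 or 1; integrating over the fluctuation statistics instead yields a probability, which is the point of a PPD-style metric.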
3:00–3:20 Break
3:20
2pUWa6. Acoustic intensity fluctuation studies in shallow water over the past quarter century. James Lynch (Woods Hole
Oceanographic, MS # 11, Bigelow 203, Woods Hole Oceanographic, Woods Hole, MA 02543, jlynch@whoi.edu), John A. Colosi
(Naval Postgrad. School, Monterey, CA), Timothy F. Duda, Ying-Tsong Lin, and Arthur Newhall (Woods Hole Oceanographic, Woods
Hole, MA)
One of Ira Dyer’s seminal contributions to underwater acoustics was in the understanding of acoustic intensity fluctuations. In particular, the 5.6 dB intensity fluctuation of a large number of interfering narrowband multipaths, which is often informally referred to as
“the Dyer number,” is a robust and well-known effect. In shallow water, a number of physical effects can modify this basic number. These
effects have been measured in various experiments over the years, and we will show a subset of these results. The physics of the effects,
the measurement techniques, and the implications of this work for sonar processing will all be discussed. [Work supported by ONR.]
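The 5.6 dB figure follows from the fact that the summed intensity of many random-phase multipaths is approximately exponentially distributed, so the standard deviation of the intensity level is 10/ln(10) · π/√6 ≈ 5.57 dB. A minimal Monte Carlo sketch (path and trial counts are arbitrary choices):

```python
import cmath
import math
import random

# Monte Carlo sketch of "the Dyer number": the summed intensity of many
# independent narrowband multipaths with random phases is approximately
# exponentially distributed, so the standard deviation of the intensity
# level in dB approaches 10/ln(10) * pi/sqrt(6), about 5.57 dB.

random.seed(0)

def intensity_level_std(n_paths=20, n_trials=50_000):
    """Std. dev. (dB) of 10*log10 intensity for n_paths random phasors."""
    levels = []
    for _ in range(n_trials):
        # Sum of unit-amplitude paths with independent uniform phases.
        p = sum(cmath.exp(1j * random.uniform(0.0, 2.0 * math.pi))
                for _ in range(n_paths))
        levels.append(10.0 * math.log10(abs(p) ** 2))
    mean = sum(levels) / len(levels)
    var = sum((x - mean) ** 2 for x in levels) / len(levels)
    return math.sqrt(var)

theory = (10.0 / math.log(10)) * math.pi / math.sqrt(6)
print(f"simulated: {intensity_level_std():.2f} dB, theory: {theory:.2f} dB")
```

The exponential-intensity limit holds for a large number of paths; shallow-water effects that reduce or enlarge the effective path count are exactly what shifts the measured value away from 5.6 dB.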
3:40
2pUWa7. Analysis of marginal ice zone noise events: Revisited. Chi-Fang Chen (Eng. Sci. and Ocean Eng., National Taiwan Univ.,
No. 1 Roosevelt Rd. Sec.#4, Taipei 106, Taiwan, chifang@ntu.edu.tw)
Acoustic transients observed in a noise time series, called noise events, are believed to be the major contributor to Arctic Ocean ambient noise. Underwater ambient noise data from the Marginal Ice Zone Experiment 1984 (MIZEX84), at frequencies between 20 and
2000 Hz, are studied. Results show that the temporal and spatial distribution of events is characterized by clustering and seems to depend
on environmental conditions such as swells and ice concentration. Event densities increase as f at frequencies below 200 Hz, decrease as
f^-1 at frequencies above 300 Hz, and reach a maximum at frequencies between 200 and 300 Hz. The change in trend may indicate a
transition in the source mechanism between frequencies below 200 Hz and above 300 Hz. The noise source can be modeled as an octopole,
which is the best fit to the noise data. [Work supported by the Office of Naval Research, and completed under Prof. Ira Dyer’s supervision
at the Massachusetts Institute of Technology in 1990.]
4:00
2pUWa8. Hybrid solution techniques for structural acoustics. Charles Corrado (Appl. Physical Sci. Corp., 475 Bridge St., Ste. 100,
Groton, CT 06340, ccorrado@aphysci.com), Matthew Conti, Ann Stokes (Appl. Physical Sci. Corp., Lexington, MA), and Edward Heyd
(Appl. Physical Sci. Corp., Littleton, CO)
Ira Dyer’s structural acoustics research at MIT emphasized developing an understanding of wave interaction mechanisms and the
role of structural complexity in dictating energy transport, dissipation, and radiation to the surrounding media. His work stressed a combination of experimentation and analytic-numeric methods to delineate the essential wave propagation mechanisms governing the
response of the structure and the radiated field. Here we present an extension of research Ira led to study acoustic scattering from internally loaded shell structures. Hybrid analysis techniques employing wave transmission methods to represent plate and shell structures,
and analytic or finite element methods to represent attached structures provide a means to discern the partitioning between wave types
comprising the response. The method directly represents all allowable wave types and quantifies their displacement functions and dispersion behavior enabling detailed evaluation of energy transport mechanisms. Interaction loads produced at attached structures are computed using shell structure Green’s function matrices, impedance matrix descriptions of the attachments, and unloaded shell response
vectors. The method is illustrated for marine pile driving and aerospace liftoff applications. Broadband impulse response maps are
shown to illustrate the temporal evolution of the field over the structure and complement frequency-domain evaluations, providing further physical insight.
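The interaction-load computation described in the abstract can be sketched, under heavy simplification, as a small linear solve; the matrices below are invented two-point examples, not data for any real shell:

```python
import numpy as np

# Hedged sketch of coupling an attachment to a shell: given a Green's
# function (mobility) matrix G relating forces to velocities at the
# attachment points, an impedance matrix Za for the attachment, and the
# unloaded shell response v0, the coupled response and interaction loads
# follow from one linear solve. Matrices here are random illustrations.

rng = np.random.default_rng(0)

n = 2                                                        # attachment points
G = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))   # shell Green's function matrix
Za = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))  # attachment impedance matrix
v0 = rng.normal(size=n) + 1j * rng.normal(size=n)            # unloaded shell response vector

# The attachment exerts f = -Za @ v on the shell, and the total response
# is v = v0 + G @ f; eliminating f gives (I + G @ Za) v = v0.
v = np.linalg.solve(np.eye(n) + G @ Za, v0)   # coupled response at attachments
f = -Za @ v                                   # interaction loads

# Consistency check: the coupled response reproduces itself.
assert np.allclose(v, v0 + G @ f)
print("interaction loads:", f)
```

This is the generic impedance-coupling pattern the abstract names (Green’s function matrices, impedance matrices, unloaded response vectors); the actual hybrid method additionally partitions the response into wave types.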
4:20
2pUWa9. Ira Dyer advisor to the Navy. Henry Cox (Lockheed Martin, 4350 N. Fairfax Dr., Ste. 470, Arlington, VA 22203, harry.
cox@lmco.com)
Among his many activities, Ira Dyer was a valued consultant to the Navy. He was called to participate as a charter member of the
Submarine Superiority Technical Advisory Group established by Admiral De Mars in 1995. He served on this and other panels for more
than 15 years. He made many important contributions to the submarine force, particularly relating to ocean acoustics pertaining to operations and structural acoustics pertaining to submarine design. Ira’s ability to penetrate difficult problems, simplify, and explain was
unique. When Ira spoke, everyone listened. Comments on his contributions and recollections of his colleagues are presented.
4:40
2pUWa10. An Office of Naval Research perspective on Ira Dyer’s Legacy. Robert H. Headrick (Code 32, Office of Naval Res., 875
North Randolph St, Arlington, VA 22203, bob.headrick@navy.mil)
The talk will highlight a few of the many important contributions Professor Dyer made to underwater acoustics through his interactions with ONR as a researcher, advisor, and educator. His wide ranging expertise included sound and vibration in complex structures
that led the way for the U.S. Navy to develop ultra-quiet submarines, as well as the statistics of sound propagation in the ocean, submarine sonar design, and advances in sonar signal processing. Beginning in 1978, he led and participated in six Arctic field programs,
including the Canadian Basin Arctic Reverberation Experiment that imaged the entire Arctic basin with acoustics. Over the course of
these efforts, which he greatly enjoyed talking about, he and his students developed a systematic understanding of sound propagation and
noise in the Arctic.
5:00
2pUWa11. The vision of Ira Dyer. Nicholas C. Makris (Mech. Eng., Massachusetts Inst. of Technol., 77 Massachusetts Ave., 5-212,
Cambridge, MA 02139, makris@mit.edu)
We will describe how Ira Dyer’s scientific ideas and vision have been passed on to us and survive, as strong as ever, influencing the future.
Contributed Paper
5:20
2pUWa12. Changing Arctic ambient noise. Andrew J. Poulsen (Appl.
Physical Sci., 49 Waltham St., Lexington, MA 02421, poulsen@alum.mit.edu)
and Henrik Schmidt (Massachusetts Inst. of Technol., Cambridge, MA)
The Arctic Ocean is undergoing dramatic changes, with the most apparent being the rapidly reducing extent and thickness of the summer ice
cover. Furthermore, a persistent inflow of a shallow tongue of warm Pacific
water has recently been discovered in the Beaufort Sea region of the Arctic,
often called the Beaufort Lens, which creates a strong acoustic duct between
approximately 100 and 200 m depth. These changes have had a significant
effect on underwater acoustic propagation and noise properties. In spring
1994, acoustic data was collected in the Beaufort Sea region of the Arctic
using a suspended vertical array; in spring 2016, similar data was collected
in the same region. The 1994 data features meandering narrow-band components due to ice ridge friction, while the 2016 data in the new Arctic is
largely dominated by ice mechanical events at discrete ranges and bearings.
Supported by acoustic noise modeling, we illustrate these and other noise
properties measured more than two decades apart in a region of rapid and
significant change. [Work supported by ONR and DARPA.]
MONDAY AFTERNOON, 26 JUNE 2017
ROOM 306, 1:20 P.M. TO 5:40 P.M.
2pUWb
Underwater Acoustics: Sound Propagation and Scattering in Three-Dimensional Environments II
Ying-Tsong Lin, Cochair
Applied Ocean Physics and Engineering, Woods Hole Oceanographic Institution, Bigelow 213, MS#11, WHOI, Woods Hole,
MA 02543
Frederic Sturm, Cochair
Acoustics, LMFA, Centre Acoustique, Ecole Centrale de Lyon, 36, avenue Guy de Collongue, Ecully 69134, France
2p MON. PM
Invited Papers
1:20
2pUWb1. Three-dimensional model benchmarking for cross-slope wedge propagation. Orlando C. Rodríguez (LARSyS, Campus
de Gambelas - Universidade do Algarve, Faro, N/A PT-8005-139, Portugal, orodrig@ualg.pt), Frederic Sturm (Ctr. National de la
Recherche Scientifique, Ecully, France), Pavel S. Petrov (V.I. Il’ichev Pacific Oceanological Inst., Vladivostok, Russian Federation),
and Michael Porter (HLS Res. Inc., La Jolla, CA)
Cross-slope wedge propagation is considered for the three-dimensional benchmarking of three underwater acoustic models, one
based on normal mode theory and the other two based on ray tracing. To this end, the benchmarking relies on analytic solutions for adiabatic and non-adiabatic propagation, as well as on experimental data from a scale tank experiment (a previous benchmarking with a parabolic equation model is known to provide extremely accurate predictions in all cases). The benchmarking makes it possible to identify the
advantages, accuracy, and limitations of the considered models.
1:40
2pUWb2. Examining the intra-modal interference in an idealized oceanic wedge using scale-model experiments and acoustic
propagation modeling. Jason D. Sagers and Megan S. Ballard (Environ. Sci. Lab., Appl. Res. Labs., The Univ. of Texas at Austin,
10000 Burnet Rd., Austin, TX 78758, sagers@arlut.utexas.edu)
Scale-model tank experiments are beneficial because they offer a controlled environment in which to make underwater acoustic
propagation measurements that can provide high-quality data for comparison with numerical models. This talk presents results from a
1:7500 scale model experiment for a wedge with a 10° slope fabricated from closed-cell polyurethane foam to investigate three-dimensional (3D) propagation effects. A 333 μs pulse allows the acoustic field to reach a steady-state, continuous-wave condition. A
computer controlled positioning system accurately moves the receiving hydrophone in 3D space to create a dense field of vertical line
arrays, which are used to mode filter the measured time series. The single-mode fields show the classical interference pattern resulting
from rays launched up and along the slope. The measured data are compared to an exact, closed-form solution for a point source in
a wedge with impenetrable boundaries. The finite size of the source and the departure from the perfectly reflecting boundary conditions
are discussed. The measured data are also compared to results from a three-dimensional ray model known as Bellhop3D which can
account for the non-ideal boundary condition. [Work supported by ONR.]
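Mode filtering on a dense vertical line array amounts to projecting the measured pressure profile onto the waveguide’s mode shapes. A minimal sketch for an idealized isovelocity waveguide (pressure-release surface, rigid bottom); the depth, array density, and modal amplitudes are invented, and real processing would use the measured environment’s modes:

```python
import math

# Idealized mode-filtering sketch (not the experiment's actual processing):
# in an isovelocity waveguide with a pressure-release surface and rigid
# bottom, the mode shapes sin((m - 1/2) * pi * z / D) are orthogonal, so a
# dense vertical array can recover modal amplitudes by projection.

D = 100.0                      # water depth, m (illustrative)
N = 400                        # hydrophones in the dense synthetic array
dz = D / N
depths = [(i + 0.5) * dz for i in range(N)]

def mode(m, z):
    """Mode shape for mode m at depth z."""
    return math.sin((m - 0.5) * math.pi * z / D)

# Synthesize a field containing only modes 1 and 3, then filter it.
true_amps = {1: 2.0, 3: -0.7}
field = [sum(a * mode(m, z) for m, a in true_amps.items()) for z in depths]

def mode_filter(pressure, m):
    """Project a sampled pressure profile onto mode m.
    The orthogonality integral of sin^2 over [0, D] equals D/2,
    hence the 2/D normalization."""
    return (2.0 / D) * sum(p * mode(m, z) * dz
                           for p, z in zip(pressure, depths))

for m in (1, 2, 3):
    print(f"mode {m}: recovered amplitude {mode_filter(field, m):+.3f}")
```

Mode 2, absent from the synthesized field, filters to approximately zero, which is the property that lets a dense vertical array separate single-mode fields from a measured time series.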
2:00
2pUWb3. Measurements and modeling of three-dimensional acoustic propagation in a scale-model canyon. Megan S. Ballard and
Jason D. Sagers (Appl. Res. Labs. at the Univ. of Texas at Austin, P.O. Box 8029, Austin, TX 78758, meganb@arlut.utexas.edu)
Scale-model acoustic propagation experiments were conducted in a laboratory tank to investigate three-dimensional (3D) propagation effects induced by range-dependent bathymetry. The model bathymetry, patterned after measured bathymetric data, represents a
portion of the Hudson Canyon at 1:7500 scale and was fabricated from closed-cell polyurethane foam using a computer-numerically
controlled (CNC) milling machine. In the measurement apparatus, a computer-controlled positioning system precisely locates the receiving hydrophone which permits the creation of synthetic horizontal line arrays. Results are shown for propagation paths along and across
the axis of the canyon. The measurements are explained using both a hybrid method known as vertical modes/horizontal parabolic equation and a 3D ray model known as Bellhop3D. For propagation along the canyon axis, horizontal focusing is observed and discussed in
the context of normal modes. For the across canyon propagation, the reflective foam walls of the canyon scatter sound back toward the
receiver array from out of plane. This effect is examined using the 3D ray trace. For both cases, the capabilities of the models and their
computation details are discussed. [Work supported by ONR.]
2:20
2pUWb4. Decoherence effects in 3D fluctuating environments: Numerical and experimental study. Gaultier Real (DGA Techniques
Navales, Ave. de la tour royale, Toulon 83100, France, gaultier.real@gmail.com), Xavier Cristol (Thales Underwater Systems, Sophia-Antipolis, France), Dominique Habault (Laboratoire de Mecanique et d’Acoustique, Aix-Marseille Universite, CNRS, Centrale
Marseille, Marseille, France), and Dominique Fattaccioli (DGA Techniques Navales, Toulon, France)
This paper is devoted to the study of the effects of ocean fluctuations on acoustic propagation. The development of an ultrasonic test bench that reproduces, under laboratory conditions, the influence of 3D fluctuations on received acoustic data is presented. The
experimental protocol consists of transmitting, in a water tank, a high-frequency wavetrain through an acoustic slab presenting a
plane input face and a randomly rough output face. The various regimes of saturation and unsaturation classically used in the literature
are explored by tuning the statistics of the so-called RAFAL (RAndom Faced Acoustic Lens). Comparisons to a “corresponding” oceanic
medium are obtained via a scaling procedure. In parallel, numerical tools were developed to provide meaningful comparisons with the
acquired data: a 3D PE simulation of the tank experiment and a 3D PE simulation of full-scale acoustic propagation, both based on a
split-step Fourier algorithm. Features of acoustic fields perturbed by internal waves are found. The relevance of
our procedure is evaluated through calculations of the coherence function (in particular, measurement of the radius of coherence) and
statistical distributions of the received complex pressure and intensity. Comparisons between our scaled measurements, numerical computations, and analytical results are analyzed.
Contributed Papers
2:40

2pUWb5. Experimental modal decomposition of acoustic field in cavitation tunnel with square duct test section. Romuald Boucheron (DGA HydroDynam., Chaussee du Vexin, Val-de-Reuil 27105, France, romuald.boucheron@intradef.gouv.fr), Sylvain Amailland, Jean-Hugh Thomas, Charles Pezerat (LAUM, Le Mans, France), Didier Frechou, and Laurence Briançon-Marjollet (DGA HydroDynam., Val-de-Reuil, France)

The operational requirements for naval and research vessels have seen an increasing demand for quieter ships, either to comply with operational requirements or to minimize the influence of shipping noise on marine life. To estimate the future radiated noise of a ship, scale measurements are carried out in a cavitation tunnel. DGA Hydrodynamics operates its cavitation tunnel with low background noise, which allows such measurements. Understanding acoustic propagation in a cavitation tunnel remains a challenge: the success of an accurate acoustic measurement depends both on a realistic propagation model and on efficient control of acoustic sensor characteristics. This short communication presents the results of experiments performed in the GTH (Large Cavitation Tunnel) at DGA Hydrodynamics. An acoustic source radiates a pure sine wave at the entrance of the test section and generates an acoustic field measured with flush-mounted hydrophones. A modal decomposition is then performed to fit the measurements. Complex amplitudes of all propagative modes could be estimated for both upstream and downstream propagation. Results show that, for a given frequency range, modal decomposition can be an accurate model for acoustic propagation. Furthermore, different configurations of the test section and source locations have been investigated and reveal acoustic properties of the tunnel.

3:00

2pUWb6. Vertical underwater acoustic tomography in an experimental basin. Guangming Li, David Ingram (Inst. of Energy System, Univ. of Edinburgh, Faraday Bldg., King’s Buildings, Colin Maclaurin Rd., Edinburgh, Scotland EH9 3DW, United Kingdom, G.Li@ed.ac.uk), Arata Kaneko, Minmo Chen (Graduate School of Eng., Hiroshima Univ., Higashi-Hiroshima, Hiroshima, Japan), Noriaki Gohda (Graduate School of Eng., Hiroshima Univ., Higashi-Hiroshima, Japan), and Nick Polydorides (Inst. of Digital Communications, Univ. of Edinburgh, Edinburgh, United Kingdom)

Ocean acoustic tomography is well developed for monitoring changes in environmental parameters over mesoscale ocean regions. Small-scale underwater acoustic tomography could be used for flow-profile mapping in an experimental tank, and vertical acoustic tomography is a key step toward 3D mapping of the flow current profile. This article investigates vertical underwater acoustic tomography in a circular multidirectional wave/current basin. Two modified coastal acoustic tomography (CAT) systems were deployed in the 25 m diameter circular basin. A high-frequency (50 kHz) M-sequence signal was transmitted reciprocally to measure time of flight. The 2 m deep water column was divided into 5 layers for layered analysis. Multipath arrivals reflected by the surface and bottom were distinguished by ray tracing. A 0.8 m/s straight flow was generated along the sound propagation path, and the vertical layered current velocity was obtained by solving the inverse problem. The study suggests that vertical acoustic tomography could be used in a laboratory test tank for flow current velocity measurement.
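The reciprocal-transmission principle behind such systems reduces to a two-line formula: with travel times t± = L/(c ± u) along and against the flow, sums and differences of 1/t± recover the sound speed and the path-averaged current. A sketch (the 25 m path and 0.8 m/s flow come from the abstract; the sound speed is an assumed nominal value):

```python
# Reciprocal travel-time sketch for acoustic tomography: sound travelling
# with the current arrives earlier than sound travelling against it, and
# the pair of travel times yields both the sound speed and the
# path-averaged current. Values below are illustrative.

def current_from_travel_times(t_with, t_against, path_length):
    """Exact inversion of t± = L / (c ± u) for current u and sound speed c."""
    c = 0.5 * path_length * (1.0 / t_with + 1.0 / t_against)
    u = 0.5 * path_length * (1.0 / t_with - 1.0 / t_against)
    return u, c

L = 25.0      # basin diameter, m (as in the abstract)
c = 1480.0    # assumed nominal sound speed in the tank, m/s
u = 0.8       # straight flow along the path, m/s (as in the abstract)

t_plus = L / (c + u)     # downstream (with-flow) travel time
t_minus = L / (c - u)    # upstream (against-flow) travel time
u_est, c_est = current_from_travel_times(t_plus, t_minus, L)
print(f"recovered u = {u_est:.3f} m/s, c = {c_est:.1f} m/s")
```

At these values the reciprocal travel-time difference is under 20 ns scaled to the 25 m path, which is why high-frequency signals and careful timing (here, 50 kHz M-sequences) are needed in a small basin.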
3:20–3:40 Break
Invited Papers
3:40
2pUWb7. Radial and azimuthal acoustic propagation effects of the continental slope and sand dunes in the northeastern South
China Sea. Chi-Fang Chen (Eng. Sci. and Ocean Eng., National Taiwan Univ., Taipei, Taiwan), Linus Chiu (Inst. of Appl. Marine
Phys. and Undersea Technol., National Sun Yat-sen Univ., Kaohsiung, Taiwan), and Ching-Sang Chiu (Dept. of Oceanogr., Naval
Postgrad. School, 833 Dyer Rd., Rm. 328, Monterey, CA 93943-5193, chiu@nps.edu)
A sound source, transmitting a 1.5-2.0 kHz chirp signal periodically, was towed at a depth of 50 m along a circular track that has a
radius of 3 km centered on a vertical hydrophone array moored on the upper slope of the northeastern South China Sea during the Sand
Dunes Acoustic Propagation Experiment in 2014. The largest amplitude of these sand dunes was close to 20 m with horizontal length
scales between 200 and 400 m. Two-dimensional (2-D) and three-dimensional (3-D) underwater acoustic propagation models, namely
FOR3D with the Nx2D option and FOR3D with the fully 3D option, were employed to simulate the acoustic propagation over the continental slope, with and without the sand dunes, from the towed source to the vertical hydrophone array. Environmental inputs to the
models were measured bathymetry and sound speed profiles, obtained from multibeam echo sounding surveys and moored oceanographic sensors, respectively. Simulation results pertaining to the 2-D and 3-D propagation effects in relation to the slope, the sand dunes
and the water-column variability are presented and discussed. Simulation results are also compared to the measured transmission data.
[The research is jointly sponsored by the Taiwan MOST and the US ONR.]
4:00
2pUWb8. Variability of the sound field interference pattern due to horizontal refraction in shallow water. Boris Katsnelson
(Marine GeoSci., Univ. of Haifa, Mt. Carmel, Haifa 31905, Israel, bkatsnels@univ.haifa.ac.il)
Variability of the ocean waveguide’s parameters in the horizontal plane (bathymetry, sound speed profile, etc.) can lead to a set of
effects in sound propagation that are specified as horizontal refraction or, in more general form, as 3D effects. These effects are
being studied both in shallow and deep water; some have been registered in experiments or analyzed using different theoretical
models. In this paper, the main attention is given to the theory and experimental manifestation of spatial and temporal variations of the interference pattern formed by narrow-band signals. Theoretical analysis is carried out using the approaches “vertical modes and horizontal rays”
and “vertical modes and PE in the horizontal plane.” It is shown that the dependence of the sound field distribution in the horizontal plane (in both
the ray and PE approaches) on frequency and mode number, combined with possible multipath propagation, may lead to rather specific
observable effects: variations of the amplitude and phase fronts, evolution of the signal’s spectrum and shape, and the appearance of whispering gallery modes in the vicinity of a curved coastline. Results of modeling and experiments (mainly Shallow Water 2006) are presented. Data
processing techniques to register 3D effects are discussed. [Work was supported by ISF.]
Contributed Papers

4:20

2pUWb9. Characteristics of bottom-diffracted surface-reflected arrivals in ocean acoustics. Ralph A. Stephen, S. T. Bolmer (Woods Hole Oceanographic Inst., 360 Woods Hole Rd., Woods Hole, MA 02543-1592, rstephen@whoi.edu), Peter F. Worcester, and Matthew A. Dzieciuch (Scripps Inst. of Oceanogr., La Jolla, CA)

Bottom-diffracted surface-reflected (BDSR) arrivals are a ubiquitous feature in long-range ocean acoustic propagation. BDSRs are distinct from bottom-reflected surface-reflected (BRSR) arrivals because the angle of emergence is not equal to the angle of incidence, and they are not predicted by existing forward models based on available bathymetric and bottom-properties data. Research cruises in the Philippine Sea and North Pacific, in 2011 and 2013, respectively, were carried out to understand BDSRs in more detail for transmissions out to 50 km range. Transmissions from a controlled source at about 60 m depth were received on ocean bottom seismometers and a deep vertical line array of hydrophones. In the North Pacific experiment alone, over 40 distinct bottom diffractor locations were identified. Based on these data sets, BDSRs can be characterized in terms of: (a) in-plane or out-of-plane diffractors, (b) diffractor location relative to bathymetric features, (c) grazing angle of the incident field, (d) transmission frequency (from 75 to 310 Hz), (e) receiver type (vertical or horizontal seismometer, hydrophone, etc.), (f) receiver location, and (g) signal strength relative to direct and BRSR paths. [Work supported by ONR.]

4:40

2pUWb10. Multi-modal and short-range transmission loss in ice-covered, near-shore Arctic waters. Miles B. Penhale and Andrew R. Barnard (Mech. Eng. - Eng. Mechanics, Michigan Technolog. Univ., 1400 Townsend Dr., R.L. Smith MEEM Bldg., Houghton, MI 49931, mbpenhal@mtu.edu)

Prior to the 1970s, extensive research was done regarding sound propagation in thick (kilometers) ice sheets in Arctic and Antarctic environments. Due to changing climate conditions in these environments, new experimentation is warranted to determine sound propagation characteristics in, through, and under thin ice sheets (meters). In April 2016, several experiments were conducted approximately 1 mile off the coast of Barrow, Alaska, on shore-fast, first-year ice approximately 1.5 m thick. To determine the propagation characteristics of various sound sources, frequency response functions (FRFs) were measured between a source location and several receiver locations at various distances from 50 m to 1 km. The primary sources used for this experiment were an underwater speaker with various tonal outputs, an instrumented impact hammer on the ice, and a propane cannon that produced an acoustic blast wave in air. In addition, several anthropogenic sources, namely a snowmobile, a generator, and an ice auger, were characterized. The transmission characteristics of the multipath propagation (air, ice, and water) are investigated and reported.

5:00

2pUWb11. Sound propagation in deep water with an uneven bathymetry. Zhiguo Hu, Zhenglin Li, Renhe Zhang, Yun Ren (State Key Lab. of Acoust., Inst. of Acoust., Chinese Acad. of Sci., No. 21, North Fourth Ring Rd. West, Haidian District, Beijing 100190, China, hzhg@mail.ioa.ac.cn)

Water depth has significant effects on underwater sound propagation. A deep-water propagation experiment along two tracks, one with a flat and one with an uneven bottom, was conducted in the South China Sea in 2014. Different propagation phenomena and horizontal-longitudinal correlation oscillation patterns were observed. Due to reflection blockage by a sea hill whose height is less than 1/10 of the water depth, transmission losses in the reflection area of the sea hill are up to about 8 dB greater than over the flat bottom. Moreover, there is an inverted-triangle shadow zone with a maximal depth of 1500 m below the sea surface. The horizontal-longitudinal correlations in the reflection area of the sea hill no longer show the obvious cyclical oscillation seen in the flat-bottom environment. The differences in the transmission losses and in the correlation oscillation patterns are explained using ray theory. [Work supported by the National Natural Science Foundation of China under Grant Nos. 11434012, 41561144006, and 11404366.]
5:20
2pUWb12. Acoustic propagation from the transitional area to deep
water. Jixing Qin, Renhe Zhang, and Zhenglin Li (State Key Lab. of
Acoust., Inst. of Acoust., Chinese Acad. of Sci., No. 21 North 4th Ring Rd.,
Haidian District, Beijing 100190, China, qjx@mail.ioa.ac.cn)
Sound propagation over the continental shelf and slope is a complicated and important problem. Motivated by a phenomenon in an experiment conducted in the Northwestern Pacific, in which the energy of the received signal around the sound channel axis was much greater than that at other depths, we study sound propagation from the transitional area to deep water. Numerical simulations with different source depths are first performed, from which we reach the following conclusions. When the source is located near the sea surface, sound is strongly attenuated by bottom losses in a range-independent environment, whereas it can propagate to a very long range in the presence of the continental slope. When the source is mounted on the slope bottom in shallow water, acoustic energy is trapped near the sound channel axis, and it converges more evidently than in the case where the source is located near the surface. Then, simulations with different source ranges are performed. By comparing the relative energy level in the vertical direction between the numerical results and the experimental data, the range of the unknown air-gun source is estimated. The phenomenon is confirmed by the experiment with a deterministic source located in the transitional area. [Work supported by the National Natural Science Foundation of China under Grant Nos. 11434012 and 41561144006.]
Acoustics ’17 Boston
3657
2p MON. PM
Contributed Papers
MONDAY AFTERNOON, 26 JUNE 2017
ROOM 104, 2:00 P.M. TO 3:00 P.M.
Meeting of Accredited Standards Committee (ASC) S3/SC 1, Animal Bioacoustics
D. S. Houser, Vice Chair ASC S3/SC 1
National Marine Mammal Foundation, 2240 Shelter Island Drive Suite 200, San Diego, CA 92106
K. Fristrup, Vice Chair ASC S3/SC 1
National Park Service, Natural Sounds Program, 1201 Oakridge Dr., Suite 100, Fort Collins, CO 80525
Accredited Standards Committee S3/SC 1 on Animal Bioacoustics. Working group chairs will report on the status of standards under
development. Consideration will be given to new standards that might be needed over the next few years. Open discussion of committee
reports is encouraged.
People interested in attending the meeting of the TAGs for ISO/TC 43/SC 1 Noise and ISO/TC 43/SC 3, Underwater acoustics, take
note—those meetings will be held in conjunction with the Standards Plenary meeting at 9:15 a.m. on Monday, 26 June 2017.
Scope of S3/SC 1: Standards, specifications, methods of measurement and test, instrumentation and terminology in the field of psychological and physiological acoustics, including aspects of general acoustics, which pertain to biological safety, tolerance, and comfort of
non-human animals, including both risk to individual animals and to the long-term viability of populations. Animals to be covered may
potentially include commercially grown food animals; animals harvested for food in the wild; pets; laboratory animals; exotic species in
zoos, oceanaria, or aquariums; or free-ranging wild animals.
MONDAY AFTERNOON, 26 JUNE 2017
ROOM 104, 3:15 P.M. TO 4:30 P.M.
Meeting of Accredited Standards Committee (ASC) S3 Bioacoustics
C. J. Struck, Chair ASC S3
CJS Labs, 57 States Street, San Francisco, CA 94114 1401
P. B. Nelson, Vice Chair ASC S3
Department of SLHS, University of Minnesota, 115 Shevlin, 164 Pilsbury Drive S.E., Minneapolis, MN 55455
Accredited Standards Committee S3 on Bioacoustics. Working group chairs will report on the status of standards under development.
Consideration will be given to new standards that might be needed over the next few years. Open discussion of committee reports is
encouraged.
People interested in attending the meeting of the TAGs for ISO/TC 43 Acoustics and IEC/TC 29 Electroacoustics, take note—those
meetings will be held in conjunction with the Standards Plenary meeting at 9:15 a.m. on Monday, 26 June 2017.
Scope of S3: Standards, specifications, methods of measurement and test, and terminology in the fields of psychological and physiological acoustics, including aspects of general acoustics which pertain to biological safety, tolerance, and comfort.
J. Acoust. Soc. Am., Vol. 141, No. 5, Pt. 2, May 2017
MONDAY AFTERNOON, 26 JUNE 2017
ROOM 104, 4:45 P.M. TO 5:45 P.M.
Meeting of Accredited Standards Committee (ASC) S1 Acoustics
R. J. Peppin, Chair ASC S1
5012 Macon Road, Rockville, MD 20852
A. A. Scharine, Vice Chair ASC S1
U.S. Army Research Laboratory, Human Research & Engineering Directorate
ATTN: RDRL-HRG, Building 459, Mulberry Point Road,
Aberdeen Proving Ground, MD 21005 5425
Accredited Standards Committee S1 on Acoustics. Working group chairs will report on the status of standards currently under development in the areas of physical acoustics, electroacoustics, sonics, ultrasonics, and underwater sound. Consideration will be given to
new standards that might be needed over the next few years. Open discussion of committee reports is encouraged.
People interested in attending the meeting of the TAGs for ISO/TC 43 Acoustics, ISO/TC 43/SC 3, Underwater acoustics, and IEC/
TC 29 Electroacoustics, take note—those meetings will be held in conjunction with the Standards Plenary meeting at 9:15 a.m. on Monday, 26 June 2017.
Scope of S1: Standards, specifications, methods of measurement and test, and terminology in the field of physical acoustics, including
architectural acoustics, electroacoustics, sonics and ultrasonics, and underwater sound, but excluding those aspects which pertain to biological safety, tolerance, and comfort.
MONDAY EVENING, 26 JUNE 2017
8:00 P.M. TO 9:30 P.M.
OPEN MEETINGS OF TECHNICAL COMMITTEES
The Technical Committees of the Acoustical Society of America will hold open meetings on Monday and Wednesday. See the list below
for the exact schedule.
These are working, collegial meetings. Much of the work of the Society is accomplished by actions that originate and are taken in these
meetings including proposals for special sessions, workshops, and technical initiatives. All meeting participants are cordially invited to
attend these meetings and to participate actively in the discussion.
Committees meeting on Monday, 26 June

Committee                                      Start Time    Room
Acoustical Oceanography                        8:00 p.m.     310
Animal Bioacoustics                            8:00 p.m.     313
Architectural Acoustics                        8:00 p.m.     207
Engineering Acoustics                          8:00 p.m.     204
Physical Acoustics                             8:00 p.m.     210
Psychological and Physiological Acoustics      8:00 p.m.     311
Structural Acoustics and Vibration             8:00 p.m.     312
Committees meeting on Wednesday, 28 June

Committee                                      Start Time    Room
Biomedical Acoustics                           8:00 p.m.     312
Musical Acoustics                              8:00 p.m.     200
Noise                                          8:00 p.m.     203
Signal Processing in Acoustics                 8:00 p.m.     302
Speech Communication                           8:00 p.m.     304
Underwater Acoustics                           8:00 p.m.     310
TUESDAY MORNING, 27 JUNE 2017
BALLROOM B, 8:00 A.M. TO 9:00 A.M.
Session 3aIDa
Interdisciplinary: Keynote Lecture
Keynote Introduction—8:00
Invited Paper
8:05
3aIDa1. Hearing as an extreme sport: Underwater ears, infra to ultrasonic, and surface to the abyss. Darlene R. Ketten (Otology
and Laryngology, Harvard Med. School, Boston Univ. and Harvard Med. School, Boston, MA 6845, dketten@whoi.edu)
It has been argued that “hearing” evolved in aquatic animals. Auditory precursors exist in fossil Agnatha and in cephalopod statolithic organs, but how, when, and why did a dedicated acoustic receptor, the true primordial ear, first appear? Did hearing arise linearly or independently, in parallel, in and out of the water? Modern aquatic species have an extraordinary range of auditory systems, from simple pressure receptors to complex biosonar systems. What drives this breadth of “hearing”? Vertebrate ears reflect selective pressures. While vision, touch, taste, and olfaction are important, only hearing is ubiquitous. Even natural mutes, like goldfish and sea turtles, listen. Ears capture passive and active sound cues. Auditory structures, honed by habit and habitat, delimit species’ abilities to detect, analyze, and act on survival cues. Cochleae, from shrews to bats to wolves to whales, evolved from the essential papilla of stem reptiles, elongating, coiling, and increasing in complexity in ways that enhanced frequency discrimination, with heads tuned to the physics of sound in their media. Air-water parallels evolved: ultrasonic echolocators and massive infrasonic specialists. The ear, then, is a window into the evolutionary push-pull, driven by three tasks, that shaped the several thousand elements packed into every auditory system: feed, breed, and survive another day.
TUESDAY MORNING, 27 JUNE 2017
ROOM 206, 9:15 A.M. TO 12:20 P.M.
Session 3aAAa
Architectural Acoustics: Retrospect on the Works of Bertram Kinzey I
Gary W. Siebein, Cochair
Siebein Associates, Inc., 625 NW 60th Street, Suite C, Gainesville, FL 32607
David Lubman, Cochair
DL Acoustics, 14301 Middletown Ln., Westminster, CA 92683-4514
Chair’s Introduction—9:15
Invited Papers
9:20
3aAAa1. Professor Bertram Y. Kinzey, Jr., mentor, scholar, practitioner, artist, and colleague. Gary W. Siebein (Siebein Assoc.,
Inc., 625 NW 60th St., Ste. C, Gainesville, FL 32607, gsiebein@siebeinacoustic.com)
Professor Bertram Y. Kinzey, Jr., had significant impacts on the shape of architectural acoustics today through his teaching, professional practice, research, and service. He influenced several generations of students at Virginia Tech and the University of Florida where
he developed innovative, laboratory-based courses to teach architects about acoustics and other environmental technology subjects. At
Florida, he initiated graduate coursework to allow students to specialize in Environmental Technology and later architectural acoustics
as part of the professional Master of Architecture curriculum, with a thesis in acoustics in addition. His teaching was based on ideas expressed in his landmark textbook, Environmental Technologies in Architecture, published in 1963 as a primer on the art and science of sustainable design, over 50 years ahead of its time. His students entered the profession with ideas of sustainability, integrating serious, graduate-level research into their design work, grounded by the experiences that Bert brought to the classroom from his practice, where he consulted on most of the major buildings built in Florida from the 1960s until his retirement many years later. The acoustical richness of the buildings that he worked on sounds clearly in all of the cities and towns where he worked!
9:40
3aAAa2. Contrapuntal design: The influence of Bertram Y. Kinzey, Jr., on architectural pedagogy at the University of Florida.
Martin A. Gold (Architecture, Univ. of Florida, 231 ARCH, Gainesville, FL 32611-5702, mgold@ufl.edu)
As a dedicated and productive researcher, author, teacher, architect, consultant, and musician, Bertram Kinzey introduced modes of analysis, conceptualization, and exploration that were transformative and continue to reverberate internationally through the work of his students. Bert's insightful theoretical and practical relationships between music, acoustics, and architecture were drawn from his reflective approach to teaching and refined through active practice. Perhaps foundational to his educational contribution is his interest and talent as an organist and organ builder, which led to his use of "contrapuntal" to describe the interaction of music and architecture: new notes overlapping those lingering in the space and time of reverberance. Overlapping, independent, yet harmonizing elements might also describe Bert's approach to architectural design as a theoretical construct, an integrative process, and a measure of great architecture. Contrapuntal thinking is well known in literature and music but is just emerging in architecture. Bert introduced this important idea, among many others, at the University of Florida through his writings and his tenure as a UF professor. This paper discusses Bert's "contrapuntal thinking" and its subsequent influence on architectural education through a recent graduate studio project for a new arts park and concert hall for the community of Sarasota, Florida.
10:00
3aAAa3. A living example—An attitude towards sustainable living. Weihwa Chiang (Vice President Office, National Taiwan Univ.
of Sci. and Technol., 43 Keelung Rd. Section 4, Taipei 106, Taiwan, whch@mail.ntust.edu.tw)
A sustainable built environment aims to minimize environmental impact while maintaining healthy living. The Acoustics and Environmental Technology programs started by Professor Bertram Y. Kinzey, Jr., at the University of Florida were established with integration into architectural design in mind. The programs and Professor Kinzey's influence were not limited to books and theories but extended to the lives of his students. His remarkable works, Bailey Hall at Broward Community College and the University Auditorium renovation and classrooms at UF, are valuable memories to students who sat in them daily for decades. His passive-design signature house in Gainesville is another unforgettable inspiration for students: it was where he successfully applied sustainable design concepts and theories in reality in the early 1960s. These are evidence of how he inspires students by his life as a solid practitioner as well as a modern "Renaissance man." Since his retirement from UF, his influence has continued to be passed on and to spread around the world. His guidance is carried forward in teaching, research, design, and planning projects with a holistic approach, one that not only pursues technical excellence with thoughtful socio-economic concerns but also relies heavily on "Kinzeian" humanity and an attitude of sustainable living and architecture.
10:20
3aAAa4. The shape of sound. Hyun Paek and Gary W. Siebein (Siebein Assoc., Inc., 625 NW 60th St., Ste. C, Gainesville, FL 32607,
hpaek@siebeinacoustic.com)
The work of Bertram Y. Kinzey, Jr., has influenced generations of architecture students and acousticians to visualize the sounds that
become the acoustic signature of a performance or rehearsal space. His research and practice have shaped the spaces we hear and listen in. Through the shaping of space, music and speech are enhanced and optimized by the addition of each curve, angle, and plane that becomes part of the unique character of the space. The sounds created on the stages of theaters, concert halls, and worship platforms, or at lecterns, are reflected, directed, and propagated to enhance the experience of the listener. Striving to achieve the perfect impulse response, we have learned from Bertram Kinzey that the acoustician's work is not unlike that of a sculptor, an interior designer, or an architect. Each surface, and each texture on that surface, has a purpose and a meaning to its existence.
10:40–11:00 Break
11:00
3aAAa5. Bert Kinzey, teacher, and collaborator in design. William W. Brame (606 NE First St., Gainesville, FL 32601, b.brame@brameheck.com)
As both his student and his client, I have known Bert Kinzey since 1971. I started my career in Gainesville in 1973 and recently retired as the senior partner of the 106-year-old firm known as Brame Heck Architects, Inc. From my earliest days until 2004, when Bert moved away to Virginia, I had the pleasure of retaining Bert to serve as our architectural acoustics consultant on a wide variety of projects. These projects included music rooms for educational facilities, conference centers, worship centers, and multi-purpose facilities that required excellent acoustics regardless of what was taking place in the space. Ever the teacher, he would explain his recommendations and solutions for each situation in such a way that I (and our clients) understood both the acoustical theory and the resulting physical manifestations being incorporated into the facility. We would routinely seek Bert's expertise to assure that our new facilities were properly designed to isolate mechanical or environmental sounds. And if a client had an existing facility in which such noises needed to be remedied, he would perform field testing and explore various options to solve those problems. Bert is a true gentleman and a master of his profession.
11:20
3aAAa6. Studying with Professor Bertram Y. Kinzey, Jr., in the University of Florida, School of Architecture Environmental
Technologies Option and its impact on my professional career. Bruno E. Ramos (BEA Architects Inc., 3075 NW South River Dr.,
Miami, FL 33142, ber@beai.com)
Professor Kinzey was instrumental in the careers of many professionals. He challenged students daily, pushed us to think "outside the box," and considered the integration of sustainable design principles in buildings long before LEED existed. I recall him saying, "the old tests are in the library, you may go and review them... as it's my job to test your knowledge in different ways." He encouraged his students to gain the knowledge and experience to derive the answers. The scale models we built to measure and study acoustics and lighting within spaces also influenced my professional career: we applied those same concepts in marine architecture, using a "Kinzey-type" solution by building a scale model to determine the requirements and force needed to haul a large barge out of shallow water. Lastly, I am personally grateful to Professor Kinzey for stimulating the pursuit of excellence in his students, which helped me to pass the A.R.E. and LEED licensing exams, at a very young age, on my first attempt. Professor Kinzey has had a lifetime of significant work in acoustics and has influenced thousands of students in a profound manner.
11:40
3aAAa7. An appreciation of the educational achievements of Bert Kinzey. Edward G. Clautice (Global Buildings, Jacobs Eng., 1100
North Glebe Rd., Arlington, VA 22201, edward.clautice@jacobs.com)
The aim of this paper is to describe the achievements of Bertram Kinzey in the field of acoustics within architectural education. Bert Kinzey's ability to bridge engineering and architecture is of particular note in the personal experience of the author. It was through Bert's personal efforts that the University of Florida was able to begin accepting into the program students who did not have a traditional undergraduate degree in architecture. As the first student from this category to matriculate into the Graduate School of Architecture at the University of Florida, the author owes a great deal of his personal career to Bert Kinzey. Bert led the Environmental Technologies option at the School, which brought the art of music, the art of architecture, and the engineering of applied technology into a combination that has enriched the profession. The study of concert hall architecture and acoustics has benefited greatly from Bert's leadership, under which acoustical modeling and auralization can stand beside traditional visual techniques in the field of architectural modeling.
12:00
3aAAa8. Spreading acoustics to architecture programs. Sang Bum Park (School of Architecture and Eng. Technol., Florida A and M
Univ., 1938 South Martin Luther King Jr. Blvd., Tallahassee, FL 32307, sang.park@famu.edu)
Environmental Technology is one of the important core courses in an architecture program. It provides architecture students with basic physical principles in thermodynamics, acoustics, lighting, and indoor air quality, and with the architectural design factors that affect those environmental qualities. This course is the only channel through which architecture students are exposed to acoustical education and research, unless the school offers master's degree programs associated with acoustics. Professor Bertram Kinzey, Jr., was the first to plan and start the environmental technology and architectural acoustics program in the School of Architecture at UF. Since then, UF has been contributing as a producer of acoustical educators and researchers. The author taught discussion sessions of Environmental Technology as a graduate teaching assistant between 2008 and 2012 at UF, while he conducted acoustical research for his doctoral degree. He is now bringing acoustical education to the School of Architecture and Engineering Technology at FAMU, which previously had no faculty or researchers with acoustical backgrounds, through courses such as Environmental Systems in Architecture and Architectural Acoustics. He also provides the design studios with acoustical workshops and helps thesis students whose design priority lies in environmentally sustainable buildings.
TUESDAY MORNING, 27 JUNE 2017
ROOM 208, 9:20 A.M. TO 12:00 NOON
Session 3aAAb
Architectural Acoustics: Room Acoustics Design for Improved Behavior, Comfort, and Performance II
Nicola Prodi, Cochair
Dept. of Engineering, University of Ferrara, via Saragat 1, Ferrara 44122, Italy
Kenneth P. Roy, Cochair
Building Products Technology Lab, Armstrong World Industries, 2500 Columbia Ave., Lancaster, PA 17603
Invited Papers
9:20
3aAAb1. S&N-S Light: An anthropic noise control device to reduce the noise level in densely occupied spaces encouraging personal control of voice. Sonja Di Blasio, Giulia Calosso, Giuseppina E. Puglisi, Giuseppe Vannelli (Dept. of Energy, Politecnico di Torino,
Corso Duca degli Abruzzi, 24, Torino 10129, Italy, sonja.diblasio@polito.it), Simone Corbellini (Dept. of Electronics and Telecommunications, Politecnico di Torino, Turin, Italy), Louena Shtrepi, Marco C. Masoero (Dept. of Energy, Politecnico di Torino, Torino, Italy),
and Arianna Astolfi (Dept. of Energy, Politecnico di Torino, Turin, Italy)
Recently, in many fields related to environmental quality, such as thermal and visual quality, the tendency is to customize comfort according to the user's needs. Tailored comfort zones are planned in public spaces, in which occupants can set their own comfort level with passive or active systems. In this context, the reduction of noise due to anthropic sources can be seen as a priority. In densely occupied spaces, such as classrooms, workplaces, restaurants, and outdoor spaces, the noise due to users chatting has a detrimental effect upon performance, health, and environmental quality. This study reports applications of S&N-S Light (Speech & Noise Stop-Light), a patented smart phonometric device whose warning light is activated when predetermined anthropic sound level limits are exceeded, encouraging personal voice control through visual feedback. The light activation, with green, yellow, and red colors, is based on an adaptive algorithm that accounts for pre-defined statistical noise levels; accidental noise levels can therefore be filtered out. The device has been used in classrooms, restaurants, and urban squares. Results indicate a decrease in noise levels with S&N-S Light, especially when the occupants received training about the device's benefits.
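The statistical filtering idea described above can be illustrated with a minimal sketch; the thresholds, window length, and use of a median statistic here are all hypothetical stand-ins for the device's patented adaptive algorithm:

```python
import statistics

# Hypothetical traffic-light thresholds in dB (the real device uses
# predetermined anthropic sound level limits).
YELLOW_LIMIT, RED_LIMIT = 65.0, 72.0

def light_state(level_history, window=30):
    """Map recent one-second sound levels to a traffic-light state.

    A percentile-style statistic (here the median of the last `window`
    values) filters out accidental short spikes, in the spirit of the
    statistical approach the abstract describes."""
    recent = level_history[-window:]
    stat = statistics.median(recent)
    if stat >= RED_LIMIT:
        return "red"
    if stat >= YELLOW_LIMIT:
        return "yellow"
    return "green"

# A single 90 dB spike in otherwise quiet data does not trip the light:
levels = [55.0] * 29 + [90.0]
print(light_state(levels))  # accidental spike is filtered: "green"
```

Using a median rather than an instantaneous level is one simple way to make the feedback robust to dropped chairs and door slams while still reacting to sustained chatter.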
9:40
3aAAb2. Designing triangular diffusers for architectural acoustics. Trevor J. Cox (Acoust. Res. Ctr., The Univ. of Salford, Newton
Bldg., Salford M5 4WT, United Kingdom, t.j.cox@salford.ac.uk) and Peter D’Antonio (Chesapeake Acoust. Res. Inst. LLC, Bowie,
MD)
Pyramids and wedges can be used to change how sound is reflected in concert halls and other performance spaces. Simple geometric
acoustic models can explain the reflection behavior when the wavelength of sound is small compared to the dimensions of the faces.
Depending on the angle between adjacent surfaces, considerable dispersion, moderate diffusion, or specular reflection can result. There will
be a bandwidth where geometric models are inaccurate because diffraction will be significant. Consequently, this study uses 2D Boundary
Element Methods (BEMs) to overcome the wavelength limit. It is assumed that the understanding from 2D triangles can be generalized to
3D surfaces such as pyramids. A numerical optimization process is used to design arrays of triangles, examining the effect of depth, asymmetry, and periodicity. The performance for shallow and deep surfaces will be presented for different incident sound fields.
Contributed Papers
10:00
3aAAb3. The research of ceramic as sound absorption material in
underground space. Hui Li and Xiang Yan (Acoust. Lab of Architecture
School, Tsinghua Univ., Main Bldg., Rm. 104, Beijing, Beijing 100084,
China, lihuisylvia@aliyun.com)
Beijing-Zhangjiakou High Speed Rail will be completed at the end of
2019. Badaling Station, the only underground station in this line, is the
deepest high speed rail station in China with a depth of 102 m. The platform
and transfer hall are both narrow spaces, which means acoustic treatment is necessary for the sake of speech articulation. Taking fireproofing, waterproofing, durability, culture, and visual effect into consideration, ceramic is the only appropriate material for use in the underground station. This article introduces the whole process of design and scale-model testing of the ceramic Helmholtz resonant cavity.
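For readers unfamiliar with the absorber type involved, the classical lumped-element formula f = (c/2π)·sqrt(S/(V·Leff)) relates a Helmholtz cavity's neck area S, cavity volume V, and end-corrected neck length Leff to its resonance frequency; the dimensions below are purely illustrative and are not those of the Badaling design:

```python
import math

C = 343.0  # speed of sound in air, m/s

def helmholtz_frequency(neck_area, cavity_volume, neck_length, neck_radius):
    """Resonance frequency f = c/(2*pi) * sqrt(S / (V * Leff)),
    using a common end correction Leff = L + 1.7*r."""
    l_eff = neck_length + 1.7 * neck_radius
    return C / (2 * math.pi) * math.sqrt(neck_area / (cavity_volume * l_eff))

# Illustrative perforated ceramic tile: 8 mm hole radius,
# 10 mm tile thickness, 0.5 L cavity behind each hole.
r = 0.008
f = helmholtz_frequency(math.pi * r**2, 0.5e-3, 0.010, r)
print(round(f))  # roughly 225 Hz for these dimensions
```

Tuning cavity volume and perforation geometry shifts this peak, which is why scale-model testing of the actual cavity, as the abstract describes, is needed to confirm the target band.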
10:20–10:40 Break
10:40
3aAAb4. Enhancement of bass frequency absorption in fabric-based
absorbers. Jonas Schira (Sales Manager Acoust., Gerriets GmbH, Im Kirchenhürstle 5-7, Umkirch 79224, Germany, jschira@gerriets.com)
Variable acoustics has become an important topic in the acoustic design of multi-purpose venues, but also in classical concert halls and opera houses. Varying the reverberation time in the middle and high frequencies can easily be achieved by using fabric-based absorbers such as curtains or roll banners. When using fabric-based absorbers, however, the proportionally low absorption capability at bass frequencies below 400 Hz can be challenging when planning a multi-purpose facility. A full-range variable system would provide a great tool to consultants and architects. Double-layer roll banner systems with a highly absorbing fabric are mostly installed hanging freely in front of the wall. Research shows a significant enhancement of bass frequency absorption if the fabric is installed in an enclosed housing. This paper examines the fundamental problem of low bass absorption capability and also presents measurement data and a technical solution for the described problem.
11:00
3aAAb5. Noise insulation of a curtain wall for natural ventilation. Jean-Philippe Migneron, André Potvin, and Jean-Gabriel Migneron (School of Architecture, Univ. Laval, 1, côte de la Fabrique, Québec City, QC G1K 7P4, Canada, jean-philippe.migneron.1@ulaval.ca)
The growing interest in natural or hybrid ventilation systems brings a
challenge for good integration of openings in building façades. In a noisy
environment, there is a major limitation for the use of direct openings in
common building envelopes. As a part of a research project dedicated to
this problem, it is possible to evaluate the impact of a curtain wall that could
be added to create a double-skin façade. Experimental measurements made under laboratory conditions lead to estimates of the usual noise reduction and the sound transmission class. The airflow at constant differential pressure was assessed as a function of the aperture and compared to the sound insulation. Analyzing those parameters together gives useful information for the design of passive ventilation with significant airflow when acoustic performance is an important issue. This paper details the performance of common curtain wall assemblies.
11:20
3aAAb6. Design and optimization of the sound diffusers using radial basis functions-based shapes and genetic algorithms. Ricardo Patraquim,
Luis Godinho, and Paulo Amado Mendes (ISISE, Dept. Civil Eng., Univ. of Coimbra, DEC/FCTUC - Rua Luis Reis Santos, Coimbra 3030-788, Portugal, ricardo.patraquim@gmail.com)
Sound diffusers are a common technical solution, used over the last four decades for conditioning performance rooms with demanding acoustic requirements. A significant number of the commercially available acoustic diffusers are based on phase grating, or Schroeder-type, diffusers. In some particular cases, however, the visual appearance of the acoustic conditioning of a room with QRDs is considered by architects to be unaesthetic or visually unattractive in modern spaces, and thus other geometrical forms of the diffusive surfaces or elements need to be customized and explored. The optimization of diffuser design has been a topic of intense research in recent years. In this paper, the authors propose an alternative technique to define new shapes of sound diffusion configurations, based on the use of radial basis functions (RBFs). In addition, to allow the definition of optimal surface shapes for a given frequency band, a genetic algorithm is used. The diffusion coefficient is computed within the optimization procedure using the Kirchhoff integral equation. A set of application results is presented and discussed, and an experimental evaluation of the diffusers is performed in a simplified semi-anechoic room (to evaluate diffusivity) according to ISO 17497-2, in order to compare with the numerical results.
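As an illustration only (not the authors' implementation), the coupling of an RBF surface parameterization to a genetic algorithm can be sketched as follows; the fitness here is a toy depth-variance surrogate standing in for the Kirchhoff-integral diffusion coefficient, and all names and parameter values are hypothetical:

```python
import math
import random

# Surface depth(x) = sum_k w[k] * exp(-((x - c[k]) / r)**2), sampled on [0, 1).
N_POINTS, N_RBF, RBF_RADIUS, MAX_DEPTH = 64, 5, 0.12, 0.15  # metres

CENTERS = [(k + 0.5) / N_RBF for k in range(N_RBF)]

def profile(weights):
    """Sample the RBF-parameterized surface depth at N_POINTS abscissae."""
    xs = [i / N_POINTS for i in range(N_POINTS)]
    return [sum(w * math.exp(-((x - c) / RBF_RADIUS) ** 2)
                for w, c in zip(weights, CENTERS)) for x in xs]

def fitness(weights):
    """Toy surrogate rewarding depth variation; the paper instead computes
    the diffusion coefficient from the Kirchhoff integral equation."""
    d = profile(weights)
    mean = sum(d) / len(d)
    return sum((v - mean) ** 2 for v in d) / len(d)

def evolve(pop_size=30, generations=40, seed=1):
    rng = random.Random(seed)
    pop = [[rng.uniform(0, MAX_DEPTH) for _ in range(N_RBF)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]          # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, N_RBF)       # one-point crossover
            child = a[:cut] + b[cut:]
            i = rng.randrange(N_RBF)            # point mutation, kept in bounds
            child[i] = min(MAX_DEPTH, max(0.0, child[i] + rng.gauss(0, 0.01)))
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print(round(fitness(best), 5))
```

The appeal of the RBF parameterization is that a handful of weights describes a smooth, buildable surface, so the genetic search space stays small while still escaping the visual rigidity of Schroeder wells.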
11:40
3aAAb7. Rezonator effects of student cabinets used in classes. Filiz B.
Kocyigit (Architecture, Atilim Univ., incek, Ankara 06560, Turkey, filizbk@gmail.com)
Achieving acoustic comfort in high schools is of great importance for increasing the quality of education. Because of the high number of students in classrooms, canteens, cafeterias, and corridors, the voices of young girls and young men are intense. Apart from that, HVAC systems, lighting fixtures, announcement systems, and electronic devices increase the background sound level, so students communicate with each other at a higher sound pressure level. The conditions of use of the training areas require hygiene and materials resistant to vandalism. This necessitates the use of hard, smooth materials, which increases the RT of the interior spaces. Different methods are being sought for achieving the RT values required to increase D50, C80, and S/N levels in student-student and student-teacher communication in classrooms. For this purpose, schools of different sizes, finished with different interior materials, were observed, and RT, EDT, D50, and C80 measurements, as well as Lmax, Lmin, and Leq measurements, were made. The studies show that the circular and rod-shaped ventilation openings in the student cupboards used in these spaces exhibit absorption characteristics at low frequencies and work as resonators. In this study, samples from different classrooms and the factors affecting indoor sound quality were evaluated.
TUESDAY MORNING, 27 JUNE 2017
ROOM 207, 9:20 A.M. TO 12:00 NOON
Session 3aAAc
Architectural Acoustics: Acoustic Regulations and Classification of New and Retrofitted Buildings II
Birgit Rasmussen, Cochair
SBi, Danish Building Research Institute, Aalborg University Copenhagen, A.C. Meyers Vænge 15,
Copenhagen SV 2450, Denmark
Jorge Patricio, Cochair
LNEC, Av. do Brasil, 101, Lisbon 1700-066, Portugal
David S. Woolworth, Cochair
Oxford Acoustics, 356 CR 102, Oxford, MS 38655
Contributed Papers
3aAAc1. Determining “reasonable” levels of sound insulation in domestic properties for use in building regulations. Richard G. Mackenzie, Nicola Robertson (RMP Acoust., Edinburgh Napier Univ., 42 Colinton Rd.,
Edinburgh, Scotland EH10 5BT, United Kingdom, ri.mackenzie@napier.ac.
uk), and Sean Smith (Inst. for Sustainable Construction, Edinburgh Napier
Univ., Edinburgh, United Kingdom)
The Scottish Building Regulations, like those of other countries, provide standards for the protection of occupants’ health. Minimum sound insulation standards are provided to control noise passing through walls and floors from neighboring properties. In Scotland, the minimum standard should provide adequate protection for a “reasonable” person from “normal” living activities. This paper presents the findings of a research study undertaken by Edinburgh Napier University to assess the level of sound insulation that test subjects would deem “reasonable” for a range of noise sources passing through a variety of construction types. The study, conducted in the University’s auralization suite, assessed the responses of over 100 participants subjected to common domestic noise sources passing through separating structures. The participants were able to adjust the source noise level for each source type to determine the level of noise from their neighbor they would consider “reasonable” to tolerate. The participants’ determinations were correlated to the equivalent sound insulation for each source and construction assessed. The paper will present the setup in the test suite, and the results of the study will be presented in a range of acoustic parameters (DnT,w, STC, R’w) and assessed against a range of countries’ current standards.
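The band-level quantity underlying the DnT,w parameter cited above can be illustrated with a short sketch. The formula is the standardized level difference from the ISO field-measurement standards; the measurement values below are hypothetical, not taken from the paper:

```python
import math

def dnt(l1_db, l2_db, t_s, t0_s=0.5):
    """Standardized level difference DnT for one frequency band:
    source-room level minus receiving-room level, corrected by the
    receiving room's reverberation time T against the 0.5 s reference."""
    return l1_db - l2_db + 10.0 * math.log10(t_s / t0_s)

# Hypothetical 500 Hz band: 95 dB in the source room, 45 dB in the
# receiving room, 0.8 s measured reverberation time next door.
print(round(dnt(95.0, 45.0, 0.8), 1))  # 52.0
```

The single-number ratings compared in the paper (DnT,w, STC, R’w) are then obtained by fitting a standardized reference curve across all measured bands, a step omitted here.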
9:40
3aAAc2. A strategy for sustainable acoustic classification scheme of
dwellings. Miomir Mijic (School of Elec. Eng., Univ. of Belgrade, Bulevar
Kralja Aleksandra 73, Belgrade 11000, Serbia, emijic@etf.rs), Aleksandar
Milenkovic, Danica Boljevic (Acoust. Lab., Institut IMS, Belgrade, Serbia),
and Dragana Sumarac Pavlovic (School of Elec. Eng., Univ. of Belgrade,
Belgrade, Serbia)
The new legislation in Serbia for sound insulation in buildings, currently in preparation, will introduce for the first time a national acoustic classification of dwellings. Certain structural systems and building typologies are common in the existing housing stock; some result from seismic demands and some from architects’ routines. Together they define the realistically attainable range of apparent sound reduction index values at different positions in existing buildings. To achieve a sustainable classification scheme in the new legislation, an analysis of sound insulation in existing buildings was performed, and the implications of possible boundary values between classes for the acoustic score of the housing stock were analyzed. Based on this approach, suggestions concerning the classification scheme and class limits are presented.
Invited Papers
10:00
3aAAc3. Acoustic regulations and classification of various types of buildings in the Nordic countries. Steindor Gudmundsson (Verkis Consulting Engineers Ltd., Ofanleiti 2, Reykjavik IS-103, Iceland, stgu@verkis.is)
It is relatively well known that the Nordic countries have national classification standards for dwellings and that one of the classes (class C) is referred to in the building regulations as the minimum acoustic quality. In Norway, Iceland, and Sweden, there are also national acoustic classification standards for various other types of buildings, and in Norway and Iceland, class C is referred to as the minimum acoustic quality in the building regulations for these buildings as well as for dwellings. In Sweden, the acoustic quality for these buildings is recommended but not mandatory. In the paper, the different types of premises for work included in the standards are discussed, with examples of some of the acoustic demands. The regulated buildings include schools, kindergartens, hospitals, and nursing institutions. Hotels and offices are also included, as are the minimum sound absorption and maximum noise levels in restaurants, cultural and sports buildings, and many other premises for work. Sometimes it is decided not to use only the minimum demands defined by class C but the better quality defined by class B (or even class A).
3665
J. Acoust. Soc. Am., Vol. 141, No. 5, Pt. 2, May 2017
Acoustics ’17 Boston
10:20–10:40 Break
10:40
3aAAc4. Experience obtained during the ten years of the sound classification practical use for regulation building acoustics in
Lithuania. Aleksandras Jagniatinskis, Boris Fiks, and Marius Mickaitis (Vilnius Gediminas Tech. Univ., Linkmenu 28, Vilnius 08217,
Lithuania, akustika@vgtu.lt)
Advantages of applying sound classification schemes were foreseen for two applications. In legal regulation, a scheme expresses protection-against-noise requirements for buildings in a user-friendly, easily understandable form. A scheme is also a tool for the designer, advising criteria for suitable acoustic comfort in premises, and a means to label buildings according to acoustic quality. The classification scheme implemented in Lithuania comprises five acoustic comfort classes: A, B, C, D, and E. The acoustical requirements expressed by the class C limit values correspond to the lowest acceptable acoustic comfort level. The lowest (worst) sound class is E, whose limit values correspond to the acoustical comfort of old buildings erected under previously existing sound insulation requirements. For this reason, the step in limit values between classes cannot be fixed and depends on changes in acoustical demands over time. For enforcement, legal requirements are expressed through mandatory sound class C for new dwellings and class E for renovated buildings; from 2007, pre-completion testing became mandatory. In this approach, a guideline for verifying compliance with an acoustic class becomes more important.
11:00
3aAAc5. Revision of the Swedish sound classification scheme for premises. Krister Larsson (Bldg. Technology/Sound & Vibrations,
RISE Res. Institutes of Sweden, Box 857, Boras SE-50115, Sweden, krister.larsson@sp.se)
The Swedish building code places requirements on noise protection in new buildings. For residential buildings, minimum requirements are given directly in the building code, and demands for better acoustic quality are given in a sound classification scheme according to Swedish Standard SS 25267, which was revised in 2015. For premises such as offices, schools, or hotels, the building code does not contain specific quantified minimum requirements on acoustic properties but refers instead to the sound classification standard SS 25268 for tabulated values. Sound class C corresponds to the minimum requirements for new buildings, and demands for better acoustic quality are given according to class B or class A. The sound classification standard for premises, SS 25268, is currently under revision, and in recent years the needs for changes and updates have been collected in cooperation with stakeholders and experts. The paper presents the motivations for the revision and the status of the work. Ideas for major revisions are discussed, such as suggestions for improved room acoustic requirements in schools, as well as sound insulation and acoustic comfort in open-plan offices.
11:20
3aAAc6. A pilot study on acoustic regulations for office buildings—Comparison between selected countries in Europe. Birgit Rasmussen (SBi, Danish Bldg. Res. Inst., Aalborg Univ. Copenhagen, A.C. Meyers Vænge 15, Copenhagen SV 2450, Denmark, bir@sbi.aau.dk)
Acoustic regulations or guidelines for office buildings are found in several countries in Europe. The main reason is to ensure satisfactory acoustic working conditions for the various tasks and activities taking place in the many different kinds of rooms in such buildings.
Examples of room types are offices, meeting rooms, open-plan offices, corridors, reception areas, dining areas, all with different acoustic
needs. Some countries specify only a few acoustic limit values, while others define several different criteria, guidelines only, or a combination of requirements and guidelines. As a pilot study, a comparison of requirements in selected European countries has been carried out. The findings show a diversity of limit values for acoustic requirements. The paper includes examples of requirements for reverberation time, airborne and impact sound insulation, and noise from traffic and from service equipment. Examples of guidelines will also be presented. The discrepancies between countries are discussed, and some priorities for adjusting acoustic regulations will be given. In addition to a set of regulations or guidelines, some countries include office buildings in national acoustic classification standards with different acoustic quality levels. The paper will indicate examples of such classification criteria for comparison with acoustic regulations.
Contributed Paper
11:40
3aAAc7. Acoustic behavior of facades: Acoustic isolation versus air permeability. Diogo M. Ferreira (Engenharia Civil, Faculdade de Ciências e Tecnologia da Universidade de Lisboa FCT-UNL, Rua do Casal, n 29 1 B, Cacem 2735-354, Portugal, diogom.fer88@gmail.com)
Today, the acoustic comfort of dwellings stands out as a very important factor in the overall comfort of their inhabitants. The sound insulation that building facades can provide contributes substantially to this comfort. Since a facade comprises an opaque portion and a translucent portion, the latter is the more relevant to the sound insulation the facade element provides, which is influenced by the window itself as well as by the openings and the air permeability associated with them. In order to evaluate the influence of the
possible differences in acoustic insulation of a residential facade over time, an experimental study was developed at LNEC. The experimental work was based on a set of acoustic and air permeability tests considering several opening areas, between 0.5 cm2 and 250 cm2, in a given test window. The results obtained make it possible to evaluate how sound insulation versus air permeability can parameterize the performance of building facades, and what its relationship is to the well-being of residents. For the 0.5 and 1 cm2 aperture areas, no acoustic or air permeability differences were found relative to the reference values, suggesting that this scenario does not cause any significant variation in a housing facade. For the last two areas studied (200 and 250 cm2), in acoustic and air permeability terms, it is concluded that the scenario is similar to that of an open window, due to the high loss of sound insulation and the low air permeability.
TUESDAY MORNING, 27 JUNE 2017
ROOM 313, 9:15 A.M. TO 12:20 P.M.
Session 3aAB
Animal Bioacoustics: Comparative Bioacoustics: Session in Honor of Robert Dooling I
Micheal L. Dent, Cochair
Psychology, University at Buffalo, SUNY, B76 Park Hall, Buffalo, NY 14260
Amanda Lauer, Cochair
Otolaryngology-HNS, Johns Hopkins University School of Medicine, 515 Traylor, 720 Rutland Ave., Baltimore, MD 21205
Chair’s Introduction—9:15
Invited Papers
9:20
3aAB1. Perceptual perseverance in a passerine with permanent papillar impairment. Amanda Lauer (Otolaryngology-HNS, Johns
Hopkins Univ. School of Medicine, 515 Traylor, 720 Rutland Ave., Baltimore, MD 21205, alauer2@jhmi.edu) and Robert Dooling
(Psych., Univ. of Maryland, College Park, MD)
The Belgian Waterslager canary is unique for its loud, low-pitched song, which is accompanied by a hereditary pathology involving missing and damaged hair cells in the basal end of the papilla. These birds have been bred for hundreds of years for loud, low-pitched song, and breeders likely selected for high-frequency hearing loss. In spite of hair cell regeneration, the papillae in these birds never approach those of normal-hearing canaries. Auditory nerve and brainstem responses are also diminished, and auditory brainstem nuclei show reduced cell size. These birds also show a suite of psychoacoustic deficits consistent with impaired active processing, as seen in humans with hearing loss. It is rather remarkable, then, that Belgian Waterslagers are able to learn, discriminate, and produce complex, species-specific sounds with such impaired frequency selectivity and phase processing. This feat, in the presence of severe peripheral auditory damage, underscores the importance of temporal information in avian auditory perception and vocal learning. The clear genetic basis of this pathology places the Belgian Waterslager canary in another unique position: it is the only nonhuman organism that must navigate vocal development and vocal learning in the face of an inherited developmental peripheral auditory pathology.
9:40
3aAB2. Cormorant audiograms under water and in air. Ole N. Larsen (Biology, Univ. of Southern Denmark, Campusvej 55, Odense M 5230, Denmark, onl@biology.sdu.dk), Jakob Christensen-Dalsgaard, Alyssa Maxwell, Kirstin A. Hansen, and Magnus Wahlberg (Biology, Univ. of Southern Denmark, Odense M, Denmark)
Little is known about the underwater hearing abilities of diving birds. To help fill this gap, we measured audiograms of cormorants (Phalacrocorax carbo) using two different methods. Wild-caught cormorant fledglings were anesthetized, and their auditory brainstem responses (ABR) to clicks and tone bursts were measured, first in an anechoic box in air and then in a large water-filled tank with their head and neck submerged. In addition, audiograms were obtained from two adult cormorants using the psychophysical method of constant stimuli, both in air and underwater. The shape of the audiograms obtained from in-air ABR recordings was similar to that reported for birds of similar size. The highest sensitivity in air was found at about 2 kHz, while the most sensitive response underwater was at about 1 kHz. In general, the audiograms obtained from psychophysical measurements were similar in shape to the ABR audiograms but showed much higher sensitivity. The results from both methods suggest that cormorants have rather poor in-air hearing compared to similar-sized birds. Their underwater hearing sensitivity, however, is higher than would be expected for purely air-adapted ears, and it is likely that cormorants use their underwater hearing during foraging dives.
10:00
3aAB3. Studies of rhythmic synchronization in avian vocal learners using operant conditioning methods. Yoshimasa Seki (Aichi
Univ., 1-1 Machihata-machi, Toyohashi 4418018, Japan, yoshimasa.seki@gmail.com) and Kazuo Okanoya (The Univ. of Tokyo,
Meguro-ku, Japan)
Researchers have argued that non-human animals exhibit a sense of rhythm. Budgerigars and Bengalese finches were trained, using an operant conditioning method, to peck a key repeatedly in response to metronomic stimuli. Peck timing in budgerigars was distributed around the stimulus onset of the metronome, suggesting the birds synchronized their body movement to the rhythm of the metronome. However, peck timing in finches appeared to correspond to an estimated reaction time, suggesting that pecks were mere reactions to the metronome stimuli. Next, budgerigars were trained to peck two keys alternately without any metronomic stimuli, so that pecking of the keys was self-paced. Metronomic sounds were created to match the intervals of the self-paced pecking. Additional metronomic stimuli were created to be 10% faster, 10% slower, and 20% slower than the original self-paced metronome stimuli for each bird.
These sped-up or slowed-down metronomic sounds were played back in the background of the self-paced pecking task. In this experiment, rhythmic synchronization was not observed; however, one bird exhibited shorter pecking intervals for the faster metronome and longer intervals for the slower metronome, suggesting that the metronomic stimuli influenced peck timing in this bird in the absence of training.
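A standard way to quantify the kind of phase locking these peck-timing analyses describe (a common circular-statistics measure, not one the abstract itself specifies) is the mean resultant vector length of peck times relative to the metronome period:

```python
import math

def mean_vector_length(peck_times, period):
    """Circular concentration of peck times relative to a metronome of the
    given period: near 1.0 = tight phase locking, near 0 = no synchronization."""
    phases = [2.0 * math.pi * (t % period) / period for t in peck_times]
    c = sum(math.cos(p) for p in phases) / len(phases)
    s = sum(math.sin(p) for p in phases) / len(phases)
    return math.hypot(c, s)

# Hypothetical pecks clustered near each 0.5 s metronome onset score high;
# pecks smeared evenly across the cycle score near zero.
locked = [0.02, 0.51, 0.98, 1.52, 2.01]
smeared = [0.05, 0.30, 0.62, 0.88, 1.15]
print(mean_vector_length(locked, 0.5) > mean_vector_length(smeared, 0.5))  # True
```

Comparing this statistic against the reaction-time distribution is one way to separate anticipatory synchronization from mere stimulus-following.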
10:20
3aAB4. Elaborate network of avian intracranial air-filled cavities and its potential role in hearing. Kenneth K. Jensen (Starkey
Hearing Technologies, 8901 Rockville Pike, Bethesda, MD 20889, kenneth.kragh.jensen@gmail.com), Jakob Christensen-Dalsgaard
(Inst. of Biology, Univ. of Southern Denmark, Odense M, Denmark), and Ole N. Larsen (Inst. of Biology, Univ. of Southern Denmark, Odense M, Denmark)
Many avian species possess an intracranial air-filled passage, the interaural canal, that directly connects the medial surfaces of the tympanic membranes. It is known to greatly improve directional hearing by passive acoustics in small animals, where the external interaural delay is too small to allow temporal neural coding. For a long time, the avian interaural canal was assumed to be a simple cylindrical cavity. Contrary to this, we discovered through CT scans and other techniques that many birds (e.g., zebra finches and pigeons) in fact have a rather elaborate system of interconnected air-filled cavities throughout the entire skull. The cavities communicate directly or indirectly with the tympanic membranes. How does this network affect directional hearing in birds? On one hand, it may simply be an adaptation to flight and play little or no role in hearing. On the other hand, theoretical considerations suggest that the directional response may be optimized through frequency-dependent “tuning” of attenuation and phase shift through the interaural canal. In this talk, we will first present the anatomy, then present some preliminary directional responses from zebra finch ears, and finally discuss future directions and considerations for what may constitute the functional interaural canal in birds.
10:40
3aAB5. Psychophysical basis for finite-state song syntax in Bengalese finches. Kazuo Okanoya (Life Sci., The Univ. of Tokyo, 3-8-1
Komaba, Meguro-ku 153-8902, Japan, cokanoya@mail.ecc.u-tokyo.ac.jp)
Bengalese finches have been domesticated in Japan for 250 years, from wild white-rumped munias originally imported from China. Bengalese finches were domesticated for parental behavior and white color morphs, not for song. Nevertheless, Bengalese finches sing complex songs: 2–5 song notes are chunked, and chunks are organized into a finite-state syntax. Song duration indicates the physical fitness of the bird, and song complexity stimulates female reproductive behavior. We examined sequential expertise in Bengalese finches using behavioral procedures. In a click detection task, birds were trained to peck when they heard a short click embedded in a chunk or between chunks. Reaction times were longer in the former case. In a flash-song interruption task, song termination occurred more often when the flash was given between chunks. These data suggest the perceptual and motor reality of chunk structures. In a serial reaction time task, birds were trained to peck horizontally arranged keys in a certain order. Male birds learned the task better than females, suggesting that song motor control capacity may be utilized in other motor domains as well. However, abstract rule learning by auditory discrimination was not possible, suggesting that this sequential expertise may be limited to the motor domain.
11:00
3aAB6. Mouse psychoacoustics: Not just re-Dooling the bird psychoacoustics research. Micheal L. Dent (Psych., Univ. at Buffalo,
SUNY, B76 Park Hall, Buffalo, NY 14260, mdent@buffalo.edu)
The clever artist Willie Nelson once said, “the early bird gets the worm but the second mouse gets the cheese.” While Willie was not likely referring to animal psychoacoustics, the quote easily applies to the historical trajectory of the field. For years, birds served as a primary model for human hearing. Many species were already domesticated; they were known to be vocal learners; the hearing abilities of numerous bird species are similar to those of humans; and, finally, birds can be quickly trained using operant conditioning procedures and positive reinforcement. Robert Dooling was a pioneer of these techniques, and results from his laboratory were instrumental in establishing birds as models for human auditory processing for decades. With the sequencing of the mouse genome in 2002, however, many researchers turned their focus toward these small mammals as research models. Genetically engineered strains of mice mimicking human disorders could be easily developed and studied, but unfortunately, the basic hearing and communication abilities of mice were largely unknown, limiting their utility as models. Numerous successful adaptations of Dooling’s behavioral methods have recently been made to measure auditory acuity in mice, for the perception of both simple stimuli and complex vocalizations.
11:20
3aAB7. Vocal production, auditory perception, and signal active space in an open habitat specialist, the Grasshopper Sparrow.
Bernard Lohr (Dept. of Biological Sci., Univ. of Maryland Baltimore County, 1000 Hilltop Circle, Baltimore, MD 21250, blohr@umbc.edu)
Grasshopper Sparrows are specialists in open grassland habitats and face the acoustic challenges normally associated with that habitat type. They produce several types of calls and two distinct types of song, all of which are high-pitched for songbirds (6–10 kHz). The primary territorial song, also known as the “buzz” song, consists of 3 or 4 brief introductory notes followed by a high-pitched, rapidly modulated trill. The function of the secondary song type, or “warble” song, remains unknown, but data from autonomous recording units demonstrate a correlation with pairing status and breeding-cycle timing. Operant discrimination tests with Grasshopper Sparrows show a broader audiogram and an extended high-frequency auditory limit compared with other small songbirds, suggesting that these birds, and potentially related species as well, have evolved a wider spectral range of auditory sensitivity in this habitat type. Auditory detection and discrimination thresholds were used to explore the active space and communication distances of this species’ vocalizations, using a habitat bioacoustics model incorporating its normal territorial behavior. Results suggest that songs can be detected up to three territories away from the singer, but that birds may have difficulty discriminating between different conspecific songs more than two territories away.
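A minimal active-space estimate of the kind such habitat models formalize can be sketched as follows. This is an illustration with invented numbers, not the authors’ model: it finds the range at which a song falls to a masked detection threshold under spherical spreading plus a linear habitat excess attenuation:

```python
import math

def received_level(source_db_at_1m, r_m, excess_db_per_m):
    # Spherical spreading (20 log10 r) plus linear excess attenuation.
    return source_db_at_1m - 20.0 * math.log10(r_m) - excess_db_per_m * (r_m - 1.0)

def active_space(source_db_at_1m, threshold_db, excess_db_per_m, step_m=1.0):
    # March outward until the received level drops below the detection threshold.
    r = 1.0
    while received_level(source_db_at_1m, r, excess_db_per_m) > threshold_db:
        r += step_m
    return r

# Invented values: 85 dB SPL at 1 m, a 20 dB masked detection threshold,
# and 0.1 dB/m excess attenuation assumed for open grassland.
print(active_space(85.0, 20.0, 0.1))  # on the order of a couple hundred meters
```

Replacing the detection threshold with a (higher) discrimination threshold in the same sketch shrinks the radius, which is the qualitative pattern the abstract reports.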
Contributed Papers
11:40
3aAB8. Saliency of temporal fine structure in zebra finch vocalizations. Nora H. Prior, Edward Smith, Gregory F. Ball, and Robert Dooling (Biology/Psych., Univ. of Maryland, 4094 Campus Dr., College Park, MD 20742, nhprior@umd.edu)
Previous work has shown that birds in general, and zebra finches in particular, have remarkable sensitivity to temporal fine structure (TFS). While spectral, envelope, and TFS cues are all present in vocalizations, TFS has been largely ignored, since sonographic, rather than time-waveform, analyses have been the mainstay in bioacoustics. However, birds’ impressive sensitivity to TFS raises the question of whether behaviorally relevant information is carried within the TFS. Indeed, zebra finches have at least 10 call types that both males and females use in sophisticated ways to coordinate activities. Furthermore, zebra finch vocalizations are typically composed of harmonic stacks, rich in TFS. Here, we isolated and described patterns in the TFS between and within individuals for different call types and constructed test stimuli from these patterns of TFS for psychoacoustic experiments. Demonstrating TFS sensitivity within natural stimuli would argue for the salience of TFS in real-life communication in birds. [Work supported by a NIDCD T32 DC000046-16 to NHP.]
12:00
3aAB9. Micro-scale habitat use of humpback whales around Maui Nui, Hawaii. Anke Kügler (Hawaii Inst. of Marine Biology, Univ. of Hawaii Manoa, 2525 Correa Rd. HIG 132, Honolulu, HI 96822, akuegler@hawaii.edu) and Marc Lammers (Hawaii Inst. of Marine Biology, Univ. of Hawaii Manoa, Kaneohe, HI)
Each winter, thousands of North Pacific humpback whales (Megaptera novaeangliae) migrate from their high-latitude feeding grounds in Alaska to mate and calve in the shallow tropical waters around the Main Hawaiian Islands. Previous studies on humpback whales in Hawaii have focused on the whales’ acoustic behavior and their general distribution within the islands, but little is known about small-scale habitat preferences. Off the island of Maui, anecdotal reports from commercial operators and researchers tell of clusters of whales within the breeding area. However, to our knowledge, no studies have been conducted to examine the phenomenon of micro-scale aggregations. A pilot study using passive acoustic monitoring with Ecological Acoustic Recorders (EARs) was conducted from January through early March 2016 at three sites off Maui, using male singers as a proxy for relative whale abundance. Root-mean-square sound pressure levels (SPLs) were calculated to compare low-frequency acoustic energy (0–1 kHz) between the different sites. Preliminary results indicate that singers alternate between the two farthest sites. Further, different diel patterns in song activity were observed among the sites. These results suggest at least some degree of variable spatial and temporal habitat use and that further monitoring is warranted.
TUESDAY MORNING, 27 JUNE 2017
ROOM 310, 9:15 A.M. TO 12:20 P.M.
Session 3aAO
Acoustical Oceanography and Underwater Acoustics: Acoustic Measurements of Sediment Transport and Near-Bottom Structures I
James Lynch, Cochair
Woods Hole Oceanographic, MS # 11, Bigelow 203, Woods Hole, MA 02543
Peter D. Thorne, Cochair
Marine Physics and Ocean Climate, National Oceanography Centre, Joseph Proudman Building, 6 Brownlow Street, Liverpool L3 5DA, United Kingdom
Chair’s Introduction—9:15
Invited Papers
9:20
3aAO1. Perspectives of ongoing acoustic developments for measuring sediment dynamics. Peter D. Thorne (Joseph Proudman Bldg., National Oceanogr. Ctr., 6 Brownlow St., Liverpool, Merseyside L3 5DA, United Kingdom, pdt@noc.ac.uk) and David Hurther (Lab. of Geophysical and Industrial Flows (LEGI), CNRS UMR 5519, Grenoble, France)
Sediment entrainment, transport, and deposition over bedforms can be highly dynamic, with strong spatial and temporal variability. To probe the multi-scale processes of sediment transport, there have been continuing developments in instrumentation to obtain high-resolution measurements of near-bed sediment dynamics. Such observations are used for both the development and assessment of process-based sediment transport modeling. Here, results are reported from studies on developing high-resolution acoustic instruments, deployed to make co-located observations of bedforms, the near-bed and suspended concentration fields, and the horizontal and vertical
components of intra-wave and turbulent flows. To evaluate the instruments, a series of bottom boundary layer measurements was collected over sandy sediments under differing conditions. The acoustic instruments under examination consisted of a Bedform and Suspended Sediment Imager (BASSI), a three-dimensional acoustic ripple profiler (3D-ARP), and high-resolution Acoustic Concentration and Velocity Profilers (HR-ACVP). The results obtained from the instruments are used to illustrate ongoing developments in acoustics and its expanding capability for studying the dynamics of near-bed sediment transport processes.
9:40
3aAO2. Acoustic measurements of sediment transport: From pulse-coherent Doppler to an autonomous bathymetry vessel. Peter Traykovski (Appl. Phys. and Eng., Woods Hole Oceanographic Inst., Woods Hole, MA 02543, ptraykovski@whoi.edu)
In this talk, we present recent advances in sediment transport research using high-frequency acoustic sensors. The topics covered range from the development and use of a multifrequency pulse-coherent Doppler system to measuring bathymetry with an unmanned autonomous surface vessel (ASV) equipped with a state-of-the-art swath bathymetry sensor. The pulse-coherent Doppler was able to measure velocity profiles through the wave boundary layer in a high-concentration fluid mud layer off the coast of Louisiana. At the end of energetic wave events, as the forcing decreased, the acoustic measurements revealed the settling of the mud layer, with a transition from turbulent to laminar flow and a four-order-of-magnitude increase in the viscosity of the mud and water fluid. This increase was inferred from the vertical structure of the wave boundary layer and offers a unique in-situ, non-invasive approach to measuring the viscosity of flows with complex rheology. The ASV measurements allow O(10 cm) resolution bathymetric measurements in energetic tidal environments, where navigating manned vessels on track lines accurate enough for repeat bathymetry measurements is difficult, and are providing new insights into the dynamics of tidal inlets.
10:00
3aAO3. Measuring sediments: Backscatter optics, laser diffraction, acoustics, and combined optics and acoustics. Yogesh C.
Agrawal (Sequoia Sci., Inc., 2700 Richards Rd., Ste. 107, Bellevue, WA 98005, yogi.agrawal@sequoiasci.com)
Over the last three-plus decades, point measurements of sediment concentration in water have mostly been made using optical backscatter. Physics tells us that this signal correlates with particle area concentration, so that when measuring volume concentrations, suspended load (sand) can be swamped by wash load (fines) and remain unseen. Over two decades ago, we introduced laser diffraction to the marine-aquatic environment, delivering concentration and size distribution (and the settling velocity spectrum when equipped with a settling tube), with the characteristic of uniform sensitivity over a 200:1 range of grain sizes, excluding flocs, which have variable fractal dimensions. Most recently, we introduced a high-frequency 8-MHz acoustic backscatter point-measurement system with the attraction of enhanced sensitivity to suspended load (i.e., the opposite of backscatter optics), a higher limit on sediment concentration, and greater tolerance to fouling. Very recently, a small conceptual jump was made to combine backscatter optics and 8-MHz backscatter acoustics into a sensor with nearly uniform sensitivity to sizes over a 1–500 micron range. I will review each briefly and illustrate how well these methods work in nature. Finally, I will offer thoughts on the information content of multi-frequency acoustics.
10:20–10:40 Break
10:40
3aAO4. Relating acoustic and optical measurements to particle and floc concentrations in the bottom boundary layer. Christopher
R. Sherwood (US Geological Survey, 384 Woods Hole Rd., Woods Hole, MA 02543, csherwood@usgs.gov)
Profiles of optical and acoustic properties were measured by moving instruments vertically in the bottom boundary layer, between
the bottom and about 2 m above the sea floor, at a sandy inner shelf site 12 m deep. Profiles were performed every two hours for 36
days, spanning a range of wave and current conditions. Acoustic instruments on the profiling arm included a three-frequency acoustic
backscatter profiler and a 1.5-MHz acoustic Doppler velocimeter. Optical instruments on the arm measured backscatter, attenuation, and
absorption. Stationary instruments on the main tripod measured waves, currents, Reynolds stress, and vertical temperature/salinity gradients. The acoustic backscatter measurements were coupled with optical measurements and recent models for the backscattering and
absorption by flocs to provide time series of particle-concentration and size profiles. Remarkable changes in particle sizes, concentrations, and inferred densities and settling velocities were observed. These observations demonstrate the value of the traditional Rouse profile assumptions often used in suspended-sediment transport calculations, but also reveal the rich temporal and spatial complexity of the near-bottom particle field.
11:00
3aAO5. Acoustic scattering from flocculating suspensions. Sarah Bass, Erin V. King, Andrew J. Manning (School of Biological and
Marine Sci., Univ. of Plymouth, Drake Circus, Plymouth PL4 8AA, United Kingdom, sbass@plymouth.ac.uk), and Peter D. Thorne
(National Oceanogr. Ctr., Liverpool, Merseyside, United Kingdom)
Acoustic backscatter from sediment suspensions in the marine environment has been limited in application by the lack of understanding of how sound scatters from flocculating particles. To support theoretical development of sound scattering, combined measurements of high frequency acoustic backscatter and particle population characteristics are presented over a range of flocculating
suspensions, from natural in-situ muddy suspensions to laboratory controlled pure clay flocs. Field measurements of cohesive suspended
sediments were made in the meso-tidal Tamar Estuary, Devon, UK over several tidal cycles during spring tides. Controlled laboratory
experiments were conducted using oscillating grid turbulence to suspend kaolin, oxidized, and natural marine sediments sieved below
63 microns. In both field and laboratory cases distributions of floc size and settling velocity were acquired using video techniques (from
which effective density was derived) and acoustic backscatter measured over frequencies of 1-4 MHz. Particle measurements were complemented by pumped suspension samples later analyzed for mass and organic content. Measured scattering properties from the various
sediments are compared against each other and with predictions from a hybrid elastic-fluid sphere model. Initial results suggest a good
agreement with the model for both in-situ field suspensions and oxidized natural sediments from the laboratory.
3670
J. Acoust. Soc. Am., Vol. 141, No. 5, Pt. 2, May 2017
Acoustics ’17 Boston
3670
11:20
3aAO6. Doppler sonar measurements of bedload transport: Evaluations using computer models and lab trials. Len Zedel (Phys.
and Physical Oceanogr., Memorial Univ. of NF, Chemistry-Phys. Bldg., St. John’s, NF A1B 3X7, Canada, zedel@mun.ca), Alex E. Hay
(Oceanogr., Dalhousie Univ., Halifax, NS, Canada), Greg Wilson (CEOAS, Oregon State Univ., Corvallis, OR), and Jenna Hare (Oceanogr., Dalhousie Univ., Halifax, NS, Canada)
Studies have demonstrated that bottom velocity measurements from Doppler sonar systems are proportional to bedload transport
rates. Given the complexity of acoustic backscatter and sidelobe interactions near the bottom boundary, the exact source of this signal is
not obvious. We explore this measurement using a computer simulation and a series of laboratory trials. The system that we have developed for these studies, MFDop, is a multi-frequency (1.2-2.2 MHz), bistatic Doppler sonar that provides 3-component velocity profiles
over a ~30 cm profile with ~5 mm resolution at a rate of 50 profiles/s. Model simulations show that sidelobe contamination biases
the velocity measurements above the bottom but that effect is reduced in the bedload layer itself. We report on tests of our system in
field conditions at the St. Anthony Falls Laboratory (SAFL). The SAFL facility provides a 1.8 m deep, 2.75 m wide flume tank. In our
trials, we used a 1 m depth flow of about 1 m/s over a mobile bed of sand with median grain size d50 = 0.4 mm. We find agreement
between transport estimates determined using the MFDop, the sediment trap system integrated in the SAFL flume, and estimates based
on bedform migrations.
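As a rough order-of-magnitude check on conditions like those in the trial above, a classical bedload formula can be evaluated for the stated flow speed and grain size. This is the Meyer-Peter and Mueller (1948) relation, purely illustrative and not the MFDop estimate; the bed drag coefficient is an assumed value.

```python
import math

# Meyer-Peter & Mueller bedload sketch for u = 1 m/s over d50 = 0.4 mm sand
rho, rho_s, g = 1000.0, 2650.0, 9.81   # water and quartz densities, gravity (SI)
d50 = 0.4e-3                           # median grain size, m (from the abstract)
u = 1.0                                # flow speed, m/s (from the abstract)
Cd = 0.003                             # assumed bed drag coefficient

tau = rho * Cd * u ** 2                               # bed shear stress, Pa
theta = tau / ((rho_s - rho) * g * d50)               # Shields parameter
theta_c = 0.047                                       # critical Shields parameter
qb_star = 8.0 * max(theta - theta_c, 0.0) ** 1.5      # dimensionless transport rate
qb = qb_star * math.sqrt((rho_s / rho - 1) * g * d50 ** 3)  # volumetric rate, m^2/s
```

For these values the Shields parameter is well above critical, i.e., the bed is mobile, consistent with the observed transport.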
11:40
3aAO7. Turbulent particle flux measurement with pulse-coherent Doppler sonar. Alex E. Hay and Kacie Conrad (Oceanogr., Dalhousie Univ., 1355 Oxford St., Halifax, NS B3H4R2, Canada, alex.hay@dal.ca)

The accuracy of velocity estimates from pulse-coherent acoustic Doppler systems is related to the magnitude, R, of the ensemble-averaged complex pulse-pair correlation. The closer R is to unity, the more accurate the estimate. The accuracy of particle concentration estimates is also related to the value of R but, in contrast, high accuracy requires R to be less than unity, i.e., the amplitudes from individual pulses in an ensemble should be independent, in order to beat down the Rayleigh statistics associated with configuration noise. Consequently, when estimating the turbulent component of the particle flux, i.e., the product of the particle concentration and velocity fluctuations, a tradeoff arises between these conflicting requirements for velocity and concentration accuracy. This tradeoff is investigated through a statistical model of sound scattering from particles embedded in idealized turbulence, and laboratory experiments in which particle velocities in turbulent flow are measured both acoustically, via pulse-coherent sonar, and optically, using particle image velocimetry.

Contributed Paper

12:00

3aAO8. Sediment transport studies—A brief retrospective look. James Lynch, Peter Traykovski, and Arthur Newhall (Woods Hole Oceanographic Inst., MS #11, Bigelow 203, Woods Hole, MA 02543, jlynch@whoi.edu)

In this paper, we present a short retrospective look at the evolution of acoustical, optical, and related measurements of sediment transport, bottom stress, and bottom bedforms over the past quarter century, using our programs at the Woods Hole Oceanographic Institution as a “representative sample.” Results from the High Energy Benthic Boundary Layer Experiment, the Sediment Transport on Shelves and Slopes experiment, and the Strata Formation on Shelves and Slopes experiment will be shown. Emphasis will be given to both technological and scientific advances. [Work supported by ONR.]
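The pulse-pair (covariance) velocity estimate and its correlation magnitude R, discussed in 3aAO7, can be sketched on synthetic data as follows. All parameter values (carrier frequency, PRF, noise level, target velocity) are illustrative assumptions, not the MFDop or 3aAO7 configuration.

```python
import numpy as np

# Pulse-pair (covariance) Doppler estimator on synthetic complex returns
c = 1500.0     # sound speed in water, m/s
f0 = 1.5e6     # carrier frequency, Hz (assumed)
prf = 1000.0   # pulse repetition frequency, Hz (assumed)

def pulse_pair(z, prf, f0, c):
    """Radial velocity and normalized correlation magnitude from returns z[n]."""
    R1 = np.mean(z[1:] * np.conj(z[:-1]))            # lag-1 autocovariance
    v = c * prf * np.angle(R1) / (4 * np.pi * f0)    # radial velocity, m/s
    R = np.abs(R1) / np.mean(np.abs(z) ** 2)         # |R| in [0, 1]
    return v, R

# synthetic pulse-to-pulse returns for a 0.1 m/s radial velocity plus weak noise
rng = np.random.default_rng(0)
n = np.arange(2048)
fd = 2 * 0.1 * f0 / c                                # two-way Doppler shift, Hz
z = (np.exp(2j * np.pi * fd * n / prf)
     + 0.05 * (rng.standard_normal(2048) + 1j * rng.standard_normal(2048)))
v, R = pulse_pair(z, prf, f0, c)
```

A high-correlation (R near unity) signal like this one gives an accurate velocity; adding decorrelation between pulses lowers R, degrading the velocity estimate while improving concentration statistics, which is exactly the tradeoff the abstract describes.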
TUESDAY MORNING, 27 JUNE 2017
BALLROOM B, 9:15 A.M. TO 12:20 P.M.
Session 3aBAa
Biomedical Acoustics: Advances in Shock Wave Lithotripsy I
Robin Cleveland, Cochair
Engineering Science, Inst. Biomedical Engineering, University of Oxford, Old Road Campus Research Building,
Oxford OX3 7DQ, United Kingdom
Adam D. Maxwell, Cochair
University of Washington, 1013 NE 40th St., Seattle, WA 98105
Julianna C. Simon, Cochair
Graduate Program in Acoustics, Pennsylvania State University, Penn State, 201E Applied Sciences Building,
University Park, PA 16802
Chair’s Introduction—9:15
Invited Papers
9:20
3aBAa1. Ultrasound, shock waves, and phonons. Rainer Pecha (RP Acoust. e.K., Friedhofstrasse 27, Leutenbach 71397, Germany,
rainer.pecha@rp-acoustics.de)
On Dec. 10, 2016, the brilliant experimental physicist Prof. Wolfgang Eisenmenger passed away unexpectedly. His extensive work influenced many different fields of acoustics. This presentation gives an overview of his remarkable and inspiring life and research.
9:40
3aBAa2. Stone formation. James C. Williams (Anatomy and Cell Biology, Indiana Univ. School of Medicine, 635 Barnhill Dr., MS5055, Indianapolis, IN 46202, jwillia3@iupui.edu) and James E. Lingeman (Urology, Indiana Univ. School of Medicine, Indianapolis,
IN)
How stones are retained within the kidney while small in size is still not fully understood. In this talk, we will show two examples of
how stones are retained during early growth: One is growth on Randall’s (interstitial) plaque, and the other is growth on mineral that has
formed as a luminal plug in a terminal collecting duct. These two mechanisms of stone retention during early growth have distinctive
morphologic features that can be seen by methods that show the microscopic structure of the stones. Stones growing on Randall’s plaque
display a mineralized region (composed of apatite) that is typically not large in size (less than 0.5 mm across) but which usually shows
luminal spaces, which are signs of its origin in the connective tissue of the papilla. Stones growing on ductal plugs also show attachment
to a piece of apatite, but the apatite regions are typically larger (often >1 mm long and >0.5 mm wide), and they are solid, without
spaces running through them. Still other stone formers exhibit neither of these known mechanisms of stone retention, and we propose
urinary stasis as a third possible way that stones are retained within the kidney. We propose that knowing the mechanisms of stone retention during early stone formation should allow for better treatment of stone diseases.
10:00
3aBAa3. New insights into the mechanisms and process of stone fragmentation in shock wave lithotripsy. Pei Zhong (Mech. Eng.
and Mater. Sci., Duke Univ., 101 Sci. Dr., Durham, NC 27708, pzhong@duke.edu)
Stone fragmentation in shock wave lithotripsy (SWL) is the consequence of dynamic fatigue produced by stress waves and cavitation. Stress waves [longitudinal (or P), transverse (or S), and surface acoustic waves (SAW)] and associated tensile and shear stresses
are the primary driving forces to create fracture, initially from pre-existing (or intrinsic) flaws inside the stone. In contrast, cavitation
produces pitting on the stone surface and consequently introduces new (or extrinsic) flaws that weaken the stone structure during SWL.
Stress waves and cavitation act synergistically to produce effective and successful stone comminution in SWL, with cavitation serving
as a catalyst to enhance the efficiency of stress-wave-driven stone fracture. In this talk, contemporary understanding of the mechanisms and process of stone fragmentation in SWL will be summarized, using a heuristic model which incorporates two important lithotripter field parameters (i.e., pressure and dose) that can critically influence the treatment outcome. The effects of stone size, geometry, and
composition on the transient stress field produced inside the stone, and the potential role of SAW in crack initiation and propagation, will
be discussed to provide physical insights into improvements in lithotripsy device design and treatment strategy.
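To give a rough scale to the surface acoustic waves discussed above, the Rayleigh wave speed can be estimated from the shear speed via the standard Bergmann/Viktorov approximation. The material values below are illustrative (roughly stone- or glass-like) assumptions, not values from the talk.

```python
# Approximate Rayleigh (surface) wave speed from the shear wave speed:
#   cR ~= cS * (0.862 + 1.14*nu) / (1 + nu)   (Bergmann/Viktorov approximation)
nu = 0.25        # assumed Poisson ratio
cS = 1800.0      # assumed shear wave speed, m/s
cR = cS * (0.862 + 1.14 * nu) / (1 + nu)
# the Rayleigh wave travels slightly slower than the shear wave (cR ~ 0.92 cS here)
```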
10:20
3aBAa4. Quantification of the shielding of kidney stones by bubble clouds during burst wave lithotripsy. Kazuki Maeda, Tim Colonius (California Inst. of Technol., Pasadena, CA), Wayne Kreider, Adam D. Maxwell, and Michael Bailey (Univ. of Washington, 1013
NE 40th St., Seattle, WA 98105, wkreider@uw.edu)
Bubble clouds can shield kidney stones from insonification, and limit stone breakage during burst-wave lithotripsy (BWL), a recently
proposed technique that uses focused ultrasound pulses with an amplitude of O(1-10) MPa and frequency of O(0.1) MHz. We use numerical simulations to quantify the magnitude of such shielding. In the simulations, we solve for the radial evolution of Lagrangian bubbles coupled to a compressible fluid using volume-averaging techniques. The resulting equations are discretized on an Eulerian grid. In
particular, we quantify the reduction in acoustic energy flux incident on a rigid, plane wall that models the stone surface. We consider a
burst wave with an amplitude of 6 MPa and a bubble cloud of diameter O(1) mm. The size distribution of nuclei, the number density of
bubbles, and the distance of the cloud from the wall are varied. We show that a cloud containing O(10) bubbles with a diameter of
O(10) μm can reduce the total energy flux by more than 50%, largely independent of the distribution of nuclei. Finally, we compare the simulation results with high-speed images and hydrophone measurements of bubble clouds from companion experiments. [Work supported
by NIH 2P01-DK043881.]
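The radial evolution of a single nucleus under a pressure burst, a building block of cloud models like the one above, can be sketched with a minimal Rayleigh-Plesset integration. This is not the paper's volume-averaged Lagrangian cloud model; the drive amplitude and frequency are assumed illustrative values.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Minimal Rayleigh-Plesset sketch: one 10-um nucleus driven by a short burst
rho, p0, sigma, mu = 998.0, 1.013e5, 0.0725, 1.0e-3   # water properties (SI)
R0, kappa = 10e-6, 1.4                                # initial radius, polytropic exponent
pa, f = 0.15e6, 0.5e6                                 # assumed burst: 0.15 MPa at 0.5 MHz

def p_drive(t):
    # two-cycle burst, tension first, then quiescent ringdown
    return -pa * np.sin(2 * np.pi * f * t) if t < 4e-6 else 0.0

def rp(t, y):
    R, Rdot = y
    pg = (p0 + 2 * sigma / R0) * (R0 / R) ** (3 * kappa)          # gas pressure
    pwall = pg - 2 * sigma / R - 4 * mu * Rdot / R - p0 - p_drive(t)
    return [Rdot, (pwall / rho - 1.5 * Rdot ** 2) / R]

sol = solve_ivp(rp, (0.0, 8e-6), [R0, 0.0], max_step=2e-9, rtol=1e-8)
Rmax = sol.y[0].max()   # bubble grows during the tensile half-cycles
```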
10:40
3aBAa5. Acoustic removal of cavitation nuclei to enhance stone comminution in shockwave lithotripsy. Timothy L. Hall, Hedieh Alavi Tamaddoni (Univ. of Michigan, 2200 Bonisteel Blvd., Ann Arbor, MI 48109, hallt@umich.edu), Alexander P. Duryea (Histosonics, Inc., Ann Arbor, MI), and William W. Roberts (Univ. of Michigan, Ann Arbor, MI)

Cavitation bubbles are formed by shockwaves as part of the normal SWL procedure and can assist in fragmentation when they collapse against a stone. However, following collapse, the bubble cloud leaves behind a large population of residual micron-sized bubble “nuclei” that can interfere with subsequent shockwaves. This often manifests as more efficient fragmentation at lower shockwave repetition rates, where there is sufficient time for nuclei to dissolve. This study will show how the application of low-amplitude, unfocused ultrasound bursts can be used to stimulate bubbles to coalesce or disperse from the shockwave path via the primary and secondary Bjerknes forces. Applying these bursts in between shockwaves reduces the bubble nuclei shielding effect, allowing more energy to reach the stone and increasing efficacy. Our results will show this technique is effective at reducing the number of shocks required for stone comminution on a clinical electromagnetic lithotripter with a simple supplemental transducer to generate the low-amplitude field.

Contributed Papers

11:00

3aBAa6. Passive acoustic mapping of cavitation during shock wave lithotripsy. Kya Shoar, Erasmia Lyka, Constantin Coussios, and Robin Cleveland (Inst. of Biomedical Eng., Univ. of Oxford, Old Rd. Campus Res. Bldg., Oxford OX3 7DQ, United Kingdom, kya.shoar@magd.ox.ac.uk)

Passive acoustic mapping (PAM) has previously been used to localize inertial cavitation during high intensity focused ultrasound. Here, this technique has been applied to shock wave lithotripsy (SWL), a non-invasive procedure whereby kidney stones are fragmented. Conventional diagnostic ultrasound probes were used to detect acoustic emissions during SWL. Signals consisted of reverberation sound from the incident shock wave followed, several hundred microseconds later, by emissions from cavitation collapses. Time-gating was used to isolate the cavitation signals, which were then processed using PAM to create spatial maps of the cavitation activity. Experiments in water indicated the spatial resolution was an ellipsoidal volume 5 mm long by 1 mm wide. Experiments were carried out in ex vivo pig kidneys, and it was observed that cavitation was initiated in the region of the focus but moved laterally by up to 10 mm and during treatment exhibited a general migration towards the source. These results suggest that PAM can be used as a tool to map the location of cavitation during SWL and has the potential to differentiate cavitation in tissue (which could contribute to injury) from cavitation near the stone, which affects comminution. [Work supported in part by NIH through P01-DK43881.]

11:20

3aBAa7. Interaction between lithotripsy-induced surface acoustic waves and pre-existing cracks. Ying Zhang, Chen Yang, Defei Liao, and Pei Zhong (Mech. Eng. and Material Sci., Duke Univ., Hudson 229, Durham, NC 27708, zhang.ying@duke.edu)

The interaction between pre-existing cracks and surface acoustic waves (SAW) in lithotripsy is investigated. Surface acoustic waves are generated at a water-glass interface by an incident shock wave produced by the spark discharge of a nano pulse lithotripsy (NPL) device or an electromagnetic shock wave lithotripsy (SWL) source. Evidence of SAW, including leaky Rayleigh waves and Scholte waves, will be presented based on photoelastic imaging and numerical simulations using COMSOL. A clear correlation between SAW and the location of the maximum tensile stress produced on the glass boundary has been identified, which can lead to ring-like fractures on a flat glass surface exposed to NPL-generated spherically divergent shock waves. To simulate cavitation-induced surface pitting in SWL, pre-existing cracks will be introduced on the glass surface by microindentation using a Vickers or Knoop indenter. The interaction of SAW with the pre-existing cracks will be examined to characterize crack extension and branching as a function of their location and orientation relative to the incident shock wave.

11:40

3aBAa8. Improving environmental and stone factors toward a more realistic in vitro lithotripsy model. Justin Ahn (Urology, Univ. of Washington School of Medicine, Seattle, WA), Wayne Kreider, Christopher Hunter, Theresa Zwaschka, Michael Bailey (Ctr. for Industrial and Medical Ultrasound, Appl. Phys. Lab., Univ. of Washington, Seattle, WA), Mathew Sorensen, Jonathan Harper, and Adam D. Maxwell (Urology, Univ. of Washington School of Medicine, 1013 NE 40th St., Seattle, WA 98105, amax38@u.washington.edu)

To improve in vitro lithotripsy models, we investigated the effects of multiple experimental variables on stone fragmentation. We performed timed burst wave lithotripsy (BWL) and shock wave lithotripsy (SWL) exposures in a water tank with the following variable parameters: water gas content (60, 30, and 15% O2), temperature (20 and 37 °C), stone holder degree of enclosure (open wire basket, open-ended polyvinyl chloride (PVC) gel, and an anatomically accurate artificial kidney of similar material), and stone type (Begostone at two mixture ratios, calcite, calcium oxalate monohydrate (COM), and uric acid). At least 3 stones were treated for each condition, with fragmentation defined as percent stone mass <2 mm. Begostone (2:1 powder:water ratio) treated with BWL at 20 °C vs. 37 °C showed 75±13% vs. 62±6% breakage, respectively. Using the same stone type, gas concentrations of 60, 30, and 15% O2 showed breakage of 23±4%, 55±8%, and 82±16%, respectively. More enclosed kidney phantoms showed decreasing lithotripsy efficacy of 94±11%, 64±21%, and 13±2% breakage in basket, PVC, and anatomic phantoms, respectively. 2:1 Begostone most closely mimicked COM stone breakage. SWL exposures produced similar trends. This work indicates the importance of controlling multiple variables during in vitro lithotripsy experiments. [Work supported by NIH P01 DK043881 and K01 DK104854.]

12:00

3aBAa9. Multiphase fluid-solid coupled analysis of shock-bubble-stone interaction in shock wave lithotripsy. Kevin G. Wang and Shunxiang Cao (Dept. of Aerosp. and Ocean Eng., Virginia Tech, Rm. 332, Randolph Hall, 460 Old Turner St., Blacksburg, VA 24060, kevinwgy@vt.edu)

A novel multiphase CFD-CSD coupled computational framework is applied to investigate the interaction of a kidney stone immersed in liquid with a lithotripsy shock wave (LSW) and a gas bubble near the stone. The main objective is to elucidate the effects of a bubble in the shock path on the elastic and fracture behaviors of the stone. The computational framework couples a finite volume two-phase computational fluid dynamics (CFD) solver with a finite element (FE) computational solid dynamics (CSD) solver. The stone surface is represented as a dynamic embedded boundary in the CFD solver. The evolution of the bubble surface is captured by solving the level set equation. The interface conditions are enforced through the construction and solution of local fluid-solid and two-fluid Riemann problems. The results of shock-bubble-stone simulations suggest that the dynamic response of a bubble to the LSW varies dramatically depending on its initial size. Bubbles with an initial radius smaller than a threshold collapse within 1 μs after the passage of the LSW, whereas larger bubbles do not. Moreover, this study suggests that a non-collapsing bubble imposes a negative effect on stone fracture, while a collapsing bubble may promote fracture on the proximal surface of the stone.
TUESDAY MORNING, 27 JUNE 2017
ROOM 312, 9:15 A.M. TO 12:20 P.M.
Session 3aBAb
Biomedical Acoustics: Partial Differential Equation Constrained and Heuristic Inverse Methods in
Elastography I
Mahdi Bayat, Cochair
Biomedical and Physiology, Mayo Clinic, 200 1st St. SW, Rochester, MN 55905
Wilkins Aquino, Cochair
Civil and Environmental Engineering, Duke University, Hudson Hall, Durham, NC 27708
Chair’s Introduction—9:15
Invited Papers
9:20
3aBAb1. Variational formulations for elastic inversion from full-field wave data. Paul E. Barbone (Mech. Eng., Boston Univ., 110
Cummington St., Boston, MA 02215, barbone@bu.edu)
Ultrasound elastography uses propagating P-waves to measure the deformation of soft elastic solids, including soft tissues. In some
cases, the deformation of interest is quasistatic while in others it is a propagating S-wave. From the measured deformation field, it is of
interest to infer the distribution of tissue rigidity and/or shear wave speed distribution within the imaged region. We review several variational formulations that have been proposed to solve this inverse problem, and their respective mathematical properties. The primary
focus will be on variational formulations that lead to direct solution of the inverse problem. These include the virtual fields method (and the
related method of Romano, Shirron, and Bucaro), the direct error in constitutive equation formulation, and the adjoint-weighted variational formulation. We will consider consistency with the original boundary value problem, stability of the variational formulation, and
required continuity of both data functions and the unknown modulus distributions. For completeness, we will briefly review three cost
functions related to iterative solution of the inverse problem: energy error, error in the constitutive equation (ECE), and stabilized output
least squares.
9:40
3aBAb2. A comparative study of full-wave inversion and local time-of-flight approaches in elastography. Wilkins Aquino (Civil
and Environ. Eng., Duke Univ., Hudson Hall, Durham, NC 27708, wa20@duke.edu), Mahdi Bayat (Physiol. and Biomedical Eng.,
Mayo Colllege of Medicine, Rochester, MN), Olalekan A. Babaniyi (Civil and Environ. Eng., Duke Univ., Durham, NC), and Mostafa
Fatemi (Physiol. and Biomedical Eng., Mayo Colllege of Medicine, Rochester, MN)
Most of the current techniques for elastography rely on a pointwise measurement of the induced shear waves. A common assumption
in these methods is plane wave propagation in an unbounded domain and local homogeneity of the medium. Because of these simplifying assumptions, complex wave patterns, boundary conditions, and interfaces can present significant challenges to these methods. On
the other hand, general nonlinear optimization approaches with PDE constraints relax the underlying assumptions of planar waves and
unbounded domains and, hence, can handle very general conditions. However, this generality usually comes at a higher computational expense than time-of-flight methods. Therefore, the added computational expense and higher complexity of these optimization approaches need to be justified in the context of elastography. In this work, we present a recent study that compares shear modulus reconstructions obtained with a PDE-constrained optimization approach with a conventional shear wave elastography method using synthetic data. We show that our optimization approach produced significantly more consistent and accurate results than a conventional SWE method. Moreover, we show that material distributions that lead to strong wave diffraction present serious challenges to time-of-flight local approaches, while they can be handled naturally with optimization-based approaches. [Acknowledgment: This research was
supported by NIH Grant R01CA174723.]
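The local time-of-flight baseline referred to above can be illustrated with a synthetic example: cross-correlate shear-wave traces at two lateral positions and divide the separation by the measured lag. All signal parameters below are assumed for illustration.

```python
import numpy as np

# Local time-of-flight shear wave speed estimate on synthetic traces
fs = 10e3        # sample rate, Hz
dx = 2e-3        # lateral spacing between observation points, m
c_true = 2.0     # true shear wave speed, m/s

t = np.arange(0, 0.02, 1 / fs)
pulse = lambda tau: np.exp(-((t - tau) / 0.5e-3) ** 2)   # Gaussian wave packet
s1 = pulse(5e-3)                      # trace at the first lateral position
s2 = pulse(5e-3 + dx / c_true)        # same packet arriving dx/c later

# lag (in samples) at the cross-correlation peak gives the travel time
lag = np.argmax(np.correlate(s2, s1, mode="full")) - (len(t) - 1)
c_est = dx / (lag / fs)               # time-of-flight speed estimate, m/s
```

Under diffraction, reflections, or medium heterogeneity the correlation peak shifts or splits, which is precisely where this local estimate degrades and the PDE-constrained approach retains its advantage.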
10:00
3aBAb3. A machine learning alternative to model-based elastography. Cameron Hoerig (BioEng., Univ. of Illinois at Urbana-Champaign, 1270 Digital Comput. Lab., MC-278, Urbana, IL 61801, hoerig2@illinois.edu), Jamshid Ghaboussi (Civil and Environ.
Eng., Univ. of Illinois at Urbana-Champaign, Urbana, IL), and Michael F. Insana (BioEng., Univ. of Illinois at Urbana-Champaign,
Urbana, IL)
Model-based elastography methods suffer severe limitations in imaging the complex mechanical behavior of real biological tissues.
We adapted the Autoprogressive Method (AutoP) to address these limitations by approaching the inverse problem with machine learning
tools. AutoP combines finite element analysis (FEA) and artificial neural networks (ANNs) with force and displacement measurements
to develop soft-computational models of mechanical behavior. Unlike model-based elastography methods, only measurement data
inform the material properties learned by the ANNs. Because this machine learning approach foregoes the initial model assumption,
AutoP can be applied to anisotropic, time-varying, and nonlinear media common in biomedical imaging applications. We first implemented AutoP to characterize linear-elastic gelatin phantoms and ex vivo rabbit kidneys to demonstrate the potential for medical imaging. Those models required an estimate of the interior geometry via segmentation of the B-mode images. In our current work, the
capabilities of AutoP are extended by developing a novel ANN architecture. Incorporating spatial information as part of the input to a
pair of ANNs working in tandem allows the models to learn the spatially varying mechanical behavior, thus precluding the segmentation
requirement. We will demonstrate this new approach to elasticity imaging by presenting elastograms generated by trained ANN material
models.
10:20–10:40 Break
10:40
3aBAb4. Breast ultrasound elastography using inverse finite element elasticity reconstruction. Abbas Samani, Seyed R. Mousavi
(Western Univ., Dept. of Medical Biophys., Medical Sci. Bldg., London, ON N6A 5C1, Canada, asamani@uwo.ca), Hassan Rivaz (Concordia Univ., Montreal, QC, Canada), Ali Sadeghi-Naini, and Gregory Czarnota (Sunnybrook Res. Inst., Toronto, ON, Canada)
Breast cancer is the most common cancer in women worldwide. Its early detection is paramount for a successful treatment outcome.
Among imaging techniques developed for breast cancer diagnosis, elastography has shown good promise. In this presentation, a breast
ultrasound elastography method will be described, and its application in breast cancer patients will be demonstrated. The method follows
the quasi-static elastography approach, where the breast is stimulated using a regular ultrasound transducer. RF data are utilized within a
dynamic programming minimization algorithm for tissue motion tracking, leading to a 2D (axial + lateral) strain field. This field is processed within a novel inverse finite-element reconstruction framework to reconstruct the breast Young’s modulus distribution. The framework uses Hooke’s law to obtain the Young’s modulus distribution. It is iterative, with the stress distribution updated using the finite
element method at the end of each reconstruction iteration. To ensure convergence, the Young’s modulus was averaged within 5×5 finite
element windows. A phantom study mimicking breast cancer was performed to validate the developed system which demonstrated high
(93%) accuracy of Young’s modulus reconstruction. The method was then applied to breast cancer patients where elastography images
reconstructed using the proposed method showed its effectiveness in clinical setting. The only hardware required in the system is an
ultrasound scanner. As such, it is a promising candidate for clinical cancer diagnosis.
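The window-averaging step in the iteration described above can be sketched as follows. The strain and stress fields here are synthetic placeholders, and a simple uniaxial E = sigma/eps form stands in for the full Hooke's-law update; window size follows the 5×5 averaging mentioned in the abstract.

```python
import numpy as np
from scipy.ndimage import uniform_filter

# Sketch: pointwise modulus update followed by 5x5 window averaging
rng = np.random.default_rng(1)
strain = 0.01 + 0.002 * rng.random((50, 50))   # measured strain field (synthetic)
stress = 5e3 * np.ones((50, 50))               # current stress estimate, Pa (synthetic)

E = stress / strain                            # pointwise Young's modulus, Pa
E_smooth = uniform_filter(E, size=5)           # 5x5 window averaging for stability
```

The averaging damps pointwise noise in E between iterations, which is what stabilizes the convergence of the stress-update loop.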
11:00
3aBAb5. Shear modulus is a good surrogate for total tissue pressure: Preliminary studies with a xenograft pancreatic cancer tumor model. Marvin M. Doyley, Hexuan Wang (Elec. and Comput. Eng., Univ. of Rochester, 333 Hopeman Eng. Bldg., Rochester, NY
14627, m.doyley@rochester.edu), Michael Nieskoski, and Brian Pogue (Thayer School of Eng., Dartmouth College, Hanover, NH)
Pancreatic ductal adenocarcinoma (PDA) is a common and lethal disease, with a 5-year survival rate of less than 6%. The absence of
a functional vasculature and the build-up of dense stromal regions impede drug delivery that prevents the disease from being eradicated,
even when surgery is combined with chemotherapy. We hypothesize that real-time measurement of total tissue pressure, as related to
cancer cell growth and drug delivery, will enable translational research into patient-specific therapies. Since no imaging methods can
measure tissue pressure in vivo, we investigated the feasibility of using elastography to provide a surrogate measure of total tissue pressure. Specifically, we performed studies on orthotopically and subcutaneously grown xenograft tumors (n = 20) to assess how the shear
modulus of naturally occurring AsPC-1 pancreatic tumors varies with stromal density. The results of this investigation revealed that there
is a 6 kPa difference between the shear modulus of orthotopically and subcutaneously grown tumors. A strong correlation was observed
between the shear modulus of the extracellular matrix and tissue pressure measured with a pressure probe. We also observed good correlation between shear modulus and collagen density. These preliminary results demonstrate that elastography is a good imaging surrogate
biomarker of total tissue pressure.
11:20
3aBAb6. Elastic reconstruction of shear modulus and stress distribution for assessing abdominal aorta aneurysmal rupture risk.
Doran Mix, Michael Stoner, and Michael S. Richards (Surgery, Univ. of Rochester Medical Ctr., Univ. of Rochester Med. Ctr., 601 Elmwood Ave., Rochester, NY 14642, michael.richards@rochester.edu)
The necessity of surgical intervention for abdominal aorta aneurysms (AAAs) is based on a risk-reduction paradigm primarily relying on
trans-abdominal ultrasound (US) measurements of the maximum diameter of an AAA. However, AAA diameter is only a rough estimate
of rupture potential and elastographic estimates of material property changes and stresses within aortic tissue may be a better predictor.
This work presents an elastic imaging technique to match model predicted displacement fields to those measured using clinical US. A
linear elastic finite-element model is used and is assumed to be undergoing a quasi-static, plane strain deformation. This technique uses
a regularization scheme to incorporate geometric segmentation information, as a penalty or soft prior, to counter the inherent ill-posedness of the inverse problem. In addition, displacement fields are measured and accumulated over the entire cardiac cycle and incorporated simultaneously in the reconstruction technique to improve the signal-to-noise ratio of the recovered modulus distribution. Model
predicted strain fields and modulus distributions are used to predict the relative stress induced over the cardiac cycle. Results of validation studies comparing modulus and stress fields performed using finite-element simulations of 3D and time dependent geometries, tissue-mimicking phantom simulations, and initial clinical results will be presented.
11:40–12:20 Panel Discussion
TUESDAY MORNING, 27 JUNE 2017
ROOM 205, 9:20 A.M. TO 11:40 A.M.
Session 3aEA
Engineering Acoustics and Physical Acoustics: Microelectromechanical Systems (MEMS) Acoustic Sensors I
Vahid Naderyan, Cochair
Physics/National Center for Physical Acoustics, University of Mississippi, NCPA, 1 Coliseum Drive, University, MS 38677
Kheirollah Sepahvand, Cochair
Mechanical, Technical University of Munich, Boltzmannstraße 15, Garching bei Munich 85748, Germany
Robert D. White, Cochair
Mechanical Engineering, Tufts University, 200 College Ave., Medford, MA 02155
Invited Papers
9:20
3aEA1. Acoustic performance of MEMS microphones: Past, present, and future. Michael Pedersen (Novusonic Corp., P.O. Box
183, Ashton, MD 20861, info@novusonic.com)
An overview will be presented of the current state of commercial MEMS microphone technology with focus on acoustic performance
metrics such as noise and sound pressure limits, bandwidth, and low frequency behavior. Important limitations for MEMS element and
electronic performance will be discussed in the context of the various transducer technologies currently being pursued or offered. The performance of MEMS microphones, since their commercial introduction in the early 2000s, has been strongly driven by the mobile phone
handset application, which continues to be the most important market segment by volume. Substantial improvements in performance have
been realized since the inception, to meet opposing demands of better acoustic performance and lower power consumption/smaller size. A
brief summary of the development trajectory and possible future directions will be given. With the advent of new important applications,
such as home automation, and a general movement in design towards digital interfaces, other blends of microphone performance requirements are emerging. A discussion of such requirements and their possible impact on acoustic MEMS design will be provided.
9:40
3aEA2. When good mics go bad. Martin D. Ring (Consumer Electronics ProDC Eng., Bose Corp., The Mountain, M/S 271-E, Framingham, MA 01701-9168, ring@bose.com)
Billions of MEMS microphones are fabricated each year, and the vast majority of them are used for voice pickup in Consumer Electronics (CE) devices, where anomalous behavior or failure can cause little more than annoyance. Some millions of these microphones are
destined for more sophisticated and/or critical applications where malfunction can lead to unpleasant and possibly dangerous situations
3676
J. Acoust. Soc. Am., Vol. 141, No. 5, Pt. 2, May 2017
Acoustics ’17 Boston
3676
for both man and machine. This talk will discuss some of the environmental stimuli that our lives provide and our observations of
MEMS microphone thermal variation, pressure overload characteristics, and lack of mechanical robustness to dust, gases, and fluids. In
some cases, capacitive and piezoelectric sensors will be compared.
10:00
3aEA3. Microphone and microphone array characterization utilizing the plane wave tube method. Tung Shen Chew, Arthur Zhao,
and Robert Littrell (Vesper, 77 Summer St., Boston, MA 02110, rlittrell@vespermems.com)
Microelectromechanical systems (MEMS) microphone arrays are becoming ubiquitous in consumer electronics. Large and expensive anechoic chambers are commonly used to characterize these arrays. Individual MEMS microphones, on the other hand, are typically
tested using one of three methods: a free field calibration in an anechoic chamber, a pressure field calibration in a pressure chamber, or a
pressure field calibration in a plane wave tube (PWT). In this work, we present a PWT system for testing a single microphone as well as
a second PWT system for testing an array of four MEMS microphones. Both systems utilize a 3D printed portion of the tube that is
designed to minimize reflections and standing waves while allowing the sound pressure to reach a calibrated instrumentation microphone
and the MEMS microphone(s) under test. With these PWT systems, we characterize individual microphones up to 30 kHz and microphone arrays up to 3 kHz. Further, the array test system is used to measure the polar pattern of the microphone array at several frequencies and to measure the impact of microphone mismatch on array performance. This PWT test methodology is a size- and cost-effective
way to characterize MEMS microphone arrays.
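The bandwidths quoted above are consistent with the textbook plane-wave limit: a tube carries only plane waves below its first cross-mode cutoff frequency. A minimal sketch of that limit for a rigid circular duct (the 6 mm bore is a hypothetical example for illustration, not the authors' dimension):

```python
import math

def circular_tube_cutoff_hz(diameter_m, c=343.0):
    """First higher-order (1,0) mode cutoff of a rigid-walled circular duct.
    Below f_c = 1.8412 * c / (pi * d) only plane waves propagate."""
    return 1.8412 * c / (math.pi * diameter_m)

# A hypothetical 6 mm bore stays plane-wave-only past 30 kHz, the
# single-microphone test range quoted in the abstract; a larger tube
# (needed to fit a four-microphone array) cuts off proportionally lower.
fc_single = circular_tube_cutoff_hz(0.006)
fc_array = circular_tube_cutoff_hz(0.012)
```

The inverse scaling of cutoff with diameter is one plausible reason a separate, lower-bandwidth PWT is used for the array measurements.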
10:20–10:40 Break
10:40
3aEA4. Thermal boundary layer limitations on the performance of micromachined microphones. Michael Kuntzman, Janice
LoPresti, Yu Du, Wade Conklin, Dave Schafer, Sung Lee, and Peter Loeppert (Knowles Corp., 1151 Maplewood Dr., Itasca, IL 60143,
michael.kuntzman@knowles.com)
The extent to which thermal boundary layer effects limit the performance of micromachined microphones is examined. A lumped
element network model for a micromachined microphone is presented which includes a ladder network in parallel with the adiabatic
back volume compliance to account for the transition of the enclosure from adiabatic to isothermal conditions when the thermal boundary layer becomes large compared to the enclosure dimensions. The thermal correction to the cavity impedance contains a resistive component which contributes thermal-acoustic noise to the system. The model results are compared to measurements taken from
commercially available microphone units with various back volume sizes, and the simulated relative noise power contribution of each
acoustic noise source is calculated. The impedance of the back volume, including the thermal correction factor, is compared to the adiabatic compliance and the impedance derived from thermoacoustic finite element analysis. It is shown that the noise due to the thermal
component of the back volume impedance becomes significant in microphones with small back volumes and effectively sets an upper
bound on the signal-to-noise ratio of a microphone of given package dimensions.
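The adiabatic-to-isothermal transition the model captures is governed by the textbook thermal penetration depth, delta = sqrt(2*alpha/omega). A minimal sketch of the scale involved (the air-property constants and function name are illustrative assumptions, not values from the paper):

```python
import math

# Nominal properties of air at 20 degC (assumed, for illustration only)
KAPPA = 0.0263  # thermal conductivity, W/(m*K)
RHO = 1.204     # density, kg/m^3
CP = 1005.0     # specific heat at constant pressure, J/(kg*K)

def thermal_penetration_depth_m(freq_hz):
    """Thermal boundary layer thickness delta = sqrt(2*alpha/omega),
    where alpha = kappa / (rho * cp) is the thermal diffusivity of air."""
    alpha = KAPPA / (RHO * CP)
    return math.sqrt(2.0 * alpha / (2.0 * math.pi * freq_hz))

# Around 100 Hz the layer is a few hundred micrometers thick, comparable
# to the sub-millimeter back volumes of small MEMS packages, so a purely
# adiabatic compliance model starts to break down at low frequencies.
delta_100 = thermal_penetration_depth_m(100.0)
```

Since the layer thickness shrinks as 1/sqrt(f), the isothermal correction (and its associated resistive noise) matters most at low frequencies and in the smallest back volumes, consistent with the abstract's conclusion.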
Contributed Papers
11:00
3aEA5. Calibration and characterization of MEMS microphones. Andrea Prato, Alessandro Schiavi (INRIM, Strada Delle Cacce 91, Torino 10135, Italy, a.prato@inrim.it), Irene Buraioli (Politecnico di Torino, Torino, Italy), Davide Lena (STMicroelectronics, Torino, Italy), and Danilo Demarchi (Politecnico di Torino, Torino, Italy)
In recent years, the increase in the number of smartphones has led to a remarkable demand for low-cost microphones. This demand was met by the rapid development of MEMS (MicroElectroMechanical Systems) microphones, whose technology is a promising candidate for future noise measurements based on new acoustic sensor networks. Nevertheless, current standards do not provide proper calibration and test procedures for these microphones. In this work, standard calibration procedures have been adapted to characterize condenser MEMS microphones by comparison with laboratory standard microphones. Microphone parameters (sensitivity, frequency response, linearity, directivity, stability, and dynamic range) and changes of sensitivity with temperature (from −10 °C to +50 °C) and humidity (from 25% to 90%) have been evaluated in a hemi-anechoic room and in an environmental chamber, respectively. These procedures open up the possibility of a robust metrological characterization of MEMS microphones for noise measurements.
11:20
3aEA6. Characterization of the vibration response of miniature microphones by subtraction. Jonathan D. Walsh, Quang T. Su (Eng., Binghamton Univ., 4400 Vestal Parkway East, Binghamton, NY 13902, jwalsh3@binghamton.edu), and Daniel M. Warren (Knowles Electronics, Itasca, IL)
Presented is a test methodology for characterizing the vibration sensitivity of miniature microphones for hearing aids. A common method for obtaining the vibration sensitivity of a system is to use an electrodynamic shaker to deliver a calibrated vibration input and measure the corresponding output. When the system under test is a microphone, it is difficult to obtain the vibration response, since the measured output will also be due to coherent sound created by the vibration delivery system. The method models the microphone as a system with two inputs, vibration and sound, with one electronic output. Using frequency-domain signal processing, this method extracts the vibration response from a shaker-driven signal by subtracting a synthesized acoustic response signal. When compared to vibration measurements under vacuum, the vibration responses from the two methods generally agree. The vibration response estimates produced using this algorithm are more accurate than vacuum-chamber data because they capture the air loading and stiffening effects that are present under standard operating conditions. This test method allows the rapid acquisition of microphone vibration responses by eliminating the need for a vacuum chamber or carefully designed acoustic baffling.
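The abstract does not give the authors' exact processing chain; as a sketch of the two-input model it describes (per frequency bin, Y = H_a·P + H_v·A, with microphone output Y, sound pressure P, acceleration A, and responses H_a and H_v; all signal and function names here are hypothetical), the subtraction step might look like:

```python
def vibration_response(Y, P, A, H_a):
    """Per-bin two-input model Y = H_a * P + H_v * A: subtract the
    synthesized acoustic part H_a * P, then normalize by acceleration
    to estimate the vibration response H_v."""
    return [(y - h * p) / a for y, p, a, h in zip(Y, P, A, H_a)]

# Synthetic single-bin check: with known true responses, the
# subtraction recovers the vibration response.
H_a_true, H_v_true = 0.5 + 0.1j, 2.0 - 0.3j
P, A = [1.0 + 0.0j], [0.2 + 0.05j]
Y = [H_a_true * P[0] + H_v_true * A[0]]
H_v_est = vibration_response(Y, P, A, [H_a_true])
```

In practice the acoustic response H_a and the spectra would come from averaged cross-spectral estimates of measured signals rather than the exact values used in this synthetic check.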
TUESDAY MORNING, 27 JUNE 2017
BALLROOM A, 9:20 A.M. TO 12:20 P.M.
Session 3aIDb
Interdisciplinary, and Education in Acoustics and Student Council: Graduate Programs in Acoustics
Poster Session
Dominique A. Bouavichith, Cochair
New York University, 33 Washington Sq. W, 1115, New York, NY 10011
Brent O. Reichman, Cochair
Brigham Young University, 453 E 1980 N, #B, Provo, UT 84604
Michaela Warnecke, Cochair
Psychological and Brain Sciences, Johns Hopkins University, 3400 N Charles St, Dept Psychological & Brain Sciences,
Baltimore, MD 21218
All posters will be on display from 9:20 a.m. to 12:20 p.m. To allow contributors in this session to see the other posters, authors of
odd-numbered papers will be at their posters from 9:20 a.m. to 10:50 a.m. and authors of even-numbered papers will be at their posters
from 10:50 a.m. to 12:20 p.m.
Invited Papers
3aIDb1. University at Buffalo, SUNY: Variety of graduate programs in acoustics. Anastasiya Kobrina (Psych., SUNY Univ. at Buffalo, B23 Park Hall, Amherst, NY 14261, akobrina@buffalo.edu)
University at Buffalo, SUNY (UB) has an outstanding reputation due to its commitment to research and its knowledgeable faculty. UB is
known for its diversity in auditory research spanning from the psychophysics of hearing in humans and animals to the neurophysiological mechanisms of hearing. This unique variety leads to collaborations spanning departments and laboratories. The Cognitive Psychology doctoral program is aimed at training students for research-oriented careers. Graduate students are exposed to a variety of
laboratories and courses in order to develop collaborations and build research skills. In addition, the Psychology Department holds regular colloquia on various general topics in psychology, and the Cognitive Psychology and Behavioral Neuroscience areas both hold
weekly “brownbag” seminars, often related to topics in auditory processing and communication. The Biological Sciences Department, the Communication Disorders Department, and the Center for Hearing and Deafness make UB a perfect oasis of auditory research.
Thus, the University at Buffalo is an ideal fit for training in hearing research and for facilitating collaborations.
3aIDb2. The Physical Acoustics Research Program at the University of Louisiana at Lafayette. Andi Petculescu (Univ. of Louisiana at Lafayette, 240 Hebrard Blvd., Lafayette, LA 70503, andi@louisiana.edu)
The Department of Physics at UL Lafayette has a strong history of acoustics research. Recently, the program has expanded considerably as a result of renewed interest in acoustics-related research. Tied into the current rethinking of the Department’s multiple physics
tracks, the Physical Acoustics Research Program offers ample opportunities for students—both undergraduate and graduate—to be
involved in wide-spectrum research in acoustics. This program, unique in Louisiana, involves acoustic sensing of alien environments,
atmospheric and underwater acoustics, ultrasonics in the solid state, ultrasonic materials characterization and structural health monitoring, seismology and geodynamics. In parallel to research projects, the faculty offer a variety of acoustics-related courses on topics such
as MATLAB- and Python-based computational acoustics, solid state acoustics, experimental techniques in acoustics, room acoustics, as
well as machine learning techniques for wave propagation.
3aIDb3. The Graduate Program in Acoustics at the Pennsylvania State University. Victor Sparrow and Daniel A. Russell (Graduate
Program in Acoust., Penn State Univ., 201 Appl. Sci. Bldg., University Park, PA 16802, vws1@psu.edu)
The Graduate Program in Acoustics at Penn State is the only program in the U.S. offering the Ph.D. degree in acoustics as well as
M.S. and M.Eng. degrees in acoustics. An interdisciplinary program with faculty from a variety of academic disciplines, the Graduate
Program in Acoustics is administratively aligned with the College of Engineering and closely affiliates with the Applied Research Laboratory. Research areas include structural acoustics, nonlinear acoustics, architectural acoustics, signal processing, aeroacoustics, biomedical ultrasound, transducers, computational acoustics, noise and vibration control, psychoacoustics, and underwater acoustics. Course
offerings include fundamentals of acoustics and vibration, electroacoustic transducers, signal processing, acoustics in fluid media, sound
and structure interaction, digital signal processing, experimental techniques, acoustic measurements and data analysis, ocean acoustics,
architectural acoustics, noise control engineering, nonlinear acoustics, outdoor sound propagation, computational acoustics, flow induced
noise, spatial sound and 3D audio, marine bioacoustics, and acoustics of musical instruments. Penn State Acoustics graduates serve
widely throughout military and government labs, academic institutions, consulting firms and industry. This poster describes faculty
research areas, laboratory facilities, student demographics, successful graduates, and recent enrollment and employment trends.
3aIDb4. Graduate training opportunities in the hearing sciences at the University of Louisville. Pavel Zahorik, Jill E. Preminger,
and Christian Stilp (Dept. of Otolaryngol. and Communicative Disord. & Dept. of Psychol. and Brain Sci., Univ. of Louisville, University of Louisville, Louisville, KY 40292, pavel.zahorik@louisville.edu)
The University of Louisville currently offers two branches of training opportunities for students interested in pursuing graduate training in the hearing sciences: a Ph.D. degree in experimental psychology with a concentration in hearing science, and a clinical doctorate in
audiology (Au.D.). The Ph.D. degree program offers mentored research training in areas such as psychoacoustics, speech perception,
spatial hearing, multisensory perception, and language development. The program guarantees students four years of funding (tuition
plus stipend). The Au.D. program is a 4-year program designed to provide students with the academic and clinical background necessary
to enter audiologic practice. Both programs are affiliated with the Heuser Hearing Institute, which, along with the University of Louisville, provides laboratory facilities and clinical populations for both research and training. An accelerated Au.D./Ph.D. training program
that integrates key components of both programs for training of students interested in clinically based research is under development.
Additional information is available at http://louisville.edu/medicine/degrees/audiology and http://louisville.edu/psychology/graduate/
experimental.
3aIDb5. Graduate studies in acoustics at the University of Notre Dame. Thomas Corke (Univ. of Notre Dame, Notre Dame, IN) and
Christopher Jasinski (Univ. of Notre Dame, 54162 Ironwood Rd., South Bend, IN 46635, chrismjasinski@gmail.com)
The University of Notre Dame Department of Aerospace and Mechanical Engineering is conducting cutting-edge research in aeroacoustics, structural vibration, and wind turbine noise. Expanding facilities are housed at two buildings of the Hessert Laboratory for
Aerospace Engineering and include two 25 kW wind turbines, a Mach 0.6 wind tunnel, and an anechoic wind tunnel. Several faculty
members conduct research related to acoustics and multiple graduate level courses are offered in general acoustics and aeroacoustics.
This poster presentation will give an overview of the current research activities, laboratory facilities, and graduate students and faculty
involved at Notre Dame’s Hessert Laboratory for Aerospace Engineering.
3aIDb6. Graduate research opportunities in acoustics at the University of Michigan, Ann Arbor. Tyler J. Flynn and David R. Dowling (Mech. Eng., Univ. of Michigan, Ann Arbor, 1231 Beal Ave., Ann Arbor, MI 48109, tjayflyn@umich.edu)
The University of Michigan is host to a wide array of acoustics research that encompasses many of the core Technical Committees of the ASA. Within the Department of Mechanical Engineering, work is being done to advance the fields of remote sensing and underwater acoustics, to better understand the physics of the cochlea in human hearing, and even to design safer football helmets. Within the
University of Michigan Medical School, faculty and graduate students are constantly advancing techniques for diagnostic and therapeutic ultrasound procedures. In the Department of Naval Architecture and Marine Engineering, computational methods are being used to
predict sound signatures and structural loading of complex sea vessels. Researchers in the Linguistics Department are using acoustic,
perceptual, and articulatory methods to analyze human speech. And while these are only a sample of the projects taking place at Michigan, new opportunities for acoustics research and collaboration open up each semester. Combined with a rich course catalog, first-rate
facilities, and prospects for publication, these opportunities prepare Michigan graduate students for careers in industry and academia
alike. Go Blue!
3aIDb7. Graduate programs in Hearing and Speech Sciences at Vanderbilt University. G. Christopher Stecker (Hearing and Speech
Sci., Vanderbilt Univ., 1215 21st Ave. South, Rm. 8310, Nashville, TN 37232, g.christopher.stecker@vanderbilt.edu)
The Department of Hearing and Speech Sciences at Vanderbilt University is home to several graduate programs in the areas of Psychological and Physiological Acoustics and Speech Communication. Programs include the Ph.D. in Audiology, Speech-Language Pathology, and Hearing or Speech Science; the Doctor of Audiology (Au.D.); and Master’s programs in Speech-Language Pathology and
Education of the Deaf. The department is closely affiliated with Vanderbilt University’s Graduate Program in Neurobiology. Several
unique aspects of the research and training environment in the department provide exceptional opportunities for students interested in
studying the basic science as well as clinical-translational aspects of auditory function and speech communication in complex environments. These include anechoic and reverberation chambers capable of multichannel presentation, the Dan Maddox Hearing Aid Laboratory, and close connections to active Audiology, Speech-Pathology, Voice, and Otolaryngology clinics. Students interested in the
neuroscience of communication utilize laboratories for auditory and multisensory neurophysiology and neuroanatomy, human electrophysiology and neuroimaging housed within the department and at the neighboring Vanderbilt University Institute for Imaging Science.
Finally, department faculty and students engage in numerous engineering and industrial collaborations, which benefit from our home
within Vanderbilt University and our setting in Music City, Nashville, Tennessee.
3aIDb8. Graduate Education in Acoustics at The Catholic University of America. Joseph F. Vignola, Diego Turo (Mech. Eng., The
Catholic Univ. of America, 620 Michigan Ave., NE, Washington, DC 20064, vignola@cua.edu), Shane Guan (Office of Protected
Resources Permits, Conservation and Education Div., National Marine Fisheries Service, Silver Spring, MD), and Teresa J. Ryan (Eng.,
East Carolina Univ., Greenville, NC)
The Catholic University of America (CUA) has a graduate program with a long history in acoustics dating back to the early 1930s.
The acoustics program moved to the School of Engineering in the 1960s, when there was strong demand for underwater acoustics studies to meet U.S. Navy applications. The end of the Cold War was concurrent with a decline in CUA’s acoustics education that persisted into the 1990s. However, renewed interest in acoustical engineering, acoustic metamaterials, and environmental acoustics research has revived the acoustics research and education programs at CUA in recent years. Currently, a variety of graduate-level acoustics courses are offered in CUA’s Mechanical Engineering Department. Students can pursue a master’s or Ph.D. degree with research in acoustics or vibrations. The courses in the program include a two-course sequence in fundamentals of acoustics and more focused courses in ocean acoustics, atmospheric acoustics, acoustic metrology, marine bioacoustics, nonlinear vibration, acoustic imaging, and acoustic metamaterials.
3aIDb9. Graduate acoustics education in the Cockrell School of Engineering at The University of Texas at Austin. Michael R.
Haberman (Mech. Eng., Univ. of Texas at Austin, 1 University Station, C2200, Austin, TX 78712-0292), Neal A. Hall (Elec. and Comp.
Eng., The Univ. of Texas at Austin, Austin, TX), Mark F. Hamilton (Mech. Eng., Univ. of Texas at Austin, Austin, TX), Marcia J. Isakson (Appl. Res. Labs., The Univ. of Texas at Austin, Austin, TX), and Preston S. Wilson (Mech. Eng., Univ. of Texas at Austin, Austin,
TX, pswilson@mail.utexas.edu)
While graduate study in acoustics takes place in several colleges and schools at The University of Texas at Austin (UT Austin),
including Communication, Fine Arts, Geosciences, and Natural Sciences, this poster focuses on the acoustics program in Engineering.
The core of this program resides in the Departments of Mechanical Engineering (ME) and Electrical and Computer Engineering (ECE).
Acoustics faculty in each department supervise graduate students in both departments. One undergraduate and eight graduate acoustics
courses are cross-listed in ME and ECE. Instructors for these courses include staff at Applied Research Laboratories at UT Austin, where
many of the graduate students have research assistantships. The undergraduate course, taught every fall, begins with basic physical
acoustics and proceeds to draw examples from different areas of engineering acoustics. Three of the graduate courses are taught every
year: a two-course sequence on physical acoustics, and a transducers course. The remaining five graduate acoustics courses, taught in
alternate years, are on nonlinear acoustics, underwater acoustics, ultrasonics, architectural acoustics, and wave phenomena. An acoustics
seminar is held most Fridays during the long semesters, averaging over ten per semester since 1984. The ME and ECE departments both
offer Ph.D. degree qualifying exams in acoustics.
3aIDb10. Graduate Acoustics at the University of New Hampshire. Anthony P. Lyons (Univ. of New Hampshire, 24 Colovos Rd.,
Durham, NH 03824), Jennifer L. Miksis-Olds (Univ. of New Hampshire, Durham, NH), and Thomas C. Weber (Univ. of New Hampshire, Durham, NH, tom.weber@unh.edu)
The University of New Hampshire (UNH) offers several opportunities for graduate students interested in studying acoustics and its
application. Faculty mentors who are expert in acoustic methods and technologies reside in a range of programs and departments that
are largely focused on the use of acoustics in the marine environment, including biological science, earth science, mechanical engineering, natural resources and earth systems, ocean engineering, and oceanography. UNH faculty mentors who specialize in acoustics are
active in the Animal Bioacoustics, Acoustical Oceanography, and Underwater Acoustics technical committees. Recent studies by faculty
and students focusing on fundamental acoustic problems, such as those that would cause a graduate student to be a regular attendee of
meetings of the Acoustical Society of America, have come largely from mechanical engineering, ocean engineering, and the newly
formed School of Marine Sciences and Ocean Engineering. Graduate students in these programs of study have the opportunity for formal
classroom training in the fundamentals of acoustics, vibrations, and advanced topics in ocean acoustics as they pursue their graduate
training.
3aIDb11. Graduate Acoustics at Brigham Young University. Scott D. Sommerfeldt, Jonathan Blotter, Timothy W. Leishman, Scott
L. Thomson, Kent L. Gee, Brian E. Anderson, Tracianne B. Neilsen, and William Strong (Brigham Young Univ., N311 ESC, Provo, UT
84602, tbn@byu.edu)
Graduate studies in acoustics at Brigham Young University prepare students for jobs in industry, research, and academia by complementing in-depth coursework with publishable research. Graduate-level coursework provides students with a solid foundation in core
acoustics principles and practices. A new acoustical measurements lab course provides a strong foundation in experimental techniques
and writing technical memoranda. Labs across the curriculum include calibration, directivity, scattering, absorption, Doppler vibrometry, lumped-element mechanical systems, equivalent circuit modeling, arrays, filters, room acoustics measurements, active noise control,
and near-field acoustical holography. Recent thesis and dissertation topics include active noise control, directivity of acoustic sources,
room acoustics, radiation and directivity of musical instruments, energy-based acoustics, time reversal, nondestructive evaluation, flowbased acoustics, voice production, aeroacoustics, sound propagation modeling, nonlinear propagation, and high-amplitude noise analyses. Recently, the BYU acoustics program has added two faculty members and increased the number of graduate students, who are
expected to develop their communication skills, present their research at professional meetings, and publish in peer-reviewed acoustics
journals. Graduate students also often serve as peer mentors to undergraduate students on related projects and may participate in field
experiments to gain additional experience.
3aIDb12. Distance Education Master of Engineering in Acoustics from Penn State. Daniel A. Russell and Victor Sparrow (Graduate
Program in Acoust., Penn State Univ., 201 Appl. Sci. Bldg., University Park, PA 16802, vws1@psu.edu)
The Graduate Program in Acoustics at Penn State provides online access to graduate level courses leading to the M.Eng. degree in
Acoustics. Lectures are broadcast live via Adobe Connect to students scattered around the world, while archived recordings allow working students to access lectures at their convenience. Students earn the M.Eng. in Acoustics degree by completing 30 credits of coursework (six required courses and four electives) and writing a capstone paper. Since 1987, more than 135 distance education students have
completed the M.Eng. degree in Acoustics. Many other students take individual courses as non-degree students. Courses offered online
include elements of acoustics and vibration, elements of waves in fluids, electroacoustic transducers, signal processing, acoustics in fluid
media, sound and structure interaction, digital signal processing, aerodynamic noise, acoustic measurements and data analysis, ocean
acoustics, architectural acoustics, noise control engineering, nonlinear acoustics, outdoor sound propagation, computational acoustics,
flow induced noise, spatial sound and 3D audio, marine bioacoustics, and acoustics of musical instruments. This poster describes the distance education experience leading to the M.Eng. degree in Acoustics from Penn State and showcases student demographics, capstone
paper topics, enrollment statistics and trends, and the success of our graduates.
3aIDb13. Biomedical research at the image-guided ultrasound therapeutics laboratories. Christy K. Holland (Dept. of Internal
Medicine, Div. of Cardiovascular Health and Disease, and Biomedical Eng. Program, Univ. of Cincinnati, Cardiovascular Ctr. Rm.
3935, 231 Albert Sabin Way, Cincinnati, OH 45267-0586, Christy.Holland@uc.edu), T. Douglas Mast (Dept. of Biomedical, Chemical
and Environ. Eng., Univ. of Cincinnati, Cincinnati, OH), Kevin J. Haworth (Dept. of Internal Medicine, Div. of Cardiovascular Health
and Disease, and Biomedical Eng. Program, Univ. of Cincinnati, Cincinnati, OH), and Todd A. Abruzzo (Dept. of Radiology, Cincinnati
Children’s Hospital Medical Ctr., Cincinnati, OH)
The Image-guided Ultrasound Therapeutic Laboratories are located at the University of Cincinnati in the Heart, Lung, and Vascular
Institute, a key component of efforts to align the UC College of Medicine and UC Health research, education, and clinical programs.
These extramurally funded laboratories, directed by Prof. Christy Holland, are composed of graduate and undergraduate students, postdoctoral fellows, principal investigators, and physician-scientists with backgrounds in physics, chemistry, and biomedical engineering,
and clinical and scientific collaborators in fields including cardiology and neurosurgery. Prof. Holland’s research focuses on biomedical
ultrasound including sonothrombolysis, ultrasound-mediated drug and bioactive gas delivery, development of echogenic liposomes,
early detection of cardiovascular diseases, and ultrasound-image guided tissue ablation. Prof. Todd Abruzzo, an experienced neurointerventional radiologist, directs preclinical porcine sonothrombolysis studies. The Biomedical Ultrasonics and Cavitation Laboratory,
directed by Prof. Kevin Haworth, employs ultrasound-triggered phase-shift emulsions for image-guided treatment of cardiovascular disease, especially thrombotic disease. Imaging algorithms incorporate both passive and active cavitation detection. The Biomedical
Acoustics Laboratory, directed by Prof. T. Douglas Mast, employs ultrasound for monitoring thermal therapy, ablation of cancer and
vascular targets, transdermal drug delivery, and noninvasive measurement of tissue deformation.
3aIDb14. Acoustics research and graduate studies within the College of Engineering at the University of Nebraska—Lincoln.
Lily M. Wang, Erica E. Ryherd (Univ. of Nebraska - Lincoln, PKI 100C, 1110 S. 67th St., Omaha, NE 68182-0816, lwang4@unl.edu),
Joseph A. Turner (Univ. of Nebraska - Lincoln, Lincoln, NE), and Jinying Zhu (Univ. of Nebraska - Lincoln, Omaha, NE)
The University of Nebraska—Lincoln (UNL) offers opportunities to study and conduct research in acoustics within a number of our
graduate engineering degree programs, including (1) Architectural Engineering (AE) within the Durham School of Architectural Engineering and Construction, (2) Civil Engineering (CIVE), and (3) Mechanical and Materials Engineering (MME). Dr. Lily Wang and Dr.
Erica Ryherd (faculty in the Durham School, based at UNL’s Scott Campus in Omaha) are active in architectural acoustics and noise.
More information on the ‘Nebraska Acoustics Group’ within the Durham School may be found online at http://nebraskaacousticsgroup.org/. Dr. Jinying Zhu (faculty in CIVE, also based at UNL’s Scott Campus in Omaha) focuses on structural acoustics, using ultrasonic
waves for non-destructive evaluation of concrete structures and material. Dr. Joseph Turner (faculty in MME, based at UNL’s City Campus in Lincoln) studies ultrasound propagation through complex media for quantitative characterization of materials/microstructure
(http://quisp.unl.edu). UNL additionally hosts an active student chapter of the Acoustical Society of America, the first such chapter to be founded, in
2004. The poster will describe the graduate-level acoustics courses and lab facilities at UNL, as well as the research interests and
achievements of our faculty, graduates, and students.
3aIDb15. Acoustics-related graduate programs at the University of Minnesota. Kelly L. Whiteford (Psych., Univ. of Minnesota, 75
East River Parkway, Minneapolis, MN 55455, whit1945@umn.edu), Peggy B. Nelson (Speech-Language-Hearing Sci., Univ. of Minnesota, Minneapolis, MN), Hubert H. Lim (Biomedical Eng., Univ. of Minnesota, Minneapolis, MN), Mark Bee (Ecology, Evolution, and
Behavior, Univ. of Minnesota, St. Paul, MN), and Andrew J. Oxenham (Psych., Univ. of Minnesota, Minneapolis, MN)
The University of Minnesota offers a wide variety of graduate programs related to acoustics, primarily in the areas of Speech Communication, Psychological and Physiological Acoustics, and Animal Bioacoustics. Degree programs include Psychology (Ph.D.),
Speech-Language-Hearing Sciences (M.A., Au.D., and Ph.D.), Biomedical Engineering (M.S. and Ph.D.), Ecology, Evolution, and
Behavior (Ph.D.), and Neuroscience (Ph.D.). Faculty across departments have a shared interest in understanding how the ear and brain
work together to process sound and in developing new technologies and approaches for improving hearing disorders. The university
offers a number of resources for pursuing research related to these topics. The Center for Applied and Translational Sensory Science
(CATSS) provides opportunities for utilizing interdisciplinary collaborations to better understand sensory-related impairments, including
hearing loss and low vision. Within CATSS is the Multi-Sensory Perception Lab, which houses shared equipment, including eye trackers and electroencephalography. The Center for Magnetic Resonance Research houses several ultrahigh-field magnets, while the Center
for Neural Engineering and affiliated faculty labs also house multiple neuromodulation and neurorecording devices to interact with and
monitor neural activity in humans and animals. Students and faculty gather monthly for the Acoustic Communication Seminar, where
labs alternate in presenting their research findings and identifying new collaborative research directions.
3aIDb16. Acoustics education opportunities at UMass Dartmouth. David A. Brown (ECE, Univ. of Massachusetts Dartmouth, 151
Martine St., Fall River, MA 02723, dbAcoustics@cox.net), John R. Buck, Karen Payton, Paul J. Gendron, and Antonio Costa (ECE,
Univ. of Massachusetts Dartmouth, North Dartmouth, MA)
The University of Massachusetts Dartmouth has a long tradition of research and course offerings in Acoustics and Signal Processing
within the Department of Electrical and Computer Engineering dating back to the 1960s. The department has four full-time faculty with
funded research programs in acoustics-related areas as well as unique research/calibration facilities including a large underwater acoustic
test facility and three fully autonomous underwater vehicles. UMass Dartmouth offers B.S., M.S., and Ph.D. degrees in Electrical Engineering with research opportunities and course offerings in fundamentals of acoustics, underwater acoustics, electro-acoustic transducers, medical ultrasonics, signal processing, speech processing, communications, and detection theory. The department works closely
with the Center for Innovation and Entrepreneurship (CIE) and many local companies and government research centers. The poster will
highlight course offerings and research opportunities. http://www.umassd.edu/engineering/ece/.
3681
J. Acoust. Soc. Am., Vol. 141, No. 5, Pt. 2, May 2017
Acoustics ’17 Boston
3aIDb17. Acoustics and ocean engineering at the University of Rhode Island. Lora J. Van Uffelen, Gopu R. Potty, and James H. Miller (Ocean Eng., Univ. of Rhode Island, 215 South Ferry Rd., 213 Sheets Lab., Narragansett, RI 02882, loravu@uri.edu)
Acoustics is one of the primary areas of emphasis in the Ocean Engineering Department at the University of Rhode Island, one of
the oldest Ocean Engineering programs in the United States. The program offers Bachelor's, Master's (thesis and non-thesis options), and
Ph.D. degrees in Ocean Engineering. These programs are based at Narragansett Bay, providing access to a living laboratory for student
learning. Some key facilities of the program are the 100-foot-long wave tank, acoustics tank, and R/V Endeavor, a UNOLS oceanographic research vessel operated by the University of Rhode Island. At the graduate level, students are actively involved in research
focused in areas such as acoustical oceanography, propagation modeling, geoacoustic inversion, marine mammal acoustics, ocean acoustic instrumentation, and transducers. An overview of classroom learning and ongoing research will be provided, along with information regarding requirements for entry into the program.
3aIDb18. Graduate Studies in Acoustics at Northwestern University. Jennifer Cole, Matthew Goldrick, and Ann Bradlow (Dept. of
Linguistics, Northwestern Univ., 2016 Sheridan Rd., Evanston, IL 60208, jennifer.cole1@northwestern.edu)
Northwestern University has a vibrant and highly interdisciplinary community of acousticians. Of the 13 ASA technical areas, 3
have strong representation at Northwestern: Speech Communication, Psychological and Physiological Acoustics, and Musical Acoustics.
Sound-related work is conducted across a wide range of departments including Linguistics (in the Weinberg College of Arts and Sciences), Communication Sciences & Disorders, and Radio/Television/Film (both in the School of Communication), Electrical Engineering
& Computer Science (in the McCormick School of Engineering), Music Theory & Cognition (in the Bienen School of Music), and Otolaryngology (in the Feinberg School of Medicine). In addition, The Knowles Hearing Center involves researchers and labs across the
university dedicated to the prevention, diagnosis, and treatment of hearing disorders. Specific acoustics research topics across the university include speech perception and production across the lifespan and across languages; dialect and socio-indexical properties of speech; sound design; machine perception of music and audio; musical communication; the impact of long-term musical experience on auditory encoding and representation; auditory perceptual learning; and the cellular, molecular, and genetic bases of hearing function.
We invite you to visit our poster to learn more about the “sonic boom” at Northwestern University!
3aIDb19. Graduate research and education in architectural acoustics at Rensselaer Polytechnic Institute. Ning Xiang, Jonas
Braasch, and Todd Brooks (Graduate Program in Architectural Acoust., Rensselaer Polytechnic Inst., Greene Bldg., 110 8th St., Troy,
NY 12180, xiangn@rpi.edu)
The Graduate Program in Architectural Acoustics has advanced steadily since its inception in 1998, with an ambitious mission of educating future experts and leaders in architectural acoustics amid the rapid pace of change in the fields of architectural, physical, and psychoacoustics. In recent years, the program's pedagogy using "STEM" (science, technology, engineering, and mathematics) methods has proven effective and productive, including intensive, integrative hands-on experimental components that unite architectural acoustics theory and practice. The graduate program has recruited graduate students from a variety of disciplines, including
individuals with B.S., B.Arch. or B.A. degrees in Mathematics, Physics, Engineering, Architecture, Electronic Media, Sound Recording,
Music, and related fields. Graduate students in this pedagogy and research environment have succeeded in the rapidly changing field. RPI's Graduate Program in Architectural Acoustics has since graduated more than 120 students with M.S. and Ph.D.
degrees. Under the guidance of the faculty members they have also actively contributed to the program’s research in architectural acoustics, communication acoustics, psychoacoustics, signal processing in acoustics, as well as our scientific exploration at the intersection of
cutting edge research and traditional architecture/music culture. This paper illuminates the evolution and growth of the graduate
program.
3aIDb20. Graduate studies in Acoustics and Noise Control in the School of Mechanical Engineering at Purdue University. Patricia Davies, J. S. Bolton, and Kai M. Li (Ray W. Herrick Labs., School of Mech. Eng., Purdue Univ., 177 South Russell St., West Lafayette, IN 47907-2099, daviesp@purdue.edu)
The acoustics community at Purdue University will be described with special emphasis on the graduate program in Mechanical Engineering. Around 30 Purdue faculty study aspects of acoustics and closely related disciplines and so there are many classes to choose
from as graduate students structure their plans of study to complement their research activities and to broaden their understanding of
acoustics. In Mechanical Engineering, the primary emphasis is on understanding noise generation, noise propagation, and the impact of
noise on people, as well as development of noise control strategies, experimental techniques, and noise impact prediction tools. The
noise control research is conducted at the Ray W. Herrick Laboratories, which houses several large acoustics chambers that are designed
to facilitate testing of a wide array of mechanical systems, reflecting the Laboratories' long history of industry-relevant research. Complementing the noise control research, Purdue has vibrations, dynamics, and electromechanical systems research programs and is home to a
collaborative group of engineering and psychology professors who study human perception and its integration into engineering design.
There are also very strong ties between ME acoustics faculty and faculty in Biomedical Engineering and Speech Language and Hearing
Sciences.
TUESDAY MORNING, 27 JUNE 2017
ROOM 200, 9:15 A.M. TO 12:20 P.M.
Session 3aMU
Musical Acoustics: Session in Honor of Thomas D. Rossing
Daniel A. Russell, Cochair
Graduate Program in Acoustics, Pennsylvania State University, 201 Applied Science Bldg., University Park, PA 16802
Andrew C. Morrison, Cochair
Joliet Junior College, 1215 Houbolt Rd, Natural Science Department, Joliet, IL 60431
D. Murray Campbell, Cochair
School of Physics and Astronomy, University of Edinburgh, James Clerk Maxwell Building, Mayfield Road,
Edinburgh EH9 3JZ, United Kingdom
Chair’s Introduction—9:15
Invited Papers
9:20
3aMU1. Six decades of inspiration: Thomas D. Rossing, internationally renowned musical acoustician, writer, educator, and
friend. D. Murray Campbell (Acoust. and Audio Group, Univ. of Edinburgh, James Clerk Maxwell Bldg., Peter Guthrie Tait Rd., Edinburgh EH9 3FD, United Kingdom, d.m.campbell@ed.ac.uk)
Tom Rossing occupies a very special place in the international community of researchers in musical acoustics. Generations of undergraduate and postgraduate students have been directed to The Physics of Musical Instruments by Fletcher and Rossing as the most helpful and authoritative textbook in this interdisciplinary field. Specialists in the study of percussion and stringed instruments have enjoyed
and profited from the textbooks which he has written and edited on these topics, and from the many research papers which he has published. He has traveled widely, researching and teaching not only in the United States but also in Australia, England, France, Germany,
the Netherlands, Scotland, South Korea, and Sweden. He has been a stalwart supporter of the series of International Symposia in Musical
Acoustics, and has made an outstanding contribution to the promotion of education in acoustics. Musical acousticians worldwide owe
Tom a great debt of gratitude for his collaboration, inspiration, and friendship.
9:40
3aMU2. Professional and personal interactions with Tom Rossing. William J. Strong (Phys. and Astronomy, Brigham Young Univ.,
Provo, UT 84602, strongw.byu@gmail.com)
This talk was motivated by time spent in Australia at the University of New England in Armidale, NSW, during the latter part of
1980. My wife and I and our three youngest sons spent five months there while I carried out research on the flute with Neville Fletcher
and Ron Silk. As you might suspect from the title of the talk, Tom was also at the University of New England and our times there overlapped. Tom was solicitous of our sons, which enriched their Australian experience. Tom's research was concerned with acoustical aspects of percussion instruments. Though discussions of our respective research projects were limited, our shared experience in Australia
led to our continuing interaction during the ensuing years. A major part of the talk will consider interactions with Tom at other times
and in other places.
10:00
3aMU3. “Mode studies in musical instruments,” a journey with Tom. Uwe J. Hansen (Phys., Utah Valley Univ., 64 Heritage Dr.,
Terre Haute, Indiana 47803-2374, uwe.hansen@indstate.edu)
For me, that journey began when Tom agreed to have me work with him in 1984. While Tom was busy at the Minneapolis ASA
meeting, I learned about holographic interferometry at Dick Peterson's laboratory at Bethel College, while mode mapping a guitar, for which Tom had designed a support rack that isolated the front and back plates, enabling us to record their principal resonances. Immediately following that experience, we went on a whirlwind tour, meeting with some of the "Greats" in musical acoustics. Starting with Carleen Hutchins, we then met with Norman Pickering in Southampton, did some modal analysis guitar studies at the Steinway laboratories with William Y. Strong, and later visited with Gaby Weinreich. We concluded the tour with a visit to Gila Eban's guitar building studio. In the course of studying two-tone Chinese bells, we used judicious mirror placements to observe the bell modes in three dimensions. Eventually, the wet plates were replaced by Karl Stetson's computer-based device, enabling us to study many instruments, such as Caribbean steel pans, much more efficiently. All these experiences led to worldwide opportunities, and a life-long, cherished
friendship.
10:20
3aMU4. Tom Rossing’s influence on our understanding of the acoustics of wind instrument mechanisms. Jonas Braasch (School of
Architecture, Rensselaer Polytechnic Inst., 110 8th St., Troy, NY 12180, braasj@rpi.edu)
During his outstanding career, Tom Rossing studied the acoustics of every major class of musical instruments. During my first trip to
the United States, to participate as a student in the ASA Columbus 1999 meeting, I was given the opportunity to measure my free-reed
pipe in Tom Rossing’s lab using his novel laser vibrometer. Like many others, he soon also became my role model for his broad knowledge, scientific depth, and his sheer pragmatism (that enabled him to write so many groundbreaking books). In particular, our generation
greatly benefits from his gift for accurately describing the key fundamentals of complex acoustic phenomena based on a traditional
understanding of physics and the use of groundbreaking measurement techniques such as laser vibrometry and TV holography. Tom
Rossing never separated the hard work required to understand the underlying mechanisms of musical instruments from the cultural importance and delight of listening to and performing music. And so, he reminded everybody, right after 9/11 during the ISMA 2001 conference, that music brings people and cultures together and that our research is needed right now more than ever.
10:40–11:00 Break
11:00
3aMU5. Sound and shape of pyeongyoung, stone chime and pyeonjong, bell chime. Junehee Yoo (Phys. Education, Seoul National
Univ., Kwanak-ro1, Kwanak-gu, Seoul 151-742, South Korea, yoo@snu.ac.kr)
My main research with Tom concerned Korean traditional stone chimes, pyeongyoung, and bell chimes, pyeonjong, which have long been allocated as a set of instruments in Korean traditional court music. The vibrational mode frequencies and the frequency ratios of the modes in modern pyeongyoung and pyeonjong have been studied. The modal shapes of stones and bells were mapped by TV holography, scanned with an accelerometer, and animated by STAR. The vibrational mode frequencies and mode shapes of ancient stone chimes were analyzed, and their dependence on stone shape was studied using finite element methods. The dependence of mode shapes and frequencies on vertex angle and base curvature suggests that the geometries used in late Chinese bianqing and Korean pyeongyeong may have been selected to give the best sound. Based on the research with Tom, I could extend the study to reconstruct the whangjongeum, or scale, in Korean traditional instruments. My Korean group has measured the frequencies of 261 historical pyeongyeong stones and 236 pyeonjong bells, mainly from the 14th to 19th centuries. The frequencies and the frequency ratios of the modes were analyzed by era of construction.
11:20
3aMU6. Musical acoustics and science education: What I learned from Thomas D. Rossing. Andrew C. Morrison (Natural Sci.
Dept., Joliet Junior College, 1215 Houbolt Rd., Joliet, IL 60431, amorrison@jjc.edu)
My professional journey into teaching and working on musical acoustics projects owes a great amount to the mentorship I was fortunate to receive from Thomas D. Rossing. I had the great privilege to meet Dr. Rossing as an undergraduate when my advisor arranged
for us to make some measurements in Rossing’s acoustics lab at Northern Illinois University. I expressed interest in attending NIU to
study with him and was thrilled to realize that dream when I started (and completed) my Ph.D. program under Dr. Rossing's supervision. I frequently think about the many ways in which I learned what was important for student learning about science, especially acoustics, from my time as Dr. Rossing's teaching assistant. Throughout the years of knowing him, he has continuously been an inspiration to
my way of thinking about how I teach and how I mentor my students. His pursuit of knowledge and love of learning have been instilled
in my teaching and in the musical acoustics work that I do with students. In this talk I will highlight the various ways in which my career
and life have been enriched by the example that Thomas Rossing has set.
11:40
3aMU7. The Rossing factor: How I benefited from being his student. Daniel A. Russell (Graduate Program in Acoust., Penn State
Univ., 201 Appl. Sci. Bldg., University Park, PA 16802, drussell@engr.psu.edu)
From 1988 to 1991, I had the privilege of pursuing a master's degree with Dr. Thomas D. Rossing, exploring a thesis on the nonlinear behavior of piano hammers at Northern Illinois University. I later earned a Ph.D. degree in Acoustics from Penn State, but my experience as Rossing's student was instrumental in helping me develop a diverse set of research skills and interests which have proved extremely beneficial throughout my academic career. Dr. Rossing involved me in several side projects (like optical holographic interferometry, experimental modal analysis, and mode scanning) that were extracurricular to my thesis research but which led to several publications on a variety of acoustics topics, both while his student and later on my own as a physics and acoustics faculty member. The
experimental skills acquired in Rossing’s acoustics laboratory generated the inspiration for many of my own classroom demonstrations
and research projects with my own students. This talk will summarize the many ways that Dr. Rossing's example as a teacher, researcher, and author has had a significant influence on my own academic career and success.
12:00–12:20 Panel Discussion
TUESDAY MORNING, 27 JUNE 2017
ROOM 203, 9:15 A.M. TO 11:20 A.M.
Session 3aNSa
Noise: Mechanical System Noise
Eric L. Reuter, Cochair
Reuter Associates, LLC, 10 Vaughan Mall, Suite 201A, Portsmouth, NH 03801
Shiu-Keung Tang, Cochair
Department of Building Services Engineering, The Hong Kong Polytechnic University, Hong Kong, Hong Kong
Chair’s Introduction—9:15
Invited Paper
9:20
3aNSa1. Results of a laboratory round robin for ASTM International Standard E477-13 for duct silencers. Jerry G. Lilly (JGL
Acoust., Inc., 5266 NW Village Park Dr., Issaquah, WA 98027, jerry@jglacoustics.com)
ASTM International initiated a laboratory round robin in November 2014 to support the development of a precision and bias statement for the recently revised standard E477-13. Two different silencer designs were constructed for testing by five participating laboratories. This paper will discuss the revised test method and the test results, and identify modifications that should be considered in future revisions of the standard.
Contributed Papers
9:40
3aNSa2. Machinery noise in a commercial building. Sergio Beristain
(IMA, ESIME, IPN, P.O.Box 12-1022, Narvarte, Mexico City 03001, Mexico, sberista@hotmail.com)
A commercial building was designed for the installation of offices, open public commercial sites, and exhibits, with multiple fixed small and large stores and wide corridors, to include small temporary exhibits or small booths for commercial or information purposes. Noise generated by all of the building services machinery, such as temperature control and hygiene systems, was an issue, so ventilating systems, water pumps, garbage disposal, and the like had to be installed in a convenient way in order to avoid intrusive noise for either workers or customers. The building is a large solid concrete structure where every effort had to be made to properly insulate all the machinery producing noise and vibrations in order to provide all the necessary services and, at the same time, adequate acoustic comfort.
10:00
3aNSa3. Rolling noise modeling in buildings. Fabien Chevillotte, François-Xavier Becot, and Luc Jaouen (Matelys, 7 rue des Maraîchers, Bât B, VAULX-EN-VELIN 69120, France, fabien.chevillotte@matelys.com)
New buildings in urban areas are divided into commercial and residential spaces. This usage has revealed critical disturbances due to the noise of trolleys making deliveries at times when the buildings are mostly occupied, e.g., early in the morning. Rolling trolleys indeed generate low-frequency vibrations (below 100 Hz) which propagate easily through the entire building structure and into upper storeys. This work presents an original model for rolling noise in buildings. The developed model is able to account for the ground surface roughness as well as the rolling wheel asperity profile. It also makes it possible to consider the mechanical impedance of the ground, including any flooring noise treatment. It is shown that the model is able to
correctly reproduce the measured level of vibrations and measured noise
levels. It is also shown to accurately predict the sensitivity to different types of rolling noise and to floorings with various properties, based on a single-layer or multi-layer construction.
10:20
3aNSa4. The inlets and outlets of ducted silencer selection. Thomas Kaytt
and Alana DeLoach (Vibro-Acoust. Consultants, 490 Post St., Ste. 1427,
San Francisco, CA 94102, tom@va-consult.com)
The passive in-duct attenuator is an established, but often misused,
staple of the modern HVAC design toolbox. All too often, a quick silencer
selection without thought to the system at large can cause havoc with a
noise sensitive application. How often have you seen a “typical” silencer
scheduled throughout a project without regard to the specifics of the rooms
being treated? This paper will review various types of silencers as well as
location & selection methods to balance noise control needs against
airflow capacities, air quality, and space restrictions. Case studies will be
presented to illustrate some of the classic silencer design and installation
errors.
10:40
3aNSa5. Noise control for a public transport interchange—A Hong
Kong experience. Shiu-Keung Tang (Bldg. Services Eng. Dept., Hong
Kong Polytechnic Univ., Hong Kong, Hong Kong, shiu-keung.tang@polyu.
edu.hk)
Recently, a large public transport interchange (PTI) has been proposed to be built near a new public housing estate in a relatively remote area of Hong Kong. The aim is to provide convenient transport to the residents of the housing estate. In the design stage of the whole development, it was found that idling buses inside the PTI are likely to create a noise problem, as the PTI is designed to be built in the open air so as not to block the views of the
residents. There are also already nearby shopping malls and markets. In view of aesthetics, the PTI will be built using mainly glass panels and is designed to be ventilated by natural wind. There will be openings on the roofs of the PTI as heated-air exhausts. In order to reduce noise transmission towards the nearby housing blocks, the interior of the PTI and the openings are lined with sound absorption. A ray-tracing simulation was carried out. With the appropriate opening orientations and amount of sound absorption, the noise levels at the building façades of concern can be kept under 48 dBA.

11:00

3aNSa6. Pool equipment mechanical noise impact. Walid Tikriti (Acoustonica, LLC, 33 Pond Ave., Ste. 201, Brookline, MA 02445, wtikriti@acoustonica.com)
The paper discusses noise impact from mechanical systems in residential buildings. The project presented concerns noise from pool equipment mechanical systems. Sound readings were taken and recorded under different settings. Sound analysis and noise mitigation solutions will be discussed. The project is located on Paradise Island, Nassau, Bahamas.
TUESDAY MORNING, 27 JUNE 2017
ROOM 202, 9:15 A.M. TO 12:00 NOON
Session 3aNSb
Noise, Education in Acoustics, ASA Committee on Standards, and Psychological and Physiological
Acoustics: Using Acoustic Standards in Education
William J. Murphy, Cochair
Hearing Loss Prevention Team, Centers for Disease Control and Prevention, National Institute for Occupational Safety
and Health, 1090 Tusculum Ave., Mailstop C-27, Cincinnati, OH 45226-1998
Lawrence L. Feth, Cochair
Speech and Hearing Science, Ohio State University, 110 Pressey Hall, 1070 Carmack Road, Columbus, OH 43210
Massimo Garai, Cochair
DIN, University of Bologna, Viale Risorgimento 2, Bologna 40136, Italy
Chair’s Introduction—9:15
Invited Papers
9:20
3aNSb1. Incorporating measurement standards in an advanced acoustics laboratory course. Kent L. Gee (Dept. of Phys. and Astronomy, Brigham Young Univ., N243 ESC, Provo, UT 84602, kentgee@byu.edu)
In an advanced acoustics laboratory course at Brigham Young University, students are introduced to ANSI measurement standards
in the context of sound power. They are introduced to the anatomy of a typical acoustics standard and then plan and carry out sound
power measurements of an electric leaf blower using both reverberation chamber and sound intensity methods. The students are required
to write a technical memorandum describing (a) the blower’s radiated sound power levels over an appropriate frequency range, as
obtained with the two methods; (b) setup documentation and deviations from the standards’ recommended practices; and (c) how any
deviations might have contributed to discrepancies between the sound power levels obtained with the two methods. In this talk, a
description of the experience from the faculty and student perspectives is given, along with plans for future improvements.
9:40
3aNSb2. German DIN 18041 Acoustic Quality in Rooms. Christian Nocke (Akustikbuero Oldenburg, Sophienstr. 7, Oldenburg
26121, Germany, nocke@akustikbuero-oldenburg.de)
DIN 18041 was first published in 1968 and at that time summarized a substantial body of room-acoustics knowledge on the design of everyday rooms such as classrooms, lecture halls, conference rooms, etc. DIN 18041 was first revised in 2004; a second revision was undertaken from October 2013 to mid-2015 to adapt the room-acoustic requirements to support inclusion in the field of hearing and to take into account trends in modern architecture. In addition to these technical and social aspects, DIN 18041 of 2016, with the new title "Acoustic quality in rooms — requirements, recommendations and instructions for planning," provides clarifications, additions, and deletions compared to the 2004 edition. These changes are presented and discussed. The revised DIN 18041 provides clear and unambiguous guidelines, expressed as requirements and recommendations, for everyday rooms where mutual listening and understanding, but also finding quietness, are of significant importance. The use of standards in education for educational facilities will be discussed.
10:00
3aNSb3. The role of acoustic standards in the improvement of educational spaces in Italy. Arianna Astolfi (Dept. of Energy, Politecnico di Torino, Turin, Italy), Dario D’Orazio, and Massimo Garai (DIN, Univ. of Bologna, Viale Risorgimento 2, Bologna 40136,
Italy, massimo.garai@unibo.it)
Good acoustics is one of the main requirements for indoor educational spaces, where information is transmitted from teacher to students mainly by oral communication. Mandatory regulations often do not take this aspect into account as expected, but technical standards can provide a more informed and sound reference. In Italy, a new standard is under development at UNI (the national standardization body) to provide a technical framework for the design and use of teaching rooms in schools, and a guideline has been released by AIA (the Italian Acoustical Association) to provide a comprehensive, easy-to-read guide for authorities and stakeholders on the acoustical design of schools. The underlying principles and methods will be presented and discussed, as well as the way they are taught in university courses in order to raise awareness of these topics among future engineers and architects.
10:20–10:40 Break
10:40
3aNSb4. Incorporating standards into an instrumentation class. Lawrence L. Feth (Speech and Hearing Sci., Ohio State Univ., 110
Pressey Hall, 1070 Carmack Rd., Columbus, OH 43210, feth.1@osu.edu)
First-year AuD students at Ohio State take Acoustics and Instrumentation in their first semester in the program. The course is offered
simultaneously with courses in psychoacoustics and anatomy and physiology of the auditory system, as well as the first course in assessment covering behavioral testing. It is essentially a prerequisite for the two-course sequence on hearing aids, and later assessment
courses covering middle ear measurements and ABR, as well as courses on cochlear implants and hearing conservation. Students enrolled in the course generally do not have a strong background in math and physics, but they are competent in algebra and trigonometry.
One goal of the course is to develop a conceptual understanding of the basic acoustical and electronic principles underlying electroacoustic measurements. Included within those goals is an introduction to electroacoustic standards and to the role they play in the practice of audiology. To that end, the online instruction on standards offered by ANSI and the content of a recent two-volume issue of Seminars in Hearing are incorporated into the "conventional" materials on acoustics and electronics. One unique feature of the course is the use of essay exams to test underlying concepts usually taught by "plug-and-chug" drills.
11:00
3aNSb5. Acoustic standards in audiology education. Peggy B. Nelson and Robert S. Schlauch (Univ. of Minnesota, 164 Pillsbury Dr.
Se, Minneapolis, MN 55455, peggynelson@umn.edu)
Standards are essential for the practice of audiology. Microphones, sound level meters, and all calibration equipment and procedures
depend on effective standards. Equipment and methods for assessing hearing function, and for the fitting and evaluation of sensory aids
for hearing loss all require the development and refinement of good standards. The involvement of audiologists in standards development is essential for maintaining high quality professional service. Graduate students in audiology are introduced to standards from both
American National Standards Institute (ANSI) and the International Organization for Standardization (ISO) during their graduate education at the
University of Minnesota. Particular areas include calibration, audiometry, hearing aids, cochlear implants, and noise measurement and
exposure. Methods for incorporating standards into graduate education will be discussed.
11:20
3aNSb6. Acoustic standards for education of hearing conservation and industrial hygiene professionals. William J. Murphy (Hearing Loss Prevention Team, Centers for Disease Control and Prevention, National Inst. for Occupational Safety and Health, 1090 Tusculum Ave., Mailstop C-27, Cincinnati, OH 45226-1998, wjm4@cdc.gov)
The practice of hearing conservation and industrial hygiene requires that workers' noise exposures and hearing be evaluated. The National Institute for Occupational Safety and Health has supported the development of new standards for assessing hearing and hearing-protector effectiveness, and has examined methods to best assess noise exposures in order to better understand workers' risk of developing noise-induced hearing loss. This talk will consider noise measurement using both American National Standards Institute (ANSI) and International Organization for Standardization (ISO) acoustic standards. The paper will compare and contrast the ANSI and ISO hearing-protector evaluation and rating standards. The potential for using applications on mobile devices will be discussed.
Contributed Paper
11:40
3aNSb7. Uncertainty of ANSI/ASA S12.42 Hearing Protection Device
Impulsive Measurements. Cameron J. Fackler, Elliott H. Berger, and Michael E. Stergar (3M Personal Safety Div., 7911 Zionsville Rd., Indianapolis, IN 46268, cameron.fackler@mmm.com)
ANSI/ASA S12.42 was extended in 2010 to include methods for measuring the performance of hearing protection devices (HPDs) in impulsive
noise conditions. The standard specifies the instrumentation, methods, and
data analysis required to measure impulse peak insertion loss (IPIL). IPIL is
defined as the amount by which an HPD reduces the effective peak level of an impulsive sound. To characterize HPDs whose attenuation may be level-dependent, IPIL is measured at several impulse peak sound pressure levels,
typically in the range of 130-170 dB. Factors contributing to the uncertainty
of IPIL measurements include repeatability in the generation of test
impulses, variability of the HPD samples under test and their repeated fitting
to the test fixture, and spectral properties of the impulse source and the
HPD’s attenuation. To help inform future revisions to the S12.42 standard,
we quantify IPIL measurement uncertainty for a variety of HPDs tested with
two different impulsive sound sources. End users of HPDs should be educated about the uncertainty inherent in IPIL assessments of HPD
performance.
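As a numerical illustration of the IPIL definition above (a sketch with synthetic waveforms, not the S12.42 test procedure or fixture):

```python
import numpy as np

def peak_spl(p_pa, p_ref=20e-6):
    """Peak sound pressure level (dB re 20 uPa) of a pressure waveform."""
    return 20.0 * np.log10(np.max(np.abs(p_pa)) / p_ref)

def ipil(open_ear_pa, occluded_pa):
    """Impulse peak insertion loss: reduction in the effective peak level."""
    return peak_spl(open_ear_pa) - peak_spl(occluded_pa)

# synthetic decaying tone bursts standing in for measured impulses
t = np.linspace(0.0, 0.01, 1000)
open_ear = 2000.0 * np.exp(-t / 0.002) * np.sin(2 * np.pi * 1000.0 * t)
occluded = 0.02 * open_ear  # HPD attenuating the peak by a factor of 50
print(round(ipil(open_ear, occluded), 1))  # 34.0 dB
```

The uncertainty sources listed in the abstract (impulse repeatability, fit-to-fixture variability) would appear as spread in repeated `ipil` values across trials.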
J. Acoust. Soc. Am., Vol. 141, No. 5, Pt. 2, May 2017
Acoustics ’17 Boston
TUESDAY MORNING, 27 JUNE 2017
BALLROOM A, 10:20 A.M. TO 11:40 A.M.
Session 3aNSc
Noise, Physical Acoustics, ASA Committee on Standards, and Structural Acoustics and Vibration: Aircraft
Noise and Measurements (Poster Session)
Victor Sparrow, Chair
Grad. Program in Acoustics, Penn State, 201 Applied Science Bldg., University Park, PA 16802
All posters will be on display from 10:20 a.m. to 11:40 a.m. To allow contributors in this session to see the other posters, authors of odd-numbered papers will be at their posters from 10:20 a.m. to 11:00 a.m. and authors of even-numbered papers will be at their posters
from 11:00 a.m. to 11:40 a.m.
Contributed Papers
3aNSc1. Potential changes in aircraft noise sound quality due to continuous descent approaches. Abhishek K. Sahai (Aircraft Noise and Climate
Effects (ANCE), Delft Univ. of Technol., Kluyverweg 1, Delft, Zuid-Holland 2629HS, Netherlands, a.k.sahai@tudelft.nl), Miguel Yael Pereda
Albarran (Inst. of Aerosp. Systems (ILR), RWTH Aachen Univ., Aachen,
North Rhine-Westphalia, Germany), and Mirjam Snellen (Aircraft Noise and
Climate Effects (ANCE), Delft Univ. of Technol., Delft, Netherlands)
This paper presents an analysis of how flying Continuous Descent
Approaches (CDAs) can affect the quality of sounds that aircraft produce in
airport vicinities. It is well known that CDAs present potential benefits in
terms of community noise impact with reductions in excess of 5 dBA in
peak noise levels. It is however unclear if these reductions in A-weighted
level, which is a poor predictor of perceived annoyance, also correspond to
an improvement in the quality of the aircraft sounds that reach the residents
on the ground. A real comparison can only be made by comparing the sounds an aircraft produces while flying a CDA with those it produces during a standard approach procedure. A short-range and a long-range aircraft are simulated to fly a
standard approach procedure and a CDA with 3, 4, and 5 degree glideslope
angle. The noise produced over both approach procedures is then auralized
at representative ground locations, and the sounds are analyzed for changes
in sound quality. Quantifying the changes in the aircraft sounds in terms of
sound quality metrics provides much clearer information regarding how the
sound the residents hear has changed, and if the CDAs actually result in an
improved sound quality and hence lower annoyance.
3aNSc2. Reduced aerodynamic drag concepts for acoustic liners. Christopher Jasinski (Univ. of Notre Dame, 54162 Ironwood Rd., South Bend, IN
46635, chrismjasinski@gmail.com) and Thomas Corke (Univ. of Notre
Dame, Notre Dame, IN)
The objective of this paper is to describe the development of reduced
aerodynamic drag concepts for acoustic liners in turbofan engine nacelles.
Conventional acoustic liners help aircraft meet U.S. government noise regulations; however, they are responsible for a measurable increase in the total aerodynamic drag of the aircraft. As regulations on commercial aircraft noise become stricter, additional surfaces may be covered with
acoustic liner, heightening the need for the understanding and reduction of
aerodynamic drag caused by liners. A linear force balance has been
designed at the Mach 0.6 Wind Tunnel at the University of Notre Dame to
evaluate the aerodynamic and acoustic characteristics of both conventional
and 3D-printed liner samples. Through multi-parameter characterization,
reduced drag concepts for future liners have been created. This paper will
discuss the experimental work done to develop these reduced drag concepts,
outline the future work to be done, and discuss the potential acoustic coupling effect on aerodynamic drag.
3aNSc3. Methodology for designing aircraft having optimal sound signatures. Abhishek K. Sahai, Tom van Hemelen, and Dick G. Simons (Aircraft
Noise and Climate Effects (ANCE), Delft Univ. of Technol., Kluyverweg 1,
Delft, Zuid-Holland 2629HS, Netherlands, a.k.sahai@tudelft.nl)
This paper presents a methodology with which aircraft designs can be
modified such that they produce optimal sound signatures on the ground.
Optimal sound here means sounds that are perceived as less annoying by residents living in airport vicinities. A novel design and
assessment chain has been developed which combines the aircraft design
process with an auralization and sound quality assessment capability. It is
demonstrated how different commercial aircraft can be designed, their
sounds auralized at representative locations in airport vicinities and subsequently assessed for sound quality. As sound quality is closely related to the
perceived annoyance, it is expected that designs with improved sound quality would also be perceived as less annoying. By providing feedback to
the design optimizer in terms of one of the sound quality metrics or a suitable combination thereof, the designs of aircraft can be altered to produce
potentially less annoying sounds. The paper will focus on three current aircraft and will demonstrate the application of the novel design chain to auralize and alter their sounds toward improved sound quality. The presented
methodology can also be extended to unconventional aircraft configurations
and propulsion concepts, for optimizing future aircraft sounds.
3aNSc4. Practical considerations for measuring airborne ultrasound. Isaac Harwell and Arno S. Bommer (CSTI Acoust., 16155
Park Row, Ste. 150, Houston, TX 77084, isaac@cstiacoustics.com)
When attempting to make meaningful measurements, airborne ultrasound in the 20 kHz to 100 kHz frequency range presents several significant
challenges. These include source and sensor directivity, low signal-to-noise
ratios, the inaudible nature of such sounds, and the lack of widespread literature on the subject. For each of these challenges, the practical impacts versus conventional acoustic measurements are identified and discussed.
Solutions and suggestions are presented to allow reliable and repeatable
measurements when presented with these challenges, including critical information to have prior to measurement, an assortment of techniques which
may be employed when performing these measurements, and a brief overview of useful post-processing techniques. The concepts presented in this
paper are illustrated using ultrasound measurements made in a vivarium.
TUESDAY MORNING, 27 JUNE 2017
ROOM 210, 9:15 A.M. TO 12:20 P.M.
Session 3aPA
Physical Acoustics and Noise: Eco-acoustics: Acoustic Applications for Green Technologies and
Environmental Impact Measurements
JohnPaul R. Abbott, Cochair
Department of Physics and Astronomy, National Center for Physical Acoustics, University of Mississippi, 1 Coliseum Dr.,
Room 1044, Oxford, MS 38677
Andre Fiebig, Cochair
HEAD acoustics GmbH, Ebertstr. 30a, Herzogenrath 52134, Germany
Chair’s Introduction—9:15
Invited Papers
9:20
3aPA1. Elastic energy harvesting: Materials and applications. Josh R. Gladden (Phys. & NCPA, Univ. of MS, 108 Lewis Hall, University, MS 38677, jgladden@olemiss.edu)
As societies wean themselves off fossil fuel based energy sources, an “all of the above” approach will be required to satisfy expanding energy needs. This necessitates a renewed creativity from the scientific and engineering communities. Various ambient energy sources hold potential to supply power in particular applications. Solar and wind are of course well known examples, but vibrational, or
elastic, energy should not be overlooked. A key component in harnessing any ambient energy source is the transduction mechanism to
convert the energy from its original form into electrical form. In this talk, I will explore available energy densities in a number of common scenarios and novel energy conversion materials, and discuss some niche applications.
9:40
3aPA2. Sound labels for classifying environment-friendly products—Progress and challenges. Andre Fiebig (HEAD Acoust.
GmbH, Ebertstr. 30a, Herzogenrath 52134, Germany, andre.fiebig@head-acoustics.de)
People are constantly exposed to noise caused by numerous products. Environmental awareness is increasing, the harmful effect of noise on humans is well acknowledged, and at the same time the desire for acoustic comfort is rising. Thus, different emission-related
labels have been introduced as a reference informing consumers about relevant product emissions. In fact, surveys show that product sound
is already one of the top product features in purchase decisions. Thus, it seems that lower emissions from everyday products are
beneficial for all—consumers as well as manufacturers, and ultimately public health. However, most current sound labels use only
simple noise level indicators and are only optional, leading to an insignificant impact on purchase decisions and reducing the benefit of
acoustically friendly products to our acoustic environment. Moreover, several existing sound labels consider only simple minimum
specifications and neglect sound-quality-related aspects altogether. The paper provides an overview of the current status of sound labeling, focusing on the European market. Moreover, case studies are presented illustrating massive differences in the sound quality of products on
the market within the same product category. The limitations of current sound labeling approaches and initiatives are discussed in detail.
10:00
3aPA3. Environmentally friendly parametric alarm for alerting marine mammals of approaching vessels. Edmund R. Gerstein
(Charles E. Schmidt College of Sci., Florida Atlantic Univ., 777 Glades Rd., Boca Raton, FL 33486, gerstein2@aol.com) and Laura A.
Gerstein (Leviathan Legacy Inc., Boca Raton, FL)
Marine mammals are vulnerable to boat, barge, and ship collisions. Although more commonly identified and reported in busy coastal
areas, collisions are not restricted to shipping lanes or shallow water environments. A common denominator is that they all occur near
the surface. Here the acoustical laws of reflection and propagation significantly limit the ability of marine mammals to hear and locate
the sounds of approaching vessels. Acoustic measurements from controlled ship passages through vertical hydrophone arrays demonstrate the confluence of factors that pose auditory detection challenges for both whales and manatees. A highly directional, environmentally friendly, low-intensity underwater parametric alarm has been developed to mitigate these challenges and safely alert marine
mammals to approaching vessels. Its efficacy has been demonstrated with wild manatees: 95% of manatees in
alarm-on trials exhibited avoidance reactions, while only 5% of manatees in alarm-off trials exhibited any change in behavior. The
mean distance at which manatees reacted to boat approaches was also 20 m during alarm-on trials, compared with only 6 m for alarm-off
trials (F = 218.4, df = 1, p < 0.01). Counterintuitively, given speed-reduction laws, slow vessels can be more difficult for marine mammals to
detect and locate. The low-intensity, directional alarm ensures that animals can detect and locate vessels at distances sufficient to avoid injury.
[Funded by DOD Legacy Natural Resource Management Program, USFWS Permit MA063561-4.]
10:20
3aPA4. Photoacoustic spectroscopy accurately measures atmospheric 12CO2 and 13CO2 concentrations and optical properties of
carbonaceous aerosols. Keith A. Gillis (Sensor Sci. Div., National Inst. of Standards and Technol., 100 Bureau Dr., Mailstop 8360, Gaithersburg, MD 20899-8360, keith.gillis@nist.gov), Christopher Zangmeister, Zachary Reed, and James Radney (Chemical Sci. Div.,
National Inst. of Standards and Technol., Gaithersburg, MD)
Understanding of today’s climate and predictions of future climate require accurate input data to model the energy balance between the
sun’s irradiance and Earth’s atmosphere, oceans, land, and surface ice. An important driver of climate change is the absorption and scattering of sunlight by carbon-based aerosols (soot, smoke, etc.) that have widely varying, source-dependent, and history-dependent optical properties. We use a resonant photoacoustic spectrometer (PAS) to measure the optical absorption cross-section of various carbonaceous
aerosols that we generate and characterize in situ. The photoacoustic signal is directly proportional to the energy absorbed by the particles.
When combined with simultaneous measurements of the total extinction using cavity ring-down spectroscopy, we obtain the particles’
wavelength-dependent albedo (fraction of incident light scattered). Another important driver of climate change is atmospheric carbon dioxide, a greenhouse gas. With the remarkable linearity, sensitivity, and resolution of our PAS resonator, we measure the individual concentrations of 12CO2 and 13CO2 in atmospheric samples to determine the isotopic ratio 13C/12C, which gives a clue to its origin. A temperaturecontrolled portable PAS system continuously monitors the concentration of atmospheric 12CO2 on a NIST rooftop.
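The albedo combination described here amounts to simple arithmetic on the two measured coefficients; a minimal sketch with made-up values (the numbers are illustrative only, not NIST data):

```python
def single_scattering_albedo(extinction, absorption):
    """Fraction of incident light scattered: (ext - abs) / ext.

    extinction: total extinction coefficient from cavity ring-down
    absorption: absorption coefficient from the photoacoustic signal
    (both in the same unit, e.g., inverse megameters)
    """
    return (extinction - absorption) / extinction

# illustrative: strongly absorbing soot-like particles
print(single_scattering_albedo(120.0, 90.0))  # 0.25
```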
10:40
3aPA5. Using correlated noise of turbulent flow in a long-wavelength acoustic flowmeter to measure the average flow speed. John
Paul R. Abbott (Dept. of Phys. and Astronomy, National Ctr. for Physical Acoust., Univ. of MS, 1 Coliseum Dr., Rm. 1044, Oxford, MS
38677, johnpaul.abbott@gmail.com), Keith A. Gillis, Michael Moldover (National Inst. of Standards and Technol., Gaithersburg, MD),
and Lee Gorny (Mountain View, CA)
Current methods to measure the flow of gaseous emissions from coal-burning power plant smokestacks have uncertainties of 5% to
20%, which is unsuitable if a carbon pricing program is implemented. As part of its Greenhouse Gas and Climate Science Measurements
Program, the Fluid Metrology Group at the National Institute of Standards and Technology (NIST) is investigating methods to reduce
the uncertainty of flow measurements from smokestacks. In particular, NIST’s scale model long-wavelength acoustic flowmeter
(LWAF) uses low-frequency plane waves to measure the average axial flow speed, V, of turbulent fluid flow in a duct with an uncertainty
of 1%. To apply this technology to smokestacks, we are investigating cross-correlations of low-frequency flow noise. The spectral density of the measured flow noise is consistent with fluctuations smaller than the duct diameter D for f >> V/D. The amplitude and width
of the correlation peak for broadband flow noise is shown to be dependent on V, and a model of this dependence is forthcoming. Our current work, developing this model, includes filtering the broadband data to determine phase shifts, modeling the effects of the radiation
impedance, and examining effects on the broadband flow noise spectra. The results of these investigations are presented.
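The core idea, estimating the mean flow speed from the lag of the cross-correlation peak of flow noise seen at two sensors, can be sketched as follows (synthetic broadband noise; the sensor spacing, speed, and sample rate are invented, not NIST's LWAF parameters):

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 10_000.0        # sample rate, Hz (assumed)
L = 0.5              # sensor separation along the duct, m (assumed)
V_true = 10.0        # true mean convection speed, m/s
delay = int(round(L / V_true * fs))   # convection delay in samples

n = 50_000
s = rng.standard_normal(n)                           # broadband flow noise
upstream = s + 0.3 * rng.standard_normal(n)          # sensor 1
downstream = np.roll(s, delay) + 0.3 * rng.standard_normal(n)
downstream[:delay] = 0.0                             # drop wrapped-around samples

# FFT-based cross-correlation; the peak lag is the convection time
nfft = 2 * n
X = np.fft.rfft(upstream, nfft)
Y = np.fft.rfft(downstream, nfft)
xcorr = np.fft.irfft(Y * np.conj(X), nfft)
lag = int(np.argmax(xcorr[: n // 2]))                # positive lags only
V_est = L / (lag / fs)
print(V_est)  # recovers ~10 m/s
```

The width and amplitude of the correlation peak, which the abstract relates to V, would be read off the shape of `xcorr` around `lag`.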
11:00
3aPA6. Challenges in practical operational testing of a nuclear powered thermoacoustic sensor. James A. Smith (Nuclear Sci. and
Technol., Idaho National Lab., P.O. Box 1625, Idaho Falls, ID 83415, James.Smith@INL.Gov), Steven L. Garrett (Penn State, State
College, PA), Andrew Bascom (Penn State, Idaho Falls, PA), Brenden Heidrich (Nuclear Sci. and Technol., Idaho National Lab., Idaho
Falls, ID), and Michael Heibel (Westinghouse, Church Hill, PA)
The world’s first nuclear powered, wireless power and temperature sensor was demonstrated in Penn State’s Breazeale Nuclear Reactor during the last week of September 2015. The sensor consisted of a thermoacoustic heat engine powered by nuclear fission designed
to acoustically telemeter temperature and neutron flux information. The acoustic frequency of operation and the amplitude of the acoustic signal were proportional to the temperature and the reactor power respectively. The proof-of-concept tests were conducted in the
research reactor twice daily over five days. Sensor performance was as expected with the exception that the amplitude of the acoustic
signal diminished after each test. In this paper we will present our “wet sock” theory that a seal weld isolating the thermal insulation
from the reactor coolant at the hot end of the thermoacoustic sensor failed early in the testing. This allowed water to be drawn in each
time the thermoacoustic sensor cooled down, reducing the efficiency of the insulation and therefore the sensor output. Thermometric
data will be presented that supports our hypothesis. The result of this testing validates the resilience of the thermoacoustic sensor to
adverse conditions present in the core of a nuclear reactor even when degraded. Lessons learned in the initial testing will be carried forward to the planned Advanced Test Reactor experiments in 2017.
Contributed Papers
11:20
3aPA7. Vibro-acoustic imaging development for microstructure characterization and metrology. James A. Smith, Eric D. Larsen, and Larry D.
Zuck (Nuclear Sci. and Technol., Idaho National Lab., P.O. Box 1625,
Idaho Falls, ID 83415, James.Smith@INL.Gov)
Vibro-acoustics (VA) is being developed into a portable scanning infrastructure for a novel material characterization technique focused on nuclear
applications to characterize fuel, cladding materials, and structures at the
Idaho National Laboratory (INL). The proposed VA technology is based on
ultrasound and acoustic waves; however, it provides information beyond
what is available from the traditional ultrasound techniques and can expand
the knowledge on nuclear material characterization and microstructure evolution. VA is a three-dimensional (3D) imaging modality based on ultrasound-stimulated acoustic emission. VA uses the force caused by the
beating of two frequencies to generate an acoustic emission signal. Due to
absorption or reflection, the energy density in the object at an acoustic focal
point changes to produce a “radiation force.” This force locally vibrates the
object, which results in an acoustic field that depends on the characteristics
of the object at that point. This acoustic field is detected for every point in
the object; the resulting data is used to make an image of the object’s mechanical properties. This paper will focus on the development of a portable
scanning system to image in situ the microstructure of materials used in nuclear applications.
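The two-frequency beating that generates the radiation force can be illustrated numerically (the tone frequencies and sample rate are invented): squaring the two-tone pressure, a proxy for intensity, exposes a component at the difference frequency.

```python
import numpy as np

fs = 50e6                              # sample rate, Hz (assumed)
t = np.arange(int(1e-3 * fs)) / fs     # 1 ms of signal
f1, f2 = 3.00e6, 3.05e6                # two ultrasound tones (assumed)
p = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)

# the radiation force follows intensity ~ p**2, which contains a
# low-frequency component at the difference frequency f2 - f1
spectrum = np.abs(np.fft.rfft(p ** 2))
freqs = np.fft.rfftfreq(len(t), 1.0 / fs)
band = (freqs > 0) & (freqs < 1e6)     # look below the ultrasound band
beat = freqs[band][np.argmax(spectrum[band])]
print(beat)  # 50000.0 Hz = f2 - f1
```

It is this low-frequency component, here 50 kHz, that locally vibrates the object and produces the detected acoustic emission.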
11:40
3aPA8. Acoustic measurement infrastructure to enable the enhancement of nuclear reactor efficiency and safety. Vivek Agarwal and James A. Smith (Nuclear Sci. and Technol., Idaho National Lab., P.O. Box 1625, Idaho Falls, ID 83415, James.Smith@INL.Gov)
Nuclear research reactors are used to test the efficiency and safety of new technology, such as material or fuel samples under prototypic commercial conditions, operation of reactor prototypes, safety studies, assessment of neutron parameters, etc. Many experiments performed in research reactors require in-situ measurements to monitor the progress and performance of conducted tests. The core of any nuclear reactor presents a particularly harsh environment for sensors and instrumentation. The reactor core also imposes challenging constraints on signal transmission from inside the reactor core to outside of the reactor vessel. In this paper, an acoustic measurement infrastructure (AMI) installed at the Advanced Test Reactor (ATR), located at Idaho National Laboratory, is presented. The AMI consists of ATR in-pile structural components, coolant, acoustic receivers, primary coolant pumps, a data-acquisition system, and signal processing algorithms. Intrinsic and cyclic acoustic signals generated by the operation of the primary coolant pumps are collected using acoustic receivers and processed. The characteristics of the intrinsic signal can indicate the process state of the ATR during operation (i.e., real-time measurement). The innovation of AMI can be extended to collect information on other phenomena, such as fuel motion, individual fuel rod vibration, loose parts, thermal expansion, and flow blockage, that occur inside an operating nuclear reactor.
12:00
3aPA9. Optimization of acoustic absorption by green walls made of foliage and substrate. Emmanuel Attal, Nicolas Côte (ISEN, IEMN UMR CNRS 8520, Lille, France), Takafumi Shimizu (Daiwa House Industry, Central Res. Lab., Nara City, Japan), and Bertrand Dubus (ISEN, IEMN UMR CNRS 8520, 41 boulevard Vauban, Lille cedex 59046, France, bertrand.dubus@isen.fr)
Green walls may absorb sound and contribute to noise reduction in urban areas. Recent experiments demonstrate that a foliage layer placed above a substrate layer may lead to a significant increase of the acoustic absorption coefficient in a broad frequency range. However, the physical origin of this improvement remains unclear. In this work, measurements are carried out in an impedance tube on foliage, substrate, and foliage/substrate samples using the three-microphone two-load method. The acoustic absorption coefficient and surface specific impedance are measured in the rigid-backing condition between 100 and 1000 Hz. The effective speed of sound and characteristic impedance are also experimentally determined for foliage and substrate. For foliage/substrate samples, good agreement is obtained between measured acoustic absorption coefficients and those calculated using the effective properties of the foliage and substrate layers and matrix manipulations. Analysis of the results reveals that the absorption coefficient spectrum is mainly explained by the thickness resonances of the sample and by the impedance matching between air and substrate provided by the foliage.
TUESDAY MORNING, 27 JUNE 2017
ROOM 311, 9:15 A.M. TO 12:20 P.M.
Session 3aPPa
Psychological and Physiological Acoustics: Auditory Cognition and Scene Analysis in Complex Environments
Barbara Shinn-Cunningham, Cochair
Boston University, 677 Beacon St., Boston, MA
Janina Fels, Cochair
Institute of Technical Acoustics, RWTH Aachen University, Neustr. 50, Aachen 52074, Germany
Volker Hohmann, Cochair
Medical Physics, Universität Oldenburg, Postfach, Oldenburg 26111, Germany
Chair’s Introduction—9:15
Invited Papers
9:20
3aPPa1. Examining auditory selective attention in reverberant environments. Josefa Oberem (Medical Acoust. Group, Inst. of Tech. Acoust., RWTH Aachen Univ., Kopernikusstrasse 5, Aachen 52074, Germany, job@akustik.rwth-aachen.de), Julia Seibold, Iring Koch (Cognit. and Experimental Psych., Inst. for Psych., RWTH Aachen Univ., Aachen, Germany), and Janina Fels (Medical Acoust. Group, Inst. of Tech. Acoust., RWTH Aachen Univ., Aachen, Germany)
Using a well-established binaural-listening paradigm, the ability to intentionally switch auditory selective attention was examined under anechoic, low reverberation (0.8 s), and high reverberation (1.75 s) conditions. Twenty-four young, normal-hearing subjects were tested in a within-subject design to analyze influences of the reverberation times. Spoken word pairs by two speakers were presented
simultaneously to subjects from two of eight azimuth positions. The stimuli were word pairs that consisted of a single number word (i.e.,
1 to 9) followed by the German word for either “up” or “down.” Guided by a visual cue prior to auditory stimulus onset indicating
the position of the target speaker, subjects were asked to identify whether the target number was numerically smaller or greater than five
and to categorize the direction of the second word. Switch costs (i.e., reaction time differences between a position switch of the target
relative to a position repetition) were larger for the high reverberation condition. Furthermore, the error rates were highly dependent on
reverberation times and interacted with the congruency effect (i.e., stimuli spoken by target and distractor may evoke the same answer
(congruent) or different answers (incongruent)), indicating larger congruency effects in higher reverberation.
9:40
3aPPa2. Measuring auditory spatial perception in realistic environments. Virginia Best (Dept. Speech, Lang. and Hearing Sci., Boston Univ., 635 Commonwealth Ave., Boston, MA 02215, ginbest@bu.edu), Jörg M. Buchholz, and Tobias Weller (National Acoust.
Labs., Macquarie University, NSW, Australia)
While much is known about how well listeners can locate single sound sources under ideal conditions, it remains unclear how this
ability relates to the more complex task of spatially analyzing realistic acoustic environments. There are many challenges in measuring
spatial perception in realistic environments, including generating simulations that offer a level of experimental control, dealing with the
presence of energetic and informational masking, and designing meaningful behavioral tasks. In this work we explored a new method to
measure spatial perception in one realistic environment. A large reverberant room was simulated using a loudspeaker array in an
anechoic chamber. Within this room, 96 different “scenes” were generated, comprising 1-6 concurrent talkers seated at different tables.
Listeners were presented with 45-sec samples of each scene, and were required to count, locate, and identify the gender of all talkers,
using a touchscreen interface. Young listeners with normal hearing were able to reliably analyze scenes with up to four simultaneous
talkers, while older listeners with hearing loss demonstrated errors even with two talkers at a time. Localization accuracy for detected
talkers, as measured by this approach, was sensitive both to the complexity of the scene and to the listener’s degree of hearing loss.
10:00
3aPPa3. What leads to audio-visual object formation and when is it helpful? Ross K. Maddox (Biomedical Eng. and Neurosci.,
Univ. of Rochester, 1715 NE Columbia Rd., Box 357988, Seattle, WA 98195, ross.maddox@rochester.edu)
An important aspect of parsing complicated auditory scenes is forming perceptual objects that correspond to individual sound sources (e.g., a friend speaking). Once formed, an object can be selected from the acoustic mixture and attended. Auditory cognition can be
greatly enhanced by a concomitant visual stimulus (e.g., the speaking friend’s mouth movements), but the mechanism or mechanisms
underlying that benefit are not well understood. Previous studies have treated auditory and visual stimuli as separate sources of information and found that they are optimally combined. However, we have shown that auditory selective attention can be improved by a visual
stimulus that offers no information at all. Our experiments suggest that these benefits are derived from the visual stimulus being bound
to the target auditory stimulus into a single cross-modal object through temporal coherence of those stimuli’s features. In this talk we
will discuss a model of auditory-visual object formation that allows an uninformative but coherent visual stimulus to aid listening, along
with data that suggest performance benefits may be dependent on the similarity of the target and the masker.
10:20
3aPPa4. Electrophysiological markers of auditory perceptual awareness and release from informational masking. Andrew R.
Dykstra, Marnie E. Shaw, and Alexander Gutschalk (Dept. of Neurology, Ruprecht-Karls-Universität Heidelberg, Im Neuenheimer Feld
400, MEG Lab, Heidelberg 69120, Germany, andrew.dykstra@med.uni-heidelberg.de)
In complex acoustic environments, even suprathreshold sounds that are faithfully represented in the ascending auditory pathway
sometimes go unperceived, a phenomenon termed informational masking. Little is known regarding the large-scale brain dynamics giving rise to conscious perception under informational masking, particularly outside auditory cortex. To examine this question, we combined simultaneous M/EEG with trial-by-trial perceptual reports and anatomically constrained distributed source estimates. Listeners
reported the moment at which they became aware of spectrally isolated and otherwise suprathreshold tone streams rendered sometimes
inaudible by random multitone masker “clouds.” While all targets elicited early responses in auditory cortex, later auditory-cortex activity (peaking between 150 and 200 ms) was only observed for targets that were detected. A robust P3-like response with distributed sources was observed for the second detected target (immediately preceding listeners’ reports), and was greatly diminished or absent for
prior and subsequent targets. The results highlight late, distributed aspects of neuronal activity associated with task-related post-perceptual processing (i.e., task relevance), but argue against this activity underlying conscious perception, per se.
10:40–11:00 Break
11:00
3aPPa5. The role of self-motion processing in impairments of the spatial perception of auditory scenes. W. Owen Brimijoin, Graham Naylor, and Andrew McLaren (Inst. of Hearing Res. - Scottish Section, Medical Res. Council/Chief Scientist Office, MRC/CSO Fl.
3, New Lister Bldg., 16 Alexandra Parade, Glasgow G31 2ER, United Kingdom, owen.brimijoin@nottingham.ac.uk)
Natural auditory scenes typically include some motion. When this motion is the result of a moving listener, as is often the case, an
accurate spatial percept requires that the listener be able to: 1) determine sound source location over time, 2) determine the extent and
characteristics of his/her own motion, and 3) integrate these two pieces of information together. We assembled a battery of tests to evaluate these three criteria in normal, hearing-impaired, and balance-impaired listeners. The battery consists of a dynamic visual acuity test
to estimate the listener’s self-motion processing, a measure of the minimum audible movement angle to determine source-motion acuity,
and an adapted version of a previously published dynamic front/back illusion to examine the ability to combine self and sound-source
motion. We found that listeners with potential vestibular impairment were more likely to have larger differences between source-motion
acuity and self-motion integration acuity. This finding and the observed inter-subject variability together underscore the sensitive nature
of the ongoing comparison between one’s own motion and the motion of the acoustic world. [Work supported by the MRC (Grant No.
U135097131) and the Chief Scientist Office of the Scottish Government.]
11:20
3aPPa6. Evaluation of scene analysis using real and simulated acoustic mixtures: Lessons learnt from the CHiME speech recognition challenges. Jon P. Barker (Comput. Sci., Univ. of Sheffield, Regent Court, 211 Portobello, Sheffield S1 4DP, United Kingdom, j.p.barker@sheffield.ac.uk)
Computational auditory scene analysis is increasingly presented in the literature as a set of auditory-inspired techniques for estimating “Ideal Binary Masks” (IBM), i.e., time-frequency domain segregations of the attended source and the acoustic background based on
a local signal-to-noise ratio objective (Wang and Brown, 2006). This talk argues that although IBMs may be a useful stand-in when evaluating signal-processing systems, they can provide a misleading perspective when considering models of auditory cognition. First, there
is no evidence that human cognition computes or requires an explicit binary mask representation (ideal or otherwise). Second, evaluation
of an IBM requires artificially-mixed acoustic scenes in order to provide access to the ground truth mask. It is possible that systems that
work well on artificially mixed acoustic scenes will fail to generalize to real data. The danger of predicting real performance from results obtained on artificial mixtures is seen in an analysis of systems submitted to the recent CHiME distant-microphone speech recognition challenges, which evaluate on both types of data (http://spandh.dcs.shef.ac.uk/chime). It is argued that rather than presuming specific internal representations, auditory scene analysis systems are best evaluated by direct comparison of human and machine percepts, e.g.,
in the case of a speech recognition task, comparison of human and machine transcriptions at a phonetic level.
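The IBM construction the talk critiques can be sketched in a few lines. The snippet below is illustrative only (names and threshold are assumptions, not from the talk); note that it requires the target and noise signals separately, which, as the abstract points out, is only possible for artificially mixed scenes:

```python
import numpy as np

def ideal_binary_mask(target_spec, noise_spec, lc_db=0.0):
    """Ideal binary mask: 1 where the local SNR exceeds the criterion lc_db.

    target_spec, noise_spec: magnitude spectrograms (freq x time) of the
    *separately known* target and noise -- hence "ideal": ground truth is
    available only when the mixture is constructed artificially.
    """
    eps = np.finfo(float).eps  # avoid log of zero
    local_snr_db = 20.0 * np.log10((target_spec + eps) / (noise_spec + eps))
    return (local_snr_db > lc_db).astype(int)

# Toy 2x3 time-frequency grid with hypothetical magnitudes:
target = np.array([[1.0, 0.1, 0.5], [0.2, 2.0, 0.05]])
noise = np.array([[0.5, 0.5, 0.5], [0.5, 0.5, 0.5]])
mask = ideal_binary_mask(target, noise)
```

Applying the mask to the mixture spectrogram and resynthesizing yields the "segregated" target; the talk's point is that this oracle is unavailable for real recordings.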
11:40
3aPPa7. Modeling speech localization, identification, and word recognition in a multi-talker setting. Angela Josupeit, Joanna
Luberadzka, and Volker Hohmann (Medizinische Physik and Cluster of Excellence Hearing4all, Univ. of Oldenburg, Medizinische
Physik, Fakultaet VI, Universitaet Oldenburg, Oldenburg 26111, Germany, angela.josupeit@uni-oldenburg.de)
In many everyday situations, listeners are confronted with complex acoustic scenes. Despite the complexity of these scenes, they are
able to follow and understand one particular talker. This contribution presents auditory models that aim to solve speech-related tasks in
multi-talker settings. The main characteristics of the models are: (1) restriction to salient auditory features (“glimpses”); (2) usage of periodicity, periodic energy, and binaural features; and (3) template-based classification methods using clean speech models. Further classification approaches using state-space models will be discussed. The model performance is evaluated on the basis of human
psychoacoustic data [e.g., Brungart and Simpson, Perception & Psychophysics, 2007, 69(1), 79-91; Schoenmaker and van de Par, Physiology, Psychoacoustics and Cognition in Normal and Impaired Hearing, 2016, 73-81]. The model results were mostly found to be similar
to the subject results. This suggests that sparse glimpses of periodicity-related monaural and binaural auditory features provide sufficient
information about a complex auditory scene involving multiple talkers. Furthermore, it can be concluded that the usage of clean speech
models is sufficient to decode speech information from the glimpses derived from a complex scene, i.e., computationally complex models of sound source superposition are not required for decoding a speech stream.
12:00–12:20 Panel Discussion
TUESDAY MORNING, 27 JUNE 2017
ROOM 304, 10:00 A.M. TO 12:20 P.M.
Session 3aPPb
Psychological and Physiological Acoustics: Environmental Auditory Experience
Andrzej Miskiewicz, Chair
Department of Sound Engineering, Fryderyk Chopin University of Music, Okolnik 2, Warsaw 00-368, Poland
Contributed Papers
10:00
3aPPb1. Annoyance due to railway noise and vibrations: A comparison
of two methods of collecting annoyance scores. Phileas Maigrot (Univ.
Lyon, LGCB and LVA, INSA-Lyon/LVA, 25 bis Ave. Jean Capelle, Villeurbanne 69621, France, phileas.maigrot@insa-lyon.fr), Catherine Marquis-Favre (Univ. Lyon, ENTPE, Laboratoire Génie Civil et Bâtiment,
Vaulx-en-Velin, France), and Etienne Parizet (Univ. Lyon, INSA-Lyon,
Laboratoire Vibrations Acoustique, Villeurbanne, France)
An experiment was conducted in order to determine whether the method
of collecting partial and overall annoyance scores—during separate
sessions or during the same session—has an influence on the participants’
answers. The experiment used controlled noise and vibration stimuli corresponding to a train pass-by, recorded inside a house in the vicinity of a railway track. 32 participants attended 4 sessions A, B, C, and D during each of
which they were presented with 16 combinations of noise and vibrations.
They had to evaluate partial annoyance due to noise in the presence of
vibration (session A), partial annoyance due to vibrations in the presence of
noise (session B), or overall annoyance (session C). Lastly, they were asked to rate partial and overall annoyances in the same session (session D). Results show that partial and overall annoyance scores, collected simultaneously during session D, were quite similar to those collected
during dedicated sessions. Furthermore, this method is convenient because a reduced number of stimuli is presented to each participant.
10:20
3aPPb2. Annoyance response and improvement of Zwicker’s psychoacoustic annoyance model aiming at tonal noises. Guoqing Di (Inst. of
Environmental Pollution & Control Technol., Zhejiang Univ., Yuhangtang
Rd. 866, Hangzhou, Zhejiang Province 310058, China, dgq@zju.edu.cn)
Zwicker’s psychoacoustic annoyance model can be used to estimate the
relative degree of noise annoyance. However, the model does not compare the annoyance of tonal and atonal noises well. In order to improve its estimation accuracy for tonal noises, 3 groups of noise
samples were selected randomly, i.e., 27 low-frequency tonal noise samples
induced by a 1000 kV transformer with A-weighted equivalent sound pressure levels ranging from 41.2 dBA to 73.0 dBA; 30 low-, mid-, or high-frequency tonal/atonal noise samples with loudness levels ranging from 60
phon to 80 phon; and 60 other noise samples with A-weighted equivalent
sound pressure levels ranging from 40.7 dBA to 75.0 dBA. Laboratory listening tests were conducted on the above 3 sample groups respectively via
an 11-point numerical scale. Zwicker’s psychoacoustic annoyance model was improved by taking tonality into account and by using the evaluation results of the first noise sample group (the 1000 kV transformer noise samples) to determine the coefficients in the model. The applicability of the improved model was examined against the evaluation results of the other two groups, as well as data from a previous study on the annoyance of 220 kV/500 kV transformer noises. Results show that the improved model estimates the relative annoyance caused by various types of tonal and atonal noises much more accurately.
10:40–11:00 Break
11:00
3aPPb3. Clang, chitter, crunch: Perceptual organisation of onomatopoeia. Oliver C. Bones, William J. Davies, and Trevor J. Cox (Computing, Sci. and Eng., Univ. of Salford, 119 Newton Bldg., Acoust. Res. Ctr., Salford, Greater Manchester M5 4WT, United Kingdom, o.c.bones@salford.ac.uk)
A method has been developed that utilizes a sound-sorting and labeling
procedure, with correspondence analysis of participant-generated descriptive terms, to elicit perceptual categories of sound. Unlike many other methods for identifying perceptual categories, this approach allows for the
interpretation of participant categorization without the researcher prescribing descriptive terms. The work has allowed robust sound taxonomies to be
created, which give insight into categorical auditory processing of everyday
sounds by humans. Work on common audio search terms has highlighted
that onomatopoeia are an important group that has been largely overlooked
in quotidian sound studies. These are words for which the meaning of the
word maps onto the sound of the utterance; they are an example of sound symbolism, in which there is a non-arbitrary link between the form and the meaning of a word. Early analysis of the data suggests that people do draw on
sound symbolism to carry out the categorization, but that in addition they
also draw similarities between the inferred sound sources, such as organic
versus non-organic.
11:20
3aPPb4. Perception of environmental sounds: Recognition-detection
gaps. Andrzej Miskiewicz, Teresa Rosciszewska, and Jacek Majer (Dept. of
Sound Eng., Fryderyk Chopin Univ. of Music, Okolnik 2, Warsaw 00-368,
Poland, misk@chopin.edu.pl)
The study was conducted to assess the detection and recognition thresholds for 16 selected environmental sounds and determine the sound pressure
level difference between those thresholds, called the recognition-detection
gap (RDG). The sounds were recorded with a dummy head and played back
through headphones. Recognition and detection thresholds were measured
for two groups of listeners—musicians and non-musicians, in two conditions: in quiet and in the presence of multitalker masking noise added to the
signal. The results demonstrate that RDG considerably varies, depending on
the acoustic characteristics the sound, from about 2 to as much as 20 dB for
sounds that are particularly difficult to recognize. The difficulty with which
a listener recognizes the sounds, assessed on the basis of RDG, is also
reflected by the steepness of psychometric functions for recognition. The
present findings do not support the working hypothesis that musical training enhances the ability to recognize sounds at levels close to the detection threshold, when not all of their acoustic signatures are clearly audible, and that this enhancement manifests itself in smaller RDGs.
11:40
3aPPb5. The auditory experience of infants born prematurely. Brian B.
Monson (Dept. of Pediatric Newborn Medicine, Brigham and Women’s
Hospital, Harvard Med. School, 75 Francis St., Boston, MA 02115, bmonson@research.bwh.harvard.edu)
The third trimester of gestation is a time of rapid auditory and brain development, and auditory experience during this period affects brain development. For example, newborns exhibit auditory learning and memory for
acoustic stimuli frequently heard in utero. Preterm infants in the neonatal intensive care unit (NICU) during the third-trimester-equivalent period have a
vastly different auditory experience than their fetal peers in utero, but the
acoustic differences between these two environments are not well defined.
The goal of quantifying these differences is to better understand and even
predict their impact on auditory processing deficits exhibited by preterm
infants later in childhood. Here I will present data describing the acoustic
environment of the NICU, including noise sources, alarms, and sound levels
measured in a NICU in a Boston hospital. One striking finding is that, in
stark contrast to the intrauterine environment, periods of silence in the
NICU are abundant. The consequences of this atypical perinatal acoustic exposure on auditory and brain development are unknown.
12:00
3aPPb6. Psychoacoustic sonification for tracked medical instrument
guidance. Tim Ziemer (Spatial Cognition Ctr. (BSCC), Univ. of Bremen,
Neue Rabenstr. 13, Hamburg 20354, Germany, tim.ziemer@uni-hamburg.de) and David Black (Inst. for Medical Image Computing, Fraunhofer
MEVIS, Bremen, Germany)
In image-guided surgery, displays show a tracked instrument relative to
a patient’s anatomy. This helps the surgeon to follow a predefined path with
a scalpel or to avoid risk structures. A psychoacoustically motivated sonification design is presented to help assist surgeons in navigating a tracked
instrument to a target location in two-dimensional space. This is achieved
by mapping spatial dimensions to audio parameters that affect the magnitude of different perceptual sound qualities. Horizontal distance and direction are mapped to glissando speed and direction of a Shepard tone. The
vertical dimension is divided into two regions. Below the target, the vertical
distance controls the LFO speed of an amplitude modulation to create a regular beating well below the threshold of roughness sensation. Above the target elevation, the vertical deflection controls the depth of frequency
modulation to gradually increase the number and amplitudes of sidebands,
affecting perceived noisiness and roughness. This redundancy is necessary because the magnitude of any single sound quality can be distinguished only with low confidence. In a preliminary study, non-surgeons successfully
identified a target field out of 16 possible fields in 41% of all trials. The correct cardinal direction was identified in 84%. Based on findings and further
psychoacoustic considerations, the mapping range is optimized and an
implementation of an additional depth dimension is discussed.
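The two-region mapping described above can be sketched as a small parameter function. This is an illustrative reconstruction, not the study’s implementation; the function name, parameter ranges, and the 15 Hz LFO ceiling are all assumptions:

```python
def sonification_params(dx, dy, max_dist=1.0):
    """Map a 2-D instrument offset (dx, dy) from the target to sound
    parameters, following the mapping scheme in the abstract.

    dx: horizontal offset (sign encodes direction);
    dy: vertical offset (negative = below target, positive = above).
    All numeric ranges here are hypothetical.
    """
    # Horizontal: offset magnitude -> Shepard-tone glissando speed,
    # sign -> glissando direction.
    h = min(abs(dx) / max_dist, 1.0)
    params = {
        "glissando_speed": h,  # 0 (on target) .. 1
        "glissando_direction": "up" if dx > 0 else "down" if dx < 0 else "none",
    }
    v = min(abs(dy) / max_dist, 1.0)
    if dy < 0:
        # Below target: amplitude-modulation LFO rate produces a regular
        # beating, kept below the roughness region (< ~20 Hz).
        params["am_lfo_hz"] = 15.0 * v
        params["fm_depth"] = 0.0
    else:
        # Above target: frequency-modulation depth adds sidebands,
        # increasing perceived noisiness and roughness.
        params["am_lfo_hz"] = 0.0
        params["fm_depth"] = v
    return params

params = sonification_params(0.5, -0.5)
```

Because the below/above regions drive disjoint parameters (AM rate vs. FM depth), the listener can tell which side of the target the instrument is on even when the magnitude cue alone is ambiguous.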
TUESDAY MORNING, 27 JUNE 2017
ROOM 204, 9:15 A.M. TO 12:20 P.M.
Session 3aSAa
Structural Acoustics and Vibration: Energy Methods in Acoustics and Vibration I
Donald B. Bliss, Cochair
Mechanical Engineering, Duke University, 148B Hudson Hall, Durham, NC 27705
Linda P. Franzoni, Cochair
Dept. of Mech. Eng. and Materials Sci., Duke Univ., Box 90271, Durham, NC 27708-0271
Otto von Estorff, Cochair
Institute of Modelling and Computation, Hamburg University of Technology, Hamburg 21073, Germany
Chair’s Introduction—9:15
Invited Papers
9:20
3aSAa1. Modeling high frequency broadband acoustic fields inside rectangular and cylindrical enclosures using an energy-intensity boundary element method. Donald B. Bliss, David Raudales (Mech. Eng., Duke Univ., 148B Hudson Hall, Durham, NC 27705,
dbb@duke.edu), Krista A. Michalis (Naval Surface Warfare Ctr. Carderock Div., West Bethesda, MD), and Linda P. Franzoni (Mech.
Eng., Duke Univ., Durham, NC)
Accurate prediction of high-frequency broadband acoustic fields inside enclosures with curved boundaries typically requires time-consuming frequency-by-frequency numerical techniques. A quick solution is developed for general enclosure shapes using an energy-intensity boundary element method, previously tested for rectangular geometries and now extended to cylinders with flat or spherical
endcaps. Derived from first principles, which are reviewed, the approach uses uncorrelated spreading energy-intensity boundary sources
to directly solve for the steady-state mean-square pressures. The enclosure boundary is discretized into radiating panels that account for
interior energy transfer and satisfy prescribed reflection and absorption boundary conditions. Half-space orthogonal spherical harmonics
serve as basis functions to emulate both diffuse and specular reflections. Panel interactions are calculated by an energy-accurate quadrature technique and studied for curved boundaries. Computationally intensive benchmark solutions are developed for verification. A fully
correlated Helmholtz solution is derived for the cylindrical enclosure using a novel internal scattering approach to calculate high frequency pressure, and numerically integrated over a third-octave band. Comparisons between the fully correlated solution and the energy
method for enclosures of various aspect ratios reveal excellent agreement. Simulations are also presented contrasting interesting behavioral differences between diffuse and specular reflection fields.
9:40
3aSAa2. Radiosity and radiative transfer in sound and vibration. Alain Le Bot (CNRS - Ecole centrale de Lyon, Ecole centrale de
Lyon 36, av. Guy de Collongue, Ecully 69134, France, alain.le-bot@ec-lyon.fr)
At high frequencies in acoustics, the most popular method is ray-tracing and its variants, including radiosity. In structural vibration, the best-known method is statistical energy analysis [1]. Both methods may be derived from a single approach based on radiative transfer equations analogous to radiative heat exchange [2]. In this study, we present an overview of the radiative transfer equations in sound and vibration. We first show that radiosity is equivalent to ray-tracing with Lambertian reflection. Under steady-state conditions, radiosity is strictly equivalent to the view-factor method of heat transfer. Under transient conditions, however, radiosity provides an elegant means of predicting reverberation beyond the validity of Sabine’s law. The theory is also well suited for structural rays in built-up
structures. The radiative transfer equations account for reflection and transmission at interfaces of structural components. Sound radiation may also be described in the limit of high frequencies by this approach. It is also shown that a radiative transfer equation may
include diffraction in a simple way. Simple and multiple diffraction by corners can be predicted. For all these phenomena, some numerical examples are presented to illustrate the relevance of the approach. [1] Foundation of statistical energy analysis in vibroacoustics, A.
Le Bot. Oxford University Press, Oxford UK, 2015. [2] High frequency vibroacoustics: a radiative transfer equation and radiosity based
approach, A. Le Bot, E. Reboul, Wave Motion, 2014.
10:00
3aSAa3. A regression-based energy method for predicting structural vibration and interior noise. Shung H. Sung (SHS Consulting, 4178 Drexel Dr., Troy, MI 48098, ssung@asme.org) and Donald J. Nefske (DJN Consulting, Troy, MI)
A regression-based energy method is developed for predicting the structural vibration and interior noise for prescribed loads applied
to a structural-acoustic enclosure subject to differences in the structural or acoustic design. The formulation is based on the energy transfer functions that relate the applied load energy to the structural or acoustic response energy. The energy transfer functions are determined from a statistical regression analysis of the measured or predicted multiple responses that result from the differences in the
structural or acoustic design. The applied load energy is determined analytically or experimentally for prescribed loading conditions.
The energy method can then be used to estimate the mean-value and variation of the structural or acoustic response for different structural or acoustic designs and various prescribed loading inputs. A simple tube-mass-spring-damper system terminated with absorption
material with variation is presented as an example. The practical application of the method to estimate the interior noise in an automotive
vehicle for road and aerodynamic loads at different speeds is then presented. Comparisons of the predicted versus measured mean-value
and variation of the sound pressure response show reasonable agreement. The methodology is generally applicable for rapidly estimating
the structural or acoustic response for different designs and various loading conditions.
10:20
3aSAa4. Predicting the variance of the frequency-averaged energetic response in hybrid finite element—Statistical energy analysis. Edwin Reynders (KU Leuven, Kasteelpark Arenberg 40, Leuven 3001, Belgium, Edwin.Reynders@bwk.kuleuven.be) and Robin S.
Langley (Univ. of Cambridge, Cambridge, United Kingdom)
In this contribution, the hybrid finite element-statistical energy analysis method is extended such that not only the mean and the ensemble variance of the harmonic system response can be computed, but also the ensemble variance of the frequency band-averaged system response. The computed variance represents the uncertainty that is due to the assumption of a diffuse field in components of the
hybrid system. The developments start with a cross frequency generalization of the diffuse field reciprocity relationship between the total
energy in a diffuse field and the cross spectrum of the external loading. By making extensive use of this generalization in a first-order
perturbation analysis, explicit expressions are derived for the variance of the vibrational energies in the diffuse components and for the
variance of the cross spectrum of the response of the deterministic components. These expressions are extensively validated against
Monte Carlo analyses of systems consisting of connected plates, in which diffuse fields are simulated by randomly distributing small
concentrated masses, acting as wave scatterers, across the diffuse components.
10:40
3aSAa5. High frequency analysis of a point-coupled parallel plate system. Dean R. Culver and Earl Dowell (Duke Univ., 12 Prestwick Pl., Durham, NC 27705, culver.dean@gmail.com)
The RMS response of various points in a system comprised of two parallel plates coupled at a point undergoing high frequency,
broadband transverse point excitation of one component is considered. Through this prototypical example, Asymptotic Modal Analysis
(AMA) is extended to two coupled continuous dynamical systems. It is shown that different points on the plates respond with different
RMS magnitudes depending on their spatial relationship to the excitation or coupling points in the system. The ability of AMA to accurately compute the RMS response at these points (namely the excitation point, the coupling points, and the hot lines through the excitation or coupling points) is shown. The behavior of three representative configurations of the parallel plate system is examined: two similar plates (in both geometry and modal density), two plates with similar modal density but different geometry, and two plates with similar geometry but different modal density. After examining the error of reduced modal methods (such as AMA) relative to Classical Modal Analysis (CMA), it is determined that these methods are valid for each of these scenarios. The data from the
various methods will also be useful in evaluating the accuracy of other methods including SEA.
11:00
3aSAa6. General thermodynamics of vibrating systems. Antonio Carcaterra (Dept. of Mech. and Aerosp. Eng., Sapienza, Univ. of
Rome, Via dei Velfra, Tarquinia, VT 01016, Italy, carcaterra.antonio@gmail.com)
This paper introduces a general view of a generalized thermodynamic theory for vibrating systems, with special emphasis on vibration and acoustics applications. One of its foundations is the temperature concept for Hamiltonian systems, used to describe the energy flow between two coupled sub-systems. As a result, a general and rigorous method for the energy analysis of linear and nonlinear systems is disclosed, with potential applications both in theoretical mechanics and in engineering vibroacoustics and Statistical Energy Analysis. A rigorous mathematical foundation for this important physical and engineering problem is provided by the introduction of Khinchin’s entropy. The analysis shows that, under (i) linearity, (ii) weak coupling, and (iii) close-to-equilibrium conditions, a Fourier-like heat transmission law is obtained, in which the thermodynamic temperature is proportional to the modal energy of the system, i.e., the ratio of its total energy to its number of degrees of freedom. Generalized results, both for large shocks and for nonlinear systems, are derived in closed form for weak anharmonic potentials, showing that in this case the temperature depends on a series of integer and fractional powers of the system’s modal energy. Finally, a generalized statistical energy analysis of nonlinear systems is outlined.
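Schematically, and in notation assumed here rather than taken from the paper, the close-to-equilibrium result can be written as

```latex
T_i = \frac{E_i}{N_i}, \qquad P_{1 \to 2} = \beta \, (T_1 - T_2),
```

where $T_i$ is the vibrational "temperature" of subsystem $i$ (its total energy $E_i$ divided by its number of degrees of freedom $N_i$), $P_{1 \to 2}$ is the power flowing from subsystem 1 to subsystem 2, and $\beta$ is a coupling conductivity; this Fourier-like form recovers the standard power-balance relation of Statistical Energy Analysis.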
11:20
3aSAa7. A boundary element based method for accurate prediction of the surface pressure cross-spectral density matrix. Jerry W. Rouse (Structural Acoust. Branch, NASA Langley Res. Ctr., 2 North Dryden St., MS 463, Hampton, VA 23681, jerry.w.rouse@nasa.gov)
Accurate prediction of the surface pressure cross-spectral density matrix is necessary to predict the dynamic response of a structure
loaded by a diffuse acoustic field. The cross-spectral density matrix describes the frequency dependence of the correlation between the
surface pressure at all pairs of points on the structure. Most often the cross-spectral density matrix is obtained from either a uniform distribution of incident plane waves or direct application of the diffuse field spatial cross-correlation function. While the method of plane
waves is relatively more accurate, especially at low frequencies, the necessary distribution of incidence angles and ensemble size can be
problematic. This talk shall present a boundary element based methodology for determining the surface pressure cross-spectral density
for any structure and frequency, including the effects of scattering and shielding. The method involves a power spectral density formulation of the boundary element method and takes advantage of the underlying foundations in potential theory. The method can be generalized beyond diffuse fields and can be applied to structures having a known surface impedance.
Contributed Papers
11:40
3aSAa8. Absorption scaling theory applied to an energy-intensity boundary element method for prediction of broadband acoustic fields in enclosures. David Raudales and Donald B. Bliss (Mech. Eng., Duke Univ., Edmund T. Pratt Jr. School of Eng., Box 90300 Hudson Hall, Durham, NC 27708, david.raudales@duke.edu)
Insight into the acoustic energy distribution in enclosures is gained by applying Absorption Scaling Theory. Broadband high-frequency acoustic fields within 3D rectangular enclosures are modeled with an energy-intensity boundary element method (EIBEM) that replaces the enclosure boundary with a distribution of broadband uncorrelated directional intensity sources to simulate either diffuse or specular reflection. Assuming a highly reflective enclosure, the boundary panel strengths are expanded in a power series with the spatially-averaged absorption as a small parameter. For diffuse reflection fields, where the theory is well developed, a matrix formulation is derived for each of the expansion coefficients, which must be solved in sequential order. The leading-order term in the expansion is inversely proportional to the scaling parameter and estimates the average level inside the enclosure; the next term gives spatial variation independent of the average absorption level, while higher-order terms account for the spatial variation of energy due to the distribution of absorption and the location of sources. For a highly reflective enclosure, only a few terms are needed to accurately predict the mean-square pressures. Similar behavior is demonstrated for specular reflection using numerical simulations. Applications include theoretical and empirical enclosure design and assessment, and solving the inverse problem.
12:00
3aSAa9. Sound radiation efficiency of unbaffled plates using an edge source integral equation. Kiran C. Sahu, U. Peter Svensson (Dept. of Electron. Systems, Acoust. Res. Ctr., Norwegian Univ. of Sci. and Technol. (NTNU), Trondheim NO-7491, Norway, kiran.sahu@ntnu.no), and Sara R. Martín (Ctr. for Comput. Res. in Music and Acoust. (CCRMA), Stanford Univ., Stanford, CA)
The accurate prediction of sound radiation from unbaffled vibrating plates remains a challenging problem. Finite structures cause the sound waves to diffract off and around the edges, an effect that is particularly strong at low frequencies. This phenomenon can be modeled with an edge source integral equation (ESIE) [A. Asheim, U. P. Svensson, J. Acoust. Soc. Am. 133 (2013) 3681-3691]. The modeling is based on a separation of the radiated sound into a geometrical-acoustics (GA) term, which equals the infinite-baffle solution, and diffraction of first and higher orders. Expressions for the GA term and first-order diffraction are available explicitly, whereas higher-order diffraction is calculated through the solution of an integral equation. We present a combination of time- and frequency-domain modeling which is particularly efficient. In this study, the sound radiation efficiency is targeted, so only the sound field at the plate is computed. Thereby, the numerically challenging receiver positions of the ESIE method at visibility-zone boundaries are avoided. The results of the present study are compared with published results [A. Putra, D. J. Thompson, Applied Acoustics 71 (2010) 1113-1125], and close agreement is found for a number of vibration modes.
TUESDAY MORNING, 27 JUNE 2017
ROOM 201, 10:40 A.M. TO 12:20 P.M.
Session 3aSAb
Structural Acoustics and Vibration, Physical Acoustics and Engineering Acoustics:
Acoustic Metamaterials III
Christina J. Naify, Chair
Acoustics, Naval Research Lab, 4555 Overlook Ave. SW, Washington, DC 20375
Invited Papers
10:40
3aSAb1. Effects of visco-thermal losses in metamaterial slabs based on rigid building units. Vicente Cutanda Henríquez (Dept. of Elec. Eng., Tech. Univ. of Denmark, Ørsteds Plads, Bldg. 352, Kgs. Lyngby 2800, Denmark, vcuhe@elektro.dtu.dk), Victor Manuel Garcia-Chocano, and Jose Sanchez-Dehesa (Wave Phenomena Group, Universitat Politècnica de València, Valencia, Spain)
Potential applications of negative-index acoustic metamaterials are strongly limited by absorptive effects of different origin. In this
context, we present an investigation of the visco-thermal effects on the acoustic properties of double-negative metamaterials based on
specifically designed rigid units with subwavelength dimensions. It is shown that visco-thermal losses dissipate about 70% of the acoustic energy associated with the excitation of monopolar and dipolar resonances, leading to the suppression of the negative refractive index. Our
numerical simulations based on the Boundary Element Method (BEM) are in excellent agreement with recent experimental data showing
the quenching of the double-negative transmission peak. The BEM numerical model, which has been specifically adapted to this purpose, has also been validated against an equivalent Finite Element Method model. We also present the results and discuss the differences
of visco-thermal effects on monopolar resonances leading to negative bulk modulus metamaterials, and Fabry-Perot resonances in metamaterial slabs.
11:00
3aSAb2. Non-reciprocal sound propagation in zero-index metamaterials. Li Quan, Dimitrios Sounas, and Andrea Alù (The Univ. of Texas at Austin, 1616 Guadalupe St., UTA 7.215, Austin, TX 78701, alu@mail.utexas.edu)
Moving media have recently attracted attention for their ability to break reciprocity without magnetic materials. By spinning air in an acoustic cavity, it was recently shown that it is possible to realize an acoustic circulator [R. Fleury, D. Sounas, A. Alù, Science 343, 516 (2014)], with applications for sonars and medical imaging devices. Here we show that the non-relativistic Fresnel-Fizeau effect at the basis
of these mechanisms can be boosted in zero-index acoustic metamaterials, due to their large phase velocity. This is a different scenario
than resonant structures, where the Fresnel-Fizeau effect is boosted by the effectively large wave-matter interaction distance, even for a large intrinsic refractive index of the moving medium. Our results open a new avenue for the use of zero-index metamaterials, and can become practically important in the realization of non-reciprocal acoustic imaging systems with built-in isolation and protection from reflections.
Contributed Papers
11:20

3aSAb3. Nonlinear metamaterial from piecewise discontinuous acoustic properties. Alexey S. Titovich (Naval Surface Warfare Ctr., Carderock Div., 9500 MacArthur Blvd., West Bethesda, MD 20817, alexey.titovich@navy.mil)

Numerous metamaterials have recently been designed which exhibit negative effective density and/or bulk modulus, leading to extraordinary wave-bearing capabilities. This research increases the design space by considering displacement-dependent properties which are discontinuous over a period of oscillation. Such nonlinear behavior has been studied in dynamical systems with intermittent contact. This work applies those analytical results together with numerical methods to provide insight into the harmonic and subharmonic generation in acoustic metamaterials with properties differing from compression to rarefaction. Stability is analyzed, providing criteria for the onset of chaotic behavior.

11:40

3aSAb4. Nonlinearity based wave redirection for acoustic metamaterials. Saliou Telly (Mech. Eng., Univ. of Maryland College Park, 14359 Long Channel Dr., Germantown, MD 20874, stelly@umd.edu) and Balakumar Balachandran (Mech. Eng., Univ. of Maryland College Park, College Park, MD)

Advances in metamaterials have revealed novel opportunities for controlling wave propagation paths for various applications not realizable with conventional materials. Some prominent examples are schemes for electromagnetic and acoustic cloaking and focusing devices. In the classical approach to the formulation of these devices, one exploits a change of physical coordinates to achieve a desired wave behavior within a finite space. Such a change can be interpreted as a transformation of material properties when the field equations of interest are invariant to coordinate transformations. With regard to acoustics, this approach is constrained to fluid-like metamaterials amenable to the propagation of longitudinal waves only. Complications arise with solid materials because of their inherent ability to sustain both longitudinal and transverse waves, which refract differently in linear isotropic materials because of dissimilar propagation speeds.
3698
J. Acoust. Soc. Am., Vol. 141, No. 5, Pt. 2, May 2017
Acoustics ’17 Boston
3698
In this work, the authors explore wave redirection mechanisms that may
take advantage of nonlinear wave propagation phenomena in a solid metamaterial. Starting from the classical nonlinear Murnaghan model, a hyperelastic material is formulated to realize coupling between shear and compressional modes that could lead to a more suitable refractive behavior for
acoustic wave redirection in a solid metamaterial. The formulated model is
studied using analytical and numerical tools.
12:00
3aSAb5. Development of the underwater acoustic prism. Katherine F.
Woolfe, Jeffrey S. Rogers, Matthew D. Guild, Charles Rohde (Naval Res.
Lab., 4555 Overlook Ave. SW, Washington, District of Columbia, katherine.
woolfe@gmail.com), Christina J. Naify (Jet Propulsion Lab., Pasadena, CA),
and Gregory Orris (Naval Res. Lab., Washington, District of Columbia)
The acoustic prism (i.e., leaky wave antenna) has been experimentally
demonstrated in air as a way to steer an emitted beam using only a single
broadband acoustic source. The prism relies on a leaky, dispersive waveguide to provide a unique radiation angle for each narrowband frequency
projected by the acoustic source. In air, the leakage occurs through a series
of periodically spaced shunts in the waveguide. This study examines an acoustic prism design that is capable of operating underwater, where leakage occurs through the waveguide wall itself due to the much lower impedance contrast of the waveguide material in water compared to that in air. This results in a geometrically simpler design in the underwater case. However, shear
wave effects must be considered in the design of the underwater acoustic
prism. The waveguide wall is constructed out of a composite material to
have a high impedance but a low shear modulus, which are both necessary
conditions to decrease sidelobes in the radiated pressure field. Numerical
results indicate that the acoustic prism design is capable of scanning a range
of frequencies from broadside to forward endfire. Experimental realization
of the underwater acoustic prism is also discussed. [Work sponsored by
ONR.]
TUESDAY MORNING, 27 JUNE 2017
BALLROOM A, 9:20 A.M. TO 12:20 P.M.
Session 3aSC
Speech Communication: Prosody (Poster Session)
Steven M. Lulich, Chair
Speech and Hearing Sciences, Indiana University, 4789 N White River Drive, Bloomington, IN 47404
All posters will be on display from 9:20 a.m. to 12:20 p.m. To allow contributors in this session to see the other posters, authors of odd-numbered papers will be at their posters from 9:20 a.m. to 10:50 a.m. and authors of even-numbered papers will be at their posters from
10:50 a.m. to 12:20 p.m.
Contributed Papers
3aSC1. Reading aloud: Acoustic differences between prose and poetry.
Filip Nenadic and Benjamin V. Tucker (Dept. of Linguist, Univ. of AB,
1617 8515 112 St. NW, Edmonton, AB T6G 1K7, Canada, nenadic@ualberta.ca)
Research on silent reading has shown that text genre influences the way
texts are read, including differences between prose and poetry (e.g., Zwaan,
1994; Hanauer, 1998). There is little data examining whether text layout (prose vs. poetry) affects the way a text is read aloud by non-expert readers and, if so, how readers express those differences acoustically. Native speakers
of Serbian (N = 28) and English (N = 37) read aloud twenty short texts in
their native language. Stimuli were original texts that were acceptable as both
prose and poetry, written by young published authors. Each text was formatted in four layouts (prose left-aligned, prose justified, a single stanza, and verses in multiple stanzas). Each participant saw each text in only one of these layouts. Separate mixed-effects logistic regression analyses were performed for
each language, testing whether prose vs. poetry layouts influenced the silent
period and utterance duration, pitch, and intensity values of the productions.
Differences and similarities between reading prose and poetry and between
Serbian and English participants are discussed.
3aSC2. Focus effects on acoustic cues to sibilant place of articulation.
Yung-hsiang Shawn Chang (Dept. of English, National Taipei Univ. of
Technol., Zhongxiao E. Rd., Sec. 3, No. 1, Taipei 106, Taiwan, shawnchang@ntut.edu.tw)
Prosodically driven contrast enhancement has been reported for vowels
and consonantal voicing, but limited evidence of such prosodic strengthening
effects has been found for consonantal place of articulation (e.g., Cole et al.
2007, Silbert & de Jong 2008). Chang and Shih (2015) extended this line of investigation to the Mandarin alveolar-retroflex contrast. They had participants disambiguate an alveolar or retroflex syllable with phonologically unrelated syllables (e.g., contrasting /sa/ with /bo/) and found no focus enhancement of the alveolar-retroflex contrast. The current study investigated whether employing a smaller contrastive focus domain (i.e., disambiguating contrastive sibilants, such as contrasting /sa/ with /ʂa/) would give rise to sibilant
hyperarticulation. Map tasks with stimuli that were vowel context-balanced
and lexical frequency-controlled were used for elicitation of focused and unfocused productions of Mandarin alveolars and retroflexes. Results showed that
contrastive focus results in adjustments of non-contrastive properties (i.e., longer syllable and frication duration, as well as higher frication amplitude) without enhancing the feature-defining dimension (i.e., a greater acoustic distance
between alveolar and retroflex sibilants). Along with evidence from English /s/ and /ʃ/ in Silbert & de Jong (2008), it is suggested that the place contrast of
coronal sibilants is less subject to cue-enhancing hyperarticulation.
3aSC3. Prosodic characteristics of speech directed to adults and to
infants with and without hearing impairment. Laura Dilley, Elizabeth
Wieland, Evamarie Burnham (Michigan State Univ., Dept. of Communicative Sci. and Disord., East Lansing, MI 48824, ldilley@msu.edu), Yuanyuan Wang,
Derek Houston (The Ohio State Univ., Columbus, OH), Maria V. Kondaurova (Univ. of Louisville, Louisville, KY), and Tonya Bergeson (Indiana
Univ. School of Medicine, Indianapolis, IN)
Infant-directed (ID) and adult-directed (AD) speech are distinguished via multiple acoustic-prosodic characteristics, but it is unclear how these differences map onto linguistic constructs, including pitch accents, prominences, and phrasal boundaries, or how a child’s hearing impairment affects
caregiver prosody. In two studies, trained analysts coded prosody in corpora
of mothers reading to their children (ID condition) or another adult (AD
condition). In Study 1, 48 mothers read a storybook to their infants aged 3,
9, 13, or 20 months or an experimenter. In Study 2, 11 mothers read a storybook to their child with a cochlear implant at 3 months post-implantation or
to an experimenter; each hearing-impaired child was paired with two normal-hearing dyads based on the hearing-impaired child’s chronological age and
amount of hearing experience. ID speech contained a greater density of
pitch accents and prominences than AD speech. There was no difference in
distributions of phrase boundaries across speech styles, and hearing status
did not mediate effects of speech style on prosody. Results suggest that
acoustic differences distinguishing ID and AD speech map onto combined
phonological structural and gradient paralinguistic characteristics and contribute to understanding effects of child hearing loss on caregiver input.
[Work supported by NIH Grant 5R01DC008581-07.]
3aSC4. The perception of speech rate in non-manipulated, reversed and
spectrally rotated speech reveals a subordinate role of amplitude envelope information. Volker Dellwo and Sandra Schwab (Phonet. Lab., Universitaet Zurich, Plattenstrasse 54, Phonet. Lab., Zurich 8005, Switzerland,
volker.dellwo@uzh.ch)
Previous research suggests that the broad-band amplitude envelope
(ENV) of speech is crucial for the perception of speech rate and timing. The
present experiment tested this claim using non-manipulated speech, spectrally rotated speech (rotated around 2.5 kHz, with a bandwidth of 5 kHz), both of which contain identical ENV, and reversed speech, in which the temporal organisation of ENV is distorted. 44 listeners of Swiss German rated perceived speech tempo on a continuous scale ranging from “rather slow” to “rather fast” in 48 stimuli (4 sentences × 4 speakers × 3 signal conditions).
Results revealed a significant effect of signal condition. Both reversed and
spectrally rotated speech were perceived as significantly faster than clear
speech but there was no difference between spectrally rotated and reversed
speech. Results were consistent for all sentences and speakers. Results suggest that the intelligibility of the signal plays a greater role in the perception of speech rate than the presence of the ENV.
3aSC5. Spectro-temporal cues for perceptual recovery of reduced syllables from continuous, casual speech. Laura Dilley, Meisam K. Arjmandi,
and Zachary Ireland (Dept. of Communicative Sci., Michigan State Univ.,
East Lansing, MI 48824, ldilley@msu.edu)
Function words may be highly reduced, with little to no discontinuity
marking their onsets to cue their segmentation from continuous speech. The
present study investigated whether reduced function words lacking onset discontinuities have residual timing cues that could be used for word segmentation. Participants (n = 51) briefly viewed sentences and spoke them from
memory to elicit casual speech. They were randomly assigned to either a
“function-word present” condition (n = 29) in which experimental items contained a critical function word expected to frequently blend spectrally with
context, or a “function-word absent” set (n = 22) with phonetically matched
items lacking the critical word. Acoustic analyses confirmed that in
“function-word present” sentences, critical words lacked detectable onset discontinuities 60% of the time. Critically, in the “function-word present” condition, portions of speech containing critical function words were longer, both
in terms of absolute duration and normalized for context speech rate, compared with matched portions in the “function-word absent” condition, even
when the former were highly reduced and lacked onset discontinuities. These
findings suggest that relative duration cues provide substantial information
which may be used by listeners for segmentation of highly reduced syllables
from continuous speech. [Work supported by NSF Grant BCS-1431063.]
3aSC6. A model of Mandarin Chinese question intonation. Edward
Flemming (Linguist & Philosophy, MIT, 77 Massachusetts Ave., 32-D808,
Cambridge, MA 02139, flemming@mit.edu)
Echo questions in Mandarin Chinese provide an interesting case study
of the interaction between lexical tone and intonation because it appears that
echo questions are distinguished from declaratives by modifying the F0 trajectory of the sentence-final lexical tone in ways that depend on the identity
of that tone. The most general effect is that the offset of the final tone is
higher in a question than in a declarative rendition of the same sentence. If
the final tone is high or falling, F0 is raised throughout the tone, but if the
final tone if low or rising, the F0 minimum is not raised in questions. We
propose a quantitative analysis according to which question intonation consists of a high boundary tone realized simultaneously with the offset of the
final lexical tone. Compromise between the boundary tone and targets for
the lexical tone results in raising of the offset of the tone. Additional effects
result from interactions with other targets pertaining to the shape of the lexical tone. For example, simply raising the offset of a falling tone would result
in failure to realize a sufficiently steep fall, so the onset of the fall is raised
as well.
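The compromise mechanism described above can be sketched with a generic weighted-target formulation (the quadratic form and the weights w_T and w_H are illustrative assumptions, not necessarily the model's exact terms): if the final lexical tone specifies an offset target T and the question boundary tone specifies a higher target H, the realized offset f* minimizes

    w_T (f - T)^2 + w_H (f - H)^2,

giving

    f* = (w_T T + w_H H) / (w_T + w_H),

a value between T and H that moves toward H as the boundary tone's weight grows, which reproduces the raised tonal offset observed in questions.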
3aSC7. Prosodic cues to psychosis risk. Emily Cibelli, Jennifer Cole (Linguist, Northwestern Univ., 2016 Sheridan Rd., Evanston, IL 60208, emily.
cibelli@northwestern.edu), Vijay Mittal (Psych., Northwestern Univ., Evanston, IL), and Matthew Goldrick (Linguist, Northwestern Univ., Evanston,
IL)
Schizophrenia is known to impact prosody, often described as “flat
affect.” Individuals with schizophrenia show reduced F0 variability relative
to neurotypical individuals (Rapcan et al. 2010). However, the speech of
adolescents at high risk for psychosis in the prodromal (pre-diagnosis) stage
has not been investigated, leaving open the question of whether speech
prosody might be an early signal of symptoms associated with psychotic
disorders. To investigate this issue, the speech of 18 ultra high-risk (UHR)
youth (ages 16-21) was compared to 18 age- and gender-matched controls.
F0 (pre-processed for smoothing and error correction) was extracted from
10-minute segments of speech recorded during clinical interviews. Using
LDA classification, F0 summary statistics (mean and variability) separated
male UHR and control speakers (69% accuracy) but not female speakers
(42% accuracy), consistent with gender differences in psychosis onset during adolescence (Ochoa et al., 2012). Linear models of symptoms measured by the Structured Interview for Prodromal Syndromes (Miller, 2003) found
that F0 mean and variability predicted negative symptoms in UHR speakers
when controlling for age and gender. These results suggest that prosodic
markers documented in individuals with psychosis may also be present in
prodromal populations, pointing to speech as a potential biomarker for psychosis risk.
3aSC8. Loudness trumps pitch in politeness judgments: Evidence from
Korean. Kaori Idemaru, Lucien Brown (East Asian Lang. and Literatures,
Univ. of Oregon, Eugene, OR 97403, idemaru@uoregon.edu), Bodo Winter
(Univ. of Birmingham, Merced, California), and Grace E. Oh (Konkuk
Univ., Seoul, South Korea)
Politeness is a vital aspect of everyday life that is receiving increased
attention in sociophonetic research. The current study investigated how
deferential and intimate stances, examples of politeness-related expressions, are conveyed by phonetic cues in Korean. Previously, we found
that Korean listeners can distinguish these stances based on speech acoustics alone. The current study manipulated fundamental frequency (F0) and
intensity of spoken Korean utterances to investigate the specific role of
these cues in politeness judgments. Across three experiments with a total
of 63 Korean listeners, we found that intensity reliably influenced politeness judgments, but F0 did not. An examination of individual differences
revealed that all listeners interpreted deferential stances to be associated
with low intensity: quiet utterances were perceived as deferential. On the
other hand, the interpretation of F0 varied across listeners: some perceived high-pitched utterances as deferential and others perceived low-pitched utterances as deferential. These results present a challenge to the
Frequency Code as a universal principle underlying politeness phenomena. The results also indicate that perception does not perfectly mirror
production in politeness expressions in Korean, since previous production
studies have reliably found low pitch to be associated with deferential
stances.
3aSC9. Uptalk in Northern Irish English. Anna Jespersen (Dept. of English, Aarhus Univ., Vesterport 7, 105, Aarhus C 8000, Denmark, anna.jespersen@cc.au.dk)

This paper presents work in progress on the phonetic realization of
uptalk rises in Northern Irish English, a variety which is well-known for
another type of rising intonation, the rise-plateau(-slump) (Cruttenden 1997;
Grabe 2002; Ladd 2008). However, a recent pilot study has shown that the
steeper and more steadily rising uptalk rise, which is mainly associated with
American and Antipodean Englishes, is now found not only in Southern
British English (cf. Arvaniti and Atkins 2016), but also in Northern Ireland.
For this study, 6 female speakers were recorded while taking part in a Map
Task and approximately 3 minutes of speech was examined. Intonational
rises were labeled using the IViE guidelines (see Grabe et al. 2000; Grabe
2002) and f0 measurements were taken at 10 intervals between the low starting points and peaks of each rise. Rises were then assigned to either the rise-plateau or uptalk categories according to the phonological label assigned and the steepness, height, and steadiness of the rise. This study thus provides confirmation that Northern Irish English speakers really use uptalk rises, and acoustic evidence of how these differ from the variety’s traditional rise-plateaux.
3aSC10. Segmental intonation in tonal and non-tonal languages. Maida
Percival and Kaz Bamba (Linguist, Univ. of Toronto, 100 St. George St., 4th
Fl., Toronto, ON M5S 3G3, Canada, maida.percival@mail.utoronto.ca)
The nature of edge intonational contours as well as the acoustics of fricatives have generally been discussed independently in the literature (Hughes & Halle 1956; Ladd 1996; Gussenhoven 2004, inter alia). Voiceless consonants were traditionally conceived as irrelevant to the study of utterance-level intonation and thought merely to interrupt pitch contours (Bolinger
1964). However, Niebuhr (2012) proposes that the two domains interact,
reporting that German fricatives exhibit relatively higher centre of gravity
(CoG) and higher acoustic energy in the context of rising intonation. This
phenomenon, known as segmental intonation, has been found in some languages (Polish, Zygis et al. 2014; Dutch, Heeren 2015), but remains controversial in others (English, Niebuhr p.c.). We test this hypothesis by
replicating the reading task in Niebuhr’s (2012) study for English and also
extending to a tonal language, Cantonese, in which F0 is used grammatically to distinguish words, in addition to intonation. Preliminary results
from 10 speakers suggest that segmental intonation in the form of higher
CoG and intensity exists in English, but not in Cantonese. With additional
data recently collected, we hope to confirm these findings and contribute to
determining whether segmental intonation is an epiphenomenon of speech production in general or not.
3aSC11. Recognition of emotional prosody in Mandarin: Evidence from
a synthetic speech paradigm. Cecilia L. Pak and William F. Katz (Commun. Sci. and Disord., Univ. of Texas at Dallas, 800 West Campbell Rd.,
Richardson, TX 75080, sxl083020@utdallas.edu)
Pitch-dominant information is reduced in the spectrally-impoverished signal transmitted by cochlear implants (CIs), leading to potential difficulties in perceiving voice emotion. However, this evidence comes from non-tonal languages such as English, in which pitch information is not required
for lexical meaning. In order to better understand how hearing impaired
(HI) speakers of a tone language with cochlear implants (CIs) process emotional prosody, an experiment was conducted with healthy normal hearing
(NH) Mandarin-speaking adults listening to synthetic stimuli designed to
resemble CI input. Listeners heard short sentences from a read-speech database produced by professional actors. Stimuli were selected to express four
emotions (“angry,” “happy,” “sad,” and “neutral”), under four conditions
which varied the lexical tones of Mandarin. Listeners heard natural speech
and three noise-vocoded speech conditions (4-, 8-, and 16-spectral channels)
and made a four-alternative, forced-choice decision about the basic emotion
underlying each sentence. Preliminary results indicate more accurate emotional prosody recognition for natural speech than for synthesized speech,
with greater accuracy for higher channel stimuli than lower channel stimuli.
The findings also suggest NH Mandarin-speaking listeners show lower overall vocal emotional prosody accuracy compared with previous studies of
non-tonal languages (e.g., English).
3aSC12. Proposal of description for an intonation pattern: The simulacrum of neutral intonation. Marcus Vinicius Moreira Martins and Waldemar Ferreira-Netto (Letras Classicas e Vernaculas, Univ. of São Paulo, Av. Professor Luciano Gualberto, 403 - Departamento de Letras Classicas e Vernaculas, São Paulo, São Paulo 05508-900, Brazil, marcusvmmartins@gmail.com)
The aim of this study is to describe the intonation pattern Simulacrum of Neutral Intonation (SNI), defined as the monotone speech produced by speakers with a prevalence of rhythm and a flat speech melody. As a first step, 71 recordings were separated by the gender of the subjects and their emotional
states: 9 anger male and female speech, 10 neutral male speech, 11 neutral
female speech, 11 sad male speech, 9 sad female speech, 7 SNI female
speech, and 5 SNI male speech. The 71 samples were analyzed in terms of 44 acoustical parameters established a priori and clustered using the parameters selected by Principal Component Analysis. The four final parameters of this analysis were as follows: (i) lower F0 value, (ii) lower dispersion of positive variations of Focus/Emphasis, (iii) lower mean interval between units of bearing of intonation, and (iv) lower median of these same intervals. These values show a possible characterization of an intonational speech register, called the Simulacrum of Neutral Intonation. The SNI register is similar to other speech registers, such as those used by speakers with psychic disorders or in stressful situations. These values confirm findings such as those reported in Ragin et al. (1989) and Thomas et al. (1990).
3aSC13. Word length modulates the effect of emotional prosody. Seung
Kyung Kim (Aix Marseille Univ, CNRS, LPL, 5 Ave. Pasteur, Aix en Provence 13100, France, kim.seungkyung@gmail.com)
Previous work has shown that emotional prosody, independent of the
lexical carrier, activates words associated with the emotional information
(Kim, 2015; Kim & Sumner, submitted). For example, hearing a non-emotional word (pineapple) uttered with angry prosody facilitates recognition of
angry-associated words (mad). Building on this finding, the current study
delves into the nature of the affective priming between emotional prosody
and emotional words and tests if word length modulates affective priming.
Word length is an important dimension in lexical processing, as longer
words are shown to produce stronger lexical activation than shorter words
(Pitt & Samuel, 2006). I hypothesize that social information shows a stronger effect in spoken word processing when lexical activation is weaker.
If so, we should find stronger affective priming with shorter words than with longer words. This hypothesis was tested with a cross-modal priming experiment. The visual targets were 12 angry-related words (e.g., mad, upset). The
targets were preceded by two-, three-, or four-syllable non-emotional primes
(e.g., atom, envelope, aluminum) spoken with angry prosody. Listeners recognized angry words faster after hearing angry prosody than after hearing
neutral prosody when the prime words were short (2 syllables) but not when
the prime words were longer (3-4 syllables). The current results provide evidence that social effects in word recognition are modulated by the strength
of lexical activation.
3aSC14. A cross-linguistic study of speech modulation spectra. Leo Varnet (Laboratoire des Systèmes Perceptifs, Institut d’Etude de la Cognition, ENS Paris, 29 rue d’Ulm, Paris 75005, France, leo.varnet@ens.fr), Maria C. Ortiz-Barajas, Ramón Guevara Erra, Judit Gervain (Laboratoire Psychologie de la Percept., Université Paris-Descartes, CNRS, Paris, France), and Christian Lorenzi (Laboratoire des Systèmes Perceptifs, Institut d’Etude de la Cognition, ENS Paris, Paris, Ile de France, France)
Languages show systematic variation in their sound patterns and grammars. Accordingly, they have been classified into typological categories such as stress-timed vs. syllable-timed on the basis of their rhythms, Head-Complement vs. Complement-Head on the basis of their basic word order, or tonal vs. non-tonal on the basis of the presence/absence of lexical tones.
To date, it has remained incompletely understood how these linguistic properties are reflected in the acoustic characteristics of speech in different languages. In the present study, the amplitude-modulation (AM) and
frequency-modulation (FM) spectra of 1862 utterances produced by 44
speakers in 12 languages were analyzed. Overall, the spectra were similar
across languages. However, a perceptually based representation of the AM
spectrum revealed significant differences between languages. The maximum
value of this spectrum distinguished between HC non-tonal, CH non-tonal,
and tonal languages, while the exact frequency of this maximum value differed between stress-timed and syllable-timed languages. Furthermore,
when normalized, the f0-modulation spectra of tonal and non-tonal languages also differed. These findings reveal that broad linguistic categories
are reflected in differences in temporal modulation features of different languages. This has important implications for theories of language processing
and acquisition.
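As an illustrative aside, an amplitude-modulation spectrum of the kind analyzed here can be computed as the spectrum of the broad-band Hilbert envelope. This numpy sketch uses a synthetic 4 Hz-modulated tone rather than speech, and the perceptually based representation the study applies is omitted:

```python
import numpy as np

def am_spectrum(x, fs):
    """Amplitude-modulation spectrum: FFT of the broad-band Hilbert envelope."""
    n = x.size
    # analytic signal via the FFT (numpy-only equivalent of scipy.signal.hilbert)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:(n + 1) // 2] = 2.0
    if n % 2 == 0:
        h[n // 2] = 1.0
    env = np.abs(np.fft.ifft(X * h))   # amplitude envelope
    env = env - env.mean()             # discard the DC component
    spec = np.abs(np.fft.rfft(env)) / n
    freqs = np.fft.rfftfreq(n, 1.0 / fs)
    return freqs, spec

# Toy "utterance": a 500 Hz carrier amplitude-modulated at 4 Hz,
# roughly the syllable-rate modulation the study analyzes.
fs = 8000
t = np.arange(2 * fs) / fs
x = (1.0 + 0.8 * np.sin(2 * np.pi * 4 * t)) * np.sin(2 * np.pi * 500 * t)

freqs, spec = am_spectrum(x, fs)
peak = freqs[spec.argmax()]
print(peak)   # the modulation spectrum peaks at the 4 Hz AM rate
```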
3aSC15. Effects of dialect and syllable onset on Serbian tone timing. Robin P. Karlin (Linguist, Cornell Univ., 103 W Yates St., Ithaca, NY 14850,
karlin.robin@gmail.com)
In recent years, the c-center effect has been posited as the main coordinative structure for lexical tone. This makes two predictions: (1) the timing of tone gestures is affected by syllable onset complexity, and (2) tone gestures will start relatively late compared to the first onset consonant. However, little work has investigated these predictions, thus far focusing on languages with few word-initial clusters. In this acoustic study, I examine two dialects of Serbian, a pitch-accent language that allows word-initial sonorant-sonorant clusters such as /ml/. In both dialects, the H(igh) tone excursion starts later in syllables with complex onsets, in accordance with prediction (1), and is suggestive of a c-center structure. The dialect comparison addresses the
second prediction, as Valjevo Serbian has early peaks, and Belgrade Serbian
has late peaks. The results show that the early peaks in Valjevo Serbian are
due to a combination of both earlier H onset and shorter pitch excursion,
which suggests that at least one of the dialects is not using a c-center structure. Based on these results, a model of tonal representation is proposed that
has implications for the possible coordinative structures of lexical tone.
3aSC16. Imitation of F0 timing in Mandarin disyllabic sequences. Hao
Yi (Dept. of Linguist, Cornell Univ., 307 Columbus Ave., APT 9, Syracuse,
NY 13210, hy433@cornell.edu) and Sam Tilsen (Dept. of Linguist, Cornell
Univ., Ithaca, NY)
This study investigates the control of relative timing between tones and
segments in Mandarin Chinese. Thirty native Mandarin speakers participated in an experiment in which they imitated the variation in a disyllabic,
bi-tonal sequence, Tone2 + Tone2 (rising + rising). The stimuli varied parametrically in the relative timing of F0 turning points with respect to the segment boundary. The variation occurred either within the first syllable or
between the two syllables. The results show that within the first syllable,
speakers did not imitate the variation in the relative timing patterns. However, across syllable boundaries, such parametric variation leads to more
faithful imitations in terms of the relative timing of F0 turning points.
Therefore, native Mandarin speakers are more sensitive to variation in the
relative timing patterns across syllable boundaries than within the first syllable. This shows that the control over the relative timing between F0 gestures
and articulatory gestures within the first syllable is more stable than across
syllable boundaries. We argue this is because final lengthening of the second
syllable can provide an additional co-selection set with which the Low tone
gesture can be associated.
3aSC17. Listener adaptation to lexical stress misplacement. Maho Morimoto (Linguist, Univ. of California, Santa Cruz, 1156 High St., Santa Cruz,
CA 95064, mamorimo@ucsc.edu)
Speech including unfamiliar accents can result in decreased processing
efficiency. However, listeners are able to overcome the difficulty of processing accented speech with adequate exposure, through the process of perceptual adaptation (Norris, McQueen & Cutler 2003; Bradlow & Bent 2008,
among others). The current study addresses the role of word-level prosodic
information in listener adaptation to accented speech. Specifically, it investigates adaptation to lexical stress misplacement in English, and examines
how it compares with adaptation to segmental mismatches and accentedness
at the whole utterance level. 91 native speakers of English were exposed to
English words in isolation with canonical and non-canonical stress position,
while performing a speeded cross-modal matching task. Results suggest that
adaptation to lexical stress misplacement is largely comparable to adaptation to non-canonical productions at the segmental or utterance level in
terms of speed and generalizability across lexical items. Results also indicate that adaptation to lexical stress misplacement is generalizable across
talkers to some extent.
3aSC18. Decoding linguistically-relevant pitch patterns from frequency-following responses using hidden Markov models. Fernando Llanos, Zilong Xie, and Bharath Chandrasekaran (Commun. Sci. and Disord.,
Univ. of Texas at Austin, 1801 Rio Grande St., 103, Austin, TX 78701, fllanos@utexas.edu)
Pitch encoding is often studied with the frequency-following response (FFR), a scalp-recorded potential reflecting phase-locked activity from auditory subcortical ensembles. Prior work using the FFR has shown that long-term language experience modulates subcortical encoding of linguistically-relevant pitch patterns. These studies typically rely on FFRs averaged across thousands of repetitions, due to the low signal-to-noise ratio of single-trial FFRs. Here, we evaluated the extent to which hidden Markov models (HMMs), with fewer numbers of trials, can be used to quantify pitch encoding as well as capture language experience-dependent plasticity in pitch encoding. FFRs were recorded from fourteen Mandarin Chinese and fourteen American English listeners passively listening to four Mandarin tones (1000 trials per tone). HMMs were used to recognize FFRs to each tone in individual
participants. Specifically, HMMs were trained and tested across FFR sets of
different sizes, ranging from 50 to 500 trials. Results showed that HMMs
were able to recognize tones from FFRs, above chance, across all training
sizes and languages. Interestingly, HMMs picked up language differences
(Chinese > English) at very small sizes for training (e.g., 200) and testing
(e.g., 100). These findings highlight the potential benefits of using HMMs to
reduce experimental time and efforts in FFR data collection. [Project funded
by NIH NoR01DC013315 834 (B.C.).]
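As an illustrative aside (not the authors' pipeline), the forward-algorithm scoring that underlies HMM classification of pitch tracks can be sketched in a few lines. The two toy "tone" models, the low/high pitch-state means, and all parameter values below are invented for the example:

```python
import numpy as np

def forward_loglik(obs, log_pi, log_A, means, var):
    """Log-likelihood of a 1-D pitch track under a Gaussian-emission HMM."""
    log_b = -0.5 * ((obs[:, None] - means) ** 2 / var + np.log(2 * np.pi * var))
    alpha = log_pi + log_b[0]
    for t in range(1, len(obs)):
        # alpha_j = log b_j(o_t) + logsumexp_i(alpha_i + log A_ij)
        alpha = log_b[t] + np.logaddexp.reduce(alpha[:, None] + log_A, axis=0)
    return np.logaddexp.reduce(alpha)

# Two toy "tone" models over low/high pitch states (all numbers invented).
means, var = np.array([110.0, 160.0]), 15.0 ** 2
rising = dict(log_pi=np.log([0.9, 0.1]), log_A=np.log([[0.8, 0.2], [0.05, 0.95]]))
falling = dict(log_pi=np.log([0.1, 0.9]), log_A=np.log([[0.95, 0.05], [0.2, 0.8]]))

track = np.linspace(110.0, 160.0, 40)  # a rising pitch contour (Hz)
scores = {name: forward_loglik(track, m["log_pi"], m["log_A"], means, var)
          for name, m in [("rising", rising), ("falling", falling)]}
best = max(scores, key=scores.get)
print(best)  # the rising model should score higher
```

Classification then amounts to picking the tone model with the highest forward log-likelihood, exactly the decision rule such recognizers use.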
3aSC19. Effects of consonant voicing on vocalic segment duration
across resonants and prosodic boundaries. D. H. Whalen (Haskins Labs.,
300 George St. Ste. 900, New Haven, CT 06511, whalen@haskins.yale.edu)
Most languages, and especially English, reduce the duration of vocalic
segments before voiceless obstruents relative to voiced ones. Previous studies examined this effect for final singleton consonants in tauto- and heterosyllabic contexts within a word. Here, monosyllabic words are examined for
the effect both across resonants (e.g., “code”/”coat” vs. “cold”/”colt”) and
across word boundaries and stress conditions (e.g., “no CODE”/”NO code”
vs. “no GOAD”/”NO goad”). Preliminary results showed a typical effect for
singleton stops, with the vocalic segment reduced by 33.6% before the voiceless stop. Vocalic segments were somewhat reduced with an intervening resonant (26.4%), but the resonant itself was more reduced (57.3%); vocalic segment and resonant together were approximately the same duration as the vocalic segments alone in singleton syllables. The vocalic segment of "no" decreased slightly with a following voiceless stop (8.2% when unstressed, 10.2% when stressed); this was smaller than the effect other studies report across syllable boundaries within words. The results indicate that word and syllable structure impose timing effects of different magnitudes on both vowels
and resonants due to voicing of an adjacent stop. It is surprising that the
effect occurs across word boundaries and further that stress makes little difference. [Work supported by NIH grant DC-002717.]
Acoustics ’17 Boston
TUESDAY MORNING, 27 JUNE 2017
ROOM 302, 9:20 A.M. TO 12:20 P.M.
Session 3aSP
Signal Processing in Acoustics, Engineering Acoustics, and Architectural Acoustics: Signal Processing for
Directional Sensors III
Kainam T. Wong, Chair
Dept. of Electronic & Information Engineering, Hong Kong Polytechnic University, DE 605, Hung Hom KLN, Hong Kong
9:20
3aSP1. Numerical study on the effect of various parameters on beamforming performance and the estimation of particle velocity using a circular array. Sea-Moon Kim and Sung-Hoon Byun (Korea Res. Inst. of Ships and Ocean Eng., 32 Yuseong-daero 1312beon-gil, Yuseong-gu, Daejeon 34103, South Korea, smkim@kriso.re.kr)
Numerous beamforming techniques have been studied for the estimation of source direction with an array; conventional (delay-and-sum) beamforming, MVDR, and MUSIC are some examples. Recently, frequency-difference beamforming was also introduced for applications at high frequency ranges or for the elimination of aliasing effects. This talk compares the performance of these beamforming techniques, including frequency-difference beamforming, using a circular array. The effects of various parameters, such as the number of sensors, array radius, frequency range, and SNR, on the beamforming performance are studied. An error analysis for the estimation of particle velocity is also discussed. [This work was financially supported by the research project PES9020 funded by KRISO.]
9:40
3aSP2. Modal beamforming for circular acoustic vector sensor arrays. Berke M. Gur (Mechatronics Eng., Bahcesehir Univ., Ciragan Cad. Osmanpasa Mektebi Sok., No: 4-6 Besiktas D-527, Istanbul 34349, Turkey, berke.gur@eng.bau.edu.tr)
Vector sensors are directional receivers that measure the vectorial particle velocity associated with an acoustic wave rather than the scalar pressure. Therefore, arrays of vector sensors possess some desirable directional properties compared to conventional arrays of pressure sensors. In this paper, a modal beamformer for circular arrays of 1-D acoustic vector sensors is presented. Arrays of both radially and circumferentially oriented vector sensors are considered. It is shown that the highly directional modes of the acoustic velocity field can be extracted from the sensor measurements using the spatial Fourier transform. These modes are weighted and combined to form narrow steerable beams. The highest order of mode that can be extracted is limited by the number of vector sensors in the array. Theoretical analysis and numerical simulations indicate that the proposed modal beamformer attains the same directivity performance as circular pressure sensor array beamformers but outperforms them in terms of white noise gain. In addition, it uses half the number of sensors to achieve the same directivity performance as a circular vector sensor array modal beamformer reported previously in the literature. The proposed method is suitable for in-air and underwater low-frequency array processing applications.
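The spatial-Fourier (phase-mode) step described in this abstract can be illustrated with a minimal sketch for a uniform circular array. This idealized model omits the frequency-dependent mode equalization and the vector-sensor specifics of the paper; the array size, mode order, and source angle are invented for the example:

```python
import numpy as np

def phase_mode_beam(x, theta, M):
    """Steer a phase-mode (circular harmonic) beam from one snapshot x of an
    N-element uniform circular array toward angle theta (rad), orders |m| <= M."""
    N = len(x)
    c = np.fft.fft(x) / N                      # spatial DFT -> circular harmonics
    m = np.arange(-M, M + 1)
    return np.sum(c[m % N] * np.exp(1j * m * theta))

# Idealized snapshot whose harmonics of order |m| <= M encode a source at 70 deg
# (frequency-dependent mode equalization is assumed already applied).
N, M = 16, 4
src = np.deg2rad(70.0)
phi = 2 * np.pi * np.arange(N) / N             # sensor angles on the circle
x = sum(np.exp(1j * m * (phi - src)) for m in range(-M, M + 1))

grid = np.deg2rad(np.arange(360.0))
power = np.abs([phase_mode_beam(x, th, M) for th in grid])
est = np.rad2deg(grid[np.argmax(power)])
print(est)  # beam power peaks at ~70 deg
```

The weighted sum over orders -M..M forms a Dirichlet-kernel beam whose width shrinks as M grows, which is why the extractable mode order (limited by the sensor count) sets the directivity.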
Invited Papers
10:00
3aSP3. Influence of beat phenomenon on direction of arrival estimation based on a single vector hydrophone. Hongning Liu, Yi
Zheng, Yufeng Mao, Yanting Yu, and Xiaodong Gong (Shandong Acad. of Sci. Inst. of Oceanographic Instrumentation, 28 Zhejiang
Rd., Shinan District, Qingdao, China, maoyf_sdioi@163.com)
A vector hydrophone measures sound pressure and orthogonal components of particle velocity co-located in space and simultaneously in time. DOA (direction of arrival) estimation based on a single vector hydrophone has attracted extensive attention. The beat phenomenon can cause confusion in DOA estimation results. In this paper, the mechanism by which the beat phenomenon affects DOA estimation by the cross-spectrum method and the average acoustic intensity method is explored through theoretical study and simulation. The results show that the beat phenomenon causes the sound pressure and particle velocity amplitudes to vary with time, which ultimately makes the DOA estimates of both methods unstable. First, when DOA is estimated by the average sound intensity method, if the temporal resolution is smaller than the minimum envelope period, the DOA estimate changes over time. Second, when DOA is estimated by the cross-spectral method, if the FFT frequency resolution is coarser than the frequency difference, the frequencies of the two signals cannot be distinguished; subsequent operations then mix the energy of the two signals, causing estimation errors. Finally, solutions and application examples are proposed. The conclusions could be applied to improve the DOA estimation performance of a single vector hydrophone.
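A minimal sketch of the average-acoustic-intensity estimator discussed above, for a single tone with no beat (unit-scaled; amplitudes, impedance factors, and the source angle are all invented for the example):

```python
import numpy as np

def intensity_doa(p, vx, vy):
    """Azimuth from the time-averaged active intensity components <p*vx>, <p*vy>."""
    return np.arctan2(np.mean(p * vy), np.mean(p * vx))

# Unit-scaled plane wave from 40 deg on a single vector hydrophone
# (amplitude and impedance scaling omitted for clarity).
fs, f0, theta = 8000.0, 100.0, np.deg2rad(40.0)
t = np.arange(0.0, 1.0, 1.0 / fs)
p = np.cos(2 * np.pi * f0 * t)
vx, vy = p * np.cos(theta), p * np.sin(theta)
est = np.rad2deg(intensity_doa(p, vx, vy))
print(est)  # ~40.0 deg
```

With two closely spaced tones present, the same averages computed over windows shorter than the beat envelope period fluctuate with the envelope, which is the instability the abstract analyzes.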
10:20–10:40 Break
Contributed Papers
10:40
3aSP4. Temporal-spatial distributions for a repertoire of fin whale vocalizations from directional sensing with a coherent hydrophone array. Heriberto A. Garcia (Elec. and Comput. Eng., Northeastern Univ., 151 Mystic St., Apt 35, Arlington, MA 02474, garcia.he@husky.neu.edu), Wei Huang (Elec. and Comput. Eng., Northeastern Univ., Malden, MA), and Purnima R. Makris (Elec. and Comput. Eng., Northeastern Univ., Boston, MA)
The ability to monitor and differentiate vocalizations from a given marine mammal species can be challenging with single-sensor measurements when multiple marine mammal species are vocalizing in close proximity and when the vocalizations have not been observed or documented previously. Here we employ a large-aperture coherent hydrophone array system with directional sensing to detect, localize, and classify a repertoire of fin whale vocalizations using the passive ocean acoustic waveguide remote sensing (POAWRS) technique. The fin whale vocalizations comprise their characteristic 20-Hz-centered pulses, interspersed with 130-Hz-centered upsweep calls, and other vocalizations with frequencies ranging between 40 and 80 Hz. The directional sensing ability of POAWRS is essential for associating the various call types with fin whales through long-term tracking of the vocalization bearing-time trajectories and localizations over multiple diel cycles. Here, we quantify the relative diel occurrence of the three distinct fin whale vocalization types and apply the results to infer their behaviors as a function of the observation region.
11:00
3aSP5. Instantaneous wide-area passive acoustic monitoring of surface ships and submerged vehicles using a coherent hydrophone array. Wei Huang (Elec. and Comput. Eng., Northeastern Univ., 500 Broadway, Apt. 3157, Malden, MA 02148, huang.wei1@husky.neu.edu), Purnima R. Makris (Elec. and Comput. Eng., Northeastern Univ., Boston, MA), and Heriberto A. Garcia (Elec. and Comput. Eng., Northeastern Univ., Arlington, MA)
The ability to monitor surface ships and other ocean vehicles continuously over instantaneous wide areas is essential for a wide range
of applications including defense and ocean environmental assessment. Here, we employ a large-aperture coherent hydrophone array
system to detect, localize, and classify several surface ships and other ocean vehicles from their sounds radiated underwater using the
passive ocean acoustic waveguide remote sensing (POAWRS) technique. The approach is calibrated for four distinct research and fisheries survey vessels with accurately known locations obtained from global positioning systems (GPS). Acoustic signals passively
recorded on the coherent hydrophone array are first beamformed for their azimuthal dependencies. The sounds radiated by ocean
vehicles are automatically detected using a threshold detector from the beamformed spectrograms. The bearing versus time trajectories
of sequences of detections are used to localize the ocean vehicles by employing the moving array triangulation technique. The sounds
radiated by the ships are dominated by distinct tonals and cyclostationary signals in the 50 to 2000 Hz frequency range. The temporal-spectral characteristics of these signals can be used to classify each ship. Our analysis indicates that the ocean vehicles can be instantaneously monitored and tracked over wide areas spanning more than 300 km in diameter.
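The moving array triangulation step mentioned above can be illustrated as a least-squares intersection of bearing lines. This sketch assumes a stationary source and exact bearings, which real at-sea data of course do not provide; the source position and track geometry are invented for the example:

```python
import numpy as np

def triangulate(positions, bearings):
    """Least-squares intersection of bearing lines observed from several
    receiver positions (bearings-only localization sketch)."""
    n = np.column_stack([-np.sin(bearings), np.cos(bearings)])  # line normals
    b = np.sum(n * positions, axis=1)      # each line satisfies n_i . s = n_i . r_i
    s, *_ = np.linalg.lstsq(n, b, rcond=None)
    return s

# A source at (3000, 1500) m seen from three points along a straight track
src = np.array([3000.0, 1500.0])
track = np.array([[0.0, 0.0], [500.0, 0.0], [1000.0, 0.0]])
brg = np.arctan2(src[1] - track[:, 1], src[0] - track[:, 0])
est = triangulate(track, brg)
print(est)  # recovers ~(3000, 1500)
```

Each bearing measurement contributes one linear constraint (the source lies on the bearing line), so two or more well-separated array positions suffice to fix a 2-D source location.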
Contributed Paper
11:20
3aSP6. The MITRE undersea sounding experiment 2016. Nicholas A.
Rotker, Ballard J. Blair, Caitlyn N. Marcoux, Alexander J. Tooke (The
MITRE Corp., MS S118, 202 Burlington Rd., Bedford, MA 01730,
nrotker@mitre.org), Ilya A. Udovydchenkov (The MITRE Corp., Woods
Hole, MA), Bindu Chandna, Melissa G. Meyer (The MITRE Corp., Bedford, MA), and Harold T. Vincent (Ocean Eng. Dept., Univ. of Rhode
Island, Narragansett, RI)
This talk describes the MITRE Undersea Sounding Experiment (MUSE16), conducted in Narragansett Bay from September 12-23, 2016, in which acoustic communication, localization waveforms, and signal processing techniques were explored. The experiment utilized newly developed acoustic buoys designed and built by the University of Rhode Island (URI) Ocean Engineering Dept. in collaboration with the MITRE Corporation. The buoys use the Global Positioning System (GPS) for localization and time synchronization and are capable of both transmitting and receiving acoustic data in the range of 8-18 kHz. The buoys were designed to further research in the areas of acoustic communications, channel modeling, and continuous active sonar (CAS). For the communication and channel modeling experimentation, modulated M-sequences of various lengths were transmitted to explore channel characterization and communication enhancements. For the CAS experimentation, linear frequency modulated (LFM) chirps of various bandwidths and center frequencies were explored, along with several underwater targets. A description of the prototype buoys, including hardware, software, experimental setup, and types of data collected, as well as some initial results, will be presented.
Invited Paper
11:40
3aSP7. Direction-finding techniques for a small-aperture hydrophone array. Martin Gassmann, Sean M. Wiggins, and John Hildebrand (Scripps Inst. of Oceanogr., Univ. of California San Diego, 9152 Regents Rd., Apt. L, La Jolla, CA 92037, mgassmann@ucsd.edu)
A volumetric array of hydrophones was coupled to a long-term, autonomous acoustic recorder and deployed to the seafloor (~1300 m depth) ~130 km offshore of Southern California to track continuous-wave (CW) and transient underwater sound sources. The array was composed of four wide-band, omnidirectional hydrophones closely spaced ~1 m apart. Sampling was continuous at 100 kSamples/s for each hydrophone over a period of more than two months. To track CW sound sources, conventional and adaptive beamforming techniques were implemented. The array's beam pattern characteristics as a function of frequency (10-1000 Hz) were investigated by simulating the arrival of plane waves from directions of interest. Beamforming techniques were opportunistically applied to nearby (<5 km) transiting commercial ships with automatic identification system (AIS) transmitted locations. Discrepancies were within the physical dimensions of the transiting ships. Source levels for each of the transiting ships were estimated at the ships' various aspects to
characterize the directionality of underwater ship noise. In addition, a cross-correlation-based direction-finding technique for transient sounds was developed and implemented for short-duration (<1 ms), frequency-modulated echolocation clicks emitted by deep-diving beaked whales.
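A cross-correlation delay estimate of the kind mentioned above can be sketched as follows. The synthetic click, the 1-m sensor spacing, and the nominal 1500 m/s sound speed are illustrative assumptions, not the paper's parameters:

```python
import numpy as np

def tdoa(x, y, fs):
    """Delay of y relative to x (seconds) from the cross-correlation peak."""
    c = np.correlate(y, x, mode="full")
    return (np.argmax(c) - (len(x) - 1)) / fs

# Synthetic broadband click arriving 33 samples (0.33 ms) later on sensor 2
fs = 100_000
click = np.random.default_rng(0).standard_normal(64)
x = np.zeros(4096); y = np.zeros(4096)
x[1000:1064] = click
y[1033:1097] = click
d = tdoa(x, y, fs)

# Far-field angle from the pair axis, for a hypothetical 1-m spacing
L, c0 = 1.0, 1500.0                  # aperture (m), nominal sound speed (m/s)
angle = np.degrees(np.arccos(np.clip(c0 * d / L, -1.0, 1.0)))
print(d * 1e3, angle)  # ~0.33 ms delay, ~60 deg off the pair axis
```

With four hydrophones, the six pairwise delays over-determine the full 3-D arrival direction, which is what a volumetric array buys over a single pair.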
Contributed Paper
12:00
3aSP8. Design of a microphone array for near-field conferencing applications. Pieter Thomas (Res. group WAVES, Ghent Univ., Technologiepark-Zwijnaarde 15, Gent B-9052, Belgium, pieter.thomas@ugent.be), Reinout Verburgh, Michael Catrysse (Televic Conference, Kachtem, Belgium), and Dick Botteldooren (Res. group WAVES, Ghent Univ., Ghent, Belgium)
Microphone arrays are becoming increasingly popular for conferencing applications and near-field speech recording. In this work, a 16-element cylindrical microphone array is designed for beamforming toward a nearby speaker while reducing the influence of competing talkers. A two-stage approach is used to obtain the desired array directivity pattern, optimizing both microphone locations and filter weights. The positions of the microphones are optimized using a hybrid optimization technique, taking into account the influence of the nearby acoustic environment (array shape and conferencing desk). FIR filter coefficients for each microphone are derived from a regularized least-squares (LSQR) solution combined with null steering. An implementation of the array is made with digital MEMS microphones, and the performance of the design is evaluated experimentally and compared with a classically used gooseneck microphone.
TUESDAY MORNING, 27 JUNE 2017
ROOM 309, 9:20 A.M. TO 12:00 NOON
Session 3aUWa
Underwater Acoustics, Acoustical Oceanography, Engineering Acoustics, and Signal Processing in
Acoustics: A Century of Sonar I
Michael A. Ainslie, Cochair
Underwater Tech. Dept., TNO, P.O. Box 96864, The Hague 2509JG, Netherlands
Kevin D. Heaney, Cochair
OASIS Inc., 11006 Clara Barton Dr., Fairfax Station, VA 22039
Invited Papers
9:20
3aUWa1. From Canton to the Curie brothers: The dawn of sonar. Michael A. Ainslie (Acoust. and Sonar, TNO, P.O. Box 96864,
The Hague 2509JG, Netherlands, michael.ainslie@tno.nl) and Willem D. Hackmann (Oxford, United Kingdom)
Toward the end of the First World War, Paul Langevin and Robert Boyle developed the first underwater echo-ranging systems capable of detecting and localizing submarines, with important consequences for the outcome of the Second World War. Their invention
involved the use of quartz transducers for initial generation of sound and reception of the echo, and vacuum tubes for amplification of
the (much weakened) received signal. The distance to the object responsible for the echo could be deduced from the two-way travel time
and known speed of sound. The advances in scientific knowledge leading to these technologies are traced from the discoveries of the
compressibility of water (1762, John Canton), thermionic emission (1853, Edmond Becquerel), and piezoelectricity (1880, Jacques and
Pierre Curie).
9:40
3aUWa2. Sonar science and technology in Russia in the 20th century. Oleg A. Godin (Phys. Dept., Naval Postgrad. School, 833
Dyer Rd., Bldg. 232, Monterey, CA 93943-5216, oagodin@nps.edu)
Russian underwater acoustics traces its roots to the 19th-century empirical sound propagation studies on the Sea Devil submarine and theoretical predictions of guided propagation in shallow water. During World War II, the acute need to save lives and contribute to the war effort led to a rapid expansion of acoustic research and development, especially in mine countermeasures. The growth continued in the post-war years until the collapse of the USSR and was inspired by the opportunities for naval and civilian applications opened up by the discovery of the SOFAR channel and a deeper understanding of ocean physics. This paper will briefly review some milestones of underwater acoustic research and development in Russia, from Mikhail Lomonosov to Leonid Brekhovskikh and from Admiral Makarov's current velocity probe to sonars on early nuclear-powered submarines. Applications of sonar to improve navigation and characterize the underwater environment will be emphasized.
10:00
3aUWa3. Fessenden and Boyle: Two Canadian Sonar Pioneers. Harold M. Merklinger (Defence R & D Canada – Atlantic (retired),
Dartmouth, NS, Canada) and Dale D. Ellis (Oceanogr., Dalhousie Univ., 18 Hugh Allen Dr., Dartmouth, NS B2W 2K8, Canada, daledellis@gmail.com)
Reginald Fessenden developed voice-modulated radio in the early 20th century using special alternators and microphones capable of
handling kilowatts of power. When the Titanic struck the iceberg in 1912, Fessenden, then in Boston, developed an audio-frequency
acoustic transmitter-receiver, which he used to detect an iceberg at two miles. When the “Great War” began, UK interests turned to
detecting submarines. Ernest Rutherford, on behalf of the Board of Invention and Research (BIR), invited former students A. B.
Wood, and R. W. Boyle, to join the anti-submarine effort. Fessenden was also consulted. The team detected submarines using Fessenden’s device but experienced difficulty determining target bearing. After consulting Paul Langevin, Boyle agreed that quartz transducers
operating at ultrasonic frequencies should provide a solution. Boyle then developed the UK Type 112 “ASDIC” which was being fitted
to Royal Navy warships just as the War ended. Boyle returned to the University of Alberta after the war and continued work in ultrasonics. In 1929 he was appointed Director of Physics at Canada’s National Research Council. There he established an Acoustics Section,
and during World War II started a scientific activity in Halifax, NS, which later became the Naval Research Establishment.
10:20
3aUWa4. Sonar technology and underwater warfare from World War I to the launch of USS Nautilus in 1954, the first nuclear
submarine. Willem D. Hackmann (Linacre College, Oxford OX1 3JA, United Kingdom, magicwdh@aol.com)
Sonar research began in the First World War to curb the U-boat menace. Radical methods had to be devised both organizationally
and technically to combat this new form of warfare. Civilian scientists were drafted into naval research. A new field of science was created: underwater acoustics for passive listening, and ultrasonic echo-ranging for active detection of submarines. If the war had lasted
only a few more months, prototype Asdic equipment (the Royal Navy's name for sonar) would have gone operational. During the interwar years the American and Royal Navies developed their sonar "searchlight" systems, while before 1935 the German Navy concentrated on sophisticated passive listening arrays (incorporating American WWI research) to protect their capital ships from enemy
submarines. Searchlight sonar technology evolved sharply in WWII. The nuclear submarine in 1954 required a complete rethink of the
sonar scanning techniques developed over the previous 40 years. This lecture is based on full access to naval documents and interviews
with scientists involved in this work in Britain and the USA for my book Seek and Strike (HMSO, 1984).
10:40–11:00 Break
11:00
3aUWa5. German Navy sonar development during the two world wars and interwar years. Willem D. Hackmann (Linacre College, Oxford OX1 3JA, United Kingdom, magicwdh@aol.com)
In the German Navy, as in the UK and America, the development of underwater acoustical detection was shaped by institutional, political, and technical constraints and in response to tactical events. German hydrophone development in WWI was less advanced than
that of the Allies, who focussed on combating an all-out U-boat war. The German Navy did not develop the variety of hydrophones deployed by the Allies, in particular the towed hydrophone array of the American Navy, which nevertheless inspired the German passive sonar arrays known as "Gruppenhorchgerät" (GHG), German for "group listening device." These were developed in the interwar years for the long-range protection of their capital ships, until the signing of the Anglo-German Naval Agreement in 1935, when Germany commenced its submarine building program. For strategic reasons, Germany continued to concentrate on pro-submarine research, so that at the start of WWII the German Navy had sophisticated GHG systems available, which it improved further into the "Balkon" system of the later high-speed U-boats, Types XXI and XXVI. Their operational echo-ranging Asdic equipment was tactically inferior to that of the Royal Navy, and some interesting prototypes in development did not reach active service.
11:20
3aUWa6. Profiles in sonar: Reflections on F. V. Hunt. Anthony I. Eller (OASIS, Inc., 1927 Byrd Rd., Vienna, VA 22182, ellera@oasislex.com) and Robert W. Pyle (11 Holworthy Pl, Cambridge, MA)
Frederick Vinton Hunt was one of the towering figures who helped to shepherd the development of SONAR, an acronym for Sound
Navigation and Ranging, from its infancy to near maturity. We two authors of this paper had the great advantage and pleasure of having
Ted Hunt as a mentor during the latter part of his career and the early stages of our own. In this presentation we briefly outline the history of SONAR's development as a means of placing Hunt's contributions in context, chiefly his time at Harvard during the R&D phase of SONAR's growth, followed by his role as an educator during the 25 years following WWII. Ideas are discussed
related to what features of his mentorship are responsible for creating the next generation of leaders in science and technology.
11:40
3aUWa7. Overview of Chinese underwater acoustics. Lianghao Guo and Zhenglin Li (State Key Lab. of Acoust., Inst. of Acoust.,
CAS, China, No. 21 North 4th Ring Rd., Haidian District, Beijing, Beijing 100190, China, glh2002@mail.ioa.ac.cn)
French physicist Paul Langevin was the pioneer of active sonar [Ainslie, Principles of Sonar Performance Modeling, p. 10, Springer, 2010]. His graduate student and later collaborator, Dezhao Wang, founded the first Chinese underwater acoustics laboratory at the Chinese Academy of Sciences (CAS) in 1958. Shie Yang founded China's first department of underwater acoustics engineering at Harbin Engineering University in 1959. In accordance with the development strategy of "from the near to the distant; from the shallow to the deep," Chinese underwater acoustics research has been active in shallow-water acoustics and has increased its efforts in deep-sea acoustics in recent years. The Chinese underwater acoustics community has nurtured nine CAS/CAE (Chinese Academy of
Engineering) academicians and several internationally outstanding scholars in shallow-water acoustics. With support from the CAS and the US ONR, two comprehensive China-US joint sea-going experiments were conducted, and many international conferences on ocean acoustics have been held in China. This paper briefly introduces the history of Chinese underwater acoustics research and reviews some publicly released advances in underwater acoustics in China, including basic research in ocean acoustics, signal processing, and underwater acoustic transducers. [Work supported by the National Natural Science Foundation of China under Grant Nos. 61571436, 11434012, and 41561144006.]
TUESDAY MORNING, 27 JUNE 2017
ROOM 306, 9:20 A.M. TO 12:20 P.M.
Session 3aUWb
Underwater Acoustics: Sound Propagation and Scattering in Three-Dimensional Environments III
Ying-Tsong Lin, Cochair
Applied Ocean Physics and Engineering, Woods Hole Oceanographic Institution, Bigelow 213, MS#11, WHOI,
Woods Hole, MA 02543
Frederic Sturm, Cochair
Acoustics, LMFA, Centre Acoustique, Ecole Centrale de Lyon, 36, Avenue Guy de Collongue, Ecully 69134, France
Invited Paper
9:20
3aUWb1. Sound speed profile measurement uncertainties due to rough interfaces: A parametric study using the Langston-Kirchhoff model. Samuel Pinson (SHOM, Appl. Sci. Bldg., Rm. 202a, State College, Pennsylvania 16802, samuelpinson@yahoo.fr)
In the context of sound-speed profile measurement by the image source method, interface roughnesses are responsible for uncertainties in the results. The image source method models the wave reflected from a layered medium as a collection of image sources, the mirror reflections of the real source across the interfaces. From the image source positions, one can deduce the sound-speed profile. Interface roughnesses may blur these image sources and reduce the accuracy of their localization. Using the Langston-Kirchhoff model of a 3D layered medium with rough interfaces, it is possible to perform a parametric study of these uncertainties as a function of roughness parameters. With the aim of performing roughness parameter inversion, theoretical uncertainties are calculated and compared with uncertainties estimated from numerical experiments.
Contributed Papers
9:40
3aUWb2. Three-dimensional acoustic scattering in highly geometrically
constrained environments. Irena Lucifredi (SOFAR Acoust., 44 Garfield
Ave. #2, Woburn, MA 01801, euler001@yahoo.com) and Raymond J.
Nagem (Boston Univ., Boston, MA)
For active sonar systems, transmission loss (TL) and reverberation
level (RL) are key parameters derived from the acoustic fields predicted by
models and used to assess sonar performance. The existing propagation
and scattering models may be appropriate for applications in the deep
ocean or open littorals, but sonar operators are increasingly being asked to
perform tasks including navigation or detection in significantly more confined waterways such as rivers or ports. Physics-based models are generally not available for predicting the acoustic field in such highly
geometrically constrained and dynamic 3D environments often characterized by highly variable azimuthal boundaries such as irregularly shaped
port geometries, piers, and breakwaters, which may also have large tidally driven depth variations over short time periods. They may also be
populated with large scattering objects such as deep draft vessels and
mooring dolphins. A virtual source research model capable of predicting
the three-dimensional field, including propagation, scattering, and reverberation in such complex underwater environments is presented here. A
complex, variable geometry, harbor scenario containing a large scattering
object such as a vessel hull has been modeled in order to represent the resultant acoustic field and to investigate the scattering mechanisms present. [Work
supported by ONR.]
10:00
3aUWb3. Scattering from objects buried in sandy sediments using a
fully scattered field finite element approach. Anthony L. Bonomo and
Marcia J. Isakson (Appl. Res. Labs., The Univ. of Texas at Austin, 10000
Burnet Rd., Austin, TX 78713, anthony.bonomo@gmail.com)
The finite element method is utilized to study the scattering response
from objects buried in a homogeneous sand half-space. An approach is
adopted in which the total field is separated into the incident, reflected, and transmitted fields in the absence of the buried object and the
scattered field due to the presence of the buried object. Since the incident,
reflected, and transmitted fields can be determined analytically for the case
of a flat water-sediment interface, only the scattered field needs to be solved
for numerically. This approach results in much faster run times while reducing spurious reflections from the boundaries of the computational domain.
The buried object and the sediment half-space can be treated as fluid, viscoelastic, or poroelastic. [Work supported by ONR, Ocean Acoustics.]
10:20
11:20
3aUWb4. Application of smoothed finite element method to acoustic scattering from underwater elastic objects. Yingbin Chai (School of Naval
Architecture and Ocean Eng., Huazhong Univ. of Sci. and Technol., Wuhan,
Hubei, China), Zhixiong Gong (Dept. of Phys. and Astronomy, Washington
State Univ., Webster Physical Sci. 754, Pullman, WA 99164-2814, zhixiong.
gong@wsu.edu), Wei Li (Hubei Key Lab. of Naval Architecture and Ocean
Eng. HydroDynam., Huazhong Univ. of Sci. and Technol., Wuhan, Hubei,
China), and Tianyun Li (School of Naval Architecture and Ocean Eng., Huazhong Univ. of Sci. and Technol., Wuhan, Hubei, China)
3aUWb7. Modal analysis of split-step Fourier parabolic equation solutions in the presence of rough surface scattering. Mustafa Aslan (Turkish
Naval Acad., Postane Mah. Rauf Orbay Cad. No:290, Istanbul 34940, Turkey, maslan@dho.edu.tr) and Kevin B. Smith (Naval Postgrad. School,
Monterey, CA)
In this work, the smoothed finite element method (S-FEM) is employed
to solve the acoustic scattering from underwater elastic objects. The S-FEM,
which can be regarded as a combination of the standard finite element
method (FEM) and the gradient smoothing technique (GST) from the meshless methods, was initially proposed for solid mechanics problems and has
been demonstrated to possess several superior properties. In the S-FEM, the
smoothed gradient fields are acquired by performing the GST over the
obtained smoothing domains. Due to the proper softening effects provided
by the gradient smoothing operations, the original “overly-stiff” FEM model
is softened and the present S-FEM possesses a relatively appropriate stiffness of the continuous system. Therefore, the quality of the numerical
results can be significantly improved. The numerical results from several
typical numerical examples demonstrate that the S-FEM is quite effective to
handle acoustic scattering from underwater elastic objects and can provide
more accurate numerical results than the standard FEM.
Determining accurate solutions of acoustic propagation in the presence
of rough surface scattering is of significant interest to the research community. As such, there are various approaches to model the effects of rough
surface scattering. Typically, the models used assume an idealized pressure
release condition at the sea surface boundary. This boundary condition can
easily be accommodated through a variety of modeling techniques for flat
surfaces, but it is necessary to introduce additional complex methods in numerical models for rough surfaces. Parabolic equation (PE) models utilizing
split-step Fourier (SSF) algorithms have been employed to treat the rough
surface displacements. These include methods such as the field transformational technique (FFT), and direct modeling of the physical water/air interface discontinuity. Previous work highlighted phase errors in SSF-based
models with large density discontinuities at the interfaces, which were minimized by employing a hybrid split-step Fourier/finite-difference approach.
However, such phase errors were largely absent in the presence of rough
surface scattering. In this work, the PE solutions are decomposed into normal modes in order to determine which modes dominate the phase error in
the presence of flat surfaces, and to confirm that these modes are highly
scattered in the presence of rough surfaces.
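A single range step of a split-step Fourier PE march can be sketched in a few lines. This is a minimal narrow-angle (Tappert phase-screen) step, with the idealized pressure-release flat surface enforced by an odd image extension of the field; the function name and grid conventions are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def ssf_pe_step(psi, dz, dr, k0, n_profile):
    """One range step of a narrow-angle split-step Fourier PE march.

    A pressure-release surface at z = 0 is enforced by odd image extension,
    so the FFT behaves like a sine transform and the surface field stays zero.
    psi: complex field on the depth grid z = dz*(1..N); n_profile: index n(z).
    """
    n = psi.size
    # Odd extension about z = 0 (and about the deep boundary)
    ext = np.concatenate(([0.0], psi, [0.0], -psi[::-1]))
    spec = np.fft.fft(ext)
    kz = 2.0 * np.pi * np.fft.fftfreq(ext.size, d=dz)
    # Free-space (narrow-angle) propagator applied in the wavenumber domain
    spec *= np.exp(-1j * dr * kz**2 / (2.0 * k0))
    ext = np.fft.ifft(spec)
    psi = ext[1:n + 1]
    # Tappert phase screen for the refractive-index profile n(z)
    psi *= np.exp(1j * k0 * (n_profile**2 - 1.0) / 2.0 * dr)
    return psi
```

Because the propagator and phase screen have unit modulus, the step is unitary: the field norm is conserved, which is a convenient sanity check for an implementation.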
10:40

3aUWb5. Kirchhoff approximation for scattering from solid spheres at flat interfaces: Improved description at large grazing angles. Aaron M. Gunderson and Philip L. Marston (Phys. and Astronomy Dept., Washington State Univ., Pullman, WA 99164-2814, aaron.gunderson01@gmail.com)

The Kirchhoff approximation has previously been used to model scattering from partially exposed elastic spheres breaking through a flat interface, with variable exposure level, grazing angle, and frequency [J. Acoust. Soc. Am. 136, 2087 (2014)]. The limits of the Kirchhoff integral are determined by the boundaries of illumination on the sphere for each scattering path. Recent adaptations to the methods of boundary determination within the Kirchhoff approximation have yielded faster numerical integration algorithms and higher similarity of results when compared to experimental scattering data and the exact solution at half exposure [J. Acoust. Soc. Am. 140, 3582-3592 (2016)]. Additional steps have been taken to account for the partial blocking of the interface by the partially exposed sphere, through inclusion of a new correction term. This correction is largest at high grazing angles, low frequencies, and low target exposures, and drastically improves the Kirchhoff approximation within these limits. [Work supported by ONR.]

11:00

3aUWb6. Bistatic scattering from underwater elastic spheres and cylinders: Interference and resonance phenomena. Aaron M. Gunderson (Phys. and Astronomy Dept., Washington State Univ., Pullman, WA 99164-2814, aaron.gunderson01@gmail.com), Aubrey L. Espana (Appl. Phys. Lab., Univ. of Washington, Seattle, WA), and Philip L. Marston (Phys. and Astronomy Dept., Washington State Univ., Pullman, WA)

Scattering by underwater elastic spheres and cylinders is considered over a full range of scattering angles. Three models are presented in the frequency domain: an exact solution for scattering from an elastic sphere, a finite cylinder approximate solution, and a finite element simulation for the finite cylinder. Close agreement between the two cylinder models speaks to the strength of the finite cylinder approximate solution. Within each model, the dependence of the scattering on frequency and angle is discussed. Resonance structure is highlighted, and families of ridges and valleys in the data can be described by interference models between near-side and far-side Rayleigh paths and the specular reflection. A thorough understanding of bistatic scattering from elastic targets is helpful for data acquisition from moving sources (such as an AUV) or acoustic arrays, as well as for studying monostatic scattering from targets at an interface, where the interface may direct bistatic paths to the receiver. [Work supported by ONR.]

3708
J. Acoust. Soc. Am., Vol. 141, No. 5, Pt. 2, May 2017

11:40

3aUWb8. Creating synthetic aperture images using experimental and simulated backscattering of solid elastic cubes. Viktor Bollen, Timothy Daniel, and Philip L. Marston (Phys. and Astronomy Dept., Washington State Univ., Pullman, WA 99164-2814, viktor.bollen@wsu.edu)

To study aspect- and material-dependent backscattering of objects, we used cubes, as their geometry has distinct exposure points for different backscattering mechanisms. We insonified two solid cubes, one brass and the other steel, in underwater tank scale experiments, measuring their backscattering using circular, linear, and cylindrical synthetic aperture sonar. The right angles of the edges allow for enhanced backscattering by Rayleigh waves, which couple in at material-specific angles, since the waves are retroreflected by the cube’s edges [K. Gipson and P. L. Marston, J. Acoust. Soc. Am. 105, 700-710 (1999)]. In concert with the experiments, we model the backscattering from our cubes using Kirchhoff-integral-based simulations. The simulation isolates the specular responses, simplifying the complex responses from the cube’s geometry and allowing us to identify specific effects, such as splitting from the corner reflections of a cube when its top ridge is tilted. Using Fourier-based back-projection algorithms, we reconstructed target images from both the experimental and simulation results. Three-dimensional images were also recreated from the cylindrical data. Using a combination of experimental and simulation results, we identified aspect-dependent mechanisms. [Work supported by ONR.]
12:00
3aUWb9. Effect of nonlinear internal wave on monostatic reverberation
in the shallow water region with underwater sound channel. Jungyong
Park (Seoul National Univ., Bldg. 36, Rm. 212, Seoul National University, 1, Gwanak-ro, Gwanak-gu, Seoul 151-744, South Korea, ioflizard@snu.ac.kr), Youngmin Choo (Defense System Eng., Sejong Univ., Seoul, South
Korea), Woojae Seong, and Sangkyum An (Seoul National Univ., Seoul,
South Korea)
The effect of a nonlinear internal wave on reverberation is investigated for a shallow water region having an underwater sound channel, which was observed during the SAVEX15 experiment. When the source is located near the channel axis, the reverberation from the rough bottom is insignificant because the trapped modes do not interact with the bottom. However, when a nonlinear internal wave is present, the reverberation level increases, since trapped modes transfer to bottom-interacting modes due to sound fluctuation by the nonlinear internal wave. This trend can be explained by the coupled mode equation [J. Acoust. Soc. Am. 135, 610-625 (2014)]. To theoretically describe the increase of bottom reverberation, we simplify the situation to one where two modes are present: one is a trapped mode and the other is a bottom-interacting mode. In this situation, the trapped mode transfers to the bottom-interacting mode after its encounter with the nonlinear internal wave. The bottom reverberation has different patterns and levels before and after the internal wave, and thus it depends strongly on the distance between the source/receiver and the internal wave.
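A minimal stand-in for this two-mode picture is an energy-conserving rotation that transfers amplitude from the trapped mode to the bottom-interacting mode at the internal-wave crossing. The rotation angle here is an assumed proxy for the coupling strength, not the coupled-mode solution of the cited paper.

```python
import numpy as np

def cross_internal_wave(amps, coupling_angle):
    """Redistribute two modal amplitudes at an internal-wave crossing.

    amps: [trapped, bottom-interacting] complex amplitudes. The rotation
    conserves total modal energy while moving energy between the modes.
    """
    c, s = np.cos(coupling_angle), np.sin(coupling_angle)
    return np.array([[c, -s], [s, c]]) @ np.asarray(amps, dtype=complex)
```

After the crossing, the bottom-interacting amplitude is nonzero, so the bottom reverberation level rises, consistent with the trend described above.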
TUESDAY MORNING, 27 JUNE 2017
EXHIBIT HALL D, 9:00 A.M. TO 12:00 P.M.
Exhibit
The instrument and equipment exhibit is located near the registration area in Exhibit Hall D.
The Exhibit will include computer-based instrumentation, scientific books, sound level meters, sound intensity systems, signal processing systems, devices for noise control and acoustical materials, active noise
control systems, and other exhibits on acoustics.
Exhibit hours are Sunday, 25 June, 5:30 p.m. to 7:00 p.m., Monday, 26 June, 9:00 a.m. to 5:00 p.m., and
Tuesday, 27 June, 9:00 a.m. to 12:00 noon.
TUESDAY AFTERNOON, 27 JUNE 2017
ROOM 206, 1:15 P.M. TO 3:20 P.M.
Session 3pAAa
Architectural Acoustics: Retrospect on the Works of Bertram Kinzey II
Gary W. Siebein, Cochair
Siebein Associates, Inc., 625 NW 60th Street, Suite C, Gainesville, FL 32607
David Lubman, Cochair
DL Acoustics, 14301 Middletown Ln., Westminster, CA 92683-4514
Chair’s Introduction—1:15
Invited Papers
1:20
3pAAa1. Standing on the shoulders of an ET giant. Lucky S. Tsaih (Dept. of Architecture, National Taiwan Univ. of Sci. and Tech.,
43 Keelung Rd., Sec. 4, Taipei 10607, Taiwan, akustx@mail.ntust.edu.tw)
Professor Bertram Y. Kinzey, Jr., is the main author of Environmental Technologies in Architecture. This valuable book was first published in 1951 and has been intellectually stimulating for architecture students at UF and around the world. It was astonishing to read through the Preface of the book. It reveals this giant’s farsighted vision of how an architect should take account of complex environmental control systems and balance them with the physiological and psychological needs of the occupants during the design process. These environmental control systems, which include thermal, atmospheric, and environmental control, acoustics, sanitation, lighting, electrical machinery, and power distribution, coincide with current “hot” sustainable design topics. In particular, he expresses the idea of seamless integration between the architect and his engineering consultants through the entire life cycle of a project. This concept serves as the heart of BIM development and its implementation process. Following his insight, the current architecture design curriculum and sustainable research areas at NTUST have worked toward this holistic integration mode. As Newton said, “If I have seen further it is by standing on the shoulders of giants”!
Coffee breaks on Monday and Tuesday mornings, as well as an afternoon break on Monday, will be held in the exhibit area.
1:40
3pAAa2. Benefiting from Bertram Kinzey’s legacy. Gary Madaras (Acoust., ROCKFON, 4849 S. Austin Ave., Chicago, IL 60638,
DoctorSonics@aol.com)
In 1991, when I rolled into Gainesville, FL, to begin doctoral studies at the University of Florida’s Department of Architecture, I did
not know who Bertram Kinzey was. I did not realize that he was actually the reason that I, and a first wave of doctoral students, could
study architectural acoustics in the United States. I entered an academic environment that had a solid and broad foundation in architectural acoustics: scale models, instrumentation, custom acquisition software, room acoustic measurement and subjective survey databases, and faculty relationships throughout the university. This platform was part of Bertram Kinzey’s legacy. I will review how I, and
many others, benefited from it during our studies at the University of Florida and since then in our careers.
2:00
3pAAa3. Subjective listening tests: Perception and preference of simulated sound fields. Michael Ermann, Andrew Hulva (Architecture + Design, Virginia Tech, 201 Cowgill Hall (0205), Blacksburg, VA 24061-0205, mermann@vt.edu), Tanner Upthegrove (Inst.
for Creativity, Arts, and Technol., Virginia Tech, Blacksburg, VA), Randall J. Rehfuss (Architecture + Design, Virginia Tech, Dublin, VA), Walter Haim, Aaron Kanapesky, Trey Mcmillon, Carolyn Park, Alexander Reardon, Jeffrey Rynes, Sam Ye (Architecture +
Design, Virginia Tech, Blacksburg, VA), and Charles Nichols (Music, Virginia Tech, Blacksburg, VA)
Concert hall sound fields were simulated by architecture students and anechoic recordings were convolved to create auralizations in
those simulated performance spaces. Then an architectural feature was altered digitally and a second track was auralized. College music
students were recruited, tested for hearing loss, and brought to a low-reverberance room with a spatial sound array of 28 mounted speakers. They were asked to identify which of the two simulated tracks they prefer. We compared simulated performance spaces: (1) with
four tiers of balconies vs with one tier of balcony; (2) with an over-stage canopy vs without a canopy; (3) with separate balcony boxes
vs with a continuous balcony not fragmented by box walls; and (4) with a higher scattering coefficient vs a lower scattering coefficient.
Those in the audience will be invited to judge preference between the tracks for themselves. The study will be framed by the extraordinary career arc of Bert Kinzey who engaged architecture students in the study of architectural acoustics at both Virginia Tech and at the
University of Florida.
2:20
3pAAa4. Generating the spatial forms of auditoriums based on distributed sentience. Ganapathy Mahalingam (Architecture and
Landscape Architecture, North Dakota State Univ., Dept. 2352, PO Box 6050, Fargo, ND 58108-6050, Ganapathy.Mahalingam@ndsu.edu)
The generation of the spatial form of an auditorium based on acoustical parameters related to Reverberance, Loudness, Clarity, Lateral Energy, and Balance was defined in the early 90s using a concept called “acoustic sculpting.” The central engine in the generation
of the spatial form of the auditorium was the locus of the direct and reflected paths of sound from a source to a receiver. This defined an
elliptical volume with two foci (source and receiver). In the initial implementation of “acoustic sculpting” in design systems for auditoria, this elliptical locus was used to generate the spatial form of the auditorium with just one source-receiver pair. In this paper, the initial concept of “acoustic sculpting” is extended, with loci generated by multiple source-receiver pairs. The elliptical locus is shown to
address the key acoustical parameters. The generation of the spatial form of an auditorium from multiple elliptical loci, which define the
distributed sentience of the audience, is proposed using Boolean operations. The generation of the spatial form of the auditorium proceeds from multiple desires for performance parameters at specific spatial locations, and is a desire-driven design. This will enable
design solutions such as preferred seating and a tunable auditorium.
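The elliptical locus at the core of “acoustic sculpting” follows from a constant total path length: points whose source-plus-receiver distance equals the direct path plus an allowed reflection delay lie on an ellipse with the source and receiver at its foci. A sketch of the resulting semi-axes, with hypothetical function and parameter names:

```python
import numpy as np

def reflection_ellipse(source, receiver, max_delay_ms, c=343.0):
    """Semi-axes of the elliptical locus bounding first-order reflections.

    Points P with |PS| + |PR| equal to the direct path plus the allowed
    delay lie on an ellipse with source S and receiver R at its foci.
    Returns (semi-major a, semi-minor b) of that ellipse.
    """
    source = np.asarray(source, float)
    receiver = np.asarray(receiver, float)
    d = np.linalg.norm(receiver - source)        # direct path length
    total = d + c * max_delay_ms / 1000.0        # total reflected path length
    a = total / 2.0                              # semi-major axis
    b = np.sqrt(a**2 - (d / 2.0)**2)             # semi-minor axis
    return a, b
```

Any surface point on this ellipse returns a reflection within the allowed delay, which is what makes the locus usable as a generator of auditorium form.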
2:40
3pAAa5. Bertram Kinzey, Jr.: Teaching across generations. Keely Siebein (Siebein Assoc., Inc., 625 NW 60th St., Ste. C, Gainesville, FL 32607, ksiebein@siebeinacoustic.com)
Bertram Kinzey, Jr., is an educator, author, acoustician, organ builder, architect, and mentor to many in the world of architecture and
architectural acoustics. This paper looks at several meaningful personal accounts experienced by the author growing up. This paper
examines several of the many people he directly impacted through his professorship, research, and practice, and how they have in turn
impacted others. It also provides examples of a second generation of people who have benefited from those he directly impacted and
how they have gone on to promulgate Bert’s ideas throughout our field and the world. Mr. Kinzey has influenced an ever-expanding web
of multiple generations of architects and architectural acousticians around the world.
3:00
3pAAa6. Other testimonials for the life and work of Professor Bertram Y. Kinzey, Jr. Gary W. Siebein (Siebein Assoc., Inc., 625
NW 60th St., Ste. C, Gainesville, FL 32607, gsiebein@siebeinacoustic.com)
This session invites testimonials, from people in the audience or in the form of letters from those who cannot attend, on the life, work, and influences of Professor Bertram Y. Kinzey, Jr.
TUESDAY AFTERNOON, 27 JUNE 2017
ROOM 207, 1:15 P.M. TO 3:20 P.M.
Session 3pAAb
Architectural Acoustics: Architectural Acoustics and Audio: Even Better Than the Real Thing I
K. Anthony Hoover, Cochair
McKay Conant Hoover, 5655 Lindero Canyon Road, Suite 325, Westlake Village, CA 91362
Alexander U. Case, Cochair
Sound Recording Technology, University of Massachusetts Lowell, 35 Wilder St., Suite 3, Lowell, MA 01854
Wolfgang Ahnert, Cochair
Ahnert Feistel Media Group, Arkonastr. 45-49, Berlin D-13189, Germany
Chair’s Introduction—1:15
Invited Paper
1:20
3pAAb1. Unusual architectural spaces and the challenges they present to musicians and performance artists. Thomas J. Plsek
(Brass/Liberal Arts, Berklee College of Music, MS 1140 Brass, 1140 Boylston St., Boston, MA 02215, tplsek@berklee.edu) and Joanne
G. Rice (Mobius, Quincy, MA)
For almost 15 years, trombonist Tom Plsek and performance artist Joanne Rice have been exploring the notion of sound and visual performance in spaces not conducive to any normal practices. This presentation will focus on the acoustics of two very different spaces, the abandoned quarries in Quincy, Massachusetts, and the 808 Gallery at Boston University, a 16,000 square foot rectangular space with tall ceilings, and how practices were developed to create performances in what most would consider very challenging environments.

Contributed Paper

1:40

3pAAb2. Reverberation time analysis for nonrectangular rooms using the Monte Carlo method. Giora Rosenhouse (Acoust., SWANTECH Ltd., 9 Kidron St., Haifa 3446310, Israel, giora@swantech.co.il)

The reverberation time (RT) of a nonrectangular room depends on its mean free path (MFP). Statistics of consecutive collisions of sound rays with the enveloping room surfaces yield the MFP value. The classical RT formulae for halls are based on physical models and assumptions that include discrepancies. Thus, we use here the Monte Carlo method (MCM) to calculate the MFP, the probabilities of sound rays colliding with their surrounding surfaces, and from these the RT. The method presented here includes statistical analysis, sensitivity to changes of different parameters, validation, and interpretation. It calculates averages of a series of lengths of individual paths for each ray emanating from a sound source, with direction cosines chosen at random from a uniform distribution by means of a pseudo-random generator (PRG). The process recurs for a fixed number of rays, and a computer run yields an average for one block and its MFP. Each run involves an initial PRG seed. The analysis of a fixed number of paths of one ray is the result of a deterministic algorithm, and thus its values are dependent. Therefore, its averages constitute a population that follows the law of large numbers, which does not necessarily follow the central limit theorem.

Invited Papers

2:00

3pAAb3. Rapid impedance tube determination of fabric transparency. Peter D’Antonio (Chesapeake Acoust. Res. Inst. LLC, 15618 Everglade Ln, #106, Bowie, MD 20716, dr.peter.dantonio@gmail.com) and Trevor J. Cox (Acoust. Eng., Univ. of Salford, Salford, United Kingdom)

Acousticians are continually being asked to verify fabric transparency for applications with absorptive and diffusive surfaces, as well as in sound reinforcement. Standard reverberation chamber methods can be used, but require large fabric and fiberglass samples. A quick and simple impedance tube method has been developed requiring only a 160 mm x 160 mm sample. Two measurements are made: one with an anechoic wedge termination and another with a rigid termination in which the fabric is applied to a 50 mm, 6 pcf fiberglass substrate. Four microphones are placed at one quarter and three quarters of the square tube’s width and height, and the signal is summed. This unique microphone placement minimizes the 1st, 2nd, and 3rd harmonics, resulting in a fourfold increase in the upper frequency limit. Impulse response measurements are made at three distances from the sample to calculate the reflection factor, impedance, and normal incidence absorption coefficient from 63 Hz to 4000 Hz in one measurement. The small fabric sample is large enough to minimize non-homogeneous effects. A study of fabrics with a wide range of transparencies reveals how both acoustically transparent and backed fabrics can be used as sonic equalizers depending on the application.
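The Monte Carlo mean-free-path estimate at the heart of 3pAAb2 can be sketched for the simplest case of a box room, where the result can be checked against the classical MFP = 4V/S. The tracing scheme and parameters below are illustrative assumptions, not the author's implementation.

```python
import numpy as np

def monte_carlo_mfp(dims, n_rays=600, travel=300.0, seed=7):
    """Monte Carlo estimate of the mean free path of a box room.

    Rays start at random interior points with isotropic directions and are
    traced through specular wall reflections for a fixed travel distance;
    the MFP estimate is total distance traveled over total wall hits.
    """
    rng = np.random.default_rng(seed)
    dims = np.asarray(dims, float)
    total_len, hits = 0.0, 0
    for _ in range(n_rays):
        p = rng.random(3) * dims                 # random interior start point
        v = rng.normal(size=3)
        v /= np.linalg.norm(v)                   # isotropic direction
        remaining = travel
        while True:
            # distance to the nearest wall along v in each axis
            t = np.where(v > 0, (dims - p) / v, np.where(v < 0, -p / v, np.inf))
            tmin = float(t.min())
            if tmin >= remaining:
                total_len += remaining
                break
            p = np.clip(p + tmin * v, 0.0, dims)
            v[int(t.argmin())] *= -1.0           # specular reflection
            total_len += tmin
            hits += 1
            remaining -= tmin
    return total_len / hits
```

For a 2 m x 3 m x 4 m box, 4V/S = 96/52, about 1.85 m, and the estimate converges toward it as the travel budget grows.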
2:20
3pAAb4. Comparison of methods used in design simulation and in emulation of electrical and acoustic systems for audio. Douglas
Rollow (Res. and Innovation, Sennheiser, 550 15th St, Ste. 37, San Francisco, CA 94103, tad.rollow@sennheiser.com)
Contemporary audio recording and reproduction systems offer exceptional signal quality in terms of bandwidth, resolution, and linearity. Simulations used in the design and analysis of acoustic spaces, electroacoustic transducers, and electronic components are effective in predicting specific metrics of performance, but end-to-end system simulation would be exceedingly difficult even over extended
run time. In media production workflows, emulation is used to create perceptually useful renderings of these same systems, with the
real-time digital signal path providing emulation of acoustic environments coupled with transducers, and of nonlinear euphonic elements. These algorithms serve some of the same function as those in the design tools, and the present work uses the opportunity to compare engineering goals, economic constraints, and performance requirements.
2:40
3pAAb5. The room they want to hear: Subjective preference of reverberation in vocal recordings. Yuri Lysoivanov (Recording
Arts, Tribeca Flashpoint College, 28 N. Clark St. Ste. 500, Chicago, IL 60602, yuri.lysoivanov@tribecaflashpoint.edu)
Audio professionals in the digital era have hundreds of reverberation options at their disposal, from the uber-realistic to the dynamic
and creative. Often the choice of reverb is made with a set of heuristics—based on the engineer’s experience and opinion for what
achieves the best musical and dramatic effect. In this presentation, we turn to the listener and explore their preferences of reverberation
on vocal performances. Through a series of experiments, we evaluate the aesthetic preferences of the casual music consumer and compare them to those of the experienced audio engineer. In addition, we examine factors that may predispose listeners to prefer certain
reverberation characteristics over others.
3:00
3pAAb6. Sound reinforcement for a divisible auditorium. Deb Britton, Kevin Hodsgon (K2 Audio, 5777 Central Ave., Ste. 225,
Boulder, CO 80301, deb@k2audio.com), and Gain Foster (K2 Audio, Woodford, VA)
As part of an audiovisual renovation to a conference center, one particular space, an auditorium that can be divided into four separate
spaces, posed a particular sound reinforcement challenge. This paper describes the architecture of the space, its various uses, and
presents our approach to providing flexible, yet high-quality sound reinforcement.
TUESDAY AFTERNOON, 27 JUNE 2017
ROOM 208, 1:20 P.M. TO 3:20 P.M.
Session 3pAAc
Architectural Acoustics: Robust Heavy and Lightweight Constructions for New-Build and Retrofit
Buildings
Matthew V. Golden, Cochair
Pliteq, 616 4th Street, NE, Washington, DC 20002
Stefan Schoenwald, Cochair
Laboratory for Acoustics/Noise Control, Empa Swiss Federal Laboratories for Materials Science and Technology, Überlandstrasse 129, Dübendorf 8606, Switzerland
Lieven De Geetere, Cochair
Division Acoustics, Belgian Building Research Institute, Lombardstraat 42, Brussel 1000, Belgium
Invited Papers
1:20
3pAAc1. Predicting sound radiation and sound transmission in orthotropic cross-laminated timber panels. Andrea Santoni, Paolo Bonfiglio, Patrizio Fausti (Eng. Dept., Univ. of Ferrara, via Saragat, 1, Ferrara, FE 44122, Italy, andrea.santoni@unife.it), and Stefan Schoenwald (Lab. for Acoust. and Noise Control, Empa Swiss Federal Labs. for Material Sci. and Technol., Dübendorf, Switzerland)
In recent decades, new materials and new technologies that satisfy sustainability and energy-efficiency demands have been developed for the building construction market. Lightweight structures are becoming increasingly popular, but it has been shown that they cannot by themselves provide satisfactory sound insulation. Therefore, a proper acoustic treatment needs to be specifically designed, considering both airborne and structure-borne sound sources. Cross-laminated timber (CLT) elements, for example, have had great success in the last twenty years, both in Europe and North America. CLT plates, due to their peculiar sub-structure, exhibit an orthotropic behavior; they have different stiffness properties along their two principal directions. This paper investigates prediction models for orthotropic plates designed to evaluate sound radiation due to mechanical excitation, and sound transmission due to acoustic excitation. Particular attention is paid to the influence on sound radiation of non-resonant vibration, or near-field vibration, in the case of mechanical excitation. The purpose of these simplified models is to be an efficient tool for acousticians, architects, and engineers, helpful in the design process for new buildings and the retrofitting of existing ones. The validation of numerical results with experimental data is reported. The applicability of the models and their limitations are finally discussed.
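The heading-dependent stiffness of an orthotropic plate such as CLT is often collapsed into a single effective bending stiffness per direction of propagation. A sketch of the resulting free flexural wavenumber, assuming the common form D(theta) = Dx cos^4(theta) + 2H cos^2(theta) sin^2(theta) + Dy sin^4(theta) with the cross stiffness H approximated by sqrt(Dx*Dy); this is a textbook approximation, not the specific model of the paper.

```python
import numpy as np

def flexural_wavenumber(freq, theta, mass_per_area, d_x, d_y, d_xy=None):
    """Heading-dependent free flexural wavenumber of a thin orthotropic plate.

    k(theta) = (omega^2 * m'' / D(theta))**0.25, with D(theta) built from the
    principal bending stiffnesses d_x, d_y and cross stiffness d_xy
    (defaulting to the sqrt(d_x*d_y) approximation).
    """
    omega = 2.0 * np.pi * freq
    h = np.sqrt(d_x * d_y) if d_xy is None else d_xy
    ct, st = np.cos(theta), np.sin(theta)
    d_eff = d_x * ct**4 + 2.0 * h * (ct * st)**2 + d_y * st**4
    return (omega**2 * mass_per_area / d_eff)**0.25
```

In the isotropic limit (d_x = d_y = H) the angular dependence cancels and the familiar thin-plate wavenumber is recovered; for CLT the stiffer direction carries the longer bending wave.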
1:40
3pAAc2. Advanced methods to determine sound power radiated from planar structures. Stefan Schoenwald (Lab. for Acoustics/Noise Control, Empa Swiss Federal Labs. for Mater. Sci. and Technol., Überlandstrasse 129, Dübendorf 8606, Switzerland, stefan.schoenwald@empa.ch), Sven Vallely (Univ. of New South Wales, Sydney, NSW, Australia), and Hans-Martin Tröbs (Lab. for Acoustics/Noise Control, Empa Swiss Federal Labs. for Mater. Sci. and Technol., Dübendorf, Switzerland)
In building acoustics, the most fundamental aim is to determine the sound power radiated by building elements. In this paper, two methods that are more sophisticated than the conventional measurement of sound pressure in a receiving room are presented and discussed. Both methods, the Discrete Calculation Method and the Integral Transform Method, require only the surface velocity measured in a grid on the radiating surface as input data. Thus, the sound power is unambiguously associated with the considered element. The first assumes a series of radiating piston sources on the surface that move with the same velocity and phase relationship as the structure. The second uses spatial Fourier transformations to determine the radiated sound power in the wavenumber domain, analogous to Nearfield Acoustic Holography. The Integral Transform Method additionally obtains the angle-dependent flexural wavenumbers of the structure, which are essential for the analysis of the element dynamics and as input data for the prediction of the sound radiation efficiency and transmission loss of orthotropic building elements, for example, cross-laminated timber elements. Both methods were applied in some exemplary cases; based on their performance and results, conclusions are drawn on the benefits of each method.
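For a baffled planar grid of velocity patches, the piston picture reduces to a radiation-resistance-matrix formula, W = 0.5 Re(v^H R v) with R_ij proportional to sinc(k r_ij). A minimal sketch under that elementary-radiator assumption (the constant rho*c and the uniform patch area are illustrative simplifications):

```python
import numpy as np

def radiated_power(v, xy, patch_area, k, rho_c=415.0):
    """Discrete elementary-radiator estimate of radiated sound power.

    v: complex normal velocities (amplitudes) at patch centers,
    xy: (N, 2) patch-center coordinates, patch_area: area per patch,
    k: acoustic wavenumber. Assumes a rigid baffle (half-space radiation).
    """
    r = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    # Radiation resistance matrix; np.sinc(x) = sin(pi x)/(pi x), so the
    # argument k*r/pi yields sin(k r)/(k r), equal to 1 on the diagonal.
    res = rho_c * k**2 * patch_area**2 / (2.0 * np.pi) * np.sinc(k * r / np.pi)
    return 0.5 * np.real(np.conj(v) @ res @ v)
```

Two sanity checks fall out directly: two closely spaced in-phase patches radiate coherently (about four times a single patch), while two widely separated patches radiate independently (about twice a single patch).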
2:00
3pAAc3. Analysis of impact sound insulation properties of wooden joist and cross laminated timber (CLT) floor constructions.
Anders Homb (SINTEF Bldg. & Infrastructure, Høgskoleringen 7B, Trondheim 7465, Norway, anders.homb@sintef.no)
During the last years, there has been increased interest in developing lightweight constructions with improved sound insulation properties compared to previous, well-known solutions. There has been progress on prediction tools as well as new solutions, for instance, from the WoodWisdom-Net project “Silent Timber Build.” Previous research establishes the fact that including the frequency range below 100 Hz is crucial to get a satisfying correlation between perceived and measured impact sound insulation quantities. Research work at SINTEF Building & Infrastructure concerning lightweight floor constructions therefore includes frequencies down to at least 50 Hz. The paper will present analysis of a number of laboratory measurement results from joist floor constructions and CLT floor constructions. The paper will focus on the most challenging parameter, the impact sound insulation, including the spectrum adaptation term CI,50-2500. Analysis will include parameters such as the mass of the floors, the height of the floors, and of course the structural connections and properties of the resilient layers involved. Improved low-frequency properties of course introduce more mass and/or stiffness of the floor, but the results also show that optimization of the contributing components and material properties is necessary for the development of robust and environmentally friendly solutions.
2:20
3pAAc4. New acoustic solutions for cross laminated timber based buildings. Lieven De Geetere, Bart Ingelaere, and Arne Dijckmans (Div. Acoust., Belgian Bldg. Res. Inst., Lombardstraat 42, Brussel 1000, Belgium, lieven.de.geetere@bbri.be)
Cross-laminated timber is becoming a popular building material for the construction of multifamily dwellings, offices, hotels, etc. The acoustic challenges are significant: in contrast to lightweight timber-frame constructions, it is far more sensitive to flanking transmission, and its relatively light weight and orthotropic character in comparison with traditional heavy masonry require special solutions for the direct airborne and impact sound insulation. The paper presents a new, special, and patented building system that almost annihilates flanking transmission and will detail optimized low-frequency sound insulation solutions for floors.
2:40
3pAAc5. Comparison of sound isolation performance of floor/ceiling assemblies across structural systems. Benjamin Markham
(Acentech Inc., 33 Moulton St., Cambridge, MA 02138, bmarkham@acentech.com)
The recent building boom in greater Boston has given rise to multifamily buildings utilizing a wide range of structural systems, including
cast-in-place concrete, pre-cast concrete, steel, heavy timber, and various wood frame structural systems. Both laboratory and field measurements indicate that typical floor/ceiling assemblies utilized with each structural system have certain characteristic acoustical attributes.
Although these assemblies perform differently, virtually all of these structural systems have been utilized in buildings advertising “luxury” residential living. This presentation compares recently obtained sound isolation data among the various structural types. Specific acoustical design
challenges and constraints (such as budget, available floor-to-floor height, and others) are identified, and guidelines for achieving sound isolation commensurate with a “luxury” standard are outlined for each of the structural systems examined.
3:00
3pAAc6. Achieving high sound isolation between rooms with stone wool ceilings and plenum barriers when the ceiling grid runs
continuously over partial height demising walls. Gary Madaras (Acoust., ROCKFON, 4849 S. Austin Ave., Chicago, IL 60638, DoctorSonics@aol.com) and Andrew Heuer (Acoust., NGC Testing Services, Buffalo, NY)
The Optimized Acoustics Research Program is a multi-year investigation being conducted by ROCKFON, ROXUL, and NGC Testing Services into efficient and economical means of constructing interior architecture that complies with the higher sound absorption
and higher sound isolation criteria in standards, guidelines, and building rating systems. Prior program updates have included the negative effects of noise flanking paths through ceiling light fixtures and air distribution devices (Inter-Noise 2015) and optimizing the combination of absorptive, stone wool, ceilings and lightweight plenum barriers to achieve sound transmission class (STC) ratings of 40, 45,
and 50 without full height demising walls (Noise-Con 2016). This update addresses a worst-case scenario, when the suspended ceiling
grid runs continuously over partial height demising walls. Building the walls to the underside of the ceiling grid avoids the need to
reconstruct the ceiling if the walls are relocated in the future. However, this approach has historically not complied with the standards
and results in poor acoustic performance. The current research shows how to install standard 16 mm (5/8 in.) thick stone wool modular acoustic ceilings and 38 mm (1.5 in.) thick stone wool plenum barriers to achieve ratings up to STC 52.
3714
J. Acoust. Soc. Am., Vol. 141, No. 5, Pt. 2, May 2017
Acoustics ’17 Boston
3714
TUESDAY AFTERNOON, 27 JUNE 2017
ROOM 313, 1:20 P.M. TO 3:00 P.M.
Session 3pAB
Animal Bioacoustics: Comparative Bioacoustics: Session in Honor of Robert Dooling II
Micheal L. Dent, Cochair
Psychology, University at Buffalo, SUNY, B76 Park Hall, Buffalo, NY 14260
Amanda Lauer, Cochair
Otolaryngology-HNS, Johns Hopkins University School of Medicine, 515 Traylor, 720 Rutland Ave., Baltimore, MD 21205
Contributed Papers
1:20
3pAB1. Environmental vocalization adaptation: Animals compensating
for restricted visibility and mobility. David Browning (Browning Biotech,
139 Old North Rd., Kingston, RI 02881, decibeldb@aol.com) and Peter
Herstein (Browning Biotech, Westerly, RI)
To survive, animals must remain constantly aware of their surroundings, and they adopt different communication strategies as environmental conditions change. This paper describes some of these vocalization adaptations.
Horses in open pasture usually communicate visually but when placed in
separate barn stalls, with restricted visibility and mobility, must rely more
on their vocalizations, referred to as barntalk. In addition, they rapidly learn to recognize sounds from unseen objects of interest, such as a feed cart or their owner. In some cases animals adopt an entirely new vocalization: Asiatic wild dogs (dholes) have developed a short whistle to keep contact within their hunting pack when visibility is reduced by underbrush, while not interfering
significantly with listening for their prey. Nowhere is there more vocalization adaptation than in a jungle, with limited visibility, difficult mobility,
and the added complication of many other competing sounds. As a result
Sumatran rhinos have developed a far more complex vocalization than their African relatives on the open plains. An interesting variety of jungle vocalization strategies has developed, from the tapir's sweeping whistles to the
toned howls of New Guinea singing dogs.
1:40
3pAB2. Laboratory mice can behaviorally discriminate between natural
and synthetic ultrasonic vocalizations. Anastasiya Kobrina, Laurel A.
Screven (Psych., SUNY Univ. at Buffalo, B23 Park Hall, Amherst, NY
14261, akobrina@buffalo.edu), Elena J. Mahrt (Biology, Washington State
Univ., Pullman, WA), Micheal L. Dent (Psych., SUNY Univ. at Buffalo,
Buffalo, NY), and Christine Portfors (Biology, Washington State Univ.,
Vancouver, WA)
Mice produce spectrotemporally complex ultrasonic vocalizations
(USVs), thought to be important for social interactions such as mating. Previous research has established that mice are capable of detecting and discriminating natural, synthetic, and altered USVs using behavioral
methodologies. The current study examined whether mice are capable of
discriminating natural USVs from their synthetic USV analogs. Discrimination performance was tested in five adult mice using operant conditioning
procedures with positive reinforcement. Mice were trained to nose poke to
one hole during a repeating natural or synthetic USV, which would begin a
trial. Subjects were required to poke to a second hole when they discriminated a change in the repeating background after a variable interval. The target stimuli were natural and synthetic versions of the same USVs, as well as
other USVs. Mice can discriminate between some natural USVs and their
synthetic renditions but not others. Discrimination performance across all
stimuli was correlated with spectrotemporal similarity. Mice utilized duration, bandwidth, and peak frequency differently for natural and synthetic
USV discrimination. These results contribute to our understanding of the
ways USVs may be used for acoustic communication in mice. [This work
was supported by NIDCD grant R01-DC012302 to MLD.]
2:00
3pAB3. Relative salience of syllable order versus syllable fine structure
in Zebra Finch song. Shelby Lawson (Psych., Univ. of Maryland, 112 Beverley Ave., Edgewater, MD 21037, sllawson@smcm.edu), Adam Fishbein
(NACS, Univ. of Maryland, College Park, MD), Nora H. Prior (Biology,
Univ. of Maryland, College Park, MD), Bernard Lohr (Univ. of Maryland
Baltimore County, Baltimore, MD), Gregory F. Ball, and Robert Dooling
(Psych., Univ. of Maryland, College Park, MD)
Zebra finches have become a popular model for the investigation of the
motor and perceptual mechanisms underlying vocal learning. These birds
are closed-ended learners that have a brief sensitive period for song learning, after which a new song cannot be learned. This song is a single, highly
stereotyped, invariant sequence of 3-8 harmonic syllables, termed a motif,
which is repeated several times throughout the song bout. Here, using an
operant conditioning discrimination task, we confirm earlier results that
these birds find syllable reversals (i.e., changes in temporal fine structure)
highly salient. By contrast, finches find syllable ordering in these natural
motifs much less salient. The ability to extract and learn sound patterns is a
common feature of animal bioacoustic systems including, of course, human
speech and language learning. We know from both single unit recordings in
the auditory forebrain as well as operant studies, that these birds encode the
syllable ordering in song-like stimuli and can learn to discriminate differences in syllable ordering. The large difference in salience between these two
sets of acoustic features in these natural motifs—syllable structure versus
syllable ordering—raises significant questions about how these natural
sounds are processed in the CNS.
2:20
3pAB4. Social experience influences ultrasonic vocalization perception
in laboratory mice. Laurel A. Screven and Micheal L. Dent (Psych., Univ.
at Buffalo, B29 Park Hall, Buffalo, NY 14260, laurelsc@buffalo.edu)
Mice emit ultrasonic vocalizations (USVs) which vary in spectrotemporal parameters (e.g., frequency, amplitude, and duration) in a multitude of
social situations. USVs are often assumed to possess speech-like characteristics, although it has not yet been established that mice are using USVs for
communication purposes. Previous studies have shown changes in auditory
cortex activity in maternal females to pup calls, but it is currently unknown
how previous social experience with other mice throughout development
affects perception of adult vocalizations. To test the effect of socialization,
we used an operant conditioning task to determine if discrimination of
USVs was negatively impacted by chronic social isolation compared to
mice that were group housed throughout their lifespan. Mice discriminated
between twelve USVs of three different categories. Mice that had been
socially isolated since weaning showed deficits in discrimination of some
USVs. Additionally, socially isolated mice required more training and testing, and more trials to complete the task than socially experienced mice.
These results indicate that experiencing USVs during social interactions
affects how the mice perceive their vocalizations, suggesting that these
vocalizations could have context-specific meaning that is learned through
hearing USVs within the appropriate social context.
2:40
3pAB5. How canaries listen to their song. Adam Fishbein (Neurosci. &
Cognit. Sci., Univ. of Maryland, 4123 Biology-Psych. Bldg., College Park,
MD 20742, afishbei@umd.edu), Shelby Lawson (Psych., Univ. of Maryland, Edgewater, MD), Gregory F. Ball, and Robert Dooling (Psych., Univ.
of Maryland, College Park, MD)
Canaries have become important models for the study of vocal learning.
The male produces song bouts up to a minute long, consisting of various syllables, each repeated in flexibly sequenced phrases. Little is known about how the birds listen to song, though behavioral observations clearly show that female canaries are more sexually responsive to a special song element, the so-called "sexy" syllables, which are distinguished by a high syllable repetition rate, wide bandwidth, and multiple notes. Here,
operant conditioning and psychophysical techniques were used to determine
the discriminability of variation in syllable and phrase morphology. Results
show that canaries can discriminate the subtle differences among syllables
and phrases using spectral, envelope, and temporal fine structure cues but
they are no better than budgerigars used as controls. There was also evidence that perception of sexy syllables is distinctive for canaries. On the
whole, while canaries can hear the fine details of the acoustic structure of
their song, the evidence suggests that they listen at a more synthetic rather
than analytic level. These results give clues to how the canary perceptual
system may be shaped to process male song and make judgments about
mate quality.
TUESDAY AFTERNOON, 27 JUNE 2017
ROOM 310, 1:20 P.M. TO 3:40 P.M.
Session 3pAO
Acoustical Oceanography and Underwater Acoustics: Acoustic Measurements of Sediment Transport and
Near-Bottom Structures II
James Lynch, Cochair
Woods Hole Oceanographic Institution, MS #11, Bigelow 203, Woods Hole, MA 02543
Peter D. Thorne, Cochair
Marine Physics and Ocean Climate, National Oceanography Centre, Joseph Proudman
Building, 6 Brownlow Street, Liverpool L3 5DA, United Kingdom
Contributed Papers
1:20
3pAO1. Combining echo and hydrodynamic measurements for estimating nonuniform sand transport under waves and currents. Rodrigo Mosquera and Francisco Pedocchi (IMFIA, Universidad de la República, Facultad de Ingeniería, Julio Herrera y Reissig 565, Montevideo 11300, Uruguay, rmosquer@fing.edu.uy)
We present recently obtained measurements from the Atlantic coast of Uruguay, acquired with a 1 MHz ADCP (Sentinel V20, Teledyne RDI, USA). The ADCP was deployed at 16 m depth for 4 months on a lightweight structure lowered from a small boat. During the deployment, waves of 3.6 m significant height and 12 s peak period, and currents of 0.8 m/s, were recorded. The vertical distribution of suspended sediment in this environment is controlled by the turbulence level induced by both currents and waves. Assuming a representative sediment size and adapting the Rouse-Vanoni profile gave results that did not agree with the concentration profiles inverted from
the recorded echo profiles. Therefore, a method that combined variations of
the sediment size distribution with the turbulence level was developed
showing very good agreement with the information from the echo profile. In
the developed method, the turbulent sediment diffusivity was computed
from the mean current and wave measurements. However, this showed limitations for conditions with strong waves and weak currents. As an
alternative, direct measurements of the turbulent fluctuations from the instantaneous beam velocity recordings were used to compute the turbulent
diffusivity. In addition, different turbulent diffusivity distributions for combined current-wave conditions were evaluated.
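The Rouse-Vanoni suspended-sediment profile adapted in the abstract above can be sketched numerically as follows. This is a minimal illustration, not the authors' code; the water depth matches the deployment, but the reference height, reference concentration, and Rouse number are hypothetical values chosen for demonstration:

```python
import numpy as np

def rouse_profile(z, h, z_a, c_a, p):
    """Rouse-Vanoni profile: c(z) = c_a * [((h - z)/z) * (z_a/(h - z_a))]**p."""
    return c_a * (((h - z) / z) * (z_a / (h - z_a))) ** p

# Water depth from the deployment; the remaining values are assumed.
h = 16.0       # water depth (m)
z_a = 0.1      # reference height above the bed (m), hypothetical
c_a = 1.0      # reference concentration (g/L), hypothetical
p = 1.2        # Rouse number w_s / (kappa * u_*), hypothetical
z = np.linspace(z_a, 0.99 * h, 100)   # heights above the bed
c = rouse_profile(z, h, z_a, c_a, p)
# Concentration equals the reference value at z_a and decays monotonically upward.
```

Fitting a single representative grain size fixes the Rouse number `p`; the abstract's point is that letting the size distribution (and hence `p`) vary with turbulence level was needed to match the echo-inverted profiles.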
1:40
3pAO2. Sound speed and attenuation in water-saturated granular materials at MHz frequencies. Jenna Hare and Alex E. Hay (Oceanogr., Dalhousie Univ., 1355 Oxford St., Halifax, NS B3H4R2, Canada, jenna.hare@
dal.ca)
Sound speed and attenuation measurements are reported for water-saturated granular materials (natural sand and glass beads) at frequencies of 1.0
to 1.8 MHz. Median grain diameters were 0.219 to 0.497 mm, corresponding to kd>1, i.e., the scattering regime. The measurements were made for
different thicknesses of sediment resting on a reflective surface using a
monostatic geometry. The attenuation estimates compare well with previously reported experimental results and to the predictions of multiple scattering theory. The sound speed estimates exhibit the negative dispersion
predicted by theory, but compared to previous measurements are biased low
by 100 m/s to 200 m/s. It is argued that this bias is due to microbubbles in
concentrations of O(10) ppm by volume.
2:00
3pAO3. Ambient noise from turbidity currents. Matthew Hatcher, Alex
E. Hay (Oceanogr. Dept., Dalhousie Univ., 1355 Oxford St., Halifax, NS
B3H 4R2, Canada, mhatcher@dal.ca), and John E. Hughes Clarke (Ctr. for
Coastal & Ocean Mapping, Univ. of New Hampshire, Durham, NH)
Sediment mass transport from the Squamish River delta into the adjacent fjord (Howe Sound, British Columbia) is dominated by discrete turbidity current events which have incised semi-permanent channels on the delta
front and out onto the prodelta. Acoustic data were collected in the spring of
2013, including both active and passive systems. Data from the active sonars
are used to determine flow speed, flow thickness and suspended sediment
concentration. The noise generated by these discrete turbidity currents is
broadband (10 to 200 kHz) and, based on the sediment grain size and flow
speeds, is shown to be due to sediment-generated noise, most likely at the
base of the flows.
2:20
3pAO4. Suspended sediment flux statistics in unidirectional flow, using
a pulse-coherent acoustic Doppler profiler. Greg Wilson (Oregon State
Univ., 104 CEOAS Administration Bldg., Corvallis, OR 97331-5503, wilsongr@coas.oregonstate.edu) and Alex E. Hay (Oceanogr., Dalhousie
Univ., Halifax, NS, Canada)
An experiment was conducted in the St. Anthony Falls Lab main channel flume, involving sand sediment (d50 = 0.4 mm) in unidirectional flow.
The flow conditions were 1 m/s flow rate and 1 m depth, and the bed state
consisted of quasi-linear sand dunes with ~1 m wavelength and 10-20 cm
height. A pulse-coherent acoustic Doppler instrument (MFDop) measured
high-resolution near-bed vertical profiles of velocity and backscatter amplitude at various positions spanning the dune profile. These measurements are
used to obtain probability distributions of instantaneous suspended sediment
concentration and flux, which are compared to predictions from two stochastic theories. While the stochastic theories were previously developed
and validated for low transport rates over a flat bed, the acoustic measurements enabled measurements at a much higher transport rate; despite this,
the new measurements remain in remarkably good agreement with the
theories.
2:40
3pAO5. Transient reflection from mud during molecular diffusion of
salt. Gabriel R. Venegas and Preston S. Wilson (Mech. Eng. Dept. and
Appl. Res. Labs., Univ. Texas at Austin, 204 E Dean Keeton St., Austin, TX
78712-1591, gvenegas@utexas.edu)
Harbor basins and estuarine environments experience drastic salinity
fluctuations in the water near the water-sediment interface, which can significantly affect how sound interacts with a low-velocity mud bottom. This
presents challenges in applications including mine detection, port protection
and shallow water sonar. In a previous investigation of this system, a mud
sample that was saturated with fresh water was instantaneously exposed to
salt water. Laboratory measurements of plane wave reflection from the
water-mud interface were obtained using a time-gated broadband impulse as
time evolved. Results suggested molecular diffusion of salt into the sample
had altered the reflected pulse’s amplitude and caused a depth-dependent
impedance profile in the mud. The sediment was discretized into layers
much thinner than a wavelength and a multi-layer steady-state model was
used to predict the reflection from the diffusing mud. As the effective diffusion length reached a critical value, predictions of the steady-state model
began to deviate from the measurements, indicating that a transient solution
was required. A model of the transient reflection of a finite-length pulse
from a half space with an arbitrary impedance will be presented and model
results compared with laboratory measurements. [Work supported by
ONR.]
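The multi-layer steady-state calculation described above, discretizing the sediment into layers much thinner than a wavelength, can be sketched with the standard impedance-translation recursion. This is an illustrative normal-incidence sketch with hypothetical material values and a hypothetical diffusion-like sound-speed profile, not the authors' model:

```python
import numpy as np

def input_impedance(z_layers, k_layers, d, z_bottom):
    """Translate the terminating impedance up through a stack of
    equal-thickness layers (normal incidence), bottom to top."""
    Z = z_bottom
    for zc, k in zip(z_layers[::-1], k_layers[::-1]):
        t = np.tan(k * d)
        Z = zc * (Z + 1j * zc * t) / (zc + 1j * Z * t)
    return Z

def reflection(z_water, z_layers, k_layers, d, z_bottom):
    Z_in = input_impedance(z_layers, k_layers, d, z_bottom)
    return (Z_in - z_water) / (Z_in + z_water)

f = 100e3                                   # probe frequency (Hz), assumed
z_water = 1000.0 * 1500.0                   # rho*c of the overlying water
depth = np.linspace(0.0, 0.05, 51)          # 5 cm of mud in 1 mm layers
c_mud = 1460.0 + 40.0 * np.exp(-depth[:-1] / 0.01)  # speed raised near interface
rho = 1300.0                                # mud density (kg/m^3), assumed
R = reflection(z_water, rho * c_mud, 2 * np.pi * f / c_mud,
               depth[1] - depth[0], rho * 1460.0)

# Sanity check: a uniform stack matching the half space reduces to the
# simple two-medium reflection coefficient.
c0 = np.full(50, 1460.0)
R0 = reflection(z_water, rho * c0, 2 * np.pi * f / c0,
                depth[1] - depth[0], rho * 1460.0)
```

As the abstract notes, a steady-state stack like this breaks down once the diffusion length evolves on time scales comparable to the pulse, which is what motivates the transient model presented in the talk.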
3:00
3pAO6. Influence of sand/silt particle size distributions on compressional wave attenuation in marine mud sediments. Elisabeth M. Brown
(Mathematical Sci., Rensselaer Polytechnic Inst., Troy, NY 12180,
browne6@rpi.edu), Allan D. Pierce (Boston Univ., East Sandwich, MA),
and William L. Siegmann (Mathematical Sci., Rensselaer Polytechnic Inst.,
Troy, NY)
High porosity marine mud from different sites typically contains different amounts of clay, sand, and silt particles, along with other material. A
recent talk [Pierce et al., ASA Honolulu, 5aAO1 (2016)] explored a mechanism for why sand and silt particles in suspension can provide the dominant
contributions to the frequency dependence of compressional wave attenuation. The card-house structure of the clay is critical in supporting the particles and keeping them separated. Example calculations for spherical
particles of the same size showed physically reasonable attenuation behavior
at low and high frequencies. This presentation considers extensions of the
approach, particularly accounting for distributions of particle sizes, and
emphasizes comparisons of attenuation predictions with available field data.
Using reasonable assumptions about the clay volume and the sand and silt
distributions, it is possible to estimate the numbers of sand and silt particles,
and consequently the attenuation, from the sediment porosity. Such results
can be compared with measurements of attenuation frequency dependence
for measured porosities, in order to validate or refine the model for the
mechanism. Of specific interest is determination of the frequency bands
over which the attenuation increases nearly linearly with frequency, as is often estimated or assumed. [Work supported by ONR.]
3:20
3pAO7. Role of clay particle electrostatics and the dielectric permittivity of suspended silt particles in sound attenuation in mud. Allan D.
Pierce (Retired, PO Box 339, 399 Quaker Meeting House Rd., East Sandwich, MA 02537, allanpierce@verizon.net), William L. Siegmann, and Elisabeth M. Brown (Mathematical Sci., Rensselaer Polytechnic Inst., Troy,
NY)
Sound attenuation in marine mud sediments is partly caused by viscous
dissipation of acoustically induced flow past suspended silt particles. Clay
particles in the surrounding lattice carry electrostatic charges, causing high porosity, so one asks why silt particles do not settle under gravity to the bottom of the mud layer. Explanation of the suspension and the associated
attenuation of sound proceeds from consideration of a quartz sphere
immersed in mud. The somewhat-random electric field created by the clay
particles causes an electric dipole moment to arise in the sphere because of
its dielectric permittivity. This is proportional to the electric field and varies
with position, and the result is an electrostatic force on the sphere, the force
being proportional to the gradient of the electric field. In equilibrium, this
force is balanced by a gravity force. There is a natural spring constant associated with deviations from equilibrium, and the resulting dynamical model
is a fixed-mass sphere subjected to a spring force, the force of gravity, and
the viscosity-associated force caused by the motion of the surrounding fluid
and the no-slip condition at the sphere's surface. The paper quantitatively discusses the model's implications for the suspension theory of sound attenuation. Results suggest that the model is approximately valid at representative acoustic frequencies when the sphere has the same density as water. [Work supported by ONR.]
TUESDAY AFTERNOON, 27 JUNE 2017
ROOM 304, 1:20 P.M. TO 3:20 P.M.
Session 3pBAa
Biomedical Acoustics: Advances in Shock Wave Lithotripsy II
Robin Cleveland, Cochair
Engineering Science, University of Oxford, Inst. Biomedical Engineering, Old Road Campus Research Building,
Oxford OX3 7DQ, United Kingdom
Adam D. Maxwell, Cochair
University of Washington, Seattle, WA 98105
Julianna C. Simon, Cochair
Graduate Program in Acoustics, Pennsylvania State University, Penn State, 201E Applied Sciences Building,
University Park, PA 16802
Contributed Papers
1:20
3pBAa1. Assessing the effect of lithotripter focal width on the fracture potential of stones in shockwave lithotripsy. Shunxiang Cao (Dept. of Aerosp. and Ocean Eng., Virginia Tech, Rm. 332, Randolph Hall, 460 Old Turner St., Blacksburg, VA 24060, csxtovt@vt.edu), Ying Zhang, Defei Liao, Pei Zhong (Dept. of Mech. Eng. and Mater. Sci., Duke Univ., Durham, NC), and Kevin G. Wang (Dept. of Aerosp. and Ocean Eng., Virginia Tech, Blacksburg, VA)
This talk presents a combined computational and experimental study of the effect of lithotripter focal width on the fracture potential of stones treated at various distances from the lithotripter focus. Two representative lithotripter fields are considered: (1) the original Siemens Modularis with a focal width of 7.4 mm and (2) a modified version with a larger focal width of 11.0 mm and comparable acoustic pulse energy of 40 mJ. The interaction of these two lithotripter fields with spherical and cylindrical model stones located at 0 to 12 mm from the shock wave axis is investigated. Specifically, a three-dimensional coupled CFD (Computational Fluid Dynamics)-CSD (Computational Solid Dynamics) framework is used to simulate the propagation of stress waves, as well as the initiation and propagation of fractures. The two-scale Tuler-Butcher fracture criterion will be employed: it will be calibrated experimentally, then applied to assessing stress-induced stone damage. An element deletion method will be applied to simulate fracture. Characteristic changes in wave focusing and interference in relation to the buildup of the maximum tensile stress inside the stone will be presented. The physical mechanism(s) responsible for the different fracture patterns observed at different off-axis distances will be discussed.
1:40
3pBAa2. A multi-element HIFU array system: Characterization and testing for manipulation of kidney stones. Mohamed A. Ghanem (Aeronautics and Astronautics Eng., Univ. of Washington, 123529 4th Ave. NE, Seattle, WA 98125, mghanem@uw.edu), Michael Bailey (Ctr. for Industrial and Medical Ultrasound, Appl. Phys. Lab., Univ. of Washington, Seattle, WA), Adam D. Maxwell (Dept. of Urology, Univ. of Washington, Seattle, WA), Bryan Cunitz, Wayne Kreider, Christopher Hunter (Ctr. for Industrial and Medical Ultrasound, Appl. Phys. Lab., Univ. of Washington, Seattle, WA), Vera Khokhlova (Ctr. for Industrial and Medical Ultrasound, Appl. Phys. Lab., University of Washington, Seattle, WA and Phys. Faculty, Moscow State Univ., Moscow, Russian Federation), and Oleg A. Sapozhnikov (Ctr. for Industrial and Medical Ultrasound, Appl. Phys. Lab., University of Washington, Seattle, WA and Phys. Faculty, Moscow State Univ., Moscow, Russian Federation)
Most progress in acoustic trapping and levitation has been achieved with the use of multiple sound sources, standing waves, and low-density or very small objects. Repositioning kidney stones in the human body is an important clinical application where acoustic levitation could be employed; however, it requires manipulation of larger and heavier objects. The goal of this work was to calibrate the acoustic output of a 1.5-MHz, 256-element array designed in our laboratory for HIFU research, which is also capable of generating vortex beams to manipulate mm-sized objects in water and to move them in any direction without moving the source. Electrical circuits were developed for matching each element of the array to an output channel of a Verasonics ultrasound engine to allow for efficient power transfer to the transducer. Acoustic holography was used to calibrate and equalize outputs across all channels. Manipulation of artificial kidney stone targets made of plastic, glass, or cement (2 to 8 mm), comparable in size to or larger than the wavelength in water, by electronic steering of the vortex beam lateral to the acoustic axis was demonstrated. [Work supported by NIH P01 DK043881, K01 DK104854, R01 EB007643, and NSBRI through NASA NCC 9-58.]
2:00
3pBAa3. Finite element optimization of circular element used in lithotripsy and histotripsy piezoelectric transducers. Gilles Thomas, Jean-Yves Chapelon, and Cyril Lafon (U1032, INSERM, 151 cours Albert Thomas, Lyon 69424, France, gilles.thomas@inserm.fr)
Multi-element piezoelectric transducers used in shock wave generators for disrupting kidney stones (lithotripsy) or ablating soft tissues (histotripsy) emit high-amplitude burst waves of one to ten pulses. Thus, most of the signal is sent while the piezoelectric transducers vibrate in a transient state. This work aims at optimizing the design of these elements. A transient finite element model of a piezoelectric circular element is presented. It includes the piezoelectric disk, the epoxy and plastic casing, the surrounding water, and the RLC discharge circuit. The model was then used for parametric optimization of the electrical components and the front and back layers. It has been validated by comparing the numerical and experimental results for one 400 kHz, 37.3 mm diameter LT02 element (EDAP-TMS). The surface pressure field as measured by a fiberoptic hydrophone was in good agreement with the simulation. Elements with parameters resulting from optimization with different objective functions, which depend on the desired application, were built and validated against experimental results. The model can be effectively used for the rapid and optimal design of piezoelectric circular elements working in a transient state. [Work supported by an industrial grant from EDAP-TMS.]
2:20
3pBAa4. Investigation of stone damage patterns and mechanisms in nano pulse lithotripsy. Chen Yang, Ying Zhang, and Pei Zhong (Duke Univ., PO Box 90300, 144 Hudson Hall, Durham, NC 27708, cy71@duke.edu)
The Nano Pulse Lithotripter (NPL) utilizes spark discharges of ~30-ns duration, released at the tip of a flexible probe under endoscopic guidance, to break up kidney stones. Different damage patterns have been observed using BegoStone samples, including crater formation underneath the probe tip, crack development from the distal wall, and radial and ring-shaped crack initiation in the proximal surface of the stone. Multiple mechanisms have been proposed for stone disintegration in NPL: dielectric breakdown near the probe tip, shockwave induced by the spark discharge, and asymmetric collapse of bubbles. Experiments have been performed to correlate the proposed mechanisms with the damage patterns observed. Comparison between micro-CT images of the damage initiation sites and COMSOL simulation of the stress field in the stone indicates that the observed cracks are most likely produced by the locally intensified tensile stresses of surface acoustic waves (SAW), generated by the incidence of the spark-generated, spherically divergent shockwave on the proximal surface of the stone, and by their interactions with bulk acoustic waves (P or S) upon reflection at stone boundaries. Dielectric breakdown may contribute to crater formation. However, the contribution of cavitation to stone fragmentation in NPL appears to be minimal.
2:40
3pBAa5. The acoustic field of a spherical self-focusing Eisenmenger electromagnetic shockwave source (EMSE). Abtin Jamshidi Rad and Friedrich Ueberle (Life Sci., HAW Hamburg, Ulmenliet 20, Hamburg 21033, Germany, Abtin.Rad@HAW-Hamburg.de)
Shockwave sources are used in extracorporeal lithotripsy, pain therapy, and a wide range of other medical applications. Typical lithotripter source pulses mostly achieve positive pressure amplitudes of ca. 35 to 100 MPa and tensile pressure amplitudes up to -20 MPa. As early as 1962, Eisenmenger designed a plane electromagnetic source, which was further developed into a spherically shaped, self-focusing electromagnetic source (ca. 1991). We have one of his prototype sources (membrane diameter 120 mm, focal distance 200 mm), which was measured according to the IEC 61846 standard. Focus and field measurements were made using a single-spot fiberoptic hydrophone and compared to a multi-spot optical hydrophone. Very low variations of the acoustic output were found (for both the peak positive pressure and the energy). Notably, in contrast to many other shockwave sources, the spherical EMSE provides steep shockwaves (10 to 20 ns rise time) in the focus at comparably low pressures (33 to 36 MPa), even at lower energy settings. Peak negative pressures were in the range below -10 MPa. Focus and field measurements show the interesting properties of the spherical self-focusing EMSE, also in comparison to more strongly focusing setups.
3:00
3pBAa6. Nonlinear elasticity imaging with dual frequency ultrasound.
Johannes Kvam (Dept. of Circulation and Medical Imaging, Norwegian
Univ. of Sci. and Technol., Maridalsveien 221, Oslo 0467, Norway,
johannes.kvam@gmail.com), Stian Solberg, Ole Martin Brende, Ola Finneng Myhre, Alfonso Rodriguez-Molares (Dept. of Circulation and Medical
Imaging, Norwegian Univ. of Sci. and Technol., Trondheim, Norway),
Jørgen Kongsro (Norsvin SA, Oslo, Norway), and Bjørn A.J. Angelsen
(Dept. of Circulation and Medical Imaging, Norwegian Univ. of Sci. and
Technol., Trondheim, Norway)
Elasticity imaging has been shown to enhance diagnostic capability, as tissue elasticity is often associated with pathological conditions. Methods such as Acoustic Radiation Force Impulse imaging and Supersonic Shear Wave imaging have shown good results at shallow depths. Second-order ultrasound field (SURF) imaging is a dual-band imaging technique
that utilizes a low frequency (LF) pulse to manipulate the nonlinear elasticity of the medium observed by a co-propagating high frequency pulse (HF).
The manipulation of the material parameters causes the HF to experience a
change in propagation velocity which depends on the manipulation pressure
and nonlinear elasticity of the medium. The change in propagation velocity
causes an accumulative delay or advancement compared to a single HF
pulse, called the nonlinear propagation delay (NPD). By transmitting multiple pulse complexes with different LF polarities the technique has proven
capabilities for suppression of multiple scattering noise. By observing local
variations in the development of NPD, it is possible to estimate the nonlinear elasticity variation of the medium. Simulations with the k-Wave toolbox have shown the method's feasibility, including at greater depths. In vitro and in vivo experiments have also yielded promising results.
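The nonlinear propagation delay (NPD) central to the SURF method above can be estimated by cross-correlating a manipulated high-frequency pulse against a reference pulse. The sketch below uses hypothetical pulse parameters and is not the authors' processing chain; it recovers an imposed 20 ns delay with sub-sample precision via parabolic peak interpolation:

```python
import numpy as np

def nonlinear_propagation_delay(ref_pulse, surf_pulse, fs):
    """Estimate the delay of surf_pulse relative to ref_pulse by
    cross-correlation with parabolic peak interpolation (sub-sample lag)."""
    xc = np.correlate(surf_pulse, ref_pulse, mode="full")
    i = int(np.argmax(xc))
    y0, y1, y2 = xc[i - 1], xc[i], xc[i + 1]
    frac = 0.5 * (y0 - y2) / (y0 - 2.0 * y1 + y2)  # parabolic vertex offset
    return (i - (len(ref_pulse) - 1) + frac) / fs

# Hypothetical 10 MHz HF pulse sampled at 250 MHz, delayed by 20 ns.
fs = 250e6
t = np.arange(512) / fs
def hf_pulse(delay):
    return np.exp(-(((t - 1e-6 - delay) / 1e-7) ** 2)) * \
           np.sin(2 * np.pi * 10e6 * (t - delay))

npd = nonlinear_propagation_delay(hf_pulse(0.0), hf_pulse(20e-9), fs)
# npd recovers the imposed 20 ns delay.
```

In the SURF scheme, local differences in this delay between pulse complexes of opposite LF polarity carry the nonlinear elasticity information.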
TUESDAY AFTERNOON, 27 JUNE 2017
ROOM 312, 1:40 P.M. TO 3:20 P.M.
Session 3pBAb
Biomedical Acoustics: Partial Differential Equation Constrained and Heuristic Inverse Methods in
Elastography II
Mahdi Bayat, Cochair
Biomedical and Physiology, Mayo Clinic, 200 1st St. SW, Rochester, MN 55905
Wilkins Aquino, Cochair
Civil and Environmental Engineering, Duke University, Hudson Hall, Durham, NC 27708
Contributed Papers
1:40
3pBAb1. Shear wave phase velocity dispersion estimation in viscoelastic
materials using the Multiple Signal Classification method. Matthew W.
Urban (Dept. of Radiology, Mayo Clinic College of Medicine and Sci., 200
First St. SW, Rochester, MN 55905, urban.matthew@mayo.edu), Piotr
Kijanka (Dept. of Robotics and Mechatronics, AGH - Univ. of Sci. and
Technol., Krakow, Poland), Bo Qiang (The Nielsen Co., Oldsmar, FL), Pengfei Song (Dept. of Radiology, Mayo Clinic College of Medicine and Sci.,
Rochester, MN), Carolina Amador (Dept. of Physiol. and Biomedical Eng.,
Mayo Clinic College of Medicine and Sci., Rochester, MN), and Shigao
Chen (Dept. of Radiology, Mayo Clinic College of Medicine and Sci.,
Rochester, MN)
expansion of D, we find the following general expression for the shear wave speed: c^2/c_0^2 = [W_1(D) + W_2(D) D^(2/3)] / [D^(2/3) (W_1(1) + W_2(1))]. Here, c_0 is the shear wave speed in the unpressurized material, W_j = ∂W/∂I_j, and I_j is an invariant of the Cauchy-Green strain tensor. We also find important restrictions on the form of the strain energy function, which are typically not satisfied by strain energy functions commonly assumed for soft tissues.
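As a worked special case (our illustration, not from the abstract): for a neo-Hookean material, W = (mu/2)(I_1 - 3), so W_1 = mu/2 and W_2 = 0, and the speed ratio collapses to D^(-2/3):

```python
# Worked special case (our illustration): evaluate the shear-wave-speed ratio
# c^2/c0^2 = (W1(D) + W2(D)*D**(2/3)) / (D**(2/3) * (W1(1) + W2(1)))
# for a neo-Hookean strain energy W = (mu/2)*(I1 - 3), where W1 = mu/2 and
# W2 = 0; the ratio then collapses to D**(-2/3).
def speed_ratio(D, W1, W2):
    return (W1(D) + W2(D) * D**(2/3)) / (D**(2/3) * (W1(1.0) + W2(1.0)))

mu = 1.0                          # shear modulus; cancels in the ratio
W1 = lambda D: mu / 2.0           # dW/dI1 for neo-Hookean: constant
W2 = lambda D: 0.0                # dW/dI2 vanishes

print(speed_ratio(1.0, W1, W2))   # unpressurized: ratio is exactly 1
print(speed_ratio(1.2, W1, W2))   # 20% volume expansion slows the wave
```

For this strain energy, pressurization-driven volume expansion (D > 1) lowers the shear wave speed.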
2:20
3pBAb3. Magnetoelastic waves in a soft electrically conducting solid in
a strong magnetic field. Daniel Gendin and Paul E. Barbone (Mech. Eng.,
Boston Univ., 110 Cummington Mall, Boston, MA 02215, digendin@bu.edu)
Shear wave elastography (SWE) is clinically used for the measurement
of soft tissue mechanical properties. Most SWE methods assume that the tissue is elastic, but soft tissues are inherently viscoelastic. The viscoelasticity
can be characterized by examining phase velocity dispersion. Methods to
extract the phase velocities from the spatiotemporal data, v(x,t), involve
using a two-dimensional Fourier transform. The Fourier representation,
V(k,f), is searched for peaks that correspond to the phase velocities. We
present a method that uses the Multiple Signal Classification (MUSIC)
method to provide robust estimation of the phase velocity dispersion curves.
We compared results from the MUSIC method with the current approach of
searching for peaks in the V(k,f) representation. We tested this method on
digital phantom data created using finite element models (FEMs) in viscoelastic media excited by a simulated acoustic radiation force push from a
curved linear array. We evaluated the algorithm with different levels of
added noise. Additionally, we tested the methods on data acquired in viscoelastic phantoms with a Verasonics system. The MUSIC algorithm provided
dispersion estimation with lower errors than the conventional peak search
strategy. This method can be used for evaluation of wave velocity dispersion
in viscoelastic tissues and guided wave propagation. [This work was supported in part by grant R01DK092255.]
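The conventional peak-search baseline that the MUSIC method is compared against can be sketched as follows; the sampling parameters and the single-frequency plane wave below are our own test values, chosen to land on exact FFT bins, not the study's data.

```python
import numpy as np

# Sketch of the conventional peak-search extraction the abstract compares
# MUSIC against (not the MUSIC implementation itself): 2D-FFT the space-time
# data v(x,t) into V(k,f) and, at the frequency of interest, take the
# wavenumber of the spectral peak.
fs, dx = 10_000.0, 0.5e-3          # assumed sampling rate (Hz) and pitch (m)
nt, nx = 512, 128
t = np.arange(nt) / fs
x = np.arange(nx) * dx

c_true = 3.125                     # m/s, assumed non-dispersive test speed
f0 = 390.625                       # Hz, single-tone plane wave (exact bin)
v = np.sin(2 * np.pi * f0 * (t[None, :] - x[:, None] / c_true))  # v(x, t)

V = np.fft.fft2(v)                 # axes: (wavenumber k, frequency f)
freqs = np.fft.fftfreq(nt, 1 / fs)
ks = np.fft.fftfreq(nx, dx)        # cycles per meter

fi = np.argmin(np.abs(freqs - f0)) # frequency bin of the tone
ki = np.argmax(np.abs(V[:, fi]))   # peak along the wavenumber axis
c_est = freqs[fi] / abs(ks[ki])    # phase velocity c = f / k
print(c_est)                       # recovers ~3.125 m/s
```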
Shear wave motion of a soft, electrically conducting solid in the presence of a strong magnetic field excites eddy currents in the solid. These, in
turn, give rise to Lorentz forces that resist the wave motion. We derive a
mathematical model for linear elastic wave propagation in a soft electrically
conducting solid in the presence of a strong magnetic field. The model
reduces to an effective anisotropic dissipation term resembling an anisotropic viscous foundation. We consider the application to magnetic resonance elastography, which uses strong magnetic fields to measure shear
wave speed in soft tissues for diagnostic purposes. We find that for typical
values of magnetic field, mass density, and electrical conductivity of soft tissues, eddy current dissipation is negligible. For materials with higher conductivity (e.g., metals), the effect can be stronger.
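The scaling behind that conclusion can be checked with back-of-the-envelope numbers (ours, assumed): the eddy-current damping rate goes as sigma*B^2/rho, compared against the shear-wave angular frequency.

```python
import math

# Order-of-magnitude check (our numbers, assumed) of the abstract's
# conclusion: the eddy-current (Lorentz) damping rate scales as sigma*B^2/rho.
# For soft tissue in an MRE magnet this is tiny compared with the shear-wave
# angular frequency; for a good conductor such as copper it is not.
def damping_rate(sigma, B, rho):
    return sigma * B**2 / rho              # 1/s

omega = 2 * math.pi * 100.0                # typical 100 Hz MRE shear wave

tissue = damping_rate(sigma=0.5, B=3.0, rho=1000.0)   # assumed tissue values
copper = damping_rate(sigma=5.8e7, B=3.0, rho=8960.0)

print(tissue / omega)   # << 1: eddy-current losses negligible in tissue
print(copper / omega)   # >> 1: strong effect in a metal
```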
2:00
Shear wave elasticity imaging determines the mechanical parameters of
soft tissue by analyzing measured shear waves induced by an acoustic radiation force. Currently, the widely used time-of-flight method calculates the
correlation between shear waveforms at adjacent lateral observation points
to estimate the shear elasticity value. Although this method provides accurate estimates of the shear elasticity in purely elastic media, our experience
suggests that this approach overestimates the shear elasticity values in viscoelastic media because the effects of diffraction, attenuation, and dispersion
are not taken into account. To address this problem, we have developed an
approach that directly accounts for all of these effects when estimating the
shear elasticity. This new approach simulates shear waveforms using a
Green’s function-based approach with a Voigt model, while the shear elasticity and viscosity values are estimated using an optimization-based
approach by comparing measured shear waveforms with simulated shear
3pBAb2. Shear waves in pressurized poroelastic media. Navid Nazari
(Biomedical Eng., Boston Univ., 44 Cummington Mall, Boston, MA 02215,
navidn@bu.edu) and Paul E. Barbone (Mech. Eng., Boston Univ., Boston,
MA)
Shear wave elastography measures shear wave speed in soft tissues for diagnostic purposes. In Rotemberg et al. [J. Biomech. 46(11), (2013), pp. 1875-1881] and Rotemberg et al. [Phys. Med. Biol. 57(2), (2012), pp. 329-341], shear wave speed measurements were shown to depend on prestrain, but not necessarily prestress, in a perfused canine liver. We model this phenomenon by examining incremental waves in a pressurized poroelastic medium with incompressible phases. For a poroelastic material with strain energy function W, which due to pressurization undergoes a volume
J. Acoust. Soc. Am., Vol. 141, No. 5, Pt. 2, May 2017
2:40
3pBAb4. Shear elasticity and shear viscosity imaging in viscoelastic
phantoms. Yiqun Yang (Michigan State Univ., 2152 Eng. Bldg., 428 S
Shaw Ln, East Lansing, MI 48824, yangyiqu@msu.edu), Pengfei Song, Shigao Chen, Matthew W. Urban (Dept. of Radiology, Mayo Clinic College of
Medicine and Sci., Rochester, MN), and Robert J. McGough (Michigan
State Univ., East Lansing, MI)
waveforms in the time domain. This operation is then performed on a point-by-point basis to generate images. The results indicate that there is good agreement between the simulated and measured shear velocity waveforms, and that this approach yields improved images of the shear elasticity and shear viscosity. [Work supported, in part, by NIH Grant R01DK092255.]
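The optimization idea can be sketched in a few lines. The Voigt phase-velocity formula is standard, but the pulse, grid, and parameter values below are our own assumptions, and a simple grid search stands in for the authors' Green's-function-based optimizer.

```python
import numpy as np

# Sketch (our illustration, not the authors' code) of the estimation idea:
# simulate shear waveforms under a Voigt model, then recover (mu, eta) by
# minimizing the misfit to a "measured" waveform. Voigt phase velocity:
#   c(w) = sqrt(2*(mu^2 + w^2 eta^2) / (rho*(mu + sqrt(mu^2 + w^2 eta^2))))
rho = 1000.0                       # kg/m^3, assumed tissue density
fs, n = 10_000.0, 1024
w = 2 * np.pi * np.fft.rfftfreq(n, 1 / fs)

def voigt_c(w, mu, eta):
    m = np.sqrt(mu**2 + (w * eta)**2)
    return np.sqrt(2.0 * m**2 / (rho * (mu + m)))

def waveform(mu, eta, r=0.01):
    # Gaussian source pulse propagated a distance r with Voigt dispersion
    t = np.arange(n) / fs
    P = np.fft.rfft(np.exp(-0.5 * ((t - 0.01) / 0.001)**2))
    return np.fft.irfft(P * np.exp(-1j * w * r / voigt_c(w, mu, eta)))

measured = waveform(2000.0, 1.0)   # synthetic "data": mu = 2 kPa, eta = 1 Pa*s
best = min(((mu, eta) for mu in (1000.0, 1500.0, 2000.0, 2500.0)
                      for eta in (0.5, 1.0, 2.0)),
           key=lambda p: np.sum((waveform(*p) - measured)**2))
print(best)   # the true (mu, eta) grid point minimizes the misfit
```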
(LUSWE), for measuring the elastic properties of superficial lung tissue. In LUSWE, a small, local, 0.1-second harmonic vibration is generated on the chest of a subject. The speed of the surface wave on the lung is measured using an ultrasound probe. We are evaluating LUSWE for assessing patients with interstitial lung disease (ILD). LUSWE may be useful for assessing ILD because most ILD patients have typical fibrotic scars in the peripheral and subpleural regions of the lung. In a large clinical study of ILD patients, we measure both lungs through six intercostal spaces for patients and controls. The surface wave speed is measured at 100 Hz, 150 Hz, and 200 Hz. In one example, the surface wave speed at 100 Hz in the same intercostal space was 1.88 ± 0.11 m/s for a healthy subject and 3.3 ± 0.37 m/s for an ILD patient. LUSWE may complement the clinical standard, high-resolution computed tomography, for assessing ILD. LUSWE may also be useful for assessing other lung disorders.
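One common way to turn such measurements into a speed (our assumed sketch; the abstract does not state its estimator) is to fit the phase of the harmonic vibration against lateral position:

```python
import numpy as np

# Sketch of a standard surface-wave speed estimator (assumed here, not taken
# from the abstract): track the phase of the 100 Hz excitation at several
# lateral positions and get the speed from the phase slope,
#   c = -2*pi*f / (d phi / dx).
f0 = 100.0                                   # Hz, excitation frequency
x = np.array([0.0, 2e-3, 4e-3, 6e-3])        # lateral positions (m), assumed
c_true = 1.88                                # m/s, the healthy-lung example value
phi = -2 * np.pi * f0 * x / c_true           # ideal measured phase at each x
slope = np.polyfit(x, phi, 1)[0]             # d phi / dx (rad/m)
c_est = -2 * np.pi * f0 / slope
print(c_est)                                 # recovers ~1.88 m/s
```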
3:00
3pBAb5. An ultrasound surface wave elastography technique for noninvasive measurement of surface lung tissue. Xiaoming Zhang, Thomas
Osborn, Boran Zhou, Brian Bartholmai, James F. Greenleaf, and Sanjay
Kalra (Mayo Clinic, 200 1st ST SW, Rochester, MN 55905, zhang.xiaoming@mayo.edu)
Ultrasonography is not widely used in the clinic for lung assessment because ultrasound cannot image deep lung tissue. In this abstract, we present a novel technique, lung ultrasound surface wave elastography
TUESDAY AFTERNOON, 27 JUNE 2017
ROOM 205, 1:20 P.M. TO 3:20 P.M.
Session 3pEA
Engineering Acoustics and Physical Acoustics: Microelectromechanical Systems (MEMS)
Acoustic Sensors II
Vahid Naderyan, Cochair
Physics/National Center for Physical Acoustics, University of Mississippi, NCPA, 1 Coliseum Drive, University, MS 38677
Kheirollah Sepahvand, Cochair
Mechanical, Technical University of Munich, Boltzmannstraße 15, Garching bei Munich 85748, Germany
Robert D. White, Cochair
Mechanical Engineering, Tufts University, 200 College Ave., Medford, MA 02155
Invited Papers
1:20
3pEA1. MEMS-Based acoustic sensors for fluid mechanics and aeroacoustics. Mark Sheplak (Dept. Elec. and Comput. Eng., Univ.
of Florida, 216 Larsen Hall, PO Box 116200, Gainesville, FL 32611-6200, sheplak@ufl.edu)
This talk presents the development of several microelectromechanical systems (MEMS)-based acoustic transducer technologies for
fluid mechanics and aeroacoustics applications. Specifically, this presentation will focus on several aluminum nitride piezoelectric
MEMS dynamic pressure sensors and microphones. These devices offer the promise of reducing cost, improving performance, and
increasing mounting flexibility over existing conventional microphone technologies. Specifically, a transducer with no external power
requirement has a key advantage for a large-channel count, widespread deployment. The modeling and design aspects of these devices
are reviewed. First, the electroacoustic transduction is predicted via piezoelectric composite plate theory. Lumped element models are
then synthesized to describe the dynamic characteristics of the transducer diaphragm and the cavity/vent structure. Constrained nonlinear
design optimization using a sequential quadratic programming scheme is then performed to determine the design parameters. Representative results for several applications will then be presented. Finally, unresolved technical issues are summarized for future sensor
development.
1:40
3pEA2. MEMS resonant acoustic wake-up sensor. Jonathan Bernstein, Mirela Bancu, Daniel Reilly, Marc Weinberg, Doug Gauthier,
and Richard Elliot (Draper Lab., Draper Lab., 555 Technol. Square, Cambridge, MA 02139, jbernstein@draper.com)
Draper has built zero-power MEMS wake-up sensors for DARPA’s N-ZERO Program. This program aims to enable unattended sensor arrays that last for years, limited only by battery discharge rates. For some targets, characteristic frequency signatures allow detection
using narrow band passive acoustic resonators. The MEMS sensors use ambient acoustic inputs to actuate a wake-up electrical relay.
Resonant sensors from 30 Hz to 180 Hz have been fabricated. The device rotates in response to an acoustic input, thereby avoiding large
displacements due to gravity which would occur with a linear actuator. An adjustable acoustic cavity designed as part of the package is
used to tune the resonant frequency to match a particular target. FEA Modeling was performed to achieve desired spring constants and
resonant frequency. A rotary-acoustic lumped element equivalent circuit model was used to analyze the effect of the cavity and leakage
resistances on the device performance. We will show finished MEMS devices and acoustic test data. [This research was developed with
funding from the Defense Advanced Research Projects Agency (DARPA).]
2:00
3pEA3. Field-effect transistor-based transduction and acoustic receiving transducers. Wonkyu Moon, Min Sung (Dept. of Mech. Eng., Pohang Univ. of Sci. and Technology (POSTECH), PIRO 405, POSTECH, San31, Hyoja-dong, Nam-gu, Pohang, Kyungbuk 790784, South Korea, wkmoon@postech.ac.kr), Kumjae Shin (Dept. of Mech. Eng., Pohang Univ. of Sci. and Technology (POSTECH), Pohang, Gyungbuk, South Korea), and Junsoo Kim (Dept. of Mech. Eng., Pohang Univ. of Sci. and Technology (POSTECH), Pohang, Gyeongsangbuk-do, South Korea)
Microphones and hydrophones are representative acoustic receiving transducers. To properly receive sound waves, a receiver must be smaller than the wavelength of the target sound. The target wave characteristics do not impose any lower limit on the size of microphones. When the performance of a smaller microphone or hydrophone is satisfactory, users generally choose the smaller device, since smaller receivers are easier to install and use. However, miniaturized microphones are less sensitive at low frequencies, and conventional infrasound detectors are considerably larger than those for higher frequency sounds. These trends in receiver size can be
explained by considering the transduction characteristics of microphones and hydrophones. We describe two transduction mechanisms
based on field-effect transistors (FET) and use them to develop new microphones and hydrophones. We used theoretical analysis and
experiments to show that the sensitivity and frequency response functions of FET-based microphones and hydrophones are size-independent. These results suggest that more sensitive micro-machined microphones and hydrophones, with better frequency response functions, may be available for use in the near future. [Work supported by the Institute of Civil Military Technology Cooperation Center
(16-CM-SS-18).]
2:20
3pEA4. Sensing sound with a nanoscale fiber. Ronald Miles and Jian Zhou (Mech. Eng., SUNY Binghamton, Dept. of Mech. Eng.,
Vestal, NY 13850, miles@binghamton.edu)
A feasibility study is presented of the use of a thin fiber to detect sound. The fiber is assumed to be supported on each end, with a
deflection in response to a sound wave that propagates in the direction perpendicular to its long axis. The driving force on the fiber is the
result of viscous forces in the oscillating air, which are well-known to be very important in determining the flow-induced motion of
small structures. A simplified analytical model of the fiber’s response is presented where it is argued that for fibers that are sufficiently
thin, elastic and inertial effects become strongly dominated by viscous forces in the fluid. As a result, the fiber’s motion becomes a very
close approximation of that of the acoustic flow in its vicinity. Electrodynamic transduction of the fiber’s motion provides a means of
sensing sound with remarkable accuracy if the fiber diameter is taken to be measurably below one micron.
Contributed Papers
2:40
3:00
3pEA5. Characterization of a virtual array based on MEMS microphones for the analysis of acoustic sources. Alberto Izquierdo, Juan J. Villacorta (TeleCommun. Eng. School. Signal Theory Dept., Univ. of
Valladolid, Valladolid, Spain), Lara del-Val (Industrial Eng. School. Mech.
Eng. Dept., Univ. of Valladolid, Paseo del Cauce, Valladolid, Spain, lvalpue@eii.uva.es), and Luis Suarez (Superior Tech. School, Civil Eng. Dept.,
Univ. of Burgos, Burgos, Spain)
3pEA6. Capacitive micromachined ultrasound transducers for acoustic
anemometry on Mars. Gardy K. Ligonde, Daniela A. Torres, Eric F.
Abboud, Wang-Kyung Sung, James Vlahakis (Mech. Eng., Tufts Univ., 200
College Ave., Anderson 204, Medford, MA 02155, gardy.ligonde@tufts.edu), Donald J. Banfield (Astronomy, Cornell Univ., Ithaca, NY), and Robert D. White (Mech. Eng., Tufts Univ., Medford, MA)
Using arrays of digital MEMS microphones with FPGA-based acquisition/processing systems makes it possible to build systems with hundreds of sensors at a reduced cost. This work analyzes the performance of a virtual array with 6400 MEMS microphones (80 × 80). The system is composed of a 2D positioning system that places a physical array of 64 microphones (8 × 8) in a grid with 8 × 8 positions, obtaining a spatial aperture of 2 × 2 meters. The measured beampattern is compared with the theoretical one for several frequencies and pointing angles. The beampattern of the physical array is also estimated for each of the 64 positions used by the positioning system. The measured beampattern and the focusing capacity are also analyzed, since the beamforming algorithms must assume spherical waves due to the large dimensions of the array. Finally, frequency and spatial responses for a set of different acoustic sources are obtained, showing angular resolutions of the order of tenths of a degree.
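For orientation, the far-field delay-and-sum beampattern of a single 80-element row of such an array can be sketched as below; the frequency, pitch, and steering are assumed values, and the real system's near-field (spherical-wave) focusing is not modeled.

```python
import numpy as np

# Sketch (our assumed values) of the far-field delay-and-sum beampattern of
# one 80-element row of the virtual array; the measured system uses the full
# 80 x 80 aperture and near-field (spherical-wave) focusing instead.
c, f = 343.0, 4000.0
n, d = 80, 0.025                       # 80 elements, 25 mm pitch -> 2 m row
k = 2 * np.pi * f / c
theta = np.linspace(-np.pi / 2, np.pi / 2, 3601)

# broadside-steered array factor |sum_n exp(j k d n sin(theta))| / N
phases = k * d * np.arange(n)[:, None] * np.sin(theta)[None, :]
af = np.abs(np.exp(1j * phases).sum(axis=0)) / n

main = theta[np.argmax(af)]
print(np.degrees(main))                # main lobe sits at broadside (~0 deg)
```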
An acoustic anemometer is under development for measuring wind
speeds on Mars. Acoustic anemometry allows simultaneous measurement of
wind speed and the speed of sound by measuring the acoustic time of flight
in the forward and reverse directions. Acoustic anemometry avoids some
sources of measurement error that plague other techniques for measuring
winds in planetary atmospheres, such as hot wire measurements, or laser
based tracking of scattered light from dust. The particular focus of this paper
is the ultrasound transducers needed for the instrument. Capacitive micromachined ultrasound transducers (CMUT) fabricated at Tufts University
have previously been described for atmospheric pressure operation (ASA
Fall Meeting 2012). In this work, the transducers have been modified and
tested under low pressure conditions similar to the atmospheric conditions
expected on Mars (4.5 Torr). We describe a comparison between the modeled and measured transducer frequency response. The CMUT resonant frequency decreased from 204 kHz at 760 Torr to 116 kHz at 1 Torr. This is
predicted by the models. The quality factor increased with decreasing pressure as expected, and is accurately modeled above 50 Torr. However, at
pressures below 50 Torr, unmodeled damping mechanisms dominate acoustic losses, and a purely acoustic model underpredicts damping.
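The time-of-flight relations underlying the instrument are worth writing out; the path length, wind speed, and sound speed below are assumed Mars-like numbers, not measured values.

```python
# Sketch (our assumed numbers) of the forward/reverse time-of-flight
# relations the instrument relies on: over a path of length L,
#   t_f = L/(c + v)   and   t_r = L/(c - v),
# so both wind speed v and sound speed c follow from the two transit times:
#   v = (L/2)(1/t_f - 1/t_r),   c = (L/2)(1/t_f + 1/t_r)
def wind_and_sound(L, t_f, t_r):
    v = 0.5 * L * (1.0 / t_f - 1.0 / t_r)
    c = 0.5 * L * (1.0 / t_f + 1.0 / t_r)
    return v, c

# hypothetical 10 cm path in Mars-like air: c ~ 240 m/s, wind 5 m/s
L, c, v = 0.10, 240.0, 5.0
t_f, t_r = L / (c + v), L / (c - v)
print(wind_and_sound(L, t_f, t_r))   # recovers v = 5 m/s, c = 240 m/s
```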
TUESDAY AFTERNOON, 27 JUNE 2017
BALLROOM A, 1:20 P.M. TO 3:40 P.M.
Session 3pMU
Musical Acoustics: General Topics in Musical Acoustics I (Poster Session)
Thomas Moore, Chair
Department of Physics, Rollins College, 1000 Holt Ave., Winter Park, FL 32789
All posters will be on display from 1:20 p.m. to 3:40 p.m. To allow contributors in this session to see the other posters, authors of
odd-numbered papers will be at their posters from 1:20 p.m. to 2:30 p.m. and authors of even-numbered papers will be at their posters
from 2:30 p.m. to 3:40 p.m.
Contributed Papers
In this paper, we report on the development of a perceptually orientated, automatic classification system for the timbre content of orchestral audio samples. Here, we have decided to investigate polyphonic timbre, a phenomenon emerging from the mixture of instruments playing simultaneously. Moreover, we are focusing on the perception of the entire orchestral sound, not individual instrumental sounds. For accessibility to non-acoustics experts, we chose to use verbal descriptors of timbre quality, such as brightness and roughness, to represent the timbral content of the samples. We based our acoustic analysis on existing research into the perception and description of timbre. However, given the lack of agreed metrics, we had to establish a comparative scale for each timbre attribute implemented in our system, based on an analysis of audio recordings, in order to identify the dominant timbral attribute. To improve the classification accuracy, the system continually calibrates this scale as new audio files are analyzed. Preliminary analysis of our results shows a correlation between the system's classification and human perception, which is promising for further developments, such as standardizing metrics for perceived responses of timbral attributes or implementing systems for music production tasks.
3pMU2. A model for the observed decorrelation of partials in the overtone spectra of bowed stringed instruments. Sarah R. Smith and Mark F.
Bocko (Univ. of Rochester, 405 Comput. Studies Bldg., P.O. Box 270231,
Rochester, NY 14627, sarahsmith@rochester.edu)
It has been shown that the overtone frequencies in the spectra of bowed
stringed instruments played with vibrato exhibit less than perfect pairwise
correlations. However, these results are inconsistent with the mechanism of
performing vibrato by changing the length of the string. Since modulating
the string length affects the frequencies of all string modes proportionately,
it is curious that the overtones exhibit less than perfect correlations. The
observed decorrelations, therefore, may be attributed to the filtering of the
string's vibrations by the mechanical-acoustic resonant modes of the instrument body. The exact frequency deviations depend upon the frequencies of the instrument's resonant modes in relation to the string's overtone frequencies and the width and rate of the vibrato. By modelling the instrument body as a sum of resonant modes driven by a frequency-modulated sawtooth wave,
we develop an analytical model relating the observed frequency deviations
to the modal properties of the instrument. The effect of a single resonant
mode on the instantaneous frequency trajectories is found analytically and
informs numerical simulations of instruments with multiple modes. The
simulated results compare well with data from recorded violin tones.
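A quasi-static version of this mechanism can be sketched numerically (our simplification of the paper's analytical model): a vibrato partial sweeping past a single body resonance picks up the resonance's phase phi(w), shifting its observed instantaneous frequency by dphi/dt. The mode frequency, Q, and vibrato parameters below are assumed.

```python
import numpy as np

# Sketch (our simplification, with assumed parameters) of how one body
# resonance distorts the instantaneous frequency of a vibrato partial:
# the partial acquires the resonance phase phi(w(t)), so the observed
# frequency is approximately f_in(t) + (1/2pi) * d phi/dt.
f_res, Q = 440.0, 30.0                       # assumed body mode
w0 = 2 * np.pi * f_res

def phase(w):                                # phase of a 1-DOF resonance
    return np.angle(1.0 / (w0**2 - w**2 + 1j * w * w0 / Q))

t = np.linspace(0.0, 1.0, 20_000)
f_in = 435.0 + 5.0 * np.sin(2 * np.pi * 6.0 * t)   # 6 Hz, +/-5 Hz vibrato
f_out = f_in + np.gradient(phase(2 * np.pi * f_in), t) / (2 * np.pi)

# the resonance alters the apparent vibrato excursion of this partial
print(f_in.max() - f_in.min(), f_out.max() - f_out.min())
```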
3pMU3. Characterization of free field radiation in brass instrument
horns under swept sine excitation using a linear microphone array.
Amaya López-Carromero (School of Phys. and Astronomy, Univ. of Edinburgh, 1608, James Clerk Maxwell Bldg., Peter Guthrie Tait Rd., Edinburgh, Scotland/Midlothian EH9 3FD, United Kingdom, s1374028@sms.ed.ac.uk), Jonathan A. Kemp (Univ. of St Andrews, St Andrews, Fife, United Kingdom), and D. Murray Campbell (School of Phys. and Astronomy, Univ. of Edinburgh, Edinburgh, United Kingdom)
A linear array with 23 microphones is used to scan a planar section of
the sound field radiated into an anechoic environment by a range of brass
instruments excited by a swept sine signal. The planar section contains the symmetry axis of the bell and covers a rectangular area of 0.9 by
0.6 metres, starting at the plane of the bell and extending away from it
along the longest side. The linear microphone array is perpendicular to the
symmetry axis and is stepped along the axis. The resulting matrix of signals is processed to separate the linear and non-linear parts of the
response; the three-dimensional pressure distribution in the sound field is
deduced on the assumption that the field is cylindrically symmetric. These data then allow the visualization and further analysis of the frequency dependence of radiated wave fronts in brass instrument bells. Comparison is
drawn between the observations and the predictions of several popular
radiation models.
3pMU4. Non-invasive measurement of acoustic coupling between the
clarinet bore and its player’s vocal tract. Steven M. Lulich (Speech and
Hearing Sci., Indiana Univ., 4789 N White River Dr., Bloomington, IN
47404, slulich@indiana.edu)
As acoustically coupled resonators, the bore of a clarinet or other woodwind instrument and the vocal tract of the player interact in ways that affect
timbre and pitch. Pitch in particular is strongly dependent on vocal tract
acoustics when the bore-tract coupling is strong, such as when a tract impedance maximum is close in frequency and amplitude to a bore impedance
maximum. Direct investigation of bore-tract coupling requires invasive
measurement of bore and tract input acoustic pressures (or impedances),
and one particular technique makes use of the ratio of these pressures (in the
frequency domain, Pt/Pb) at harmonics of the reed vibration fundamental
frequency. A non-invasive, model-based approach to investigating boretract coupling has been developed, which depends on a free-field microphone recording the sound produced by the instrument (P1), and an accelerometer placed against the skin of the neck recording skin vibrations (P2)
related to intra-tract acoustic pressures. An additional, model-based calibration step is required. The ratio of these two signals (in the frequency domain, P2/P1) following calibration is qualitatively similar to the ratio Pt/Pb,
and approaches quantitative identity as the model-based calibration step
improves.
3pMU1. A perceptually orientated approach for automatic classification
of timbre content of orchestral excerpts. Aurelien Antoine and Eduardo
Miranda (Interdisciplinary Ctr. for Comput. Music Res. (ICCMR), Univ. of
Plymouth, The House Bldg. - Rm. 304, Plymouth PL4 8AA, United Kingdom, aurelien.antoine@postgrad.plymouth.ac.uk)
3pMU5. Construction of a finite element model of the Japanese koto.
Angela K. Coaldrake (Music, Univ. of Adelaide, Elder Conservatorium of
Music, University of Adelaide, Adelaide, SA 5005, Australia, kimi.coaldrake@adelaide.edu.au)
This paper presents the steps in developing a finite element model of the Japanese koto (13-stringed zither) in Comsol Multiphysics® v5.2a and some of the issues encountered. As the instrument is 1.8 meters in length and hand crafted, there are many internal irregular shapes. Early attempts at creating a geometry were unsatisfactory. To address this issue, a CT scan with 2400 cross-sections was used to measure the internal details. A mesh was created from the scan using Simpleware® software. The result was a mesh with 430,000 elements for the instrument alone, placed in a sphere of air
resulting in over 7 million degrees of freedom. This new model therefore
has required the use of high performance computing to produce a second of
acoustic output. The issue of the physical properties of paulownia, the less
well-characterized highly anisotropic wood used to construct the koto, has
proven more intractable. Scanning electron microscopy, frequency response
and acoustic camera studies of the original instrument provided important
insights into paulownia in particular and developing the model in general. A
number of studies have been undertaken to validate the model including
comparing it with the original instrument. Further studies of the acoustics of
the koto are in progress.
3pMU6. Simplified exponential horns. Jean-Pierre Dalmont, Carole
Kameni, and Philippe Bequin (Laboratoire d’Acoustique de l’Universite du
Maine, Ave. Olivier Messiaen, Le Mans 72085, France, jean-pierre.dalmont@univ-lemans.fr)
The exponential horn is known as the shape realizing the best matching between a source and the external field for frequencies higher than its cutoff frequency. In practice, since the horn is of finite length, the effective cutoff is significantly higher, and resonances appear as waves are reflected at the end of the horn, so the response of the horn is far from flat. A question then arises: to what extent must the shape of an exponential horn be strictly respected in order to keep its main acoustical properties? The present paper intends to answer this question. First, a review of the different ways to calculate the input impedance of a horn, and of their accuracy, is made. Comparison with input impedance measurements shows that the plane wave approximation is often sufficient, even when the horn is strongly bent. Second, some criteria are proposed to characterize a horn's ability to match with the external field. These criteria are finally used to compare the performances of some simplified horn geometries to those of strictly exponential horns.
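The textbook cutoff relation the abstract starts from can be written down directly; the horn dimensions below are hypothetical.

```python
import math

# For an exponential flare S(x) = S0 * exp(m*x), plane waves propagate only
# above the cutoff f_c = m*c / (4*pi). The horn dimensions below are
# hypothetical; as the abstract notes, a finite horn's effective cutoff is
# significantly higher than this ideal value.
def exp_horn_cutoff(S_throat, S_mouth, length, c=343.0):
    m = math.log(S_mouth / S_throat) / length   # flare constant (1/m)
    return m * c / (4 * math.pi)

# hypothetical horn: 5 cm^2 throat flaring to 500 cm^2 over 0.5 m
print(exp_horn_cutoff(5e-4, 5e-2, 0.5))   # cutoff of roughly 250 Hz
```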
3pMU7. Numerical study of nonlinear distortion of resonant states in acoustic tubes. Roberto Velasco-Segura and Pablo L. Rendón (Laboratorio de Acústica y Vibraciones, Centro de Ciencias Aplicadas y Desarrollo Tecnológico, Universidad Nacional Autónoma de México, Circuito Exterior S/N, C. U., Delegación Coyoacán, Mexico City 04510, Mexico, roberto.velasco@ccadet.unam.mx)
A numerical study of nonlinear acoustic propagation inside tubes is presented. Thermoviscous attenuation is included, giving rise to wall losses
associated with the boundary layer. The full-wave simulation is performed
in the time domain, over a 2D spatial domain assuming axial symmetry, and
it is based on a previously validated open-source code using a Finite Volume Method implemented on GPU (FiVoNAGI) [Velasco & Rendón, "A finite volume approach for the simulation of nonlinear dissipative acoustic wave propagation," 2015]. One intended application is the identification of resonance frequency shifts in the nonlinear regime in brass musical instruments as a function of bore profile and amplitude of the driving stimulus. To gain insight into the nonlinear processes taking place inside the tube, visualizations are presented, differentiating spectral components and traveling waves in both directions.
3pMU8. Use of H1-H2 to quantify formant tuning for notes and portions of the vibrato cycle in the second passaggio of a professional
female singer. Richard C. Lissemore (Speech-Language-Hearing Sci.,
Graduate Center/The City Univ. of New York, 499 Fort Washington Ave.,
Apt. #4F, New York, NY 10033, rlissemore@gradcenter.cuny.edu), Christine H. Shadle (Haskins Labs., New Haven, CT), Kevin Roon, and D. H.
Whalen (Speech-Language-Hearing Sci., Graduate Center/The City Univ.
of New York, New York, NY)
Singing voice pedagogy emphasizes that an acoustic change occurs in standard, classical, techniqued soprano voice between the musical notes D5♮ (587 Hz) and F5♮ (698 Hz). For low vowels, this involves a transition from tuning of the second resonance to the second harmonic (F2/H2) to tuning of the first resonance to the fundamental (F1/F0). In this single-subject study, we quantified the acoustics of this transition as the amplitude difference between the first and second harmonics (H1-H2). Results showed a clear and substantial change from negative to positive H1-H2 values at a pivot point between E5♭ and E5♮, implying the resonance tuning. Non-techniqued singing, with the same singer, showed no such change. F0 fluctuation (vibrato) of ±90 cents at the pivot point resulted in positive H1-H2 values at vibrato maxima and negative ones at vibrato minima. Additionally, H1-H2 values were consistently higher at vibrato maxima than minima throughout the transition area. Potential explanations for the latter result are: (i) vocal tract resonances are located just above the sung F0, or (ii) the vibrato cycle is accompanied by an articulatory change, possibly laryngeal movement. This illustrates the intricacies of formant tuning and suggests future possibilities for numerical assessment of vocal technique.
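The H1-H2 measure itself is easy to state in code (our sketch, not the study's analysis pipeline); the synthetic tone below has its second harmonic 6 dB below the first, so the measure should return about +6 dB.

```python
import numpy as np

# Sketch (our illustration) of the H1-H2 measure: the dB difference between
# the first two harmonic amplitudes of the voice spectrum, read off at
# multiples of F0.
def h1_h2_db(signal, fs, f0):
    spec = np.abs(np.fft.rfft(signal * np.hanning(len(signal))))
    freqs = np.fft.rfftfreq(len(signal), 1 / fs)
    h1 = spec[np.argmin(np.abs(freqs - f0))]
    h2 = spec[np.argmin(np.abs(freqs - 2 * f0))]
    return 20 * np.log10(h1 / h2)

fs, f0, n = 44_100, 587.0, 8192          # D5 = 587 Hz, as in the abstract
t = np.arange(n) / fs
# synthetic tone whose second harmonic is 6 dB below the first
x = np.sin(2 * np.pi * f0 * t) + 0.5 * np.sin(2 * np.pi * 2 * f0 * t)
print(h1_h2_db(x, fs, f0))               # ~ +6 dB
```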
3pMU9. Velocity analysis of the vacuum-driven clarinet reed. Carter K.
Richard and Whitney L. Coyle (Phys., Rollins College, 1000 Holt Ave.,
Winter Park, FL 32789, crichard@rollins.edu)
A vacuum-driven artificial mouth has been assembled and tested to measure
reed velocity for a Bb clarinet along the width of the reed. Reed velocity
measurements may be useful for better estimation of parameters in physical
models, such as the relevant surface area of the vibrating reed. Use of a vacuum system instead of a pressurized mouth chamber allows straightforward
observation and manipulation of the mouthpiece apparatus. Point measurements of reed velocity were obtained via a laser-Doppler vibrometer
directed at the reed surface when artificially blown. Simultaneous high-speed exposures were recorded to visualize reed motion. Preliminary results
indicate that the velocity amplitude of any torsional motion in the reed is
negligible compared to an asymmetric reed velocity, likely caused by natural limitations of the clarinet ligature. Velocity measurements also indicate
that the reed may sometimes rebound against the mouthpiece in its oscillatory period. High-speed exposures support this conclusion by visualizing
the reed deformation as it collides with the mouthpiece. This “rebound” deformation may contribute flow into the clarinet system. Further work will
expand this measurement technique for a full grid along the surface of the
reed, with various ligature mounts, and will seek to verify experimental
measurement with analytic models.
3pMU10. Stepwise regimes in elephant trumpet calls: Similarities with brass instrument behavior. Joël Gilbert (Laboratoire d'Acoustique de l'Universite du Maine - CNRS, Ave. Olivier Messiaen, LE MANS 72085, France, joel.gilbert@univ-lemans.fr), Angela Stoeger (Dept. of Cognit. Biology, Univ. of Vienna, Vienna, Austria), Benjamin Charlton (School of Biology & Environment Sci., Univ. College Dublin, Dublin, Ireland), and David Reby (School of Psych., Univ. of Sussex, Brighton, United Kingdom)
Trumpet calls are very loud voiced signals given by highly aroused elephants, and appear to be produced by a forceful expulsion of air through the
trunk. Beyond their characteristic “brassy quality” previously attributed to
shockwave formation, some trumpet calls are also characterized by stepwise
fundamental frequency increase and decrease. Here we used spectral analysis to investigate the frequency composition of trumpet calls from one Asian
and one African elephant. We found that the frequency intervals between the steps were consistent with resonances expected in the exceptionally long
elephant vocal tract. Such stepwise regimes are commonly observed in brass
instruments as self-sustained oscillations transiently align on the bore’s resonance frequencies during arpeggios. We suggest that this production mechanism may constitute a rare example of source-filter interaction (where the filter properties affect the behavior of the source) in the vocal system of a terrestrial mammal. These preliminary observations also emphasize how the generalization of musical acoustic models can provide useful insight into the production of animal vocal signals.
3pMU11. Embrace for impact: Formant reconstruction in sudden pitch
raise while singing. Chi-Yang Long (Graduate Inst. of Foreign Literatures and Linguistics, National Chiao Tung Univ., 4F., No. 20, Ln. 185, Sec. 2, Jinshan S. Rd., Da’an Dist., Taipei 10644, Taiwan, garylung710@gmail.com) and Yuwen Lai (Graduate Inst. of Foreign Literatures and Linguistics, National Chiao Tung Univ., Hsinchu, Taiwan)
Professional singers are trained to maintain vocal configuration by suppressing laryngeal elevation when singing high notes. The present study investigates the organization of the vocal filter under a sudden pitch rise by examining the corresponding acoustic correlates. More specifically, we are interested in whether this phenomenon has a differential effect on vowels of different heights. In the experiment, high and low vowels were embedded
in a nonsense Mandarin carrier sentence “kuai-lai-kuai-lai___yi-po” with
C4E4G4E4___D4D4 pitch contour. Thirty amateur singers were recorded
singing the sentence in four melodic conditions with the pitch intervals
between the preceding syllable /lai/ to target word manipulated. The conditions are Micro (E4 to D4), Macro (E4 to D5), all High (every word sung in
D5) and Null (only the last three words sung in D5-D5D5 contour). The results
show that high-pitch singing (Macro, High, and Null) indeed induces formant reconstruction compared to Micro, and the effect is markedly stronger in Macro. Furthermore, high vowels are more susceptible than low vowels and undergo a greater degree of formant reconstruction. The results provide acoustic grounding for the possible interplay between diction and pitch contour.
3pMU12. A study on assessing clarity in pronunciation of soprano singers. Uk-Jin Song and Myungjin Bae (Dept. of Information and Telecommun.,
Soongsil Univ., Sangdo 1-dong, Dongjak-gu, Seoul 156743, South Korea,
imduj@ssu.ac.kr)
The voices of soprano singers reach the highest notes among female vocalists. Sopranos usually have low clarity of pronunciation because their jaw joints and mouth shapes tend to stay rigid in order to maintain high notes for a long time. The five formants found in the human voice are shaped differently depending on the singer's physical structure. In particular, high clarity of pronunciation shows a distinct formant shape from F1 through F5, because the jaw joint and mouth shape have a large influence on F4 and F5, which appear in the high frequency range. This paper comparatively analyzes the singing voices of four Korean soprano singers with respect to clarity of pronunciation. The acoustic analysis of these four singers shows that singers A and D exhibit formants F1 through F3 above 2 kHz, but F4 and F5 do not appear in this range, and singer C's formant characteristics were somewhat inconsistent overall. In singer B, formants F1 through F5 are distinctly present even above 2 kHz. The study concludes that singer B has the best clarity of pronunciation.
3pMU13. On a sound analysis of Korean Trot singer Nam in-su’s voice.
Bong Young Kim, Ik-Soo Ann, and Myungjin Bae (Sori Sound Eng. Lab,
Soongsil Univ., 1212 Hyungnam Eng. Building, 369 Sangdo-Ro, Dongjak-Gu, Seoul 06978, South Korea, bykim8@ssu.ac.kr)
The word “trot” derives from the fox-trot, meaning “walk quickly,” which refers to the rhythm of a social dance; it is now used to denote a musical genre and performance style. The late singer Nam In-su performed in the early period of Korean trot, and he sang about 1,000 songs in roughly 20 years after his debut in 1938. His representative songs are “Sorrowful Serenade” (1938), “Vanish Away, the 38th Parallel (the Ceasefire Line of the Korean War)” (1949), and “Parting at Busan Station” (1953). His songs consoled Koreans in the sorrow of a lost and colonized homeland and eased the pain of the division between South and North Korea and the losses of the Korean War. This paper analyzes the acoustic components of his singing voice that produce its softness and liveliness. His voice spans three octaves, and even when he sings quickly, his high-pitched sound spreads wide. The sound connection between measures is natural
and smooth. In addition, deep vibrato appears across all frequency bands, and his pronunciation of the lyrics is accurate, so his voice sounds very lively. This study will be helpful in understanding the value of popular singers' voices as supported by acoustic analysis.
3pMU14. On characteristics of Trot singer Bae Ho’s singing voice. Sang
Bum Park and Myungjin Bae (Soongsil Univ., Sando-ro 369, Dongjak-gu,
Seoul 06978, South Korea, sbpark8510@naver.com)
In Korea, the trot began shaping its own style in the 1960s. Later, in the 1970s, it became more specialized, adopting the four-four beat of the fox-trot, and the Korean style was finally established with a strong beat and a unique chopping technique. Although there are many trot singers in Korea, the singer Bae Ho was special in his use of heavy bass accompaniment, which was later frequently adopted by other pop singers. Bae Ho's singing voice gives a feeling of particular softness and appeal. In addition, his vocalization adds a deep vibrato to the song, so that listeners feel sympathetic and comfortable. This study compares the singing voice of Bae Ho to those of his mimic singers by examining acoustic characteristics of their voices, including amplitude, frequency, and duration. The acoustic analysis showed that Bae Ho's singing voice is clearer and sustains vibration longer in the bass range than those of the mimic singers. Bae Ho had a natural talent for expressing the bass part as well as the midrange without changing the tone of his voice, while the mimic singers reveal many unnatural connections between measures.
3pMU15. Comparison of classification of musical genre obtained by
subjective tests and decision algorithms. Aleksandra Dorochowicz (Multimedia Systems Dept., Gdansk Univ. of Technol., Gdansk, Poland), Agata Majdańczuk (Audio Acoust. Lab., Gdansk Univ. of Technol., Gdansk, Poland), Piotr Hoffmann (Multimedia Systems Dept., Gdansk Univ. of Technol., Gdansk, Poland), and Bozena Kostek (Audio Acoust. Lab., Gdansk Univ. of Technol., Narutowicza 11/12, Gdansk 80-233, Poland, bokostek@audioakustyka.org)
The aim of the study is to conduct subjective tests on assigning audio excerpts to music genres and to carry out automatic classification of musical genres with the use of decision algorithms. First, the musicological background of classifying music into styles and genres is discussed. Then, an
online survey is created to perform subjective tests with a group of listeners,
whose task is assigning audio samples to selected music genres. Next, a set
of music descriptors is proposed and all music excerpts are parametrized.
To check for parameter redundancy, Principal Component Analysis
(PCA) is performed. The created database containing feature vectors is then
utilized for automatic music genre classification. Two classifiers, namely:
Belief Networks and SMO (Sequential Minimal Optimization Algorithm)
are employed for the purpose of music genre classification. The last step of
this study is to compare the efficiency of the listeners' classification with the automatic music genre classification system designed by the authors. The conducted tests show to what extent listeners' assignments and the automatic classification results agree. It is also observed that well-known performers are often classified without problems. Conversely, songs of less-known artists are more difficult to assign to the given genre.
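The PCA redundancy check mentioned above can be sketched as follows; this is a minimal illustration on a synthetic feature matrix, not the authors' parametrization or data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "music descriptor" matrix: 200 excerpts x 3 features,
# where feature 2 is nearly a copy of feature 0 (i.e., redundant).
f0 = rng.normal(size=200)
f1 = rng.normal(size=200)
f2 = f0 + 0.01 * rng.normal(size=200)
X = np.column_stack([f0, f1, f2])

# Standardize, then eigendecompose the covariance matrix (PCA).
Xs = (X - X.mean(axis=0)) / X.std(axis=0)
cov = np.cov(Xs, rowvar=False)
eigvals = np.linalg.eigh(cov)[0][::-1]   # descending order
explained = eigvals / eigvals.sum()

# Two components capture nearly all variance, flagging one feature as redundant.
print(explained)
```

In a feature-selection pipeline like the one described, components with negligible explained variance indicate descriptors that can be dropped before classification.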
3pMU16. Robust Hidden Markov Models for limited training data for
birdsong phrase classification. Kantapon Kaewtip, Abeer Alwan (Elec.
Eng., UCLA, 623 1/2 Kelton Ave., Los Angeles, CA 90024, jomjkk@
gmail.com), and Charles Taylor (Dept. of Ecology and Evolutionary Biology, UCLA, Los Angeles, CA)
Hidden Markov Models (HMMs) have been studied and used extensively
in speech and birdsong recognition but they are not robust to limited training
data and noise. This work presents a novel method for training GMM-HMMs with extremely limited, and possibly noisy, data by sharing HMM components and generating more training samples that cover the variation of the models. We propose an efficient state-tying algorithm that takes advantage of unique characteristics of birdsong. Specifically, the algorithm groups HMM states based on their spectral envelopes and fundamental frequencies, and the state parameters are estimated according to the group assignments. For noise-robustness, prominent time-frequency regions (time-frequency ranges expected
to contain high energy for a particular HMM state) are used to compute the
state emitting probability. In Cassin’s Vireo phrase classification using 75
phrase types, the results show that the proposed state-tying algorithm significantly outperforms both traditional state-tying algorithms and baseline HMMs
in most training conditions (using 1, 2, 4, 8, and 16 samples). Factors such as the amount of training data, the number of shared components, and the level of background noise are also studied in this work.
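The frequency-based grouping at the heart of the state-tying idea can be sketched as follows; the greedy criterion and the 100-Hz tolerance are illustrative assumptions, not the authors' algorithm.

```python
def tie_states(f0_hz, tol_hz=100.0):
    """Greedily group state indices whose F0 lies within tol_hz of a group's first member."""
    groups = []
    for idx in sorted(range(len(f0_hz)), key=f0_hz.__getitem__):
        for g in groups:
            if abs(f0_hz[idx] - f0_hz[g[0]]) <= tol_hz:
                g.append(idx)
                break
        else:
            groups.append([idx])
    return groups

# Six HMM states drawn from different phrase models; states with similar
# fundamental frequency are tied and would then share Gaussian parameters
# estimated from the pooled training frames of their group.
f0 = [2100.0, 2150.0, 3500.0, 2120.0, 3550.0, 5000.0]
groups = tie_states(f0)
print(groups)  # [[0, 3, 1], [2, 4], [5]]
```

Tying states this way lets a phrase type with only one or two exemplars borrow parameter estimates from acoustically similar states of other phrase types.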
3pMU17. Real-time localization and tracking of musical instruments
with spherical microphone arrays. Jonathan Mathews and Jonas Braasch
(Architectural Sci. - Architectural Acoust., Rensselaer Polytechnic Inst.,
110 8th St., Greene Bldg., Troy, NY 12180, mathej4@rpi.edu)
This research describes a method for dynamic beamforming with a particle filter to localize and track musical instruments in a real-time context.
Using a spherical harmonic framework, spherical microphone arrays are
able to decompose three-dimensional sound fields into their basic components, which enables detailed analysis and efficient spatial filtering algorithms. In recent years, methods for determining relative source positions
around an array using steerable beams have been studied. By creating multiple weighting functions based on spherical harmonic components, many
beams can be generated simultaneously and can be used to dynamically
track instruments via an iterative process.
3pMU18. Acoustical characteristics of Chinese musical instrument
bamboo flute. Linhui Peng (Ocean Technol., Ocean Univ. of China, 238
Songling Rd., Information College, Qingdao, Shangdong 266100, China,
penglh@ouc.edu.cn) and Tao Geng (Music performance, School of Music
of South China Normal Univ., Guangzhou, Guangdong, China)
There are few reports on the acoustics of Chinese musical instruments, an area worthy of research. Nowadays, more and more comprehensive methods can be used to analyze the acoustical characteristics and quality of musical instruments, such as experimental modal analysis and finite-element software. Music is often called a universal language, yet it is also an expression of a culture, conveyed through musical instruments with their specific acoustical properties and character. It is therefore necessary to investigate the acoustical characteristics that express Chinese culture. The bamboo flute is one of the most important musical instruments in any Chinese music ensemble or Chinese orchestra. The acoustical structure and characteristics of the bamboo flute's tone are examined, as are the acoustical features of its main playing techniques. The identified characteristics of the bamboo flute's tone are then analyzed in relation to Chinese music and culture.
3pMU19. Intonation detection in a melodic context. Gabriella Marrone
(Commun. Disord., Stockton Univ., 101 Vera King Farris Dr., Galloway, NJ
08205, marroneg@go.stockton.edu) and Neil L. Aaronson (Natural Sci. and
Mathematics, Stockton Univ., Galloway, NJ)
Listeners with a wide range of formal and informal musical experience
were asked to listen to an eight-tone diatonic C Major scale, generated using
a piano sample library, in which one of four notes (D4, F4, A4, or C5) would
be mistuned by 13 different amounts between −32¢ and +32¢. Listeners were
told which note might be mistuned and were simply asked to indicate whether
the scale was in-tune or not. Each listener was exposed to each degree of mistuning ten times. The frequency with which they said a scale was in-tune as a
function of the degree of mistuning was plotted for each note and listener, to
which a three-parameter pseudo-normal distribution (mean, standard deviation, height) was fitted. The standard deviation indicated the sensitivity of the
listener to intonation in each case (large deviation implied low sensitivity to
intonation). Listeners were then ranked based on their musical background, training, and experience. Musical training had a significant effect on intonation sensitivity (p < 0.001). There was also a significant effect of the particular note on listeners' sensitivity, with the intonation of A4 being the most difficult to detect across listeners (p < 0.001).
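The cent-denominated mistunings used above map to frequency through a factor of 2^(cents/1200); a brief sketch (A4 = 440 Hz is assumed here only for illustration):

```python
def mistune(freq_hz, cents):
    """Shift a frequency by a signed number of cents (100 cents = 1 semitone)."""
    return freq_hz * 2.0 ** (cents / 1200.0)

# A4 mistuned by the extremes used in the study, +/-32 cents.
sharp = mistune(440.0, +32)   # ~448.2 Hz
flat = mistune(440.0, -32)    # ~431.9 Hz
print(round(sharp, 1), round(flat, 1))
```

The +/-32¢ extremes thus correspond to frequency shifts of only about 2%, which is why detection hinges on the listener's intonation sensitivity.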
3pMU20. Exploring some questions on occlusion effect in the human auditory system when a musician or singer’s external ear canal is blocked.
Amitava Biswas (Speech and Hearing Sci., Univ. of Southern MS, 118 College Dr. #5092, USM-CHS-SHS, Hattiesburg, MS 39406-0001, Amitava.
Biswas@usm.edu)
Sometimes musicians and singers prefer to use their palm or another object to cover or occlude at least one ear during performance. This practice may help enhance their self-monitoring of sound production via the occlusion effect. The basic occlusion effect in the human auditory system has been explored and reported in the literature by several investigators. According to those reports, the musician or singer hears his or her own voice or musical instrument significantly louder when the ear canal is blocked at the outer end. Many clinicians routinely utilize the detectability of vibrations from a tuning fork placed on the mastoid process while the ear canal is occluded, popularly known as the Bing effect. These empirical data suggest an efficient acoustic path from the vocal tract to the ear canal in healthy individuals. Therefore, this study will quantify the classic Bing effect for normal, healthy musicians and singers across the audio spectrum.
TUESDAY AFTERNOON, 27 JUNE 2017
ROOM 202, 1:15 P.M. TO 3:20 P.M.
Session 3pNSa
Noise: Implications of Community Tolerance Level Analysis for Prediction of Community Reaction to
Environmental Noise
Sanford Fidell, Cochair
Fidell Associates, Inc., 23139 Erwin St., Woodland Hills, CA 91367
Truls Gjestland, Cochair
Acoustics, SINTEF Digital, SINTEF, Trondheim N 7465, Norway
Chair’s Introduction—1:15
Invited Papers
1:20
3pNSa1. Community tolerance level as a paradigmatic shift in development of dosage-response relationships. Sanford Fidell
(Fidell Assoc., Inc., 23139 Erwin St., Woodland Hills, CA 91367, sf@fidellassociates.com)
Dosage-response functions relating the prevalence of a consequential degree of annoyance in communities to cumulative exposure
to transportation noise have typically been derived by correlational methods. Such non-causal analyses fit a curve to a set of field observations of annoyance prevalence rates in multiple communities, typically by means of univariate logistic regression. The resulting curve
passes through the centroid of a cloud of data points, but provides no insight into the great variability among communities in annoyance
prevalence rates at similar noise exposure levels. Community Tolerance Level analysis partitions annoyance into acoustic and non-acoustic components. This approach is a normative (that is, causal and prescriptive) one that fits data sets to an a priori function in order
to estimate a second parameter. The second parameter is the deviation from the assumed (effective loudness) growth function, expressed
in decibel-denominated units as the level of cumulative noise exposure at which half of a community is predicted to be highly annoyed
by transportation noise. CTL analyses of community response to transportation noise thus permit direct quantification not only of the
growth of annoyance with noise exposure, but also of the aggregate effect of all non-acoustic influences on annoyance prevalence rates.
1:40
3pNSa2. Estimating community tolerance for wind turbine noise annoyance. Stephen Keith and David Michaud (Health Canada,
Canadian Federal Government, 775 Brookfield Rd., Ottawa, ON J9H7J9, Canada, david.michaud@hc-sc.gc.ca)
A-weighted WTN contributed less than 10% to the strength of the multiple regression model developed for wind turbine noise (WTN) annoyance in Health Canada’s Community Noise and Health Study (CNHS). Improvements required consideration of non-LAeq variables unknown or even inapplicable beyond the CNHS. To facilitate cross-study comparisons, an analysis of WTN annoyance was conducted based on the community tolerance level (CTL) model. The rate of increase in WTN annoyance was effectively estimated using a loudness function, as shown in Eq. (1): %HA = 100 exp{-[1/10^((DNL - CTL + 5.306)/10)]^0.3}. (1) By convention, CTL is the
DNL from Eq. (1) where 50% of the community would be highly annoyed. The CTL for WTN annoyance from field studies published
to date ranges from 57.1 to 64.6 DNL (mean = 62, σ = 3). CTL values developed by others for transportation noise sources suggest that,
on average, communities are between 11 dB and 26 dB less tolerant of WTN than of other sources, depending on the source. Confidence
in these results should increase as future studies in this area produce additional estimates for the relationship between WTN level and
the prevalence of high annoyance. The CTL analytical methods, assumptions, strengths, and limitations are presented.
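The CTL dosage-response form in Eq. (1) can be evaluated directly; the sketch below (illustrative values only, not data from the study) confirms the convention that 50% of a community is predicted to be highly annoyed when DNL equals CTL.

```python
import math

def percent_highly_annoyed(dnl_db, ctl_db):
    """CTL dosage-response form: %HA = 100*exp(-(1/m)^0.3),
    with m = 10**((DNL - CTL + 5.306)/10)."""
    m = 10.0 ** ((dnl_db - ctl_db + 5.306) / 10.0)
    return 100.0 * math.exp(-((1.0 / m) ** 0.3))

# At DNL == CTL the model predicts 50% highly annoyed, by construction:
# the 5.306-dB offset makes the exponent equal ln 2 at that point.
print(round(percent_highly_annoyed(62.0, 62.0), 1))   # 50.0
# Annoyance grows with exposure, e.g. 10 dB above the community's CTL.
print(round(percent_highly_annoyed(72.0, 62.0), 1))
```

With the growth exponent fixed at 0.3, the single fitted parameter (CTL) captures all non-acoustic influences as a horizontal shift of the curve.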
2:00
3pNSa3. Multilevel modeling and regression as applied to community noise annoyance surveys. D. K. Wilson (Cold Regions Res.
and Eng. Lab., U.S. Army Engineer Res. and Development Ctr., 72 Lyme Rd., Hanover, NH 03755, david.k.wilson90.civ@mail.mil), Nicole
M. Wayant (Geospatial Res. Lab., U.S. Army Engineer Res. and Development Ctr., Alexandria, VA), Edward T. Nykaza (Construction Eng.
Res. Lab., U.S. Army Engineer Res. and Development Ctr., Champaign, IL), Chris L. Pettit (Aerosp. Eng., U.S. Naval Acad., Annapolis,
MD), and Chandler M. Armstrong (Construction Eng. Res. Lab., U.S. Army Engineer Res. and Development Ctr., Champaign, IL)
Tolerance to noise varies between communities, and between the individuals comprising those communities. A multilevel modeling approach is useful for capturing such individual- and community-level variations. Here, we consider a model in which the community-level variations are sampled explicitly with community noise surveys, while the individual-level variations are sampled only in a statistical sense (i.e., hidden). The community-level variations are specifically quantified by the community tolerance level (CTL) [Fidell et al., J. Acoust. Soc. Am. 130(2), 791-806 (2011)]. Simulations based on the multilevel model indicate that the community- and individual-level variations have distinct statistical signatures, both of which are evident in noise annoyance surveys involving transportation noise.
The annoyance curve for a previously unsurveyed community depends on the mean and variance of the CTL, and the sum of the hidden
variances in noise tolerance and exposure among individuals in the community. Regression analyses of transportation noise annoyance
using a multilevel, generalized linear model (GLM) enable noise tolerances and their variations at the two model levels to be distinguished and quantified.
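The two-level structure described above can be sketched with a toy simulation; all parameters here (CTL mean and spread, hidden individual spread) are illustrative assumptions, not values from the surveys.

```python
import random

random.seed(1)

# Two-level sketch: each community draws a tolerance level (community-level
# variation); each resident adds hidden individual variation around it; a
# resident is "highly annoyed" if exposure exceeds their personal tolerance.
def simulate_percent_ha(exposure_db, ctl_mean=73.0, ctl_sd=5.0,
                        indiv_sd=7.0, residents=2000):
    ctl = random.gauss(ctl_mean, ctl_sd)              # community level
    annoyed = sum(exposure_db > random.gauss(ctl, indiv_sd)
                  for _ in range(residents))          # individual level
    return 100.0 * annoyed / residents

for dnl in (55, 65, 75):
    print(dnl, round(simulate_percent_ha(dnl), 1))
```

Re-running the simulation for many communities produces the scatter of annoyance rates at a given exposure that the multilevel regression is designed to separate into its two variance components.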
2:20
3pNSa4. CTL—A useful tool for inter-survey comparisons. Truls Gjestland and Femke B. Gelderblom (Acoust., SINTEF Digital,
SINTEF, Trondheim N 7465, Norway, truls.gjestland@sintef.no)
The concept of Community Tolerance Level was presented in 2011 and in subsequent analyses the CTL has proven to be a useful tool for
comparing results from different surveys on noise annoyance. The CTL provides a robust single-number description of the prevalence of
annoyance in a community noise survey. It has been shown that accurate predictions of the prevalence of highly annoyed residents can be achieved with relatively few survey interviews, as the selection of respondents does not have to cover the whole range of exposure. A re-analysis of existing studies on aircraft noise annoyance has shown that there has been virtually no change in people's response to this type of noise over the past 50 years, but there is a distinct difference in response depending on the rate at which the noise situation has been changing. A similar analysis of results on road traffic noise, from surveys conducted over a period of more than 50 years, will be presented.
2:40
3pNSa5. Observations of non-zero asymptotes for high annoyance at low sound exposure levels. Richard Horonjeff (RDH Acoust.,
81 Liberty Square Rd. #20-B, Boxborough, MA 01719, rhoronjeff@comcast.net)
Social survey data relating the prevalence of high annoyance to transportation noise typically indicate a decrease in annoyance rate with decreasing exposure. At very low exposure levels, the expectation is that the percentage of highly annoyed respondents will asymptotically approach zero. In many studies this has in fact been the case. In some studies, however, the lower asymptote appears not to be zero but rather some small, non-zero value. Such non-zero asymptotic values are sometimes nearly constant over an exposure range of
as much as 10 decibels. In other words, the observed prevalence of noise-induced annoyance appears to be independent of sound level over an
extended range, before beginning to depend on noise dose at higher levels. Several examples of this phenomenon are presented and discussed.
3:00
3pNSa6. Applying community tolerance level to high-energy impulse sounds. Paul D. Schomer (Schomer and Assoc. Inc., 2117
Robert Dr., Champaign, IL 61821, schomer@SchomerAndAssociates.com)
Fidell, Schultz, and Green [JASA 84, pp. 2109-2113 (1988)] and Green and Fidell [JASA 89, pp. 234-243 (1991)] suggested a systematic approach to distinguishing loudness-driven from community-specific contributions to the prevalence of transportation noise-induced annoyance. Schomer [JASA 86, pp. 835-836 (1989)] showed how the approach could be adapted to predicting community
response to high amplitude impulsive noise, simply by changing the exponent of the dosage-response function. The current presentation
demonstrates in a more accessible fashion that the basic approach identified in the 1989 paper can be applied to predicting community
response to high amplitude impulsive noise. The approach builds on an application by Schomer et al. [JASA 131, pp. 2772-2786 (2012)]
of the CTL approach to road and rail traffic noise. The latter paper explains the “Community Tolerance Level” method from the perspective of an observer watching a process, rather than as a participant in the process. These principles are used in this paper to reinforce the
conclusions of the 1989 analysis by Schomer.
TUESDAY AFTERNOON, 27 JUNE 2017
ROOM 203, 1:15 P.M. TO 3:20 P.M.
Session 3pNSb
Noise, Physical Acoustics, ASA Committee on Standards, and Structural Acoustics and Vibration: Sonic
Boom Noise V: Turbulence, Predictions, and Measurements
Philippe Blanc-Benon, Cochair
Centre acoustique, LMFA UMR CNRS 5509, Ecole Centrale de Lyon, 36 avenue Guy de Collongue, 69134 Ecully Cedex, France
Victor Sparrow, Cochair
Grad. Program in Acoustics, Penn State, 201 Applied Science Bldg., University Park, PA 16802
Chair’s Introduction—1:15
Invited Papers
1:20
3pNSb1. Sonic booms in atmospheric turbulence ground measurements in a hot desert climate. Edward A. Haering (Res. AeroDynam., NASA Armstrong, M.S. 2228, PO Box 273, Edwards, CA 93523, edward.a.haering@nasa.gov)
The Sonic Booms in Atmospheric Turbulence (SonicBAT) Project flew a series of 20 F-18 flights with 69 supersonic passes at
Edwards Air Force Base in July 2016 to quantify the effect of atmospheric turbulence on sonic booms. Most of the passes were at a pressure altitude of 32,000 feet and a Mach number of 1.4, yielding a nominal sonic boom overpressure of 1.6 pounds per square foot.
Atmospheric sensors such as GPSsonde balloons, Sonic Detection and Ranging (SODAR) acoustic sounders, and ultrasonic anemometers were used to characterize the turbulence state of the atmosphere for each flight. Spiked signatures in excess of 7 pounds per square
foot were measured at some locations, as well as rounded sonic-boom signatures with levels much lower than nominal. This presentation
will quantify the range of overpressures and Perceived Level of the sonic boom as a function of turbulence parameters, and also present
the spatial variation of these quantities over the array. Comparisons with historical data will also be shown. The NASA Armstrong Flight Research Center team includes KBR Wyle, Gulfstream, Boeing, Pennsylvania State University, Lockheed Martin, Eagle Aeronautics, and the Laboratoire de Mécanique des Fluides et d'Acoustique (LMFA, France).
1:40
3pNSb2. Statistics of supersonic signatures propagated through simulated atmospheric turbulence. Trevor A. Stout and Victor
Sparrow (Graduate Program in Acoust., The Penn State Univ., 201 Appl. Sci. Bldg., University Park, PA 16802, tastout6@gmail.com)
The issue of annoyance caused by acoustical signatures from supersonic aircraft may be quantitatively predicted through propagation
simulations. In particular, a model of energy focusing or defocusing due to turbulence in the atmospheric boundary layer is necessary to predict the real variations found whenever measurements are taken at multiple locations or at multiple times. The present algorithm solves the Khokhlov-Zabolotskaya-Kuznetsov (KZK) equation and attempts to account for all important atmospheric effects such as refraction, thermoviscous absorption, relaxation processes, and nonlinearity, as well as turbulence effects due to temperature and vector wind variations. The turbulence model uses von Kármán spectra, with vertically varying length scales to better model the full extent of
the boundary layer. The presentation will discuss some early results and relevant statistics, as well as the feasibility of comparison to
recent overflight tests in NASA’s SonicBAT program. [Work supported by NASA via a subcontract from KBRwyle.]
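As a rough illustration of the von Kármán form mentioned above (a standard textbook shape; the normalization and length scales here are illustrative, not those used in the simulations), the dimensionless energy spectrum E(kL) ∝ (kL)^4 / [1 + (kL)^2]^(17/6) peaks near kL = sqrt(12/5) and rolls off with the Kolmogorov -5/3 slope at high wavenumber:

```python
def von_karman_shape(kL):
    """Dimensionless von Karman energy-spectrum shape: (kL)^4 / (1 + (kL)^2)^(17/6)."""
    return kL ** 4 / (1.0 + kL ** 2) ** (17.0 / 6.0)

# Setting the derivative to zero gives a peak at (kL)^2 = 12/5, i.e. kL ~ 1.549;
# for kL >> 1 the shape decays like (kL)^(-5/3), the inertial-range slope.
kL_peak = (12.0 / 5.0) ** 0.5
print(von_karman_shape(kL_peak) > von_karman_shape(0.5))   # True
print(von_karman_shape(kL_peak) > von_karman_shape(10.0))  # True
```

The outer length scale L sets where the peak falls, which is why letting it vary with altitude better represents the structure of the boundary layer.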
2:00
3pNSb3. Removing distortions from ground sonic boom measurements for certification of over land flight. John M. Morgenstern
(Lockheed Martin Aeronautics, 1011 Lockheed Way, Palmdale, CA 93599, john.morgenstern@lmco.com)
Ground measurements of sonic booms contain distortions, acquired during propagation through the atmosphere, that greatly change their loudness. Quiet shaped-boom vehicles are being proposed for acceptable overland flight. Because of these distortion-driven loudness variations, many ground measurements would be needed to develop accurate loudness statistics. Highly turbulent atmospheres increase loudness variations and affect average loudness, so a limited range of atmospheric conditions would be required for a certification procedure of acceptable loudness. More flights and limited conditions would substantially increase vehicle certification cost. A measurement de-turbulencing technique is shown to greatly reduce atmospheric distortions. It improves loudness accuracy to (hope to quantify) from a single pass over a line of at least 25 microphones. The method combines a de-turbulencing technique developed by Plotkin with a spatial averaging technique used in wind-tunnel measurements of sonic boom, and with additional analytical methods under development. The de-turbulencing and spatial averaging have been applied to F-5SSBD measurements, resulting in a consistent loudness 2 PLdB quieter than analytical predictions, due to greater rounding of the averaged turbulence distortions.
2:20
3pNSb4. Prediction of various sonic boom signatures observed in large scale experiment. Masashi Kanamori, Takashi Takahashi,
and Yoshikazu Makino (Inst. of Aeronautical Technol., Japan Aerosp. Exploration Agency, 7-44-1, Jindaijihigashi-machi, Chofu, Tokyo
182-8522, Japan, kanamori.masashi@jaxa.jp)
Results of predicting the sonic boom signatures observed in the D-SEND#2 flight test are presented in this study. The D-SEND#2 flight test was held in northern Sweden in 2015 in order to demonstrate JAXA's low-boom design concept. The flight test provided three kinds of waveforms: a conventional N-wave, a diffracted U-shaped waveform, and the low-boom waveform. In this study, methods of predicting these waveforms will be introduced.
2:40
3pNSb5. Evaluation of finite difference approximations of absorption and dispersion implemented in sonic boom propagation
model equations. Matthew T. Collmar (Gulfstream Aerosp. Corp., 32 Innovation Dr., Savannah, GA 31408, matthew.collmar@gulfstream.com) and Joseph A. Salamone (Gulfstream Aerosp. Corp., Tybee Island, GA)
Recently developed one-dimensional far-field sonic boom propagation model equations are variants of an augmented Burgers equation. The Gulfstream Aerospace Corporation far-field sonic boom propagation tool, GACBoom, employs a numerical approach that consists of operator splitting in the time-domain. The mechanisms in the equation include geometric spreading, atmospheric stratification,
nonlinearity, absorption, and dispersion. For propagation of sonic booms in the atmosphere, the predominant dissipative mechanisms are
thermoviscous attenuation and molecular relaxation due to diatomic oxygen and nitrogen, the latter of which also contribute to dispersion. One method for utilizing predictions with these mechanisms is to employ a finite difference approximation to the dissipative and
dispersive terms in the sonic boom propagation model equation. This paper uses a discrete Fourier analysis to investigate both spatial
and temporal discretization ramifications for the dissipative and dispersive components included in the model equations and compares
the results against analytic formulations found in the literature. Concluding remarks include numerical considerations when accounting
for the influence of absorption and dispersion in sonic boom propagation.
3:00
3pNSb6. The use of optical methods for measuring irregular reflection of weak acoustic shock pulses from a solid surface in air.
Maria M. Karzova, Petr V. Yuldashev (Phys. Faculty, M.V. Lomonosov Moscow State Univ., Leninskie Gory 1/2, Phys. Faculty, Dept.
of Acoust., Moscow 119991, Russian Federation, masha@acs366.phys.msu.ru), Didier Dragna (LMFA - UMR CNRS 5509, Univ. Lyon, Ecole Centrale de Lyon, Ecully, France), Sebastien Ollivier (LMFA - UMR CNRS 5509, Universite Lyon 1, Ecole Centrale de Lyon, Ecully, France), Vera Khokhlova (Phys. Faculty, M.V. Lomonosov Moscow State Univ., Moscow, Russian Federation), and Philippe Blanc-Benon (LMFA - UMR CNRS 5509, Univ. Lyon, Ecole Centrale de Lyon, Ecully, France)
Irregular reflection of weak acoustic shock waves (acoustic Mach numbers less than 0.01) is known as the von Neumann paradox and
cannot be described by three-shock theory. In this work, nonlinear reflection regimes were studied experimentally using spark-generated spherically divergent N-waves reflecting from a plane rigid surface. Two optical methods were used in the measurements: a
Schlieren system to visualize reflection patterns and a Mach-Zehnder interferometer to reconstruct pressure waveforms. The reconstruction was performed by applying the inverse Abel transform to the phase of the signal measured by the interferometer. The Mach stem
formation was observed close to the surface as a result of collision of the incident and reflected front shocks of the N-wave and further
away from the surface where the reflected front shock interacted with the incident rear shock. It was shown that irregular reflection
occurred in a dynamic way and the length of the Mach stem increased while the N-wave propagated along the surface. In addition,
reflection patterns were analyzed for several rough plane surfaces with different roughness sizes. The height of the Mach stem was found
to be shorter for surfaces with larger roughness dimensions.
3730
J. Acoust. Soc. Am., Vol. 141, No. 5, Pt. 2, May 2017
Acoustics ’17 Boston
3730
TUESDAY AFTERNOON, 27 JUNE 2017
BALLROOM A, 1:20 P.M. TO 3:40 P.M.
Session 3pNSc
Noise: Effects of Noise and Perception (Poster Session)
William J. Murphy, Chair
Hearing Loss Prevention Team, Centers for Disease Control and Prevention, National Institute for Occupational Safety and
Health, 1090 Tusculum Ave., Mailstop C-27, Cincinnati, OH 45226-1998
All posters will be on display from 1:20 p.m. to 3:40 p.m. To allow contributors in this session to see the other posters, authors of odd-numbered papers will be at their posters from 1:20 p.m. to 2:30 p.m. and authors of even-numbered papers will be at their posters from
2:30 p.m. to 3:40 p.m.
Contributed Papers
What is a safe noise exposure level for the public? This question should
underlie basic and applied research on the effects of sound on humans, and
regulatory efforts to control public noise exposure. The safe public noise exposure level cannot be an unadjusted occupational standard. Unlike other
occupational exposures, noise exposure continues outside the workplace.
Occupational limits must be adjusted for increased exposure time, from 8 to
24 hours daily, 240 workdays to 365 days annually, and from 40 work years
to the entire lifespan. Recommended safe noise exposure levels depend on
which adverse noise effect is being considered. To prevent hearing loss, the
U.S. Environmental Protection Agency (EPA) adjusted the U.S. occupational recommended exposure level of 85 A-weighted decibels for additional exposure time to calculate a 70 decibel time-weighted average (TWA)
exposure level. EPA did not adjust for lifespan years, so the correct safe exposure level is likely lower. The World Health Organization (WHO) also
recommends 70 decibels to prevent hearing loss. EPA and WHO determined
that non-auditory health impacts of noise occur at 55 decibels TWA, with
annoyance starting at 45 decibels. These are the safe noise exposure levels
for the public.
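The time adjustments described above amount to an equal-energy trade of level against exposure duration; a minimal sketch (assuming a 3 dB-per-doubling exchange rate and showing only the duration correction; EPA's published 70 dB figure reflects further adjustments beyond this):

```python
import math

def adjusted_limit(l_occ_db, hours=24, days=365,
                   base_hours=8, base_days=240, exchange_db=3.0):
    """Adjust an occupational exposure limit for longer exposure time,
    trading level for duration at exchange_db per doubling (equal energy)."""
    factor = (hours / base_hours) * (days / base_days)
    return l_occ_db - exchange_db * math.log2(factor)

# 85 dBA occupational level corrected from 8 h/240 d to 24 h/365 d
print(round(adjusted_limit(85.0), 1))
```

The duration correction alone lowers 85 dBA by roughly 6.6 dB; the remaining gap to 70 dB comes from the additional considerations the abstract mentions.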
3pNSc2. Noise exposure in berthing rooms of Naval ships. Shakti K.
Davis, Christopher Smalt, and Paul Calamia (BioEng. Systems and Technologies, MIT Lincoln Lab., 244 Wood St., Lexington, MA 02420, shakti@ll.mit.edu)
Noise on aircraft carriers is known to exceed hazardous noise levels as
jets launch and land on the flight deck and loud machinery operates below
deck. Crew members often reach their daily noise allowance while performing work duties but the conditions for auditory recovery onboard are not
well understood. To address this gap, we assisted the Navy in recording 24h
persistent noise measurements in several berthing rooms on the USS Nimitz
(CVN-68). During flight operations, the 8h time-weighted average (TWA)
noise levels in these below-deck living spaces ranged between 75 and 81
dBA. While the levels fall below the Department of Defense TWA limit of
85 dBA, these conditions may not support auditory recovery from temporary
threshold shifts that occurred during work hours. Another potential noise
hazard in these rooms is impulse noise from flight deck catapults and arresting wires, with peak levels as high as 143 dB. In this presentation, we
describe an analysis of the 24h noise exposure from aircraft-carrier berthing
rooms including steady-state and impulse noise exposure metrics defined in
MIL-STD-1474E. [Work supported by the ONR under Air Force Contract
No. FA8721-05-C-0002.]
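The TWA metric reported above can be illustrated with a short equal-energy sketch (assuming a 3 dB exchange rate; the sample levels below are hypothetical, not the measured Nimitz data):

```python
import math

def twa_8h(levels_db, duration_h):
    """8-h time-weighted average from equal-duration A-weighted level
    samples spanning duration_h, using the equal-energy (3 dB) rule."""
    mean_intensity = sum(10 ** (l / 10) for l in levels_db) / len(levels_db)
    leq = 10 * math.log10(mean_intensity)           # Leq over the record
    return leq + 10 * math.log10(duration_h / 8.0)  # normalize to 8 h

# 24 h of hypothetical berthing-room levels
print(round(twa_8h([75, 78, 81, 76], 24.0), 1))
```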
3pNSc3. Development of laboratory-based test methods for the comparative evaluation of performance of advanced hearing protection devices.
Theodore F. Argo, Brian Zadler, Cortney LeNeave, and Jennifer Jerding
(Appl. Res. Assoc., Inc., 7921 Shaffer Parkway, Littleton, CO 80127,
targo@ara.com)
Standard acoustic test methods may not fully capture the performance
characteristics of advanced passive and active hearing protection devices.
Development of new laboratory-based test methods with the ability to discriminate between the performance characteristics of these devices, such as
level-dependent or nonlinear effects, without the use of human subjects was
undertaken. Measurements of hearing protection device performance with
respect to signal quality, sound localization, self-noise, and impulse
response were performed. Signal quality and sound localization were both
tested using a compact 3D positioning apparatus which gave us directivity
information without the use of human subjects or a large hemispherical test
fixture. A sound isolation box was used not only for self-noise but also for
impulse response measurements. Results of the impulse testing were compared to free-field shock tube results to ensure consistency among methods.
This new evaluation methodology, when performed on an array of advanced
hearing protection devices, can provide supplemental or alternative performance data for the relative comparison of devices under test. The results
allow for distinction between devices based on preferred characteristics, for
example, low self-noise and good signal quality favored over better impulse
protection. [Research supported by US Army Natick Soldier Research, Development, & Engineering Center.]
3pNSc4. Evaluation of a calculation method of noise exposure from
communication headsets. Hilmi R. Dajani (School of Elec. Eng. and Comput. Sci., Univ. of Ottawa, Ottawa, ON K1N6N5, Canada, hdajani@site.uottawa.ca), Flora G. Nassrallah (Hearing Res. Lab., Univ. of Ottawa, Ottawa,
ON, Canada), Caroline Chabot (Audiology/SLP Program, Univ. of Ottawa,
Ottawa, ON, Canada), Nicolas N. Ellaham (Hearing Res. Lab., Univ. of
Ottawa, Ottawa, ON, Canada), and Christian Giguère (Audiology/SLP Program, Univ. of Ottawa, Ottawa, ON, Canada)
Specialized standardized methods for the measurement of noise exposure from communication headsets or sound sources close to the ear include
the Microphone in Real Ear and manikin techniques, as specified in ISO
11904-1/2. The 2013 version of Canadian standard Z107.56 introduced a
simpler calculation method to increase accessibility to communication headset exposure assessments for the widest range of stakeholders in hearing
loss prevention. The calculation method only requires general and widely
accessible sound measurement equipment and basic computational steps
that account for the main determinants of exposure such as the background
noise around the user, the sound attenuation of the communication headset,
and the expected communication duration and effective listening signal-to-noise ratio. This paper reviews recent research on the effects of the spectral
and temporal characteristics of the background noise and the headset
3pNSc1. What is a safe noise exposure level for the public? Daniel Fink (The Quiet Coalition, P.O. Box 533, Lincoln, MA 01733, DJFink@thequietcoalition.org)
configuration on the speech listening level. Results indicate that the listening
level is largely insensitive to spectral and temporal variations in the background noise and that A-weighted noise level is a good predictor of listening
level once headset attenuation is taken into account. It is also found that
one-sided headsets increase exposure by 6-7 dB compared to two-sided
headsets due to binaural summation.
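The style of estimate such a calculation method yields can be sketched as follows (a simplified illustration only, not the actual CSA Z107.56 procedure; all numeric inputs are assumed):

```python
import math

def db_sum(*levels):
    """Energy (incoherent) sum of sound levels given in dB."""
    return 10 * math.log10(sum(10 ** (l / 10) for l in levels))

background = 85.0   # dBA around the user (assumed)
attenuation = 20.0  # headset attenuation, dB (assumed)
snr = 15.0          # effective listening signal-to-noise ratio, dB (assumed)

residual = background - attenuation   # noise reaching the ear
speech = residual + snr               # speech set snr above the residual
print(round(db_sum(residual, speech), 1))  # combined level at the ear
```

The 6-7 dB penalty for one-sided headsets reported above arises because the listener needs a higher speech level without the benefit of binaural summation.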
calibration of the stimulus signals in the case of the Distortion Product OAE (DPOAE) approach used. Algorithms were recently developed to quantify, in real time, from the OAE probe microphone and receiver signals, the passive attenuation of the OAE probes. The validation of this approach was
conducted, in laboratory conditions, on five human subjects exposed to
industrial and pink noise recordings at realistic levels.
3pNSc5. On-body and in-ear noise exposure monitoring. Christopher
Smalt, Shakti K. Davis, Paul Calamia, Joe Lacirignola (MIT Lincoln Lab.,
244 Wood St., Lexington, MA 02420, Christopher.Smalt@ll.mit.edu), Olha
Townsend (Brown Univ., Lexington, MA), Christine Weston, and Paula
Collins (MIT Lincoln Lab., Lexington, MA)
3pNSc8. Face the music: Classical music students and the sound of performance. Stephen Dance (School of the Built Environment and Architecture, London South Bank Univ., Borough Rd., London SE1 0AA, United
Kingdom, dances@lsbu.ac.uk)
Accurately estimating noise dosage can be challenging for personnel
who move through multiple noise environments. Dose estimates can also be
confounded by the requirement to wear hearing protection in some areas,
but not others. One concept for improved dose estimates under these conditions is to capture noise in-the-ear and on-body simultaneously. An additional benefit of this setup is that hearing protection fit can be assessed in
real time. To evaluate this dual-microphone approach, we prototyped a
noise dosimetry device for military environments where loud impulse noise
such as weapons fire drives the need for high dynamic range and a high sampling rate. In this presentation, we describe a system where the in-ear microphone is acoustically coupled to a disposable foam or flange hearing
protection eartip. Initial laboratory tests with a shock tube and acoustic test
fixture (ATF) show more than 30 dB noise reduction between the on-body
and in-ear microphones. Furthermore, in-ear levels are consistent with eardrum measurements in the ATF. One concern with body-worn dosimeters is
their susceptibility to shock artifacts from microphone motion or handling.
To address this issue, we also investigate co-locating an accelerometer with
the on-body microphone to help remove shock artifacts. [Work supported
by the Office of Naval Research.]
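The on-body/in-ear level difference quoted above is, in essence, a ratio of microphone signal levels; a minimal RMS-based sketch (a proxy only, not the authors' processing chain; the sample values are hypothetical):

```python
import math

def noise_reduction_db(on_body, in_ear):
    """Broadband level difference between on-body and in-ear
    microphone samples, computed from their RMS amplitudes."""
    rms = lambda x: math.sqrt(sum(s * s for s in x) / len(x))
    return 20 * math.log10(rms(on_body) / rms(in_ear))

# In-ear signal attenuated by a factor of ~31.6 (about 30 dB)
outside = [1.0, -0.5, 0.8, -1.0]
inside = [s / 31.6228 for s in outside]
print(round(noise_reduction_db(outside, inside), 1))
```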
3pNSc6. An assessment of the permissible exposure limit for industrial
complex noise exposure. Wei Qiu (SUNY Plattsburgh, 101 Broad St.,
Plattsburgh, NY 12901, wei.qiu@plattsburgh.edu), Meibian Zhang, and
Jianmin Jiang (Zhejiang Provincial Ctr. for Disease Control and Protection,
Hangzhou, Zhejiang, China)
An 8-hr time-weighted average exposure of 85 dBA was adopted as the
permissible exposure limit (PEL) by most current international standards for
exposure to noise. Using the definition of material hearing impairment (MHI) as an average hearing threshold level for both ears exceeding 25 dB at 1, 2, 3, and 4 kHz, NIOSH estimated that the excess risk was 8% for workers exposed to an average noise level of 85 dBA over a 40-year working lifetime. However, this 85 dBA PEL was based on data acquired from workers exposed to steady-state (Gaussian) noise. In this study, a database of 648 workers (age 36.9 ± 7.8 yrs, exposure duration 12.8 ± 8.0 yrs) exposed to non-Gaussian noise was used to assess the 85 dBA PEL. Among them, 222 subjects were exposed to noise at 80-84 dBA and 426 subjects at 85-89 dBA. Although the average duration of exposure in the non-Gaussian database was much less than 40 years, the prevalence of MHI was 16.7% at 80-84 dBA and 31.5% at 85-89 dBA. The results show that an 85 dBA PEL may not adequately protect the hearing of workers exposed to non-Gaussian complex noise.
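The MHI criterion used above (average hearing threshold level over both ears at 1, 2, 3, and 4 kHz exceeding 25 dB) can be stated directly; a small sketch with hypothetical thresholds:

```python
def has_mhi(htl_left, htl_right):
    """Material hearing impairment as defined in the abstract: binaural
    average HTL at 1, 2, 3, and 4 kHz (in dB) exceeds 25 dB."""
    vals = list(htl_left) + list(htl_right)  # HTLs at 1, 2, 3, 4 kHz per ear
    return sum(vals) / len(vals) > 25.0

print(has_mhi([20, 25, 35, 40], [15, 20, 30, 35]))  # hypothetical ears
```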
3pNSc7. Real-time estimation of the passive attenuation for otoacoustic emission measurement probes. Vincent Nadon (ÉTS, 6298 rue d'Aragon, Montreal, QC H4E 3B1, Canada, vincent.nadon@etsmtl.ca) and Jeremie Voix (ÉTS, Montreal, QC, Canada)
Despite hearing loss prevention programs in place at work, noise-induced hearing loss (NIHL) remains the most frequently reported occupational disease.
To improve the detection of over-exposure to noise, an initial proof-of-concept for field monitoring of inner-ear health using otoacoustic emissions
(OAE) was developed and successfully validated in laboratory conditions.
However, in real-life situations, the unsupervised placement of the OAE
probes remains a challenge: proper fit of the probe in the ear canal is
required to ensure adequate hearing protection of the wearer, to improve the
signal-to-noise ratio of OAE measurements and to maintain the proper
Since the implementation and enforcement of the European Union Physical Agents Directive (Noise), the Acoustics Group has collaborated with the Royal Academy of Music, creating a noise team formed from administrators, scientists, and senior management. Our challenge was to allow these highly talented artists to practice, rehearse, and perform safely during their time at the Royal Academy of Music. This ten-year project involved more than 3000 musicians, measuring the sound exposure of each instrument group and the hearing acuity of every student, as well as hearing surveillance of a
sample of graduates. At each occurrence, the students were questioned as to
their aural environment. The paper will focus upon the hearing acuity of
undergraduates after studying music for a period of four years.
3pNSc9. Quiet rim driven ventilation fan design. Mark P. Hurtado, Daniel Wu, and Ricardo Burdisso (Virginia Polytechnic Inst. and State Univ.
(Virginia Tech), 506 Broce Dr., Apt. 16, Blacksburg, VA 24060,
phmark15@vt.edu)
Auxiliary ventilation fans are commonly used as a form of temperature
and humidity control of working environments. However, they often generate high noise levels that cause permanent hearing impairment from prolonged exposure. Dominant noise sources of ventilation fans, such as noise due to the aerodynamic interaction of the fan with the struts supporting the hub motor and with the blade-duct tip gap, can be eliminated using a rim-driven fan. However, tones from blade steady loading and thickness and the
broadband airfoil self-noise of the fan are not eliminated. This noise is proportional to the 4th-6th power of the tip speed. Hence, the approach here is to
design a ring driven ventilation fan that reduces the fan tip speed while optimizing the blade profile to retain aerodynamic performance. The fan design
has been 3D printed and implemented using a motor mounting system to
validate the predicted performance. The motor mounting system integrates
the fan design, a bell mouth shaped duct and a brushless DC motor. The
experiments show good agreement with the predicted mechanical power,
axial flow, and radiated noise over a range of speeds. Results suggest that a
ring driven fan reduces noise while maintaining aerodynamic performance.
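The tip-speed scaling noted above implies a simple power-law estimate of the noise benefit; a sketch (assuming the 6th-power end of the quoted 4th-6th-power range):

```python
import math

def noise_change_db(speed_ratio, power=6):
    """Change in radiated noise level when fan tip speed is scaled by
    speed_ratio, assuming noise ~ (tip speed) ** power."""
    return 10 * power * math.log10(speed_ratio)

print(round(noise_change_db(0.5), 1))  # halving tip speed, 6th-power law
```

Under the 6th-power law, halving the tip speed yields roughly an 18 dB reduction, illustrating why reducing tip speed while preserving aerodynamic performance is attractive.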
3pNSc10. Smart-phone application to collect ambient noise field-data
for users, audiologists, and researchers. Roger M. Logan (12338 Westella, Houston, TX 77077, rogermlogan@sbcglobal.net)
Could / should a smart-phone application be developed to measure and
analyze social ambient noise that would: provide results to the user, store
analyses locally for use by audiologists, and upload data for scientific
research?
3pNSc11. The importance of attentive listening. Andrew S. Harris.
(Andrew S. Harris, Inc., 26 Tappan St., Manchester, MA 01944, asharrisinc@gmail.com)
We all live in a world full of sounds. During my life, I have been aware
of the sounds around me. Until I was about 23 years old, I experienced
many noise environments and tried to remember how they sounded, but did
not have any knowledge of acoustics. My knowledge of acoustics began
when I took Bob Newman's architectural acoustics courses at MIT. It continued with work at BBN. Bob used examples of sounds to illustrate his teaching. Bob stressed the importance of listening carefully and remembering what you heard. If you need to analyze noise environments, it is vitally important to listen very carefully and to remember the sounds. Always participate in noise measurements if you are responsible for reporting the noise and
3pNSc12. Social benefit analysis of reduced noise from electrical city
buses in Gothenburg. Krister Larsson (Bldg. Technology/Sound & Vibrations, RISE Res. Institutes of Sweden, Box 857, Boras SE-50115, Sweden,
krister.larsson@sp.se) and Maria Holmes (Environmental agency, City of
Gothenburg, Gothenburg, Sweden)
The city of Gothenburg is the second largest city in Sweden and has an
ambitious traffic strategy to substantially increase the share of public transport by 2035. At the same time, the city is growing due to urbanization
and the densification of the city leads to an anticipated growth in city bus
traffic. However, noise from buses might lead to negative consequences for
the citizens, and electrical buses could be a way to reduce noise and emissions from the public transport system. In this study, a comparison of noise levels and social costs of bus types with different powertrains is presented.
Initially, noise emissions from three bus types were measured on a test
track. The propulsion noise was extracted and coefficients for the Nord2000
Road prediction model were adapted. The Nord2000 model was used to calculate façade noise levels in the city center, as well as in a smaller focus
area. The predicted noise levels were used to calculate health effects according to DALYs, as well as social costs according to ASEK. In addition,
indoor maximum noise levels were calculated for typical façade cases based
on the measurements. The results show that the largest benefits from electrical buses are obtained during acceleration, for example, at bus stops, and for
maximum levels indoors. However, these situations are not taken appropriately into account in current social cost models.
3pNSc13. Numerical prediction of the broadband sound attenuation of
a commercial earmuff: Impact of the cushion modeling. Kevin Carillo
(Mech., École de Technologie Supérieure, 1100, rue Notre-Dame Ouest, Montreal, QC H3C 1K3, Canada, kevin.carillo.1@ens.etsmtl.ca), Franck C. Sgard (IRSST, Montreal, QC, Canada), and Olivier Doutres (Mech., École de Technologie Supérieure, Montreal, QC, Canada)
Passive earmuffs are commonly used when the sound level cannot be
reduced at the source. They are mainly characterized by their sound attenuation that can be either measured or simulated. In this work, the sound attenuation of a commercial earmuff is calculated using a finite element model
from 100 Hz to 5 kHz. Emphasis is put on the foam-filled cushion which is
the trickiest component to model because of its physical complexity. This
multiphasic cushion is modeled in a simplified way as an equivalent solid,
either isotropic or transverse isotropic in order to take into account the
added transverse stiffness due to the bulging of the cushion polymeric
sheath. The accuracy of these models is investigated by comparison with
measurements. The insertion loss (IL) predicted with the isotropic cushion
model is highly underestimated between 500 Hz and 2.5 kHz due to the
presence of an unrealistic mode of transverse deformation. It is found that
(1) neglecting the acoustic excitation on the cushion’s flanks of the isotropic
model or (2) using the transverse isotropic cushion model significantly
improves the simulated IL.
3pNSc14. Evaluation of noise exposures and hearing loss at a hammer
forge company. William J. Murphy (Hearing Loss Prevention Team, Centers for Disease Control and Prevention, National Inst. for Occupational
Safety and Health, 1090 Tusculum Ave., Mailstop C-27, Cincinnati, OH
45226-1998, wjm4@cdc.gov), Scott E. Brueck, Judith Eisenberg (Hazard
Evaluation and Tech. Assistance Branch, Centers for Disease Control and
Prevention, National Inst. for Occupational Safety and Health, Cincinnati,
OH), Edward L. Zechmann (Hearing Loss Prevention Team, Centers for
Disease Control and Prevention, National Inst. for Occupational Safety and
Health, Cincinnati, OH), and Edward F. Krieg (Statistics Team, Centers for
Disease Control and Prevention, National Inst. for Occupational Safety and
Health, Cincinnati, OH)
The NIOSH health hazard evaluation program evaluated employees’
exposures to high level continuous and impact noise at a hammer forge
company. Personal dosimetry data were collected from 38 employees and
noise exposure recordings were collected during two visits to the facility.
Extensive audiometric records were reviewed and trends for hearing loss,
threshold shifts and risk of hearing loss were assessed. The effectiveness of
hearing protection devices for hammer forging was evaluated with an acoustic test fixture. A longitudinal analysis was conducted on the audiometric
data set that included 4750 audiograms for 483 employees for the years
1981 to 2006. The analysis of the audiometric history for the employees
showed that 82% had experienced a NIOSH-defined hearing threshold shift
and 63% had experienced an OSHA-defined standard threshold shift. The
mean number of years from a normal baseline audiogram to a threshold shift
was about 5 years for a NIOSH threshold shift and was about 9 years for an
OSHA threshold shift. Overall hearing levels among employees worsened
with age and length of employment. Using the NIOSH audiometric test criteria in addition to the OSHA threshold shift criteria to assess threshold shifts could provide an opportunity for early intervention to prevent future hearing loss.
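The two shift criteria applied above can be written compactly; a sketch using their common definitions (OSHA STS: average shift at 2, 3, and 4 kHz of 10 dB or more; NIOSH significant threshold shift: 15 dB or more at any test frequency), which may differ in detail from the study's exact implementation:

```python
def osha_sts(baseline, current):
    """OSHA standard threshold shift: mean shift at 2, 3, and 4 kHz
    of 10 dB or more relative to baseline (one ear)."""
    freqs = (2000, 3000, 4000)
    shifts = [current[f] - baseline[f] for f in freqs]
    return sum(shifts) / len(shifts) >= 10.0

def niosh_shift(baseline, current,
                freqs=(500, 1000, 2000, 3000, 4000, 6000)):
    """NIOSH significant threshold shift: 15 dB or more at any
    audiometric test frequency (one ear)."""
    return any(current[f] - baseline[f] >= 15.0 for f in freqs)

baseline = {500: 5, 1000: 5, 2000: 10, 3000: 10, 4000: 15, 6000: 15}
current = {**baseline, 3000: 28}  # 18 dB shift at 3 kHz only
print(osha_sts(baseline, current), niosh_shift(baseline, current))
```

This hypothetical case trips the NIOSH criterion but not the OSHA one, illustrating why the NIOSH criterion flags shifts earlier, consistent with the 5- versus 9-year figures above.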
recommending noise reduction treatments. During the 55 years since I
began studying acoustics, there have been many changes in the issues we
face. Focusing on the importance of careful listening, this paper will consider major sources of changes in the outdoor noise environment, i.e.,
increased numbers and kinds of sources; increased levels of activity; and
increased desire for quiet.
TUESDAY AFTERNOON, 27 JUNE 2017
ROOM 210, 1:15 P.M. TO 3:40 P.M.
Session 3pPAa
Physical Acoustics: Chains, Grains, and Origami Nonlinear Metamaterials
Julian D. Maynard, Cochair
Physics, Penn State University, 104 Davey Lab, Box 231, University Park, PA 16802
Vincent Tournat, Cochair
LAUM, CNRS, Université du Maine, Av. O. Messiaen, Le Mans 72085, France
Chair’s Introduction—1:15
Invited Papers
1:20
3pPAa1. Amplitude-dependent shock wave propagation in three-dimensional microscale granular crystals. Morgan Hiraiwa and
Nicholas Boechler (Dept. of Mech. Eng., Univ. of Washington, Mech. Eng. Bldg., Box 352600, Seattle, WA 98195, boechler@uw.edu)
Ordered arrays of elastic spherical particles in contact, often referred to as granular crystals, have been used as a model system for
understanding the dynamics of granular media and explored as a type of designer material for acoustic wave tailoring applications. Due to
the Hertzian interactions between the particles, granular crystals composed of macroscale particles have been shown to support strongly
nonlinear phenomena including shock and solitary waves. In this presentation, we will describe recent progress in our studies of laser-generated shock wave propagation in self-assembled three-dimensional microscale granular crystals, where adhesive forces between the particles play a major role. Specific features studied include the dependence of the shock wave velocities and absorption on excitation
amplitude. Dynamic failure processes such as crater formation and spallation are also explored. The experimental measurements are compared with reduced-dimension discrete element model simulations. New understanding of strongly nonlinear phenomena in microgranular
media has potential applications to areas including energetic material dynamics, shock mitigation, and ultrasonic wave tailoring.
1:40
3pPAa2. Acoustics in disordered granular materials: From the particle scale to force networks. Karen Daniels (Dept. of Phys.,
North Carolina State Univ., Box 8202, Raleigh, NC 27695, kdaniel@ncsu.edu)
Granular materials are inherently heterogeneous, and continuum models of properties such as the shear modulus and sound speed often fail to quantitatively capture their dynamics. One likely reason, particularly in soft materials, is the transmission of forces via a
chain-like network of strong forces. I will present several methods of characterizing sound transmission within such materials, combining both high-speed imaging and particle-scale piezoelectric measurements. In experiments on compressed granular materials, we
observe that the amplitude of propagating sound is on average largest within particles experiencing the largest forces, due to the
increased particle contact area. In addition, we find that the particle-scale density of vibrational modes exhibits systematic changes associated with the amount of compression, shear strain, or disorder in the system. We have found that a helpful theoretical framework is to
consider a network representation in which the nodes (particles) are connected by weighted edges obtained from contact forces (visualized using photoelastic materials). Such network representations provide a mathematical framework which allows for the comparison of
features and dynamics at different spatial scales. Collectively, these measurements highlight the importance of this force chain network
in controlling the bulk properties of the material.
2:00
3pPAa3. Modelling the transmission of ultrasound pulses through grain-to-grain contacts. Bart Van Damme (Acoustics/Noise
Control, Empa, Ueberlandstrasse 129, Dübendorf 8600, Switzerland, bart.vandamme@empa.ch)
Wave propagation through packed spherical particles is characterized by two distinct mechanisms, depending on the frequency of
the wave content. Coherent wave pulses occur when the Hertzian contact model can be used, i.e., for frequencies low enough so that the
granules behave as rigid bodies. Above a certain frequency, a chaotic time signal is the result of diffusive energy transmission through
the grain contacts. This so-called coda wave is important for applications tracking the microstructure of materials, e.g., in nondestructive
testing. This work looks for parallels between the two approaches by investigating the transmission of short ultrasound pulses in the unit
cell of a granular material: two spheres in contact. Laser measurements on two identical large steel spheres show that the energy transmission happens in discrete steps, due to guided surface waves. The measured and modeled evolutions of the energy distribution are similar to that predicted by diffusive theory. However, we show that the Hertz contact law can be applied locally in the region of the contact
to quantify the pulse transmission, despite the fact that the entire sphere no longer behaves as a rigid body. This approach allows for the
design of a configurable non-linear waveguide.
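The Hertz contact law invoked above has the standard form F = (4/3) E* sqrt(R*) delta^(3/2) for elastic sphere contact; a minimal numerical sketch (the steel properties and sphere radius are assumed, not taken from the experiment):

```python
import math

def hertz_force(delta, r_eff, e_eff):
    """Hertzian normal contact force F = (4/3) * E* * sqrt(R*) * delta**1.5,
    with overlap delta, effective radius R*, and effective modulus E*."""
    return (4.0 / 3.0) * e_eff * math.sqrt(r_eff) * delta ** 1.5

# Two identical steel spheres: R = 5 cm, E = 210 GPa, nu = 0.3 (assumed)
R, E, nu = 0.05, 210e9, 0.3
r_eff = R * R / (R + R)          # effective radius R* = R/2
e_eff = E / (2 * (1 - nu ** 2))  # effective modulus for identical spheres
print(f"{hertz_force(1e-6, r_eff, e_eff):.1f} N")  # force at 1 um overlap
```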
2:20
3pPAa4. Nonlinear elastic waves in architected soft solids. Bolei Deng (SEAS, Harvard Univ., Cambridge, MA), Jordan R. Raney
(Dept. of Mech. Eng. and Appl. Mech., Univ. of Pennsylvania, Philadelphia, PA), Katia Bertoldi (SEAS, Harvard Univ., Cambridge,
MA), and Vincent Tournat (LAUM, CNRS, Université du Maine, Av. O. Messiaen, Le Mans 72085, France, vincent.tournat@univ-lemans.fr)
In this presentation, we analyze different regimes of elastic wave propagation in a family of architected soft solids, the rotating
square structures, known to exhibit a negative Poisson's ratio. We show that it is possible, via a discrete model of finite-size masses coupled
by soft and highly deformable elastic ligaments, to describe the nonlinear wave propagation of displacement and rotation modes in a
two-dimensional configuration. In turn, the geometrical characteristics and local elastic parameters of the architected structures can be
put in correspondence with the dispersive and nonlinear wave properties, thus allowing for the dispersion and nonlinearity management.
By exploring several designs and the influence of geometry, we show that the parameters of the governing nonlinear wave equations can
be controlled, and even the type of governing equation and its dominant nonlinearity can be modified. In particular, we report that for
several studied configurations, vector elastic solitons are predicted and experimentally observed. These results could be useful for the
design of nonlinear elastic metamaterials, aiming at controlling high amplitude vibrations and elastic waves, or achieving amplitude dependent operations on waves.
2:40
3pPAa5. Transformable origami-inspired acoustic waveguides. Katia Bertoldi (Harvard Univ., 29 Oxford St., Cambridge, MA
02138, bertoldi@seas.harvard.edu), Vincent Tournat (Harvard Univ., Le Mans, France), Sahab Babaee, and Johannes Overvelde (Harvard Univ., Cambridge, MA)
Using foldable origami-like structures, we design reconfigurable and switchable acoustic waveguides composed of interconnected
periodic arrays of hollow tubes to manipulate and guide sound. We demonstrate both numerically and experimentally that upon application of external deformation, the structure is folded and transformed into one-, two-, and three-dimensional waveguides in which sound
waves travel through the tubes. The proposed design expands the ability of existing acoustic waveguides by enabling tunability induced
by reversible deformation and folding shape transformation over a wide range of frequencies, opening avenues for the design of novel
tunable waveguides and adaptive sound filters.
Contributed Papers
3pPAa6. Numerical investigation on the effect of structure parameter of
acoustic field rotator on its operating bandwidth. Xiuhai Zhang, Zhiguo
Qu, Di Tian, and Zhi Liu (Key Lab. of Thermo-Fluid Sci. and Eng. of Ministry of Education, School of Energy and Power Eng., Xi’an Jiaotong Univ.,
No. 28, Xianning West Rd., Xi'an 710049, China, zhangxiuhai000@stu.xjtu.edu.cn)
The theory of transformation acoustics is introduced to show that designing an acoustic field rotator is feasible in practice. Then, numerical
research is conducted to investigate the effect of geometrical parameters on the high cut-off frequency, including the size and aspect ratio of the building block and the size, external diameter, and internal diameter of the acoustic field rotator. The results suggest that the building-block size of an acoustic field rotator is inversely proportional to its high cut-off frequency over the studied frequency range. The effect of the size of the acoustic field rotator lies mainly in the size of the building blocks rather than in their number. There is a linear relationship between the aspect ratio of the building block and the high cut-off frequency over the studied frequency range. In addition, the number of building-block layers affects the high cut-off frequency as well. The
3735
J. Acoust. Soc. Am., Vol. 141, No. 5, Pt. 2, May 2017
current numerical study is helpful to give a better guidance for designing the
acoustic field rotator.
3:20
3pPAa7. Comparison of results for an analogous acoustical and optical
scattering problem: The double sphere. Cleon E. Dean (Phys., Georgia
Southern Univ., PO Box 8031, Math/Phys. Bldg., Statesboro, GA 30461-8031, cdean@georgiasouthern.edu) and James P. Braselton (Mathematical
Sci., Georgia Southern Univ., Statesboro, GA)
A comparison is made between analogous scattering problems: the interactive scattering from a double sphere in both acoustics and electromagnetics. For initial testing purposes, the problem is simplified to two identical spheres with the same physical properties. The acoustical scatterer is modeled to match the optical characteristics of a previously studied optical scattering problem [G. W. Kattawar and C. E. Dean, Opt. Lett. 8, 48-50 (1983)]. Particular attention is given to configurations that produce large side-scattering responses due to resonance phenomena. The model for the acoustical scattering problem was discussed at a previous Acoustical Society of America meeting [C. E. Dean and J. P. Braselton, J. Acoust. Soc. Am. 139, 1121 (2016)].
Acoustics ’17 Boston
TUESDAY AFTERNOON, 27 JUNE 2017
ROOM 200, 1:20 P.M. TO 3:40 P.M.
Session 3pPAb
Physical Acoustics: General Topics in Physical Acoustics II
Brent O. Reichman, Cochair
Brigham Young University, 453 E 1980 N, #B, Provo, UT 84604
Sarah M. Young, Cochair
Physics, Brigham Young University-Idaho, ESC N203, Provo, UT 84601
Contributed Papers
1:20
3pPAb1. Time-resolved dynamics of micrometer-sized bubbles undergoing shape oscillations. Matthieu Guedra (Univ. Lyon, Universite Lyon 1, INSERM, LabTAU, F-69003, 151 cours Albert Thomas, Lyon 69424, France, matthieu.guedra@inserm.fr), Sarah Cleve, Cyril Mauger (Univ. Lyon, Ecole Centrale de Lyon, INSA de Lyon, CNRS, LMFA UMR 5509, F-69134 Ecully, France), Claude Inserra (Univ. Lyon, Universite Lyon 1, INSERM, LabTAU, F-69003, Lyon, France), and Philippe Blanc-Benon (Univ. Lyon, Ecole Centrale de Lyon, INSA de Lyon, CNRS, LMFA UMR 5509, F-69134 Ecully, France)
Bubbles in a cavitation cloud can exhibit shape oscillations when insonified at large acoustic pressure amplitudes; these oscillations are generally connected to collective effects of the bubble population (subharmonic emission, erratic motion of bubbles, coalescence, or fragmentation). The onset conditions and oscillating properties of such shape modes are analyzed experimentally through highly time-resolved dynamics of micrometric bubbles. Single bubbles with radii ranging from 30 to 80 micrometers are nucleated with a laser pulse, trapped in a 30 kHz ultrasonic field, and imaged using two triggered CCD cameras at an acquisition rate of 180 kfps. A large parametric study over bubble radius and driving amplitude allows recovery of the stable/unstable regions of the parametrically excited shape modes. Experimental evidence of nonzonal harmonics is reported, together with nonlinear modal interactions for sufficiently high driving amplitudes and large shape deformations, which are highlighted through (1) the subsequent excitation of nonresonant shape modes, (2) the triggering of translational motion of the bubble, and (3) an alteration of the spherical response of the bubble. [Work supported by the French National Research Agency, LabEx CeLyA (ANR-10-LABX-0060), and the ANR-MOST project CARIBBBOU (ANR-15-CE19-0003).]
1:40
3pPAb2. Triggering of surface modes by bubble coalescence at high pressure amplitudes. Sarah Cleve (Univ. Lyon, Ecole Centrale de Lyon, INSA de Lyon, CNRS, LMFA UMR 5509, F-69134 Ecully, 20 Ave. A. Einstein, Villeurbanne 69621, France, sarah.cleve@ec-lyon.fr), Matthieu Guedra (Univ. Lyon, Universite Lyon 1, INSERM, LabTAU, F-69003, Lyon, France), Cyril Mauger (Univ. Lyon, Ecole Centrale de Lyon, INSA de Lyon, CNRS, LMFA UMR 5509, F-69134 Ecully, France), Claude Inserra (Univ. Lyon, Universite Lyon 1, INSERM, LabTAU, F-69003, Lyon, France), and Philippe Blanc-Benon (Univ. Lyon, Ecole Centrale de Lyon, INSA de Lyon, CNRS, LMFA UMR 5509, F-69134 Ecully, France)
In order to study surface instabilities of small bubbles, a single spherical bubble is usually trapped in an ultrasound field and subjected to increasing acoustic pressure until the instability threshold is reached. In the current work, coalescence between two bubbles is used as a trigger for non-spherical oscillations. Experiments are conducted in water with air bubbles of radii ranging from 10 to 80 μm, at a driving frequency of 30 kHz, and captured at 67 kHz. While most of the literature deals with bubble coalescence at relatively low pressure amplitudes, implying spherical bubbles, coalescence at high pressure amplitudes (here up to 30 kPa) leads to surface instabilities during and after the coalescence. During the impact and the immediately following oscillations, transitory surface deformations appear. After this transitory period, the bubbles stabilize either in a purely spherical oscillation mode or in a stable surface mode. We analyze under which conditions each of the two cases applies. Lastly, apart from conventional coalescence of two bubbles into a single bubble, observations of long-lasting bouncing of a pair of bubbles are reported. [Work supported by the LabEx Centre Lyonnais d'Acoustique of the Universite de Lyon, operated by the French National Research Agency (ANR-10-LABX-0060/ANR-11-IDEX-0007).]
2:00
3pPAb3. The effects of finite amplitude drop shape oscillations on the
inference of material properties. Vahideh Ansari Hosseinzadeh and Ray
Holt (Mech. Eng., Boston Univ., 110 Cummington Mall, Boston, MA
02215, vansari@bu.edu)
Acoustic levitation of drops provides a non-contact means of isolation, stable positioning, and static and dynamic manipulation. Despite a long history of using drop shape oscillations to infer both surface and bulk material properties, and an equally long history of observed behaviors that lead to incorrect inference of those properties, researchers continue to employ finite-amplitude oscillations in their experiments. In this study, we use the inference of surface tension and viscosity of known glycerin-water mixtures to show that finite-amplitude oscillations are the dominant mechanism behind many reports of ill-behaved drop dynamics, including modal peak-splitting, vorticity, and strong field-coupling. Since this "bad" behavior leads to incorrect inferences of surface tension and viscosity, we show that small-amplitude oscillations yield correct inferences of these properties for known samples. Finally, we show results from experiments with bovine and human blood. [Work supported by NSF grant # 1438569.]
2:20
3pPAb4. To predict a thermoacoustic engine's limit cycle from its impedance measurement. Valentin Zorgnotti, Guillaume Penelet, Gaëlle Poignand (LAUM, 17 rue du Dr Leroy, Le Mans 72000, France, valentin.
zorgnotti@univ-lemans.fr), and Steven L. Garrett (Grad. Prog. in Acoust.,
Penn. State, State College, PA)
Thermoacoustic engines are self-oscillating systems that convert thermal energy into acoustic waves. Recent studies on such engines highlight the many nonlinear effects responsible for the engine's saturation, which leads to a limit cycle that may or may not be stable. These effects, however, are not yet understood well enough to accurately predict the amplitude of the limit-cycle oscillations. This work proposes a new approach, based on acoustic impedance measurements at large forcing amplitudes, to predict the steady-state limit amplitude of a given engine. The method allows one to predict an engine's saturation amplitude without studying its internal geometry in detail. In the case of a quarter-wavelength engine, the input impedance can easily be obtained with an impedance sensor, for example. Increasing the speaker's forcing leads to a nonlinear impedance that depends on the acoustic field amplitude. Once measured, this function contains information such as the limit-cycle amplitude, its stability, and the engine's efficiency as functions of parameters such as the applied heating and the stack position. First results obtained with this method are shown for a standing-wave prime mover. Eventually, this work should lead to steady-state predictions for any given, otherwise unknown, engine.
2:40
3pPAb5. Acoustic frequency splitting in thermoacoustically driven
coupled oscillators. Bonnie Andersen, Jacob H. Wright, Cory J. Heward,
Emily R. Jensen, and Justin S. Bridge (Phys., Utah Valley Univ., MS 179,
800 W University Pkwy, Orem, UT 84057, bonniem@uvu.edu)
Frequency splitting, or level repulsion, occurs near the point where two
resonant modes of coupled oscillators intersect as one parameter is varied
such that the resonance of one passes through the resonance of the other. A
thermoacoustic stack, which provides internal self-sustained oscillations,
placed inside the neck of a closed bottle-shaped resonator can set up standing waves of the coupled neck-cavity system. The neck behaves as a quarter-wave resonator because it is closed at the top of the bottle and open at
the bottom where it is attached to the cavity. The cavity being closed at the
bottom and mostly closed near the neck behaves as a half-wave resonator. A
one-dimensional wave equation with appropriately applied boundary conditions is used to generate solutions of the coupled neck-cavity system. These
solutions reveal mode splitting near the intersections of the uncoupled neck
and cavity modes. Thermoacoustic engines with bottle-shaped resonators
were tested while varying one of three geometric parameters: the neck
length, the cavity length, and the cavity radius. Graphs of the coupled solutions readily illustrate mode splitting of the coupled oscillator system, in agreement with experimental results.
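The quarter-wave neck / half-wave cavity coupling described above can be sketched with a lossless 1-D model. Matching pressure and volume velocity at the junction of a closed-top neck (length L_n, area S_n) and a closed-bottom cavity (L_c, S_c) gives the characteristic equation S_n sin(kL_n)cos(kL_c) + S_c cos(kL_n)sin(kL_c) = 0, whose roots repel near intersections of the uncoupled modes. This is a minimal sketch under assumed ideal boundary conditions, not the authors' model; the dimensions are illustrative.

```python
import numpy as np
from scipy.optimize import brentq

def coupled_modes(L_n, L_c, S_n, S_c, c=343.0, f_max=2000.0, n_scan=20000):
    """Resonances of a neck (closed at the top, length L_n, area S_n) joined
    to a cavity (closed at the bottom, length L_c, area S_c), from pressure
    and volume-velocity continuity at the junction:
        S_n*sin(k*L_n)*cos(k*L_c) + S_c*cos(k*L_n)*sin(k*L_c) = 0
    S_n -> 0 recovers the uncoupled quarter-wave neck and half-wave cavity."""
    def F(f):
        k = 2.0 * np.pi * f / c
        return (S_n * np.sin(k * L_n) * np.cos(k * L_c)
                + S_c * np.cos(k * L_n) * np.sin(k * L_c))
    fs = np.linspace(1.0, f_max, n_scan)
    vals = F(fs)
    roots = []
    for i in range(n_scan - 1):
        if vals[i] * vals[i + 1] < 0.0:   # bracket each sign change
            roots.append(brentq(F, fs[i], fs[i + 1]))
    return roots

# Neck and cavity whose uncoupled resonances coincide near 857 Hz: the
# coupled system yields two split roots instead of one degenerate frequency.
modes = coupled_modes(L_n=0.10, L_c=0.20, S_n=1e-4, S_c=25e-4, f_max=1200.0)
```

For L_n = 0.10 m and L_c = 0.20 m the uncoupled quarter-wave neck and half-wave cavity resonances coincide at about 857 Hz; the coupled equation instead returns two roots split to either side, the level repulsion described in the abstract.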
3:00
3pPAb6. Numerical simulation of key linear alternator performance
indicators under thermoacoustic-power-conversion conditions. Ahmed
Y. Abdelwahed (Mech. Dept., The American Univ. in Cairo, School of Sci.
& Eng., School of Sci. and Eng., American University in Cairo, 74 South
90th St., Fifth Settlement, New Cairo, Cairo 11835, Egypt, ahmed_yassin@
aucegypt.edu), A. H. Ibrahim Essawey (The American Univ. in Cairo,
School of Sci. & Eng., 11835 New Cairo, Cairo, Egypt.On leave from
Mech. Power Dept., Faculty of Eng., Cairo Univ., Egypt, New Cairo,
Egypt), and Ehab Abdel-Rahman (Professor of Phys., Dept. of Phys., The
American Univ. in Cairo, New Cairo, Cairo, Egypt)
Thermoacoustic power converters consist of a thermoacoustic heat engine and a linear alternator. The linear alternator converts the acoustic power generated by the thermoacoustic engine into electric output. Efficient and stable operation of a thermoacoustic power converter requires acoustic matching between the engine and the alternator, as well as matching between the linear alternator and the connected load. An experimental setup was built to measure and analyze linear alternator performance under different thermoacoustic power converter operating conditions. The effects of various design and operating factors on key linear alternator performance parameters, such as the mechanical stroke, the generated electric power, the acoustic-to-electric conversion efficiency, the mechanical motion loss, the electric loss, and the fluid-seal loss, were investigated experimentally and numerically. The experimental results were simulated using DeltaEC, and reasonable agreement was obtained.
3:20
3pPAb7. Effect of RC load on the performance of a three-stage looped
thermoacoustic engine. Tao Jin (Inst. of Refrigeration and Cryogenics,
Zhejiang Univ.; Key Lab. of Refrigeration and Cryogenic Technol. of Zhejiang Province, Rd. 38 West Lake District, Zhejiang University, Yuquan
Campus, Hangzhou, Zhejiang 310027, China, jintao@zju.edu.cn), Yi Wang,
Rui Yang, Jingqi Tan, and Ye Feng (Inst. of Refrigeration and Cryogenics,
Zhejiang Univ., Hangzhou, Zhejiang, China)
The thermoacoustic heat engine is a type of machine that converts thermal energy into acoustic energy, with the attractive characteristics of high reliability and environmental friendliness. This work proposes a three-stage looped thermoacoustic engine in which a compliance tube is utilized as the phase adjuster to realize high acoustic impedance and a near-traveling-wave acoustic field in the regenerator, which is significant for effective thermoacoustic conversion. To investigate its performance with low-grade thermal energy, the output acoustic power of the engine is measured with the variable-load method. It is found that as the load resistance decreases, the efficiency of the engine may rise, but the required heating temperature also rises. There is thus a trade-off between efficiency and heating temperature, and the relative Carnot efficiency is adopted as the main index for evaluating the performance of the system. A maximum relative Carnot efficiency of 12.6% (corresponding efficiency 3.2%) was achieved in our experiments when the heating temperature of all three stages was 120 °C.
TUESDAY AFTERNOON, 27 JUNE 2017
BALLROOM A, 1:20 P.M. TO 3:40 P.M.
Session 3pPAc
Physical Acoustics: Topics in Physical Acoustics (Poster Session)
Michael R. Haberman, Cochair
Applied Research Laboratories, The University of Texas at Austin, 10000 Burnet Rd., Austin, TX 78758
Kevin M. Lee, Cochair
Applied Research Laboratories, The University of Texas at Austin, 10000 Burnet Road, Austin, TX 78758
All posters will be on display from 1:20 p.m. to 3:40 p.m. To allow contributors in this session to see the other posters, authors of odd-numbered papers will be at their posters from 1:20 p.m. to 2:30 p.m. and authors of even-numbered papers will be at their posters from
2:30 p.m. to 3:40 p.m.
Contributed Papers
3pPAc1. Metal scaffolds for acoustic trapping of polystyrene particles in water suspensions. Iciar Gonzalez and Zuriñe Bonilla del Río (Consejo Superior de Investigaciones Científicas CSIC, Serrano 144, Madrid 28006, Spain, iciar.gonzalez@csic.es)
The concept of ultrasonic 3D caging is based on wells designed to host three orthogonal half-wave acoustic resonances defined by their three dimensions, generating a single pressure node in the center where particles collect and aggregate. Replacing the planar walls of these cavities with curved ones changes the pressure patterns, generating different effects on the particles exposed to the acoustic field. Here we present an experimental study of the acoustic behavior of aqueous suspensions of micron-sized polystyrene particles (Cv ~ 1%) exposed to ultrasound at a frequency f ~ 1 MHz in a metal scaffold made of stainless steel wires cross-linked orthogonally to form a mesh with a 1 mm opening (somewhat larger than λ/2). Once the acoustic field is applied, the particles are rapidly attracted to the rods of the mesh from distances much larger than their diameter (Up = 6mm), where they are trapped and remain adhered to the scaffold during the acoustic actuation and even afterwards, providing stable aggregates. Hydrodynamic mechanisms associated with viscous disturbances and mutual radiation pressure induced by the metal rods could be responsible for this massive trapping effect on the polymeric particles, according to previous theoretical and experimental studies carried out by the authors in aerosols.
3pPAc2. Improving infrasonic location estimates for underground nuclear explosions. Fransiska K. Dannemann, Philip Blom (Los
Alamos National Lab., P.O. Box 1663, MS D446, Los Alamos, NM 87545,
fransiska@lanl.gov), Junghyun Park (Southern Methodist Univ., Dallas,
TX), Omar Marcillo (Los Alamos National Lab., Los Alamos, NM), Brian
W. Stump (Southern Methodist Univ., Dallas, TX), and Il-Young Che (Korean Inst. of GeoSci. and Mineral Resources, Daejeon, South Korea)
Infrasound data from underground nuclear explosions conducted by North
Korea in 2006, 2009, 2013 and 2016 were recorded on six seismo-acoustic
arrays co-operated by Southern Methodist University (SMU) and the Korean
Institute of Geosciences and Mineral Resources (KIGAM). No infrasound signals were observed during the 2006 test, while signals from the others have
been used to determine event locations and yield estimations. Prior location
studies have demonstrated that wind corrections for back-azimuth deviation improve location estimates. Additionally, recent improvements to the Bayesian Infrasonic Source Localization (BISL) methodology have been shown to reduce 90% confidence contours for location by 40% through the use of propagation-based likelihood priors for celerity and back-azimuth deviation derived from seven years of archival atmospheric specifications. Relocations of the 2009,
2013 and 2016 nuclear explosions will be presented to demonstrate the application of BISL to underground nuclear explosions.
3pPAc3. Seismo-Acoustic numerical simulation of North Korea nuclear
tests. Gil Averbuch (Dept. of GeoSci. and Eng., Delft Univ. of Technol.,
Graswinckelstraat 64, Delft 2613 PX, Netherlands, g.averbuch@tudelft.nl),
Jelle D. Assink, Pieter S. Smets, and Läslo G. Evers (R&D Dept. of Seismology and Acoust., KNMI, De Bilt, Netherlands)
A seismo-acoustic event is one in which seismic energy is transferred to acoustic energy in the oceans and/or atmosphere, and vice versa. Although measurements confirm the coupling between the media, a numerical investigation of these events may provide a deeper understanding of the phenomena. In this presentation, a finite-element, elasto-acoustic Fast Field Program (FFP) is presented and applied to the recent 2013 and January 6, 2016 North Korean underground nuclear tests. The aim of this study is to model the elastic and acoustic wave fields of these events and to use this information to estimate the source depths. The modeled depths will be compared to those from seismo-acoustic observations.
3pPAc4. Photoacoustic effect of ethene: Sound generation due to plant
hormone gasses. David W. Ide and Han Jung Park (Chemistry, Univ. of
Tennessee at Chattanooga, 615 McCallie Ave., Chattanooga, TN 37403,
hanjung-park@utc.edu)
Ethene (C2H4), which is produced by plants as they mature, was studied using photoacoustic spectroscopy. Detection of trace amounts of C2H4 diluted in N2 gas was also examined. The gas was tested under varying temperature, gas concentration, gas-cell length, and laser power to determine their effects on the photoacoustic signal, the ideal conditions for detecting trace amounts of gas, and the concentration of C2H4 produced by an avocado and a banana. A detection limit of 10 ppm was determined for pure C2H4. Concentrations of 5% and 13% (by volume) of C2H4 were detected for a ripening avocado and banana, respectively, in a closed space.
3pPAc5. Cavitation detection in a nozzle flow. Huy K. Do, Purity DeleOni, Tony Tang, Daniel Poe, James Bird, Sheryl Grace, Emily Ryan, and
Ray Holt (Mech. Eng., Boston Univ., 44 Parker Hill Ave., Apt. 2, Boston,
MA 02120, xhuydo@bu.edu)
The presence of cavitation inside fuel injector nozzles has been linked not only to damage associated with cavity collapse near the walls but also, more intriguingly, to improved spray atomization. Previous studies have shown that cavitation is associated with increased spray angle. Our goal is to investigate the underlying mechanics. In this talk, we describe our initial efforts to employ both acoustic techniques (passive cavitation detection, or PCD) and optical techniques (optical cavitation detection, or OCD) to characterize nozzle cavitation. Experiments are conducted with acrylic nozzles of various geometries. Unfocused single-element transducers are used for PCD, while digital imaging is used for OCD. Cavitation onset thresholds and development are studied as functions of flow rate, nozzle geometry, and upstream fluid preparation. Cavitation characterization results will be compared with an in-house computational code being developed to model cavitation in fuel injectors.
3pPAc6. A transfer function approach for measuring the characteristic
impedance and propagation constant of porous materials. Zhehao Huang
and Xiaolin Wang (Lab. of Noise and Vib. Res., Inst. of Acoust., Chinese
Acad. of Sci., 21 Beisihuanxi Rd., Beijing 100190, China, wangxl@mail.
ioa.ac.cn)
A four-microphone, one-load (4M1L) transfer function method using an impedance tube is proposed for estimating the characteristic impedance and propagation constant of porous materials, with reference to the standard two- and four-microphone methods (ISO 10534-2, ASTM E2611-09). The material in this single-measurement method does not have to be geometrically symmetrical and homogeneous. Even if the material is geometrically asymmetrical and inhomogeneous, like a multi-layered system, an equivalent-impedance model can be assumed. Moreover, the method can also be used in the presence of a mean flow. First, in this presentation, the measurement theories of the conventional two-microphone transfer function (2M2L) and the three- and four-microphone transfer matrix (3M2L, 4M2L) methods are discussed and compared with the 4M1L method. Second, direct measurements of various multi-layered materials mounted against a hard termination are used to verify the method and the material characterization by the equivalent-impedance model. Last, measurement and calculation results are presented and compared with the conventional 4M2L method. The 4M1L method can be helpful for measuring the acoustic properties of sound-absorbing materials.
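For context, the transfer-function step shared by this family of methods can be sketched for the simplest case, the two-microphone method of ISO 10534-2, where the normal-incidence reflection coefficient follows from the transfer function H12 = p2/p1 between the two upstream microphones. This is a generic sketch of the standard method, not the proposed 4M1L procedure.

```python
import numpy as np

def reflection_coefficient(H12, f, x1, s, c=343.0):
    """Normal-incidence reflection coefficient from the transfer function
    H12 = p2/p1 measured between two microphones in an impedance tube
    (two-microphone method, in the style of ISO 10534-2).
    f  : frequency (Hz)
    x1 : distance from the sample face to the farther microphone (m)
    s  : microphone spacing (m)"""
    k = 2 * np.pi * f / c
    H_I = np.exp(-1j * k * s)   # transfer function of the incident wave alone
    H_R = np.exp(+1j * k * s)   # transfer function of the reflected wave alone
    return (H12 - H_I) / (H_R - H12) * np.exp(2j * k * x1)

def absorption_coefficient(R):
    """Normal-incidence absorption coefficient from the reflection coefficient."""
    return 1.0 - np.abs(R) ** 2
```

The 4M1L method of the abstract builds on the same ingredients, adding microphones on the downstream side of the sample so that the characteristic impedance and propagation constant can be extracted from a single measurement.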
3pPAc7. Fiber maps from acoustic anisotropy in rodent cardiac tissue.
Michelle L. Milne (Phys., St. Mary’s College of Maryland, 47645 College
Dr., St. Mary’s City, MD 20686-3001, mlmilne@smcm.edu) and Charles S.
Chung (Physiol., Wayne State Univ., Detroit, MI)
Previous studies have demonstrated that 3D myocardial fiber maps of
excised large mammal hearts can be generated from ultrasonic images by utilizing the acoustic anisotropy of cardiac tissue. The goal of this paper is to
demonstrate that detection of acoustic anisotropy and the creation of myocardial fiber maps using ultrasound is also feasible in rodents. Acoustic anisotropy of rat myocardium was confirmed using 2-mm diameter cores taken
from the left-ventricular free wall using a 21 MHz probe (VisualSonics
Vevo210) in B-mode. The relationship between fiber orientation and ultrasonic backscatter was obtained. These data were confirmed in segments of
left ventricular free wall that were scanned and subsequently histologically
sectioned serially from the epi- to endo-cardium. Subsequently, a series of
long-axis images were taken from intact rat hearts to generate 3D rodent fiber
maps. Preliminary data from mouse hearts using a 40 MHz probe produces
similar results. We conclude that it is feasible to obtain a cardiac fiber map
from ex vivo rodent hearts using echocardiography. Further development of
this method may allow for in-vivo fiber direction analysis in live rodents.
[Work supported by the American Heart Association (14SDG20100063 to
CSC) and a grant from The Patuxent Partnership (MLM).]
3pPAc8. Rayleigh surface wave in a porothermoelastic solid half-space.
Baljeet Singh (Dept. of Mathematics, Post Graduate Government College,
Sector 11, Chandigarh 160011, India, bsinghgc11@gmail.com)
In the present paper, the Rayleigh wave at a stress free thermally insulated surface of a generalized porothermoelastic solid half-space is considered. The governing equations of generalized porothermoelasticity are
solved for general surface wave solution. The particular solutions satisfying
the required radiation conditions are obtained. These solutions are applied
to boundary conditions at stress free thermally insulated surface. In order to
satisfy the relevant boundary conditions, a secular equation for the Rayleigh wave speed is obtained. Numerical simulations are carried out using experimental data given by Yew and Jogi (1976) [J. Acoust. Soc. Am. 60, 2-8]. The wave speed is computed to observe the effects of frequency, porosity, coefficients of thermal expansion, and thermoelastic coupling coefficients.
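For orientation, the structure of such a secular equation can be seen in the purely elastic limit, where the classical Rayleigh equation has a single physical root below the shear speed. This is a minimal sketch; the porothermoelastic secular equation of the abstract is considerably more involved and frequency dependent.

```python
import numpy as np
from scipy.optimize import brentq

def rayleigh_speed(c_l, c_t):
    """Rayleigh surface-wave speed for an isotropic elastic half-space.
    With eta = c/c_t and q = (c_t/c_l)**2, the classical secular equation
        eta**6 - 8*eta**4 + 8*(3 - 2*q)*eta**2 - 16*(1 - q) = 0
    has a single physical root in 0 < eta < 1."""
    q = (c_t / c_l) ** 2
    def f(eta):
        return (eta**6 - 8.0*eta**4 + 8.0*(3.0 - 2.0*q)*eta**2
                - 16.0*(1.0 - q))
    # f < 0 at eta -> 0 and f = 1 at eta = 1, so the root is bracketed.
    return c_t * brentq(f, 1e-6, 1.0 - 1e-9)
```

For a Poisson solid (ν = 0.25, i.e., c_l = √3 c_t) this returns the familiar c_R ≈ 0.92 c_t.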
3pPAc9. Bianisotropic acoustic metasurfaces for independent control of
reflection and transmission wave. Choonlae Cho and Namkyoo Park
(Elec. and Comput. Eng., Seoul National Univ., Bldg. 301-913, 1 Gwanakro, Gwanak-gu, Seoul 08826, South Korea, lemon03@snu.ac.kr)
We propose bianisotropic acoustic metasurfaces that manipulate the reflected and transmitted wavefronts independently. We design a one-dimensional bianisotropic meta-atom controlling the density (ρ), the inverse of the bulk modulus (B⁻¹), and the bianisotropy (ξ) near a zero-index point in water, based on the separation of characteristic oscillations. Through a derivation of the density, bulk modulus, and bianisotropy as functions of the S-parameters, we show full access to the reflection and transmission amplitude and phase in the tailored bianisotropic media. Utilizing the bianisotropic meta-atom, we numerically demonstrate independent control of the reflected and transmitted waves. Furthermore, a design process for metasurfaces with independent manipulation of forward- and backward-reflected waves is presented.
3pPAc10. Generation of ultrasonic finite-amplitude waves through a
multiple scattering medium by time reversal in a waveguide. Gonzalo A.
Garay, Nicolas Benech, Carlos Negreira, and Yamil Abraham (Instituto de
Fısica, Facultad de Ciencias, UdelaR, Igua 4225, Montevideo, Montevideo
11400, Uruguay, ggaray@fisica.edu.uy)
Time-reversal processing has been studied and applied in several acoustical systems. Two particular systems have gained our attention: an acoustical waveguide and a multiple scattering medium. Applied in a waveguide, time reversal can be employed to generate finite-amplitude waves using low-power electronics. We used seven low-power ultrasonic transducers attached to the waveguide. After a 1-bit time reversal, the waveform clearly shows the time-trace distortion typical of shock waves. We analyzed the spectrum of the focal spot; as expected in a nonlinear regime, it showed higher-order harmonics. We find that for low input amplitude levels, the second harmonic has a lower amplitude at the focal point than in the surrounding region. As the input amplitude increases, the second harmonic's amplitude at the focal point reaches its maximum level. In a second stage, we interposed a multiple scattering medium, with a width larger than the mean free path, between the guide and the receiver. The nonlinear wave is still present but with lower amplitude. However, we observe a narrower focal spot with reduced side lobes. Thus, the multiple scattering medium improves the quality of the acoustic focusing while still allowing the formation of shock waves.
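The 1-bit retrofocusing step can be sketched numerically with synthetic impulse responses (a toy stand-in for the reverberant waveguide, with invented random echo patterns): re-emitting only the sign of each time-reversed response still concentrates energy at the focus, which is why low-power 1-bit electronics suffice.

```python
import numpy as np

rng = np.random.default_rng(0)
n_tx, n_samples = 7, 4000

def random_ir():
    """Sparse random echo train: a toy stand-in for the impulse response
    between one transducer and a point in a reverberant waveguide."""
    h = np.zeros(n_samples)
    taps = rng.choice(n_samples, size=60, replace=False)
    h[taps] = rng.normal(size=60)
    return h

# Impulse responses from each of the 7 transducers to the focal point.
h_focus = [random_ir() for _ in range(n_tx)]

# 1-bit time reversal: re-emit only the sign of the time-reversed response.
emissions = [np.sign(h[::-1]) for h in h_focus]

# Field at the focus: sum of each emission convolved with its own response;
# the retrofocused peak appears at lag n_samples - 1.
field_focus = sum(np.convolve(e, h) for e, h in zip(emissions, h_focus))
peak = abs(field_focus[n_samples - 1])

# Field at an uncorrelated control point (independent impulse responses).
h_other = [random_ir() for _ in range(n_tx)]
field_other = sum(np.convolve(e, h) for e, h in zip(emissions, h_other))
side = np.max(np.abs(field_other))
```

The peak-to-background contrast grows with the number of transducers and echoes; in the experiment, the multiple scattering medium plays the role of adding effective aperture, narrowing the focal spot.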
3pPAc11. Finite element models of crystallized white dwarf cores: A
gateway to undergraduate physical acoustics and computational modeling of complex systems. Kenneth A. Pestka II, Robert C. Highley, and
Laura K. Deale (Chemistry and Phys., Longwood Univ., 201 High St.,
Farmville, VA 23909, pestkaka@longwood.edu)
In this work we present details of several finite element (FE) models of
white dwarf stars with cores composed of layered crystalline carbon, oxygen
and neon. The FE models, produced by undergraduate physics majors, are
constructed using a commercially available software package Femap with
NX Nastran. These models can be used to understand the effect of stellar
composition and crystallization ratios on white dwarf behavior including
vibrational modes, surface velocity and variation in luminosity. While the
nature of these ultra-dense stellar remnants can appear quite exotic, the
physical acoustic principles required to build and analyze the FE models are
directly related to those commonly utilized by research acousticians. This
gateway project was designed for undergraduate physics majors in order to
illustrate this connection and to encourage interest in applied physical
acoustics. The inherent interdisciplinary nature of the project also provides
an opportunity for undergraduate physics majors to explore fields that are
often considered disparate while improving their computational modeling
skills.
3pPAc12. Inhibition of Rayleigh-Benard convection through acceleration modulation for thermoacoustic devices. Anand Swaminathan (Graduate Program in Acoust., The Penn State Univ., 201 Appl. Sci. Bldg.,
University Park, PA 16802, azs5363@psu.edu), Steven L. Garrett (151 Sycamore Dr., State College, PA), and Robert W. Smith (Appl. Res. Lab., The
Penn State Univ., State College, PA)
The ability to dynamically stabilize Rayleigh-Benard convection using
acceleration modulation is of interest to groups who design and study thermoacoustic machines, as the introduction of unwanted convection can have
deleterious effects on the desired operation and efficiency of the device.
These performance issues caused by suspected convective instability have
been seen both in traveling wave thermoacoustic refrigerators and in cryogenic pulse tube chillers. This presentation reports the results of an experiment intended to determine the vibratory, fluidic, and geometric conditions
under which a small, rectangular container of statically unstable fluid may
be stabilized by vertical vibration, evaluating the computational methods of
R. M. Carbo [J. Acoust. Soc. Am. 135, 654 (2014)]. Measurements are
obtained using a long-displacement kinematic shaker of a unique design
with the convecting gas characterized using both thermal transport measurements and flow visualization employing tracer particles illuminated by a
diode laser light sheet phase-locked to the shaker. [Work supported by the
Julian Schwinger Foundation for Physics Research, the Pennsylvania Space
Grant Consortium Graduate Research Fellowship, and the Paul S. Veneklasen Research Foundation.]
3pPAc13. Computation of nonlinear acoustic waves using smoothed
particle hydrodynamics. Yong Ou Zhang (School of Transportation,
Wuhan Univ. of Technol., Wuhan, China), Sheng Wang (School of Automotive Eng., Wuhan Univ. of Technol., Wuhan, China), Zhixiong Gong
(School of Naval Architecture and Ocean Eng., Huazhong Univ. of Sci. and
Technol., Webster Physical Sci. 754, Pullman, WA 99164-2814, Pullman,
Washington 99164-2814, zhixiong.gong@wsu.edu), Tao Zhang, Tianyun Li
(School of Naval Architecture and Ocean Eng., Huazhong Univ. of Sci. and
Technol., Wuhan, Hubei Province, China), and Qing Zhi Hou (School of
Comput. Sci. and Technol., Tianjin Univ., Tianjin, China)
A Lagrangian approach for solving nonlinear acoustic wave problems is
presented with direct computation from smoothed particle hydrodynamics.
The traditional smoothed particle hydrodynamics method has been applied to linear acoustic wave propagation. However, nonlinear acoustic
problems are common in medical ultrasonography, sonic boom research,
and acoustic levitation. Smoothed particle hydrodynamics is a Lagrangian
meshfree particle method that shows advantages in modeling nonlinear phenomena, such as the shock tube problem, and other nonlinear problems with
material separation or deformable boundaries. The method is used to solve
the governing equations of fluid dynamics for simulating nonlinear acoustics. The present work also tests the method in solving the nonlinear simple
wave equation based on Burgers’ equation. Effects of initial particle spacing, kernel length, and time step are then discussed based on the wave propagation simulation. Different kernel functions are also evaluated. The
results of numerical experiments are compared with the exact solution to
confirm the accuracy, convergence, and efficiency of the Lagrangian
smoothed particle hydrodynamics method.
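The Lagrangian character of the method can be illustrated on the simple-wave (inviscid Burgers) problem the abstract mentions: in Lagrangian form Du/Dt = 0, so each particle advects at its own speed, and a kernel sum reconstructs the field at any point. The sketch below is a minimal illustration, not the authors' scheme; the Gaussian kernel, Shepard normalization, and all parameters are assumptions.

```python
import numpy as np

def kernel(r, h):
    # Gaussian smoothing kernel (1D), one common SPH choice
    return np.exp(-(r / h) ** 2) / (h * np.sqrt(np.pi))

# particles on a periodic domain [0, 2*pi)
n = 400
x0 = np.linspace(0.0, 2 * np.pi, n, endpoint=False)
u0 = 0.1 * np.sin(x0)            # small-amplitude simple wave
t = 1.0                          # well before shock formation (t_sh = 10 here)

# Lagrangian step: for the inviscid Burgers equation Du/Dt = 0, each
# particle moves at its own speed, so the update is exact along characteristics
xp = (x0 + u0 * t) % (2 * np.pi)

def sph_eval(xq, xp, up, h=0.05):
    # Shepard-normalized kernel interpolation at a query point
    dx = (xp - xq + np.pi) % (2 * np.pi) - np.pi   # periodic distance
    w = kernel(dx, h)
    return np.sum(w * up) / np.sum(w)

xq = 1.0
u_sph = sph_eval(xq, xp, u0)

# exact simple-wave solution solves u = u0(xq - u*t) implicitly;
# fixed-point iteration converges since |0.1*t*cos| < 1
u_exact = 0.0
for _ in range(200):
    u_exact = 0.1 * np.sin(xq - u_exact * t)
```

The particle values remain exact pre-shock; the only error is the kernel interpolation, which shrinks with the smoothing length and particle spacing.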
3pPAc14. State changes in lipid interfaces observed during cavitation.
Shamit Shrivastava and Robin Cleveland (Univ. of Oxford, Old Rd. Campus
Res. Bldg., Oxford OX3 7DQ, United Kingdom, shamit.shrivastava@eng.ox.ac.uk)
Here we investigate the cavitation phenomenon at a lipid interface of
multilaminar vesicles (MLVs) subjected to acoustic shock waves. The lipid
membranes contain a fluorescent dye, Laurdan, which produces a fluorescence emission sensitive to the thermodynamic state of the interface. Fluorescence emissions were measured at 438 nm and 470 nm using two
photomultiplier tubes (with 8 MHz bandwidth) from which the temporal
evolution of the interface’s thermodynamic state was determined with
3740
J. Acoust. Soc. Am., Vol. 141, No. 5, Pt. 2, May 2017
submicrosecond resolution. Acoustic emissions were recorded simultaneously in order to detect the presence of cavitation. Different lipids were
used to prepare the MLVs in order to observe the cavitation phenomenon as a
function of the state of the interface. It was deduced that the interface
behaves as an adiabatic system decoupled from the bulk, where the entropy
increase due to vaporization during cavitation is compensated by the entropy decrease resulting from condensation and dehydration of the lipids.
These results show that cavitation physics critically depends on the thermodynamics of the interface. While applied here on a simple system of pure
lipid MLVs, the thermodynamic approach is applicable to native biological
membranes and to cavitation phenomena in general. [Work supported by UK
EPSRC EP/L024012/1.]
3pPAc15. Lagrangian meshfree particle method for modeling acoustic
wave propagation in moving fluid. Yong Ou Zhang (School of Transportation, Wuhan Univ. of Technol., Wuhan, Hubei, China), Qing Zhi Hou
(School of Comput. Sci. and Technol., Tianjin Univ., Tianjin, China), Zhixiong Gong (School of Naval Architecture and Ocean Eng., Huazhong Univ.
of Sci. and Technol., Webster Physical Sci. 754, Pullman, WA 99164-2814, zhixiong.gong@wsu.edu), Tao Zhang,
Tianyun Li (School of Naval Architecture and Ocean Eng., Huazhong Univ.
of Sci. and Technol., Wuhan, Hubei Province, China), Jian Guo Wei
(School of Software Eng., Tianjin Univ., Tianjin, China), and Jian Wu Dang
(School of Comput. Sci. and Technol., Tianjin Univ., Tianjin, China)
Introducing the Lagrangian approach to acoustic simulation is expected
to reduce the difficulty in solving problems with deformable boundaries,
complex topologies, or multiphase media. Specific examples are sound generation in the vocal tract and bubble acoustics. As a Lagrangian meshfree
particle method, the traditional smoothed particle hydrodynamics (SPH)
method has been applied in acoustic computation, but only in a quiescent medium. This study presents two Lagrangian approaches for modeling sound
propagation in moving fluid. In the first approach, which can be regarded as
a direct numerical simulation method, both standard SPH and the corrective
smoothed particle method (CSPM) are utilized to solve the fluid dynamic
equations and obtain pressure change directly. In the second approach, both
SPH and CSPM are used to solve the Lagrangian acoustic perturbation
equations; the particle motion and the acoustic perturbation are separated
and controlled by two sets of governing equations. Subsequently, sound
propagation in flows with different Mach numbers is simulated with several
boundary conditions including perfectly matched layers. Computational
results show clear Doppler effects. The two Lagrangian approaches demonstrate convergence to exact solutions, and the different boundary conditions are shown to be effective.
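The Doppler effect such computations exhibit can be checked against elementary kinematics: for a stationary time-harmonic source in a uniform flow of Mach number M, the radiated wavelength in the lab frame is stretched to (1+M)c/f downstream and compressed to (1-M)c/f upstream. A short sanity check with illustrative values (not the authors' cases):

```python
c, f = 343.0, 1000.0        # speed of sound (m/s), source frequency (Hz)
M = 0.3                     # flow Mach number (illustrative)
lam_down = (1 + M) * c / f  # downstream wavelength (m): waves ride the flow
lam_up = (1 - M) * c / f    # upstream wavelength (m): waves fight the flow
```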
3pPAc16. Diffraction effects of an acoustic beam propagating through a
flowing medium. Kjell E. Frøysa (Elec. Eng., Western Norway Univ. of
Appl. Sci., Postbox 7030, Bergen 5020, Norway, kef@hvl.no)
Ultrasonic transit-time-difference flow meters are today industrially
accepted for custody transfer measurement of oil and natural gas. Such
meters are now also planned for subsea use, where calibration
possibilities are few. In such subsea applications, the speed of sound measured by these meters will be a powerful input for estimating the density and
calorific value of the flowing oil or gas. The ultrasonic transit time measurements in such meters are carried out in a flowing oil or gas, over a range typically between 6 and 40 inches. For precise transit time measurements over
such ranges, diffraction corrections may be of high importance. Diffraction
effects for an acoustic beam in a flowing medium are therefore studied
numerically. The flow direction and the propagation direction of the acoustic beam will be different. The investigation is based on the solution for the
acoustic field from a point source in a homogeneous flowing medium.
Acoustic beams are modeled using two-dimensional arrays of point sources.
The results will be compared to the no-flow case in order to identify effects
of flow on the diffraction correction for application of precise transit time
measurements in a flowing medium.
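The point-source approach described here can be sketched with the standard convected (Prandtl-Glauert-transformed) free-space Green's function for uniform subsonic flow. This is a sketch under stated assumptions: the formula is the textbook result rather than necessarily the author's formulation, a line array stands in for the two-dimensional array, and the geometry, frequency, and Mach number are illustrative.

```python
import numpy as np

def convected_green(xs, ys, xr, yr, k, M):
    # time-harmonic point source in uniform subsonic flow along +x
    # (Prandtl-Glauert form, e^{-i w t} convention assumed);
    # reduces to the free-space Green's function exp(ikR)/(4*pi*R) at M = 0
    b2 = 1.0 - M ** 2
    dx, dy = xr - xs, yr - ys
    Rb = np.sqrt(dx ** 2 + b2 * dy ** 2)
    return np.exp(1j * k * (Rb - M * dx) / b2) / (4 * np.pi * Rb)

k = 2 * np.pi * 200e3 / 1480.0          # 200 kHz in a liquid (illustrative)
M = 0.01                                # axial Mach number of the flow
ys = np.linspace(-0.01, 0.01, 201)      # line array standing in for a piston
xr, yr = 0.3, 0.0                       # on-axis receiver, 0.3 m range

p_flow = sum(convected_green(0.0, y, xr, yr, k, M) for y in ys)
p_still = sum(convected_green(0.0, y, xr, yr, k, 0.0) for y in ys)
flow_effect_dB = 20 * np.log10(abs(p_flow) / abs(p_still))
```

Comparing the summed field with and without flow isolates the flow's contribution to the diffraction correction, which is the quantity of interest for precise transit-time work.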
3pPAc17. A novel acoustic cell processing platform for cell concentration and washing. Jason P. Dionne (Flodesign Sonics, 499 Bushy Hill Rd., Simsbury, CT 06070, j.dionne@fdsonics.com), Brian Dutra, Kedar C. Chitale, Goutam Ghoshal, Chris Leidel (Flodesign Sonics, Wilbraham, MA), and Bart Lipkens (Flodesign Sonics, Springfield, MA)
FloDesign Sonics has developed a technology to enable a single use
(gamma irradiated) continuous cell concentration and wash application for
manufacturing of cell-based therapies. The device has been designed to be
able to process several liters of a suspended cell culture, e.g., T-cells, at concentrations of 1 to 10 million cells/ml. The cell suspension flows through the device and the acoustic radiation force field is used to trap and hold the cells
in the acoustic field. After concentrating the cells, one or multiple washing
steps are accomplished by flowing the washing fluid through the device,
using the acoustic field to trap the cells while displacing the original cell
culture fluid. The holdup volume of the device is about 30 ml. Results are
shown for prototypes with a 1 × 0.75 inch flow chamber driven by 2 MHz
PZT-8 transducers operating at flow rates of 1-2 L/h. Measured cell recoveries of 90% have been achieved with concentration factors of 20 to 50 for
Jurkat T-cell suspensions, depending on cell concentration and initial volume of the cell suspension. Scaling strategies used previously for cell clarification will be used to scale up the current cell concentration device to
accommodate larger volumes.
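A back-of-envelope mass balance makes the reported concentration factors plausible: if essentially all trapped cells are recovered in the ~30 ml holdup volume, the concentration factor is roughly recovery × (processed volume / holdup volume). The feed volume below is illustrative, not the authors' data.

```python
holdup_ml = 30.0     # device holdup volume from the abstract (~30 ml)
recovery = 0.90      # reported fraction of cells recovered
feed_ml = 1000.0     # processed suspension volume (illustrative, 1 L)

# all recovered cells assumed to end up in the holdup volume
concentration_factor = recovery * feed_ml / holdup_ml
```

Feeds of roughly 0.7-1.7 L put this estimate inside the reported 20-50 range.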
3pPAc18. Effect of ultrasound pressure and bubble-bubble interaction
on the nonlinear attenuation and sound speed in a bubbly medium.
Amin Jafari Sojahrood (Dept. of Phys., Ryerson Univ., 350 Victoria St., Toronto, ON M5B2K3, Canada, amin.jafarisojahrood@ryerson.ca), Qian Li
(Biomedical Eng., Boston Univ., Boston, MA), Hossein Haghi, Raffi Karshafian (Dept. of Phys., Ryerson Univ., Toronto, ON, Canada), Tyrone M.
Porter (Biomedical Eng., Boston Univ., Boston, MA), and Michael C.
Kolios (Dept. of Phys., Ryerson Univ., Toronto, ON, Canada)
The presence of bubbles changes the attenuation and sound speed of a
medium. These changes in medium properties depend on the nonlinear
behavior of the bubbles, which is not well understood. Previous studies
employed linear models for the calculation of the attenuation and sound
speed of bubbly media. These predictions are not valid in the regime of
nonlinear oscillations. In addition, bubble-bubble interactions are often
neglected. In this work, we have numerically simulated the attenuation and
sound speed of a bubbly medium by solving a recently developed nonlinear
model and considering the bubble-bubble interactions. A cluster of 52 interacting bubbles was simulated, with sizes derived from experimental measurements. Broadband attenuation measurements of monodisperse solutions
were performed with peak pressures ranging from 10 to 100 kPa. The bubble
solutions had mean diameters of 4-6 microns and peak concentrations of
1000 to 15000 bubbles/ml. At lower concentrations (with minimal microbubble interactions) model predictions are in good agreement with experimental measurements. At higher concentrations, new secondary peaks in
plots of attenuation and sound speed as a function of frequency appear. By
simulating bubble-bubble interactions, the numerical results could predict
the frequency shift of the peaks of attenuation and sound speed, and the generation of secondary peaks.
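The linear baseline that such nonlinear predictions are compared against can be sketched with the single-size linear dispersion relation in the spirit of Commander and Prosperetti [J. Acoust. Soc. Am. 85, 732 (1989)]. This is a simplified illustration: the damping is lumped into one assumed constant (the full model resolves viscous, thermal, and radiation damping), and all parameters are illustrative.

```python
import numpy as np

rho, c, p0, gamma = 998.0, 1480.0, 101.325e3, 1.4   # water with air bubbles
R0 = 2.5e-6                   # bubble radius (m), ~5 micron diameter
N = 1.0e10                    # bubbles per m^3 (10,000 per ml)
w0 = np.sqrt(3 * gamma * p0 / (rho * R0 ** 2))   # Minnaert resonance (rad/s)
b = 0.05 * w0                 # lumped damping constant (assumed value)

def bubbly_k(w):
    # complex wavenumber, e^{i(kx - wt)} convention, monodisperse population
    return np.sqrt(w ** 2 / c ** 2
                   + 4 * np.pi * N * R0 * w ** 2
                   / (w0 ** 2 - w ** 2 - 2j * b * w))

w = 0.2 * w0                  # probe well below the bubble resonance
km = bubbly_k(w)
c_m = w / km.real             # phase speed in the bubbly medium
att_dB_per_m = 8.686 * km.imag  # amplitude attenuation, dB/m
```

Below resonance the bubbles add compressibility, so the phase speed drops below that of bubble-free water while the attenuation stays positive, which is the low-concentration behavior the linear model captures.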
3pPAc19. Smoothed particle acoustics with variable smoothing length
and its application to sound propagation with complex boundary.
Futang Wang, Qing Zhi Hou, Zhe Wang (School of Comput. Sci. and Technol., Tianjin Univ., No.135 Yaguan Rd., Haihe Education Park, Tianjin
300350, China, futangwang@tju.edu.cn), Yong Ou Zhang (School of Transportation, Wuhan Univ. of Technol., Wuhan, China), and Jianwu Dang
(School of Comput. Sci. and Technol., Tianjin Univ., Tianjin, China)
The Lagrangian smoothed particle hydrodynamics (SPH) method has shown
high potential for solving acoustic wave propagation in complex domains
with multiple media. Typical applications are sound wave propagation in
speech production and multi-phase flow. For these problems, SPH with an
adaptive particle distribution might be more efficient, in analogy to mesh-based methods with adaptive grids. If fluid flow or a moving boundary is
taken into account, initially evenly distributed particles inevitably become irregular (dense in some regions and sparse in others). For an irregular
particle distribution, conventional SPH with a constant smoothing length suffers from low accuracy, phase error, and instability problems. The main aim
of this work is to introduce a variable smoothing length into SPH and apply it to
2D sound wave propagation in a domain with a complex boundary. In addition, the effects of several strategies for the variable smoothing length on phase
error in smoothed particle acoustics are fully investigated by numerical
examples and theoretical analysis. Numerical results indicate that the phase
error is reduced by the use of variable smoothing length.
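One ingredient that variable-smoothing-length schemes typically retain is a Shepard-type normalization of the kernel sum; a one-dimensional sketch shows that with a per-particle smoothing length tied to the local spacing, the normalized sum still reproduces a constant field exactly on an irregular particle distribution. The Gaussian kernel and the spacing-based choice of h are illustrative assumptions, not the strategies studied in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
xp = np.sort(rng.uniform(0.0, 1.0, 300))   # irregular particle positions

# per-particle smoothing length adapted to the local particle spacing
h = 3.0 * np.gradient(xp)

def W(r, h):
    # Gaussian kernel, evaluated with a per-particle smoothing length
    return np.exp(-(r / h) ** 2) / (h * np.sqrt(np.pi))

def shepard_interp(xq, xp, fp, h):
    # normalized (Shepard) kernel sum: exact for constant fields
    w = W(xq - xp, h)
    return np.sum(w * fp) / np.sum(w)

f = np.ones_like(xp)                        # constant field
vals = [shepard_interp(x, xp, f, h) for x in np.linspace(0.2, 0.8, 50)]
err = max(abs(v - 1.0) for v in vals)
```

The normalization removes the zeroth-order error that an unnormalized sum incurs on non-uniform particles; the remaining phase error, the subject of the abstract, comes from higher-order terms.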
TUESDAY AFTERNOON, 27 JUNE 2017
ROOM 311, 1:15 P.M. TO 3:20 P.M.
Session 3pPP
Psychological and Physiological Acoustics: A Celebration of Nat Durlach and His Contributions to Sensory
Communications
Barbara Shinn-Cunningham, Cochair
Boston University, 677 Beacon St., Boston, MA 02215
H. Steven Colburn, Cochair
Boston University, 44 Cummington Mall, Boston, MA 02215
Chair’s Introduction—1:15
Invited Papers
1:20
3pPP1. The contributions of Nathaniel (Nat) Durlach to binaural hearing research. H. Steven Colburn (Biomedical Eng., Boston
Univ., 44 Cummington Mall, Boston, MA 02215, colburn@bu.edu)
The important and extensive work of Nat Durlach in the area of binaural hearing will be reviewed. Nat started working on this topic in
the context of work on bat sonar processing at Lincoln Laboratory, with an early publication on his hearing modeling in 1960. Nat’s thinking about the sonar problem and signal processing in noise led to his long-term interest and work in human hearing, and he joined the Sensory
Communications group at MIT in 1963. In addition to his well-known work on the Equalization-Cancellation (EC) model, his important contributions to other binaural hearing models and experiments will be discussed, including work in detection, discrimination, and estimation.
Nat’s binaural work also allows a consideration of his deep thinking about problems, his approach to modeling in general, and his distinctive
style of interacting with other scientists, both young and old. The ongoing impact of Nat’s binaural hearing work, his personal example for
approaching research, and his deep influence on the personal lives of his students and colleagues continue strongly into the future.
1:40
3pPP2. Nat Durlach and the context-coding and trace modes of intensity perception. Louis D. Braida (Res. Lab. of Electronics,
Massachusetts Inst. of Technol., Cambridge, MA 02139, ldbraida@mit.edu)
Nat came to M.I.T. from Lincoln Laboratory and became involved in the teaching of the subject 6.37, Sensory Communication. Nat soon
realized that there were certain problems with teaching the subject. Not only were there many different ways to quantify the magnitude of
stimuli (detection, discrimination, identification, category and ratio scaling, to name a few), but also the time variable affected comparisons in
ways that were not independent of the range of intensities. This bothered Nat as a mathematician. However, within two years of effort, Nat
assembled a unifying picture that permitted some of the roadblocks to be overcome. All one-interval experiments were assumed to use the context-coding mode exclusively and were described in terms of two experimental parameters. Two-interval experiments were more complicated,
using the trace mode that interacted with the context-coding mode, and requiring an additional parameter. Nat assumed that an optimal combination of modes was used. Subsequent work extended these models to the cases where Weber’s Law did not hold and to the comparison of loudness of different types of stimuli. As a result of Nat’s efforts, 14 papers appeared in the Journal.
2:00
3pPP3. All I really need to know I learned from Lou and Nat: Nat Durlach. Michael Picheny (Watson Multimodal, IBM TJ Watson
Res. Ctr., POB 218, Yorktown Heights, NY 10598, picheny@us.ibm.com)
Nat Durlach was one of my two primary mentors in graduate school at MIT. In the context of my PhD thesis, Nat taught me the
invaluable lesson of how to think. In this talk I will describe how I applied to Speech Recognition what I learned from him about the
value of reviewing prior literature, how to write a strong research proposal, and the need to question basic assumptions. These are eternal
gifts Nat bequeathed to me, and I will highlight them in a series of applications using examples over my career at IBM. Specifically, I
will draw from work in making advances in core speech recognition, the creation of the interdisciplinary multi-site NSF MALACH project on providing access to large spoken archives of speech, and building one of the early Speech Recognition systems for Mandarin. In
addition, I will also describe applications of our work in Clear speech to a set of speech recognition related problems, including the issue
of “Sheep and Goats”—why speech recognition works well on some speakers but not others—and also work done some years ago on
using Speech Recognition to improve perception and understandability of speech by non-native speakers of English.
2:20
3pPP4. Contributions of Nat Durlach to the field of haptics: Research on tactual communication of speech and manual sensing of mechanical properties. Charlotte M. Reed (Res. Lab. of Electronics, Massachusetts Inst. of Technol., Rm. 36-751, MIT, 77 Massachusetts Ave.,
Cambridge, MA 02139, cmreed@mit.edu) and Hong Z. Tan (School of Elec. and Comput. Eng., Purdue University, West Lafayette, IN)
As a sensory scientist, Nat’s earliest contributions were through his theoretical and experimental work in the area of binaural hearing.
Early in his career, however, he demonstrated an interest in comparative sensory processing as evidenced by a study to determine if the
masking-level difference observed in audition would be found for stimulation on the skin (it was). Nat’s interest in the sense of touch
continued through his research concerned with the use of the tactual sense as a substitute for hearing in the communication of speech
and language for individuals with profound hearing impairment. This research encompassed studies with experienced deaf-blind users
of natural methods of tactual communication (to establish an “existence proof” for the information-bearing capacity of the tactual sense)
as well as research on methods for encoding and displaying acoustic signals for presentation through tactile aids. In addition, Nat also
spearheaded efforts concerned with the development of haptic displays for use in virtual environment and teleoperator systems. His
research in this area was concerned with advancements in knowledge regarding manual sensing and manipulation through a set of basic
psychophysical studies. In this talk, we will summarize some of Nat’s important contributions in both of these areas.
2:40
3pPP5. The contributions of Nathaniel I. Durlach to the study of informational masking. Gerald Kidd (Speech, Lang. & Hearing
Sci. and Hearing Res. Ctr., Boston Univ., 635 Commonwealth Ave., Boston, MA 02215, gkidd@bu.edu)
Beginning in the year 2000, Nat Durlach collaborated with our research group on the study of auditory masking and, in particular,
the study of informational masking. During this collaboration of more than one and a half decades, Nat’s energy, enthusiasm, creativity,
and insight provided the stimulus for many research projects and ultimately a variety of publications. These many contributions to our
understanding of auditory masking are embodied in journal articles, letters to the editor, grant proposals and papers at scientific conferences. This presentation will review the diverse body of empirical work that Nat drew on to formulate his views on informational masking and the implications of his work on this topic for contemporary auditory theory.
3:00–3:20 Panel Discussion
TUESDAY AFTERNOON, 27 JUNE 2017
ROOM 201, 1:20 P.M. TO 3:40 P.M.
Session 3pSAa
Structural Acoustics and Vibration, Physical Acoustics, and Engineering Acoustics:
Acoustic Metamaterials IV
Christina J. Naify, Chair
Acoustics, Naval Research Lab, 4555 Overlook Ave. SW, Washington, DC 20375
Contributed Papers
1:20
3pSAa1. Space-time modulation of electric boundary conditions in a
one-dimensional piezoelectric phononic crystal. Charles Croënne (ISEN,
IEMN UMR CNRS 8520, Lille, France), Olivier Bou Matar (Ecole Centrale
de Lille, IEMN UMR CNRS 8520, Villeneuve d’Ascq, France), Jérôme O.
Vasseur (Université de Lille, IEMN UMR CNRS 8520, Lille, France),
Anne-Christine Hladky-Hennion (ISEN, IEMN UMR CNRS 8520, Lille,
France), Pierre A. Deymier (Univ. of Arizona, Tucson, AZ), and Bertrand
Dubus (ISEN, IEMN UMR CNRS 8520, 41 boulevard Vauban, Lille cedex
59046, France, bertrand.dubus@isen.fr)
A phononic crystal consisting of a one-dimensional piezoelectric material with a periodic distribution of electrodes subjected to space- and
time-dependent electrical boundary conditions is considered in this work.
The interaction of an incident elastic pulse with such a phononic crystal is studied using a specific finite-difference time-domain model. Simulations are conducted in the case of periodic grounded electrodes “moving”
at constant subsonic or supersonic speed. Various nonlinear phenomena
resulting from this interaction are observed: Brillouin-like acoustic
scattering, non-reciprocity of transmission at fundamental frequency, parametric amplification of the incident wave. The dispersion curve of the wave
propagating in the phononic crystal with space-time modulated electrical
boundary conditions is also deduced from simulation results.
1:40
3pSAa2. Exploring phononic crystal tunability using dielectric elastomers. Michael A. Jandron (Naval Undersea Warfare Ctr., Code 8232, Bldg. 1302, Newport, RI 02841, michael.jandron@navy.mil) and David Henann (School of Eng., Brown Univ., Providence, RI)
Tunable phononic crystals give rise to interesting opportunities such as
variable-frequency vibration filters. By using soft dielectric elastomers,
which undergo large deformations when acted upon by an external electric
field, the frequency ranges of these band gaps may be adjusted, or new band
gaps may be created through electrical stimuli. In this talk, we will discuss
our finite-element-based numerical simulation capability for designing electrically-tunable, soft phononic crystals. The key ingredients of our finite-element tools are (i) the incorporation of electro-mechanical coupling, (ii)
large-deformation capability, and (iii) an accounting for inertial effects. We
present a demonstration of our simulation capability applied to the design of phononic crystals consisting of both square and hexagonal arrays of circular-cross-section threads embedded in a dielectric elastomeric matrix. Finally,
we will consider electro-mechanical instabilities as an alternative route to
enhanced tunability. [This work was funded through the Naval
Undersea Warfare Center Research program.]
2:00
3pSAa3. Long-range elastic metamaterials. Antonio Carcaterra (Mech.
and Aerosp. Eng., La Sapienza, Univ. of Rome, Tarquinia, VT, Italy), Francesco Coppo, Federica Mezzani, and Sara Pensalfini (Mech. and Aerosp.
Eng., La Sapienza, Univ. of Rome, Via Eudossiana 18, Rome 00184, Italy,
sara.pensalfini@uniroma1.it)
The problem of wave propagation control in one-dimensional systems,
including electrical charges and dipole magnetic moments is investigated.
The waveguide is characterized by long-range and nonlinear interaction
forces of Coulomb and Lorentz nature. Wave propagation properties are
derived by a method based on an equivalent partial differential equation that
replaces the discrete equation of motion of the chain. The paper shows how
the waves propagating in these special systems have characteristics, such as
phase and group velocity, that are functions of the electrical and magnetic
property distribution along the chain. Possible wave-stopping phenomena are also of interest. The paper presents an outline of some basic principles
developed by some of the authors in recent theoretical papers and shows
also numerical experiments illustrating wave propagation in metamaterials
characterized by long-range elastic-electromagnetic interactions.
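The lattice dispersion relation underlying such an equivalent-PDE analysis can be written down directly: for a chain with mass m, spacing a, and stiffness K_p coupling p-th neighbors, ω²(k) = (2/m) Σ_p K_p (1 − cos pka). The sketch below uses illustrative elastic stiffness values only; the Coulomb and Lorentz interactions and nonlinearities of the paper are not modeled.

```python
import numpy as np

def dispersion(k, a, m, K):
    # lattice dispersion w(k) for a 1D chain whose p-th neighbor
    # is coupled with stiffness K[p-1] (long-range interaction)
    w2 = (2.0 / m) * sum(Kp * (1 - np.cos(p * k * a))
                         for p, Kp in enumerate(K, start=1))
    return np.sqrt(w2)

a, m = 1.0, 1.0
k = np.linspace(1e-4, np.pi, 200)

w_nn = dispersion(k, a, m, [1.0])            # nearest-neighbor chain
w_lr = dispersion(k, a, m, [1.0, 0.3, 0.1])  # decaying long-range coupling
```

Long-range terms raise the long-wavelength phase speed (each p-th-neighbor spring contributes p²K_p to the effective stiffness), which is one way the property distribution along the chain reshapes phase and group velocity.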
2:20
3pSAa4. Bi-anisotropy in acoustic scattering problems. Li Quan and
Andrea Alù (Dept. of Elec. and Comput. Eng., The Univ. of Texas at Austin,
1616 Guadalupe St., UTA 7.215, Austin, TX 78712, liquan@utexas.edu)
Electric and magnetic dipole moments dominate the scattering properties
of small nanoparticles in optics. Ordinarily, electric dipole moments are
excited by the local electric field, and magnetic dipole moments by the magnetic field. Bi-anisotropy, or magneto-electric coupling, largely enriches the
electromagnetic response of materials. Similarly, in acoustics the dominant
scattering contributions from objects smaller than the wavelength, monopole
and dipole moments, are commonly excited by pressure and velocity, respectively. The recent interest in Willis acoustic materials, for which these
responses are coupled, leads us to analyze the opportunities offered by the
analogues of bi-anisotropic nanoparticles in acoustics and their relevance in
practical applications to tailor sound. We also present an extraction procedure
to determine the acoustic polarizability tensor of an arbitrary object and relate
its bi-anisotropic response to its geometry and material properties.
2:40
3pSAa5. Pressure enhancement in water-based passive acoustic metamaterials. Bogdan Ioan Popa (Mech. Eng., Univ. of Michigan, 2350 Hayward St., Ann Arbor, MI 48109, bipopa@umich.edu)
It has recently been shown that anisotropic passive acoustic metamaterials can significantly enhance the pressure of sound waves (Chen et al.,
Nature Commun. 2014). The effect was attributed to strong wave compression inside carefully designed anisotropic metamaterials and was shown to
lead to directional acoustic sensors whose sensing threshold is greatly
improved. The improved sensing strategy was demonstrated in air but cannot be trivially ported to a water environment where it could find many
applications in sonar systems, ultrasound imagers, or underwater communication systems. In this presentation, we will demonstrate new strategies different from Chen et al. that lead to pressure amplification in passive
metamaterials, and, more importantly, we show that these strategies are suitable to water-based applications. We will further show that the wave compression phenomenon proposed by Chen et al. is not a necessary
requirement, and both isotropic and anisotropic metamaterial structures can
be used to obtain strong pressure enhancements. We will present specific
metamaterial structures for underwater operation designed using the new
approach, and their significant pressure amplification ability and directivity
will be quantified.
3:00
3pSAa6. On the unique features of mechanical metamaterials with nonlinear local oscillators. Priscilla B. Silva, Varvara G. Kouznetsova, and
Marc G. Geers (Dept. of Mech. Eng., Eindhoven Univ. of Technol., Mech.
of Mater. Group, P.O. Box 513, Eindhoven 5600 MB, Netherlands, p.brandao.silva@tue.nl)
Over the last two decades, metamaterials have attracted a large amount
of research motivated by the possibility of designing structures capable of
manipulating wave propagation. The basic mechanism underlying the
behavior of mechanical metamaterials is their negative effective dynamic
parameters (mass and/or stiffness). Within the framework of mechanical
metamaterials, most of the developments up to now have considered linear
material behavior only. There is a natural need to understand the effect of
nonlinear material behavior on the wave propagation through such engineered composites. In this paper, the dynamic behavior of a discrete lattice
system composed of a series of nonlinear local resonators is investigated.
By making use of the harmonic balance method, approximate dispersion
relations are derived. Unlike previous works, super/sub-harmonic generation
has been considered and revealed a new phenomenon: the possibility of generating multiple transmission dips. The analysis also showed the tunability
and multistability features of the system. The semi-analytical predictions
were verified with direct numerical simulations.
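For a single Duffing-type resonator, a one-term harmonic balance of the kind used here reduces to a cubic amplitude equation; the sketch below solves it for one forced oscillator. This is a minimal illustration with assumed parameters, not the paper's lattice of resonators or its dispersion analysis.

```python
import numpy as np

def hb_amplitude(w, w0, zeta, alpha, F):
    # single-term harmonic balance for
    #   x'' + 2*zeta*w0*x' + w0^2*x + alpha*x^3 = F*cos(w*t),
    # ansatz x = A*cos(w*t + phi); with B = A^2 the balance gives the cubic
    #   (0.75*alpha)^2 B^3 + 1.5*alpha*d B^2 + (d^2 + c) B - F^2 = 0
    d = w0 ** 2 - w ** 2
    c = (2 * zeta * w0 * w) ** 2
    roots = np.roots([(0.75 * alpha) ** 2, 1.5 * alpha * d, d ** 2 + c,
                      -F ** 2])
    B = roots[(np.abs(roots.imag) < 1e-6) & (roots.real > 0)].real
    return np.sqrt(B)   # hardening backbone: w^2 ~ w0^2 + 0.75*alpha*A^2

w0, zeta, F = 1.0, 0.02, 0.05
A_lin = hb_amplitude(1.3, w0, zeta, 0.0, F)[0]    # alpha = 0: linear FRF
A_hard = hb_amplitude(1.3, w0, zeta, 1.0, F)      # may return 1 or 3 branches
```

Multiple positive real roots of the cubic are coexisting response amplitudes, i.e., the multistability the abstract refers to; super/sub-harmonic generation requires retaining more harmonics in the balance than this single-term sketch does.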
3:20
3pSAa7. Negative refraction and superresolution by a steel-methanol
phononic crystal. Ukesh Koju and Joel Mobley (Phys. and Astronomy,
Univ. of MS, 145 Hill Dr., P.O. Box 1848, University, MS 38677, ukoju@go.olemiss.edu)
Negative refraction and the associated lensing effect of a two-dimensional (2D) phononic crystal (PC) in the MHz regime were studied both
experimentally and numerically. The PC consists of a hexagonal array of
steel cylinders (r = 0.4 mm, a = 0.5 mm) in a methanol matrix for use in an
aqueous medium. FEM simulations of the pressure field show negative
refraction of plane waves through a prism-shaped crystal and superresolution lensing through a rectangular crystal. These phenomena were observed
with hydrophone scans of the transmitted pressure fields through the steel-methanol PC in a water tank.
TUESDAY AFTERNOON, 27 JUNE 2017
ROOM 204, 1:20 P.M. TO 3:40 P.M.
Session 3pSAb
Structural Acoustics and Vibration: Energy Methods in Acoustics and Vibration II
Donald B. Bliss, Cochair
Mechanical Engineering, Duke University, 148B Hudson Hall, Durham, NC 27705
Linda P. Franzoni, Cochair
Dept. of Mech. Eng. and Materials Sci., Duke Univ., Box 90271, Durham, NC 27708-0271
Otto von Estorff, Cochair
Institute of Modelling and Computation, Hamburg University of Technology, Hamburg 21073, Germany
Contributed Papers
3pSAb1. Blocking oblique wave sound transmission through a multi-element barrier using alternate resonance tuning. Meredith C. Fleming
(Phys., Duke Univ., Durham, NC), Donald B. Bliss, and Mauricio Villa
(Mech. Eng., Duke Univ., Mech. Eng., 148B Hudson Hall, Durham, NC
27708, donald.bliss@duke.edu)
Alternate Resonance Tuning (ART) utilizes a structural barrier subdivided into dynamic panel subsystems with different resonant behaviors. The
underlying idea involves adjacent panel subsystems tuned differently to take
advantage of out-of-phase vibratory behavior, with the 180° resonance phase shift occurring
at different frequencies. In the intermediate range between the two resonance frequencies, the panels vibrate out of phase, leading to strong cancellation of the transmitted sound field. The method has been demonstrated
analytically and experimentally for waves striking a barrier at normal incidence. The current work considers the effectiveness of ART to block the
transmission of incident oblique waves, and considers both discrete frequencies and angles, and more realistic broadband random incidence fields. The
research goal is to show that flexibility and controlled resonant behavior in
subsystems can substantially block sound transmission, even for low structural damping. The subsystems alter the vibrating surface wavenumber spectra to reduce coupling between the structure and the acoustic field. Not only
is the transmission of incident oblique waves reduced, but the transmitted
and reflected waves radiate at a variety of angles due to the modification of
the surface wavenumber spectrum. Applications include the development of
lightweight flexible sound blocking barriers for vehicles and architectural
spaces.
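The cancellation mechanism can be seen with two single-degree-of-freedom panel responses: between the two resonances the real parts of the responses have opposite signs, so their sum (standing in here for the transmitted field of equal-area panels) dips sharply. The resonance frequencies and damping below are illustrative, not the experimental values.

```python
import numpy as np

def panel_frf(w, wn, eta=0.01):
    # unit-force displacement response of one resonant panel subsystem
    return 1.0 / (wn ** 2 - w ** 2 + 1j * eta * wn * w)

w1, w2 = 2 * np.pi * 100.0, 2 * np.pi * 200.0   # adjacent panel resonances
w_cancel = np.sqrt((w1 ** 2 + w2 ** 2) / 2.0)   # real parts cancel here

# summed response ~ transmitted field for equal panels driven in phase
T_cancel = abs(panel_frf(w_cancel, w1) + panel_frf(w_cancel, w2))
T_low = abs(panel_frf(2 * np.pi * 50.0, w1) + panel_frf(2 * np.pi * 50.0, w2))
```

Only the damping terms survive at the cancellation frequency, which is why low structural damping helps rather than hurts this mechanism.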
1:40
3pSAb2. Reflection and transmission of acoustic energy by an elastic
plate with structural discontinuities forced by multiple angle oblique
broadband sound waves. Mauricio Villa, Donald B. Bliss, and Linda P.
Franzoni (Dept. of Mech. Eng. and Mater. Sci., Duke Univ., Box 90300
Hudson Hall, Durham, NC 27708, mauricio.villa@duke.edu)
An analysis is presented for acoustic reflection and transmission from an
infinite fluid-loaded plate with spatially periodic discontinuities. The plate,
with similar or dissimilar acoustic fluids on both sides, is excited by an
oblique wave incident acoustic field. The fully-coupled structural/acoustic
problem is treated by the method of Analytical-Numerical Matching
(ANM). The ANM framework separates the problem into global numerical
and local analytical solutions, and handles rapid spatial variation around the
structural discontinuities in closed form, improving the numerical accuracy
and convergence rate. The ANM approach includes a novel way to handle
difficulties associated with coincidence frequencies. The periodic spatial
discontinuities, modeled by various boundary conditions, create deviations
from specular directivity with multiple reflection and transmission angles,
the effect being most pronounced at structural resonances. The periodic discontinuities redirect part of the structural energy into reverberant resonant
substructures having wavenumbers different from the oblique wave forcing,
reradiating with different directivity angles. Discrete frequency and broadband diffuse results are presented. These results are also compared to a
baffled finite barrier with a fluid loading correction introduced to the structural wavenumber. The goal is to develop efficient methods for structural-acoustic reflection and transmission of broadband acoustic energy between
coupled acoustic domains.
2:00
3pSAb3. The energy method for solving electroacoustic problems by combining finite element analysis and analytical methods. David A.
Brown (ECE, Univ. of Massachusetts Dartmouth, 151 Martine St., Fall
River, MA 02723, dbAcoustics@cox.net), Xiang Yan, and Boris Aronov
(BTech Acoust. LLC, Fall River, MA)
The energy method for solving electroacoustic transducer problems
requires calculation of the electrical, mechanical and electromechanical
energies of a piezoelectric body from the vibration mode shapes, which are
often difficult to obtain analytically, particularly for structures having complex boundary conditions. An alternative approach is to determine the vibration mode shapes either experimentally or by finite element analysis and to
proceed with computing the energies analytically. The frequency response
can then be determined from an equivalent electromechanical circuit
approach. An example of this hybrid approach applied to a flextensional
transducer will be presented with comparison to experimental results.
[Work supported by ONR Code 321.]
2:20
3pSAb4. Nonlinear unsteady energy analysis of structural systems.
Antonio Culla, Gianluca Pepe (Dept. of Mech. and Aerosp. Eng., Univ. of
Rome La Sapienza, via Eudossiana 18, Rome 00184, Italy, antonio.culla@
uniroma1.it), and Antonio Carcaterra (Dept. of Mech. and Aerosp. Eng.,
Univ. of Rome La Sapienza, Tarquinia, VT, Italy)
The problem of vibration of large systems undergoing shocks and
unsteady loads is a field of great interest in vibro-acoustic engineering. Statistical Energy Analysis (SEA) is one of the most widely acknowledged methods in this field. However, SEA has many limitations and is based on
several questionable hypotheses. In the present paper, on the basis of a new
theory of vibration thermodynamics, the authors consider a set of systems
characterized by (i) unsteady loads, such as shocks, (ii) nonlinear coupling
between the different subcomponents. The analysis is carried out considering
different prototype systems, starting from a very simple pair of nonlinear resonators, a 2-dof system, up to a system of plates coupled through nonlinear joints. It is shown how the energy flow relationship between pairs of subsystems turns out to be a power series in the energy-storage difference. These results are systematically considered in the light of
the thermodynamic theory of vibrating systems, showing how a general
energy approach to complex systems is feasible.
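The power-series energy-flow relationship described in 3pSAb4 can be written schematically as follows (an illustrative form only; the coefficients and their number are assumptions, not the authors' exact expression):

```latex
% Energy flow between subsystems i and j as a power series in the
% stored-energy difference (illustrative). The n = 1 term recovers the
% linear, SEA-like proportionality P_{ij} \propto (E_i - E_j); the
% coefficients c_n depend on the nonlinear coupling.
P_{ij} \;=\; \sum_{n=1}^{\infty} c_n \,\bigl(E_i - E_j\bigr)^{n}
```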
2:40
3pSAb5. Diffuse elastic waves in a nearly axisymmetric body: Distribution, transport, and dynamical localization. Richard Weaver and John
Yoritomo (Phys., Univ. of Illinois, 1110 West Green St., Urbana, IL, rweaver@uiuc.edu)
We report measurements on the distribution and evolution of diffuse ultrasonic waves in elastic bodies with weakly broken axisymmetry. Aluminum cylinders with dimensions large compared to wavelength were excited
by transient point sources at the center of the upper circular face. The resulting power spectral density was then examined as a function of time, frequency, and position. It was found that this energy density showed a marked
concentration at the center at early times, a concentration that subsequently
slowly diminished towards a state of uniformity across the face, over times
long compared to ultrasonic transit time across the sample. The evolution is
attributed to scattering by symmetry-breaking heterogeneities. Relaxation
did not proceed all the way to uniformity and equipartition, behavior shown
to be consistent with Enhanced Backscatter and with Dynamical Anderson
Localization among subspaces of different angular momentum.
3:00
3pSAb6. Anderson localization amongst weakly coupled substructures. John P. Coleman, John Y. Yoritomo, and Richard Weaver (Phys., Univ. of Illinois, 1110 West Green St., Urbana, IL 61801, jpcolem2@illinois.edu)
It is shown that if vibrational energy is put into one substructure and allowed to diffuse into other weakly coupled substructures (as in the case of statistical energy analysis, or of reverberant sound diffusing from one room into others through small windows), at infinite time the energy density in the starting room will still be above its equipartition value. We offer a theory to predict the amount of this Anderson localization in such systems of weakly coupled substructures. We compare these predictions with numerical results obtained using random matrix substructures.
3:20
3pSAb7. A higher order shear deformation model of a periodically sectioned plate. Andrew J. Hull (Naval Undersea Warfare Ctr., 1176 Howell St., Newport, RI 02841, andrew.hull@navy.mil)
This talk develops a higher order shear deformation model of a periodically sectioned plate. A parabolic deformation expression is used with periodic analysis methods to calculate the displacement field as a function of plate spatial location. The problem is formulated by writing the transverse displacement field and the in-plane rotations as a series solution of unknown wave propagation coefficients, multiplied by an exponential indexed-wavenumber term in the direction of varying structural properties and by an exponential constant-wavenumber term in the direction of constant structural properties. These expansions, along with the various structural properties written as Fourier summations, are inserted into the governing differential equations derived using Hamilton's principle. The equations then become algebraic expressions that can be orthogonalized and written in a global matrix format whose solution is the wave propagation coefficients, yielding the transverse and in-plane displacements of the system. This new model is validated against finite element theory and Kirchhoff plate theory for a thin-plate simulation and verified by comparison with experimental results for a 0.0191 m thick sectioned plate.
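The series solution of 3pSAb7 has the flavor of a space-harmonic (Bloch/Fourier) expansion; an illustrative form, not the author's exact notation, is:

```latex
% Transverse displacement expanded in space-harmonic terms along x
% (the direction of varying structural properties, with period L) and a
% constant-wavenumber term along y (illustrative notation; W_n are the
% unknown wave propagation coefficients).
w(x, y) \;=\; \sum_{n=-\infty}^{\infty} W_n \,
  \exp\!\Bigl[\mathrm{i}\Bigl(k_x + \tfrac{2\pi n}{L}\Bigr) x\Bigr]
  \exp\bigl(\mathrm{i}\, k_y\, y\bigr)
```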
TUESDAY AFTERNOON, 27 JUNE 2017
BALLROOM A, 1:20 P.M. TO 3:40 P.M.
Session 3pSC
Speech Communication: Aging and Development (Poster Session)
Christina Kuo, Chair
Communication Sciences and Disorders, James Madison University, MSC4304, 801 Carrier Drive, Harrisonburg, VA 22807
All posters will be on display from 1:20 p.m. to 3:40 p.m. To allow contributors in this session to see the other posters, authors of
odd-numbered papers will be at their posters from 1:20 p.m. to 2:30 p.m. and authors of even-numbered papers will be at their posters
from 2:30 p.m. to 3:40 p.m.
Contributed Papers
3pSC1. Linguistic masking release in older adults. Sarah Alageel (Dept. of Otolaryngol. and Commun. Sci., King Faisal Specialist Hospital and Res. Ctr., Zahrawi St., Al Maather, Riyadh 12713, Saudi Arabia, Salageel97@kfsrc.edu.sa), Stanley Sheft, and Valeriy Shafiro (Commun. Disord. & Sci., Rush Univ. Medical Ctr., Chicago, IL)
Past studies of speech-on-speech masking in children and young adults (YA) indicate that intelligibility of target speech can improve when target and masker speech are in different languages. We investigated whether such linguistic release from masking is obtained in older adults (OA) with age-typical hearing abilities. All participants were asked to recognize English sentences in the presence of two-talker maskers spoken in either English or Spanish, presented at four different signal-to-noise ratios (SNR) to each group. Differences in energetic masking between the masker languages were minimized. Overall sentence recognition accuracy was greater for YA participants. However, both groups benefited equally from a linguistic mismatch in the target-masker language, with a significant masking release for the Spanish-language masker. The magnitude of masking release increased as SNR decreased, ranging across SNR conditions from 1 to 27 percentage points. Age, hearing-in-noise ability, and hearing sensitivity of OA listeners did not correlate with masking release. Results confirm previous findings of masking release associated with a linguistic mismatch between target and masker speech, and indicate that in speech-on-speech masking older listeners can improve speech intelligibility by utilizing nonenergetic linguistic differences between the target and masker speech.
3pSC2. The impact of context and competition on speech comprehension in younger and older adults revealed using eye-tracking and pupillometry. Nicole Ayasse and Arthur Wingfield (Volen National Ctr. for Complex Systems, Brandeis Univ., 415 South St., MS 062 Brown Psych. Office, Waltham, MA 02453, nayasse@brandeis.edu)
Although younger and older adults can use context effectively to understand spoken language, at times the context may fit multiple semantic competitors, and choosing the correct one can be crucial for comprehension. Given the ambiguities present in the real world, and the inhibitory-control deficit common in aging, it is critical to understand how adults of all ages comprehend sentences. An experiment is reported that explores the interplay of context and competition in sentence comprehension using a variation on a visual world eye-tracking paradigm. Spoken sentences were presented with either high or low expectancy (context) for a sentence-final (target) word and with either high or low response entropy (uncertainty or competition); these target words were then paired with either a contextual competitor or an unrelated lure. Results support the expectation that lower context and greater competition slow comprehension and increase cognitive effort. Results will be discussed in terms of aging and individual differences. [Work supported by NIH Grants RO1 AG 019714 and T32 GM 084907.]
3pSC3. Rhythmic characteristics in aging: A study on Zurich German. Elisa Pellegrino (Univ. of Zurich, Plattenstrasse 54, Zuerich 8032, Switzerland, pellegrino.elisa.1981@gmail.com), Lei He, Natalie Giroud, Martin Meyer, and Volker Dellwo (Univ. of Zurich, Zurich, Switzerland)
Age-related changes in speech production influence both segmental and suprasegmental characteristics of speech. Previous research focused on changes in voice quality, vowel formant patterns, f0, and speech rate due to aging, but little attention has been paid to speech rhythm (durational and dynamic). In this study we analyzed segmental durational variability as well as syllabic intensity variability across 60 Zurich German speakers ranging in age between 20 and 81 years. Speakers read 90 sentences in Zurich German. Between-speaker durational variability across age was quantified through a variety of rhythmic variables (%V, DCLn, DVLn, rPVI-C, nPVI-V, %VO, varcoVO, nPVI-VO, DPeak, varcoPeak, and nPVI-Peak). Intensity variability was computed by taking the standard deviations, variation coefficients, and PVIs of average syllable intensity and syllable peak intensity values across sentences. Results based on durational measurements show that with aging there is an increase in %V and rPVI-C. Aged voices also present lower variability in the consonantal and especially in the vocalic intervals (nPVI-V and DVLn). We argue that changes in the physical characteristics as well as in the neural control mechanisms of the articulators might play a significant role in age-related rhythmic changes.
3pSC4. Speed of lexical access relates to quality of neural response to sound, not cognitive abilities, in younger and older adults. Alexis R. Johns (Memory and Cognition Lab (Volen Complex), Brandeis Univ. MS 013, Waltham, MA 02453, ajohns@brandeis.edu), Emily B. Myers, Erika Skoe (Speech, Lang., and Hearing Sci., Univ. of Connecticut, Storrs, CT), and James S. Magnuson (Psychol. Sci., Univ. of Connecticut, Storrs, CT)
Previous work has demonstrated a relationship between age-related declines in inhibitory control and difficulties identifying words of low lexical frequency and high neighborhood density (Sommers & Danielson, 1999). We hypothesized that declines in consistency of the auditory brainstem response (ABR; the neural response to a repeated sound; Anderson et al., 2012) might also impede lexical access in older adults. We measured audiometric thresholds, ABR consistency, vocabulary, inhibitory control, and working memory in two groups (younger: 18-23 years, n = 41; older: 54-76 years, n = 41). We used mean target fixation proportion from 200 to 750 ms after word onset in a visual world task as a proxy for lexical access speed as listeners identified spoken words (varying on low/high lexical frequency, neighborhood density, and cohort density; Magnuson et al., 2007). ABR consistency significantly predicted speed of word identification across variations in neighborhood and cohort densities, but, contra previous findings, cognitive measures did not improve model fits. Interactions involving age, vocabulary, and lexical frequency suggest age-related linguistic expertise influences lexical access of uncommon words. We conclude that older adults exhibit increases in phonological competition due to declines in auditory encoding, suggesting that a consistent neural response to sounds leads to more efficient speech processing and lexical access.
3pSC5. Velar-vowel coarticulation across the lifespan and in people who stutter: Findings and model. Stefan A. Frisch and Nathan D. Maxfield (Commun. Sci. and Disord., Univ. of South Florida, 4202 E Fowler Ave., PCD1017, Tampa, FL 33620, sfrisch@usf.edu)
The study of anticipatory coarticulation provides insight into the speech production planning process. In the present study, the task involved repeating velar-vowel consonant combinations in a carrier sentence (e.g., for /ke/, "Say a cape again"). Data for velar-vowel coarticulation were analyzed using the Articulate Assistant Advanced software to create tongue traces that were quantified following the procedures for the average nearest-neighbor point-to-point distance between curves (Zharkova & Hewlett 2009, Journal of Phonetics). There were 126 participants in total, in child (8-12), young adult (18-39), and older adult (55-75) age groups, comprising typical speakers (n = 21, 23, 29) and people who stutter (n = 15, 23, 11). Data analysis found a decrease in the coarticulatory influence of the vowel on the velar across the lifespan, but no differences in coarticulation for people who stutter. Analysis of variability found greater variability for children and people who stutter. A two-allophone model of coarticulation provided the best fit to the data, replicating Frisch & Wodzinski (2016, Journal of Phonetics).
3pSC6. Influences on lip-reading ability: Aging, sensory, and cognitive functions. Katherine M. Dawson and D. H. Whalen (Speech-Language-Hearing, City Univ. New York Graduate Ctr., 365 5th Ave., New York, NY 10016, kdawson2@gradcenter.cuny.edu)
To explore the previously reported effect of aging on lip-reading ability, older (60-75 years) and younger (18-35) adults gave oral responses to sentences from a modified version of the build-a-sentence (BAS) test [Tye-Murray et al., Int. J. Acoust., 47(S2), S31-S37, (2008)]. These sentences have predictable syntactic form but contain some words selected from a randomized list of nouns, e.g., "The duck watched the cop". Participants identified these nouns from videos of a single female talker, and responses were transcribed by the experimenter. The 30 participants (15 native English speakers in each age group) were also tested for general cognitive function, vision, hearing, working memory (WM), attention, inhibition, and speech motor ability. They were also asked to gauge their own lip-reading abilities. Preliminary results indicate that the cognitive variables have effects independent of age. Sentence length and syntactic complexity contributed differently in the two age groups, perhaps due to WM capacity.
3pSC7. Three-dimensional analysis of liquid sounds produced by first graders. Olivia Foley, Amy W. Piper, and Steven M. Lulich (Speech and Hearing Sci., Indiana Univ., 4789 N White River Dr., Bloomington, IN 47404, slulich@indiana.edu)
The liquids /r/ and /l/ in American English are complex sounds whose productions are highly variable among adult speakers, but little is currently known about how children articulate these sounds, which are typically acquired late in development. In this study, the Goldman-Fristoe Test of Articulation (GFTA-3) was administered to typically developing first graders (6 and 7 years old) while a 4D ultrasound system imaged the tongue with synchronous audio and webcam video recordings. The GFTA-3 contains words in which /r/ and /l/ occur in a variety of syllable positions and phonetic contexts. Among the 14 first-grade participants in this study, the so-called "bunched /r/" is overwhelmingly preferred over the so-called "retroflexed /r/." In contrast, three-dimensional tongue shapes in the production of syllable-initial /l/ are substantially more variable, with two basic configurations ("coronal" and "dorsal"), while syllable-final /l/ productions are more consistently "dorsal." Examples of 3D tongue shapes will be presented, along with results from a Principal Components Analysis.
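The Principal Components Analysis of tongue shapes mentioned in 3pSC7 can be sketched generically via the SVD; everything below (array shapes, the random placeholder data) is illustrative, not the authors' actual analysis.

```python
import numpy as np

# Illustrative sketch: PCA of tongue-contour data via SVD. Each row is
# one production, flattened from contour points; the data are random
# placeholders standing in for real ultrasound traces.
rng = np.random.default_rng(0)
contours = rng.normal(size=(40, 60))          # 40 productions x 60 coords

mean_shape = contours.mean(axis=0)
centered = contours - mean_shape              # center before the SVD
U, s, Vt = np.linalg.svd(centered, full_matrices=False)

scores = U * s                                # per-production PC scores
explained = s**2 / np.sum(s**2)               # variance fraction per PC

# Reconstruct one contour from its first two principal components
approx = mean_shape + scores[0, :2] @ Vt[:2]
```

The first few rows of `Vt` are the dominant shape modes; for real contour data they typically capture front/back and high/low tongue-body variation.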
3pSC8. Naturalistic coding for prelinguistic speech at 9 and 12 months
of age from two Mandarin-learning children differing in auditory function. Hsin-yu Li and Li-mei Chen (National Cheng Kung Univ., 1 University Rd., Tainan, Taiwan, claion11@gmail.com)
This study investigated the feasibility of Fagan’s (2005) naturalistic coding system for prelinguistic speech of Mandarin-learning children. This system includes 10 categories: single vowel, single consonant, consonant
combination, vowel combination, syllable containing glottal sound and
vowel, syllable containing supra-glottal consonant and vowel (SGCV), reduplicated vowel, reduplicated consonant, reduplicated CV, and reduplicated
“SGCV" (RSGCV). The first 50 clear utterances produced by one normalhearing (NH) child and one hearing-impaired (HI) child in the audio recordings at 9 and 12 months old were transcribed into 10 categories for comparison. Major findings are (1) compared with NH child, HI child demonstrated
no obvious changes in vocalization from 9 to 12 months old; (2) at 12
months old, canonical babbling ratio of HI child was 0, while those of NH
child were 0.84 (utterance as unit) and 0.48 (syllable as unit); (3) compared
with NH child, HI child did not manipulate supra-glottal sounds and had
limited consonant inventory; (4) HI child produced no RSGCV (e.g., /baba/)
while NH child was at reduplicated babbling stage at 12 months old. Naturalistic coding system revealed difference in prelinguistic speech between a
NH child and a HI child. More participants should be included to verify the
findings.
3pSC9. Irregular pitch periods as a feature cue in the developing speech
of English-learning children. Helen Hanson (ECE Dept., Union College,
807 Union St., Schenectady, NY 12308, helen.hanson@alum.mit.edu), Stefanie Shattuck-Hufnagel (RLE, MIT, Cambridge, MA), and John Pereira
(ECE Dept., Union College, Schenectady, NY)
Changes in phonation patterns have long been studied as correlates of
various linguistic elements, such as the occurrence of irregular pitch periods
(IPPs) at significant locations in prosodic structure (in phrase-initial, phrase-final, and pitch-accented contexts) and in word-final voiceless stops, especially
/t/. But less is known about the development of this phonation pattern in
children [cf. Song et al., JASA, 131, 3036-50, 2012], particularly in toddlers
between the ages of 2;6 and 3;6. The study of its course of acquisition may
shed light on the mechanisms involved, since child vocal folds are very different physiologically from those of adults and change strikingly during development. Monosyllabic target words ending in /t, d/, drawn from the Imbrie Corpus of speech from 10 toddlers 2 1/2 to 3 1/2 years old, were examined for
evidence of IPPs. Preliminary results based on three adult/child pairs suggest that both adults and children produce IPPs preceding coda /t/ about
50% of the time. But children produce fewer IPPs before coda /d/ than adults do (38% vs. 8%), suggesting (like earlier reports) that children are not
simply imitating the cues produced by the adults around them. Data from
additional adult/child pairs will be presented.
3pSC10. Development of acoustic speech discrimination abilities in
school-aged children. Pamela Trudeau-Fisette (Phonet. Lab., Dept. of Linguist, Universite du Quebec a Montreal, Montreal, QC, Canada), Melinda
Maysounave, Camille Vidou, and Lucie Menard (Phonet. Lab., Dept. of
Linguist, Universite du Quebec a Montreal, CP 8888, succ. Centre-Ville,
Montreal, QC H3C 3P8, Canada, menard.lucie@uqam.ca)
The acquisition of speech perception skills is challenging for children.
Although some studies have shown that categorical perception boundaries
become steeper during childhood and are sometimes shifted in children
compared to adults, very few experiments on discrimination abilities in children have been conducted. To investigate this, we conducted a perceptual
discrimination task in school-aged children. Sixty-seven 6- to 12-year-old
native Quebec French-speaking children were asked to complete a speech
perception task that used an AXB scheme. The children were asked to
discriminate synthesized 5-formant vowels belonging to three continuums: /i-e/ and /e-ï/ (in which stimuli were equally stepped along F1) and /e-ï/ (in
which stimuli were equally stepped along F2). There was a significant effect
of age on peak discrimination scores, with older children having higher
peak scores than younger children. There was no effect of age on the location of the categorical boundary on the three tested continuums. These findings support the hypothesis that speech development and learning
experience play significant roles in the establishment of strong phonological
targets.
3pSC11. A bear called Baddington? Variability and contrast enhancement in accented infant-directed speech. Jessamyn L. Schertz (Dept. of
Lang. Studies, Univ. of Toronto Mississauga, Dept. of Lang. Studies, Ste.
301, Erindale Hall, 3359 Mississauga Rd., Mississauga, ON L5L1C6, Canada, jessamyn.schertz@utoronto.ca), Helen Buckler, Chris Klammer, and
Elizabeth Johnson (Dept. of Psych., Univ. of Toronto Mississauga, Mississauga, ON, Canada)
This work examines the realization of the English stop voicing contrast
in read speech directed to infants (IDS) and adults (ADS), as well as in
words in isolation, as produced by three groups of speakers: native speakers
of Canadian English (where /b/ and /p/ differ in aspiration), native speakers
of languages in which /b/ and /p/ differ in phonetic voicing (e.g., Spanish),
and native speakers of languages which have a 4-way stop contrast /b, p, bh,
ph/, where both aspiration and voicing are contrastive (e.g., Hindi). In words
in isolation, speakers from both “accented” groups tended to produce English voiceless stops as unaspirated, and voiced stops as phonetically voiced.
However, there was variability in accented speakers’ voiceless stops, as
well as in native speakers’ use of phonetic voicing in voiced stops, and this
variability appeared to be augmented in the read speech conditions (ADS
and IDS). We test the hypotheses (1) that IDS results in phonologicallyinformed contrast enhancement, with accent-specific modifications expected
for the three groups of speakers, and (2) that speakers aim for more precision in phonetic targets when talking to infants, resulting in less withinspeaker variability in realization of the contrast in IDS as compared to ADS.
3pSC12. Do mothers enhance the tonal contrasts in their monosyllabic
Cantonese tones directed to their infants? Puisan Wong and Hoi Yee Ng
(Speech and Hearing Sci., The Univ. of Hong Kong, Rm. 757 Meng Wah
Complex, Pokfulam NA, Hong Kong, pswResearch@gmail.com)
Introduction: Some studies have reported that adults enhance acoustic differences between phonemes when speaking to young children. We examined whether mothers enhance the contrasts among the lexical tones when speaking to infants. Methods: Nineteen native Cantonese-speaking mothers produced the six Cantonese tones in 45 monosyllabic words to an adult and to their 7- to 12-month-old infants. The 1709 words were low-pass filtered to preserve the pitch contours but eliminate lexical information. Five judges categorized the tones of the filtered words. Acoustic analysis was performed. Results: Infant-directed tones had higher fundamental frequency (F0) and longer duration, but were identified with lower, though not significantly different, accuracy. Larger mean F0 differences were found in four pairs of tones in infant-directed speech. However, these increased acoustic contrasts did not lead to higher perceptual accuracy for these tone pairs. Although substantial perceptual confusion between T2 (HR) and T5 (LR) has been reported in previous tone studies, the difference in the slopes of the two tones was not enhanced in infant-directed speech. Conclusion: Mothers acoustically modified their tones when speaking to infants. However, there was little evidence that mothers enhanced the phonetic contrasts of the tones in infant-directed speech. [Work supported by the Research Grants Council of Hong Kong.]
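The low-pass filtering step described in 3pSC12 (keep the low-frequency pitch information, remove the higher-frequency detail that carries lexical content) can be sketched with a simple FFT filter; the cutoff frequency and signal here are illustrative assumptions, not the authors' settings.

```python
import numpy as np

def lowpass_fft(signal, sample_rate, cutoff_hz):
    """Zero out spectral components above cutoff_hz (brick-wall FFT filter)."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    spectrum[freqs > cutoff_hz] = 0.0
    return np.fft.irfft(spectrum, n=len(signal))

# Example: a 200 Hz "pitch" component plus 2 kHz detail, filtered at 500 Hz
sr = 16000
t = np.arange(sr) / sr                      # 1 s of samples
x = np.sin(2 * np.pi * 200 * t) + 0.5 * np.sin(2 * np.pi * 2000 * t)
y = lowpass_fft(x, sr, cutoff_hz=500.0)
```

In practice a smoother filter (e.g., a Butterworth design) would avoid the ringing of a brick-wall cutoff, but the principle is the same.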
3pSC13. Differentiating infant cry from non-cry vocalizations based on perception of negativity and acoustic features. Hyunjoo Yoo, Gavin Bidelman, Eugene Buder, Miriam van Mersbergen, and David K. Oller (School of Commun. Sci. and Disord., The Univ. of Memphis, 4055 N Park Loop, Memphis, TN 38117, hyoo2@memphis.edu)
This study seeks to determine how human listeners discriminate cry vs. non-cry sounds by investigating acoustic factors that may contribute to the perception of negativity in infant vocalizations (e.g., cry, whine, and vowel-like sounds). The assumption has been that identification of cry is self-evident; therefore, there has been no attempt to systematically differentiate cry from non-cry vocalizations. Twelve exemplars each of cry, whine, and vowel-like sound segments (36 total) were selected from archival audio recordings of infant vocalizations. Categories were selected from expert-judged audio signals of vocal development. Adult listeners identified each utterance as either cry, whine, or vowel-like sound as quickly and accurately as possible. They also judged the extent of negativity of each utterance. Acoustic features of each utterance were analyzed in association with the categories and degrees of negativity. Results suggest a continuum of negativity from cries (most negative) to vowel-like sounds (least negative), and that acoustic variables are gradated across the negativity continuum. However, preliminary results suggest that peak F0, peak RMS, and spectral slope best differentiate the categories.
3pSC14. Amplitude-modulation detection and speech recognition in normal and hearing-impaired listeners. Sarah Verhulst (Ghent Univ., Technologiepark 15, Zwijnaarde 9052, Belgium, s.verhulst@ugent.be) and Anna Warzybok (Oldenburg Univ., Oldenburg, Germany)
Even though temporal speech envelopes may form a salient cue when listening to speech in noisy backgrounds, the relationship between speech intelligibility and the sensitivity to detecting temporal envelopes (i.e., amplitude-modulation detection) is not well understood. This study measured speech reception thresholds in quiet, in stationary noise, and in speech-modulated noise in three listener groups: young (yNH) and older normal-hearing (oNH) listeners and hearing-impaired (HI) listeners. In addition to broadband speech and noise signals, we adopted low- and high-pass filtered versions of the stimuli to study the contribution of different coding mechanisms in basal and apical cochlear regions. For the same listeners, amplitude-modulation (AM) detection thresholds were measured in quiet and in the presence of broadband masking noise for 70 dB AM tones of 0.5, 2, and 4 kHz with either 100 or 5 Hz modulation frequency. Even though group trends were clearer than individual differences, AM detection thresholds showed a relationship to speech recognition in the low-pass filtered speech-modulated noise condition, and for yNH and oNH listeners in the high-pass filtered condition. Overall, this study sheds light on the importance of temporal envelope coding sensitivity for speech recognition and its relationship to near- and supra-threshold hearing deficits.
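The sinusoidally amplitude-modulated tones used in AM-detection experiments like 3pSC14 can be generated in a few lines; the carrier and modulation frequencies below match one of the stated conditions (0.5 kHz carrier, 5 Hz modulation), while the modulation depth and sample rate are arbitrary choices for illustration.

```python
import numpy as np

def am_tone(fc, fm, depth, dur, sr):
    """Sinusoidally amplitude-modulated tone:
    (1 + depth*sin(2*pi*fm*t)) * sin(2*pi*fc*t)."""
    t = np.arange(int(dur * sr)) / sr
    return (1.0 + depth * np.sin(2 * np.pi * fm * t)) * np.sin(2 * np.pi * fc * t)

# 0.5 kHz carrier, 5 Hz modulation, 50% depth, 1 s at 16 kHz
x = am_tone(fc=500.0, fm=5.0, depth=0.5, dur=1.0, sr=16000)
```

In a detection task the listener's threshold is the smallest `depth` at which the modulated tone can be distinguished from an unmodulated one (`depth=0`).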
TUESDAY AFTERNOON, 27 JUNE 2017
ROOM 302, 1:20 P.M. TO 3:20 P.M.
Session 3pSP
Signal Processing in Acoustics, Engineering Acoustics, and Architectural Acoustics: Signal Processing
for Directional Sensors IV
3p TUE. PM
Kainam T. Wong, Chair
Dept. of Electronic & Information Engineering, Hong Kong Polytechnic University, DE 605,
Hung Hom KLN, Hong Kong
Invited Papers
1:20
3pSP1. Modelling and estimation of the spatial impulse response in reverberant conditions. Ivan J. Tashev, Hannes Gamper, and
Lyle Corbin (Microsoft Res. Labs, Microsoft Corp., One Microsoft Way, Redmond, WA 98052, ivantash@microsoft.com)
Modern audio signal processing and speech enhancement rely more and more on machine learning approaches, which require vast amounts of data for training. One way to create a training dataset is to convolve a measured impulse response between the sound source and the device with clean speech and then add noise. This approach is limited to the particular sound source and microphone pair used, as it incorporates not only the reverberation of the room but also the radiation pattern of the sound source (typically a mouth simulator or a head-and-torso simulator) and the directivity patterns of the microphones in the device under test. In this paper we propose using a spherical loudspeaker array as a transmitter and a spherical microphone array as a receiver to create a source- and receiver-independent impulse response. During dataset synthesis, this spatial impulse response is modified to model the impulse response between a transmitter and receiver with given directivity patterns.
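The conventional dataset-synthesis step described above (convolve clean speech with a measured impulse response, then add noise at a chosen SNR) can be sketched as follows; the arrays are random placeholders, not real speech or a real room response.

```python
import numpy as np

def synthesize_training_example(speech, rir, noise, snr_db):
    """Convolve clean speech with an RIR and add noise at the requested SNR."""
    reverberant = np.convolve(speech, rir)            # simulated room effect
    noise = np.resize(noise, reverberant.shape)       # match lengths by tiling
    sig_pow = np.mean(reverberant**2)
    noise_pow = np.mean(noise**2)
    # Scale noise so that 10*log10(sig_pow / scaled_noise_pow) == snr_db
    scale = np.sqrt(sig_pow / (noise_pow * 10 ** (snr_db / 10)))
    return reverberant + scale * noise

rng = np.random.default_rng(1)
speech = rng.normal(size=16000)                      # placeholder clean speech
rir = np.zeros(2000); rir[0] = 1.0; rir[500] = 0.5   # toy impulse response
noise = rng.normal(size=16000)
noisy = synthesize_training_example(speech, rir, noise, snr_db=10.0)
```

The paper's point is that a single measured `rir` bakes in one source/receiver pair; a spatial impulse response lets the directivity patterns be swapped in at synthesis time.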
1:40
3pSP2. Minimum Energy Method (MEM) microphone array back-propagation for measuring musical wind instruments sound
hole radiation. Rolf Bader (Inst. of Systematic Musicology, Univ. of Hamburg, Neue Rabenstr. 13, Hamburg 20354, Germany, R_
Bader@t-online.de), Jost L. Fischer (Inst. of Systematic Musicology, Univ. of Hamburg, Hambug, Germany), and Markus Abel (Inst. of
Phys., Univ. of Potsdam, Potsdam, Germany)
Using a 128-microphone array, the sound source distribution of musical wind instruments at their blowing and finger holes is measured. The Japanese shakuhachi flute, the Chinese dizi transverse flute, the Balinese suling bamboo flute, and flue organ pipes are investigated. The sound radiation is measured by a rectangular microphone array in the near field and back-propagated onto the flute radiation plane using the Minimum Energy Method (MEM). Here, the radiation is assumed to be composed of as many sound sources as there are microphones, where the source positions can be chosen arbitrarily in space. Using a regularization parameter α, the virtual sound sources, which are monopoles for α = 0, are narrowed for higher α. Calculating the reconstruction energy on the radiation plane for different α, this energy becomes minimal at the correct α. The ill-posedness of the Fredholm integral solved here in the presence of measurement noise is met by easing α slightly, thereby stabilizing the solution in a robust way. This is especially necessary for problems like musical wind instruments, where the sound sources are the air itself and a precise sound source position often cannot be known in advance. The flutes show complex radiation behavior and sometimes even a coupling of the sound field between finger holes outside the flute. Coupling between organ pipes can also lead to synchronization between the pipes.
J. Acoust. Soc. Am., Vol. 141, No. 5, Pt. 2, May 2017
Acoustics ’17 Boston
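The α-scan at the heart of MEM can be illustrated with a toy regularized back-propagation in Python. Everything below is invented for illustration: the propagation matrix G is random rather than a matrix of free-field Green's functions, and plain Tikhonov regularization stands in for MEM's narrowed monopole sources. It is a sketch of the scanning machinery, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy forward model: M microphones observe N candidate monopole sources
# through a propagation matrix G.  In the real method G would hold Green's
# functions from assumed source positions to the array; a random G is used
# here purely for illustration.
M, N = 64, 32
G = rng.standard_normal((M, N))
q_true = np.zeros(N)
q_true[[5, 20]] = 1.0                            # two active "sound holes"
p = G @ q_true + 1e-3 * rng.standard_normal(M)   # noisy array pressures

def back_propagate(alpha):
    """Tikhonov-regularized least-squares estimate of the source strengths."""
    return np.linalg.solve(G.T @ G + alpha * np.eye(N), G.T @ p)

# MEM-style scan: evaluate the reconstruction energy on the source plane for
# a range of regularization parameters.  In MEM this energy attains a minimum
# at the correct alpha, which is then eased slightly for noise robustness;
# this sketch only exercises the scanning machinery.
alphas = np.logspace(-4, 1, 30)
energies = np.array([np.sum(back_propagate(a) ** 2) for a in alphas])

q_est = back_propagate(alphas[0])
strongest = set(np.argsort(np.abs(q_est))[-2:])  # indices of the two strongest sources
```

With low noise and a well-conditioned G, the two true source indices dominate the reconstruction at small α.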
Contributed Paper
2:00
3pSP3. Capturing sound of target source located beyond noise source by line microphone array. Akio Ando, Kodai Yamauchi, and Kazuyoshi Onishi (Electric and Electronics Eng., Faculty of Eng., Univ. of Toyama, 3190 Gofuku, Toyama 930-8555, Japan, andio@eng.u-toyama.ac.jp)
We previously developed a method that captures the target sound when a noise source is located in front of the target source. It uses a line microphone array directed toward the noise source and creates a spatial null sensitivity point at the noise source position. That method, however, could reduce the noise only in a narrow area around the null point. To extend the noise reduction area, we developed a new method that creates several adjacent null points with more than two microphones. The experimental results showed that, with the target source located 8 m from the microphone, the new method with three microphones could create a -20 dB noise reduction area, centered 6 m from the microphone with a radius of 0.5 m, and captured the target sound without changing its timbre.
Invited Paper
2:20
3pSP4. Model-error-compensated reiterative adaptive beamforming on a physically shaded cylindrical array. Jonathan Botts
(ARiA, 209 N. Commerce St, Ste 300, Culpeper, VA 22701, jonathan.botts@ariacoustics.com), Jason E. Summers, and Charles F. Gaumond (ARiA, Washington, DC)
The authors formulate and compare beamspace and element-space formulations of Reiterative Superresolution (RISR) [S. D. Blunt
et al., IEEE Trans. Aero. & Elec. Sys. 47, 332-346 (2011)] on a physically shaded cylindrical array. The beamspace adaptive approach
minimizes computational load, while the use of model-based structured covariance estimates enables adaptive beamforming with Doppler-sensitive waveforms having little sample support. While structured covariance estimates reduce sample-support requirements and facilitate stable inversion, they also introduce degradation due to model-mismatch errors. In space-time adaptive processing (STAP), covariance matrix tapers (CMT) have been used to counter the effects of off-axis arrivals and other forms of model mismatch [J. R. Guerci,
Space-Time Adaptive Processing for Radar (Artech, 2014)]. In adaptive pulse compression, CMT have been used to increase range sidelobe suppression in the presence of Doppler [T. Cuprak, M.S. Thesis (2013)]. In this work, we have formulated CMT to address model
mismatch by accounting for interbeam arrivals. We also consider the degree to which CMT can be used to address model-mismatch in
RISR resulting from the physical shading effects on the directionality of elements in a large cylindrical array. [Portions of this material
are based upon work supported by the Naval Sea Systems Command.]
Contributed Paper
2:40
3pSP5. Comparison of time domain noise source localization techniques: Application to impulsive noise of nail guns. Thomas Padois (Mech.,
ETS, 1100 Rue Notre-Dame Ouest, Montreal, QC H3C 1K3, Canada,
Thomas.Padois@etsmtl.ca), Marc-Andre Gaudreau (Mech., Cegep Drummondville, Montreal, QC, Canada), Olivier Doutres (Mech., ETS, Montreal,
QC, Canada), Franck C. Sgard (IRSST, Montreal, QC, Canada), Alain Berry
(Mech., Universite de Sherbrooke, Sherbrooke, QC, Canada), Pierre Marcotte (IRSST, Montreal, QC, Canada), and Frederic Laville (Mech., ETS,
Montreal, QC, Canada)
Microphone array techniques are an efficient tool to detect acoustic
source positions. The standard technique is the delay and sum beamforming.
In the time domain, the generalized cross correlation of the microphone
signals is used to compute the Spatial Likelihood Functions (SLF) of all
microphone pairs and the noise source map is provided by the arithmetic
mean of these functions. To improve this noise source map, that is, to narrow the main lobe and remove side and spurious lobes, several techniques have been developed in the past. In this work, the performances of three of these techniques (in terms of source position detection, amplitude estimation, and computation time) are compared on both synthetic and real data: (1) energetic and geometric criteria are applied to remove SLFs carrying useless information, (2) the arithmetic mean is replaced by the generalized mean, and (3) the linear inverse problem is solved with a sparsity constraint. In the case of real data, the source to be located and quantified is the impulsive noise radiated by nail guns, recorded by a spiral-arm microphone array.
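The time-domain SLF construction described above can be sketched in a few lines of Python: each microphone pair's cross-correlation is evaluated at the delay a candidate source position would produce, and the map is the arithmetic mean over pairs. The geometry, sampling rate, and impulsive test signal below are all invented for illustration; this is the baseline method, not any of the three improvements compared in the paper.

```python
import numpy as np

fs, c = 48_000, 343.0                        # sample rate (Hz), sound speed (m/s)
mics = np.array([[0.0, 0.0], [0.5, 0.0], [1.0, 0.0]])   # 3-element line array (m)
src = np.array([0.6, 2.0])                   # true source position (m)

# Synthesize a short impulsive arrival at each microphone with the
# geometric delay from the true source.
n = 4096
pulse = np.array([0.5, 1.0, 0.5])
x = np.zeros((len(mics), n))
for i, m in enumerate(mics):
    d = int(round(np.linalg.norm(src - m) / c * fs))
    x[i, 100 + d - 1 : 100 + d + 2] = pulse

def slf_map(grid):
    """Mean over mic pairs of the cross-correlation sampled at the delay
    each candidate position would produce (a basic time-domain SLF)."""
    out = np.zeros(len(grid))
    pairs = [(0, 1), (0, 2), (1, 2)]
    for i, j in pairs:
        cc = np.correlate(x[i], x[j], mode="full")   # lag axis: -(n-1)..n-1
        for k, g in enumerate(grid):
            tau = (np.linalg.norm(g - mics[i]) - np.linalg.norm(g - mics[j])) / c
            out[k] += cc[(n - 1) + int(round(tau * fs))]
    return out / len(pairs)

# Scan candidate positions along the line y = 2 m and pick the SLF maximum.
grid = np.array([[gx, 2.0] for gx in np.arange(0.0, 1.21, 0.05)])
best = grid[int(np.argmax(slf_map(grid)))]
```

Only at the true source position do all three pair-wise delays line up, so the averaged map peaks there.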
Invited Paper
3:00
3pSP6. Directional information extracted from time reversal scanning to image stress corrosion crack orientation. Brian E.
Anderson (Phys. and Astronomy, Brigham Young Univ., MS D446, Provo, UT 84602, bea@byu.edu), Timothy J. Ulrich, Pierre-Yves Le
Bas (Detonator Technol., Los Alamos National Lab., Los Alamos, NM), Marcel Remillieux (Geophys. Group, Los Alamos National
Lab., Los Alamos, NM), and Brent O. Reichman (Phys. and Astronomy, Brigham Young Univ., Provo, UT)
The time reversed elastic nonlinearity diagnostic (TREND) is a nondestructive inspection technique based on the principle of time
reversal and used to scan the surface of a sample to identify and characterize cracks and other defects. TREND utilizes a series of individual time reversal experiments conducted on a grid of points in a region of interest to map out the spatial extent of surficial expressions
of cracks and subsurface features that are less than a wavelength below the surface. The focal signatures from each of these experiments
can be used to determine not only the location but also the orientation of the crack. We will discuss how this technique can be applied to
a stainless steel sample with stress corrosion cracking (SCC) using stationary piezoelectric transducers broadcasting ultrasonic waves
and a scanning laser vibrometer setup used to detect the three dimensional vibration of the sample surface. The orientation of the crack
is critical to estimate how soon a crack might penetrate through the wall thickness of a structure. [This work was funded by the U.S.
Dept. of Energy, Fuel Cycle R&D, Used Fuel Disposition (Storage) campaign and through a Nuclear Energy University Program Integrated Research Project (IRP-15-9318).]
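The linear time-reversal focusing that underlies TREND can be demonstrated with a one-dimensional toy channel in Python. The multipath impulse response below is invented, and the sketch omits the nonlinear analysis of the focal signature that the actual diagnostic adds; it only shows why re-broadcasting the reversed recording focuses energy at the scan point.

```python
import numpy as np

# Invented multipath impulse response between a transducer and one scan point.
h = np.zeros(256)
h[[10, 40, 95, 180]] = [1.0, 0.6, 0.4, 0.3]   # direct path plus reverberant arrivals

# Step 1 (forward): broadcast an impulse and record the response at the scan point.
received = h.copy()

# Step 2 (time reversal): reverse the recording and re-broadcast it through the
# same channel.  By reciprocity the field at the scan point is the
# autocorrelation of h, which peaks sharply at the focal time regardless of how
# cluttered the arrival structure is.
focused = np.convolve(received[::-1], h)
peak = int(np.argmax(focused))                # focal sample: len(h) - 1
```

The peak value equals the channel's total energy, so even a weak direct path focuses cleanly.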
TUESDAY AFTERNOON, 27 JUNE 2017
ROOM 309, 1:20 P.M. TO 2:20 P.M.
Session 3pUWa
Underwater Acoustics, Acoustical Oceanography, Engineering Acoustics, and Signal Processing
in Acoustics: A Century of Sonar II
Michael A. Ainslie, Cochair
Underwater Tech. Dept., TNO, P.O. Box 96864, The Hague 2509JG, Netherlands
Kevin D. Heaney, Cochair
OASIS Inc., 11006 Clara Barton Dr., Fairfax Station, VA 22039
Invited Papers
1:20
3pUWa1. The impact of naval requirements on sonar development. David L. Bradley (Univ. of New Hampshire, 6934 Traveler’s
Rest Circle, Easton, MD 21601, dlb25@psu.edu)
A “walk” through history from WWI to today provides an overview of sonar system evolution. The time and spatial constraints of
Naval sonar systems are not necessarily consistent with those required for scientific studies of the ocean and its boundaries. The pressure
from these competing purposes has resulted in major advances, but also compromises in sonar design and implementation. The circumstances of the Cold War led to complications in the relations between ocean scientists and undersea warfare specialists, in turn leading
to competition for limited resources and consequent weak cooperation between the communities. The result was a slowing of the pace of development of acoustic systems. More recently, re-convergence has occurred, and improved multi-purpose sonar systems are the norm.
1:40
3pUWa2. A century of sonar performance prediction. Kevin D. Heaney (OASIS Inc., 11006 Clara Barton Dr., Fairfax Station, VA
22039, oceansound04@yahoo.com)
Developments in sonar technology went hand in hand with increased scientific understanding of the factors affecting sonar performance. The first known reference is from Lichte (Physikalische Zeitschrift, 1919), where cylindrical spreading for propagation loss was accounted for in deep-water (convergence zone, CZ) propagation. Sonar users in the 1920s encountered an "afternoon effect," a temporary dip in performance during warm summer evenings, which was explained in the 1930s by Iselin and Batchelder in terms of vertical temperature gradients in the sea. A significant amount of work occurred in both World War I (Klein 1962, Wood 1965) and World War II to understand, and therefore be able to predict, the capabilities of sonar systems to detect submerged targets.
Understanding of sonar performance increased rapidly during WW2 and continued to advance during the Cold War. Advances in the
understanding of sonar performance modeling before, during and after the Second World War are described.
Contributed Paper
2:00
3pUWa3. Liquid filled encapsulation for thermoacoustic sonar projectors. Nathanael K. Mayo and John B. Blottman (Naval Undersea Warfare Ctr., Div. Newport, 1176 Howell St., Newport, RI 02841-1708, nathanael.mayo@navy.mil)
Thermoacoustic projectors produce sound by rapidly heating and cooling a material with low heat capacity. These "thermophones" were originally demonstrated in 1917 using thin platinum filaments [Arnold, H., and I. B. Crandall (1917), Phys. Rev. 10(1):22-38], but were very limited in their efficiencies and bandwidth until the much more recent discovery of new nanomaterials. The first underwater thermophones, made by Aliev et al. in 2010 [Aliev, A. E., et al. (2010), Nano Letters 10(7), 2374-80], utilized a carbon nanotube (CNT) sheet submerged in deionized water. These devices worked well for demonstration purposes, but the CNT sheet would become damaged when retracted from the water, which made characterization difficult. More recently, new methods for enhancing the robustness of freestanding sheets have been developed which allow few-layer CNT sheets to be repeatedly dipped into and withdrawn from a water bath without damage. Such methods have enabled revisiting the study of submerged thermoacoustic projectors. Our studies of CNT thermophones in various liquid baths give evidence for the mechanism of acoustic wave generation in these systems. The potential to improve the impedance matching between the thermoacoustic source and the surrounding fluid medium suggests enhanced designs for compact sonar transducers.
TUESDAY AFTERNOON, 27 JUNE 2017
ROOM 306, 1:20 P.M. TO 3:20 P.M.
Session 3pUWb
Underwater Acoustics: Sound Propagation and Scattering in Three-Dimensional Environments IV
Ying-Tsong Lin, Cochair
Applied Ocean Physics and Engineering, Woods Hole Oceanographic Institution, Bigelow 213, MS#11, WHOI, Woods Hole,
MA 02543
Frederic Sturm, Cochair
Acoustics, LMFA, Centre Acoustique, Ecole Centrale de Lyon, 36, avenue Guy de Collongue, Ecully 69134, France
Invited Paper
1:20
3pUWb1. The internal wave prediction component of the Integrated Ocean Dynamics and Acoustics Project. Timothy F. Duda,
Ying-Tsong Lin, James Lynch, Arthur Newhall, Weifeng G. Zhang, Karl R. Helfrich (Woods Hole Oceanographic Inst., WHOI AOPE
Dept. MS 11, Woods Hole, MA 02543, tduda@whoi.edu), and Pierre F. Lermusiaux (Mech. Eng., Massachusetts Inst. of Technol., Cambridge, MA)
The accuracy of ocean sound propagation modeling depends on properly representing the environment. The water column portion of
this has phenomena and features that are time- and space-dependent, covering a huge range of scales. Nonlinear internal gravity waves
(nonlinear internal waves, NIW) in shallow areas are important features that are moving, evolving and anisotropic, and whose effects
can be handled properly only with 3D sound modeling. Thus, 3D NIW field predictions would be needed for comprehensive modeling.
One challenging part of our project to make acoustic condition forecasts from available data involves making this 3D NIW field prediction using data-assimilating regional models that do not faithfully handle NIW. Our methods for extracting internal tide signals from the
models, analyzing their propagation into regions where they transform in reality to NIW (but not in the models), and predicting NIW
conditions in these regions are explained here. Outstanding challenges such as how to parameterize internal-tide coupled mode propagation on slopes and how to model highly nonlinear crossing NIW groups will be presented. The question of what constitutes an effective
NIW prediction for acoustic purposes will be addressed.
Contributed Papers
1:40
3pUWb2. Four dimensional sound speed environments in ocean acoustic simulations. EeShan C. Bhatt and Henrik Schmidt (Mech. Eng., MIT, Rm. 5-223, 77 Massachusetts Ave., Cambridge, MA 02139, eesh@mit.edu)
Current ocean acoustic simulation environments often rely on a single depth-varying sound speed profile. This work introduces a robust and configurable Octave/C++ implementation of a four-dimensional sound speed environment (4D SSP) for use with Bellhop in the MIT/LAMSS software package for acoustic simulations. This tool is a considerable advance in that the 4D SSP environment can be created from MIT GCM model output or CTD data, or manually varied from existing historical profiles. By considering the fluctuations in sound speed in longitude, latitude, depth, and time, the output pressure data better approximate experimental data. Two experiments, in the Arctic and in the Santa Barbara Channel (SBC), are simulated in 1-D and 4-D for comparison with the experimental data. A framework for acoustic data assimilation, using this new simulated data environment in tandem with experimental data to derive the true sound speed field, is shown. [Work supported by ONR under the Information in Ambient Noise MURI.]
2:00
3pUWb3. Three-dimensional numerical simulation of sound waves propagating near the west coast of Brittany. Frederic Sturm (Ctr. Acoustique, LMFA, UMR CNRS 5509, Laboratoire de Mecanique des Fluides et d’Acoustique, Ecole Centrale de Lyon, Universite de Lyon, 36, Ave. Guy de Collongue, Ecully Cedex 69134, France, frederic.sturm@ec-lyon.fr)
Numerical results of sound wave propagation in a realistic three-dimensional (3-D) oceanic environment are reported. The region of interest is the west coast of Brittany, near the harbor of Brest, France. The numerical simulations were performed with a fully 3-D parabolic-equation-based code, considering omnidirectional point sources emitting at low frequencies (e.g., 50–100 Hz) and located in the water column near the entrance of the strait linking the roadstead of Brest to the Atlantic Ocean (also known as the ’Goulet de Brest’). The simulations clearly show that, depending on the position of the acoustic sources, sound waves can be strongly affected by out-of-plane propagation effects, resulting from complicated multiple reflections off the sloping bottom and channeling effects due to the three-dimensionally varying bathymetry of this particular region, and hence predict interesting modal arrivals at specific receivers (with typical source-receiver distances of 30 km) that are not predicted by two-dimensional models. Note that the 3-D effects predicted here for a realistic marine environment are very similar to the ones described in detail for the (now classical) benchmark problems (e.g., 3-D wedge and 3-D canyon test cases), though the environmental parameters are different. Several source depths and positions are investigated.
2:20
3pUWb4. Three-dimensional sound propagation and scattering in an ocean with surface and internal waves over range-dependent seafloor. Ying-Tsong Lin and James Lynch (Appl. Ocean Phys. and Eng., Woods Hole Oceanographic Inst., Bigelow 213, MS#11, WHOI, Woods Hole, MA 02543, ytlin@whoi.edu)
Underwater sound propagation in an oceanic waveguide can be influenced by environmental fluctuations on the boundaries at the sea surface and the sea floor, and also in the ocean interior. These fluctuations can in fact cause three-dimensional acoustic propagation and scattering effects, especially when the horizontal/azimuthal gradients of the fluctuations are significant. Many studies have considered only individual environmental factors, but the work presented in this talk investigates the joint effects of surface and internal waves over a range-dependent seafloor consisting of sand waves, ripples, or scours. This scenario better represents the reality in some dynamic areas at the edge of the continental shelf (shelfbreak), on continental slopes, in submarine canyons, and also in riverine and estuarine environments. Two research methods are employed here: one is theoretical analysis utilizing acoustic mode theory, and the other is numerical modeling with three-dimensional parabolic-equation models. The frequency dependence of the joint effects will be analyzed, as well as the dependence on source and receiver positions, acoustic mode numbers, and/or ray angles. Numerical examples of underwater sound propagation and scattering under realistic environmental conditions will be presented, with statistical analysis of the temporal and spatial variability. [Work supported by ONR.]
2:40
3pUWb5. Parameter dependence of acoustic quantities in a nonlinear internal wave duct. Brendan J. DeCourcy, Matthew Milone (Mathematical Sci., Rensselaer Polytechnic Inst., 110 8th St., Troy, NY 12180, decoub@rpi.edu), Ying-Tsong Lin (Appl. Ocean Phys. & Eng., Woods Hole Oceanographic Inst., Woods Hole, MA), and William L. Siegmann (Mathematical Sci., Rensselaer Polytechnic Inst., Troy, NY)
Ocean features with 3-D spatial variability in shallow water can significantly affect acoustic propagation. One example is a curved front modeled with a discontinuous sound speed change over a sloping shelf [Lin and Lynch, JASA-EL (2012)], which has an extension to a continuous sound-speed change. An approach using normal modes and perturbation approximations yields convenient formulas that show how acoustic quantities depend on environmental parameters [DeCourcy et al., ASA, Salt Lake City (2016)]. Another common 3-D example is nonlinear internal waves, with wave fronts that pairwise can produce acoustic ducting, radiating, and scattering effects often observed in field data. The previous approach is applied to this feature, using a well model with two sound-speed jumps for such a duct [Lin et al. (2013)]. Approximate formulas for acoustic wavenumbers and phase speeds are determined in order to estimate sensitivity to changes in environmental parameters. All mode types will be considered (whispering gallery, fully bouncing, and leaky), highlighting differences from those in the single-front example. [Work supported by ONR Grants N00014-14-1-0372 and N00014-11-1-0701.]
3:00–3:20 Panel Discussion
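A minimal version of the 4-D sound-speed lookup described in 3pUWb2 can be sketched with plain multilinear interpolation. The grid, the synthetic depth profile, and the diurnal term below are all invented stand-ins for model or CTD data, and Python is used here in place of the Octave/C++ implementation the abstract describes.

```python
import numpy as np

# A 4-D sound-speed field c(lon, lat, depth, time) sampled on a regular grid.
lon = np.linspace(-71.0, -70.0, 11)        # degrees
lat = np.linspace(34.0, 35.0, 11)          # degrees
depth = np.linspace(0.0, 1000.0, 21)       # meters
time = np.linspace(0.0, 24.0, 5)           # hours

# Synthetic field: sound speed increasing with depth plus a diurnal term.
LON, LAT, Z, T = np.meshgrid(lon, lat, depth, time, indexing="ij")
c = 1480.0 + 0.016 * Z + 2.0 * np.sin(2 * np.pi * T / 24.0)

def interp4(axes, values, point):
    """Multilinear interpolation of a gridded 4-D field at one query point."""
    idx, frac = [], []
    for ax, p in zip(axes, point):
        i = int(np.clip(np.searchsorted(ax, p) - 1, 0, len(ax) - 2))
        idx.append(i)
        frac.append((p - ax[i]) / (ax[i + 1] - ax[i]))
    out = 0.0
    for corner in range(16):               # 2**4 corners of the enclosing cell
        w, sel = 1.0, []
        for d in range(4):
            bit = (corner >> d) & 1
            w *= frac[d] if bit else 1.0 - frac[d]
            sel.append(idx[d] + bit)
        out += w * values[tuple(sel)]
    return out

# Sound speed at an arbitrary (lon, lat, depth, time) query.
css = interp4((lon, lat, depth, time), c, (-70.5, 34.5, 250.0, 6.0))
```

Because the synthetic field is linear in depth, the query at 250 m and hour 6 returns exactly 1480 + 0.016 × 250 + 2 = 1486 m/s, which makes the interpolator easy to spot-check.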
TUESDAY EVENING, 27 JUNE 2017
ROOM 309, 5:30 P.M. TO 8:00 P.M.
Session 3eED
Education in Acoustics and Women in Acoustics: Listen Up and Get Involved
Keeta Jones, Cochair
Acoustical Society of America, 1305 Walt Whitman Rd., Suite 300, Melville, NY 11747
Tracianne B. Neilsen, Cochair
Brigham Young University, N311 ESC, Provo, UT 84602
This workshop for Boston area Girl Scouts (age 12–17) consists of a hands-on tutorial, interactive demonstrations, and a panel discussion about careers in acoustics. The primary goals of this workshop are to expose the girls to opportunities in science and engineering
and to interact with professionals in many areas of acoustics. A large number of volunteers are needed to make this a success.
Please e-mail Keeta Jones (kjones@acousticalsociety.org) if you have time to help with either guiding the girls to the event and helping
them get started (5:00 p.m. to 6:00 p.m.) or exploring principles and applications of acoustics with small groups of girls (5:00 p.m. to
7:30 p.m.).
We will provide many demonstrations, but feel free to contact us if you would like to bring your own. Following is a description of one
of the demonstrations.
Contributed Papers
3eED1. Demonstration of nonlinear tuning curve vibration of granular
medium supported by a clamped circular elastic plate using a soil plate
oscillator. Emily V. Santos and Murray S. Korman (Physics Dept., U. S.
Naval Academy, Annapolis, MD 21402, santosemily08@gmail.com)
A demonstration will be conducted in order to show how a soil plate
oscillator (SPO) filled with granular material will create a nonlinear system
due to shifting peaks in a tuning curve with incremental increases in the
swept drive amplitude. An SPO has two flanges clamping an elastic plate
which supports a circular column of granular material. The plate (with a
magnet and accelerometer fastened to the underside) is driven below by an
amplified swept sinusoidal current applied to an AC coil. Past results have
used masonry sand, granular edible uncooked materials, and glass beads in
order to study the nonlinear tuning curve response near a resonance with (1)
a fixed soil column while changing drive amplitude and (2) mass loading of
a soil column versus the resonant frequency response of the system at a
fixed drive amplitude. When the experiments are performed at a fixed drive amplitude but with a changing mass layer, the resonant frequency first increases and then decreases with increasing added mass, because the increase in flexural rigidity of the granular disk layer dominates the effect of the added mass. SPO tuning curves resemble the nonlinear mesoscopic elastic behavior seen in resonance experiments on geomaterials such as sandstone.
TUESDAY AFTERNOON, 27 JUNE 2017
BALLROOM B, 3:30 P.M. TO 6:00 P.M.
Plenary Session and Awards Ceremony
Michael R. Stinson, Cochair
President, Acoustical Society of America
Jorge Patricio, Cochair
President, European Acoustics Association
Presentation of Certificates to New ASA Fellows
Douglas A. Abraham – For contributions to our understanding of the effect of non-Rayleigh reverberation
on active sonar
Joshua G. Bernstein – For contributions to our understanding of normal and impaired pitch and speech
perception
David Braslau – For contributions to noise mitigation and quieter communities
Tim Colonius – For contributions to numerical modeling of cavitation, medical acoustics, and aeroacoustics
Elisa E. Konofagou – For contributions to diagnostic and therapeutic applications of ultrasound
Ying-Tsong Lin – For contributions to three-dimensional computational and shallow water acoustics
Tyrone M. Porter – For contributions to therapeutic ultrasound
James A. TenCate – For contributions to nonlinear acoustics of earth materials
Blake S. Wilson – For the development and enhancement of cochlear implants
Introduction of ASA Award Recipients and Presentation of ASA Awards
Student Mentor Award to Daniel A. Russell
William and Christine Hartmann Prize in Auditory Neuroscience to Cynthia Moss
Medwin Prize in Acoustical Oceanography to Jennifer Miksis-Olds
R. Bruce Lindsay Award to Bradley E. Treeby
Helmholtz-Rayleigh Interdisciplinary Silver Medal to Blake S. Wilson
Gold Medal to William M. Hartmann
Vice President’s Gavel to Ronald A. Roy
President’s Tuning Fork to Michael R. Stinson
Introduction of EAA Award Recipients and Presentation of EAA Awards
The EAA AWARD for lifetime achievements in acoustics to Hugo Fastl
The EAA AWARD for contributions to the promotion of Acoustics in Europe to Antonio Perez Lopez.
ACOUSTICAL SOCIETY OF AMERICA
R. BRUCE LINDSAY AWARD
Bradley E. Treeby
2017
The R. Bruce Lindsay Award (formerly the Biennial Award) is presented in the Spring to a member of the Society
who is under 35 years of age on 1 January of the year of the Award and who, during a period of two or more years
immediately preceding the award, has been active in the affairs of the Society and has contributed substantially,
through published papers, to the advancement of theoretical or applied acoustics, or both. The award was presented
biennially until 1986. It is now an annual award.
PREVIOUS RECIPIENTS
Richard H. Bolt  1942
Leo L. Beranek  1944
Vincent Salmon  1946
Isadore Rudnick  1948
J. C. R. Licklider  1950
Osman K. Mawardi  1952
Uno Ingard  1954
Ernest Yeager  1956
Ira J. Hirsh  1956
Bruce P. Bogert  1958
Ira Dyer  1960
Alan Powell  1962
Tony F. W. Embleton  1964
David M. Green  1966
Emmanuel P. Papadakis  1968
Logan E. Hargrove  1970
Robert D. Finch  1972
Lawrence R. Rabiner  1974
Robert E. Apfel  1976
Henry E. Bass  1978
Peter H. Rogers  1980
Ralph N. Baer  1982
Peter N. Mikhalevsky  1984
William E. Cooper  1986
Ilene J. Busch-Vishniac  1987
Gilles A. Daigle  1988
Mark F. Hamilton  1989
Thomas J. Hofler  1990
Yves H. Berthelot  1991
Joseph M. Cuschieri  1991
Anthony A. Atchley  1992
Michael D. Collins  1993
Robert P. Carlyon  1994
Beverly A. Wright  1995
Victor W. Sparrow  1996
D. Keith Wilson  1997
Robert L. Clark  1998
Paul E. Barbone  1999
Robin O. Cleveland  2000
Andrew J. Oxenham  2001
James J. Finneran  2002
Thomas J. Royston  2002
Dani Byrd  2003
Michael R. Bailey  2004
Lily M. Wang  2005
Purnima Ratilal  2006
Dorian S. Houser  2007
Tyrone M. Porter  2008
Kelly J. Benoit-Bird  2009
Kent L. Gee  2010
Karim G. Sabra  2011
Constantin-C. Coussios  2012
Eleanor P. J. Stride  2013
Matthew J. Goupell  2014
Matthew W. Urban  2015
Megan S. Ballard  2016
CITATION FOR BRADLEY E. TREEBY
. . . for contributions to the modeling of biomedical ultrasound fields
BOSTON, MASSACHUSETTS • 27 JUNE 2017
Bradley Treeby grew up outside the small town of Albany in the south of Western Australia.
He received a Bachelor of Engineering degree with 1st Class Honors from the Department
of Mechanical Engineering at the University of Western Australia in 2003. Continuing as
a graduate student in the same department, Brad began his first in-depth studies in acoustics while working under the supervision of Professors Roshun Paurobally and Jie Pan in
the Department’s Centre for Acoustics, Dynamics and Vibration. Brad’s dissertation work
examined the effect of hair on human sound localization cues, bringing challenges that
necessitated the development of novel scattering models valid over non-rigid surfaces. For
his research accomplishments he received the Robert and Maude Gledden Postgraduate
Research Scholarship (2004), the F. S. Shaw Memorial Postgraduate Scholarship for excellence in applied mechanics research (2005), and earned the title of Doctor of Engineering
in 2007.
Upon completing his degree, Brad moved to the UK to become a Research Fellow at
University College London (UCL). Working with Dr. Ben Cox in the Department of Medical Physics and Bioengineering, Brad began investigating methods for fast tissue-realistic modeling of wave propagation relevant to photoacoustics.
From this time onward Brad focused his career on the development of fast and accurate
models for describing ultrasound waves traveling through the human body. His work at
UCL on pseudospectral time domain models for acoustic wave propagation grew into what
is today the well-known open source “k-Wave” Matlab toolbox for modelling biomedical
ultrasound fields.
“Brad developed k-Wave all the way from theoretical principles, through coding and
validation, to its applications to real world problems. In doing so, he has demonstrated
considerable expertise and attention to detail in an impressive range of areas of acoustics,”
notes Cox.
Key steps leading to k-Wave’s widespread use include Brad’s theoretical work on using
fractional Laplacian terms to model tissue-realistic acoustic absorption and dispersion, his
extension of the model from the linear to the nonlinear regime, his developments to make
the model computationally efficient, his reformulation of the model for large-scale simulations
on high performance computing architectures, his validation of the models against experimental measurements, and his active daily support to users through an online user forum.
Since this freely-distributed software was released in 2009 there have been seven subsequent releases, each overseen by Brad. k-Wave now has some 9,000 registered users in at
least 70 countries, and a paper describing the first release of the toolbox has been cited over
375 times since its publication in 2010 (Treeby, B.E. and B. T. Cox “k-Wave: MATLAB
toolbox for the simulation and reconstruction of photoacoustic wave fields,” J Biomed Opt
2010; 15: 021314).
Between 2010 and 2013 Brad served as a research fellow at the Australian National University (ANU). While continuing his research throughout this period, his dedication as an educator and mentor became clear: he received both a Top Supervisor Award and a Dean's Commendation for Teaching Excellence during his relatively short tenure at ANU.
In 2013 Brad returned to London to establish a new Biomedical Ultrasound Group ("BUG") at UCL and has nurtured its growth since. Currently supported by an Early
Career Fellowship from the Engineering and Physical Sciences Research Council, he has
maintained notable levels of grant funding, allowing growth of the lab’s experimental and
computational capabilities. Brad is very active in the application of full wave models to
clinical problems, notably for accurate targeting and dosimetry in therapeutic ultrasound.
To this end, he has established collaborations with a wide circle of clinicians, scientists,
3759
J. Acoust. Soc. Am., Vol. 141, No. 5, Pt. 2, May 2017
Acoustics ‘17 Boston
3759
manufacturers, and other interested parties from around the world. His ongoing research "sits at the interface between physical acoustics, biomedical ultrasound, numerical methods, and high performance computing," work that has earned much respect and attention
among the ultrasound modeling community and has attracted both graduate students and
postdocs to his lab.
Brad’s affinity for acoustics is clearly not limited to the scientific. He is an accomplished
guitarist and vocalist, whose most prominent works stem from the band of his namesake,
Brad Treeby & the Simplists. Founded in Perth in 2008, the group moved to London along
with Brad and remained active through 2013 with their “unmistakable syndicate of beats and
melody forged from the combination of acoustic guitar, vocals, bass, and human beat-box.”
While he is clearly accomplished in numerical methods, Brad is equally interested and active in experimental work. His mentor and colleague Cox has observed, as a hallmark of Brad's character as an investigator, his "insistence that research needs a symbiotic approach that includes not only modelling and experimentation, but also an eye on the end goal, on the application. Because of his k-Wave software, some might mistakenly categorize Brad as just an expert in numerical methods of acoustics. That would be an injustice."
We are delighted to congratulate Bradley E. Treeby on behalf of his colleagues, friends,
and supporters throughout the Acoustical Society of America on being selected for the
2017 R. Bruce Lindsay Award.
NATHAN J. MCDANNOLD
ACOUSTICAL SOCIETY OF AMERICA
HELMHOLTZ-RAYLEIGH INTERDISCIPLINARY
SILVER MEDAL
in
Psychological and Physiological Acoustics,
Speech Communication, and Signal Processing in Acoustics
Blake S. Wilson
2017
The Silver Medal is presented to individuals, without age limitation, for contributions to the advancement of science,
engineering, or human welfare through the application of acoustic principles, or through research accomplishment in
acoustics.
PREVIOUS RECIPIENTS
Helmholtz-Rayleigh Interdisciplinary Silver Medal
Gerhard M. Sessler 1997
David E. Weston 1998
Jens P. Blauert 1999
Lawrence A. Crum 2000
William M. Hartmann 2001
Arthur B. Baggeroer 2002
David Lubman 2004
Gilles A. Daigle 2005
Mathias Fink 2006
Edwin L. Carstensen 2007
James V. Candy 2008
Ronald A. Roy 2010
James E. Barger 2011
Timothy J. Leighton 2013
Mark F. Hamilton 2014
Henry Cox 2015
Armen Sarvazyan 2016
Interdisciplinary Silver Medal
Eugen J. Skudrzyk 1983
Wesley L. Nyborg 1990
W. Dixon Ward 1991
Victor C. Anderson 1992
Steven L. Garrett 1993
ENCOMIUM FOR BLAKE S. WILSON
. . . for contributions to the development and adoption of cochlear implants
BOSTON, MASSACHUSETTS • 27 JUNE 2017
Blake S. Wilson directed the Neuroscience Program, and then the Center for Auditory Prosthesis Research, at the Research Triangle Institute in North Carolina over a period of more than 20 years beginning in 1983. During this time, Blake and his research teams developed a suite of highly effective signal-processing strategies for cochlear implants, devices that restore hearing and speech understanding to infants born deaf and to adults who have lost most, or all, of their hearing. Today, the signal-processing strategies developed by Blake and his teams, or direct descendants of those strategies, are the heart of the cochlear implants used worldwide by over 400,000 individuals ranging in age from a few months to over 100 years. Cochlear implants are the first and most successful neural prosthesis for a sensory system and have been described as one of the most significant medical developments of the second half of the twentieth century. Blake’s work has been central to this remarkable achievement.
Blake received a B.S. in Electrical Engineering from Duke University in 1974 and probably set a record for the number of humanities courses, predominantly English, taken by
an EE major. Having educated both sides of his brain, he immediately went to work down
the road at the Research Triangle Institute (now RTI International) where he stayed for
33 years working his way up from Research Engineer to Senior Fellow. He became Chief
Strategy Officer for MED-EL GmbH, a manufacturer of cochlear implants, in 2007, and in
2008 he founded the Duke Hearing Center with Debara L. Tucci, M.D. The next year he
became director of the MED-EL Laboratory for Basic Research.
Blake’s first paper was on “pinna reflections as a cue for localization” (J. Acoust. Soc. Am. 56, 957-962, 1974). Other early publications reported results on bat biosonar and
the effects of microwave action on the auditory system. His introduction to the problems
resulting from deafness came from a project in which the outputs of speech analyses were
sent to LED displays mounted on the frame of eyeglasses worn by the deaf in an effort to
disambiguate visual information about speech.
This experience led to a successful bid on a contract from the Neural Prosthesis Program at the National Institutes of Health (NIH) in 1983 to design signal-processing strategies for cochlear implants. At that point in the development of cochlear implants, despite
the presence of as many as 22 electrodes in the cochlea, speech-understanding scores were
very poor. What was missing was a highly effective signal-processing strategy. In 1989,
Blake and his team invented and began to test such a strategy: the continuous interleaved-sampling strategy, or CIS, building on previous work at the University of California, San Francisco, by Michael Merzenich, at Stanford University by Robert White and Blair Simmons, and at the Massachusetts Eye and Ear Infirmary by Donald Eddington and William Rabinowitz. In this strategy, speech is filtered into a number of bands, the energy in each band is estimated, and pulses proportional to the energy in each band are output to electrodes in the scala tympani. Two aspects of this strategy were critical to its success. First, the pulses were sequenced over time in an interleaved fashion across electrodes, so that the vector summation of electric fields that would arise from simultaneous pulse outputs was minimized.
Second, the rate of stimulation was much higher than had been used before and, as a
consequence, both spatial information about place of cochlear stimulation and temporal
information were represented. The results of the first clinical test of this strategy were
published in 1991 in Nature (Nature 352: 236-238, 1991). Scores on tests of sentence
understanding in quiet improved significantly, with the majority of the patients achieving
scores of greater than 90% correct. This paper heralded a new era in the field of sensory
prosthetics and is the most cited paper in the field of cochlear implants. Later work by
Blake and his research teams produced multiple variants of this strategy, all of which are
used in cochlear implants today.
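The CIS processing chain described above (band-pass filtering, per-band energy estimation, and interleaved pulse output) can be sketched in a few lines of Python. This is an illustrative toy only, not any clinical processor: the band edges, filter order, Hilbert-envelope method, and pulse rate are all assumed values chosen for demonstration.

```python
import numpy as np
from scipy.signal import butter, sosfilt, hilbert

def cis_sketch(speech, fs, n_channels=8, pulse_rate=1000):
    """Toy continuous interleaved sampling (CIS) sketch.

    Filters the signal into n_channels bands, estimates each band's
    envelope, and samples the envelopes at staggered times so that no
    two electrodes are pulsed simultaneously (the key CIS idea).
    """
    # Illustrative log-spaced band edges, 300 Hz to 5 kHz (assumed values)
    edges = np.logspace(np.log10(300.0), np.log10(5000.0), n_channels + 1)
    envelopes = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfilt(sos, speech)
        envelopes.append(np.abs(hilbert(band)))  # envelope via analytic signal
    envelopes = np.asarray(envelopes)

    # Interleave: stagger each channel's pulse times within one pulse cycle
    frame = int(round(fs / pulse_rate))       # samples per pulse cycle
    offset = max(frame // n_channels, 1)      # per-channel time stagger
    pulses = np.zeros_like(envelopes)
    for ch in range(n_channels):
        idx = np.arange(ch * offset, envelopes.shape[1], frame)
        pulses[ch, idx] = envelopes[ch, idx]  # pulse amplitude tracks band energy
    return pulses
```

Because the per-channel sample indices never coincide, at most one channel is active at any instant; this is the interleaving that suppresses the vector summation of electric fields between electrodes.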
Blake has been the recipient of many awards as is fitting for a person making discoveries
that have restored functional hearing for deaf infants and deafened adults. These include
the Lasker-DeBakey Clinical Medical Research Award (2013) “for the development of the
modern cochlear implant” and the Fritz J. and Dolores H. Russ Prize from the National
Academy of Engineering (2015) “for engineering cochlear implants that allow the deaf
to hear.” As befits his multidisciplinary work, he was awarded two honorary degrees in
medicine in 2015, one from Uppsala University, Sweden, and one from the University of
Salamanca, Spain.
Two aspects of Blake’s career are critical to an appreciation of his achievements. First,
his discoveries with respect to signal processing for cochlear implants came when he was
armed with only a baccalaureate degree in EE. He did not have a Ph.D. program to teach
him the research enterprise, or a postdoctoral fellowship in an important laboratory to
sharpen his skills. He made his discoveries by building a multi-disciplinary research team
and then spending decades of long hours in the laboratory. Having made the odd discovery
or two, he then acquired a D.Sc. degree from the University of Warwick in the U.K., a
Doctor of Engineering degree from the University of Technology, Sydney, and a Ph.D. in
EE from his Alma Mater, Duke University.
Second, early on in his tenure at RTI, Blake, working in conjunction with the RTI
administration, made the well-considered decision to place all of his work in the public
domain and, in doing so, to relinquish rights to his intellectual property. This decision was
made in order to speed the adoption of his work by manufacturers of cochlear implants.
A conservative estimate of the value of his intellectual property rights is tens of millions of dollars.
Given his long-time residence in the Raleigh-Durham area, it is no surprise that Blake
is an avid fan of Duke basketball and Coach K. Indeed, it was Duke basketball that gave
Blake his largest audience. Several years ago, during a nationally televised game, the TV
camera was panning over the audience and settled on Blake, who looked very comfortable
in the ‘standing room only’ section. He is a tennis fan, as well as enthusiastic player, and
can regularly be seen poring over scientific texts while sporting the colorful shoes of his
on-court heroes.
There is no question that Blake’s pioneering research on cochlear implants has linked
auditory physiology with auditory perception, and speech perception and spoken-language
processing in adults and children. His life’s work has made a major contribution to improving the quality of life for many profoundly deaf individuals. For these reasons, we
are pleased to congratulate Blake Wilson on being awarded the ASA Helmholtz-Rayleigh Interdisciplinary Silver Medal in Psychological and Physiological Acoustics, Speech Communication, and Signal Processing in Acoustics.
MICHAEL DORMAN
FAN-GANG ZENG
JOHN HANSEN
ACOUSTICAL SOCIETY OF AMERICA
GOLD MEDAL
William M. Hartmann
2017
The Gold Medal is presented in the spring to a member of the Society, without age limitation, for contributions to acoustics. The first Gold Medal was presented in 1954 on the occasion of the Society’s Twenty-Fifth Anniversary Celebration, and the medal was presented biennially until 1981. It is now an annual award.
PREVIOUS RECIPIENTS
Wallace Waterfall 1954
Floyd A. Firestone 1955
Harvey Fletcher 1957
Edward C. Wente 1959
Georg von Békésy 1961
R. Bruce Lindsay 1963
Hallowell Davis 1965
Vern O. Knudsen 1967
Frederick V. Hunt 1969
Warren P. Mason 1971
Philip M. Morse 1973
Leo L. Beranek 1975
Raymond W. B. Stephens 1977
Richard H. Bolt 1979
Harry F. Olson 1981
Isadore Rudnick 1982
Martin Greenspan 1983
Robert T. Beyer 1984
Laurence Batchelder 1985
James L. Flanagan 1986
Cyril M. Harris 1987
Arthur H. Benade 1988
Richard K. Cook 1988
Lothar W. Cremer 1989
Eugen J. Skudrzyk 1990
Manfred R. Schroeder 1991
Ira J. Hirsh 1992
David T. Blackstock 1993
David M. Green 1994
Kenneth N. Stevens 1995
Ira Dyer 1996
K. Uno Ingard 1997
Floyd Dunn 1998
Henning E. von Gierke 1999
Murray Strasberg 2000
Herman Medwin 2001
Robert E. Apfel 2002
Tony F. W. Embleton 2002
Richard H. Lyon 2003
Chester M. McKinney 2004
Allan D. Pierce 2005
James E. West 2006
Katherine S. Harris 2007
Patricia K. Kuhl 2008
Thomas D. Rossing 2009
Jiri Tichy 2010
Eric E. Ungar 2011
William A. Kuperman 2012
Lawrence A. Crum 2013
Brian C. J. Moore 2014
Gerhard M. Sessler 2015
Whitlow W. L. Au 2016
ENCOMIUM FOR WILLIAM M. HARTMANN
. . . for contributions to research and education in psychological acoustics and service to
the society
BOSTON, MASSACHUSETTS • 27 JUNE 2017
William (Bill) Hartmann studied electrical engineering at Iowa State University starting
in 1957. His (incomplete) conversion to the study of physics began in 1960 when the United
States launched the Echo satellite, a large metalized balloon designed to reflect both microwave signals and sunlight so that it could be seen by the entire world. Bill was given the task
of using orbital parameters from the US Naval Observatory to predict the appearance of the
Echo over Ames, Iowa, which he achieved by programming the Cyclone, a five-ton vacuum-tube machine with 1024 words of 40-bit memory stored on Williams tubes. At the time, both
artificial earth satellites and computing were hot topics. Perhaps this is why he was awarded
a Rhodes scholarship to study at Oxford University in the UK, from 1961 to 1965.
At Oxford, Bill studied condensed matter theory under the direction of Roger Elliott.
He completed a two-part thesis: neutron scattering from liquid and solid hydrogen, and infrared absorption from defective rare-gas crystals. Ever drawn to sources of cold neutrons, Bill then took a postdoctoral position at Argonne National Laboratory (1965-1968).
There, he determined how to introduce short-range order into the lattice dynamics of alloys
such as copper-gold. Easily the most important result of Bill’s stay at Argonne was meeting Christine Rein, a widely travelled school teacher and summer-time tennis instructor.
She and Bill married in 1967 and they have been playing tennis ever since. Chris is a good
friend to many members of the ASA. They have two children, Mitra, a professor of engineering at Northwestern, and Daniel, a biomechanical engineer with Eli Lilly.
Bill joined the faculty of the Department of Physics at Michigan State University in
1968. In the early 1970s, his life took a sudden turn beginning with “Switched on Bach,”
which got Bill hooked on electronic music. Op-amp chips made it easy to build analog
music synthesis electronics, and Bill began to explore making music electronically and to
teach a course in musical acoustics. In making electronic music, Bill discovered effects
that did not seem to make sense. His perceptions did not square with what he knew his
electronics were generating. Bill began to do perceptual experiments, mostly on pitch,
to try to understand what he was hearing. By accident, he met David Wessel, then assistant professor of Psychology at Michigan State, who told Bill that there were people who
made a living by studying such problems in auditory perception. Bill demanded to know
where such people were and whether any of them ever wrote about what they were doing.
David explained that such people could be found at meetings of the Acoustical Society
of America (ASA), and that they published in the Journal of the Acoustical Society of
America (JASA). David advised Bill to learn the tricks of the trade with David Green at
Harvard University. So Bill and Chris packed up the family and spent the academic year
(1976-1977) with David Green.
Bill joined the ASA in 1976 and attended the fall meeting in San Diego. He went right
to work and organized a special session on electronic music for the next meeting. Bill has
missed only one ASA meeting since then. His occasional performances of rap at jam sessions are not to be missed.
Bill’s contributions to our field can be divided into three broad categories: research,
education, and service to the ASA. We consider them in that order.
Bill’s research has spanned a broad set of interrelated topics in acoustics and psychoacoustics, and he has made seminal contributions in multiple areas. His work is characterized by a rigorous specification of the problem, careful measurements of complex phenomena, a precise mathematical description and analysis of the results, and discussion of
implications for more general problems. His very considerable technical and mathematical
skills enable him to perform analyses and construct models in a way that would be difficult
or impossible for many others working in the same fields.
Bill has made major contributions in several areas: (1) the ability of humans to localize sounds in space, especially when room reverberation and other sources are present;
(2) pitch phenomena, including pitches created from differences in the sounds at the two
ears; (3) the perceptual analysis of mixtures of sounds from different sources, often called
“scene analysis”; and (4) modulation detection, including AM, FM, and mixed modulations. His more recent work includes modeling that incorporates current knowledge and
ideas about processing in the auditory periphery and neural coding up to the midbrain.
As an example of the nature of Bill’s contributions in the areas listed above, we consider
his substantial contributions to knowledge in the field of pitch perception. Some of Bill’s
early work was concerned with the perception of frequency modulation. He proposed an
influential model to explain the detection of frequency modulation. He also explored the influence of the envelope amplitude of a sound on its pitch and he measured how the pitch of
a single mistuned harmonic in a complex tone was influenced by the amount of mistuning.
Somewhat later, he conducted an important series of experiments on the effects of mistuning a harmonic in a complex tone on pitch perception and auditory scene analysis. Bill also
showed that the “octave enlargement effect” occurs for Huggins pitch (a pitch created by
binaural interaction), demonstrating that the effect was unlikely to have a peripheral origin,
as previously assumed. All of these papers had a considerable influence on theories of pitch
perception.
Bill’s interest in auditory scene analysis stemmed from his work on pitch perception,
and he was one of the first to give a comprehensive overview of the role of pitch in auditory scene analysis. He published a highly influential paper on the factors that influence the
perceptual organization of rapid sequences of sounds [Hartmann, W. M., and Johnson, D.
(1991). “Stream segregation and peripheral channeling,” Music Percept. 9, 155-184]. He
also studied the role of spatial and temporal factors that influence the ability to understand
speech in the presence of other sounds.
Bill has contributed greatly to education in acoustics via his two books, Sound, Signals,
and Sensation (now in its fifth printing) and Principles of Musical Acoustics, as well as many
chapters in other books. In addition, Bill has educated many undergraduate and postgraduate
students over the years, several of whom have gone on to become distinguished researchers in
their own right. Bill has organized six special sessions at various ASA meetings. He was the
ASA Technical Program Chair for the Acoustics ’08 Paris meeting held jointly with the European Acoustics Association, which was the largest ASA meeting ever held, with over 5,000 registrants.
Bill has made many additional contributions to the ASA. He was elected a Member of
the Executive Council (1992-1995), and later as Vice President (1998-1999), President-Elect (2000-2001), and President (2001-2002) of the ASA. He has served on many ASA
committees, including the Medals and Awards Committee, Committee on Education in
Acoustics, Investments, Panel on Public Policy, and the Technical Committee on Psychological and Physiological Acoustics. He was chair of the Technical Committee on Musical
Acoustics, the Books+ Committee, and the Rules and Governance Committee. He was an
active member of the Re-Creation Committee (1993-1994) and Vision 2010 (2004 and
2005), which set the direction for the future of the Society.
In 2011 the William and Christine Hartmann Prize in Auditory Neuroscience was established by the ASA to recognize and honor research that links auditory physiology with
auditory perception or behavior in humans and other animals. The prize was underwritten
by a substantial donation from Bill and Chris.
Bill was awarded ASA’s Science Writing Award for Professionals in Acoustics in
December 2000. In 2001, he received the ASA Helmholtz-Rayleigh Interdisciplinary
Silver Medal “for research and education in psychological and physiological acoustics, architectural acoustics, musical acoustics, and signal processing”, showing the diverse areas
of acoustics in which he has made substantial contributions.
In summary, Bill has made tremendous contributions to knowledge and education in
acoustics and he has shown outstanding loyalty and devotion to the ASA. We congratulate
him most warmly on the award of the Gold Medal of the Acoustical Society of America.
BRIAN C. J. MOORE
H. STEVEN COLBURN
ENCOMIUM FOR HUGO FASTL
BOSTON, MASSACHUSETTS • 27 JUNE 2017
EAA AWARD for Lifetime Achievements in Acoustics
Hugo Fastl is an expert in psychoacoustics and related fields. Not only has he established the foundations of models for various psychoacoustic quantities and sound quality metrics, but he also serves as a very prominent reference and central node in a large network of international research and standardization.
Hugo Fastl first studied music (string bass) at the Music Conservatory Munich (graduated 1969), followed by electrical
engineering at Technical University of Munich (graduated 1970), and received his Ph.D. from the Technical University
of Munich in 1974 with supervision from Eberhard Zwicker. He then continued his academic career by adding a second
doctorate (habilitation) in 1981; both dissertations made outstanding contributions to hearing research, particularly on
masking thresholds as a measure for temporal and spectral resolution of the hearing organ. In 1987, he served as visiting professor at the University of Osaka in Japan, and since 1988 he has served as professor and head of the “Technical
Acoustics” group at the Technical University of Munich.
His work covers a wide range of topics, including psychoacoustic quantities (masking, loudness, sharpness, roughness, and fluctuation strength), audio communication and speech intelligibility, noise abatement for vehicles and road surfaces, sound quality and sound design, musical acoustics (pitch perception, pitch strength), and audiology (hearing devices, cochlear implants, and the eponymous “Fastl noise” for speech audiometry). Even with his enormous efforts in fundamental research, Hugo Fastl has always found ways to transfer his results into practical and industrial applications. He has also been key in promoting psychoacoustic quantities within acoustic engineering, successfully bridging diverse disciplines. In particular, his book “Psychoacoustics – Facts and Models”, written initially with Eberhard Zwicker and subsequently revised and published under his own name multiple times, has gained very high international recognition that lasts to this day: the 2013 edition alone has about 4,500 citations on Google Scholar. Hardly any education in psychoacoustics omits this remarkable work.
In his research, Hugo Fastl has followed a strongly multidisciplinary approach, collaborating with Japanese researchers and supervising many joint projects in the field of psychology. Intercultural differences in multimodal perception were among the most significant aspects of these projects.
Among his various memberships in national and international councils and boards, Hugo Fastl has been active in the German Acoustical Society (as chair of its Technical Committee on Hearing Acoustics 1991-1997, as treasurer 1996-2004, as conference chair in 2005, and finally as president 2004-2007). From 2004 to 2010, he also served as treasurer of the ICA. He has received various prestigious national and international awards, among them Fellowship of the Acoustical Society of America (1990), the Research Award of the Japan Society for the Promotion of Science (1998), the Rayleigh Medal of the British Institute of Acoustics (2003), and the Helmholtz Medal of the German Acoustical Society (2010); in 2014 he was appointed a Distinguished International Member of INCE-USA.
At their core, Hugo Fastl’s interests are based in psychological and physiological acoustics. Ultimately, however, a
rich balance of academic and applied work characterizes his career. This has enabled his participation in a large number
of projects and the support of numerous young researchers in his field. His graduate and postgraduate students carry on
his ideas and inspiration, and they do so very successfully in industry and academia alike.
The EAA now honors Hugo with its award for lifetime achievements. That sounds final and complete, especially since the honoree has already passed the age limit for university positions in Germany.
Not so with Hugo Fastl, who does not even think about resting on his achievements. Anyone who knows him is aware that retirement is by no means a consideration. So we will be happy to meet him again at national and international events, to discuss acoustics, and to share a Bavarian beer at DAGA 2017 in Munich. In recognition of his outstanding contributions, we congratulate Hugo Fastl on being awarded the EAA Award for Lifetime Achievements in Acoustics.
MICHAEL VORLAENDER
BRIGITTE SCHULTE-FORTKAMP
ENCOMIUM FOR ANTONIO PÉREZ LÓPEZ
BOSTON, MASSACHUSETTS • 27 JUNE 2017
EAA AWARD for contributions to the promotion of Acoustics in Europe
Antonio Pérez López will perhaps be remembered in the history of European acoustics as the quintessential European acoustician.
He graduated from the University of Madrid with a degree in physics and later earned the Diploma in Audiophonology from the same university. He continued his studies in England, obtaining a Master’s (D.I.C.) from Imperial College, University of London. He began his career in acoustics at the Spanish National Research Council in Madrid, where he served as Chief of the Noise Laboratory. Later, he worked as General Director of the Spanish branch of the German group “Rheinhold & Mahla”.
From the very beginning, Antonio showed a unique ability to combine science with education and production. He could speak with university students, presenting the scientific aspects of sound and noise in a professional way; share new ideas with scientists working exclusively in research; and, at the same time, decide on the production of new materials for noise and vibration control. He also demonstrated his administrative skills in industry, taking care of its day-to-day operations, including working relations among employees of all ranks.
His involvement in every possible aspect of acoustics, together with his personal character and his belief in collaboration and the dissemination of knowledge, shaped the way he dealt with scientific societies, particularly in acoustics. He was a founding member of the Spanish Acoustical Society (Sociedad Española de Acústica, SEA), serving it as General Secretary and later as President, a position he still holds. Under Antonio’s leadership, the SEA has become widely recognized as a model society in Europe; anyone reviewing its rich and consistently innovative activities will soon recognize that Antonio is behind the scenes.
Feeling European, he also dedicated his energies to the European Acoustics Association (EAA). It was he who resolved the legal status of the EAA by registering the association under Spanish law and undertaking all the necessary preparatory actions, and he offered the headquarters of the Spanish Acoustical Society to host the EAA office. Antonio was then appointed Director of the EAA office in Madrid and, determined to promote the idea of a unified Europe, made himself the vital link between the EAA boards and the national societies. He looked after all the European national societies, especially the small or new ones, staying in close contact with their boards and showing them the road map to success. As a member of the EAA Executive Council, he ensured the continuity of the EAA’s work and its relations with other international federations, commissions, and associations. He was always “pulling the strings” in the background: suggesting and implementing administrative, educational, and scientific innovations, mitigating conflicts, thinking ahead, and giving the EAA a friendly and cooperative spirit.
Antonio has many beliefs: he believes in science, in solidarity, in Europe, in people, and in cooperation. He treats everybody as a member of his own family, educating, comforting, motivating, discussing future plans, and resolving current issues. He is always a good friend, a “father” or “brother” to his collaborators. He is a true volunteer in life and in science. He believes in youth and is able to communicate effectively with young people. Among his many activities devoted to the young are the production of educational material introducing students to sound and noise, his support of the EAA Young Acousticians Network, always urging the presidents and the board to invest in young scientists, and his continuous efforts to organize seminars and special courses for young acousticians.
Antonio is well known worldwide for his activities, having served the science of acoustics from many different posts. He has been a member of the board of the International Commission for Acoustics since 2007 and its Treasurer since 2010. He has been President of the FIA (Ibero-American Federation of Acoustics) and currently serves on its board. He has organized many international conferences and symposia that remain in the history of acoustics for their success. He has established good relations with many non-European countries, for instance Morocco, Tunisia, Nigeria, and Algeria, bringing them the air of European acoustics and encouraging them to participate actively in European acoustics events.
Antonio will keep going, showing the way to all his friends and fellow acousticians.
MICHAEL TAROUDAKIS
WEDNESDAY MORNING, 28 JUNE 2017
ROOM 206, 7:55 A.M. TO 10:40 A.M.
Session 4aAAa
Architectural Acoustics: Recent Developments and Advances in Archeo-Acoustics and Historical
Soundscapes I
David Lubman, Cochair
DL Acoustics, 14301 Middletown Ln., Westminster, CA 92683-4514
Miriam A. Kolar, Cochair
Architectural Studies; Music, Amherst College, School for Advanced Research, 660 Garcia St., Santa Fe, NM 87505
Elena Bo, Cochair
DAD, Polytechnic Univ. of Turin, Bologna 40128, Italy
Chair’s Introduction—7:55
Invited Paper
8:00
4aAAa1. Archaeoacoustics of Mexico City’s cathedral. Alejandro Ramos-Amezquita (Comput. Sci. Dept., Tecnológico de Monterrey, Calle del Puente 222, Colonia Ejidos de Huipulco Tlalpan, Mexico City, Mexico DF 14380, Mexico, alejandro.ramos.amezquita@itesm.mx), Pablo Padilla (Fitzwilliam College, Univ. of Cambridge, Mexico City, Mexico), Ana M. Jaramillo (Architectural Acoust., AFMG Services North America, LLC, Brooklyn Park, MN), Braxton B. Boren (Princeton Univ., Astoria, New York), Guadalupe Caro (Humanities, Instituto Tecnológico y de Estudios Superiores de Monterrey, Mexico City, Mexico), Julio Gonzalez (Comput. Sci. Dept., Tecnológico de Monterrey, Mexico City, Mexico), Victor H. Mendoza (Univ. of Mexico (UNAM), Mexico City, Mexico), Francisco Salazar (Instituto Politecnico Nacional, Mexico City, Mexico), Gabriela Perez (CENIDIM, Mexico City, Mexico), Alberto Rivera (Comput. Sci. Dept., Tecnológico de Monterrey, Mexico City, Mexico), Rodrigo Tapia (Facultad de Ciencias, Univ. of Mexico (UNAM), Mexico City, Mexico), Carlos Paz (Comput. Sci. Dept., Tecnológico de Monterrey, Mexico City, Mexico), and Jezzica Zamudio (Instituto Politecnico Nacional, Mexico City, Mexico)
We consider the acoustics of the architectural design of Mexico City’s Cathedral. Using measurements of the impulse response of the building and a virtual reconstruction of the architectural space, the soundscape is reconstructed with statistical and geometric methods on a standard computational platform (EASE). This reconstruction, and the contrast between experimental and simulated results, allows us to pose meaningful hypotheses about the acoustical functionality of this temple, which played and continues to play a significant role in the religious and social activities of the country. We also present possible connections with other architectural, historical, and musicological aspects.
Contributed Papers
8:20
4aAAa2. Acoustics in Dalby church in the Middle Ages and today. Delphine Bard (Lund Univ., John Ericssons väg 1, Lund 22100, Sweden, delphine.bard@construction.lth.se)
The subject of this research is Dalby church in Sweden, which is supposed to be the oldest stone church in Scandinavia. There are holes in the ceiling of Dalby church, and behind these holes are ceramic pots. In Swedish musicology studies, the relationship between acoustic pots and acoustic resonators is poorly documented. Similarly, there is limited information on the difference in reverberation time in churches of the late Middle Ages compared to the present. In the late medieval churches, the reverberation time must have been different than it is in churches today: there were more visitors, and their clothes of heavy fabrics would have affected the reverberation time. Currently, reverberation time measurements are made without visitors, which itself affects the measured reverberation time. This investigation is based on theoretical studies combined with practical reverberation measurements in Dalby church and simulations in ODEON with a 3D model of the church. The following questions will be answered: What was the difference in reverberation time in Dalby church in the late Middle Ages compared to the present? What is the contribution of the ceramic pots to reverberation time, speech transmission index, or speech intelligibility?
J. Acoust. Soc. Am., Vol. 141, No. 5, Pt. 2, May 2017
Acoustics ’17 Boston
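The audience effect described in the Dalby abstract can be illustrated with Sabine's formula, T60 = 0.161 V / A. The sketch below uses assumed, illustrative numbers only; they are not measurements of Dalby church.

```python
# Illustrative sketch: Sabine reverberation time with and without an audience.
# All numbers are assumed for illustration; none are measurements of Dalby church.

def sabine_rt60(volume_m3, absorption_m2_sabine):
    """Sabine reverberation time T60 = 0.161 * V / A (SI units)."""
    return 0.161 * volume_m3 / absorption_m2_sabine

volume = 2000.0            # assumed room volume, m^3
room_absorption = 120.0    # assumed absorption of walls/floor/ceiling, m^2 Sabine
per_person = 0.5           # assumed extra absorption per visitor in heavy clothing, m^2 Sabine

empty = sabine_rt60(volume, room_absorption)
with_visitors = sabine_rt60(volume, room_absorption + 200 * per_person)

print(f"empty church: T60 = {empty:.2f} s")
print(f"200 visitors: T60 = {with_visitors:.2f} s")
```

With these assumptions, adding 200 clothed visitors shortens the reverberation time by more than a second, which is the order of effect the abstract attributes to medieval congregations.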
8:40
4aAAa3. The contribution of human sciences to the interpretation of the use of acoustic pots in France and in bordering countries from the 12th to the 17th century. Jean-Christophe Valiere (PPRIME UPR 3346, CNRS - Université de Poitiers, 6 rue Marcel Dore, Poitiers 86073, France, jean.christophe.valiere@univ-poitiers.fr), Benedicte Bertholon-Palazzo (CESCM - CNRS UMR 7302, Poitiers, France), Pauline Carvalho (PPRIME UPR 3346, CNRS - Université de Poitiers, Poitiers, France), Estele Dupuy (FORELL A EA 3816, Poitiers, France), David Fiala (CESR, CNRS-UMR 7323, Tours, France), and Vasco Zara (CESR, CNRS-UMR 7323, Dijon, France)

In occidental Europe, the technique of inserting acoustic pots in churches’ walls spread from the 12th to the 17th century. Acousticians consider this technique a pre-scientific anticipation of the Helmholtz resonator. This point of view has been questioned, with solid arguments, by numerous authors, especially in the domain of the historical sciences. Based on one of the largest surveys of French churches with pots, carried out over the last 15 years by a multidisciplinary team, we propose a scheme for interpreting this technique that takes into account the points of view of archaeologists, musicologists, and linguists, and that involves experimentation with singers. We show that the voice was the target of this technique, by analyzing the frequencies of the pots and their positioning in churches in connection with the liturgy. Their symbolic meaning will also be presented in relation to the medieval conception of the sciences and music. Finally, the controversial question of a deliberate action on the acoustics of the churches will be illuminated by the analysis of texts, particularly the vocabulary of the French 16th-century translation of Vitruvius.

9:00–9:20 Break

9:20
4aAAa4. An example of the restoration of a monastic church with acoustic pots: L’Abbaye des Anges of l’Aber-Wrach (Brittany). Jean-Christophe Valiere (PPRIME UPR 3346, CNRS - Université de Poitiers, 6 rue Marcel Dore, Poitiers 86073, France, jean.christophe.valiere@univ-poitiers.fr), Benedicte Bertholon-Palazzo, and Nadia Barone (CESCM - CNRS UMR 7302, Poitiers, France)

“L’Abbaye des Anges” is a Franciscan convent founded in 1507 in far-western France, close to Brest. The small church of this convent (2320 m³) contains in its liturgical choir’s walls about 110 acoustic pots (about one pot per 21 cubic meters), by far the largest such installation in France. The archaeological study shows that the pots were inserted during construction. X-ray analysis has also shown that the pots were probably produced by potters close to l’Aber-Wrach, in Lannilis, where a pottery tradition is attested from antiquity to the 20th century. The pots were evidently designed exclusively for acoustic use, as shown by their lack of glaze and their rounded bases, which cannot rest properly on a table. Seven kinds of pots have been found, with frequencies uniformly distributed between 150 and 300 Hz, unlike what is usually observed elsewhere in France. This frequency range corresponds to male singing voices (baritone to tenor), suggesting a rational choice from both a musical and an acoustical point of view. The owner would like to restore the church, including all the acoustic pots.
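The pot frequencies discussed in these two abstracts are conventionally analyzed with the Helmholtz-resonator model. The sketch below uses assumed pot dimensions, not values from the survey; only the formula itself is standard.

```python
import math

# Helmholtz-resonator model often applied to acoustic pots:
#   f = (c / (2*pi)) * sqrt(S / (V * L_eff))
# S = neck opening area, V = cavity volume, L_eff = neck length including end
# corrections (folded into neck_length_m here). Pot dimensions below are assumed.

def helmholtz_frequency(neck_area_m2, cavity_volume_m3, neck_length_m, c=343.0):
    """Resonance frequency of a Helmholtz resonator, in Hz."""
    return (c / (2.0 * math.pi)) * math.sqrt(neck_area_m2 / (cavity_volume_m3 * neck_length_m))

# Assumed pot: 5-cm-diameter opening, 4 cm effective neck length, 3-liter cavity.
neck_radius = 0.025
neck_area = math.pi * neck_radius**2
f = helmholtz_frequency(neck_area, 3.0e-3, 0.04)
print(f"resonance ~ {f:.0f} Hz")   # lands inside the 150-300 Hz male-voice range discussed above
```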
Invited Papers
9:40
4aAAa5. A generalized version of the Lubman-Kiser theory of historical acoustics and worship spaces. Braxton B. Boren (Performing Arts, American Univ., 30-91 Crescent St., 5B, Astoria, New York 11102, bbb259@nyu.edu)
Although some relationship between liturgy, theology, and acoustics has been inferred by architectural acousticians at least since
Hope Bagenal, this phenomenon has best been cataloged in a brief paper by Lubman and Kiser (ICA, 2001). Although all historical theories are difficult to distill in a single sentence, it might be briefly summed up as the statement that all things being equal, a change in
acoustical conditions will result in a change in liturgy, and a change in theology will result in a change in acoustical conditions. This is
usually described along a one-dimensional continuum between greater clarity and greater reverberance, and Lubman and Kiser trace much of the Western church’s motion along this continuum, through the Protestant Reformation and up to
the present day. An attempt is made to generalize this theory and to examine whether the Western acoustical progression is mirrored in
the historical experiences of other religious traditions and cultures.
10:00
4aAAa6. Efficient spatialization techniques for immersive real-time soundscapes. Matthew Azevedo (Acentech Inc., 33 Moulton
St., Cambridge, MA 02138, mazevedo@acentech.com)
A simulated soundscape requires many independent sound sources to create an immersive experience. However, the challenges of
rendering many simultaneous sources can force a choice between an interactive real-time simulation with limited sources and a static,
pre-rendered soundscape with more sources. We will discuss production techniques for optimizing soundscape auralizations to maximize the number of concurrent sources which can be rendered in real time to achieve an immersive level of detail with the potential for
a high level of interactivity. Topics covered will include source prioritization; the benefits and limitations of various spatialization techniques, including ambisonics, VBAP, and MIAP; and hybrid techniques for independent rendering of direct and reflected sound.
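Of the spatialization techniques named above, VBAP is the simplest to sketch. The following is a minimal 2-D illustration for a single speaker pair, with illustrative angles; a real renderer would first select the active pair surrounding the source and typically handle 3-D triplets.

```python
import math

# Minimal 2-D VBAP sketch (vector base amplitude panning) for one speaker pair.
# Speaker and source angles are illustrative; this is not a full renderer.

def vbap_pair_gains(speaker_deg, source_deg):
    """Gains for two speakers so the gain-weighted speaker vectors point at the source."""
    a1, a2 = (math.radians(a) for a in speaker_deg)
    s = math.radians(source_deg)
    l1 = (math.cos(a1), math.sin(a1))   # speaker unit vectors (columns of the base matrix)
    l2 = (math.cos(a2), math.sin(a2))
    p = (math.cos(s), math.sin(s))      # source direction
    # Solve [l1 l2] g = p by Cramer's rule.
    det = l1[0] * l2[1] - l1[1] * l2[0]
    g1 = (p[0] * l2[1] - p[1] * l2[0]) / det
    g2 = (l1[0] * p[1] - l1[1] * p[0]) / det
    norm = math.hypot(g1, g2)           # power normalization: g1^2 + g2^2 = 1
    return g1 / norm, g2 / norm

g = vbap_pair_gains((-30, 30), 0)       # source straight ahead between +/-30 degree speakers
print(g)                                # equal gains, about 0.707 each
```

A source panned exactly onto one speaker yields gains (0, 1), so the technique degenerates gracefully to discrete playback at the speaker positions.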
10:20–10:40 Panel Discussion
WEDNESDAY MORNING, 28 JUNE 2017
BALLROOM C, 8:00 A.M. TO 10:20 A.M.
Session 4aAAb
Architectural Acoustics: Acoustic Regulations and Classification of New and Retrofitted Buildings III
(Poster Session)
Birgit Rasmussen, Cochair
SBi, Danish Building Research Institute, Aalborg University Copenhagen, A.C. Meyers Vænge 15,
Copenhagen SV 2450, Denmark
Jorge Patricio, Cochair
LNEC, Av. do Brasil, 101, Lisbon 1700-066, Portugal
David S. Woolworth, Cochair
Oxford Acoustics, 356 CR 102, Oxford, MS 38655
All posters will be on display from 8:00 a.m. to 10:20 a.m. To allow contributors in this session to see the other posters, authors of odd-numbered papers will be at their posters from 8:00 a.m. to 9:10 a.m. and authors of even-numbered papers will be at their posters from
9:10 a.m. to 10:20 a.m.
Contributed Papers
4aAAb1. Acoustic regulations and classification of new and retrofitted buildings in Russia. Ilya E. Tsukernikov, Igor L. Shubin, and Tatiana O. Nevenchannaya (Acoust. Lab., Res. Inst. of Bldg. Phys., Odoevskogo pr. h.7, korp.2, fl.179, 21 Lokomotovny pr., Moscow 117574, Russian Federation, 3342488@mail.ru)

The paper presents the requirements in force in the Russian Federation for the acoustic regulation of the internal climate in new and retrofitted buildings. The provisions of two federal laws are considered, which regulate the requirements for the acoustic parameters of dwellings and for harmful influences. A list of the basic sanitary regulations setting hygienic requirements for acoustic factors is given. The substantive provisions of the code of rules are presented, which establish obligatory requirements for admissible noise levels in premises of buildings of different functions, the procedure for carrying out acoustic calculations of the noise regime in these premises, and the procedure for choosing and applying various methods and means of reducing calculated or actual noise levels to meet the sanitary regulations. Also stated are the requirements for declaring the airborne sound insulation characteristics of soundproofing building products, the methods for establishing and verifying the declared values, and the data that should be included in the technical and operational documentation when stating airborne sound insulation characteristics according to the national standard GOST 56235-2014.

4aAAb2. Reduction performance of floor impact sound of existing apartment buildings in Korea. Won-Hak Lee (Construction & Energy Business Div., Korea Conformity Labs., 73, Yangcheong 3-gil, Ochang-eup, Cheongju-si 28115, South Korea, whlee@kcl.re.kr), Gukgon Song, Yonghee Kim, Yongjin Yoon (Construction & Energy Business Div., Korea Conformity Labs., Cheongju-si, Chungbuk, South Korea), and Myung-Jun Kim (Univ. of Seoul, Seoul, South Korea)

Following a recent revision of the regulation of the Ministry of Land, Infrastructure, and Transport, new apartment buildings in Korea are required to have concrete slabs of 210 mm or more and floor constructions whose impact insulation performance falls within the 4th grade. However, most apartment buildings in Korea were constructed with slab thicknesses of 120–180 mm prior to the enactment of the related laws in 2006, causing disputes and complaints over floor impact sound. This study therefore investigates the reduction performance of floor impact sound in existing apartment buildings constructed before the regulation of the Ministry of Land, Infrastructure, and Transport took effect. Since the regulations are not applied to existing apartment buildings, understanding the reduction performance of floor impact sound is important for an effective redevelopment or retrofit construction process. In this study, field measurements according to KS F 2810-1 and KS F 2810-2 were carried out in 70 existing apartment buildings. In addition, including 124 data sets from the literature, the present state of the reduction performance of floor impact sound in existing apartment buildings is discussed. [This research was financially supported by the Seoul R&BD Program (No. PS150001) through the Research and Development for Regional Industry.]
WEDNESDAY MORNING, 28 JUNE 2017
BALLROOM C, 8:00 A.M. TO 10:20 A.M.
Session 4aAAc
Architectural Acoustics: Topics in Architectural Acoustics (Poster Session)
Ian B. Hoffman, Chair
Judson Univ. Dept. of Architecture, 1151 N State Street, Elgin, IL 60123
All posters will be on display from 8:00 a.m. to 10:20 a.m. To allow contributors in this session to see the other posters, authors of odd-numbered papers will be at their posters from 8:00 a.m. to 9:10 a.m. and authors of even-numbered papers will be at their posters from
9:10 a.m. to 10:20 a.m.
Contributed Papers
4aAAc1. The acoustics of the catacombs of Vigna Cassia in Syracuse.
Ilaria Lombardi and Amelia Trematerra (Dipartimento di Architettura e Disegno Industriale, Università degli Studi della Campania Luigi Vanvitelli, Via San Lorenzo, Aversa, Caserta 81031, Italy, ilaria.lombardi@studenti.unina2.it)

The aim of this study is the evaluation of the acoustic characteristics of burial places, in particular the Vigna Cassia catacombs in Syracuse. In late antiquity (III–VI cent. A.D.), the catacombs were the burial places of the Christian communities, but they were also places for prayer and sacred functions, celebrated especially in the presence of the tombs of the martyrs. The acoustic study of the catacombs was performed with the impulse response technique: firecrackers were detonated in different areas of the catacombs, and the responses were analyzed in terms of the monaural parameters STI, T30, EDT, C80, and D50. The study was performed to determine whether such places, with complex geometries and interconnected tunnels, offer favorable conditions for speech understanding and for meditation and prayer.
4aAAc2. Room acoustics and the organbuilding of two great Boston
pipe organs. Nicole Cuff (Acentech, 33 Moulton St., Cambridge, MA
02138, ncuff@acentech.com) and Daniel A. Russell (Graduate Program in
Acoust., The Penn State Univ., University Park, PA)
Pipe organs are like giant snowflakes: each one perfectly distinct and a
marvel of craftsmanship, creativity, and design. This presentation will compare how master organbuilders designed their organs to complement different room acoustics. The first is one of the largest pipe organs in the world at
the Christian Science Church Extension Building in Boston, Massachusetts,
built in 1952 by Lawrence Phelps, and the other is the Cathedral of the Holy
Cross in Boston, built in 1875 by Elias and George Greenleaf Hook, known
as their most innovative and elaborate organ at the time of construction.
How did the master organbuilders work with two rooms with varied reverberation times, nearly a century apart? Why did these master organbuilders
construct particular organ stops in the ways that they did? How do their
works compare to organ design standards? What is the resulting sound in
the rooms? For local Bostonians, there will be organ concerts at the Christian Science Church June 13, 2017 from 12:15 to 12:45 pm and August 8,
2017 from 12:15 to 12:45 pm.
4aAAc3. Materials properties of resilient materials used for remodeling of aged apartments. Gukgon Song, Yonghee Kim, Won-Hak Lee, Yongjin Yoon (Korea Conformity Labs., Ochang-eup, Cheongwon-gu, 73, Yangcheong 3-gil, Cheongju-si, Chungbuk 28115, South Korea, gsong@kcl.re.kr), and Myung-Jun Kim (Univ. of Seoul, Seoul, South Korea)

In Korea, resilient materials such as EPS and EVA are used to reduce floor impact sound between adjacent dwellings in apartment buildings. With the revision of the Ministry of Land, Infrastructure, and Transport (MOLIT) Notice, new apartment buildings are required to have concrete slabs of 210 mm or more and floor constructions whose impact insulation performance falls within the 4th grade. However, most apartment buildings approved by the government prior to the enactment of the related laws in 2004 were constructed with slab thicknesses of 120–150 mm, causing disputes and complaints over floor impact sound. This study investigated the mechanical properties of resilient materials for the remodeling of aged apartments, such as dynamic stiffness, loss factor, compressibility, and dimensional stability. About 200 kinds of resilient materials and structures commonly applied over the last 3 years were selected and investigated. Following the revision of the notification to include compressibility and dimensional stability, the dynamic stiffness of resilient materials has increased and many composite-layer resilient structures have been developed. In addition, it is necessary to develop resilient materials or structures that can reduce the total thickness of the floor structure, because aged apartments have low indoor heights. [This research was financially supported by the Seoul R&BD Program (No. PS150001) through the Research and Development for Regional Industry.]

4aAAc4. Development of an automated system for American Society for Testing and Materials Test Method E1007. Jennifer M. Scinto (Mech. Eng., Tufts Univ., 200 College Ave., Anderson Hall, Medford, MA 02155, jennifer.scinto@tufts.edu)

The use of a tapping machine in four specific configurations is required to determine the field impact insulation class rating of a floor-ceiling assembly, as prescribed in American Society for Testing and Materials Test Method E1007. When performed with presently available equipment, the procedure requires either that two acousticians be present, one taking measurements in the room below while the other moves the machine into the four required positions in the room above, or that one acoustician repeatedly return to the machine to change its position after taking each measurement. Either case results in a waste of man-hours that could be utilized more effectively. The objective of this work is to design a proportional-integral-derivative-controlled automated testing system, using National Instruments myRIO robotics components, to be retrofitted to an existing tapping machine. This allows wireless remote control of the position of the tapping machine from the floor below, allowing a single acoustician to perform the test more quickly and accurately.

4aAAc5. Experimental investigations on a light-weight aerogel layer in gypsum wallboards for increased sound transmission loss. Ning Xiang, Mathew A. Whitney (Graduate Program in Architectural Acoust., Rensselaer Polytechnic Inst., Greene Bldg., 110 8th St., Troy, NY 12180, xiangn@rpi.edu), Hongbin Lu (Dept. of Mechanical Eng., Univ. of Texas at Dallas, Richardson, TX), and Nicholas Leventis (Dept. of Chemistry, Missouri Univ. of Sci. and Technol., Rolla, MO)

It has long been considered challenging to increase the sound transmission loss of gypsum wallboards without significantly increasing board thickness and weight. Low-density, highly porous, solid monolithic macroscopic objects consisting of hierarchical mesoporous three-dimensional assemblies of nanoparticles (aerogels) are pursued mainly for their low thermal conductivity. While classical aerogels of the most common kind, based on silica, are fragile materials, the structural fragility issue has been addressed successfully with materials referred to as polymer-crosslinked (or X-) aerogels. X-aerogels are very low-density materials, yet their mechanical strengths have been significantly increased. This work explores ductile aerogels for potential use as constrained damping layers integrated into gypsum wallboards. Due to their ductility and mechanical strength, lightweight X-aerogel panels of less than 1 cm in thickness can be conveniently integrated into gypsum wallboards. The objective of this work is to develop integrated wallboards that achieve significantly increased sound transmission loss without significantly increasing the thickness and weight of the integrated wallboard system. This paper discusses preliminary test results from chamber-based random-incidence measurements, along with further exploration of the broadband dynamic properties of X-aerogels for a better understanding of their excellent effect in drastically increasing sound transmission loss.

4aAAc6. Noise reduction research of high-speed railway U-shaped beam. Jing Guo and Xiang Yan (Acoust. Lab of Architecture School, Tsinghua Univ., Rm. 104, Main Academic Bldg., Beijing 100084, China, shinobu001@sina.com)

Railway noise has become an important factor restricting the further development of railways and endangering human health. Because of the particular structure of railway bridges, the noise radiation source is strong, the noise varies with distance, and the noise reduction effect of a sound barrier differs from that on an ordinary railway section. How to effectively control the impact of railway noise on residents on both sides of a bridge section has therefore become a focus for environmental protection workers, and research on the influence of railway bridge noise on residential areas along the railway is very important. In the acoustic test program for the high-speed railway U-shaped beam, we will select the most appropriate noise reduction scheme through computer simulation and model tests of HSR U-shaped beams with different structural forms, and provide a reliable reference for practical engineering.

4aAAc7. Experimental methods for determining the cross-over frequency above which phaseless geometrical methods for room acoustics computer simulation are suitable. Marcio Avelar (Academic Dept. of Mech., Federal Univ. of Technol., PR, Brazil, Rua Deputado Heitor Alencar Furtado, 5000, Curitiba, PR 81280-340, Brazil, marciogomes@utfpr.edu.br), Paulo Bonifacio (Federal Inst. of Santa Catarina, Joinville, Brazil), Hilbeth Azikri de Deus, Elvis Bertoti (Academic Dept. of Mech., Federal Univ. of Technol., PR, Brazil, Curitiba, Brazil), Eric Brandao, William D. Fonseca (Federal Univ. of Santa Maria, Santa Maria, RS, Brazil), Alexandre Sarda (Federal Univ. of Parana, Curitiba, Brazil), and Pedro Prestes (Academic Dept. of Mech., Federal Univ. of Technol., PR, Brazil, Curitiba, Brazil)

The computer codes most widely used for simulating sound propagation in closed spaces are based on geometrical acoustics. Such an approach is known to be suitable when interference between sound waves does not play a major role. To estimate a cross-over frequency above which this condition is met, acousticians use a simple formula for the so-called “Schroeder frequency,” which in turn is related to modal overlap. In this work, two methods for experimentally observing the cross-over frequency above which wave interference may be disregarded are presented. One is related to the statistical independence of room impulse responses (in space), while the other is based on the analysis of the phase information of impulse responses within a frequency band. Preliminary results for a shoebox room, based on analytically generated impulse responses, and measurements performed in an auditorium are shown; they indicate similar frequency values, also comparable to the estimated Schroeder frequencies.

4aAAc8. Measurement, visualization, and modeling of acoustics of a barrel-vaulted sanctuary. Heather L. Lai and David J. Foote (Division of Eng. Programs, SUNY New Paltz, 1 Hawk Dr., New Paltz, NY 12561, laih@newpaltz.edu)

This investigation centers on a recently constructed church with a barrel-vaulted sanctuary ceiling (13.1 m × 12.7 m × 6.0 m) exhibiting excessive reverberation times. An acoustical engineer consulted during design recommended that the original arched ceiling be replaced with a series of flat surfaces constructed to create a similar visual appearance without causing focused echoes. However, due to miscommunication, these recommendations were not incorporated into the construction of the building. Based on complaints of poor speech intelligibility at specific locations, an investigation was carried out consisting of reverberation time measurements derived from logarithmic sine sweep data and sound pressure level measurements in the diffuse sound field. Reverberation times in excess of 5 seconds in the speech-frequency range were observed at locations along the center aisle, along with correspondingly elevated sound pressure levels. A CAD model of the sanctuary was developed, imported into Odeon, and used to simulate the acoustical behavior. Multiple means of visualizing the acoustical behavior are presented, demonstrating how the long reverberation time can be detected in the measured data as well as in the models. Odeon analyses of proposed solutions for retrofitting the space to improve intelligibility are also described.

4aAAc9. Modification of the head-related transfer functions for auditory proximity in rooms. Jukka Pätynen (Dept. of Comput. Sci., Aalto Univ. School of Sci., Konemiehentie 2, Espoo FI-02150, Finland, jukka.patynen@aalto.fi)

Human hearing uses the two signals entering the ears to interpret surrounding spaces and sound sources, and certain acoustic impressions are often regarded favorably by listeners. At times, music consumers may consider auditory proximity and intimacy a desired sensation. In some scenarios, acoustic reflections, often arriving from lateral angles, succeed in enhancing the perceived proximity even though all acoustic events occur at far distances. In contrast, natural sounds emanating from very close proximity are instinctively perceived as intimate. The perception of spatial sound is based on binaural cues, which can be modeled with head-related transfer functions (HRTFs). Earlier research has demonstrated that the interaural level difference for lateral incidence is the foremost difference between near- and far-distance HRTFs. In the context of room acoustics, only lateral reflections arrive from angles which, at high frequencies, create traces of the interaural level differences otherwise characteristic of near-field HRTFs. This paper explores altering the auditory proximity of auralizations by introducing near-field-type interaural level differences into widely available far-field HRTFs. Analysis of the application of these modified HRTFs to reflected sound provides more insight into the sources of auditory proximity.

4aAAc10. Comparison of frequency responses calculated across different time-windowed impulse responses for assorted types of rooms. Brenna N. Boyd and Lily M. Wang (Durham School of Architectural Eng. and Construction, Univ. of Nebraska - Lincoln, 11708 S. 28th St., Bellevue, NE 68123, bnboyd@unomaha.edu)

Recent work by Lokki et al. (2015) has shown that analyzing how frequency responses from concert halls change over different time-windowed periods of the impulse response across low, mid, and high frequencies correlates well with human perception of how well the space performs. In this project, such frequency response analyses across different time-windowed impulse responses are applied to spaces other than concert halls, including a variety of classrooms, worship spaces, and speech-centered performance spaces. Preliminary results from the study will be presented, with particular focus on comparing spaces designed primarily for speech, music, or both.
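The Schroeder frequency that paper 4aAAc7 uses as a cross-over estimate is commonly computed as f_S = 2000 √(T60 / V). The room values in this sketch are illustrative, not taken from the paper.

```python
import math

# Schroeder frequency estimate: f_S = 2000 * sqrt(T60 / V),
# with T60 in seconds and V in m^3, giving f_S in Hz.
# The example rooms below are illustrative, not from paper 4aAAc7.

def schroeder_frequency(rt60_s, volume_m3):
    return 2000.0 * math.sqrt(rt60_s / volume_m3)

print(f"auditorium (V=5000 m^3, T60=1.5 s): {schroeder_frequency(1.5, 5000):.0f} Hz")
print(f"classroom  (V=200 m^3,  T60=0.6 s): {schroeder_frequency(0.6, 200):.0f} Hz")
```

Above f_S the modal overlap is high enough that phaseless geometrical methods are usually considered acceptable; small, dead rooms push the cross-over up, while large, live halls pull it down to a few tens of hertz.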
WEDNESDAY MORNING, 28 JUNE 2017
BALLROOM C, 8:00 A.M. TO 10:20 A.M.
Session 4aAAd
Architectural Acoustics: Simulation and Evaluation of Acoustic Environments (Poster Session)
Michael Vorländer, Cochair
ITA, RWTH Aachen University, Kopernikusstr. 5, Aachen 52056, Germany
Stefan Weinzierl, Cochair
Audio Communication Group, TU Berlin, Strelitzer Str. 19, Berlin 10115, Germany
Ning Xiang, Cochair
School of Architecture, Rensselaer Polytechnic Institute, Greene Building, 110 8th Street, Troy, NY 12180
All posters will be on display from 8:00 a.m. to 10:20 a.m. To allow contributors in this session to see the other posters, authors of odd-numbered papers will be at their posters from 8:00 a.m. to 9:10 a.m. and authors of even-numbered papers will be at their posters from
9:10 a.m. to 10:20 a.m.
Contributed Papers
8:00
4aAAd1. Two studies on the effect of interior acoustics on the number of users in library architecture: METU and Atilim University libraries. Filiz B. Kocyigit and Sevgi Lokce (Architecture, Atilim Univ., Ankara
06380, Turkey, slokce@atilim.edu.tr)
The library’s readership services can be likened to an iceberg: a source goes through many stages before it reaches the reader. When library activities and sections are examined, it is seen that the different functions and spaces require different acoustical environments. These places include direct or indirect connections and transitions. Acoustic control should be solved by architectural methods in areas with continuous, direct connections and transitions, while the absence of distinct acoustical features in areas with indirect transitions does not create a problem. The librarian, the reader, or the user can experience adaptation problems in transitions between different places, or spaces can be noisy because there are no buffers between them. Acoustic comfort also does not simply mean quiet, as is generally assumed: a library should be able to serve the different activities of users and readers and provide psychological acoustic comfort. In interviews conducted with many readers, it has been observed that a completely silent environment creates a feeling of insecurity in large open areas, and thus discourages staying in the space for a long time. Comfort can be quantified and measured in terms of the Articulation Index (AI) or other metrics based on signal-to-noise ratio. In highly reverberant library spaces, the Speech Transmission Index (STI) may correlate better with the subjective impression of acoustic comfort.
4aAAd2. Binaural reproduction of self-generated sound in virtual acoustic environments. Johannes M. Arend, Philipp Stade, and Christoph Pörschmann (Inst. of Communications Eng., TH Köln, Betzdorfer Str. 2, Cologne 50679, Germany, johannes.arend@th-koeln.de)
Virtual acoustics aims to immerse the user in a virtual acoustic environment (VAE). However, most VAE systems do not feed self-generated sound
back into the virtual room, even though there is evidence that adequate
reproduction of self-generated sound affects the user’s perception and might
even enhance immersion. Thus, sonic interaction between the user and the virtual room is very limited, if possible at all, in most current systems. This
work presents a VAE system that is able to capture and reproduce self-generated sound in real time. Hence, the VAE is complemented with a reactive
component providing the acoustic response to the actions of the user. The
major difference compared to the few reactive VAEs introduced so far is
that the system presented here considers the varying directivity of the user
or the sound source, and that it generally works with any arbitrary source.
The study includes a first technical evaluation of the system as well as an
example application of a virtual concert hall. The reactive VAE can be used
as a virtual practice room for musicians, or as a tool for psychoacoustic
experiments investigating the influence of self-generated sound on human
perceptual processes.
4aAAd3. Co-integration of acoustic simulation software and GIS for speech intelligibility analysis in complex multi-source acoustic environments: Application to Toledo’s Cathedral. Antonio Pedrero, Luis Iglesias, Rogelio Ruiz, and César Díaz (Grupo de Investigación en Acústica Arquitectónica, Tech. Univ. of Madrid, E.T.S. Arquitectura (UPM), Avda. Juan de Herrera 4, Madrid 28040, Spain, antonio.pedrero@upm.es)
The speech intelligibility in complex multi-source acoustic environments depends on a variety of factors such as speech level, background
noise level, reverberation time, as well as psychoacoustic effects. Since
these factors can change for each source-receiver combination, in order to
integrate information from various sources, efficient tools and techniques
are required to determine the speech intelligibility at every listener position.
In this study, two types of tools are used: (i) acoustic simulation software (ODEON) and (ii) the spatial analysis tools of a Geographic Information System (ArcGIS). To determine speech intelligibility, the
Speech Transmission Index (STI) has been calculated. Sound Pressure Level
and Reverberation Time have been calculated at points of a grid at intervals
of 1 meter. An automated workflow using ArcGIS ModelBuilder has been created in order to obtain the STI. ArcMap has also been used to represent and
analyze the results over the complex geometry of the space. The proposed
procedure has been calibrated using information obtained in the Toledo Cathedral (Spain) and the results for acoustic situations related to different celebrations of the liturgy are presented.
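The STI evaluation described above can be sketched in a few lines. The following is a minimal illustration only, not the authors' implementation: the per-band SNR values are invented, the octave-band weights are those tabulated for male speech in IEC 60268-16, and the inter-band redundancy correction of the standard is omitted for brevity.

```python
import numpy as np

# Illustrative apparent SNR per octave band (dB); in practice these come
# from modulation transfer values m via SNR = 10*log10(m / (1 - m)).
bands_hz = [125, 250, 500, 1000, 2000, 4000, 8000]
snr_db = np.array([6.0, 9.0, 12.0, 15.0, 12.0, 9.0, 6.0])

# Octave-band weights for male speech (IEC 60268-16); redundancy terms
# between adjacent bands are omitted in this simplified sketch.
alpha = np.array([0.085, 0.127, 0.230, 0.233, 0.309, 0.224, 0.173])

ti = (np.clip(snr_db, -15.0, 15.0) + 15.0) / 30.0   # transmission index per band
sti = float(np.sum(alpha * ti) / np.sum(alpha))     # normalized weighted sum
```

Evaluating such a formula at every node of a 1-m grid is what makes the GIS integration attractive: each grid point yields one STI value that ArcMap can then render as a surface.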
4aAAd4. Auralization of a car pass-by using impulse responses computed with the pseudospectral time-domain method. Fotis Georgiou and
Maarten Hornikx (Built Environment, Eindhoven Tech. Univ., Rondom 70,
Eindhoven, Eindhoven 5612AP, Netherlands, f.georgiou@tue.nl)
Car noise is the main environmental noise source in the urban environment. In this paper, a method for auralization of a car pass-by in a street using a wave-based acoustic prediction method is explored. For the
transfer paths between sound source locations and a listener, binaural
impulse responses are computed with the pseudospectral time-domain
method for various source locations. A dry synthesized car signal is convolved with the binaural impulse responses of the different locations in the
street and cross-fade windows are used in order to make the transition
between the source positions smooth and continuous. The auralizations are
performed for the simplified scenarios where buildings are absent, and for
an environment where a long building block is located behind the car. A
subjective evaluation was carried out in order to detect the maximum spacing between the discrete source positions that still can produce a perceived
continuous car pass-by auralization.
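The convolve-and-crossfade step described above can be sketched as follows. This is a toy monaural illustration, not the authors' code: the sample rate, segment length, and random stand-ins for the dry car signal and the per-position impulse responses are all assumptions.

```python
import numpy as np

fs = 8000                      # sample rate (illustrative)
seg = 4096                     # crossfade segment length in samples
hop = seg // 2                 # 50% overlap: Hann windows sum to ~1
n_pos = 5                      # discrete source positions along the street
rng = np.random.default_rng(0)

dry = rng.standard_normal(hop * (n_pos - 1) + seg)   # stand-in dry car signal
irs = [rng.standard_normal(256) * np.exp(-np.arange(256) / 64.0)
       for _ in range(n_pos)]  # toy impulse response per source position

out = np.zeros(len(dry) + len(irs[0]) - 1)
win = np.hanning(seg)
for k, ir in enumerate(irs):
    # Window the segment belonging to position k, convolve it with that
    # position's IR, and overlap-add so neighboring positions crossfade.
    start = k * hop
    piece = np.convolve(dry[start:start + seg] * win, ir)
    out[start:start + len(piece)] += piece
```

In the binaural case the same loop runs once per ear with the corresponding binaural impulse responses; the subjective test in the abstract then asks how large `hop` (the source spacing) can be before the transition stops sounding continuous.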
traditional databases and newer in-situ methods, was carried out. This analysis utilized different architectural acoustic simulation software packages for
verification of results and further comparison. Initially, a shoebox model was compared for reference, and a brief comparison between the software packages was made. Then, the main analysis was conducted using the model
of the RWTH Aachen Institute of Technical Acoustics’ Seminar Room.
This room was simulated with all the different sets of absorption coefficients
through both simulation software packages. The purpose of this analysis
was to provide a reference for the variation of room acoustic parameters
from the different sources of absorption coefficients.
4aAAd5. A parametric model for the synthesis of binaural room
impulse responses. Philipp Stade, Johannes M. Arend, and Christoph Pörschmann (Inst. of Communications Eng., TH Köln, Betzdorfer Str. 2, Cologne 50679, Germany, philipp.stade@th-koeln.de)
4aAAd8. Evaluation of higher order sound particle diffraction with
measurements around finite sized screens in a semi-anechoic chamber.
Stefan Weigand (HafenCity Univ. Hamburg, Überseeallee 16, Hamburg 20457, Germany, stefan.weigand@hcu-hamburg.de), Lukas Aspöck (Inst. of Tech. Acoust., RWTH Aachen, Aachen, Germany), and Uwe M. Stephenson (HafenCity Univ. Hamburg, Hamburg, Germany)
4aAAd6. The calibration of an aural spatial mapping tool using an architectural approach to the soundwalk method: A validation study.
Merate A. Barakat (Faculty of Environ. Technol. - The Dept. of Architecture and the Built Environment, Univ. of West of England, Univ. of the
West of England - Frenchay Campus, Coldharbour Ln, Frenchay, Stoke Gifford, Bristol, Avon BS16 1QY, United Kingdom, merate.barakat@uwe.ac.uk)
A series of on-site surveys is conducted as part of the validation process of research seeking to create a tool that integrates theoretical spatial and soundscape design concepts, to aid architects when considering sound as a design driver for urban design. The investigation is founded on establishing a relationship between aural architecture theories and the urban spatial experience and design. The surveys are pattern-validation experiments that aim to observe possible qualitative aural pattern formations occurring within Covent Garden Market in London by using spatial measurements as fundamental parameters. The method assimilates the Soundwalk technique and the Relative Approach from the fields of soundscape and psychoacoustics, respectively, and integrates them within a customary architectural site survey proposed to map the sonic morphology of urban spaces. The experiment is designed to compare the tool's preliminary prediction patterns to in situ listening and the spectral patterns recorded. The patterns are assumed to deviate at this point because not all sound factors are considered, and the patterns are assumed qualitative. However, the discussed comparative process aims to establish value in the current state of this aural mapping tool, and establishing its limitations provides an opportunity for further development.
4aAAd7. Comparative analysis of absorption coefficients’ effect on
room acoustic parameters determined by simulation software. Gabriel
Murray (Iowa State Univ., 2074 Hawthorn Court Dr., Apt. 8236, Ames, IA
50010, glmurray@iastate.edu), Lukas Aspöck, and Michael Vorländer
(Inst. of Tech. Acoust., RWTH Aachen, Aachen, Germany)
A comparative analysis between different sets of absorption coefficients,
found or calculated from a variety of sources and methods, including
Noise prediction and room acoustic design rely on simulation methods
to model sound propagation. The sound particle simulation method (SPSM),
among others, is an increasingly popular choice to do so. In recent years, the
combination with the uncertainty based diffraction (UBD), where particles
are deflected according to edge bypass distances measured in wavelengths,
has helped to overcome typical high frequency limitations. To evaluate
SPSM with UBD, 3D simulations are compared to measurements in full
scale. As previous attempts at evaluation in real rooms suffered from difficulties in acquiring accurate material coefficients, measurements are conducted in a controlled environment. A semi-anechoic chamber allows the examination of
free-field conditions as well as combinations with floor and wall reflections.
This paper focuses on diffraction measurements around one or multiple
screens of finite size. This allows the examination of higher order diffraction
in combination with reflections. Impulse responses are measured for several
receivers around screens. From these impulse responses, acoustical parameters are calculated and compared to SPSM results.
4aAAd9. Investigating energy integration time limits for listener envelopment perception using a real-time adaptable hybrid impulse response
method. Evan M. Savage, Matthew T. Neal, and Michelle C. Vigeant
(Graduate Program in Acoust., The Penn State Univ., 201 Appl. Sci. Bldg.,
University Park, PA 16802, ems5779@psu.edu)
Listener envelopment (LEV), the sense of being immersed in a sound
field, is a common perception in concert hall acoustics, but more work is
needed to establish a metric. The objective of this research is to further
investigate LEV utilizing custom software building on Dick & Vigeant’s
previous research to understand what contributes most to LEV in a concert
hall (JASA 140:3175 2016). For this study, spatial room impulse response
(SRIR) measurements were obtained using a 32-channel spherical microphone array in different performance venues. These SRIRs were used in a
subjective listening test and processed for 3rd order Ambisonic reproduction
over a 30-loudspeaker array. Utilizing a custom testing interface developed
in Max 7, pairs of equal-length SRIRs with contrasting high and low LEV
were time-windowed and summed together to create a hybrid SRIR. A dial
in the interface controlled the hybrid SRIR’s time transition point, and subjects were asked to adjust the timing dial to the point at which they perceived the highest sense of LEV. Results will be presented showing the
integration limits that produced noticeable changes in LEV and implications
to the proposed LEV metric will be discussed. [Work was supported by
NSF Award 1302741.]
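The dial-controlled hybrid impulse response could be sketched as below. Everything here is a stand-in: the two "SRIRs" are synthetic single-channel noise decays rather than 32-channel measurements, and the transition time and fade length are arbitrary illustrative values.

```python
import numpy as np

fs = 48000
n = fs                                        # 1-s impulse responses
rng = np.random.default_rng(1)
decay = np.exp(-np.arange(n) / (0.5 * fs))    # exponential energy decay
ir_high = rng.standard_normal(n) * decay      # stand-in "high-LEV" response
ir_low = rng.standard_normal(n) * decay       # stand-in "low-LEV" response

def hybrid_ir(ir_a, ir_b, t_trans, fs, fade=0.005):
    """ir_a before t_trans, ir_b after, joined by a short linear crossfade."""
    k, f = int(t_trans * fs), int(fade * fs)
    w = np.ones(len(ir_a))
    w[k:k + f] = np.linspace(1.0, 0.0, f)     # fade ir_a out over `fade` s
    w[k + f:] = 0.0
    return ir_a * w + ir_b * (1.0 - w)

h = hybrid_ir(ir_high, ir_low, t_trans=0.08, fs=fs)
```

Turning the interface dial corresponds to re-evaluating `hybrid_ir` with a new `t_trans`, which is what lets subjects hunt for the transition point of maximum perceived envelopment in real time.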
4aAAd10. Measured and simulated room acoustic characteristics in
three concert halls with unique architectural geometry using beamforming techniques. Mojtaba Navvab (Architecture, Univ. of Michigan,
2000 Bonisteel Blvd., Art and Architecture Bldg., Ann Arbor, MI 48109-2069, moji@umich.edu) and Gunnar Heilmann (GFai Tech GmbH, Berlin-Adlershof, Berlin, Germany)
Sound reflections and the time delay between direct and reflected sounds are key variables that shape and contribute to the acoustic quality and listening experience within concert halls. To demonstrate these architectural space characteristics from a designer's standpoint, three internationally well-known halls are simulated and selectively measured for their acoustic characteristics utilizing beamforming techniques. The results are analyzed following the recommended procedures of the ISO 3382 standards. Realistic computer reconstructions of these concert halls provide the opportunity to examine the sound quality of each and the differences in their performance using the latest sound quality measures. Known indicators such as reverberation time, sound strength, center time, echo, clarity-definition for speech and music, speech transmission index, and articulation loss are used to show important acoustical features of these halls. The simulated and measured results provide supportive data toward the recognition of key architectural elements in each hall and show how the space geometry contributes to the high quality of sound experienced by an audience. Final results are offered as quantitative and qualitative indicators that identify the significant geometry and space volumes that shape listener experience in each hall. This research direction is significant for the future architectural design of concert halls and may contribute to the understanding of the acoustical effects of architectural elements and improve upon current standards.
Binaural room impulse responses (BRIRs) are often applied in spatial audio for the auralization of acoustical environments. In the same field of research, parametric audio coding is an established approach and part of different standards. The presented investigation aims for a parametric description of the sound field in order to synthesize BRIRs for a plausible auralization. The model focuses on the main features which characterize a BRIR as well as the acoustical environment. To this end, spherical microphone arrays are applied for a spatio-temporal acoustical analysis using spherical harmonics. Early reflections are determined with sound field decomposition techniques and are described by directional parameters. Diffuse components and the interaural coherence of the late reverberation are characterized by additional parameters. In two previous studies, the synthesis of the early and the late parts of BRIRs was elaborated and perceptually evaluated separately. Now both approaches are combined to synthesize entire BRIR datasets using the parametric approach. Fundamentals of the sound field analysis are explained and synthetic BRIRs are compared to their measured counterparts.
WEDNESDAY MORNING, 28 JUNE 2017
ROOM 207, 8:15 A.M. TO 12:20 P.M.
Session 4aAAe
Architectural Acoustics, Speech Communication, Signal Processing in Acoustics, Psychological and Physiological Acoustics, ASA Committee on Standards, and Engineering Acoustics:
Assistive Listening Systems in Assembly Spaces
Damian Doria, Cochair
Stages Consultants LLC, 75 Feather Ln., Guilford, CT 06437-4907
Thomas Burns, Cochair
Starkey Hearing Technologies, 6600 Washington Ave. S, Eden Prairie, MN 55344
Peter Mapp, Cochair
Peter Mapp Associates, Copford, Colchester CO6 1LG, United Kingdom
Stephen Dance, Cochair
School of the Built Environment and Architecture, London South Bank University, London South Bank University,
Borough Road, London SE1 0AA, United Kingdom
Chair’s Introduction—8:15
Invited Papers
8:20
4aAAe1. Enjoying the performing arts with hearing loss: Challenges, options, and a look to the future. Karrie Recker (Starkey,
6600 Washington Ave. S., Eden Prairie, MN 55344, karrie_recker@starkey.com)
The presence of hearing loss can severely limit an individual’s enjoyment of the performing arts by making soft sounds inaudible
and by distorting sounds that are supra-threshold. Because hearing loss is correlated with aging, it often co-occurs with other impairments such as vision loss and cognitive decline. Hearing aids and other assistive technologies can help, but they have limitations. This
talk will provide an overview of the most common type of hearing loss experienced by an aging population, explore the difficulties faced
by these individuals, summarize the current strategies for improving their listening experiences in these venues, and touch on emerging
research in this field.
8:40
4aAAe2. Some effects of microphone format and location on assistive listening system performance. Peter Mapp (Peter Mapp
Assoc., 101 London Rd., Copford, Colchester CO6 1LG, United Kingdom, peter@petermapp.com)
Approximately 10–14% of the general population (USA & Northern Europe) suffers from a noticeable degree of hearing loss and
would benefit from some form of hearing assistance or deaf-aid. However, many assistive listening systems do not provide the benefit
that they should, as they are often let down by their poor acoustic performance. The paper investigates the acoustic and speech intelligibility requirements for assistive listening systems and examines a number of microphone pick-up scenarios and configurations in terms
of their potential intelligibility and sound quality performance. The results of testing carried out in a number of rooms and venues are
presented, mainly in terms of the resultant Speech Transmission Index (STI) measurements. The paper concludes by providing a number
of recommendations and “rules of thumb” for optimal microphone formats and placement. Although the research has primarily been
directed towards Audio Frequency Induction Loop Systems, the acoustic aspects are equally applicable to other technologies such as
Infrared and Wireless Systems.
Contributed Paper
9:00
4aAAe3. The evidence and new technology of hearing loops. Steve Thunder (Listen Technologies, 14912 Heritage Crest Way, Bluffdale, UT 84065, steve.thunder@listentech.com) and Thomas Thunder (Assistive Hearing Systems, LLC, Huntley, IL)
In the last several years, the evidence has been growing to support a stronger adoption of hearing loops to overcome adverse room acoustics. We will explore this research and how it relates to the listening experience of people with hearing loss, including how new technology has been incorporated to further improve the user and installer experience. Will future technologies replace hearing loops and other traditional assistive listening? Come find out. Steve Thunder has a financial relationship to Listen Technologies, a manufacturer/distributor of assistive listening equipment. Tom Thunder has a financial relationship to Assistive Hearing Systems, a firm specializing in the installation of hearing loops.
Invited Papers
9:20
4aAAe4. Assistive listening systems in assembly spaces. James S. Badrak (Dr. Phillips Ctr. for the Performing Arts, 155 E. Anderson
St., Orlando, FL 32801, jim.badrak@drphillipscenter.org)
This paper will show the perspective of the facility owner and the challenges encountered in trying to offer premium assistive listening experiences. The recent opening of the Dr. Phillips Center for the Performing Arts in Orlando, FL, has brought to light many of these
challenges. Our experiences this past year included having to determine the viability of different systems and what type of system works
best in our spaces. The ultimate choice of system was different than what was originally installed and resulted from a process of working
with a local audiologist, patrons of the arts center, assistive listening manufacturers, and our house staff. This paper presents that process
along with our suggestions for further research and advancement of this technology.
9:40
4aAAe5. Degraded telecoil performance with assistive listening systems. Ryan T. Chester (ElectroAcoust. Eng., Starkey Hearing
Technologies, 5730 W 98 1/2 St. Circ, Bloomington, MN 55437, ryan_chester@starkey.com)
For more than 50 years, telecoils have been used in hearing aids to bypass the microphone, inductively couple to the telephone receiver, and improve audibility. Because they are widely used in European teleloop systems for public address applications, stringent
homologation testing is required before a device is accepted in any government contract. This testing includes benchmarking their directional behavior, frequency response, signal-to-noise ratio, and robustness to orientation within the inductive field. Optimizing a telecoil for one situation usually means a degraded signal-to-noise ratio and reduced bandwidth in other situations. This presentation will discuss the susceptibility of telecoils to poor performance due to environmental factors related to position and orientation.
10:00–10:20 Break
10:20
4aAAe6. Technical considerations for assistive listening applications using wireless, digital audio streams. Thomas Burns (Starkey
Hearing Technologies, 6600 Washington Ave. S, Eden Prairie, MN 55344, tburns@starkey.com)
Assistive listening applications can be grouped into two categories: public announcements and live reinforcement. Each has unique
needs. For public announcements, system temporal latencies of hundreds of milliseconds are irrelevant. For live reinforcement, temporal
latencies are much more critical—especially in large venues where the assistive signal must be synchronized with the visual presentation
and acoustic performance. If the assistive signal is recorded with the intent to preserve stereo imaging of the performance, binaural latencies are also critical; one millisecond can produce severe comb filtering while tens of milliseconds can produce echo. Digital audio
streams take time to encode, transmit, receive, decode, and present to the user. While reviewing the technical requirements for digital
audio streaming in this application, and to demo the effect of latency offsets, a G.722 wideband stream will be broadcast within the
room to a handful of binaural devices available to the earliest attendees.
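The comb-filtering claim above is easy to verify analytically: summing a signal with a copy of itself delayed by 1 ms places the first magnitude notch at 500 Hz and repeats notches every 1 kHz. A small check, assuming a 48 kHz sample rate (the rate is an assumption for illustration):

```python
import numpy as np

fs = 48000
delay = int(0.001 * fs)        # 1 ms latency offset, in samples
freqs = np.linspace(0.0, 4000.0, 401)

# Magnitude response of y[n] = x[n] + x[n - delay]:
# |H(f)| = |1 + exp(-j 2 pi f delay / fs)|
mag = np.abs(1.0 + np.exp(-2j * np.pi * freqs * delay / fs))

# Notches fall at odd multiples of fs / (2 * delay) = 500 Hz; peaks reach
# a factor of 2 (+6 dB) where the two copies add in phase.
first_notch_hz = fs / (2 * delay)
```

With tens of milliseconds of offset the notches become so dense that the effect is heard as a discrete echo rather than coloration, which is the distinction the abstract draws.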
Contributed Paper
10:40
4aAAe7. Speech coherence index with real speech and reverberation. Tobi A. Szuts and Roger W. Schwenke (Meyer Sound Labs., 2832 San Pablo Ave., Berkeley, CA 94610, tobi@meyersound.com)
Speech Coherence Index (SCI) is a proposed method of estimating speech intelligibility in real time with program material. The coherence of the complex-valued transfer function is used to estimate the signal-to-noise ratio on a per-frequency basis. The transfer function is calculated using short time windows at high frequencies and longer time windows at low frequencies to mimic the multi-resolution nature of human hearing. SCI has been shown to produce identical results to the Speech Transmission Index (STI) in the case of pure noise interference. Like STI, SCI has been shown to decrease with a single reflection at longer latencies or at greater magnitude. SCI always decreases monotonically with single-reflection latency, whereas STI varies up and down at extremely long latencies. For simulated reverberation, both STI and SCI have been shown to decrease with increasing reverberation time and reverberant level. However, SCI is more sensitive to the direct-to-reverberant level. This paper will compare SCI to STI under more realistic conditions, such as speech signals and real-world impulse responses. The effect of signal crest factor will also be examined.
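The coherence-to-SNR mapping behind such an index follows from the identity gamma^2 = SNR/(SNR + 1) for a signal in uncorrelated noise, so the SNR per frequency bin can be recovered as gamma^2/(1 - gamma^2). A numpy-only sketch of that relation, using a single window length rather than the multi-resolution analysis the abstract describes (signals and parameters are illustrative):

```python
import numpy as np

fs = 8000
n = fs * 8
rng = np.random.default_rng(2)
x = rng.standard_normal(n)                 # "program material" at the input
y = x + 0.5 * rng.standard_normal(n)       # output: true SNR = 1/0.25 = 4 (6 dB)

# Welch-style averaged spectra over non-overlapping Hann-windowed segments
nper = 512
segs = n // nper
win = np.hanning(nper)
X = np.fft.rfft(win * x[:segs * nper].reshape(segs, nper))
Y = np.fft.rfft(win * y[:segs * nper].reshape(segs, nper))

Sxx = np.mean(np.abs(X) ** 2, axis=0)
Syy = np.mean(np.abs(Y) ** 2, axis=0)
Sxy = np.mean(X * np.conj(Y), axis=0)

gamma2 = np.abs(Sxy) ** 2 / (Sxx * Syy)    # magnitude-squared coherence
snr_est = gamma2 / (1.0 - gamma2)          # per-frequency SNR estimate
```

Because only auto- and cross-spectra of the running program material are needed, the estimate can be updated continuously, which is what makes a real-time intelligibility monitor feasible.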
Invited Paper
11:00
4aAAe8. Improving the reproduction of an operatic performance in an IMAX cinema. Eric Ballestero and Stephen Dance (The
Built Environment and Architecture, London South Bank Univ., 103 Borough Rd., London SE1 0AA, United Kingdom, ericballestero@
outlook.com)
In 2016, the Acoustic Group was asked to assist the IMAX cinema in London with the aim of improving the immersive experience of listening to the streamed New York Metropolitan Opera. IMAX cinemas are designed and constructed to meet specific standards; they were not built to reproduce an operatic experience. The IMAX cinema at London Waterloo was built in 1999, before streaming was invented; as such, it was optimized for the IMAX film format only. Room acoustic measurements were taken in the room in accordance with ISO 3382-1 so that a CATT-Acoustic model of the auditorium could be calibrated. Auralization was then used in the model to create
a more immersive operatic experience. A questionnaire was then created so that the settings established from the auralization could be
subjectively tested using a two minute clip of Wagner. The paper will report on the results from three of the settings for 15 listeners in
the IMAX cinema.
11:20–12:20 Panel Discussion
WEDNESDAY MORNING, 28 JUNE 2017
ROOM 208, 8:20 A.M. TO 12:20 P.M.
Session 4aAAf
Architectural Acoustics: Simulation and Evaluation of Acoustic Environments I
Michael Vorländer, Cochair
ITA, RWTH Aachen University, Kopernikusstr. 5, Aachen 52056, Germany
Stefan Weinzierl, Cochair
Audio Communication Group, TU Berlin, Strelitzer Str. 19, Berlin 10115, Germany
Ning Xiang, Cochair
School of Architecture, Rensselaer Polytechnic Institute, Greene Building, 110 8th Street, Troy, NY 12180
Invited Papers
8:20
4aAAf1. Simulation of acoustic environments—Part 1: Computer models. Michael Vorländer (ITA, RWTH Aachen Univ., Kopernikusstr. 5, Aachen 52056, Germany, mvo@akustik.rwth-aachen.de)
Simulation and auralization techniques have been established in room acoustics applications for quite a while. Research projects are currently focusing on a coordinated effort to improve the complete signal chain, from the numerical modeling, through the data acquisition within numerical or real sound fields and the coding and transmission, to the electro-acoustic reproduction by binaural technology or by sound field
synthesis. Approaches for the comparative evaluation of real and simulated environments will enable the evaluation of the plausibility
and/or the authenticity of virtual acoustic environments. The state of the art is revisited and discussed along the series of the three past
“round robins on room acoustic computer simulation.”
8:40
4aAAf2. Simulation of acoustic environments—Part 2: Can we trust the computer? Michael Vorländer (ITA, RWTH Aachen Univ., Kopernikusstr. 5, Aachen 52056, Germany, mvo@akustik.rwth-aachen.de)
The lessons learned from the "round robins" are that the main bottlenecks in room acoustic computer simulations are the lack of data for the 3D characterization of sound sources and material parameters, and the interfaces to spatial audio technology. This presentation focuses on
sources of uncertainties in such computer models and on the challenges in solving indoor acoustic problems. Also, a new initiative
towards a fourth “round robin” on auralization is presented, which will be further elaborated in other papers of the special session.
9:00
4aAAf3. Simulation of acoustic environments for binaural reproduction using a combination of geometrical acoustics and
Boundary Element Method. Jonathan A. Hargreaves, Luke Rendell, and Yiu W. Lam (Acoust. Res. Group, Univ. of Salford, Newton
Bldg., Salford M5 4WT, United Kingdom, j.a.hargreaves@salford.ac.uk)
Auralization of a space requires measured or simulated data covering the full audible frequency spectrum. For numerical simulation,
this is extremely challenging, since that bandwidth covers many octaves in which the wavelength changes from being large with respect
to features of the space to being comparatively much smaller. Hence, the most efficient way of describing acoustic propagation changes
from wave descriptions at low frequencies to geometric ray and sound-beam energy descriptions at high frequencies. These differences
are reflected in the disparate classes of algorithms that are applied. Geometric propagation assumptions yield efficient algorithms, but
the maximum accuracy they can achieve is limited at low frequencies in particular. Methods that directly model wave effects are more
accurate but have a computational cost that scales with problem size and frequency, thereby limiting them to small or low frequency scenarios. Hence, it is often necessary to operate two algorithms in parallel to handle the complete bandwidth. Here, we utilize Boundary
Element Method as the low frequency method in such a scheme. Using the SEACEN Round Robin scenarios as case studies, this paper
will discuss challenges including: representing source directivity; choosing suitable boundary condition data; encoding BEM results for
binaural presentation.
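One common way to merge the outputs of the two solver classes, sketched here with toy impulse responses, is a complementary spectral crossover. The 500 Hz crossover frequency, the filter shape, and the synthetic responses are all assumptions for illustration, not the authors' choices.

```python
import numpy as np

fs = 48000
n = 4096
rng = np.random.default_rng(3)
t = np.arange(n)
ir_wave = rng.standard_normal(n) * np.exp(-t / 800.0)   # low-freq (BEM) result
ir_geom = rng.standard_normal(n) * np.exp(-t / 800.0)   # high-freq (GA) result

fc = 500.0                                   # crossover frequency (assumed)
freqs = np.fft.rfftfreq(n, 1.0 / fs)
lp = 1.0 / (1.0 + (freqs / fc) ** 8)         # smooth complementary crossover
hp = 1.0 - lp

# Take the wave-based result below fc and the geometric result above fc.
combined = np.fft.irfft(np.fft.rfft(ir_wave) * lp + np.fft.rfft(ir_geom) * hp, n)
```

Because `lp + hp = 1` at every frequency, the two partial solutions sum transparently wherever their predictions agree; discrepancies around the crossover are exactly where the choice of boundary data and source directivity discussed above matters most.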
9:20
4aAAf4. Finite volume method room acoustic simulations integrated into the architectural design process. Finnur Pind (Henning Larsen Architects, Vesterbrogade 76, Copenhagen 1620, Denmark, fpin@henninglarsen.com), Cheol Ho Jeong (Acoust. Technol., Tech. Univ. of Denmark, Lyngby, Denmark), Allan P. Engsig-Karup (DTU Compute, Tech. Univ. of Denmark, Copenhagen, Denmark), and Jakob Strømann-Andersen (Henning Larsen Architects, Copenhagen, Denmark)
In many cases, room acoustics are neglected during the early stage of building design. This can result in serious acoustical problems that could have been easily avoided and can be difficult or expensive to remedy at later stages. Ideally, the room acoustic design should interact with the architectural design from the earliest design stage, as a part of a holistic design process. A new procedure to integrate room acoustics into architectural design is being developed in a Ph.D. project, with the aim of promoting this early-stage holistic design process. This project aims to develop a new hybrid simulation tool combining wave-based and geometrical acoustics methods. One of the important aspects is the flexibility to represent realistic geometric shapes, for which the finite volume method (FVM) is chosen for the wave-based part of the tool. As a starting point, the computational efficiency of high-order two-dimensional FVM for defining an efficient wave-based simulation tool is investigated. Preliminary two-dimensional FVM simulation results are presented, which illuminate the suitability for handling complex geometries compared to other wave-based simulation methods.
Invited Papers
9:40
4aAAf5. Auralizations with loudspeaker arrays from a phased combination of the image source method and acoustical radiosity.
Gerd Marbjerg (Acoust. Technol., Tech. Univ. of Denmark, Ørsteds Plads 353, Kongens Lyngby 2800, Denmark, ghmar@elektro.dtu.dk), Jonas Brunskog, Cheol Ho Jeong (Acoust. Technol., Tech. Univ. of Denmark, Kgs. Lyngby, Denmark), and Valentina Zapata-Rodriguez (DGS Diagnostics, InterAcoust. A/S, Kgs. Lyngby, Denmark)
In order to create a simulation tool that is well-suited for small rooms with low diffusion and highly absorbing ceilings, a new room
acoustic simulation tool has been developed that combines a phased version of the image source method with acoustical radiosity and that considers the angle dependence of the surface properties. The new tool is denoted PARISM, and here PARISM is used to create loudspeaker
array-based auralizations. Different auralization techniques are compared, such as Ambisonics, vector-based panning, and the method of
nearest loudspeaker. The implementations of the auralization techniques with PARISM are described and compared to implementations
of auralizations with another geometrical acoustic simulation tool, i.e., ODEON and the LoRA toolbox that applies Ambisonics to
ODEON simulations. In contrast to the LoRA toolbox, higher order Ambisonics are also applied to the late part of the PARISM
impulse response, because more directional information is available with acoustical radiosity. Small rooms with absorbing surfaces are
tested, because this is the room type that PARISM is particularly useful for.
Contributed Paper
10:00
4aAAf6. Evaluating simulations of acoustic environments by learning time-frequency kernels that optimally separate classes of
simulated signals. Jason E. Summers (ARiA, 1222 4th St. SW, Washington, DC 20024-2302, jason.e.summers@ariacoustics.com), Jonathan Botts (ARiA, Culpeper, VA), and Charles F. Gaumond (ARiA, Washington, DC)
Previously, the authors have presented techniques for classifying the aural accuracy of computed impulse responses for room-acoustic
simulation and for sonar-operator training using two-dimensional distance metrics operating on time-frequency representations (TFRs) calculated using computational auditory models and by a generalized TFR formed by an affine radial-Gaussian correlation kernel operating on
the Wigner-Ville distribution (i.e., Cohen’s class), with kernel parameters optimized for binary classification [J. E. Summers et al., J.
Acoust. Soc. Am. 115, 2514 (A) (2004); J. E. Summers et al., Proc. 19th Int. Cong. Acoust., 8, 4591-4596, RBA-05-015-IP (2007)]. Here,
we revisit the use of classification-optimized TFR for distinguishing between measured and simulated impulse responses and between
impulse responses simulated by different computational approaches in the domains of room-acoustics and submerged target scattering. The
learned parameters of the optimized TFR kernels are investigated with respect to the information they provide about the nature of the differences between impulse-response classes, particularly as they indicate temporal and spectral scales of those phenomena that particular computational models fail to represent. [Work supported by ARiA IR&D and the Office of Naval Research.]
10:20–10:40 Break
10:40
4aAAf7. Effects of two different modeling alternatives of diffusive surfaces on the objective and perceptual evaluation of the
simulated sound fields. Louena Shtrepi (Dept. of Energy, Politecnico di Torino, C.so Duca degli Abruzzi 24, Torino 10129, Italy,
louena.shtrepi@polito.it), Arianna Astolfi (Dept. of Energy, Politecnico di Torino, Turin, Italy), Giuseppina E. Puglisi, and Marco C.
Masoero (Dept. of Energy, Politecnico di Torino, Torino, Italy)
Different acoustic simulation software packages have been substantially improved over the last decades, accounting for the absorptive and diffusive acoustic properties of different surfaces. Whereas the characteristics of absorptive surfaces have been thoroughly investigated and implemented, diffusive surface features are still subject to continuous research. So far, the degree of modeling detail of diffusive surfaces, and consequently the effect that this has on the objective and subjective accuracy of simulations, remains an open question. In order to
give more insight on this aspect, Odeon acoustic simulations have been performed in the model of a variable-acoustic concert hall using
a diffusive and a reflective condition of one of the lateral walls. Two different modeling alternatives of the wall diffusive condition have
been investigated. Objective acoustic parameters, such as early decay time (EDT), reverberation time (T30), clarity (C80), definition
(D50), and interaural cross correlation (IACC), have been compared between the two conditions. Furthermore, a subjective investigation
has been performed in order to determine the most accurate modeling alternative of the diffusive surfaces. Although the simulated objective results showed a good match with measured values, the subjective results highlighted significant perceptual differences for different
modeling alternatives.
Contributed Paper
11:00
4aAAf8. Estimating the diffuseness of sound fields: A wavenumber analysis method. Melanie Nolan (Acoust. Technol., Elec. Eng., Tech. Univ. of
Denmark, Ørsteds Plads, Bldg. 352, Kgs. Lyngby 2800, Denmark, melnola@elektro.dtu.dk), John L. Davy (School of Appl. Sci., RMIT, Melbourne, VIC, Australia), and Jonas Brunskog (Acoust. Technol., Elec. Eng.,
Tech. Univ. of Denmark, Kgs. Lyngby, Denmark)
The concept of a diffuse sound field is widely used in the analysis of sound in enclosures. The diffuse sound field is generally described as being composed of plane waves with random phases, whose wavenumber vectors are uniformly distributed over all angles of incidence. In this study, an interpretation in the spatial frequency domain is discussed, with the prospect of evaluating diffuse-field conditions in non-anechoic enclosures. This work examines how theoretical considerations compare with experimental results obtained in rooms with various diffuse-field conditions. In addition, the paper investigates how the results relate to the modal theory of room acoustics, based on the notion that any mode, even in non-rectangular rooms, can be expanded into a number of propagating waves.
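The random-phase plane-wave model described in the abstract above can be sketched numerically. The following is an illustrative sketch only, not the authors' method; all parameter values (number of waves, frequency, sampling line) are hypothetical:

```python
import numpy as np

def diffuse_field_sample(xyz, k, n_waves=512, rng=None):
    """Sample a model diffuse field at points xyz (N x 3): a superposition of
    unit-amplitude plane waves with random phases, whose wavenumber vectors
    are drawn uniformly over all directions of incidence."""
    if rng is None:
        rng = np.random.default_rng()
    # Uniformly distributed propagation directions on the unit sphere
    v = rng.normal(size=(n_waves, 3))
    v /= np.linalg.norm(v, axis=1, keepdims=True)
    phases = rng.uniform(0, 2 * np.pi, n_waves)
    # Complex pressure: sum over waves of exp(i(k . x + phi)), normalized
    return np.exp(1j * (xyz @ (k * v.T) + phases)).sum(axis=1) / np.sqrt(n_waves)

# e.g., sample the field along a 1-m line at 1 kHz in air (c = 343 m/s)
k = 2 * np.pi * 1000 / 343
x = np.stack([np.linspace(0, 1, 100), np.zeros(100), np.zeros(100)], axis=1)
p = diffuse_field_sample(x, k)
```

A spatial Fourier transform of such samples concentrates energy on the radiation circle |k| = omega/c, which is the wavenumber-domain signature of diffuseness that the analysis method evaluates.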
Invited Papers
11:20
4aAAf9. Sound propagation through an aperture with edge diffraction modeling. U. Peter Svensson (Electron. Systems, Acoust. Res. Ctr., Norwegian Univ. of Sci. and Technol., Trondheim NO-7491, Norway, peter.svensson@ntnu.no), Andreas Asheim (Mathematical Sci., Norwegian Univ. of Sci. and Technol., Trondheim, Norway), and Sara R. Martín (Ctr. for Comput. Res. in Music and Acoust. (CCRMA), Stanford Univ., Stanford, CA)
The scattering of sound by convex bodies with rigid surfaces can be modeled accurately, without any wavenumber restriction, using an edge source integral equation (ESIE) [A. Asheim and U. P. Svensson, J. Acoust. Soc. Am. 133, 3681-3691 (2013)]. This edge diffraction-based approach has known limitations for non-convex scattering geometries, and the most challenging case might be an aperture in a thin screen. For such apertures, the ESIE predicts that only first-order diffraction occurs, because the ESIE does not include the so-called slope diffraction effect. This effect occurs where the diffraction wave vanishes but its derivative is different from zero. This paper focuses on low-frequency sound transmission through apertures in screens, as well as into ducts. A reference solution is established with a boundary element (BE) method for the particle velocity field in the aperture, which illustrates the singular behavior near the edges. The field in the aperture is also computed with first-order diffraction modeling. Possibilities to use this first-order diffraction modeling as part of a BE solution are then presented. Such an approach could allow a BE model to be used only as a low-frequency correction to the diffraction model.
11:40
4aAAf10. Exploring the physical theory of diffraction for solutions of wedge fields in room-acoustic simulation. Ning Xiang and
Aleksandra Rozynova (Graduate Program in Architectural Acoust., Rensselaer Polytechnic Inst., Greene Bldg., 110 8th St., Troy, NY
12180, xiangn@rpi.edu)
The need to develop accurate and cost-effective room-acoustic simulations has prompted recent investigations of edge diffraction, with some progress on solving multiple-order diffractions of finite wedges. This work explores an alternative approach based on the physical theory of diffraction [P. Ya. Ufimtsev, J. Acoust. Soc. Am. 120, 631-635 (2006)], which is well suited for solving acoustic scattering problems from reflecting objects for room-simulation purposes. Although approaches based on the principles of geometrical acoustics (GA) are widely applied, they are less suitable near geometrical shadow boundaries. The physical theory of diffraction still relies on both geometrical and physical principles, yet emphasizes the physical ones. One of the important features of this physical acoustics (PA) approach is the ability to calculate the sound field more accurately near shadow boundaries. To this end, exact and asymptotic solutions of wedge fields are discussed for secondary edge sources induced by the incident wave. This paper will discuss implemented results for several canonical cases, with emphasis on the Neumann boundary condition.
Contributed Paper
12:00
4aAAf11. Secondary surface sources at rigid wedges based on the physical theory of diffraction. Aleksandra M. Rozynova and Ning Xiang (Architecture, Rensselaer Polytechnic Inst., Greene Bldg., 110 8th St., Troy, NY
12180, sandrarozynova@gmail.com)
In the context of room-acoustic simulations, diffraction by obstacles with edges still poses challenges. In order to create efficient and accurate room-acoustic simulations, this work focuses on solutions of diffraction by finite wedges. The solutions are based on the physical theory of diffraction [P. Ya. Ufimtsev, J. Acoust. Soc. Am. 120, 631-635 (2006)]. A detailed discussion of integral and asymptotic expressions for the diffracted field radiated by secondary wedge sources will help elaborate problem-solving strategies for grazing incidence in the exact and asymptotic solutions of wedge fields. Because an asymptotic solution is orders of magnitude more computationally efficient than the exact solution based on the physical theory of diffraction, any possible way of reducing asymptotic errors without significantly increasing the computational load is critically important in practical room simulations. This paper will discuss an implementation based on the physical theory of diffraction. For some canonical wedge configurations, this paper will also elaborate on asymptotic errors as a function of incident source distance and direction.
WEDNESDAY MORNING, 28 JUNE 2017
ROOM 206, 11:00 A.M. TO 12:20 P.M.
Session 4aAAg
Architectural Acoustics: Topics in Architectural Acoustics Related to Measurements I
Ian B. Hoffman, Chair
Judson Univ. - Dept. of Architecture, 1151 N State Street, Elgin, IL 60123
Contributed Papers
11:00
4aAAg1. Using color mapping of sound intensity measurement to detect
and identify sound isolation weak spots. Jean-François Latour (Acoust.
and vibrations, SNC-Lavalin, 2271 Fernand-Lafontaine, Longueuil, QC J5G
2R7, Canada, jean-francois.latour@snclavalin.com)
Typically, the ears of a trained acoustician can easily detect and identify the weakest element of a construction system when performing a sound isolation measurement. This typical approach to identifying the path that needs to be addressed in order to increase sound isolation performance is, however, subjective. Because of this subjectiveness, the approach is sometimes challenged by skeptics and those who have limited knowledge of acoustics. In those situations, a more objective approach and a better communication tool are required. For that purpose, a simple gridded sound intensity measurement combined with a color-mapping algorithm is a powerful tool to illustrate sound isolation weak spots. Examples are presented to show how sound isolation deficiencies (or the absence thereof) can become visually apparent to building designers, constructors, and managers with the use of sound intensity color mapping.
Acoustics ’17 Boston
3785
11:20
4aAAg2. Experimental investigations on two potential sound diffuseness
measures in enclosures. Xin Bai, John Herder, and Ning Xiang (Architectural Acoust., Rensselaer Polytechnic Inst., Greene Bldg., 110 8th St., Troy,
NY 12180, baix2@rpi.edu)
This study investigates two different approaches to measuring sound field diffuseness in enclosures from monophonic room impulse responses. One approach quantifies diffuseness by calculating the kurtosis of the pressure samples of room impulse responses. Kurtosis is a statistical measure known to describe the peakedness or tailedness of the distribution of a set of data; high kurtosis indicates low diffuseness of the sound field of interest. The other relies on multifractal detrended fluctuation analysis, which evaluates the statistical self-affinity of a signal as a measure of diffuseness. To test these two approaches, room impulse responses are obtained under varied room-acoustic "diffuseness" configurations, achieved by using varied degrees of diffusely reflecting interior surfaces. This paper will analyze experimentally measured monophonic room impulse responses and discuss results from the two approaches.
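As a minimal numerical sketch (not the authors' implementation), the kurtosis measure described above can be computed directly from the pressure samples of an impulse response. The synthetic signals below are hypothetical and serve only to illustrate that a Gaussian-like reverberant tail yields kurtosis near 3, while a sparse, spiky response yields much higher values:

```python
import numpy as np

def kurtosis(ir: np.ndarray) -> float:
    """Kurtosis of the pressure samples of a room impulse response.

    A fully diffuse (Gaussian-like) reverberant tail gives a value near 3;
    strong isolated reflections push it well above 3 (low diffuseness).
    """
    p = ir - np.mean(ir)
    return float(np.mean(p**4) / np.mean(p**2) ** 2)

# Illustration with synthetic signals (hypothetical, for intuition only):
rng = np.random.default_rng(0)
diffuse = rng.normal(size=48000)     # Gaussian tail -> kurtosis near 3
sparse = np.zeros(48000)
sparse[::4800] = 1.0                 # a few strong spikes -> high kurtosis
print(kurtosis(diffuse), kurtosis(sparse))
```

The normalization here is the raw (Pearson) kurtosis; some libraries report excess kurtosis, which subtracts the Gaussian reference value of 3.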
11:40
4aAAg3. Are sample rates for wave file recordings too low for transient signals? Steven E. Cooper (The Acoust. Group, 22 Fred St., Lilyfield, NSW 2040, Australia, drnoise@acoustics.com.au)
While the concept of higher sample rates is understood for the recording of high frequencies, measurement of the infrasound pulsations associated with wind turbines found that higher sample rates were required (than normally expected) to reproduce the time signals associated with dynamically pulsed amplitude modulation occurring at an infrasound rate. An assessment of transient early decays in room acoustics likewise found the need for higher sampling rates to capture such transients. Audibly, one can detect the difference in such transients. The results of the above investigations are discussed.
12:00
4aAAg4. Comparative analysis of reverberation time methodology in residential applications. Jacob Watrous, Sean Harkin, and Bonnie Schnitta (Eng., SoundSense, LLC, PO Box 1360, Wainscott, NY 11975, jacob@soundsense.com)
Impulsive sounds, such as balloon pops or the slapping of wood boards, are utilized by acoustical consultants to measure the reverberation time (T60) for residential applications with room volumes typically less than 150 cubic meters. However, these methods are not approved for measuring T60 for use in the commonly practiced ASTM E336 (ASTC) and E1007 (AIIC) tests. Instead, the ASTM measurement of T60 requires a large loudspeaker and gated noise in order to gain performance in the low frequencies. Utilizing gated noise requires additional equipment to be carried in the field for AIIC and general T60 testing, making it a less attractive option for acoustical consultants. This paper compares T60 data collected utilizing various signal sources, such as gated noise, balloon pops, and wood slaps, at various source and receiving locations in rooms of varying volumes in order to compare the performance of various T60 measurement methods for use in ASTC and AIIC testing.
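The T60 comparisons discussed in session 4aAAg4 rest on estimating reverberation time from a measured decay. A common approach, sketched below under simplifying assumptions (broadband, single channel, no ASTM-specific procedure implied), is Schroeder backward integration followed by a line fit over part of the decay and extrapolation to 60 dB:

```python
import numpy as np

def t60_from_ir(ir, fs, fit_range=(-5.0, -35.0)):
    """Estimate T60 from an impulse response via Schroeder backward
    integration, fitting a line over fit_range (dB) of the energy decay
    curve and extrapolating to -60 dB (a T30-style estimate)."""
    energy = np.cumsum(ir[::-1] ** 2)[::-1]      # Schroeder integral
    edc = 10 * np.log10(energy / energy[0])      # energy decay curve, dB
    hi, lo = fit_range
    idx = np.where((edc <= hi) & (edc >= lo))[0]
    t = idx / fs
    slope, _ = np.polyfit(t, edc[idx], 1)        # decay rate, dB per second
    return -60.0 / slope

# Synthetic exponential decay with a known T60 of 0.5 s (illustration only)
fs, t60_true = 48000, 0.5
t = np.arange(int(fs * 1.0)) / fs
ir = np.exp(-6.91 * t / t60_true) * np.random.default_rng(0).normal(size=t.size)
print(round(t60_from_ir(ir, fs), 2))
```

The fit range is a free choice; standards typically prescribe specific evaluation ranges (e.g., -5 to -25 dB for T20, -5 to -35 dB for T30) and per-band filtering, which this sketch omits.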
WEDNESDAY MORNING, 28 JUNE 2017
ROOM 313, 7:55 A.M. TO 12:20 P.M.
Session 4aAB
Animal Bioacoustics: Fish Bioacoustics I: Session in Honor of Anthony Hawkins and Arthur Popper
Joseph A. Sisneros, Cochair
Psychol. Dept., Univ. of Washington, 337 Guthrie Hall, Seattle, WA 98195
Michaela Meyer, Cochair
Neurology, Boston Children’s Hospital, Harvard Medical School, 3 Blackfan Circle, Center for Life Sciences,
14th Floor, Room 14021, Boston, MA 02115
Chair’s Introduction—7:55
Invited Papers
8:00
4aAB1. Anthony Hawkins—A pioneer of fish bioacoustics. Joseph A. Sisneros (Psychol. Dept., Univ. of Washington, 337 Guthrie
Hall, Seattle, WA 98195, sisneros@uw.edu)
For over 50 years, since his first description of underwater sounds made by the haddock (Melanogrammus aeglefinus), Anthony (Tony) Hawkins has investigated numerous topics in fish bioacoustics and has been a leading pioneer in examining the production and reception of sound by fishes and, more recently, the impacts of anthropogenic sounds on fishes. Tony has worked on a diverse number of fish species, including commercially important ones such as the cod (Gadus morhua) and the Atlantic salmon (Salmo salar). He has investigated important bioacoustic research topics ranging from the acoustic properties of swim bladders, to how behaviorally relevant sounds can be masked by sea noise, to the directional hearing sensitivities and capabilities of fishes. Throughout his career, Tony has strived to perform behavioral experiments on fish under the appropriate biologically relevant acoustic conditions in the natural environment, which can often be a more difficult but also an effective and rewarding approach to conducting fish bioacoustic experiments. His classic experiments and important research findings on fish hearing are still referenced today and are regularly used in environmental impact statements to estimate the hearing capabilities of fishes.
8:20
4aAB2. Arthur Popper’s contribution to bioacoustics. Michaela Meyer (Neurology, Boston Children’s Hospital, Ctr. for Life Sciences, Harvard Med. School, 3 Blackfan Circle, 14th Fl., Rm. 14021, Boston, MA 02115, meyerghose@gmail.com)
During a research career spanning over 40 years, Arthur Popper has had a profound impact on the field of bioacoustics. He studied a broad range of vertebrate taxa, focusing on fish but also including amphibians, reptiles, and mammals, and published over 200 papers. His work has improved our knowledge of the morphology of vertebrate inner ears and has significantly advanced our understanding of the evolution of vertebrate hearing. In recent years, Dr. Popper has focused his research on the important question of how underwater noise impacts aquatic vertebrate hearing. A long-standing and fruitful collaboration with his friend and colleague Richard Fay has resulted in numerous scientific papers and nearly 60 books, including the well-known Springer Handbook of Auditory Research (SHAR) series.
8:40
4aAB3. Acoustic signaling in fish: Does it contribute to increased fitness? M. Clara P. Amorim (Mare – Marine and Environ. Sci.
Ctr., ISPA-Instituto Universitario, Rua Jardim do Tabaco 34, Lisboa 1149-041, Portugal, amorim@ispa.pt)
Animals often use acoustic signals when competing for limited resources. However, knowledge of how vocal behavior contributes to individual fitness in fish lags considerably behind that for terrestrial taxa. Here, I provide examples of how fish acoustic signals may confer an advantage in gaining access to food and mates. Fish sounds provide information on fish motivation and quality, which is relevant for mutual assessment during agonistic interactions, such as competitive feeding, and during mate choice. Fish sounds also play a role in synchronizing reproductive behavior and gamete release to maximize external fertilization. Considering that anthropogenic noise is increasingly changing the natural soundscape that has shaped fish acoustic signals, there is an urgent need to better understand the importance of acoustic communication for fish survival and fitness.
9:00
4aAB4. Understanding what the fish ear tells the fish brain. Peggy L. Edds-Walton (S.E.E., Sci. Education and Enrichment, Riverside, CA 92506, seewalton@gmail.com)
A. N. Popper (1977, 1981) described the organization of sensory hair cells on the otolithic end organs of closely related and taxonomically distant fishes and stimulated decades of research on the functional significance of the great diversity he observed. Comparative physiological studies (many featured in the reviews found in SHAR volumes edited by Popper and Fay, published by Springer-Verlag beginning in 1992) have revealed that auditory processing in fishes has much in common with hearing in other vertebrates. In addition, Arthur Popper, his collaborators, and his students have documented that the auditory information encoded by the ear provides the fish with sensitivity to sound frequency, level, and source direction. Specific examples will be provided. Popper has also been instrumental in educating the public, a Pulitzer Prize-winning columnist (Dave Barry), and environmental regulators about fish hearing, most importantly conveying the message that understanding hearing and the role of sound in the normal behavior of fishes is necessary for the conservation of marine and freshwater ecosystems.
9:20
4aAB5. Popper, Hawkins, Fechner, Weber: Recalling the importance of experimental psychophysics in the study of environmental noise. Robert C. Gisiner (IAGC, 1225 North Loop West, Houston, TX 77008, bob.gisiner@iagc.org)
To bring the study of internal experiences into the reach of science, pioneers like Fechner and Weber framed a new branch of science
called psychophysics. Current leaders such as Art Popper and Tony Hawkins have illustrated through their research the power of psychophysical methods in parsing a complex external and internal world into lawlike patterns of sensory performance. Recent studies of
human influences at ecosystem and global scales have focused increasingly on our ability to use sophisticated statistical methods to analyze large complex data sets collected opportunistically, often without an a priori hypothesis. This celebration of the careers of two leading scientists in this field is an excellent opportunity to remind ourselves of the value of parsing complex phenomena into their
constituent parts; framing simple hypotheses for relatively simple statistical analysis; and perhaps most important of all, using experimental methods to minimize sources of bias inherent in the study of internal states not available to direct observation, such as the behavioral effects of manmade sound.
9:40
4aAB6. Designing a scalable passive acoustics network. David Mann (Loggerhead Instruments Inc., 6576 Palmer Park Circle, Sarasota, FL 34238, dmann@loggerhead.com)
To realize the potential of passive acoustics to study animal distribution and behavior over large spatial and temporal scales, systems
capable of processing acoustic data remotely and transmitting those results to cloud systems are needed. Passive acoustic recorders have
been important for allowing researchers to collect large amounts of data over long time periods, but they allow the decision of how to analyze the data to be delayed until after it is collected. Given the availability of inexpensive computing and cloud connectivity, one key
challenge in developing a scalable passive acoustics network is deciding what is important to know. A prototype scalable passive acoustics node to detect dolphin whistles and measure sound levels corresponding to fish sounds, boat noise, and snapping shrimp has been
developed. The decision was made to bin data over 10 minute periods (e.g., whistles per 10 minutes), and store only small amounts of
raw data for quality control. The system takes advantage of recent developments in consumer level Internet of Things (IoT) tools so that
the hardware is inexpensive and the cloud system can handle very large numbers of nodes.
10:00
4aAB7. Vertebrate ground plan for the evolution of vocal-acoustic communication. Andrew Bass (Neurobiology and Behavior, Cornell Univ., Mudd Hall, Tower Rd., Ithaca, NY 14853, ahb3@cornell.edu)
A central goal of comparative and evolutionary neurobiology is to establish how variation in neuronal phenotypes translates into naturally selected diversity in behavioral performance. In this context, can we establish how the vocal-acoustic phenotypes of tetrapods were built over evolutionary time by identifying behavioral, anatomical, and physiological characters in sound-producing (vocal) fish? Studies of teleost fish, in particular, now provide a comprehensive investigation of the behavioral, neural, and hormonal mechanisms of acoustic communication in a vocal vertebrate at multiple levels of biological organization. Using "champion species," this presentation will discuss the extent to which the collective evidence supports the hypothesis that three vocal-acoustic character states are shared between highly vocal species of teleost fish and tetrapods: vocal motor patterning and sequencing, vocal-auditory coupling, and social context-dependent modulation by hormones. [Research support from NSF (IOS 1457108).]
10:20–10:40 Break
10:40
4aAB8. Directional hearing in fishes. Zhongmin Lu (Biology, Univ. of Miami, 5151 San Amaro Dr., Cox Annex 208, Coral Gables,
FL 33146, zlu@miami.edu)
The ability to determine the direction of sound sources is a fundamental function that has been commonly observed in many vertebrate species in all major groups, from fish to mammals. It is well known that terrestrial vertebrates, including humans, use binaural cues such as interaural time and intensity differences to localize sound sources. However, how fish perform sound localization has puzzled researchers for many decades, since Karl von Frisch. In this talk, I will highlight some behavioral, anatomical, and physiological work addressing the question of directional hearing in teleost fishes, primarily focusing on two species, the oscar (Astronotus ocellatus) and the sleeper goby (Dormitator latifrons), during my postdoctoral research with Art Popper at the University of Maryland, College Park, and later in my lab at the University of Miami. Most of the studies were conducted using the Fay shaker apparatus, which can provide directional stimulation simulating underwater acoustic particle motion in three-dimensional space. Results of this work help us understand behavioral detection ability and peripheral neural encoding of acoustic particle motion in fish.
11:00
4aAB9. Passive acoustic monitoring of haddock in the Gulf of Maine. Rodney A. Rountree (23 Joshua Ln., Waquoit, MA 02536,
rrountree@fishecology.org), Katie A. Burchard (Cooperative Res. Study Fleet Program, Northeast Fisheries Sci. Ctr., Narragansett, RI),
Xavier Mouy (JASCO Appl. Sci., Victoria, BC, Canada), Clifford A. Goudey (C.A. Goudey & Assoc., Newburyport, MA), and Francis
Juanes (Biology, Univ. of Victoria, Victoria, BC, Canada)
We have conducted several studies of haddock sounds in the Gulf of Maine (GOM) with mixed results. An analysis of an archival recording from captive haddock brood stock made in 1970 found that the “spawning rumble” sound occurred variously at the end of short
thump trains, in the middle of thump trains, or in isolation. Interestingly, haddock were silent while spawning when we attempted to record sounds in the same facility in March 2000, suggesting that sound production may be negatively affected by chronic noise. Haddock
sounds were absent in ROV and tethered-instrument surveys in the summer and fall of 2001-2002. During 2006-2007, we deployed bottom-mounted recorders while conducting long-line surveys of haddock spawning condition. Surprisingly few haddock sounds were
detected and there was no correlation with spawning activity despite recording in highly active spawning areas. Haddock sounds consisted of isolated knocks, which were difficult to distinguish from thumps of unknown origin. We are now applying autodetection algorithms tuned to these data sets to extensive recordings made on the fishing grounds in 2003-2004. Our observations suggest that GOM
haddock spawn in small isolated groups within a larger spawning area and their sounds are detectable only over short distances.
11:20
4aAB10. Revisions to the sound exposure guidelines for fish and sea turtles report. Michele B. Halvorsen (CSA Ocean Sci. Inc,
8502 SW Kansas Hwy, Stuart, FL 34997, mhalvorsen@conshelf.com), Arthur N. Popper (Dept. of Biology, Univ. of Maryland, College
Park, MD), Anthony D. Hawkins (Loughine Marine Res., Aberdeen, United Kingdom), David Mann (Loggerhead Instruments, Sarasota,
FL), and Thomas J. Carlson (ProBioSound LLC, Holmes Beach, FL)
Anthropogenic underwater sounds can impact aquatic life. Adherence to the Marine Mammal Protection Act (MMPA), the Endangered Species Act (ESA), and the Magnuson-Stevens Fishery Conservation and Management Act (MSA) requires a risk assessment of the potential effects of underwater noise. Procedures for evaluating the risk to marine mammals (MMPA) are increasingly sophisticated, and quantitative science-based criteria for mammals were published in 2007 by Southall et al. The need for equivalent criteria for fishes and sea turtles (ESA and MSA) led to the creation of our expert working group, co-led by Professors Arthur Popper and Richard Fay, pillars in the USA for everything fish, while Professor Tony Hawkins, a pillar in the EU for everything fish, brought his own unique perspectives and broad expertise in fish hearing studies and sound exposure. The results from our working group are described from its inception in 2004 to the 2014 publication of its findings as an ANSI/ASA report. The report provides broad sound exposure guidance based on the best available scientific information for fishes and sea turtles in a series of tables specific to each sound source. It is important to keep this information current, and portions of the tables have recently been revised.
11:40
4aAB11. Effect of environmental toxins on the fish lateral line. Allison Coffin (Integrative Physiol. and Neurosci., Washington State
Univ., 14204 NE Salmon Creek Ave., Vancouver, WA 98686, allison.coffin@wsu.edu)
Fishes use their lateral line system to sense nearfield water movement associated with both abiotic and biotic sources. Given the
external location of lateral line sensory organs, we hypothesized that this system would be sensitive to exposure from toxins that accumulate in the aquatic environment. The toxicant bisphenol-A (BPA), a component of many plastics, is prevalent in U.S. watersheds. We
found that BPA exposure did not influence lateral line development in zebrafish, but that BPA was toxic when applied acutely to mature
lateral line organs. BPA also reduced the regenerative potential of the lateral line, suggesting that BPA may affect both sensory hair cells
and surrounding supporting cells. As fish in urban areas are often exposed to high levels of road pollution that enters waterways during
storm events, we also examined the effect of stormwater runoff on the lateral line. In contrast to BPA, stormwater was not acutely toxic
to the mature lateral line, but it had a detrimental effect on lateral line development in both zebrafish and salmonids. Collectively, our
results demonstrate that aquatic pollutants can negatively impact fish mechanosensory systems, perhaps leading to decreased survival.
12:00
4aAB12. A personal history of fish bioacoustics. Arthur N. Popper (Univ. of Maryland, Biology/Psych. Bldg., College Park, MD
20742, apopper@umd.edu) and Anthony D. Hawkins (Loughine Marine Res., Aberdeen, United Kingdom)
Together, we represent more than 100 years of research on various aspects of fish bioacoustics, starting with basic work on hearing,
and continuing today in our individual and joint work on effects of man-made sound on aquatic animals. Over the course of our careers,
we have had the honor of knowing, and in some cases working with, many of the true pioneers in our field—people whose contributions
were fundamental to fish bioacoustics, and whose work should be read and known by everyone who is pursuing marine bioacoustics
now and those who enter the field in the future. During the course of this talk we will briefly mention the contributions of notables one
or both of us has personally known including (in alphabetical order): John Blaxter, Horst Bleckmann, Rob Buwalda, Colin Chapman,
Sheryl Coombs, Eric Denton, Sven Dijkgraaf, Andreas Elepfandt, Per Enger, Richard Fay, John Gray, Donald Griffin, Gerard Harris,
Kathleen Horner, Cathy McCormick, James Moulton, Arthur A. Myrberg Jr., Antares Parvulescu, Christopher Platt, Olav Sand, Arie
Schuijf, William Tavolga, and Willem van Bergeijk. We dedicate this presentation to our close friend and collaborator, and one of the
great pioneers of studies of fish hearing, Richard R. Fay.
WEDNESDAY MORNING, 28 JUNE 2017
BALLROOM B, 7:55 A.M. TO 12:20 P.M.
Session 4aBA
Biomedical Acoustics, Physical Acoustics, and Underwater Acoustics: Session in Honor of Edwin Carstensen I
Gail ter Haar, Cochair
Physics Dept., Institute of Cancer Research, Royal Marsden Hospital, Sutton SM2 5PT, United Kingdom
David T. Blackstock, Cochair
Applied Research Labs, University of Texas at Austin, Appl. Res. Labs UT Austin, PO Box 8029, Austin, TX 78713-8029
Chair’s Introduction—7:55
Invited Papers
8:00
4aBA1. Edwin L. Carstensen, A scientist’s life. David T. Blackstock (Appl. Res. Labs & Dept. of Mech. Eng., Univ. of Texas at Austin,
PO Box 8029, Austin, TX 78713-8029, dtb@austin.utexas.edu)
Born in 1919, Edwin L. Carstensen grew up in Oakdale, Nebraska. Thinking to be a teacher or preacher, Ed attended Nebraska State
Teachers College, 1938-1941. An interest in physics and music led Ed in fall 1941 to Case School of Applied Science. But World War
II interrupted; when his professor Robert Shankland was tapped to head the newly formed Underwater Sound Reference Laboratory in
spring 1941, Ed followed and spent the war and early postwar years at USRL’s lab in Orlando, Florida. Thus Ed’s first serious involvement in acoustics was in underwater sound. Several publications emerged after the war; the most significant was the Carstensen-Foldy
study of sound propagation through bubbly water (JASA 19, 481-501 (1947)). After PhD studies at the University of Pennsylvania, 1948-55, where Herman Schwan introduced Ed to biophysics, and five years at the Army Biological Laboratory, Ft. Detrick, Maryland, Ed took a faculty position in Electrical Engineering at the University of Rochester, where he remained for the rest of his career. His work at Rochester was in biomedical acoustics, principally biomedical ultrasound. He made major contributions to identifying the physical mechanisms by which ultrasound affects tissue, to lithotripsy, and to tissue strain as an ultrasound bioeffect.
8:20
4aBA2. Edwin L. Carstensen, the father. Laura L. Carstensen (Psych., Stanford Univ., Bldg. 420, Jordan Hall, Stanford, CA 94305,
laura.carstensen@stanford.edu)
Edwin Lorenz Carstensen was son to Opal and August, husband to Pam, father to five, grandfather to seven, and great-grandfather to
two. Each and every one of us, along with many more extended family members, feels extraordinarily privileged to be associated with
him. We fully appreciate that Ed Carstensen was a giant in his scientific discipline. Know also that he achieved all that he did while
showing unfailing grace and full dedication to a devoted, and challenging, family. In the relatively short time since he died, we have
learned about scores of qualities you admired in the scientist. In this talk, I hope to share with you the many similarities between the scientist and the father. He was invariably kind and gently critical in ways that clarified our thinking. He was insatiably curious about
everything in which we showed an interest, and he encouraged us to be better people simply by believing that we were. I asked him
once how he possibly managed to accomplish all that he had professionally with five children running around the house. He smiled and
said, “Oh, about 50 papers never made it out the door.” We’ll need to count on you all for those.
8:40
4aBA3. Ed Carstensen—Elucidating the physical mechanisms for biological effects of ultrasound. Diane Dalecki (Dept. of Biomedical Eng., and the Rochester Ctr. for Biomedical Ultrasound, Univ. of Rochester, 210 Goergen Hall, P.O. Box 270168, Rochester,
NY 14627, dalecki@bme.rochester.edu)
Ed Carstensen was a pioneer in the field of biomedical ultrasound, and his ground-breaking research set the foundation for our understanding of physical mechanisms for the interaction of ultrasound with biological tissues and systems. In this presentation, I share my
perspectives on Ed’s research advances from my viewpoint as his past student and research colleague for many years at the University
of Rochester. Trained as a physicist, Ed focused on elucidating the physical mechanisms for biological effects of ultrasound. His early
work identified ultrasound absorption mechanisms in blood, and later work studied effects of heating, cavitation, radiation force, and
shear force on biological tissues. Ed’s research provided a basis for safety guidance of diagnostic ultrasound imaging, and for the development of new ultrasound therapies that exploit the physical interaction of ultrasound fields with tissues. Ed was a wonderful teacher
and mentor who generously shared his time and insight. His leadership in the development of biomedical engineering and the founding
of the Rochester Center for Biomedical Ultrasound have had lasting impact. Through his research and personal interactions, Ed positively influenced many in the field nationally and internationally, and set paths for the advancement of new applications of biomedical
ultrasound.
9:00
4aBA4. From A-mode to virtual beam: 50 years of diagnostic ultrasound. Frederick W. Kremkau (Ctr. for Appl. Learning, Wake
Forest Univ. School of Medicine, Winston-Salem, NC 27157-1039, fkremkau@wfubmc.edu)
In the 50 years from 1967 (the year that, as a student, I met Edwin Carstensen) to 2017, diagnostic ultrasound has progressed from A
mode to M mode to B&W static 2D to gray-scale static 2D to real-time 2D to static 3D to real-time 3D (“4D”). Along the way, other
modes and features appeared including analog-to-digital, color-, power- and spectral-Doppler, coded excitation, harmonic imaging, panoramic imaging, spatial compounding, contrast agents, elastography and virtual beam forming. Contrast agents have been in wide use
globally for years, although limited in the U.S. by the FDA to cardiac application until recently. Contrast agents operate on the ability of
gas bubbles to strongly reflect ultrasound. In a 1968 study conducted by Professors Carstensen and Gramiak and me, it was shown that
the contrast effect was due to bubbles in the radiologic contrast agent. As an electrical engineering graduate student at the time, I could
not know that this was the launching of a 50-year medical academic career in sonography. Prof. Carstensen was instrumental in guiding
me into this exciting and rewarding career. This presentation concludes with a comparison of contemporary physical and virtual beam
formation and its impact on image quality.
9:20
4aBA5. Professor Ed Carstensen—A personal University of Rochester perspective. Robert M. Lerner (Diagnostic Radiology, Rochester General Hospital, 1425 Portland Ave., Rochester, NY 14621, robbymdphd@yahoo.com)
Many contributions to biomedical ultrasound pioneered at the University of Rochester may be traced either directly or indirectly to
Ed Carstensen. Armed with a strong scientific background in physics, a tremendous ability to get along with people, and great insight into
what was needed to accomplish scientific goals, he was able to help attract great faculty to the University of Rochester across many disciplines, all with interests in ultrasound, to add to the great talent pool already there. Eventually, he formalized the expertise in the Rochester ultrasound community by founding the Rochester Center for Biomedical Ultrasound. This paper provides examples of how
Professor Carstensen was able to teach, inspire and lead individuals to seek answers to important questions in the biomedical sciences.
His easygoing manner did not detract from his rigorous scientific standards. His ability to facilitate collaboration among physicians, bioscientists and engineers contributed greatly to the phenomenal growth of biomedical engineering and ultrasound at Rochester. Two of
the author’s research projects greatly influenced by Professor Carstensen are discussed: (1) the lack of frequency dependence for thermal
lesion production in biological tissue by focused ultrasound, and (2) sonoelasticity research which has led to shear wave quantitation
and imaging with ultrasound and MRI.
3790
J. Acoust. Soc. Am., Vol. 141, No. 5, Pt. 2, May 2017
Acoustics ’17 Boston
3790
9:40
4aBA6. Cavitation nucleation in medical ultrasound. Lawrence Crum, Michael R. Bailey (Appl. Phys. Lab, Ctr. for Industrial and
Medical Ultrasound, Univ. of Washington, 1013 NE 40th St., Seattle, WA 98105, lac@apl.washington.edu), Oleg Sapozhnikov (Ctr. for
Industrial and Medical Ultrasound, Univ. of Washington, Moscow, Russian Federation), and Julianna C. Simon (Ctr. for Industrial and
Medical Ultrasound, Univ. of Washington, University Park, PA)
In 1980, Ed Carstensen coauthored the first of several articles on the effects of ultrasound on Drosophila [Child et al., Ultrasound in
Med. & Biol., 6, 127-130 (1980)]. As diagnostic ultrasound became an increasingly used imaging modality, it was important to determine
whether there were any bioeffects of the relatively high pressure amplitudes, but relatively low time-averaged intensities, used in these systems. Ed and his colleagues discovered that Drosophila larvae and eggs had air channels within them, and that these channels acted as cavitation nuclei. When it was determined that there were few preexisting gas nuclei present in human tissue, it was assumed that diagnostic
ultrasound devices were safe for human use. More recently, therapeutic ultrasound has gained considerable support from the clinical
community, and High Intensity Focused Ultrasound (HIFU) systems have been approved for clinical use. Furthermore, ultrasound contrast agents—stabilized gas bubbles—are also in common use. Accordingly, cavitation can be very important in this application of medical ultrasound. Recently, we have examined the source of potential gas nuclei that may give rise to cavitation inception and will report
on these studies along with a discussion of Ed’s earlier contributions.
10:00–10:20 Break
10:20
4aBA7. Ed Carstensen and the recognition of nonlinear acoustics in biomedical ultrasound. Thomas G. Muir (Appl. Res. Laboratories, Univ. of Texas at Austin, P.O. Box 8029, Austin, TX 78713, muir@arlut.utexas.edu)
Nonlinear acoustics in biomedical ultrasound was not well recognized, nor utilized in the era leading up to the 1980s. The discipline
was experiencing rapid growth and experimentation, but without consideration of nonlinear effects, even though they were significant.
Ed Carstensen was one of the first to become suspicious of the “linear assumption,” mostly in his own experiments. In 1979, he contacted me and asked for help in making the community aware of nonlinear acoustics at biomedical frequencies and intensities. The two
of us, with the help of David Blackstock and Ed’s graduate students, W.K. Law and N.D. McKay, set out to produce a pair of tutorial
manuscripts, to call the community’s attention to acoustic nonlinearity. These were published in the Journal of Ultrasound in Medicine
and Biology, Vol. 6, pp. 345-357 and 359-368 (1980). The first was a description of pertinent nonlinear acoustic theory, as then used, and
the second was an experimental demonstration of nonlinear effects, easily recognizable in the laboratory. The present paper recounts the
situation at the time (37 years ago), which reached a turning point once nonlinear acoustics was recognized. Existing perplexities
were resolved and new paths of research and development were identified. Some discussion of heretofore unappreciated individuals
and history in nonlinear acoustics is also presented. [Work supported by Applied Research Laboratories, The University of Texas at
Austin.]
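A back-of-the-envelope calculation helps explain why the "linear assumption" failed at biomedical intensities. The sketch below (not from the abstract; nominal water properties and a hypothetical 1 MPa, 3 MHz plane-wave source are assumed) evaluates the lossless plane-wave shock formation distance x = rho0*c0^3/(beta*omega*p0):

```python
import math

def shock_formation_distance(p0, f, rho0=1000.0, c0=1500.0, beta=3.5):
    """Lossless plane-wave shock formation distance x = rho0*c0**3/(beta*omega*p0),
    with water-like default properties (beta ~ 3.5 for water)."""
    omega = 2.0 * math.pi * f
    return rho0 * c0**3 / (beta * omega * p0)

# Hypothetical source: 1 MPa amplitude at 3 MHz in water
x = shock_formation_distance(p0=1.0e6, f=3.0e6)
print(f"{x * 100:.1f} cm")  # prints 5.1 cm
```

A waveform of modest diagnostic amplitude steepens into a shock within centimetres, well within imaging depths, which is the scale of effect the tutorial papers called attention to.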
10:40
4aBA8. Carstensen’s contributions to shear stress and strain in tissues. Kevin J. Parker (Dept. of Elec. & Comput. Eng., Univ. of
Rochester, Hopeman Eng. Bldg. 203, PO Box 270126, Rochester, NY 14627-0126, kevin.parker@rochester.edu)
Some of Edwin Carstensen’s final papers considered elastographic shear waves, which had opened up a new set of questions regarding stress and strain in tissues. Carstensen, from his commanding perspective overlooking nearly 70 years of fields and waves in tissues,
could uniquely make the following statement: “At present, no consensus group has undertaken a systematic evaluation of the biological
effects of low-frequency strains…Thanks to developments in the field of elastography over the last two decades, we now have the tools
needed to measure low-frequency strains in tissues directly…Despite the fundamental importance of strain in bioeffects, no bioeffects
investigators to our knowledge have taken advantage of these techniques. In fact, strain is rarely mentioned in the bioeffects literature”
(Carstensen et al., “Biological effects of low-frequency shear strain: physical descriptors,” Ultrasound Med. Biol. 42(1), 1-15, 2016).
Indeed, with this topic, his “voyage of exploration and illumination” had truly come full circle, returning to some earlier questions but
under new circumstances. We explore the deep issues Carstensen uncovered, including special forms of shear waves, the implications of
Oestreicher’s work, tissue viscoelastic response to shear stresses, and the inherent puzzles of linear hysteresis models.
11:00
4aBA9. Edwin Carstensen’s unique perspective on biological effects of acoustically-excited bubbles. Sheryl Gracewski (Mech. and
Biomed. Eng., Univ. of Rochester, Rochester, NY 14627, sheryl.gracewski@rochester.edu)
Edwin Carstensen made significant contributions to the understanding of the interactions between ultrasound and biological tissues.
He was especially interested in how acoustically-excited bubbles interact with biological systems. In his experiments, sources of these in
vivo bubbles included gas in respiratory tubules of fruit fly larvae, ultrasound contrast agents, and gas in murine lung and intestine.
While much of Ed’s work was experimental, he had keen insight and a unique perspective for understanding the mechanisms of the bioeffects. He realized that the shear strains in the vicinity of an oscillating bubble could be much greater than they would be if the bubble
were not present. Not only was Ed a remarkable scientist, he was also a great mentor. When I first arrived at the University of Rochester,
Ed introduced me to the field of biomedical ultrasound and the many outstanding researchers working in the area. It has been a great
pleasure and rewarding experience to collaborate with Edwin Carstensen and his student, then close colleague, Diane Dalecki over the
last 30 years. This paper will present some of our collaborative work on investigating the response of acoustically-excited bubbles and
the stresses and strains induced in the surrounding media.
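As a minimal quantitative aside (not drawn from the abstract), the linearized Minnaert resonance indicates why micron-scale gas bodies such as contrast agents respond so strongly at diagnostic frequencies; the 3 µm radius below is a hypothetical example value:

```python
import math

def minnaert_frequency(radius, p0=101325.0, gamma=1.4, rho=1000.0):
    """Minnaert resonance of a gas bubble in liquid, neglecting surface
    tension and viscosity: f0 = sqrt(3*gamma*p0/rho) / (2*pi*R)."""
    return math.sqrt(3.0 * gamma * p0 / rho) / (2.0 * math.pi * radius)

f0 = minnaert_frequency(3.0e-6)  # hypothetical 3 um bubble at 1 atm in water
print(f"{f0 / 1e6:.2f} MHz")     # prints 1.09 MHz
```

Bubbles of contrast-agent size therefore resonate in the low-MHz band, where diagnostic and therapeutic fields drive them efficiently.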
11:20
4aBA10. Bioeffects of microsecond pulses of ultrasound—Launching a new era in diagnostic ultrasound safety. Jeffrey B. Fowlkes
(Radiology, Univ. of Michigan, 3226C Medical Sci. Bldg. I, 1301 Catherine St., Ann Arbor, MI 48109-5667, fowlkes@umich.edu)
Over 30 years ago, a provocative letter to the editor [Carstensen, E. L. & Flynn, H. G. The potential for transient cavitation with
microsecond pulses of ultrasound. Ultrasound Med. Biol. 8, L720-L724, doi:10.1016/0301-5629(82)90134-X (1982)] launched a new
era in bioeffects and safety research for medical ultrasound. This letter questioned the presumed safety of diagnostic ultrasound and the
result has been an ongoing intellectual effort that has influenced the research and regulatory environment throughout the world. Based
on both an experimental observation and a theoretical framework of bubble dynamics, the fundamental potential for cavitation from
short pulses of ultrasound, as used in diagnostic ultrasound, and its associated bioeffects in the presence of appropriate nuclei was
brought into clear focus and created a driving force and call to action by the academic community. Many young researchers in this field
started their careers investigating this early proposal that would shape decades of ultrasound investigation. This presentation will review
the origins of this work and some of the influences it has had on bioeffects research and the safety and regulatory aspects found today in
diagnostic ultrasound.
11:40
4aBA11. The role of cavitation in vascular occlusion. Gail ter Haar (Phys., Inst. of Cancer Res., Phys. Dept., Inst. of Cancer Research: Royal Marsden Hospital, Sutton, Surrey SM2 5PT, United Kingdom, gail.terhaar@icr.ac.uk), Ian Rivens, John Civale (Phys., Inst. of
Cancer Res., London, United Kingdom), Caroline Shaw, Dino Giussani (Cambridge Univ., Cambridge, United Kingdom), and Christoph
Lees (Imperial College, London, United Kingdom)
Ed Carstensen’s papers on the effects of ultrasound on plants were amongst the first that attracted me to study ultrasound bio-effects.
It was little surprise to me that, when I first started looking at HIFU, his name was on many of the seminal papers in this area. And now,
his papers on cavitation and vasculature inform our current research. Ed’s influence on bio-effects research stretches far beyond the borders of the USA. Vascular occlusion has the potential to treat life-threatening conditions including twin-twin transfusion syndrome
(occurring in fetuses that share a placenta). Sheep placental vessels have been used to develop HIFU treatment guidance, delivery
(1.66 MHz) and monitoring. Ultrasound imaging guidance/flow monitoring, with simultaneous acoustic cavitation detection in 82 targets
in 14 pregnant sheep resulted in flow occlusion in 90%. Adverse events included skin erythema (18/82) and burns (10/82), but all
resolved over 21 days. Analysis of acoustic cavitation required low pass (software) filtering and careful choice of broadband integration
range (0.1 to 1.1 MHz). Of the successful treatments, 9% displayed detectable broadband emissions, 32% half-harmonic emissions, and 1% drive-voltage
fluctuations. In conclusion, treatment is possible, side effects are manageable, and acoustic cavitation appears not to be essential for successful occlusion.
12:00
4aBA12. Defining ultrasound for bio-effects at all frequencies. Francis Duck (Phys. Dept., Univ. of Bath, Bath BA2 7AY, United
Kingdom, f.duck@bath.ac.uk)
Ed Carstensen made pioneering contributions to our understanding of the interaction between ultrasound and living bodies. His
observations have been applied to its safe medical use. As human exposure to ultrasound grows, in medical, environmental and industrial
contexts, a need is arising for frequency-specific interpretation of bio-effects observations. The International Commission on Non-Ionising Radiation Protection is exploring this issue, which has given rise, for example, to naïve interpretation of low-frequency ultrasound
studies, and incorrect application of medical regulations for MHz exposure. A proposal under discussion defines frequency bands for
ultrasound in a manner similar to UV radiation. The three proposed bands are: Band A, centered around 100 kHz, for which local forces
at gas-liquid interfaces, including cavitation, dominate biological effects; Band B, centered around 10 MHz, for which temperature rise
is the dominant bio-effect mechanism; and Band C, centered around 100 MHz, for which many biological effects result from acoustic
forces. The frequencies of band boundaries should be agreed by international consensus. Suggested values will be presented and justified. Such a banding structure could serve to improve clarity and avoid obfuscation during bio-effects discussion and to assist in the
establishment of improved guidelines and regulations.
WEDNESDAY MORNING, 28 JUNE 2017
ROOM 205, 8:00 A.M. TO 10:00 A.M.
Session 4aEAa
Engineering Acoustics and Physical Acoustics: Microelectromechanical Systems (MEMS)
Acoustic Sensors III
Vahid Naderyan, Cochair
Physics/National Center for Physical Acoustics, University of Mississippi, NCPA, 1 Coliseum Drive, University, MS 38677
Kheirollah Sepahvand, Cochair
Mechanical, Technical University of Munich, Boltzmannstraße 15, Garching bei Munich 85748, Germany
Robert D. White, Cochair
Mechanical Engineering, Tufts University, 200 College Ave., Medford, MA 02155
Invited Papers
8:00
4aEAa1. A MEMS microphone with repulsive sensing. Mehmet Ozdogan, Shahrzad Towfighian (Mech. Eng., Binghamton Univ.,
4400 Vestal Parkway East, Mech. Eng., Binghamton, NY 13902, sht@binghamton.edu), and Ronald Miles (Mech. Eng., Binghamton
Univ., Vestal, NY)
The most common types of MEMS microphones are based on the capacitive sensing principle because of their ease of integration
and their ability to detect low pressure fluctuations. One bottleneck in the design of conventional electrostatic MEMS microphones is
that the sensitivity is impaired by the pull-in effect in parallel-plate capacitors. The electrical sensitivity of the microphone is a linear
function of the bias voltage applied to the microphone. To increase the sensitivity and signal-to-noise ratio, the bias voltage should be
increased, but the bias voltage is severely limited by the pull-in voltage, at which the diaphragm collapses into the backplate. To address
the sensitivity issue in MEMS microphones, we devised a new type of capacitive sensor that creates a repulsive force rather than an
attractive force, thereby completely avoiding the pull-in effect. The pull-in voltage has constrained the performance of capacitive microphones since their invention by Edward C. Wente in 1916. The ability to avoid pull-in will enable microphone designs with more compliant diaphragms and will result in significantly higher sensitivity, higher resolution, less noise and flatter frequency response. We will
demonstrate our simulation results on the dynamic and acoustic responses of the MEMS microphone and present our experimental
results for validation.
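The pull-in limit described above follows from the classic parallel-plate electrostatic instability. A minimal sketch (the stiffness, gap, and plate-area values are hypothetical illustration values, not taken from the paper):

```python
import math

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def pull_in_voltage(k, gap, area):
    """Classic parallel-plate pull-in voltage V_PI = sqrt(8*k*g0**3/(27*eps0*A)).
    The attractive-force instability occurs once the movable plate has
    travelled one third of the initial gap."""
    return math.sqrt(8.0 * k * gap**3 / (27.0 * EPS0 * area))

# Hypothetical diaphragm: 200 N/m stiffness, 4 um gap, 0.25 mm^2 plate
v_pi = pull_in_voltage(k=200.0, gap=4.0e-6, area=2.5e-7)
print(f"{v_pi:.1f} V")  # prints 41.4 V
```

In an attractive design the bias, and hence the sensitivity, must stay safely below this value; a repulsive-force sensor removes that ceiling.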
8:20
4aEAa2. Miniature disposable phononic crystal biosensors. Ralf Lucklum (Inst. for Micro and Sensor Systems, Otto-von-Guericke-Univ. Magdeburg, P.O. Box 4120, Magdeburg 39016, Germany, ralf.lucklum@ovgu.de) and Frieder Lucklum (Inst. for Microsensors,
Actuators and Systems, Univ. of Bremen, Bremen, Germany)
We introduce the concept and first realizations of a disposable phononic crystal sensor dedicated to point-of-care applications. Home
tests, screening at a physician’s office or decentralized hospital tests require a disposable element filled with liquid analytes like blood or
urine. The use of ultrasound offers the advantage that the speed of sound is sensitive to molecular shape, hydration, or the type of interaction potentials between mixture components. This provides reliable access to sensor specificity. The deviations, however, are rather
small and require measurement at high resolution. The phononic crystal sensor concept applies the idea of liquid cavity resonance which
can be realized with planar or cylindrical 2D or fully 3D phononic crystals containing the analyte in at least one element. High-Q resonance virtually extends the interaction path of the probing acoustic signal with the analyte. The specific challenge of a miniature disposable phononic crystal liquid cavity resonator is two-fold: First to find designs with high concentration of acoustic energy of a
longitudinal mode in the cavity, and second to utilize a fabrication technology that meets the requirements in terms of materials, geometry, and price. We use numerical models for design optimization and investigate high-resolution 3D printing as a rapid, low-cost fabrication method.
8:40
4aEAa3. Comparisons of the performance of commercially-available hearing aid microphones to that of the Binghamton Ormia-inspired gradient microphone. Ronald Miles (SUNY Binghamton, Dept. of Mech. Eng., Vestal, NY 13850, miles@binghamton.edu)
Comparisons are presented of the performance of a MEMS differential microphone inspired by the ears of the fly Ormia ochracea
with that of commercially available high-performance microphones. The results presented here compare the frequency response, 1/3rd
octave band noise floor, and directionality of the Ormia microphone with what can be achieved with state-of-the-art commercially available miniature microphones. The Ormia microphone is found to provide significantly better performance than the existing microphones
in each response category.
9:00
4aEAa4. A biologically inspired piezoelectric microphone. Neal A. Hall, Michael Kuntzman, and Donghwan Kim (Elec. and Comput.
Eng., Univ. of Texas, Austin, 10100 Burnet Rd., Bldg. 160 Rm. 1.108, Austin, TX 78702, nahall@mail.utexas.edu)
The parasitoid fly Ormia ochracea has the remarkable ability to locate crickets using audible sound. This ability is, in fact, remarkable as the fly’s hearing mechanism spans only 1.5 mm, which is 50 times smaller than the wavelength of sound emitted by the cricket.
The hearing mechanism is, for all practical purposes, a point in space with no significant interaural time or level differences to draw
from. It has been discovered that the fly’s hearing organ utilizes multiple vibration modes to amplify interaural time and level differences
(see Miles et al., J. Acoust. Soc. Am. 98 (6), December 1995). Here, we present a fully integrated mimic of the Ormia’s hearing mechanism capable of replicating the remarkable sound localization ability of the special fly. A silicon-micromachined prototype is presented
which uses multiple piezoelectric sensing ports to simultaneously transduce two orthogonal vibration modes of the sensing structure,
thereby enabling simultaneous measurement of sound pressure and pressure gradient.
Contributed Papers
9:20
4aEAa5. New Ormia-inspired directional microelectromechanical systems microphone operating in a low-frequency band. Yansheng Zhang, Ralf Bauer, James F. Windmill, Deepak Uttamchandani, and Joseph Jackson (Electron. and Elec. Eng., Univ. of Strathclyde, 99 George St., Glasgow G1 1RD, United Kingdom, yansheng.zhang.101@strath.ac.uk)
Directional MEMS microphones inspired by the parasitoid fly Ormia ochracea have been studied since the discovery that the micro-scale tympanal structure of this female fly can amplify and locate narrow-band mating calls from its host. This presentation will concentrate on the first piezoelectric Ormia-inspired MEMS microphone that operates in a low range of frequency bands overlapping with human vocal frequencies, and as such is suitable for hearing aid applications. Including two plates performing as Ormia's two tympana, the entire region in motion in our microphone is about 3.2 mm × 1.42 mm × 10 µm. Compared to other piezoelectric Ormia-inspired designs, our design transfers the working frequency band from over 10 kHz to below 3 kHz due to its asymmetric structure and an S-type rotational cantilever. Furthermore, it provides a unidirectional response around two resonance frequencies below 3 kHz. The open-circuit acoustic response of the device is approximately 3.9 mV/Pa at 464 Hz, which is close to human vocal frequencies, with a maximum value of 9.9 mV/Pa at 2275 Hz, which is near the frequency region where the human auditory system is most sensitive. The new microphone, coupled with a custom-built preamplifier, has a noise floor of 10 µV/√Hz at 1 kHz.
9:40
4aEAa6. Thermodynamic investigation on the PMN-PT stoichiometry change during thermocycling. Hooman Sabarou (Florida Int. Univ., 10555 W Flagler St., EC3355, Miami, FL 33174), Dehua Huang (Navy Undersea Warfare Ctr., Newport, RI), and Yu Zhong (Florida Int. Univ., Miami, FL, yzhong@fiu.edu)
A new thermodynamic investigation on the (1-x)Pb(Mg1/3Nb2/3)O3-xPbTiO3 (PMN-PT, x = 0.25) single crystal has been carried out to understand the structural evolution during the thermocycling processes. The samples have been examined in three thermocycle regimes: 250-300 °C, 300-400 °C, and 400-600 °C, under air and argon atmospheres. XRD refinement after the thermocycling experiments identified the existence of two different rhombohedral phases (R3c and R3m), in addition to the retained cubic phase (Pm-3m), at room temperature. It also predicted the existence of the retained rhombohedral phase at high temperature. The results show for the first time that PMN-PT exhibits a unique weight-change phenomenon during the thermocycling processes, i.e., weight loss is observed in the first stage while weight gain is observed in the last stage. It is believed that this phenomenon is driven by the structural evolution and the formation of the retained rhombohedral and cubic phases, respectively. It is believed that the observed stoichiometry changes are closely related to the electric-fatigue phenomenon of piezoelectric materials.
WEDNESDAY MORNING, 28 JUNE 2017
ROOM 204, 8:20 A.M. TO 10:20 A.M.
Session 4aEAb
Engineering Acoustics: Engineering Acoustics Topics II
Lane P. Miller, Cochair
Acoustics, Pennsylvania State University, PO Box 134, Boalsburg, PA 16827
Hubert S. Hall, Cochair
Mechanical Engineering, The Catholic University of America, 620 Michigan Ave. NE, Washington, DC 20064
8:20
4aEAb1. Modeling of continuous acoustic field with sparse transducer arrays. Lane P. Miller (Graduate Program in Acoust., The Penn State Univ., PO Box 134, Boalsburg, PA 16827, lpm17@psu.edu), Stephen C. Thompson (Graduate Program in Acoust., The Penn State Univ., University Park, PA), and Andrew Dittberner (GN Hearing, Glenview, IL)
The overarching objective of this research is to explore the effectiveness of reproducing an acoustic field with a sparse array of transducers. This paper presents a comparison of the beam pattern formed by an arbitrarily acoustically excited rectangular aperture with that of a sparse array of transducers of approximately the same dimensions as the rectangular aperture. The accuracy of sparse arrays containing different numbers of transducer elements in different physical arrangements is investigated. The beam-forming accuracy at different frequencies, given as ratios of wavelength to aperture size, is also investigated. Conclusions determine the precision with which continuous sound-radiating apertures may be modeled by discrete representations, as well as the expected minimum number (and arrangement) of sources required for optimal sound field reproduction.
8:40
4aEAb2. A correction method for the two-microphone transfer function technique in the free field using numerical modeling. Hubert S. Hall (Ship Signatures, Naval Surface Warfare Ctr. Carderock, 9500 MacArthur Blvd., West Bethesda, MD 20817, 61hall@cua.edu), Joseph F. Vignola, John Judge, and Diego Turo (Mech. Eng., The Catholic Univ. of America, Washington, DC)
The two-microphone transfer function technique for measuring absorption coefficient in a free field has remained unchanged since its development in the 1980s. The free-field technique has remained scarcely used due to usage restrictions caused by sound-field contributions from diffraction at the test sample edge. Currently, the technique is only valid for instances where field contributions from edge diffraction are sufficiently minimized. This research uses acoustic numerical modeling to study the effects of error sources on the technique. Numerical models have been developed and used to quantify the effects of “image source deviation” and edge diffraction on the implementation of the free-field technique. Each error source is quantified independently. Updated guidance on the usage restrictions of the free-field technique is provided. Additionally, an improvement to the free-field technique using a correction method is proposed. An experimental validation of the correction method was performed. The correction method showed improvement over the current two-microphone free-field technique at higher frequencies (> 800 Hz) for samples 9” and larger, as long as the nearest microphone location is no more than 16.7% of the sample width.
9:00
4aEAb3. Two-port network theory applied to multi-body scattering. Randall P. Williams and Neal A. Hall (Elec. and Comput. Eng., The Univ. of Texas at Austin, 10100 Burnet Rd., Bldg. 160, Rm. 1.108, Austin, TX 78758, randy.williams@utexas.edu)
We have previously shown how Thévenin’s theorem may be used to help solve problems in linear acoustic scattering from a mobile body, by forming the solution as a superposition of the field scattered from the body when held immobile and the solution for radiation from the body in an otherwise quiescent field. For problems involving acoustic scattering from multiple mobile bodies, the basic approach can be extended by using the multi-port network formalism, commonly used in engineering circuit analysis, to decompose the problem into a set of simpler problems. In this presentation we will first review two-port network theory and then illustrate the approach using the problem of scattering from a pair of mobile, rigid spheres in an ideal plane progressive wave. In addition to solving for the velocities of the spheres, the resultant pressure field will also be determined. Finally, extension to higher-order systems will be discussed.
9:20
4aEAb4. Estimation of coupling loss factors employed in the statistical energy analysis of kitchen appliances. Roberto Zárate (Mech., Universidad Nacional Autónoma de México, José María Arévalo, San Miguel de Allende, Guanajuato 37730, Mexico, rooo_zarate@hotmail.com), Marcelo López (Mech., Universidad Nacional Autónoma de México, Juriquilla, Querétaro, Mexico), and Martín Ortega (Mabe, Querétaro, Querétaro, Mexico)
Statistical energy analysis (SEA) continues to be an important framework of study that is helping appliance manufacturers design quieter machines, particularly as household machinery has moved into the living space with fashionable open kitchens and designs for small apartments. Precise SEA models depend on good estimates of coupling loss factors (CLFs). There are several analytical, numerical, and experimental approaches reported in the literature to help compute these CLFs; published results show good agreement when data are correlated with experimental measurements. Accurate CLF estimates depend on many factors, among them the geometry of structural components, the type of structure-to-structure coupling, material composition, the type and number of cavities, the quality of the appliance installation, and the noise-source frequency range. This paper presents a survey of different methods that have been used successfully for CLF estimation. The objective is to help the new-product department in the task of defining a procedure to accurately model cabinets designed for domestic machinery. The authors lay out a procedure that will be used for CLF calculation of new equipment currently being developed by an appliance manufacturer.
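The role the CLFs play in an SEA model can be seen in the steady-state power balance for two coupled subsystems, omega*((eta_i + eta_ij)*E_i - eta_ji*E_j) = P_i,in. The loss-factor and input-power values below are illustrative assumptions, not from the paper:

```python
import numpy as np

omega = 2 * np.pi * 1000.0   # band-centre frequency, rad/s
eta1, eta2 = 0.01, 0.02      # damping loss factors (assumed)
eta12, eta21 = 0.005, 0.003  # coupling loss factors, the CLFs to be estimated
P_in = np.array([1.0, 0.0])  # 1 W injected into subsystem 1 only

# omega * [[eta1+eta12, -eta21], [-eta12, eta2+eta21]] @ [E1, E2] = P_in
A = omega * np.array([[eta1 + eta12, -eta21],
                      [-eta12, eta2 + eta21]])
E1, E2 = np.linalg.solve(A, P_in)
print(E1 > E2 > 0)  # prints True: energy flows from the driven subsystem
```

Because the predicted subsystem energies depend directly on the CLFs, errors in the estimated CLFs propagate straight into the noise levels an SEA model predicts, which is why a reliable estimation procedure matters.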
Contributed Papers
9:40
4aEAb5. Extension of the phase and amplitude gradient estimation
method for acoustic intensity to narrowband sources. Kelli Succo, Scott
D. Sommerfeldt, Kent L. Gee, and Tracianne B. Neilsen (Dept. of Phys. and
Astronomy, Brigham Young Univ., N283 ESC, Provo, UT 84602, kelli.fredrickson7@gmail.com)
The phase and amplitude gradient estimation (PAGE) method [D. C. Thomas
et al., J. Acoust. Soc. Am. 137, 3366-3376 (2015)] has proven
successful in improving the accuracy of measured energy quantities over the
traditional p-p method in several applications. For example, the PAGE
method has successfully increased the bandwidth over which magnitude and
phase calculations are accurate for broadband sources with smoothly varying
phase. This is partially accomplished by unwrapping the phase relationship
in order to obtain valid phase information above the spatial Nyquist
frequency. However, narrowband sources may not have sufficient coherent
bandwidth information for a phase unwrapping algorithm to unwrap properly.
To test the limits of the PAGE method on narrowband sources, sine
waves, sawtooth waves, and bandlimited white noise have been used in various
scenarios. In one-dimensional tests of these signals, the PAGE method
provides correct magnitude and direction for frequencies up to the spatial
Nyquist frequency, which represents an extended bandwidth over the p-p
method. Additional specific results for the different input signals are
presented. Also presented are the results of using low-level broadband noise
propagating in the same general direction as the source to provide sufficient
information for accurate phase unwrapping. [Work supported by NSF.]
10:00
4aEAb6. Normal incidence sound absorption of parallel absorber at
high sound pressure. Yan Kei Chiang and Yatsze Choy (Mech. Eng., The
Hong Kong Polytechnic Univ., Hung Hom, Kowloon, Hong Kong 999077,
Hong Kong, kidencyk@gmail.com)
A finite element model is developed to simulate the acoustic response of
a parallel absorber array at high sound intensity. The device consists of a
micro-perforated panel (MPP) and a rectangular backing cavity divided into
two sub-cavities of different depths. At high acoustic excitation, a
nonlinear impedance model is adopted instead of the traditional linear
model, so that the effects of the jets and vortex rings formed at the orifice
exits on the acoustic properties are taken into account. The normal incident
pressure is treated as a main variable of the nonlinear acoustic impedance
model. The performance of the MPP absorber array with different geometric
design parameters is studied. Based on the parallel absorption mechanism,
preliminary results show that the MPP absorber array provides good
absorption over a wider frequency range than a single MPP absorber. Also,
compared with results obtained in the linear regime, better absorption is
achieved by the MPP absorber array at high sound intensity due to the higher
acoustic resistance. The acoustic behavior of the MPP absorber array is also
studied experimentally under normal incidence. The predicted and measured
results show good agreement.
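The PAGE estimate of 4aEAb5 can be sketched for a single frequency bin of a two-microphone probe. This minimal version skips phase unwrapping (so it is valid only below the spatial Nyquist frequency), and all values are hypothetical:

```python
import numpy as np

rho, c = 1.21, 343.0   # air density [kg/m^3] and sound speed [m/s]

def page_intensity(p1, p2, d, f):
    """One-dimensional PAGE-style active intensity from two complex microphone
    spectra p1, p2 (single frequency bin) separated by distance d [m].
    Amplitude and phase gradients are handled separately, unlike the p-p method."""
    w = 2 * np.pi * f
    P = 0.5 * (abs(p1) + abs(p2))          # amplitude at the probe centre
    dphi = np.angle(p2 * np.conj(p1))      # phase difference, wrapped to (-pi, pi]
    return -P**2 * (dphi / d) / (w * rho)  # active intensity estimate [W/m^2]

# A plane wave travelling towards +x should give I = P^2 / (rho * c)
f, d = 500.0, 0.02
k = 2 * np.pi * f / c
I = page_intensity(1.0 + 0j, np.exp(-1j * k * d), d, f)
```

Above the spatial Nyquist frequency (k·d = π) the measured phase difference wraps, which is exactly where the unwrapping discussed in the abstract, and its breakdown for narrowband signals, comes in.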
WEDNESDAY MORNING, 28 JUNE 2017
ROOM 205, 10:15 A.M. TO 12:20 P.M.
Session 4aEAc
Engineering Acoustics: Micro-Perforates I
J. S. Bolton, Cochair
Ray W. Herrick Laboratories, School of Mechanical Engineering, Purdue University, Ray W. Herrick Laboratories,
177 S. Russell St., West Lafayette, IN 47907-2099
Mats Åbom, Cochair
The Marcus Wallenberg Laboratory, KTH-The Royal Inst of Technology, Teknikringen 8, Stockholm 10044, Sweden
Chair’s Introduction—10:15
Invited Papers
10:20
4aEAc1. Propagation of sound through a series of contraction micro-tubes. Changyong Jiang and Lixi Huang (HKU-ZIRI Lab for
Aerodynam. and Acoust., Dept. of Mech. Eng., The Univ. of Hong Kong, Haking Wong Bldg. Rm. 704, Hong Kong 00000, Hong
Kong, lixi@hku.hk)
Ordinary fibrous material is quantified mainly by its density, and its acoustic resistance is tightly coupled with other parameters, a feature that is not always desirable for every sound absorption task. This study investigates the acoustic properties of a structure consisting of
unit cells made from simple contraction tubes. A wave element method is used to simulate the properties of a unit cell, and an equivalent fluid
model is employed to calculate the performance of an absorber of finite thickness. The acoustic mass and resistance are controlled by the
contraction ratio and the diameter of the small tubes, respectively. In terms of the sound absorption spectra, it is shown that the contraction
ratio mainly controls the peak frequency of the absorption curve, while the tube diameter influences the bandwidth of absorption. The performance of the finite absorber is compared with the usual layer of porous material, and the structural parameters are conveniently optimized
for specified sound absorption tasks. A multiple layer of structures, each with a different set of parameters, may be used to achieve a very
broadband performance, and the simplicity of the unit cell design facilitates the multi-layer optimization process.
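A chain of contraction tubes of the kind described in 4aEAc1 can be sketched with a plane-wave transfer-matrix cascade. The geometry below is hypothetical, junction end corrections (part of the real acoustic mass) are neglected, and only a simple wide-tube visco-thermal loss is applied to the wavenumber:

```python
import numpy as np

rho, c, eta = 1.21, 343.0, 1.81e-5   # air density, sound speed, dynamic viscosity
gamma, Pr = 1.4, 0.71                # ratio of specific heats, Prandtl number

def segment_matrix(f, length, radius):
    """Plane-wave transfer matrix (p, volume velocity) of one tube segment,
    with a simple wide-tube visco-thermal correction to the wavenumber."""
    w = 2 * np.pi * f
    area = np.pi * radius**2
    # hedged loss term: only significant in the narrow (contraction) tubes
    loss = np.sqrt(eta / (2 * rho * w)) / radius * (1 + (gamma - 1) / np.sqrt(Pr))
    k = w / c * (1 + (1 - 1j) * loss)
    Zc = rho * c / area
    return np.array([[np.cos(k * length), 1j * Zc * np.sin(k * length)],
                     [1j * np.sin(k * length) / Zc, np.cos(k * length)]])

def absorption(f, cells, inlet_radius):
    """Normal-incidence absorption of a rigidly backed chain of
    (length, radius) segments, e.g. alternating wide and contracted tubes."""
    T = np.eye(2, dtype=complex)
    for length, radius in cells:
        T = T @ segment_matrix(f, length, radius)
    Zs = T[0, 0] / T[1, 0]             # rigid backing: volume velocity = 0 at the end
    Z0 = rho * c / (np.pi * inlet_radius**2)
    R = (Zs - Z0) / (Zs + Z0)
    return 1 - abs(R)**2

# one unit cell = wide tube + narrow contraction, repeated (assumed geometry)
cells = [(0.01, 5e-3), (0.01, 0.5e-3)] * 3
```

Stacking cells with different parameters, as the abstract suggests for broadband performance, just means cascading more matrices with different `(length, radius)` pairs.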
10:40
4aEAc2. Improving perforated plate modeling with porous media theory. Luc Jaouen, Fabien Chevillotte, and François-Xavier
Becot (Matelys, 7 rue des Maraîchers, Bât. B, Vaulx-en-Velin 69120, France, luc.jaouen@matelys.com)
Acoustical facings (including micro-perforated panels (MPP), woven and non-woven textiles, screens, etc.), which can be used in association with porous materials, have been widely studied, and a large number of models are described in the literature. This work presents a
comparison of these models, discussing in particular the expression of the corrections, formulated in their resistive and reactive parts, related to the pressure transition from the air inside
the perforations to the outer media (air or porous materials). Based on this analysis, it will
be shown how to improve D. Y. Maa's model using the pioneering works of V. A. Fok, V. S. Nesterov, and U. Ingard. Finally, the modeling
of acoustical facings using classical porous models will be discussed. It will be shown how these porous models can be used to account
for vibrations of the facings' skeletons or for flow through the perforations in a nonlinear regime.
Contributed Paper
11:00
4aEAc3. Characterization of micro-perforated panel at high sound
pressure levels using rigid frame porous models. Zacharie Laly, Noureddine Atalla (GAUS, Dept. of Mech. Eng., Universite de Sherbrooke,
2500 Boulevard de l'Universite, Sherbrooke, QC J1K 2R1, Canada,
Zacharie.Laly@usherbrooke.ca), and Sid-Ali Meslioui (Acoust., Pratt &
Whitney Canada, Longueuil, QC J4G 1A1, Canada)
An acoustic impedance model to predict the acoustic response of micro-perforated panels at high sound pressure levels is proposed using rigid frame
porous models. The micro-perforated panel is modeled following the Johnson-Allard approach with a frequency-dependent effective density.
The incident sound pressure on the surface of the perforations is considered
a main variable in the model, and the parameters of the equivalent fluid,
such as the tortuosity and the flow resistivity, are expressed as functions of
this incident pressure. The proposed model shows good agreement with
other existing nonlinear impedance models for sound pressure levels up to 150 dB.
Experimental measurements were performed on
several micro-perforated panels backed by air cavities using an impedance
tube equipped with a speaker capable of delivering sound pressure levels
up to 155 dB. A good correlation between theoretical and experimental
results is obtained. A micro-perforated panel backed by a porous
layer is also modeled and validated experimentally, using an equivalent tortuosity
of the micro-perforated panel that depends on the dynamic tortuosity of
the porous layer.
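The level dependence that such nonlinear models capture can be sketched with Maa's linear MPP impedance plus an assumed orifice-velocity-dependent resistance term (a common stand-in, not the authors' Johnson-Allard formulation); every parameter below is hypothetical:

```python
import numpy as np

# Hypothetical single-MPP parameters (not taken from the abstract)
rho, c, eta = 1.21, 343.0, 1.81e-5           # air density, sound speed, viscosity
d, t, sigma, D = 0.4e-3, 0.5e-3, 0.01, 0.05  # hole diameter, thickness, porosity, cavity depth [m]

def mpp_impedance(f, u=0.0):
    """Normalized MPP impedance: Maa's linear model plus a hedged
    resistance term proportional to the orifice velocity u."""
    w = 2 * np.pi * f
    k = d * np.sqrt(w * rho / (4 * eta))                 # perforate constant
    r_lin = 32 * eta * t / (sigma * rho * c * d**2) * (
        np.sqrt(1 + k**2 / 32) + np.sqrt(2) / 32 * k * d / t)
    x_m = w * t / (sigma * c) * (1 + 1 / np.sqrt(9 + k**2 / 2) + 0.85 * d / t)
    r_nl = (1 - sigma**2) / sigma * u / (2 * c)          # assumed nonlinear term
    return r_lin + r_nl + 1j * x_m

def absorption(f, p_inc=0.0, n_iter=40):
    """Normal-incidence absorption of the MPP + rigid-backed cavity,
    iterating on the orifice velocity for the nonlinear term."""
    w = 2 * np.pi * f
    z_cav = -1j / np.tan(w * D / c)                      # normalized cavity impedance
    u = 0.0
    for _ in range(n_iter):
        z = mpp_impedance(f, u) + z_cav
        v = 2 * p_inc / (rho * c * abs(1 + z))           # panel surface velocity
        u = 0.5 * u + 0.5 * v / sigma                    # relaxed orifice-velocity update
    z = mpp_impedance(f, u) + z_cav
    r, x = z.real, z.imag
    return 4 * r / ((1 + r)**2 + x**2)
```

At an incident pressure of 0.2 Pa the iteration stays essentially linear; at 200 Pa (roughly 140 dB) the extra resistance visibly shifts the absorption coefficient, mirroring the level dependence both this abstract and 4aEAb6 describe.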
Invited Papers
11:20
4aEAc4. A parametric study of flexible micro-perforated panels with a patch-impedance numerical model. Muttalip A. Temiz
(Mech. Eng., Eindhoven Univ. of Technol., Eindhoven 5600MB, Netherlands, muttalip.temiz@gmail.com), Jonathan Tournadre (PMA
Mech. Dept., KU Leuven, Leuven, Belgium), Ines Lopez Arteaga (Mech. Eng., Eindhoven Univ. of Technol., Eindhoven, Netherlands),
and Avraham Hirschberg (Appl. Phys., Eindhoven Univ. of Technol., Eindhoven, Netherlands)
The absorption characteristics of a single flexible micro-perforated panel with a back cavity are compared for different combinations
of physical parameters. Provided that the panel is a rigid plate, the acoustic properties of a micro-perforated panel are determined by the plate thickness, the distribution and diameter of the perforations, the porosity, the edge profile, and the back cavity depth.
Nevertheless, some applications involve a material choice or plate thickness that invalidates the rigidity assumption. In such
cases, additional peaks are observed in the absorption spectrum which cannot be explained by classical micro-perforated plate theory: the vibro-acoustic coupling of the flexible plate and the acoustic medium has to be taken into account. This is
done here by a thin shell model employing a patch-impedance approach, modeling each perforation separately. Using this model, absorption
coefficients are calculated for various combinations of plate thickness, perforation diameter, and perforation distribution. The
results of this study provide a better understanding of the influence of the design parameters of flexible micro-perforated panels on
sound absorption. The method is numerically efficient and can be used to optimize the acoustic absorption of such panels.
11:40
4aEAc5. Investigation of the impedance of a micro-perforated plate with two-sided grazing flow. Maaz Farooqui (KTH Royal Inst.
of Technol., MWL-AVE, KTH, Stockholm 100 44, Sweden, maazfarooqui@gmail.com), Tamer Elnady (Ain Shams Univ., Cairo,
Egypt), and Mats Åbom (KTH Royal Inst. of Technol., Stockholm, Sweden)
The acoustic impedance of perforates is mainly quantified by semi-empirical or fully empirical formulas, which depend on the experimental setup used and are specific to each type of tested sample. In this work, the acoustic impedance of a micro-perforated plate (MPP) with
two-sided low-Mach grazing flow is educed by a modified semi-analytical inverse technique. The inputs to this technique are complex
acoustic pressures measured at eight positions on the duct wall, upstream and downstream of the MPP section. Several measurements on the two-sided grazing flow rig are conducted, and trends in the impedance shift are observed. These trends quantify the
change in the resistance and reactance of an MPP subjected to two-sided grazing flow. An empirical formula based on these trends is also proposed.
This formula helps in approximating the behavior of MPPs in applications such as perforated cooling
fans, perforated guide vanes, and turbine nozzles. It was observed that, with increased flow velocity on either side of the duct, the resistance tends to increase while the reactance drops.
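The first step of such an eduction, decomposing the measured wall pressures into forward- and backward-travelling waves, can be sketched as a least-squares fit (no-flow approximation; with grazing flow the two directions have different wavenumbers):

```python
import numpy as np

def decompose(pressures, positions, f, c=343.0):
    """Least-squares fit of p(x) = A e^{-jkx} + B e^{+jkx} (e^{jwt} convention)
    to complex pressure spectra measured at several axial duct positions."""
    k = 2 * np.pi * f / c
    x = np.asarray(positions, dtype=float)
    E = np.column_stack([np.exp(-1j * k * x), np.exp(1j * k * x)])
    coeff, *_ = np.linalg.lstsq(E, np.asarray(pressures), rcond=None)
    return coeff   # [A, B]: downstream- and upstream-travelling amplitudes
```

With eight microphones on each side of the sample, as in the abstract, the same fit is simply over-determined, which is what makes the eduction robust to noise on individual channels.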
12:00
4aEAc6. Reducing the generation of discrete-tones by micro-perforating the back of an open shallow cavity under a subsonic
flow. Cedric Maury (Equipe Sons, Laboratoire de Mecanique et d'Acoustique - UPR CNRS 7051, 31, chemin Joseph Aiguier, Marseille cedex 20 13402, France, cedric.maury@centrale-marseille.fr), Teresa Bravo (Consejo Superior de Investigaciones Cientificas, Madrid, Spain), Daniel Mazzoni, and Muriel Amielh (Institut de Recherche sur les
Phénomènes Hors Equilibres (IRPHE), Marseille, France)
An experimental and numerical study has been carried out to investigate the effect of micro-perforating the back of an open shallow
cavity under a low-speed air flow, in order to reduce the amplitudes of the discrete tones due to the shear layer-cavity interaction. Open
shallow cavities with length-to-depth ratios of 10 and 17, flush-mounted on the test section of a low-speed wind tunnel with a mean flow velocity of 30 m/s, have been considered. The bottom plate of the cavity was either a plain or an unbacked micro-perforated panel (MPP). A
set of wall-pressure measurements over the bottom wall showed that the MPP was able to reduce the amplitude of the discrete tones by up to 8 dB, most noticeably beneath the upstream edge of the cavity within the recirculation bubble, where the acoustic components
dominate over the broadband components. However, the MPP may enhance the broadband components over and beyond the reattachment zone, suggesting that only the attenuation zone of the back wall be micro-perforated. These results were assessed against aeroacoustic numerical simulations performed in the time domain using the lattice Boltzmann method. The effect of the cavity length-to-depth ratio
on the damping induced by the micro-perforated treatment was also examined.
WEDNESDAY MORNING, 28 JUNE 2017
ROOM 200, 11:00 A.M. TO 12:00 NOON
Session 4aED
Education in Acoustics: Take 5’s
Jack Dostal, Chair
Physics, Wake Forest University, P.O. Box 7507, Winston-Salem, NC 27109
For a Take-Five session, no abstract is required. We invite you to bring your favorite acoustics teaching ideas. Choose from the following: short demonstrations, teaching devices, or videos. The intent is to share teaching ideas with your colleagues. If possible, bring a
brief, descriptive handout with enough copies for distribution. Spontaneous inspirations are also welcome. Sign up at the door for a
five-minute slot before the session begins. If you have more than one demo, sign up for two consecutive slots.
WEDNESDAY MORNING, 28 JUNE 2017
ROOM 200, 7:55 A.M. TO 10:40 A.M.
Session 4aMU
Musical Acoustics, Psychological and Physiological Acoustics: Musical Instrument Performance,
Perception, and Psychophysics I
Edgar J. Berdahl, Cochair
Music, Louisiana State University, 102 New Music Building, Louisiana State University, Baton Rouge, LA 70803
Peter Rucz, Cochair
Dept. of Networked Systems and Services, Budapest University of Technology and Economics, 2 Magyar Tudósok körútja,
Budapest H1117, Hungary
Thomas Moore, Cochair
Department of Physics, Rollins College, 1000 Holt Ave, Winter Park, FL 32789
Claudia Fritz, Cochair
Institut Jean le Rond d’Alembert, UPMC, 4 place Jussieu, Paris 75005, France
Chair’s Introduction—7:55
Invited Paper
8:00
4aMU1. Concepts of timbre emerging from musician linguistic expressions. Charalampos Saitis and Stefan Weinzierl (Audio Commun. Group, Tech. Univ. Berlin, Einsteinufer 17c, Berlin 10587, Germany, charalampos.saitis@campus.tu-berlin.de)
The metaphorical nature of the lexicon used by musicians to describe timbral qualities of instruments shows that they are not familiar
with describing sound as a sensory experience in an acoustical terminology and share little knowledge about the perceptual dimensions
of sound. Instead, they conceptualize and communicate sound qualities through different sensory domains, for instance, a sound felt,
seen, or tasted as “velvety.” These metaphorical linguistic structures are central to the process of conceptualizing timbre by allowing
the musician to communicate subtle acoustic variations in terms of other, more commonly shared sensory experiences. Their psycholinguistic analysis can be considered as one way to study the underlying cognitive representations empirically. An online listening test
using short instrumental solo excerpts from recorded music was designed to obtain a rich corpus of free-format verbal descriptions of
violin, clarinet, piano, and guitar timbre from instrumentalists describing their own as well as other instruments. Through linguistic analysis, associated with psychological theories of perception and sensory categorization, the emerging instrument-dependent and -independent conceptual structures are extracted as a first step in translating the semantics of musician expressions into perceptually meaningful
descriptors of sound quality.
Contributed Papers
8:20
4aMU2. Acoustical coupling between two guitars, onset variation, and
timbre perception. Robert Mores (Design Media Information, Univ. of
Appl. Sci. Hamburg, Finkenau 35, 104, Hamburg 22081, Germany, robert.mores@haw-hamburg.de)
Acoustical coupling between simultaneously played musical instruments
has been observed at times, for instance, between adjacent organ pipes [F.
Trendelenburg et al., Akustische Zeitschrift 7-20 (1938)]. The observation
here concerns two guitars that reveal a different timbre when played simultaneously (A) in contrast to when played separately with the tracks being mixed
afterwards (B). Alternatively, the two guitars are played separately but each
with a simultaneous steady-state tone presented by a speaker while the frequency relates to the key (C). Subjects rated the music produced under the
paradigms A and C as being similar and music produced under the paradigms
A and B as being dissimilar, likewise for B and C. The differences were attributed to timbre. For analysis, a mechanical plucking mechanism replaces the
musician on the first guitar, whereas the second guitar is replaced by a
steady-state tone. While most of the partial tones remain unaffected, partial
tones above 3 kHz rise more sharply and reach higher levels during onset
when a simultaneous tone is present compared to when it is absent. Although
these partial tones decline rapidly and finally progress toward comparable levels with and without the simultaneous tone, the perceived timbre differs
noticeably.
8:40
4aMU3. Absolute pitch is disrupted by an auditory illusion. Diana
Deutsch, Miren Edelstein, and Trevor Henthorn (Dept. of Psych., Univ. of
California, San Diego, 9500 Gilman Dr. #0109, La Jolla, CA 92093-0109,
ddeutsch@ucsd.edu)
Absolute pitch (AP) is the rare ability to name a musical note in the absence of a reference note. Here we show that AP possessors sometimes
name notes incorrectly in accordance with an auditory illusion. AP possessors were presented with a test tone, which was followed by six intervening
tones, and then by a second test tone. The test tones were either identical in
pitch or they differed by a semitone. All tones were sine waves. The AP possessors were asked to ignore the intervening tones, and to name both the first
and the second test tones after hearing the full sequence. In one condition in
which the test tones differed, a tone that was identical in pitch to the second
test tone was inserted in the intervening sequence. For example, if the first
test tone was D and the second test tone was D#, the note D# was inserted in
the intervening sequence. In this condition, the AP possessors showed a significant tendency to misname the first test tone as having the same pitch as
the second test tone. This is the first study showing that AP possessors can
be induced to misname notes in certain contexts.
9:00
4aMU4. A day in the noisy life of a student musician. Kieren H. Smith,
Tracianne B. Neilsen, and Jeremy Grimshaw (Brigham Young Univ., Provo,
UT 84602, kierenhs@gmail.com)
In conservatories and universities across the world, students prepare to
be the next generation of performing professionals to entertain and enliven
worldwide audiences. They pack numerous musical activities into each
school day, from countless hours of personal practice to regular ensemble
rehearsals to lessons, performances, and other personal activities. Many
of these activities expose the musicians to significantly high sound levels
and potential threats to long-term hearing health. In order to understand the
daily noise experience of student musicians, music majors at Brigham Young
University were selected from various instrument categories to wear a Larson Davis noise dosimeter, and their noise levels were recorded for two
days. Each musician kept a log of the times different activities took place
as well as the locations in which they were performed. Overall daily noise
dosages and the contributions to the noise dosage from each separate activity
were calculated. Doses for each instrument type as well as each activity
type were compared to identify which instruments and activities contribute
most to noise overexposure.
9:20
4aMU5. Phantom partials in the piano sound and the role of the soundboard. Eric Rokni, Camille Adkison, and Thomas Moore (Dept. of Phys.,
Rollins College, Winter Park, FL 32789, erokni@rollins.edu)
The presence of phantom partials in the spectra of the piano sound is well
known, and for decades they have been attributed to forced longitudinal vibrations in the string. These vibrations are generally assumed to originate with the
stretching of the string, which produces frequency components at the sums,
differences, and second harmonics of the resonant frequencies of the transverse
string motion. Results of recent experiments indicate that phantom partials can
be produced in other parts of the piano as well as the string, and in some cases
these contributions to the sound can be significant. Measurements of the power
in multiple phantom partials produced simultaneously by steady-state excitation are providing insight into the process that creates them.
Invited Papers
9:40
4aMU6. Measurements and perceptions of interactions between musicians and cellos. Timothy Wofford, Claudia Fritz, Benoît
Fabre (Institut Jean le Rond d'Alembert, Univ. Pierre et Marie Curie, boîte 162, 4, Pl. Jussieu, Paris 75252 Cedex 05,
France, wofford@lam.jussieu.fr), and Jean-Marc Chouvel (Universite Paris-Sorbonne, Paris, France)
Measurement of physical properties alone has failed to predict how an instrument will be perceived by musicians. Physical properties
are only capable of characterizing an instrument as an object; they are not enough to characterize the suitability of the object as an
instrument for producing music by a particular musician. To evaluate an instrument, the musician must apply gestures to the instrument
and interpret the instrument's response to those gestures. Two musicians may use different gestures, receive different responses,
and thus have a different set of interactions upon which to base their evaluation. This could explain why musicians disagree over instrument evaluations. To study the relationships between gestures, responses, and the perception of interactions in the context of evaluating
cellos, we have put in place (1) a motion capture system for measuring the control parameters and (2) a set of piezoelectric sensors to
measure the vibrations of each string. Gestures and string vibrations were measured during the evaluation of two cellos by a few
cellists. These measurements are interpreted in the light of the perceptual evaluations.
10:00
4aMU7. The influence of cellist’s postural movements on their musical expressivity. Jocelyn Roze, Mitsuko Aramaki, Richard Kronland-Martinet, and Solvi Ystad (PRISM - CNRS, FRE 2006, Aix-Marseille Univ., PRISM FRE 2006, 31 Chemin Joseph Aiguier, Marseille 13009, France, roze@lma.cnrs-mrs.fr)
A particularly fascinating aspect of musical expressivity concerns the relationships between the musician's body and the acoustical signal features of the sounds produced by the instrument. Numerous studies have demonstrated this connection, referring to embodied musical cognition, not only through well-studied instrumental gestures directly responsible for the sound production, but also through the
use of more indirect and ingrained ancillary movements. Our current project aims to better understand the influence of these postural
adjustments on expressive musical interpretation for the cello. To this aim, we conducted an experiment involving several
professional cellists subjected to various kinds of postural constraints, and analysed the influence of these constraints on sound signal
features by comparing constrained and natural playing situations. The results reveal that the cellists' body movements play an important
role during performance. In fact, the torso and especially the head movements appear to ensure metric coherence with the bowing
gesture as well as expressive musical phrasing and rich timbre variations. Finally, we explored in more depth the harsh notes frequently occurring in the most constrained condition, and drew correlations between this timbral degradation and bowing and postural
features, with the aim of proposing new pedagogical tools for beginners.
Contributed Paper
10:20
4aMU8. Playing frequency of conical reed instruments. Jean Kergomard,
Philippe Guillemain, and Christophe Vergez (CNRS-LMA, 4 impasse Nikola
Tesla, CS 40006, Marseille 13453, France, kergomard@lma.cnrs-mrs.fr)
Several factors explain the difference between the resonance frequencies
of the resonator of reed instruments and their playing frequency: the flow rate
due to the reed displacement, the reed dynamics, the resonator inharmonicity,
and the temperature gradient. For conical instruments, the truncation of the
cone entails non-negligible inharmonicity effects. This is true even if the
inharmonicity is reduced by a proper choice of the mouthpiece volume. The
playing frequency was calculated by using a minimal, ab initio model of
self-sustained oscillations, and compared to analytical results. Furthermore,
the "reactive power rule" (Boutillon, 1989) provides a link between the pressure spectrum and inharmonicity. Compared to the natural frequency of the
complete cone, which is a well known approximation, the playing frequency
is slightly higher: typical discrepancies are 50 cents for short truncated cones
(higher notes) and 10 cents for long truncated cones (lower notes). This discrepancy, due to inharmonicity, depends on the excitation parameters, and the
influence of the second and third harmonics is found to be significant.
WEDNESDAY MORNING, 28 JUNE 2017
ROOM 202, 8:00 A.M. TO 12:20 P.M.
Session 4aNSa
Noise: Measuring, Modeling, and Managing Transportation Noise I
Matthew Kamrath, Cochair
Acoustics, Pennsylvania State University, 717 Shady Ridge Road, Hutchinson, MN 55350
Lisa Lavia, Cochair
Noise Abatement Society, 8 Nizells Avenue, Hove BN3 1PL, United Kingdom
Invited Papers
8:00
4aNSa1. Mitigating noise and traffic congestion through measuring, mapping, and reducing noise pollution. Tae Hong Park, Minjoon Yoo (Music, New York Univ., 35 West 4th St., Ste. 1077, New York, NY 10012, thp1@nyu.edu),
Charles Shamoon (Dept. of Environ. Protection, New York City, New York, NY), Christopher Dye (Watson Cloud Innovations, IBM,
San Diego, CA), Stacey Hodge, and Asheque Rahman (Dept. of Transportation, New York City, New York, NY)
Noise is a ubiquitous urban pollutant with serious health consequences. This so-called "forgotten pollutant" includes all
types of sounds, including vehicular noise, barking dogs, music, and human sounds. Mechanical noise, and especially traffic sound, is
particularly annoying as it manifests in a plethora of ways that can be constant, intermittent, or impulsive. We have approached noise
mitigation via the notions of "you can't fix what you can't measure," "seeing is believing," machine-aided soundscape listening, and
unusually cost-effective, robust, scalable sensor network designs for the creation of dense spatiotemporal sound maps. In this paper we report
on updates, including Citygram's recent partnership with IBM and its "Horizon" edge compute system, plug-and-sense sensor network hardware and
software, visualizations, machine learning, and sensor scaling and deployment strategies, applicable to capturing road, rail, aircraft,
and other types of urban noise to improve understanding of urban livability and traffic congestion. This includes a discussion of
NYCDOT's noise monitoring strategies to support the NYC Off Hour Delivery Program. We believe that truck delivery noise can
be significantly mitigated in order to make urban off-hour truck delivery practicable: a key factor in reducing congestion and improving safety for pedestrians and cyclists during busy daytime hours in the urban environment.
8:20
4aNSa2. The effects of transportation noise on the people’s emotional feelings of the community environment. Ming Yang, Jian
Kang, Yiying Hao (School of Architecture, Univ. of Sheffield, Western Bank, Sheffield S10 2TN, United Kingdom, mingkateyang@
163.com), and Lisa Lavia (Noise Abatement Society, Hove, United Kingdom)
A large number of soundscape studies have shown that soundscapes or sound environments affect people's emotional states, behaviors, and performance. This paper studies the effects of community noise, especially noise from transportation, on residents' emotional feelings about the environment. Based on a large-scale questionnaire survey of residents about community noise, such as the
sounds of traffic and people's activity, it presents the specific relationships between the sound sources and a range of emotional feelings
towards the community environment, such as finding it annoying, calm, pleasant, or exciting, using statistical methods including correlation analysis and principal component analysis. It shows that people's emotional feelings about the sound environment largely depend on
the dominant sound sources they hear; also, the sound sources in soundscapes (activities in the communities) tend to appear in groups
which collectively influence people's feelings.
Contributed Papers
8:40
4aNSa3. Testing of urban road traffic noise annoyance models—based
on psychoacoustic indices—using in situ socio-acoustic survey. Laure-Anne Gille and Catherine Marquis-Favre (Direction territoriale Ile-de-France, Cerema / Univ Lyon, ENTPE, Laboratoire Génie Civil et Bâtiment,
ENTPE/LGCB, rue Maurice Audin, Vaulx-en-Velin 69120, France, laureanne_gillechambo@yahoo.com)
Noise annoyance is one of the main non-acoustical effects of noise. To
manage this environmental issue, European cities of more than 100,000
inhabitants produce strategic noise maps which represent the noise exposure
in terms of Lden. Although this index is used in dose-effect relationships, several studies have shown that models based on Lden insufficiently predict the annoyance measured during in situ surveys. Indeed, several acoustical
characteristics (e.g., temporal variation) influence noise annoyance and are
not captured by Lden. Laboratory experiments make it possible to propose annoyance models based on noise sensitivity and several psychoacoustic indices,
which characterize these different acoustical characteristics.
However, these indices cannot be measured for each respondent during an in situ
survey, so it was previously not possible to test these annoyance models using in
situ survey data. Thus, a methodology is proposed to estimate the values of
the psychoacoustic indices on the basis of the Lden given in the survey database.
Urban road traffic noise annoyance models are tested using French survey
data, and the annoyance predicted by these models is compared to the measured
one. Results show that these noise annoyance models, coupled with the
methodology for estimating the values of the different indices, enable a better
prediction than Lden alone.
9:20
4aNSa5. Noise pollution from road traffic noise in Rome: Incidence of
coronary and cardiovascular events. Elena Ascari (IDASC-CNR, via
fosso del cavaliere 100, Roma 00133, Italy, elena.ascari@gmail.com), Gaetano Licitra (Area Vasta Costa, ARPAT, Livorno, Italy), and Carla Ancona
(U.O.C. Epidemiologia Eziologica e Occupazionale, Dipartimento di Epidemiologia del Servizio Sanitario Regionale - Regione Lazio, Roma, Italy)
This work focuses on the preliminary results of the research "Noise and
health pollution from road traffic and incidence of coronary and cardiovascular events in three Italian cohort studies." Outcomes of noise simulations
for the Rome agglomeration are reported for the whole territory (1285 km2):
a grid map and noise levels at residential buildings have been calculated
according to the Directive 2002/49/EC method. Population exposure has been
linked according to a specific protocol that is detailed in the work. Exposure
evaluation for epidemiological studies requires assigning noise levels to the
population according to the probability of being exposed to a specific level in
the dwelling. Limitations of the simulations are analyzed, and the associated
uncertainty is estimated based on the modeling choices made.
The uncertainty of the linked epidemiological outcomes is also discussed. First results on the incidence of coronary and cardiovascular events
in Rome will be presented. Finally, the expected results for the three Italian
cohort studies will be briefly explained.
9:00
4aNSa4. Selection of a sound propagation model for noise annoyance
prediction: A perceptual approach. Pierre-Augustin Vallin, Catherine
Marquis-Favre, Laure-Anne Gille (Univ Lyon, ENTPE, Laboratoire Génie
Civil et Bâtiment, ENTPE/LGCB, 3, rue Maurice Audin, Vaulx-En-Velin
69120, France, pierreaugustin.vallin@entpe.fr), and Wolfgang Ellermeier
(Technische Universität Darmstadt, Darmstadt, Germany)
The LDEN is widely used to assess transportation noise annoyance, as
recommended by the European Commission. However, other signal features
(e.g. amplitude modulation) have proven to influence noise annoyance as
well. It is not practical to determine the corresponding acoustical and psychoacoustical indices (e.g. roughness) when producing noise maps, since
that would require numerous in situ recordings. An alternative might be to
estimate the relevant index values knowing the LDEN value at a given receiver point M. To this end, a perceptually relevant sound propagation
model needs to be selected. Therefore, ground transportation noises were
simultaneously recorded at short (M1) and longer (M2) distances from the
source. Three different propagation models were applied to the M1 recordings in order to simulate pass-by noises heard at M2. A simple level
decrease based on geometrical divergence was also considered as a fourth
model. After a physical comparison of the four models, a listening test was
carried out to determine the best propagation model from a perceptual viewpoint. This test consisted of collecting dissimilarity ratings between simulated pass-by noises and a reference noise recorded in situ at M2.
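The fourth model mentioned above, a simple level decrease based on geometrical divergence, can be illustrated with the standard point-source spreading relation (an illustrative sketch; the abstract does not give the exact form used):

```python
import math

def divergence_attenuation_db(r1_m, r2_m):
    """Level decrease from spherical spreading of a point source
    between distances r1 and r2: 20 log10(r2/r1) dB, i.e.,
    6 dB per doubling of distance."""
    return 20.0 * math.log10(r2_m / r1_m)

# Moving from 10 m to 20 m from the source costs about 6 dB:
drop = divergence_attenuation_db(10.0, 20.0)
```

For an extended source such as a traffic line, cylindrical spreading (3 dB per doubling) would apply instead, which is one reason a perceptual comparison of candidate propagation models is informative.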
9:40
4aNSa6. New methods for measuring and correlating transportation
noise frequencies with ultrafine particulate emissions under varying meteorological conditions to inform environmental health studies. Douglas
J. Leaffer (Civil and Environ. Eng., Tufts Univ., Medford, MA 02155,
Douglas.Leaffer@Tufts.edu), Rafia Malik, Brian Tracey (Elec. and Comput.
Eng., Tufts Univ., Medford, MA), David M. Gute (Civil and Environ. Eng.,
Tufts Univ., Medford, MA), Aaron L. Hastings, Christopher J. Roof, and
George J. Noel (Environ. Measurement and Modeling, Volpe National
Transportation Systems Ctr., Cambridge, MA)
Transportation-derived particulate matter and chronic noise exposure frequently occur concomitantly in urban areas. Noise is an important confounder
to be evaluated in epidemiological studies, yet few public health studies have
included both air pollution and noise in health effects models due to difficulty
in demonstrating epidemiologic causal mechanisms and confounding factors
in noise and air pollution sampling and analytical methodologies. This study
will present a framework for the development of a traffic-noise frequency and
particulate emissions correlation model based on frequency-domain analysis
of vehicle noise measurements, compared with particulate measurements
sampled concurrently in two Greater Boston urban neighborhoods under varying meteorological conditions. New methods are presented for
evaluating bi-seasonal measurements of both transportation noise and associated particulate emissions, with emphasis on ultrafine particulates (UFP,
<100 nm diameter) from diesel exhaust. The goal of the paper is to develop
a preliminary model demonstrating correlations between transportation-source
noise frequencies and UFP. The importance of this research lies in establishing a
methodology to disentangle health-based receptor impacts from both pollutants under scenarios where meteorological parameters are not favorable for particulate transport to receptors, yet noise is measurably present.
10:00–10:20 Break
3802
J. Acoust. Soc. Am., Vol. 141, No. 5, Pt. 2, May 2017
Acoustics ’17 Boston
3802
Invited Papers
10:20
4aNSa7. A review of transport noise management plans in large North American and European cities. Julian Rice (Multimodal
Interaction Lab, McGill Univ., 3661 Peel St., Montreal, QC H3A 1X1, Canada, jarjtbt@gmail.com), Daniel Steele (Multimodal Interaction Lab & Ctr. for Interdisciplinary Res. in Music Media and Technol., McGill Univ., Montreal, QC, Canada), Romain Dumoulin
(Acoust. Consultant, Montreal, QC, Canada), and Catherine Guastavino (Multimodal Interaction Lab & Ctr. for Interdisciplinary Res. in
Music Media and Technol., McGill Univ., Montreal, QC, Canada)
Noise management plans of large cities are diverse along many factors, such as the responsibilities and interactions at different levels of governance (national vs. local) and the definition of noise (measurement-reliant vs. subject-centered). More specifically, for the regulation of transportation noise, there are various ways these sources are identified (by source type or acoustic signal properties), managed
(time of day, zoning, and context), and controlled (police v. specialized departments, complaint systems, treatment of private v. public
sources). The regulations, official communications and plans for transport noise are analyzed for 20 major North American and European cities (> 500,000 inhabitants) in order to assess current noise management strategies. Over the past 15 years, an extensive body of
academic literature has provided grounds for a soundscape approach to urban noise where “appropriate” sounds can be used to positive
effect. Current management plans are also examined for existing applications of this soundscape approach. Results will assist in highlighting trends and will feed into the development of best practices for noise management and regulation in large cities. This review is
part of a larger project (Sounds in the City), a collaborative research effort with the City of Montreal, to shape the future of urban noise
management.
10:40
4aNSa8. Advances regarding a new method for measuring the in situ noise abatement performance of urban noise reducing devices. Alexandre Jolibois (Health and Comfort, CSTB, 84 Ave. Jean Jaurès, Champs-sur-Marne 77420, France, alexandre.jolibois@cstb.fr), Jérôme Defrance, and Philippe Jean (Health and Comfort, CSTB, Saint-Martin-d'Hères, France)
Recent work on urban noise reducing devices has clearly shown the value of such solutions for improving sound
quality in urban areas. To accompany the development of new products, it has now become necessary to provide a technical regulatory
framework for all stakeholders (manufacturers, city planners, consulting acoustical engineers, etc.) in order to guarantee quality, efficiency, and compliance with urban implementation requirements. This is the main purpose of the technical group CNEA-U, affiliated with
the French commission for standardization of road traffic noise reducing devices. As part of the technical group's activities, a new method
for measuring the in situ noise abatement performance of devices adapted to the urban context has been developed. The purpose
of the method is to provide an indicator of the noise reduction effect of a particular product, measurable in situ but depending as little as
possible on environmental effects. To this end, the parameters of the method were first studied numerically and optimized. The method was then tested experimentally with several industrial prototypes in a controlled environment. In this paper, the
framework of the method as well as results of preliminary analyses are presented and discussed.
11:00
4aNSa9. Informing the public on noise impacts through the WEB-GIS DYNAMAP software application. Laura Peruzzi (Anas SpA, Via della Stazione di Cesano 311, Rome 00123, Italy, l.peruzzi@stradeanas.it), Patrizia Bellucci (Anas SpA, Rome, Italy), Andrea Cerniglia (ACCON, Pavia, Italy), and Paola Coppi (AMAT, Milano, Italy)
Public information on noise impacts is one of the most problematic objectives that authorities responsible for strategic noise maps must meet under the environmental noise directive 2002/49/EC. In order to facilitate public information, a web-GIS software application has been developed within the DYNAMAP project, a LIFE project aimed at developing a dynamic noise mapping system able to detect and represent in real time the acoustic impact due to road infrastructures. To that end, the project involves the development of customized low-cost sensors and communication devices, as well as the implementation of an advanced management and reporting interface to update noise maps and inform the public on noise issues. To guarantee the full effectiveness of this application, a group of selected users will be monitored to check the accessibility of the system and to help develop a user-friendly interface for public information. Tests will also be administered to the general public to evaluate the system's versatility and the comprehensibility of its contents. This paper gives a detailed description of the web software application, of the indicators used to ease public information, and of the tests prepared to check recipients' ability to manage and consult the system.
11:20
4aNSa10. Making the DYNAMAP project a reality in the suburban area of Rome. Patrizia Bellucci, Laura Peruzzi (Rd. Res. Ctr., ANAS S.p.A., Via della Stazione di Cesano, 311, Rome 00123, Italy, p.bellucci@stradeanas.it), Francesca R. Cruciani (Rd. Res. Ctr., ANAS S.p.A., Cesano di Roma, Italy), and Cristina Ferrari (Operation and Territorial Coordination, ANAS S.p.A., Rome, Italy)
The DYNAMAP project is a LIFE project aimed at developing a dynamic noise mapping system able to detect and represent in real time the acoustic impact due to road infrastructures (dynamic noise maps). Dynamic noise maps are achieved by updating pre-calculated basic noise maps as a function of sound pressure levels and weather conditions, provided by an automatic monitoring system made of customized low-cost sensors and a software tool implemented in a general-purpose GIS platform. The feasibility of this approach will be validated by installing the system in two pilot areas with different territorial and environmental characteristics: an agglomeration and a major road. The first pilot area is located in Milan, in a significant portion of the town, while the second is situated along the motorway A90 encircling the city of Rome. This paper describes the main issues related to the preparation of the basic noise maps and the implementation of the DYNAMAP system in the suburban area of Rome. An overview of the test sites and of the system configuration is also provided.
Contributed Papers
11:40
4aNSa11. Development of a new smart noise monitoring system for long-term noise control in cities. Francesco Borchi, Lapo Governi, Monica Carfagni, and Chiara Bartalucci (Dept. of Industrial Eng. of Florence, Univ. of Florence, Via di Santa Marta, 3, Firenze 50139, Italy, francesco.borchi@unifi.it)
The paper describes a new smart noise monitoring system designed and implemented within the project LIFE15 ENV/IT/000586 "Methodologies fOr Noise low emission Zones introduction And management" (MONZA). The prototype system has been designed starting from the state of the art and the monitoring needs of the MONZA project. The system can be considered a prototype owing to the necessary customization in the design of the connections among the hardware components and in the definition of the protocols to manage and post-process the collected data. The prototype must maintain its original specifications over a long period: the new monitoring system will be used for at least five years, during and after the MONZA project. In this paper, a description of the designed network is reported, as well as the optimized protocol used to check the variation of system performance over time.
12:00
4aNSa12. Effects of noise for workers in the transportation industry. Marion Burgess (School Eng. and Information Technology, UNSW Australia, Northcott Dr., Canberra, ACT 2612, Australia, m.burgess@adfa.edu.au) and Brett Molesworth (School of Aviation, UNSW Australia, Sydney, NSW, Australia)
There are many work environments in the transportation industry where employees must perform tasks requiring a high level of concentration and attention in a noise environment that is below a damage-risk level but above an acceptable level for office workers doing similarly challenging tasks. Pilots and bus, truck, and train drivers all need to make safety-critical decisions and operate technical equipment in the presence of continuous noise generated by their vehicles' engines. Transport check-in staff need to communicate with and process passengers in noisy check-in halls. In this paper, we discuss findings from a number of studies investigating the effect of noise similar to that experienced by transportation workers on various cognitive and memory skills. The studies have considered the interactions of noise type (i.e., wideband noise or babble noise), signal-to-noise ratio, and language background on cognitive processes, including working and recognition memory. The results highlight the detrimental effect of broadband noise on both working memory and recognition memory.
WEDNESDAY MORNING, 28 JUNE 2017
ROOM 203, 8:55 A.M. TO 12:20 P.M.
Session 4aNSb
Noise, ASA Committee on Standards, and Structural Acoustics and Vibration: Wind Turbine Noise
Nancy S. Timmerman, Cochair
Nancy S. Timmerman, P.E., 25 Upton Street, Boston, MA 02118
Paul D. Schomer, Cochair
Schomer and Associates Inc., 2117 Robert Drive, Champaign, IL 61821
Robert D. Hellweg, Cochair
Hellweg Acoustics, 13 Pine Tree Road, Wellesley, MA 02482
Chair’s Introduction—8:55
Invited Papers
9:00
4aNSb1. Subjective perception of wind turbine noise. Steven E. Cooper (The Acoust. Group, 22 Fred St., Lilyfield, NSW 2040, Australia, drnoise@acoustics.com.au)
Subjective testing of wind turbine noise to examine amplitude modulation and subjective loudness has tended to use large baffle
speaker systems to produce the infrasound/low frequency noise and one high frequency speaker—all as a mono source. Comparison of
mono and stereo recordings of audible wind turbine noise played back in a test chamber and a smaller hemi-anechoic space provides a
distinct and different perception of amplitude modulation of turbines. A similar exercise compares the use of high quality full spectrum
headphones with the two different sound fields applied to just the ears.
9:20
4aNSb2. The electric power being generated may be a key variable in the sensing of wind farm operations. Paul D. Schomer
(Schomer and Assoc. Inc., 2117 Robert Dr., Champaign, IL 61821, schomer@SchomerAndAssociates.com)
This entire assessment method is based on a series of statements and conclusions from those statements. The basic premise is that
sound flowing through the cochlea is not the source of problems below the threshold of hearing. That statement leaves two of what I will
call “technical possibilities.” One possibility is that there are pathways other than through the cochlea for the infrasound to get to the
brain. A second possibility is that to date we have missed something in the audible sound range that is the source of problems or that
both of these situations exist. This paper falls in the category of something missed in the audible range. It develops a theory that the electric power being generated is the dominant factor in people’s response, not acoustic level. People can see and hear a turbine running, but
they cannot know the power being generated. A diary study would list times people were home and the times they could detect the sensation. Correlation between electric power level being generated and subject responses would be particularly meaningful because they
would be responses without the subject having knowledge of the electric power being generated.
9:40
4aNSb3. A possible noise criterion for wind farms. Paul D. Schomer and Pranav K. Pamidighantam (Schomer and Assoc. Inc., 2117
Robert Dr., Champaign, IL 61821, schomer@SchomerAndAssociates.com)
Opposition to wind farm noise is not abating and shows no sign of doing so in the future. In a January 2017 paper in Sound and
Vibration, Hessler, Leventhal, Walker, and Schomer report that they have independently come to about the same conclusion for a proper threshold of wind turbine noise: the same A-weighted criterion has been shown to come up in a variety of independent
ways. This paper is not about pie-in-the-sky desires for no sound; rather, it attempts to map sound from a common source, such as road traffic noise, to the sound from wind farms. We do this calculation in two ways. In the first, the percent highly annoyed is set to 6.5%, roughly 58
dB DNL for road traffic. In the second, we develop the community tolerance level (CTL) for the comparative sources, and it is the difference in CTL that gives
the adjustment from one source to the other. Our conclusion is that the criterion for wind turbines should be 36-38 dB(A).
10:00
4aNSb4. On translating minimum siting distances into percentages of receiving properties meeting a stated dB(A) criterion. Pranav K.
Pamidighantam and Paul D. Schomer (Schomer and Assoc. Inc., 2117 Robert Dr., Champaign, IL 61821, ppamidig@illinois.edu)
Around the world, minimum siting distances are used by regulators and developers to limit the effects of wind turbine noise on people. Acousticians know that the proper calculation is equal-sound-level contours, but customers, in this case the communities, developers,
and regulators, all want simpler solutions. When creating limits for most industrial sources, noise levels from the source decrease monotonically with distance. For wind farms, the observer can be located within the source region, and a monotonic function can no longer be
assumed. This study makes use of data collected at over 1200 dwelling units as part of the Health Canada study. This paper provides a
method to determine minimum siting distances based on predicted percentages of exceedances of a dB(A) criterion at dwellings. A Riemann sum using CTL as a basis creates a model that can be applied to find acceptable minimum siting distances.
10:20–10:40 Break
10:40
4aNSb5. Reproduction of wind turbine infrasound and low frequency noise in a laboratory. Steven E. Cooper (The Acoust. Group,
22 Fred St., Lilyfield, NSW 2040, Australia, drnoise@acoustics.com.au)
The use of a large baffle speaker system to produce infrasound/low frequency noise from wind turbines for subject medical testing
was found to have issues in reproducing the original recorded signal: reproducing the original transient time signal was not achieved.
The use of synthesised signals for subjective testing of infrasound was examined and is discouraged when compared to actual real-world
sound files. The results of the testing and the recommendation to use medical studies in the field rather than laboratory testing are discussed.
Contributed Papers
11:00
4aNSb6. Simulation of wind turbine noise for community engagement.
Roalt Aalmoes and Merlijn den Boer (Environ. Dept., Netherlands Aerosp.
Ctr., Anthony Fokkerweg 2, Amsterdam 1059CM, Netherlands, roalt.aalmoes@nlr.nl)
The introduction of wind farms on land for the generation of renewable
energy often leads to discussion in the surrounding community. Concerns
about the impact of shadow flicker, horizon pollution, and noise levels
may postpone or even cancel the construction of a new wind farm. To
improve communication with the public, wind farm simulations, using the
Virtual Community Noise Simulator (VCNS), were created for two different
wind farm projects in the Netherlands. For this purpose, the primary noise
source of the wind turbine, the rotor, was auralized according to the design of
the blades. Atmospheric circumstances, including wind direction, were also
taken into account to model the propagation through the atmosphere towards
the observer location. Finally, 360-degree video recordings were made and
combined with the planned 3D models of the wind turbines to create an
appropriate visual experience using virtual reality. This was presented to
the public during local municipal meetings. The empirical conclusion from these
meetings is that the simulation provided an objective way to experience the plans, and led to a better understanding and acceptance of the
impact of the planned wind farms for the local community.
11:20
4aNSb7. Measurement techniques for determining wind turbine infrasound penetration into homes. Andy Metelka (Sound and Vib. Solutions
Canada Inc., 13652 4th Line, Acton, ON L7J 2L8, Canada, ametelka@
cogeco.ca)
Previous measurements using advanced measurement instrumentation
and narrowband three-dimensional FFT-based signal processing indicate the
presence of infrasonic blade-pass frequencies in dwellings from multiple turbines. New measurements include both near and extreme far field, detailing
that some homes are highly susceptible while others are not. Proposed calculations for transmissibility are outlined, with simultaneous long-term
measurements at four homes. Various locations inside the homes are also compared to outside measurements, relating wind speed, wind direction, and
other audible SLM parameters.
11:40
4aNSb8. Background noise variability relative to wind direction, temperature, and other factors. Patricia Pellerin (Tetra Tech, 160 Federal St., 3rd Fl., Boston, MA 02110, tricia.pellerin@tetratech.com)
Characterization of the ambient acoustic environment is often a critical component in the wind energy permitting process. Collection of ambient data may be required during the pre-construction or post-construction phases to establish a baseline for determining wind turbine sound contribution at different points of reception. The challenge with establishing that baseline is that ambient sound levels continuously fluctuate, affected by multiple factors including existing sound sources, human activity, and meteorological parameters like wind speed and direction. This continuous fluctuation makes both definition and reproducibility of ambient conditions difficult. This paper examines meteorological data collected during surveys at numerous wind energy facilities throughout the United States. Meteorological data were collected at different heights above ground level using a combination of multiparameter weather sensor stations deployed during survey periods as well as other meteorological towers in proximity to the wind energy facility sites. Relationships between ambient sound level data and various meteorological parameters, including wind speed, direction, temperature, and relative humidity, are reviewed and analyzed.
12:00
4aNSb9. Effect of atmospheric stability and low level jets on wind turbine noise. Benjamin Cotte (IMSIA, ENSTA ParisTech, 828 boulevard des Maréchaux, Palaiseau F-91120, France, benjamin.cotte@ensta.fr)
Numerous studies from the literature have shown that strong wind shear occurs frequently in stable atmospheres, typically at night. This phenomenon can be associated with low-level jets, characterized by a wind speed profile with a maximum a few hundred meters above the ground. This study investigates the effect of such wind speed profiles on wind turbine noise using a physically based model. The predictions are obtained by coupling an aeroacoustic source model based on Amiet's theory and a parabolic equation code for acoustic propagation in an inhomogeneous atmosphere. Two important broadband noise generation mechanisms are considered, namely trailing edge noise and turbulence inflow noise, and the coupling method takes into account the fact that wind turbine blades are moving and extended sources. In order to obtain realistic wind speeds for the simulations, a simple model of the nocturnal low-level jet is used, with input parameters based on measurements acquired at the Cabauw observatory in the Netherlands. Predictions of the overall sound pressure levels and of the third-octave-band noise spectra will be presented up to a distance of 1 km to show the noise variations due to the wind shear effect.
WEDNESDAY MORNING, 28 JUNE 2017
ROOM 310, 8:35 A.M. TO 12:20 P.M.
Session 4aPAa
Physical Acoustics and Signal Processing in Acoustics: Outdoor Sound Propagation I
Philippe Blanc-Benon, Cochair
Centre acoustique, LMFA UMR CNRS 5509, Ecole Centrale de Lyon, 36 avenue Guy de Collongue, Ecully 69134
Ecully Cedex, France
Sandra L. Collier, Cochair
U.S. Army Research Laboratory, 2800 Powder Mill Rd, RDRL-CIE-S, Adelphi, MD 20783-1197
Chair’s Introduction—8:35
Invited Papers
8:40
4aPAa1. Extended source models for long range wind turbine noise propagation. Benjamin Cotte (IMSIA, ENSTA ParisTech, 828
boulevard des Maréchaux, Palaiseau F-91120, France, benjamin.cotte@ensta.fr)
Wind turbine noise can be perceived at distances greater than one kilometer and is characterized by amplitude modulations at the receiver. In order to predict this noise, it is necessary to model the dominant aeroacoustic noise sources as well as the main outdoor propagation effects. In most studies from the literature, the wind turbines are modeled as point sources to simplify the coupling between
source and propagation models, but this assumption is not always justified. In this study, two original methods are proposed to couple an
aeroacoustic source model based on Amiet's theory and a Split-Step Padé parabolic equation code for acoustic propagation in an inhomogeneous atmosphere. In the first method, an initial starter is obtained for each segment of the blade using the backpropagation
approach. This method enables us to accurately model the directivity of the noise sources but is very computationally intensive. In the
second method, the blade segments are viewed as moving monopole sources, and only a limited number of parabolic equation simulations are needed which strongly reduces the computation time. These two methods are validated using an analytical reference solution in
a homogeneous medium and compared in various inhomogeneous atmospheres.
9:00
4aPAa2. Meteorological reanalysis data inputs for improved aircraft noise modeling. Rachel A. Romond and Victor Sparrow
(Graduate Program in Acoust., Penn State Univ., 201 Appl. Sci. Bldg., University Park, PA 16802, romond@psu.edu)
In order to improve the prediction capabilities of aircraft noise modeling tools, recent work has focused on including measurements
of the vertical structure of the inhomogeneous propagation environment. Meteorological reanalysis incorporates quality-controlled
measurements from many sources into atmospheric models, which produce 4-D data fields describing the state of the atmosphere. In this
work, reanalysis products are considered for use in the calculation of aircraft noise propagation from en-route and lower altitudes to the
ground. Atmospheric profiles are extracted from the NCDC/UCAR Climate Forecast System Reanalysis (CFSR) and input to an acoustic
raytracing model to demonstrate effects of realistic inhomogeneous atmospheric conditions throughout the propagation path. Results
will be discussed, as well as practical and methodological considerations for data integration. The techniques demonstrated for this data
set should also be applicable for other similar data sets, and thus may be useful in the future development of the Federal Aviation
Administration’s (FAA) Aviation Environmental Design Tool (AEDT). [Work supported by the FAA. The opinions, findings, conclusions, and recommendations expressed in this material are those of the authors and do not necessarily reflect the views of ASCENT
FAA Center of Excellence sponsor organizations.]
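The vertical profiles extracted from reanalysis enter acoustic ray tracing through the effective sound speed at each level; a minimal sketch of that conversion (the function name and profile values are illustrative, not taken from the CFSR work; the standard dry-air relation plus the along-path wind component is assumed):

```python
import math

GAMMA = 1.4      # ratio of specific heats for dry air
R_DRY = 287.05   # specific gas constant for dry air, J/(kg K)

def effective_sound_speed(temp_k, wind_ms, azimuth_cos=1.0):
    """Effective sound speed for ray tracing in a moving atmosphere:
    adiabatic sound speed sqrt(gamma * R * T) plus the wind component
    projected onto the propagation direction."""
    return math.sqrt(GAMMA * R_DRY * temp_k) + wind_ms * azimuth_cos

# Example profile as (height m, temperature K, along-path wind m/s)
# triples, such as might be read from reanalysis pressure levels:
profile = [(0.0, 288.15, 2.0), (500.0, 284.9, 6.0), (1000.0, 281.7, 9.0)]
c_eff = [effective_sound_speed(t, u) for _, t, u in profile]
```

The vertical gradient of this effective sound speed is what bends the ray paths, which is why resolving the inhomogeneous atmosphere matters for en-route noise predictions.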
9:20
4aPAa3. Predicting the pass-by signature of vehicle flow sound sources including the influence of nearby built environment. Nicolas Pignier, Ciaran J. O'Reilly, and Susann Boij (The Ctr. for ECO2 Vehicle Design, KTH Royal Inst. of Technol., Teknikringen 8,
Stockholm 10044, Sweden, ciaran@kth.se)
A numerical framework is presented aimed at evaluating the pass-by sound of vehicle sources in a simplified urban environment. By
including the influence of the nearby built environment in the propagation approach, this method goes beyond the scope of the standard
pass-by test performed on an ideal track in a free-field. This work allows for a complete modeling of the problem from sound generation
to sound propagation in three steps. First, computation of the flow around the geometry of interest; second, extraction of the sound sources generated by the flow using direct numerical beamforming, and third, propagation of the moving sound sources to observers including reflections and scattering by nearby surfaces. The identification of the sound sources in the second step is performed using linear
programming beamforming based on pressure data extracted from the flow simulations, resulting in a model of monopole sources. Step
three uses a propagation method based on a point-to-point moving source Green’s function, ray-tracing techniques, and a modified
Kirchhoff integral under the Kirchhoff approximation to compute first- and second-order reflections on built surfaces. The method is
demonstrated on the example of the sound generated by an air inlet.
9:40
4aPAa4. A German-French acoustic road pavement database: DEUFRABASE latest version. Michel C. Berengier, Judicaël Picaut,
Antoine Beguere, Nicolas Fortin (AME-LAE, IFSTTAR, Ctr. de Nantes, Rte. de Bouaye - CS4, BOUGUENAIS 44344 Cedex, France,
michel.berengier@ifsttar.fr), and Marie-Agnes Pallas (Univ Lyon-AME-LAE, IFSTTAR, BRON, France)
The aim of this database, developed during two DEUFRAKO (German/French cooperation) projects, is to provide a tool to predict, for
a large number of predefined situations, the impact of road pavements on Lden estimation for realistic configurations in terms of geometry, traffic composition, and propagation effects. The method implemented is based on (i) the standardized light- and heavy-vehicle
LAmax measured according to the ISO pass-by method, (ii) the relationship between this LAmax and the single-vehicle LAeq calculated
over a 1-hour time period, (iii) the excess sound attenuation between two receivers, one located in the road vicinity (7.50 m-1.20 m)
and the other in the far field, and (iv) an average daily traffic distribution. In 2008, a first version of the database was implemented and
uploaded on the German partner's website. More recently, the database structure has been redesigned to easily add new configurations
of topography, traffic, and pavements. New indicators, representative for instance of urban constraints, could also be envisaged. The paper deals with the description and validation of the procedures implemented, including their accuracy estimation, the new open-access
website (deufrabase.ifsttar.fr), and how to use it for road traffic noise prediction.
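The step from single-vehicle pass-by levels to an hourly LAeq can be illustrated by energy-summing single-event exposure levels over the period; a minimal sketch (the function name and the SEL-based formulation are illustrative assumptions, not the DEUFRABASE procedure itself):

```python
import math

def laeq_from_passbys(sel_levels_db, period_s=3600.0):
    """Equivalent continuous level LAeq,T from the single-event sound
    exposure levels (SEL, dB re 1 s) of individual vehicle pass-bys:
    energy-sum the events, then normalize by the period duration."""
    energy = sum(10 ** (sel / 10.0) for sel in sel_levels_db)
    return 10.0 * math.log10(energy) - 10.0 * math.log10(period_s)

# e.g., 300 identical light-vehicle pass-bys of SEL 78 dB in one hour:
laeq_1h = laeq_from_passbys([78.0] * 300)
```

For N identical events this reduces to LAeq,T = SEL + 10 log10(N) - 10 log10(T), which is how a measured per-vehicle level and a traffic count combine into an hourly indicator.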
Contributed Paper
10:00
4aPAa5. Atmospheric turbulence effects on acoustic vector sensing. Sandra L. Collier, Latasha Solomon, David Ligon, Max Denis, John Noble, W.
C. Kirkpatrick Alberts, and Leng Sim (U.S. Army Res. Lab., 2800 Powder
Mill Rd., RDRL-CIE-S, Adelphi, MD 20783-1197, sandra.l.collier4.civ@
mail.mil)
Acoustic vector sensing has been established in the underwater community and is gaining interest in the atmospheric community as technologies continue to improve. The effects of atmospheric turbulence on acoustic pressure-sensor arrays have been well studied and documented. Two-dimensional acoustic particle velocity and acoustic pressure, concurrent with atmospheric data, were collected during a series of field tests. Here, we examine the effects of atmospheric turbulence on acoustic vector sensing.
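A vector sensor measures particle velocity alongside pressure, so a source bearing can be estimated from the time-averaged active intensity. The sketch below is a textbook illustration of that principle, not the authors' processing chain; the function name and sampling assumptions are ours.

```python
import math

def bearing_from_vector_sensor(p, vx, vy):
    """Estimate source azimuth (degrees) from co-located pressure and 2-D
    particle-velocity samples via the time-averaged active intensity <p*v>.
    Turbulence-induced fluctuations in p and v perturb this estimate."""
    n = len(p)
    ix = sum(pi * vi for pi, vi in zip(p, vx)) / n  # x-component of intensity
    iy = sum(pi * vi for pi, vi in zip(p, vy)) / n  # y-component of intensity
    return math.degrees(math.atan2(iy, ix))
```

For a clean plane wave the estimate is exact; turbulence studies such as the one above quantify how the scatter of this bearing grows with turbulence strength and propagation distance.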
10:20–10:40 Break
3807
J. Acoust. Soc. Am., Vol. 141, No. 5, Pt. 2, May 2017
Acoustics ’17 Boston
3807
Invited Paper
10:40
4aPAa6. Noise mapping based on participative measurements with a smartphone. Judicaël Picaut, Pierre Aumond, Arnaud Can,
Nicolas Fortin, Benoit Gauvreau (AME, LAE, Ifsttar, Rte. de Bouaye, CS4, Bouguenais Cedex 44344, France, Judicael.Picaut@ifsttar.fr), Erwan Bocher, Sylvain Palominos, Gwendall Petit (Lab-STICC UMR CNRS 6285, UBS, Vannes, France), and Gwenaël Guillaume
(Cerema Est, Strasbourg, France)
Because noise is a major pollutant with non-negligible socio-economic impacts, many national regulations aim at reducing
the population's noise exposure. Within the context of the European directive 2002/49/EC, special attention is paid to the evaluation of
the existing noise environment. Nowadays, this assessment is based on simulated noise maps, which however present some
limitations due to the simplification of noise generation and propagation phenomena. Smartphone participative measurements are alternatively being developed, offering the high temporal and spatial granularities recommended by the EU directive. However, the existing
approaches often lack a quantification of the accuracy of the produced noise maps, and are rarely user-oriented. In this context, within the
framework of the EU project ENERGIC-OD, a Spatial Data Infrastructure (SDI, "OnoMap") has been developed to manage smartphone
measurements using a dedicated Android application ("NoiseCapture") and to produce relevant noise maps. In the present communication, this infrastructure is detailed, with specific attention to the following key points: data management and qualification from
production to dissemination, use of standards for data interoperability, optimization of noise measurements, production of 24-h
noise maps based on measurements distributed over a day, and integration of a confidence index on the produced data.
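A minimal sketch of the kind of aggregation such an infrastructure performs is given below. This is our own simplification, not the OnoMap processing chain: measurements are binned into grid cells, dB levels are averaged energetically (never arithmetically), and a naive count-based confidence index is attached to each cell. All names and the `min_count` threshold are assumptions.

```python
import math
from collections import defaultdict

def aggregate_noise_map(measurements, cell_size=10.0, min_count=5):
    """Turn crowdsourced (x, y, LAeq_dB) samples into a gridded noise map.
    Returns {cell: (level_dB, confidence)} where confidence saturates at 1.0
    once a cell holds min_count samples (a deliberately naive index)."""
    cells = defaultdict(list)
    for x, y, level in measurements:
        cells[(int(x // cell_size), int(y // cell_size))].append(level)
    result = {}
    for cell, levels in cells.items():
        # Energetic (not arithmetic) average of dB values.
        mean_energy = sum(10 ** (l / 10) for l in levels) / len(levels)
        confidence = min(1.0, len(levels) / min_count)
        result[cell] = (10 * math.log10(mean_energy), confidence)
    return result
```

A real pipeline would additionally weight samples by time of day to build the 24-h maps mentioned above and qualify each contribution before it enters the average.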
Contributed Paper
11:00
4aPAa7. Characterization of urban sound environments using a comprehensive approach combining open data, measurements, and modeling. Judicaël Picaut, Arnaud Can (AME, LAE, Ifsttar, Rte. de Bouaye, CS4,
Bouguenais Cedex 44344, France, Judicael.Picaut@ifsttar.fr), Jeremy
Ardouin (Wi6Labs, Rennes, France), Pierre Crepeaux (City of Lorient, Lor
ient Cedex, France), Thierry Dhorne (UBS, Vannes, France), David Ecotière
(Cerema Est, Strasbourg, France), Mathieu Lagrange (ADSTI team, LS2N,
Nantes, France), Catherine Lavandier (ETIS CNRS UMR 8051, ENSEA,
UCP, Cergy-Pontoise, France), Vivien Mallet (INRIA, Paris, France), Christophe Mietlicki (Bruitparif, Pantin, France), and Marc Paboeuf (Bouygues
Energies & Services, Saint-Herblain, France)
Urban noise reduction is a societal priority. In this context, the European
Directive 2002/49/EC aims at producing strategic noise maps for large
cities. However, nowadays the relevance of such maps is questionable, due
to considerable uncertainties, which are rarely quantified. Conversely, the
development of noise observatories can provide useful information for a
more realistic description of the sound environment, but at the expense of
insufficient spatial resolution and high costs. Thus, the CENSE project aims
at proposing a new methodology for the production of more realistic noise
maps, based on an assimilation of simulated and measured data, collected
through a dense network of low-cost sensors that rely on new technologies.
In addition, the proposed approach tries to take into account the various
sources of uncertainty, from both measurements and modeling. Beyond the
production of physical indicators, the project also includes advanced characterization of sound environments, through sound recognition and perceptual
assessments. CENSE is resolutely a multidisciplinary project, bringing together experts from environmental acoustics, data assimilation, statistics,
GIS, sensor networks, signal processing, and noise perception. As the project is in its launch phase, the present communication will focus on a global
overview, emphasizing its innovative and key points.
Invited Papers
11:20
4aPAa8. Sound radiation of a line source moving above an absorbing plane with a frequency-dependent admittance. Didier Dragna and Philippe Blanc-Benon (Laboratoire de Mecanique des Fluides et d'Acoustique, Ecole Centrale de Lyon, 36 Ave. Guy de Collongue, 69134 Ecully Cedex, France, Philippe.blanc-benon@ec-lyon.fr)
This paper is concerned with the derivation of an analytical solution for the acoustic pressure field generated by a line source moving
at a constant speed and height above an absorbing plane, and summarizes the results obtained in a recent article [Dragna and Blanc-Benon, J. Sound Vib. 349, 259-275, 2015]. As an extension of previous studies, the frequency dependence of the admittance is
accounted for. First, a Lorentz transformation is used to obtain an equivalent stationary problem. Particular attention is paid to the translation of the admittance boundary condition into the Lorentz space. An analytical solution is obtained as a Fourier transform. Excitation of
surface waves is investigated depending on the Mach number. In the far field, an asymptotic expression is sought using the modified saddle point method. The solution is expressed in the form of a Weyl-Van der Pol formula, in which the admittance is evaluated at the
Doppler frequency. Comparison of the pressure field obtained with the analytical solution and with a direct numerical simulation is performed. Finally, a parametric study is carried out, showing that the frequency variations of the admittance must be accounted for if the
source is located close to the ground and if its Mach number is greater than 0.2. [This work was supported by the LabEx Centre Lyonnais
d'Acoustique of Université de Lyon, operated by the French National Research Agency (ANR-10-LABX-0060/ANR-11-IDEX-0007).]
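The key qualitative ingredient above, the evaluation of the ground admittance at the Doppler frequency, can be summarized as follows. This is a hedged sketch in our own notation, not an equation taken from the paper:

```latex
f_D(\theta) \;=\; \frac{f_0}{1 - M\cos\theta},
\qquad
\beta \;\longrightarrow\; \beta\!\left(2\pi f_D(\theta)\right),
```

where $f_0$ is the emitted frequency, $M$ the Mach number of the source, and $\theta$ the angle between the source velocity and the source-receiver direction. In the Weyl-Van der Pol form of the solution, the spherical-wave reflection coefficient therefore depends on the receiver position through $f_D$, which is why a frequency-dependent admittance matters most for low sources at $M > 0.2$, where $f_D$ departs strongly from $f_0$.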
11:40
4aPAa9. Hybrid Fourier pseudospectral/discontinuous Galerkin time-domain method for arbitrary boundary conditions. Raúl Pagán Muñoz and Maarten Hornikx (Bldg. Phys. and Services, Dept. of the Built Environment, Eindhoven Univ. of Technol., P.O. Box
513, Eindhoven 5600 MB, Netherlands, r.pagan.munoz@tue.nl)
The wave-based Fourier pseudospectral time-domain (Fourier-PSTD) method was shown to be an effective way of modeling outdoor
acoustic propagation problems as described by the linearized Euler equations (LEE), but is limited to real-valued, frequency-independent
boundary conditions and predominantly staircase-like boundary shapes. A hybrid modeling approach was recently presented to solve the
LEE, coupling Fourier-PSTD with the nodal discontinuous Galerkin (DG) time-domain method. The hybrid approach allows the computation of complex geometries by using the benefits of the DG methodology at the boundaries while keeping Fourier-PSTD in the bulk of
the domain. This paper presents the implementation of arbitrary boundary conditions in the novel methodology, for instance, frequency-dependent boundaries. The paper includes an application case of sound propagation in an urban scenario and the comparison of the numerical results with experimental data.
12:00
4aPAa10. Application of the radiative transfer theory to forest acoustics. Vladimir E. Ostashev, D. Keith Wilson, and Michael Muhlestein (U.S. Army Engineer Res. and Development Ctr., 72 Lyme Rd., Hanover, NH 03755, vladimir.ostashev@colorado.edu)
Although forest acoustics has been studied extensively, there are no established theories or numerical methods for calculating sound
propagation and scattering from first principles. A novel approach based on the radiative transfer equation (RTE) overcomes these shortcomings [V. E. Ostashev and D. K. Wilson, J. Acoust. Soc. Am. V. 140 (4), 3194 (2016)]. In this presentation, the RTE as applied to forest acoustics is overviewed and some results are presented. The RTE is an integro-differential equation for the specific intensity (or the
radiance) which is the angular Fourier transform of the spatial correlation function of the sound field. This equation correctly describes
propagation phenomena such as the transformation of the coherent part of the sound field into the incoherent field, multiple scattering of
sound in different directions, and attenuation due to absorption. In this formulation, acoustical properties of a forest are described by the
total cross sections and differential scattering cross section of trunks, branches, leaves, and other scatterers. Analytical solutions of the
RTE are considered such as the modified Born approximation and diffusion approximation. Numerical solutions of the RTE are well
developed in other fields of physics and can be adjusted to forest acoustics.
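For orientation, the standard time-independent form of the RTE described above can be written (in our notation, which need not match the authors') as:

```latex
\left(\hat{\mathbf{n}}\cdot\nabla + \gamma\right) I(\mathbf{r},\hat{\mathbf{n}})
  \;=\; \int_{4\pi} \mu(\hat{\mathbf{n}},\hat{\mathbf{n}}')\,
        I(\mathbf{r},\hat{\mathbf{n}}')\,\mathrm{d}\hat{\mathbf{n}}'
      \;+\; S(\mathbf{r},\hat{\mathbf{n}}),
```

where $I(\mathbf{r},\hat{\mathbf{n}})$ is the specific intensity (radiance) in direction $\hat{\mathbf{n}}$, $\gamma$ is the total extinction coefficient (scattering plus absorption, built from the total cross sections of trunks, branches, and leaves), $\mu$ is the differential scattering cross section per unit volume, and $S$ is a source term. The left side describes streaming and extinction of the coherent field; the integral describes the gain of incoherent, multiply scattered energy from all other directions.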
WEDNESDAY MORNING, 28 JUNE 2017
ROOM 210, 8:35 A.M. TO 12:20 P.M.
Session 4aPAb
Physical Acoustics, Biomedical Acoustics, and Structural Acoustics and Vibration: Propagation in
Inhomogeneous Media I
Valerie J. Pinfield, Cochair
Chemical Engineering Department, Loughborough University, Loughborough LE11 3TU, United Kingdom
Olga Umnova, Cochair
University of Salford, The Crescent, Salford M5 4WT, United Kingdom
Josh R. Gladden, Cochair
Physics & NCPA, University of Mississippi, 108 Lewis Hall, University, MS 38677
Chair’s Introduction—8:35
Invited Papers
8:40
4aPAb1. Simulation of elastic wave propagation in heterogeneous materials. Anton Van Pamel (Mech. Eng., Imperial College London,
London, United Kingdom), Gaofeng Sha, Stanislav I. Rokhlin (Mater. Sci. and Eng., The Ohio State Univ., Columbus, OH), and Michael
J. Lowe (Mech. Eng., Imperial College London, South Kensington, London SW7 2AZ, United Kingdom, m.lowe@imperial.ac.uk)
The propagation and scattering of elastic waves within heterogeneous materials is of wide interest in seismology, medical ultrasound,
and non-destructive evaluation. The attenuation and noise caused by scattering can be a hindrance, limiting the interrogation of features
within the material, or it can be used to measure the properties of the material. In both cases, the possibility to perform accurate
simulations is highly attractive for developing these applications. Until now, most modeling of wave propagation in these media has
been limited to analytical methods based on low-order scattering assumptions, or to numerical simulations in two dimensions or in small
domains. But it has recently become possible to perform accurate three-dimensional finite element simulations in significant sample
volumes. This talk will present the deployment and validation of such a model, developed by the authors, and believed to be the first
quantitative simulation of this kind. Predicted attenuation and wave-speed dispersion will be shown to compare well with values
obtained from established scattering theory with accurate accounting of the second-order statistics of the material system. The results
provide new and independent insights, and the new modeling capability will facilitate future investigations into the physics of the propagation and attenuation phenomena.
9:00
4aPAb2. Mechanical rainbow trapping and Bloch oscillations in chirped metallic beams. Jose Sanchez-Dehesa (Dept of Electron.
Eng., Universitat Politecnica de Valencia, Camino de vera s.n., Edificio 7F, Valencia, Valencia ES-46022, Spain, jsdehesa@upvnet.upv.es), Arturo Arreola-Lucas (Posgrado en Ciencias e Ingenieria de Materiales, Universidad Autonoma Metropolitana, Mexico Distrito
Federal, Mexico), Gabriela Baez (Departamento de Ciencias Basicas, Universidad Autonoma Metropolitana, Mexico Distrito Federal,
Mexico), Francisco Cervera, Alfonso Climente (Dept of Electron. Eng., Universitat Politecnica de Valencia, Valencia, Comunidad
Valenciana, Spain), and Rafael Mendez-Sanchez (Instituto de Ciencias Fisicas, Universidad Nacional Autonoma de Mexico, Cuernavaca, Morelos, Mexico)
The mechanical rainbow trapping effect and the mechanical Bloch oscillations for torsional waves propagating in chirped mechanical structures are here experimentally demonstrated. After extensive simulations, three quasi-one-dimensional chirped structures were
designed, constructed and experimentally characterized by Doppler spectroscopy. When the chirp intensity vanishes, a perfect periodic
system, with bands and gaps, is obtained. The mechanical rainbow trapping effect is experimentally characterized for small values of the
chirp intensity. The wave packet traveling along the structure progressively slows down and is reflected back at a certain depth,
which depends on its central frequency. For larger values of the chirping parameter the rainbow trapping yields the penetration length
where the mechanical Bloch oscillations emerge. Numerical simulations based on the transfer matrix method show an excellent agreement with experimental data.
Contributed Papers
9:20

4aPAb3. Ultrasound field structure simulation in acousto-optic cells with acoustic beam reflection. Sergey Mantsevich, Vladimir Balakshy (Phys., M.V. Lomonosov Moscow State Univ., Vorobevy gory 1, Moscow 119991, Russian Federation, snmantsevich@yahoo.com), Vladimir Molchanov, and Konstantin Yushkov (National Univ. of Sci. and Technol. "MISIS", Moscow, Russian Federation)

Some acousto-optic devices fabricated from various crystals use acoustic wave reflection from one of the crystal facets to excite a wave with the required characteristics. The acoustic wave is first excited by a piezoelectric transducer; it propagates along some direction and reflects from a face of the acousto-optic cell. The orientation of this face makes it possible to obtain the desired acoustic wave after reflection, which may be accompanied by a change of acoustic mode type. The reflection process affects the acoustic beam structure, causing a redistribution of acoustic energy inside the beam and an inclination of the wave fronts. In this research, the influence of acoustic beam reflection on the acoustic field structure in quasi-collinear acousto-optic cells fabricated from tellurium dioxide crystal is examined and compared with that in a collinear acousto-optic cell fabricated from calcium molybdate crystal. Tellurium dioxide has extremely strong acoustic anisotropy, whereas calcium molybdate may be considered an acoustically isotropic medium. The problem is solved using an original method in which the influence of the medium's acoustic anisotropy on every component of the ultrasound beam's angular spectrum is taken into account. The research also has applied importance, since the acoustic field structure influences the characteristics of acousto-optic devices. [This research was supported by RSF grant No. 14-22-0042.]

9:40

4aPAb4. Characterizing composites with acoustic backscattering: Combining data-driven and analytical methods. Artur L. Gower, Jonathan Deakin, William J. Parnell (School of Mathematics, Univ. of Manchester, Alan Turing Building-G.112, Manchester M13 9PL, United Kingdom, arturgower@gmail.com), Robert M. Gower (Departement d'Informatique de l'Ecole Normale Superieure, INRIA, Paris, France), and Ian D. Abrahams (Mathematics, Univ. of Manchester, Manchester, United Kingdom)
Acoustic wave measurements are quick and non-invasive. They can be
ideal for characterizing composites, provided we can accurately interpret the
acoustic signals. However, over a wide range of frequencies, interpreting the
signal is complicated because the field is subjected to multiple scattering.
We introduce a new data-driven approach to characterizing composites via
acoustic backscattering. There are two major challenges in a data-driven
approach. First, we need a large amount of data on backscattered waves
from well-characterized composites. Second, we must determine how best to use these data to
characterize unknown composites. In this talk, we address both challenges, and use simulated 2D data to demonstrate our method. Our methodology can be used to characterize many features of the medium in question,
but for simplicity, we make a number of assumptions: the medium is formed
from two materials; one material forms a large number of scatterers (all
with approximately the same radius) embedded in a uniform background
material. Characterizing this composite is now equivalent to measuring the
volume fraction and radius of the scatterers. To simulate a wide range of
volume fractions and scatterer sizes, we use the exact theory for acoustic
scattering by identical circular cylinders.
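The library-matching idea described above can be sketched as follows. The forward model below is a made-up toy standing in for the exact multiple-scattering simulator the authors use, and all names and parameter grids are ours; only the structure (precompute simulated responses over a grid of volume fractions and radii, then match a measured spectrum to the closest entry) reflects the abstract.

```python
import math

def toy_backscatter(volume_fraction, radius, freqs):
    """Hypothetical toy forward model: backscattered energy vs. frequency
    for identical cylinders. NOT the exact theory of the paper; any
    simulator producing a spectrum per (volume_fraction, radius) fits here."""
    return [volume_fraction * math.sin(f * radius) ** 2 / (1 + f * radius)
            for f in freqs]

def nearest_neighbor_inversion(signal, freqs, phi_grid, radius_grid):
    """Data-driven step: return the (volume_fraction, radius) pair whose
    simulated spectrum is closest (least squares) to the measured one."""
    best, best_err = None, float("inf")
    for phi in phi_grid:
        for a in radius_grid:
            ref = toy_backscatter(phi, a, freqs)
            err = sum((s - r) ** 2 for s, r in zip(signal, ref))
            if err < best_err:
                best, best_err = (phi, a), err
    return best
```

In practice the "large amount of data" challenge means the library is built offline from many simulations, and the matching step is replaced by a trained regressor rather than a brute-force nearest neighbor.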
10:00
4aPAb5. Dispersion and acoustic nonlinearity in heterogeneous media
containing hyperelastic inclusions. Stephanie G. Konarski, Michael R.
Haberman, and Mark F. Hamilton (Appl. Res. Labs. and Dept. of Mech.
Eng., The Univ. of Texas at Austin, P.O. Box 8029, Austin, TX 78713-8029, skonarski@utexas.edu)
The present work considers the dispersion and quadratic nonlinearity
associated with propagation of finite-amplitude acoustic waves through a heterogeneous medium that contains a dilute concentration of hyperelastic inclusions embedded in a nearly incompressible matrix material. The acoustic
wave causes the inclusions to oscillate; these oscillations are modeled using a volume
formulation of a generalized Rayleigh-Plesset equation. The dispersion arises
from the inertia of the matrix surrounding the oscillating inclusions, whereas
the nonlinearity is primarily due to the stress-strain response of the inclusions. The importance of dispersion relative to nonlinearity is determined by
considering the efficiency of second-harmonic generation. The theory is then
used to study a specific inclusion, termed a snapping acoustic metamaterial
(SAMM). SAMM inclusions are sub-wavelength structures with designed
mechanical instabilities resulting in local regimes of positive and negative
stiffness. The low-frequency limit of the acoustic nonlinearity for the SAMM
inclusion is compared with the quasistatic values previously obtained through
a homogenization method. The significance of dynamic effects, such as the
behavior near resonance of the SAMM inclusions, can then be quantified.
[This work was supported by the Office of Naval Research.]
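For reference, the classic radial form of the Rayleigh-Plesset equation underlying the model above reads (the authors use a generalized, volume-based formulation; the form below is the standard textbook one, in our notation):

```latex
\rho_0\left( R\ddot{R} + \tfrac{3}{2}\dot{R}^{2} \right)
  \;=\; p_i(R) - p_\infty(t),
```

where $R(t)$ is the inclusion radius, $\rho_0$ the density of the (nearly incompressible) matrix, $p_i(R)$ the pressure exerted by the inclusion on the matrix (for a SAMM inclusion this follows its non-monotonic hyperelastic stress-strain response), and $p_\infty(t)$ the acoustic forcing. The inertial left side supplies the dispersion discussed above; the nonlinearity of $p_i(R)$ supplies the quadratic acoustic nonlinearity.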
10:20–10:40 Break
Invited Papers
10:40
4aPAb6. Diffusive transport and Anderson localization of ultrasonic waves in strongly scattering inhomogeneous media. John H.
Page, Laura Cobus, Kurt Hildebrand, Sebastien O. Kerherve, Anatoliy Strybulevych (Phys. and Astronomy, Univ. of MB, 301 Allen
Bldg, 30A Sifton Rd., University of MB, Winnipeg, MB R3T 2N2, Canada, john.page@umanitoba.ca), Benoit Tallon, Thomas Brunet (I2M, Universite de Bordeaux & CNRS, Talence, France), Fabrice Lemoult (Institut Langevin, ESPCI ParisTech, Paris, France),
Stephane Job (LISMMA, Supmeca, Saint-Ouen, France), Sergey Skipetrov, and Bart van Tiggelen (LPMMC, Universite Grenoble Alpes
& CNRS, Grenoble, France)
In inhomogeneous media with constituents having very different acoustic properties, very strong multiple scattering of ultrasonic
waves can occur, especially when the wavelength is comparable with the length scales over which the constituent properties vary. Such
strong multiple scattering can lead to a long “coda” that dominates the observable behaviour in pulsed experiments, and can dwarf the
ballistic pulse that travels coherently through the medium. In many cases, the transport of energy by the multiply scattered waves can be
well described using the diffusion approximation, which may even seem quite surprising since all interference effects are ignored. An
exception occurs when the return probability that the waves scatter back to the same spot becomes enhanced as a result of very strong
multiple scattering; then interference plays an important role and can ultimately lead to Anderson localization and the breakdown of
wave propagation. Examples of these wave phenomena will be presented in contrasting inhomogeneous materials, ranging from solid or
liquid inclusions in a fluid to dry granular media and unusual porous solids. I will describe robust methods for distinguishing between
diffusive and localized waves, as well as some of the remarkable properties of localized waves that can be investigated using ultrasound.
11:00
4aPAb7. Soft porous materials with ultra-low sound speeds in acoustic metamaterials. Thomas Brunet, Olivier Poncelet, Christophe
Aristegui (I2M, Universite de Bordeaux, 351, cours de la liberation, Bâtiment A4 - I2M/APY, Talence 33405, France, thomas.brunet@u-bordeaux.fr), Jacques Leng (LOF, Universite de Bordeaux, Pessac, France), and Olivier Mondain-Monval (CRPP, Universite de Bordeaux, Pessac, France)
Porous media have unique acoustic properties that have been intensively studied for many decades and found many applications in
various areas. Owing to their low sound speed, porous materials have proven to be interesting key elements for the realization of acoustic
metamaterials since they may act as strong Mie-type resonators when shaped as spherical particles [1]. In that context, soft porous silicone rubbers are ideal "ultra-slow" materials, since a sudden drop of the longitudinal sound speed cL with the porosity Φ has been
observed in these porous materials (cL < 100 m/s for Φ < 10%). Such unusual behavior is well captured by simple models based on
low-frequency approximations of multiple scattering theories [2]. Then, I will discuss the interest of these "ultra-slow" particles for
acoustic metamaterials, for which the acoustic index has been shown to be negative [3]. Finally, I will demonstrate that the exotic values
of the acoustic index may be tuned by using other porous particles such as xerogel beads [4]. [1] Brunet et al., Science 342, 323 (2013).
[2] Ba et al., Scientific Reports 7, 40106 (2017). [3] Brunet et al., Nature Materials 14, 384 (2015). [4] Raffy et al., Advanced Materials
28, 1760 (2016).
Contributed Papers
11:20
4aPAb8. Multiple scattering in resonant emulsions: Coherent-ballistic
propagation and diffusive transport. Benoit Tallon, Thomas Brunet (I2M,
Universite de Bordeaux, Universite de Bordeaux, 351 cours de la liberation,
Talence 33400, France, benoit.tallon@u-bordeaux.fr), and John H. Page
(Dept. of Phys. and Astronomy, Univ. of MB, Winnipeg, MB, Canada)
Ultrasonic pulse propagation experiments are reported on dilute suspensions of fluorinated-oil droplets immersed in a water-based gel matrix. These
resonant emulsions are model systems for studying the effects of scattering
resonances on wave transport since the large sound-speed contrast between
the scatterers and the surrounding medium enhances the Mie resonances of
the liquid particles. Measurements of the coherent-ballistic component
reveal that both the scattering mean free path and the group velocity strongly
depend on the frequency as predicted by the Independent Scattering Approximation. Scattering resonances are also responsible for very slow diffusivity
of the multiply scattered ultrasound. This slowing down of the diffusion process due to resonances is well captured by models that include additional
scattering delays of the ultrasonic pulses. The relationship between the diffusion coefficient and the ballistic data allows the frequency dependence of
the energy velocity of diffusing waves to be estimated, and shows that the energy
and group velocities are very different in our system. Although our ultrasonic measurements and their interpretation give a complete picture of wave
transport in dilute resonant emulsions, the description of the wave transport
in concentrated emulsions requires more sophisticated models based on the
spectral function approach and the self-consistent theory.
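The Independent Scattering Approximation invoked above can be summarized by the textbook Foldy result (our notation; the paper's conventions may differ):

```latex
k_{\mathrm{eff}} \;\simeq\; k_0 + \frac{2\pi n_0 f(0)}{k_0},
\qquad
\ell_s \;=\; \frac{1}{n_0\,\sigma_s} \;=\; \frac{1}{2\,\mathrm{Im}\,k_{\mathrm{eff}}},
\qquad
v_g \;=\; \left(\frac{\partial\,\mathrm{Re}\,k_{\mathrm{eff}}}{\partial\omega}\right)^{-1},
```

where $k_0$ is the wavenumber of the matrix, $n_0$ the droplet number density, $f(0)$ the forward scattering amplitude of a single droplet, and $\sigma_s$ its scattering cross section. Because $f(0)$ varies rapidly across the Mie resonances of the slow oil droplets, both $\ell_s$ and $v_g$ inherit the strong frequency dependence reported above; the ISA breaks down at higher concentrations, motivating the spectral-function and self-consistent approaches mentioned at the end of the abstract.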
11:40
4aPAb9. Anomalous transport of ultrasound in a strongly scattering inhomogeneous medium. Sebastien O. Kerherve, John H. Page (Dept. of
Phys. and Astronomy, Univ. of MB, Allen Bldg., Winnipeg, MB R3T 2N2,
Canada, sebastien.kerherve@umanitoba.ca), and Sergey Skipetrov
(LPMMC, CNRS - Universite Grenoble Alpes, Grenoble, France)
Investigating the propagation of multiply-scattered waves in disordered
materials allows the characterization of heterogeneous media and may
reveal anomalous wave properties. We study wave transport in a deceptively
simple system consisting of close-packed aluminum beads surrounded by
low viscosity silicone oil, in which ultrasonic waves undergo many
scattering events without incurring significant dissipative losses. Using ultrasonic pulsed transmission experiments, the small ballistic pulse at short
observation times was separated from the much larger multiple scattering
coda that extends over a wide range of arrival times, enabling the scattering
strength to be assessed as a function of frequency and giving values of kls
(wave vector times scattering mean free path) between 4 and 10. Based on
kls, the propagation of the multiply scattered waves is expected to be well
described by the diffusion approximation. However, this is not the case, and
the observed behaviour suggests the presence of two coupled modes of
propagation: a fast component traveling through the liquid and a slower
component traveling through the bead network. The results are interpreted
by developing a model for the coupled propagation of a diffusive component and a “renormalized” component that is described using the self-consistent theory of localization.
12:00
4aPAb10. A new expression for the second-order acoustical nonlinearity
parameter B/A for a suspension of free or encapsulated bubbles. Lang
Xia and Kausik Sarkar (Mech. and Aerospace Eng., George Washington
Univ., 800 22nd St. NW, SEH 3961, Washington, DC 20052, langxia.org@
gmail.com)
The presence of bubbles in a liquid introduces dispersion, increased
attenuation, and nonlinearities in the medium (bubbly liquid). The second-order acoustical parameter B/A describes the nonlinearity of the bubbly liquid. The nonlinear parameter B/A is an important property for characterizing different media and materials, and is also important for ultrasound
imaging. It can be derived from the equation of state of a fluid using a
Taylor expansion to second order. B/A has been studied extensively in
industrial, chemical, and biological fluids since Beyer proposed this thermodynamic technique. However, certain important aspects of this parameter
remain unexplored, specifically for bubbly liquids. Here, we develop a new
formula, based on the thermodynamic method, which correlates both the attenuation and the phase velocity of ultrasound waves in liquids containing free or
encapsulated microbubbles. These quantities can be measured directly using
a broadband technique. The formula offered here is relatively simple and
can be used to measure B/A directly from experiments; the new formula
avoids using second harmonics, which require higher excitation pressures, as
well as the direct thermodynamic method, which gives rise to inaccuracy.
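The Taylor expansion mentioned above is the standard Beyer definition of B/A (textbook form, in our notation, not the new formula of the paper):

```latex
p \;=\; p_0 + A\,s + \frac{B}{2}\,s^{2} + \cdots,
\qquad s = \frac{\rho - \rho_0}{\rho_0},
\qquad A = \rho_0 c_0^{2},
\qquad
\frac{B}{A} \;=\; 2\rho_0 c_0 \left(\frac{\partial c}{\partial p}\right)_{\!s},
```

where the expansion of the pressure $p$ about the equilibrium state $(p_0, \rho_0)$ is taken at constant entropy $s$ and $c_0$ is the small-signal sound speed. The contribution above is to express this quantity for a bubbly liquid through the measurable attenuation and phase velocity rather than through second-harmonic amplitudes.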
WEDNESDAY MORNING, 28 JUNE 2017
BALLROOM A, 8:00 A.M. TO 12:20 P.M.
Session 4aPPa
Psychological and Physiological Acoustics: Speech, Pitch, Cochlear Implants, and Hearing Aids Potpourri
(Poster Session)
Eric Healy, Chair
Speech & Hearing Science, The Ohio State University, Pressey Hall Rm. 110, 1070 Carmack Rd., Columbus, OH 43210
All posters will be on display from 8:00 a.m. to 12:20 p.m. To allow contributors in this session to see the other posters, authors of odd-numbered papers will be at their posters from 8:00 a.m. to 10:10 a.m. and authors of even-numbered papers will be at their posters from
10:10 a.m. to 12:20 p.m.
Contributed Papers
4aPPa1. Benefits of wearing hearing aids in a natural complex auditory
environment. Ervin Hafter (Psych., Univ. of California, Berkeley, 1854
San Lorenzo Ave., Berkeley, CA 94707, hafter@berkeley.edu), Jing Xia,
Shareka Pentony, Nazanin Nooraei (Starkey Hearing Res. Ctr., Berkeley,
CA), Brianne Ahmann, Ishan Kanungo (Psych., Univ. of California, Berkeley, Berkeley, CA), and Sridhar Kalluri (Starkey Hearing Res. Ctr., Berkeley, CA)
A simulation of multi-talker environments (Hafter et al., Basic Aspects of Hearing, 2013) asks subjects to process a continuous flow of information from spoken stories. Subjects respond manually to visually presented questions that appear soon after occurrence of the relevant information in one of the stories. In the present study, hearing-impaired listeners either do or do not receive gain through their hearing aids, and special interest is in responses to questions based on phonetic cues vs. those which require semantic analysis of the acoustic signal. Our aim is to determine the benefit of receiving hearing-aid amplification, comparing the benefit observed for phonetically based questions to that observed with semantically based questions, and in terms of individual linguistic and cognitive abilities as measured by frequently used tests for the purpose. Results showed a strong benefit of aided listening among subjects when the need is only to recognize speech, but large individual differences in aided benefit when the listener was required to understand the meaning of the speech, with auditory, linguistic, and cognitive capabilities all being factors that determine successful understanding. The incorporation of naturalistic elements in the simulation of multi-talker listening is important for unearthing the large individual variability in communication success.

Uncompromised situational awareness has become a critical component of hearing protection devices (HPDs). Situational awareness is a complex psychological phenomenon, which consists of several perceptual and cognitive factors. This study presents data from three experiments designed to analyze the performance of HPDs on measures of situational awareness and determine how multiple factors may impact in-the-field performance. Data are presented for four hearing protection devices on measures of sound localization, distance perception, spatial segregation, speech intelligibility, and dynamic scene analysis. Baseline (open-ear) data are also presented to contrast with the HPDs. Small differences in fine localization discrimination and the number of large quadrant errors were observed, but these differences between devices were not observed in the more complex scene-analysis task. A performance index was developed to determine overall performance of each HPD compared to open-ear performance. This index combined components from each experiment with subcategories of spatial segregation, object segregation, and attention. The index found no major differences between HPDs; however, it may serve as a reliable summary measure in future assessments of situational awareness.

Recent work has shown that a machine learning algorithm can produce large increases in speech intelligibility in noise for hearing-impaired listeners. This algorithm involves a deep neural network trained through supervised learning to estimate the ideal binary or ratio mask. The direct translational potential of this work is addressed currently. Primary issues surrounding future implementation into hearing aids and cochlear implants involve (i) the ability to generalize to conditions not encountered during training, and (ii) the computational load associated with operation of such an algorithm. Substantial advances have been made with regard to generalization. These will be outlined, as will associated decisions that can be made. The computational load associated with training and operation of a network will also be addressed. Here, we propose an alternative implementation that offers multiple advantages over current approaches.
3813
J. Acoust. Soc. Am., Vol. 141, No. 5, Pt. 2, May 2017
4aPPa4. Method to choose the ideal size of earplug. Veronique Zimpfer
(ISL, Inst. of Saint Louis, BP 70034, Saint Louis 68301, France, veronique.
zimpfer@isl.eu), Guillaume Andeol (IRBA, Bretigny, France), Yvan Demumieux (STAT, Satory, France), Agnes Job (IRBA, Bretigny, France), Pascal
Hamery (ISL, Inst. of Saint Louis, St. Louis, France), Geoffroy Blanck, and
Sebastien De Mezzo (ISL, Inst. of Saint Louis, Saint Louis, France)
The last version of 3MTM Combat ArmsTM earplugs was designed to
meet the hearing protection needs of the French armed forces. This earplug
uses triple-flange-design fits in three sizes (small, medium, and large) allowing the best adaptation to all morphologies. The individual choice of the
size of the earplug is the major problem to have an adequate protection:
how to choose the ideal size? Three methods were assessed in the current
study. The first method was based on the observation of tympanometry tips
fit. The second was a commercial device allowing to check the attenuation
of the earplug in situ. The last one was based on the detection of leak in low
frequency with an instrumented earplug. The objective of this study was to
compare these three different methods. An assessment test with 106 soldiers
was performed. Each soldier was tested with the three methods, in a different order. The results of the different methods are presented and compared.
Acoustics ’17 Boston
3813
4a WED. AM
4aPPa2. Can a trained deep neural network be implemented into hearing technology? Eric Healy (Speech and Hearing Sci., The Ohio State
Univ., Columbus, OH), Masood Delfarah (Comput. Sci. and Eng., The Ohio
State Univ., Columbus, OH), Jordan Vasko, Brittney Carter (Speech and
Hearing Sci., The Ohio State Univ., 1070 Carmack Rd., Columbus, OH,
vasko.30@buckeyemail.osu.edu), and DeLiang Wang (Comput. Sci. and
Eng., The Ohio State Univ., Columbus, OH)
4aPPa3. Situational awareness assessment of hearing protection. Eugene
Brandewie (Res., GN Hearing A/S, 75 E. River Rd., Ctr. for Appl. and
Translational Sensory Sci., Minneapolis, MN 55455, ebrandewie@gnresound.com) and Andrew Dittberner (Res., GN Hearing A/S, Glenview,
IL)
4aPPa5. Effects of high-frequency hearing loss on the unmasking produced by narrowband maskers. Yuan He and Jennifer Lentz (Speech and
Hearing Sci., Indiana Univ., 200 South Jordan Ave., Bloomington, IN
47405, heyuan@indiana.edu)
To examine whether high-frequency hearing loss impacts the unmasking abilities of the ear, we measured unmasking effects with narrowband maskers in different frequency regions at a suprathreshold level. Listeners with normal hearing and those with mild hearing loss in the extended-high-frequency region (12-20 kHz) participated in this study. Unmasking effects were evaluated using a method based on two-tone unmasking (Shannon, 1976), but rather than tones, narrowband noises were used as the maskers. Preliminary results suggest that unmasking effects can be successfully observed using narrowband maskers. Furthermore, we will evaluate the effects of masker frequency and bandwidth and the relationship between extended-high-frequency hearing loss and unmasking effects. The possibility of using unmasking effects as a frequency-specific measure of hidden hearing loss will be discussed.
4aPPa6. Are reports of temporary threshold shift-like symptoms by
humans with normal hearing associated with hidden hearing loss? Anthony J. Brammer, Gongqiang Yu, James J. Grady (Dept. of Medicine,
Univ. of Connecticut Health, 263 Farmington Ave., MC 2017, Farmington,
CT 06030-2017, gyu@uchc.edu), Kourosh Parham (Div. of Otolaryngol.,
Univ. of Connecticut Health, Farmington, CT), Martin G. Cherniack (Dept.
of Medicine, Univ. of Connecticut Health, Farmington, CT), Shannon Wannagot, and Kathleen M. Cienkowski (Speech, Lang. & Hearing Sci., Univ.
of Connecticut, Storrs, CT)
Forty-six subjects with normal hearing, mean age 20.2 years, were selected from 451 volunteers who completed a questionnaire concerning hearing, noise exposure, experience of TTS-like symptoms, and speech understanding. Metrics quantifying reports of TTS-like symptoms were
constructed from responses to questions concerning hearing immediately after noise exposure. Statistically significant deteriorations in scores on the
Speech, Spatial and Qualities of Hearing Scale (SSQ) (Gatehouse & Noble,
Int J Audiol 43, 85-99 (2004)) were found with increasing values of TTS
metrics for all SSQ questions. Groups reporting TTS-like symptoms
(“exposed”) and “controls” (with little/no noise exposure and no reports of TTS-like symptoms) were formed from the subject pool, with mean hearing levels differing by <2 dB from 250 Hz to 8 kHz. There was no difference in
mean word scores between groups in a Modified Rhyme test conducted in
speech-spectrum shaped noise. However, the exposed group exhibited a
statistically significant deterioration in threshold for detecting 4-Hz amplitude modulation of a 500-Hz carrier at 10 dB sensation level (SL) compared to controls, and an improvement at 50 dB SL. It thus appears that TTS-like
symptoms reported by persons with normal hearing may be associated with
subtle suprathreshold changes in auditory performance. [Work supported by
NIOSH.]
4aPPa7. Effects of cochlear-synaptopathy inducing moderate noise exposure on auditory-nerve-fiber responses in chinchillas. Vijaya Prakash
Krishnan Muthaiah, Michael K. Walls (Speech, Lang., and Hearing Sci.,
Purdue Univ., 715 Clinic Dr., West Lafayette, IN 47907, krishn77@purdue.
edu), and Michael G. Heinz (Speech, Lang., and Hearing Sci. & Biomedical
Eng., Purdue Univ., West Lafayette, IN)
It has been hypothesized that selective loss of low-spontaneous-rate
(low-SR) auditory-nerve (AN) fibers following moderate noise exposure
may underlie perceptual difficulties some people experience in noisy situations, despite normal audiograms. However, the finding of selective low-SR-fiber loss has not been replicated in an animal model with behavioral
thresholds similar to humans. We recently established a behavioral chinchilla model for which neural and behavioral AM-detection thresholds are
in line with each other and similar to humans. Here, we report physiological
AN-fiber response properties from anesthetized chinchillas exposed to noise
that produced cochlear synaptopathy, as confirmed by immunofluorescence
histology. Auditory-brainstem responses, distortion-product otoacoustic
emissions, and compound action potentials confirmed no significant permanent threshold shift. Stimuli included both simple (pure tones, as studied
previously) and complex (broadband noise) sounds. Low-SR fibers were
reduced in percentage (but not eliminated) following noise exposure, as
shown previously in guinea pigs. Saturated rates to tones were reduced.
Similar tuning and temporal coding were observed in broadband-noise
responses following noise exposure. Complete characterization of AN-fiber
responses to complex sounds in a mammalian behavioral model of noise-induced cochlear synaptopathy will be useful for understanding suprathreshold deficits that may occur due to hidden hearing loss. [Work supported by
NIH (R01-DC009838).]
4aPPa8. Effect of a mixed rate cochlear implant strategy on speech
understanding. Thibaud Leclère, Alan Kan, and Ruth Litovsky (Waisman
Ctr., Univ. of Wisconsin-Madison, 1500 Highland Ave., Waisman Ctr.,
Madison, WI 53705, leclere2@wisc.edu)
Bilateral cochlear implant (BiCI) users perform better on speech understanding in noise with two implants than with one; however, they still perform significantly worse than normal-hearing (NH) listeners. CI processors encode speech envelopes using pulsatile stimulation at relatively high rates, delivered to all electrodes. At these rates, the slowly varying temporal fine structure of
the original signal is lost and interaural time difference (ITD) cues are
unavailable. To maintain relatively good performance on both speech recognition and localization, we hypothesized that high- and low-rate pulses
could be mixed in a single stimulation strategy, and presented to different
channels. As a first step, NH listeners were tested on speech recognition in
noise. Pulsatile-excited vocoders, with center frequencies varying from
4000 to 8417 Hz, and envelope modulation at high (1000 Hz) or low (100
Hz) rates were used to simulate multi-channel CI processing. Comparisons
were made for conditions in which some high-rate channels are replaced
with low-rate channels intended to ultimately preserve ITD sensitivity.
Results from this experiment are important for determining the feasibility of
using mixed-rate CI stimulation strategies in healthy auditory systems, and
determining the extent to which speech recognition is robust to the insertion
of low-rate channels.
4aPPa9. Establishing the presence of cochlear implant related artifact
during sound-field recording of the auditory steady state response: A
comparison between normal hearing adults, cochlear implant recipients, and a cochlear implanted human cadaver. Shruti B. Deshpande
(Commun. Sci. & Disord., St. John’s Univ., 8000 Utopia Parkway, Queens,
NY 11439, deshpans@stjohns.edu), Michael P. Scott, Jill S. Huizenga (Cincinnati Children’s Hospital Medical Ctr., Cincinnati, OH), Ravi N. Samy
(Univ. of Cincinnati, Cincinnati, OH), and David K. Brown (Pacific Univ.,
Hillsboro, OR)
The Auditory Steady State Response (ASSR) is an objective technique
permitting dichotic, frequency-specific hearing threshold estimation. Electrical ASSR (eASSR) has the potential for auditory assessment in cochlear implant (CI) users through direct CI electrode stimulation. However, sound-field ASSR (sASSR) provides a more natural listening environment for CI users. Electrophysiological recordings could be affected by CI-related artifact. The present study investigated the possibility of CI-related artifact during
sASSR recording in 10 CI subjects at 0.5, 1, 2 and 4 kHz. Results were compared with 15 NH controls. We investigated the mean threshold differences
between the sASSR thresholds and the behavioral thresholds for each group
at the four frequencies. Correlations between the sASSR thresholds and behavioral thresholds were computed. There were significant differences
between the NH and CI groups across the four frequencies indicating the
possibility of a CI-related artifact. In order to confirm the presence of the artifact, we recorded the sASSR in a cochlear-implanted human cadaver, six
hours post-mortem. This pilot investigation, which is possibly the first sASSR recording in an implanted human cadaver, confirms the presence of a continuous artifact that “mimics” a true physiological response. Implications and
methods to minimize artifacts to potentially increase sASSR utility will be
discussed.
4aPPa10. Can dual-carrier processing restore masking release in vocoded speech? Brittney Carter, Eric Healy (Speech and Hearing Sci., The Ohio State Univ., 1070 Carmack Rd., Columbus, OH 43210, carter.962@buckeyemail.osu.edu), and Frederic Apoux (Otolaryngol. - Head & Neck Surgery, The Ohio State Univ., Columbus, OH)

Speech processed to replace the original temporal fine structure (TFS) with tone or noise carriers (vocoder processing) is generally less intelligible than natural, unprocessed speech, especially if background noise is present. Moreover, the intelligibility deficit associated with vocoder processing is typically larger if the background fluctuates over time. This deleterious effect of vocoder processing has led to the postulate that TFS cues play a critical role when listening into the dips in the background. Recently, we proposed a technique to reintroduce synthetic TFS cues into vocoded speech using one carrier for the target and one carrier for the background. This “dual-carrier” approach allows sentence intelligibility with a speech masker to reach a level almost comparable to that of natural speech. The goal of the present study was to investigate whether dual-carrier processing generally improves speech recognition in various noises or whether it truly compensates for the loss of TFS cues, thereby engendering masking release, as does natural speech. Results comparing masking release for three processing conditions (single-carrier, dual-carrier, and natural speech) in five backgrounds (speech-shaped noise, speech-modulated noise, and 1, 2, or 8 talkers) will be discussed.

4aPPa11. The relationship between electrical auditory middle-latency response components and measures of auditory performance and speech intelligibility in pediatric cochlear implant recipients. Shruti B. Deshpande (Commun. Sci. & Disord., St. John’s Univ., 8000 Utopia Parkway, Queens, NY 11439, deshpans@stjohns.edu), Zhaoyi Lu, Tao Pan, and Furong Ma (Peking Univ. Third Hospital, Beijing, China)

Evidence for the utility of electrically evoked auditory responses as measures for objective hearing assessment in pediatric cochlear implant (CI) recipients is proliferating. Recently, we demonstrated the relationship between the electrical auditory brainstem response (EABR) and measures of auditory performance and speech intelligibility in pediatric CI users (Wang, Pan, Deshpande, & Ma, 2015). However, the EABR reflects activity only up to the auditory brainstem. The electrical auditory middle-latency response (EAMLR) is sensitive to the functioning of subcortical structures as well as the auditory cortex and thus has greater utility in predicting auditory perception and processing. Here, we present data from 40 pre-lingually hearing-impaired pediatric CI recipients. The relationships between components of the EAMLR (latencies of Na, Pa, Nb, Pb; Na-Pa amplitude; and Na-Pa interval) and measures of auditory performance and speech intelligibility, assessed using caregiver rating scales such as Categories of Auditory Performance (CAP) (Archbold, Lutman, & Marshall, 1995) and Speech Intelligibility Rating (SIR) (Allen, Nikolopoulos, Dyar, & O’Donoghue, 2001), respectively, are analyzed. EAMLR profiles of “good” versus “poor” CI users (based on CAP and SIR ratings) will be discussed. Additionally, the effect of age of implantation on EAMLR, CAP, and SIR outcomes will be considered.

4aPPa12. Pitch magnitude estimation can predict across-ear pitch comparisons in cochlear-implant users. Sean R. Anderson, Alan Kan, Tanvi D. Thakkar, and Ruth Litovsky (Commun. Sci. & Disord., Univ. of Wisconsin-Madison, Waisman Ctr., Office 564, 1500 Highland Ave., Madison, WI 53705, sean.anderson@wisc.edu)

Bilateral cochlear implants provide auditory input to both ears, but sensitivity to interaural time differences (ITDs) varies across individuals. One factor affecting ITD sensitivity is place-of-stimulation mismatch across ears. Currently there is no fast method for estimating mismatch. Since pitch varies depending on place of stimulation, our laboratory uses two pitch-matching methods to estimate matched place of stimulation across ears: pitch magnitude estimation (PME) and direct pitch comparison (DPC). PME involves rating pitch from 1 (lowest) to 100 (highest) for even-numbered electrodes in each ear. DPC involves comparing the pitch of one electrode against six electrodes in the opposite ear, yielding an estimate of place-of-stimulation mismatch at one electrode. DPC predicts interaural electrode pairs with good ITD sensitivity, but is relatively slow because pairs are chosen one at a time. PME is a much faster, global estimate of the relative difference of interaural electrodes, but it is uncertain whether PME can predict the same interaural pairs as DPC. Here, we predict DPC estimates from PME results using regression techniques. Results suggest PME may be an efficient tool for creating clinical maps that yield good binaural sensitivity. [Work supported by NIH-NIDCD R01-DC003083 to RYL, NIH-NIDCD R03-DC015321 to AK, and NIH-NICHD P30-HD03552 to Waisman Center.]

4aPPa13. Non-sensory biases in a pitch-discrimination task for bilateral and single-sided deafness cochlear-implant listeners. Olga Stakhovskaya, Joshua G. Bernstein (Audiol. and Speech Ctr., Walter Reed National Military Medical Ctr., 4954 North Palmer Rd., Bldg. 19, R. 5607, Bethesda, MD 20889, olga.stakhovskaya.ctr@mail.mil), and Matthew Goupell (Hearing and Speech Sci., Univ. of Maryland College Park, College Park, MD)

Matching the interaural place of stimulation is likely to improve binaural processing for bilateral (BI) and single-sided deafness (SSD) cochlear-implant (CI) listeners. Although pitch matching can be used to estimate interaural mismatch for these listeners, non-sensory biases (e.g., responding to the test-frequency range rather than making interaural pitch comparisons) could influence the accuracy of place-match estimates. This study evaluated pitch discrimination as a method of estimating the relative interaural places of stimulation for individual CI electrodes for BI-CI and SSD-CI listeners. Three different frequency ranges, and randomization of the stimulus presentation across reference electrodes, were used to measure non-sensory biases. Results showed substantial frequency-range effects for the majority of reference electrodes tested for both listener groups, shifting place-match estimates by as much as about 7 mm for some listeners. Reference-electrode randomization affected a smaller proportion of the estimates, producing shifts of 2-3 mm or less. These findings suggest that pitch discrimination might not reliably estimate the relative interaural place of stimulation for BI-CI and SSD-CI listeners. [The views expressed in this article are those of the authors and do not reflect the official policy of the Department of Army/Navy/Air Force, Department of Defense, or U.S. Government. Funding: NIH-NIDCD R01-DC-015798 (Goupell/Bernstein).]

4aPPa14. Influence of limitations from the auditory periphery on across-channel sensitivity to amplitude modulation rate in cochlear-implant users. Sean R. Anderson, Alan Kan, and Ruth Litovsky (Commun. Sci. & Disord., Univ. of Wisconsin-Madison, Waisman Ctr., Office 564, 1500 Highland Ave., Madison, WI 53705, sean.anderson@wisc.edu)

Individuals with cochlear implants (CIs) attain lower speech reception scores in noise relative to normal-hearing (NH) listeners. Poorer performance in CI users is partly due to reduced access to cues used to segregate sound sources, which may be limited by a poor electrode-neuron interface in the auditory periphery. A poor electrode-neuron interface reduces sensitivity to temporal information at specific cochlear sites. This study investigated the influence of the periphery on discriminating differences in amplitude modulation (AM) rates presented simultaneously in two different cochlear channels. In each trial, one interval was presented and subjects chose whether the AM rates were the same or different. AM rates were paired within or across ears. It was hypothesized that, if sensitivity to AM rate was reduced in one channel due to poor transduction of AM rate, then across-channel sensitivity would decrease. Both CI users and NH listeners participated in this experiment. Results suggest that when temporal encoding in the auditory periphery is poor, sensitivity to differences across channels decreases. These results may influence CI mapping with respect to which electrode channels are removed to enhance grouping cues important for source segregation. [Work supported by NIH-NIDCD R01-DC003083 to RYL, NIH-NIDCD R03-DC015321 to AK, and NIH-NICHD P30-HD03552 to Waisman Center.]
4aPPa15. Voice gender release from masking in cochlear implant users
is correlated with binaural pitch fusion. Yonghee Oh, Lina Reiss (Otolaryngology-Head and Neck Surgery, Oregon Health & Sci. Univ., 3181 SW
Sam Jackson Park Rd., Mailcode NRC04, Portland, OR 97239, oyo@ohsu.
edu), Nirmal Srinivasan (Dept. of Audiol., Speech-Lang. Pathol. & Deaf
Studies, Towson Univ., Towson, MD), Kasey Jakien, Anna Diedesch, Frederick J. Gallun (National Ctr. for Rehabilitative Auditory Res., VA Portland
Health Care System, Portland, OR), and Curtis Hartling (Otolaryngology-Head and Neck Surgery, Oregon Health & Sci. Univ., Portland, OR)
Spatial and voice gender separation of target from masking speech leads
to substantial release from masking in normal-hearing listeners. However,
binaural pitch fusion is often broad in cochlear implant (CI) listeners, such
that dichotic stimuli with pitches differing by up to 3-4 octaves are fused
(Reiss et al., 2014). We hypothesized that broad binaural fusion could
reduce a listener’s ability to separate competing speech streams with different voice pitches, and thus reduce the voice gender as well as spatial benefit
for speech perception in noise. Speech reception thresholds were measured
in both bilateral and bimodal CI users, using male and female target talkers
at two spatial configurations (co-location and 60-degrees of target-masker
separation). Binaural pitch fusion was also measured. Different-gender
maskers improved target detection performance in bimodal CI users, and
performance was better with female than male targets in bilateral CI users.
No spatial benefit was seen in either CI group. As hypothesized, voice gender masking release was strongly correlated with binaural fusion range in bimodal CI users. These results suggest that sharp binaural fusion is necessary
for maximal speech perception in noise in bimodal CI users, but does not
benefit bilateral CI users. [Work supported by NIH-NIDCD grant R01
DC01337.]
4aPPa16. The role of spectral and temporal cues for vocal emotion recognition by cochlear implant simulations. Zhi Zhu, Ryota Miyauchi
(School of Information Sci., Japan Adv. Inst. of Sci. and Technol., 5-201
Student House, 1-1 Asahidai, Nomi, Ishikawa 9231211, Japan, zhuzhi@
jaist.ac.jp), Yukiko Araki (Kanazawa Univ., Kanazawa, Ishikawa, Japan),
and Masashi Unoki (School of Information Sci., Japan Adv. Inst. of Sci. and
Technol., Nomi-shi, Japan)
It is known that cochlear implant (CI) listeners have difficulty with vocal emotion perception. Research on vocal emotion perception has simulated the responses of CI listeners by presenting noise-vocoded speech (NVS) to normal-hearing listeners as a CI simulation. The purpose of this study is to clarify the relative contributions of spectral and temporal cues to vocal emotion recognition of NVS. In the simulation, the spectral cue was controlled by the number of speech-processing channels (4, 8, or 16), and the cutoff frequency of the envelope filters ranged from 0 to 64 Hz. Sentences were produced by one female talker according to five target emotions: neutral, joy, cold anger, sadness, and hot anger. As a result, the recognition rates significantly improved as the cutoff frequency was increased for all emotions. Moreover, the number of channels affected only the recognition rates of neutral, joy, and cold anger. The results suggest that temporal cues contribute to vocal emotion recognition, whereas the contribution of spectral cues depends on the type of emotion. In the future, the relationship between modulation spectral features and these perceptual data will be discussed.
4aPPa17. Theoretical evidence for lack of correspondence between
spectral ripple parameters and output of the cochlear implant processor, and an alternate explanation for the correlation with speech perception scores. Gabrielle O’Brien and Matthew Winn (Speech & Hearing
Sci., Univ. of Washington, 1417 N.E. 42nd St., Box 354875, Seattle, WA
98105-6246, andronovhopf@gmail.com)
The spectral ripple test is a popular measure of spectral resolution that,
in cochlear implant (CI) listeners, has been shown to correlate with speech
recognition scores. In the test, listeners distinguish between sounds whose
broadband spectra contain some variable number of peaks (i.e. spectral density), with some spectral modulation depth. A meta-analysis of literature on
spectral ripple tests in CI listeners shows a generally consistent ceiling level
of performance around 2 ripples per octave, with some outliers performing
above. We propose that there is a critical point in spectral density at which
3816
J. Acoust. Soc. Am., Vol. 141, No. 5, Pt. 2, May 2017
the output of the CI transitions from representing spectral ripples to a qualitatively different signal, resulting from the interaction between the spectral
peaks and the number of active device channels. Artefactual nonlinearities
for ripples of higher densities generally do not resemble ripples but may
accidentally match the spectral envelope characteristics of speech sounds.
The unintended correspondence between the test signal and speech at high
densities, rather than a true metric of spectral resolution, might explain the
predictive power of the test. Regardless of predictive power, our results suggest that ripples exceeding a critical spectral density cannot be defined along
the same continuum as ripples below that value.
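A spectral-ripple stimulus of the kind discussed above can be generated as a sum of log-spaced tones whose levels follow a sinusoid in log frequency. This is a minimal sketch, not the parameters of any specific published test; the component count, depth, and phase convention are assumptions.

```python
import numpy as np

def spectral_ripple(fs, dur, ripples_per_octave, depth_db=20.0,
                    f_lo=100.0, f_hi=5000.0, n_components=200, phase=0.0):
    """Spectral-ripple stimulus: many log-spaced tones whose levels trace a
    sinusoid in log frequency with density ripples_per_octave."""
    t = np.arange(int(fs * dur)) / fs
    freqs = np.geomspace(f_lo, f_hi, n_components)
    octaves = np.log2(freqs / f_lo)
    # Sinusoidal level contour (in dB) across log frequency.
    level_db = (depth_db / 2.0) * np.sin(2.0 * np.pi * ripples_per_octave * octaves + phase)
    amps = 10.0 ** (level_db / 20.0)
    comp_phases = np.random.default_rng(0).uniform(0.0, 2.0 * np.pi, n_components)
    x = np.sum(amps[:, None] * np.sin(2.0 * np.pi * freqs[:, None] * t + comp_phases[:, None]), axis=0)
    return x / np.max(np.abs(x))  # normalize to unit peak

# Discrimination pair: standard vs. spectral-envelope-inverted ripple.
standard = spectral_ripple(16000, 0.5, ripples_per_octave=2.0, phase=0.0)
inverted = spectral_ripple(16000, 0.5, ripples_per_octave=2.0, phase=np.pi)
```

The density argument (`ripples_per_octave`) is the parameter whose ceiling near 2 ripples/octave the abstract discusses.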
4aPPa18. Specific components of amplitude modulation that enable
detection in listeners with cochlear implants. Adam K. Bosen, Aditya M.
Kulkarni, and Monita Chatterjee (Boys Town National Res. Hospital, 555
North 30th St., Omaha, NE 68131, adam.bosen@boystown.org)
The ability of listeners with cochlear implants to detect amplitude modulation (AM) of stimulation level has been linked to their ability to understand speech, but the specific features of AM stimuli that enable their
detection remain unknown. At low modulation rates, sensitivity is strongly
associated with intensity resolution, but at high modulation rates sensitivity
to AM stimuli further degrades, indicating additional temporal limits on
detection. The goal of this work is to identify the specific components of
AM pulse trains that listeners with cochlear implants use to detect AM. We
measured increment and decrement detection thresholds for multiple increment/decrement durations in these listeners and compared these thresholds
to their ability to detect sinusoidal AM at different rates. Electrical stimulation also provides a unique opportunity to present arbitrary stimulation
waveforms through a single electrode, so we generated additional waveforms that either sharpened or smoothed different portions of each AM period to determine how this altered detectability. Preliminary results suggest
that increments in stimulation level are the critical feature that listeners use
to detect AM, and that at high AM rates increments are partially masked by
preceding stimulation.
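The two stimulus classes compared above, sinusoidal AM versus isolated increments in stimulation level, can be illustrated with per-pulse level contours. The pulse rate, modulation depth, and increment parameters below are arbitrary illustrative values, not those used in the study.

```python
import numpy as np

def am_pulse_levels(pulse_rate, am_rate, depth, dur):
    """Per-pulse levels for a sinusoidally amplitude-modulated pulse train
    (1.0 = unmodulated reference level, arbitrary units)."""
    t = np.arange(int(pulse_rate * dur)) / pulse_rate  # one entry per pulse
    return 1.0 + depth * np.sin(2.0 * np.pi * am_rate * t)

def increment_levels(pulse_rate, dur, inc_start, inc_dur, inc_size):
    """Steady pulse train containing a single level increment."""
    t = np.arange(int(pulse_rate * dur)) / pulse_rate
    levels = np.ones_like(t)
    levels[(t >= inc_start) & (t < inc_start + inc_dur)] += inc_size
    return levels

am = am_pulse_levels(pulse_rate=1000, am_rate=10, depth=0.2, dur=0.5)
inc = increment_levels(pulse_rate=1000, dur=0.5, inc_start=0.2, inc_dur=0.05, inc_size=0.2)
```

Comparing detection of `inc`-style stimuli of varying duration against `am`-style stimuli of varying rate is the logic of the experiment described above.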
4aPPa19. Effects of the interaction between the pitch of band noise and
color on psychological evaluation. Ryosuke Konno and Takeshi Akita
(School of Sci. and Technol. for Future Life, Tokyo Denki Univ., 5 Asahicho Senju Adachi-ku, Tokyo 120-8551, Japan, kondorapiyopiyo6969@
gmail.com)
In the present research, we study the effects of combined visual and auditory stimuli on perception and cognition. Within this framework, we conducted a psychological experiment on the interaction between “the pitch of a band noise” and “color.” In the experiment, subjects looked into a room model in which visual stimuli were presented by color-scheme cards while auditory stimuli of 1/3-octave band noises (100 Hz, 1 kHz, and 10 kHz) were presented through a loudspeaker. After each trial of combined visual and auditory stimulation, subjects completed a questionnaire about their impressions. As a result, we found that subjects rated the impression of brightness and lightness lower when a bright color was presented together with the 100-Hz band noise; however, this rating was still higher than that obtained when the 100-Hz noise was presented alone. It appears that a person forms impressions through the combined sensation of visual and auditory information.
4aPPa20. A pitch model based on rate, place, and envelope information.
David A. Dahlbom and Jonas Braasch (School of Architecture, Rensselaer
Polytechnic Inst., 110 8th St., Troy, NY 12180, dahlbd@rpi.edu)
A functional model of the auditory processing of pitch is presented. The
model draws on Licklider’s duplex theory of pitch [Experientia, 7, 128-143]
and further extends his approach by adding an envelope processing stage.
The incoming signal is sent through a gammatone filter bank to simulate the
behavior of the basilar membrane. Within each auditory band the autocorrelation function is calculated and a pitch estimate is made based on the delay
between the main peak and the adjacent side peak. A weighting for the salience of each pitch cue is then calculated based on the place theory using the
4aPPa21. Multiple pitch mechanisms revealed by effects of inharmonicity on pitch perception. Malinda J. McPherson (Program in Speech and
Hearing BioSci. and Technol., Harvard Univ., MIT Bldg. 46-4078, 43 Vassar St., Cambridge, MA 02139, malindamcpherson@g.harvard.edu) and Josh McDermott (Brain and Cognit. Sci., Massachusetts Inst. of
Technol., Cambridge, MA)
Pitch is generally defined as the percept corresponding to a sound’s fundamental frequency (F0). However, there is surprisingly little evidence for
the importance of F0-specific mechanisms in pitch perception. To investigate the extent to which pitch perception involves estimating F0, we conducted a battery of pitch-related music and speech tasks using harmonic and
inharmonic stimuli, the rationale being that inharmonic stimuli lack a clear
F0 and should impair F0-dependent mechanisms. Inharmonic speech was
generated with a variant of STRAIGHT analysis/synthesis. We found no difference in performance between harmonic and inharmonic stimuli for basic
pitch discrimination, melodic contour discrimination, and speech contour
discrimination, but substantial inharmonicity-induced deficits for sour-note
detection, interval recognition and discrimination, melody recognition, and
voice recognition. Collectively, the results indicate that F0 estimation is not
necessary to extract pitch contours in speech or music, but does appear critical for accurate interval perception, and for representing voices. These findings suggest that two or three distinct mechanisms may underlie what has
conventionally been couched as pitch perception: one which tracks shifts in
fine spectral patterns independent of F0, another that estimates F0 for determining precise interval relationships between notes, and perhaps another
associated with voice qualities.
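One common way to construct inharmonic complex tones is to jitter each component away from its harmonic frequency; note that the study above used a STRAIGHT-based method for speech, so this sketch covers only the simple tone-complex case, and the jitter range is an assumption.

```python
import numpy as np

def complex_tone(f0, n_harmonics, fs, dur, jitter=0.0, seed=0):
    """Complex tone with n_harmonics components. jitter=0 gives a harmonic
    tone; jitter>0 perturbs each component away from k*f0, removing a clear F0."""
    rng = np.random.default_rng(seed)
    t = np.arange(int(fs * dur)) / fs
    x = np.zeros_like(t)
    for k in range(1, n_harmonics + 1):
        # Multiplicative jitter keeps components near, but off, the harmonic grid.
        f = k * f0 * (1.0 + jitter * rng.uniform(-0.5, 0.5))
        x += np.sin(2.0 * np.pi * f * t)
    return x / n_harmonics

harmonic = complex_tone(200.0, 10, 16000, 0.5, jitter=0.0)
inharmonic = complex_tone(200.0, 10, 16000, 0.5, jitter=0.3)
```

Shifting every component of such a pair by the same proportion changes the contour while preserving (in)harmonicity, which is the contrast the tasks above exploit.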
4aPPa22. Pitch discrimination for multiple simultaneous complexes:
Effects of harmonic resolvability. Jackson Graves and Andrew J. Oxenham (Psych., Univ. of Minnesota, 75 E River Parkway, Minneapolis, MN
55454, grave276@umn.edu)
In natural listening contexts, especially in music, it is common to hear
three or more simultaneous pitches, each defined by a harmonic complex
tone. In order to extract the pitches of these tones using the rate-place code,
peripherally resolved components are required, but the peripheral resolvability of a complex tone is reduced when multiple pitches are presented. In this
experiment, we investigated the effect of resolvability on pitch discrimination in the context of multiple pitches. Listeners were asked to discriminate
the direction of a 0.5-semitone pitch change at the end of a four-tone
sequence, where the last tone in the sequence was embedded in a mixture of
two other simultaneous tones. Tones were either pure tones, or complex
tones filtered into one of two bandpass regions, in order to manipulate the
extent to which harmonics were resolved both before and after mixing. Discrimination performance was significantly above chance, even in the high-frequency region where the combination of harmonic components should
have been unresolved after mixing, suggesting that resolved harmonics may
not be necessary to extract the pitch from multiple complex tones. Predictions from spectral and temporal models of pitch were compared with the
results. [Work supported by NIH grant R01DC005216.]
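As an illustrative aside, the 0.5-semitone pitch change used in this task corresponds to a fixed frequency ratio; a minimal sketch (the 440 Hz example frequency is an arbitrary illustration, not a stimulus from the study):

```python
def shift_semitones(f0_hz, semitones):
    """Shift a frequency by a number of semitones (12 semitones = 1 octave)."""
    return f0_hz * 2.0 ** (semitones / 12.0)

# A 0.5-semitone step up from an illustrative 440 Hz tone:
shifted = shift_semitones(440.0, 0.5)  # ~452.9 Hz, a ratio of about 1.0293
```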
4aPPa23. Long-term maintenance for learning on pitch and melody discrimination in congenital amusia. Kelly L. Whiteford and Andrew J.
Oxenham (Psych., Univ. of Minnesota, 75 East River Parkway, Minneapolis, MN 55455, whit1945@umn.edu)
Congenital amusia is described as a life-long disorder in melody discrimination, related to poor fine-grained pitch perception. A recent study in
our lab found, however, that pitch and melody discrimination in amusia can
3817
J. Acoust. Soc. Am., Vol. 141, No. 5, Pt. 2, May 2017
improve with laboratory training. After training, over half of the amusics no
longer met the standard diagnostic criterion for amusia, assessed via the
Montreal Battery of Evaluation of Amusia (MBEA). The present study
examined the durability of learning effects by re-examining frequency difference limens (FDLs) and melody discrimination in the same participants
one year after the post-training session. Pure-tone FDLs were measured at 500, 2000,
and 8000 Hz using an adaptive three-interval forced-choice procedure, and
melody discrimination was assessed via the MBEA. Preliminary results
(n = 23; 11 amusics) showed no significant change in FDLs or melody discrimination between post-training and the one-year follow-up. Consistent with
post-training results, there were significant main effects of group, with amusics performing more poorly than controls. Despite these differences, eight
originally identified amusics no longer met the criterion for amusia based on
the MBEA. Results suggest learning on pitch- and melody-related tasks is
robust and can be maintained for at least one year. [Work supported by NIH
grant R01DC005216.]
4aPPa24. Speech transformed into song heightens temporal sensitivity
but weakens absolute pitch sensitivity. Emily Graber (Dept. of Music, Ctr.
for Comput. Res. in Music and Acoust., Stanford Univ., 658 Lomita Ct.,
Stanford, CA 94305, emgraber@stanford.edu), Rhimmon Simchy-Gross
(Dept. of Psychol. Sci., Univ. of Arkansas, Fayetteville, AR), and Elizabeth
H. Margulis (Dept. of Music, Univ. of Arkansas, Fayetteville, AR)
The speech-to-song illusion (STS) is a phenomenon in which some spoken utterances perceptually transform to song via repetition. The existence
of utterances that do and do not transform makes it possible to investigate
similarities and differences between the musical and linguistic modes of listening, with the former elicited by transforming utterances and the latter by
non-transforming utterances. In Experiment 1, inter-stimulus interval durations within STS trials were either steady, slightly variable, or highly variable. Participants reported how temporally regular the utterance entrances
were. In Experiment 2, participants were first exposed to STS trials and then
asked to choose the transposition of the utterances they heard during the exposure phase. Results indicate that listeners exhibit heightened awareness of
temporal manipulations to transforming utterances compared to non-transforming utterances, but not to absolute pitch manipulations. This suggests
that compared to the linguistic mode of perception, the musical mode entails
an increased sensitivity to temporal regularity, but not to absolute pitch. The
methodology used here establishes a framework for implicitly differentiating musical from linguistic perception, as well as for behaviorally investigating the different cognitive apparatus that people use when activating the
musical or linguistic mode of perception.
4aPPa25. Pitch recognition with sparse-coding recurrent neural networks. Oded Barzelay (Electric Eng., Systems, Tel-Aviv Univ., Geva 7,
Givatayim 53315, Israel, odedbarz@gmail.com), Omri Barak (Rappaport
Faculty of Medicine, Technion – Israel Inst. of Technol., Haifa, Israel), and
Miriam Furst (Electric Eng., Systems, Tel-Aviv Univ., Tel Aviv, Israel)
Pitch is a percept of the mind that correlates with, but is not determined solely by, acoustic periodicity. Simply stated, it relates different acoustic signals that share the same repetition rate to one percept. In contrast, the various stimuli entering the auditory periphery are expressed differently in the auditory nerve's population response. Thus, there is a difference between the input activity and the percept of the brain. Broadly, two leading opposing approaches exist, temporal models and spatial models, but these models explain partly disjoint features of pitch. We propose a novel approach to bridge the gap between these two extremes. It is based on optimality constraints that assume a parsimonious representation of the sensory auditory input. A recurrent neural network is trained to extract spatiotemporal patterns (pitch cues) from the auditory nerve's population activity. These pitch cues are then linearly related to the stimulus's pitch. The model can
explain different stimuli from psychoacoustic experiments, such as resolved
and unresolved pitch, Transposed Tones, iterated rippled noise, nonlinear
amplitude responses of the stimuli, harmonic shift, and musical notes. The
uniqueness of this model is its ability to explain various psychoacoustic phenomena within one mathematical framework that is suggested to be shared
with other modalities.
Acoustics ’17 Boston
4a WED. AM
distance of the estimated fundamental frequency from the band’s center frequency. Finally, the envelope is determined for each band and an additional
pitch estimate is made based on the autocorrelation function of the envelope. These three calculations—which provide rate, place, and envelope information—are combined into a global pitch estimate, explaining the pitch
percept associated with a wide range of stimuli. The model can be easily
integrated into larger auditory processing frameworks in which envelope information is important (e.g., monaural source segregation and binaural
localization). [Work supported by NSF BCS-1539276.]
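The envelope-based estimate described above (a pitch read off the autocorrelation function of a band envelope) can be sketched as follows; the sampling rate, envelope shape, and F0 search range are assumptions for illustration, not the model's actual parameters:

```python
import numpy as np

def envelope_pitch_autocorr(env, fs, fmin=50.0, fmax=500.0):
    """Estimate a pitch from a band envelope via its autocorrelation function."""
    env = env - env.mean()
    ac = np.correlate(env, env, mode="full")[len(env) - 1:]  # lags 0..N-1
    lo, hi = int(fs / fmax), int(fs / fmin)                  # plausible F0 lags
    lag = lo + np.argmax(ac[lo:hi + 1])                      # strongest periodicity
    return fs / lag

# A 200-Hz-periodic raised-cosine envelope sampled at 16 kHz:
fs = 16000
t = np.arange(int(0.1 * fs)) / fs
env = 0.5 * (1 + np.cos(2 * np.pi * 200 * t))
estimate = envelope_pitch_autocorr(env, fs)  # close to 200 Hz
```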
4aPPa26. The effect of musicality on cue selection in pitch perception.
Jianjing Kuang (Linguist, UPenn, 255 S 36th St., Philadelphia, PA 19104,
kuangj@sas.upenn.edu)
Our previous experiments (Kuang and Liberman 2015, 2016) have
shown that spectral cues have significant effects on pitch perception. One
striking finding from Kuang et al. (2015) is that there is a great deal of cognitive variation among individuals: musicians are much less affected by the
spectral conditions; by contrast, non-musicians heavily rely on spectral cues
in pitch perception. We hypothesized that this is because musicians are
more sensitive to fine f0 differences. Extending this finding, our present study examines whether the degree of musicality of a given speaker has effects on his/her strategies for perceiving pitch. The current experiment consists of two parts: musicality tests and a pitch classification test (as in Kuang and Liberman 2016). It is found that people with higher musicality scores attend more heavily to f0 cues than people with lower musicality scores do. This result supports our hypothesis and has important implications for tone acquisition.
4aPPa27. Differences in sound perception are reflected by individual
auditory fingerprints in musicians. Jan Benner, Julia Reinhardt (Dept. of
Radiology, Div. of Neuroradiology, Univ. of Basel Hospital, Petersgraben
4, Basel 4031, Switzerland, jan.benner@unibas.ch), Elke Hofmann (Music
Acad. Basel, Basel, Switzerland), Christoph Stippich (Dept. of Radiology,
Div. of Neuroradiology, Univ. of Basel Hospital, Basel, Switzerland), Peter
Schneider (Dept. of Neuroradiology, Univ. of Heidelberg Med. School, Heidelberg, Germany), Maria Blatow (Dept. of Radiology, Div. of Neuroradiology, Univ. of Basel Hospital, Basel, Switzerland), and William J. Davies
(Acoust. Res. Ctr., Univ. of Salford, Salford, United Kingdom)
Musicians have been reported to show significant inter-individual differences in elementary hearing functions, sound perception mode, musical instrument preference, performance style, as well as more complex musical abilities such as absolute and relative pitch perception and auditory imagery. However, it remains unexplored how individual elementary hearing functions and corresponding musical abilities are interconnected, and to what extent they reflect individual differences in the musical behavior of musicians. Using a combination of five listening tests and assessing the resulting psychoacoustic parameters, we were able to determine individual auditory fingerprints at the single-subject and group levels. 93 musicians (49 professionals and 44 amateurs) were individually tested for: frequency discrimination threshold, holistic and spectral sound perception, absolute pitch perception, relative pitch perception (musical interval recognition), and musical imagery (AMMA). At the individual level, our results show that auditory fingerprints differ remarkably between subjects. At the group level, using PCA and cluster analysis, we found four main components that were represented significantly differently across three characteristic clusters of subjects. Taken together, our findings suggest that inter-individual differences in the auditory fingerprint reflect the high variability of individual sound perception and may have a crucial impact on the specific musical preference, style, and
performance of musicians.
4aPPa28. Factors associated with broad binaural pitch fusion in children and adults with hearing aids and cochlear implants. Lina Reiss,
Curtis Hartling, Bess Glickman, Jennifer Fowler, Gemaine Stark, and Yonghee Oh (Otolaryngol., Oregon Health & Sci. Univ., 3181 SW Sam Jackson
Park Rd., Mailcode NRC04, Portland, OR 97239, reiss@ohsu.edu)
Many hearing-impaired adults who use hearing aids (HAs) and/or cochlear implants (CIs) have broad binaural pitch fusion, such that sounds with
large pitch differences are fused across ears (Reiss et al., JARO 2014;
JASA). The goal was to determine which subject factors are associated with
broad binaural pitch fusion. Binaural pitch fusion was measured in bilateral
HA users, bimodal CI users who use a HA in the contralateral ear, and bilateral CI users. Fusion ranges were measured by simultaneous, dichotic presentation of reference and comparison stimuli in opposite ears, and varying
the comparison stimulus to find the range that fused with the reference stimulus. Both children (ages 6-9) and adults were tested. Children in the HA
and bilateral CI groups had broader fusion than the bimodal CI group. In
addition, broad fusion was positively correlated with long durations of HA
use and early onset of hearing loss in adult HA users, and negatively
3818
correlated with duration of CI use in pediatric bimodal CI users. No correlations with subject factors were seen in bilateral CI users. The findings suggest that type of hearing device experience may influence binaural pitch
fusion in hearing-impaired individuals, especially children. [Work supported
by NIH-NIDCD grant R01 DC013307.]
4aPPa29. Pitch-trained neural networks replicate properties of human
pitch perception. Ray Gonzalez and Josh McDermott (Brain and Cognit.
Sci., Massachusetts Inst. of Technol., 77 Massachusetts Ave., 46-4078B,
Cambridge, MA 02139, raygon@mit.edu)
Pitch perception exhibits well-established dependencies on stimulus
characteristics. In human listeners, pitch is predominantly determined by
low-numbered harmonics that are believed to be resolved by the cochlea,
but is also conveyed to a lesser extent by high-numbered harmonics that are
not individually resolved. Moreover, the relative phases of harmonics influence pitch perception only when the stimulus consists of high-numbered
(unresolved) harmonics. To assess whether these dependencies reflect information constraints on the problem of extracting pitch from peripheral auditory representations, we trained a convolutional neural network to estimate
the fundamental frequency (F0) of tones in noise. The tones replicated basic
spectral properties of natural sounds. The resulting neural network model
was tested on complex tones that varied in harmonic composition and phase
(as in classic psychoacoustic experiments). The network qualitatively replicated features of human pitch perception, including better performance for
tones containing low-numbered harmonics and worse performance when the
phases of high-numbered (presumably unresolved) harmonics were randomized. The results are consistent with the notion that human pitch perception
exhibits near-optimal performance characteristics for the task of estimating
F0 from peripheral auditory representations, in that optimizing for performance of this task is sufficient to reproduce human performance
characteristics.
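Stimuli of the kind described, harmonic complexes varying in harmonic composition and component phase, can be sketched as below; the sampling rate, duration, and harmonic ranges are illustrative assumptions rather than the study's exact stimulus parameters:

```python
import numpy as np

def harmonic_complex(f0, harmonics, fs=16000, dur=0.2, random_phase=False, seed=0):
    """Sum equal-amplitude sinusoids at the given harmonic numbers of f0.

    With random_phase=True, the component starting phases are randomized, a
    manipulation that perceptually matters mainly for unresolved harmonics.
    """
    rng = np.random.default_rng(seed)
    t = np.arange(int(dur * fs)) / fs
    x = np.zeros_like(t)
    for h in harmonics:
        phase = rng.uniform(0, 2 * np.pi) if random_phase else 0.0
        x += np.sin(2 * np.pi * h * f0 * t + phase)
    return x / len(harmonics)

# Low-numbered (resolved) vs. high-numbered (unresolved) harmonics of 200 Hz:
low = harmonic_complex(200.0, range(1, 6))
high = harmonic_complex(200.0, range(12, 21), random_phase=True)
```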
4aPPa30. Pitch height as part of speech plans: Evidence from response
to auditory startle. Chenhao Chiu (Graduate Inst. of Linguist, National Taiwan Univ., No. 1, Sec. 4, Roosevelt Rd., Taipei 10617, Taiwan, chenhaochiu@ntu.edu.tw)
In Mandarin, pitch contour denotes phonemic contrast while pitch height
does not serve as a phonemic parameter. When triggered by a startling auditory stimulus (SAS, > 120 dB), a prepared Mandarin CV syllable [ba] is elicited rapidly while its pitch contour remains unperturbed, suggesting that phonemic details are included in the speech plan and are susceptible to rapid release under feedforward control [Chiu and Gick 2014, JASA-EL, 322328]. Pitch height, however, is elevated in SAS-induced responses. It is not
clear whether or not pitch height can be independently pre-specified in
speech plans. The current study uses the startle paradigm to tackle this question by comparing general Mandarin speakers and Mandarin speakers with
pitch training. Preliminary results show that specific pitch height can be prepared and is preserved in SAS-induced responses. More unperturbed pitch
height is observed in pitch-trained speakers’ responses than in general Mandarin speakers’ responses. While slight and limited elevation may occur at
the beginning of SAS-induced responses, the preservation of the absolute
pitch height appears to interact with intended response durations. The results
suggest that absolute pitch height, even when not acting as a phonemic parameter, can be pre-specified in speech plans and released by a SAS both
rapidly and accurately.
4aPPa31. Does a pitch center exist in auditory cortex? A mismatch negativity study. Feng Gu and Lena Wong (The Univ. of Hong Kong, Rm.
770, Meng Wah Complex, Pokfulam, Hong Kong 000000, Hong Kong,
gufeng@hku.hk)
Pitch sensation can be evoked by different acoustic cues, which should be processed by different strategies for extracting pitch. However, in recent studies a general pitch center was hypothesized to exist at some level of the auditory pathway, supporting a general pitch sensation independent of other acoustic features. In this study, the existence of a pitch center was examined by testing whether the mismatch negativity (MMN) response, an index of pre-attentive auditory processing, can be elicited by pitch deviations in a passive
4aPPa32. How susceptibility to noise varies across speech frequencies.
Sarah E. Yoho (Dept. of Speech and Hearing Sci., The Ohio State Univ.,
1000 Old Main Hill, Logan, UT 84321-6746, sarah.leopold@usu.edu), Eric
Healy, and Frederic Apoux (Dept. of Speech and Hearing Sci., The Ohio
State Univ., Columbus, OH)
It has long been assumed that the corrupting influence of noise on
speech is uniform across all frequencies, and that the contribution of each
speech frequency decreases at the same rate as noise is added. This assumption is evident in numerous previous works, and is seen clearly in the
Speech Intelligibility Index where the contribution of each speech band is
scaled in the same way according to the amount of noise present. Here, it is
argued that susceptibility to noise may differ across speech bands. To test
this hypothesis, the compound method [F. Apoux and E. W. Healy, J. Acoust. Soc. Am. 132 (2012)] was modified to evaluate the noise susceptibility
of individual critical bands of speech. Noise was added to each target speech
band and the signal-to-noise ratio (SNR) required to reduce the contribution
of that band by half was estimated. It was found that noise susceptibility
varies greatly across speech bands, and that the SNR required to similarly
affect each band differed by as much as 12 dB. Interestingly, no obvious
systematic relationship appeared to exist between band importance and
noise susceptibility.
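The per-band signal-to-noise ratio at issue here is the standard power ratio expressed in decibels; a minimal sketch (the example waveforms are arbitrary, not the study's speech bands):

```python
import numpy as np

def snr_db(signal, noise):
    """Signal-to-noise ratio in dB from signal and noise waveforms."""
    p_signal = np.mean(np.asarray(signal, dtype=float) ** 2)
    p_noise = np.mean(np.asarray(noise, dtype=float) ** 2)
    return 10.0 * np.log10(p_signal / p_noise)

# Equal-power signal and noise give 0 dB; halving the noise amplitude
# (quartering its power) raises the SNR by about 6 dB:
equal = snr_db(np.ones(100), np.ones(100))          # 0.0 dB
quieter = snr_db(np.ones(100), 0.5 * np.ones(100))  # ~6.02 dB
```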
4aPPa33. Stimulus features affecting speech recognition in a two-talker
masker. Lauren Calandruccio (Psychol. Sci., Case Western Reserve Univ.,
11635 Euclid Ave., Cleveland, OH 44106, lauren.calandruccio@case.edu),
Emily Buss (Otolaryngology/Head and Neck Surgery, Univ. of North Carolina, Chapel Hill, NC), Lori Leibold (Ctr. for Hearing Res., Boys Town
National Res. Hospital, Omaha, NE), and Mary Lowery (Psychol. Sci., Case
Western Reserve Univ., Cleveland, OH)
Large individual differences in performance have been observed for
speech recognition in the presence of two-talker speech maskers. Moreover,
two-talker masker samples appear to vary in masking effectiveness. These
two observations suggest that (1) listeners vary in the extent to which they
take advantage of stimulus features that aid in the segregation of target from
masker speech and (2) maskers vary in the extent to which salient stimulus
features are available that aid in the segregation of target from masker
speech. Stimulus talker variability and individual listener variability are
explored across two experiments. The first experiment evaluates differences
between talkers based on vocal characteristics. These characteristics include
fundamental frequency, pitch variability, and cadence. Additionally, listeners are asked to rank each individual competing talker’s voice with respect
to similarity to the target talker’s voice. The second experiment will explore
linguistic differences in the competing speech including two-talker maskers
that consist of concatenated sentences, story passages, and conversational
dialogs and how the content of the speech affects masking. Data will be presented for listeners with normal-hearing thresholds.
4aPPa34. Differences in speech motor control between bilingual and
monolingual speakers—An acoustic study. Rena K. Chang and Manwa L.
Ng (Speech and Hearing Sci., Univ. of Hong Kong, 759 Meng Wah Complex, Pokfulam 000, Hong Kong, manwa@hku.hk)
The present study attempted to acoustically examine the speech motor
abilities associated with English produced by English monolingual speakers
(MS), Cantonese-English bilingual speakers with superior (BS-SE) and inferior English (BS-IE). Articulation rate, formant frequencies (F1 and F2),
and voice onset time (VOT) obtained from different speech tasks were compared across the three speaker groups to reveal the language influence on
3819
their speech motor control. Results indicated that: (1) the MS group exhibited the fastest articulation rate while the BS-IE group the slowest; (2) the
three speaker groups had significantly different VOT values for the plosives
/b-/, /g-/, /ph-/ and /th-/; and (3) bilingual speakers exhibited larger vowel
spaces, demonstrating more posterior tongue position, based on F1 and F2
values of the three corner vowels /-i/, /-a/ and /-u/, than monolingual speakers. No systematic implications were obtained regarding the effect of bilingualism on speech motor control. Despite the inconclusive findings, data
from the present study shed light on understanding the effect of bilingualism
on speech motor control.
4aPPa35. An algorithm to increase intelligibility for hearing-impaired
listeners in the presence of a competing talker. Eric Healy (Speech and
Hearing Sci., The Ohio State Univ., Pressey Hall Rm. 110, 1070 Carmack
Rd., Columbus, OH 43210, healy.66@osu.edu), Masood Delfarah (Comput.
Sci. and Eng., The Ohio State Univ., Columbus, OH), Jordan Vasko, Brittney Carter (Speech and Hearing Sci., The Ohio State Univ., Columbus,
OH), and DeLiang Wang (Comput. Sci. and Eng., The Ohio State Univ.,
Columbus, OH)
Individuals with hearing impairment have particular difficulty perceptually segregating concurrent voices and understanding a talker in the presence of a competing voice. In contrast, individuals with normal hearing
perform this task quite well. A machine learning algorithm is introduced
here to address this listening situation. A deep neural network was trained to
estimate the ideal ratio mask for a target talker in the presence of a single
competing talker. The algorithm was found to produce sentence-intelligibility increases for hearing-impaired and normal-hearing listeners at various
signal-to-noise ratios. This benefit was largest for the hearing-impaired listeners and averaged 59 percentage points at the least-favorable SNR, with a maximum of 87 percentage points. The mean intelligibility achieved by the hearing-impaired listeners using the algorithm was equivalent to that of young normal-hearing listeners without processing, under conditions of identical interference. Possible reasons for the limited ability of hearing-impaired listeners
to perceptually segregate concurrent voices are also addressed.
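The ideal ratio mask mentioned above assigns each time-frequency unit a value between 0 and 1; a common formulation, sketched here with arbitrary toy spectrograms and an assumed compression exponent of 0.5 (not necessarily the study's configuration):

```python
import numpy as np

def ideal_ratio_mask(target_spec, noise_spec, beta=0.5):
    """Ideal ratio mask from target and interference magnitude spectrograms."""
    t_pow = np.abs(target_spec) ** 2
    n_pow = np.abs(noise_spec) ** 2
    return (t_pow / (t_pow + n_pow + 1e-12)) ** beta  # epsilon avoids 0/0

# Toy 2x2 magnitude "spectrograms" (rows: frequency, columns: time):
target = np.array([[2.0, 0.1], [1.0, 1.0]])
noise = np.array([[0.1, 2.0], [1.0, 0.0]])
mask = ideal_ratio_mask(target, noise)  # values near 1 where target dominates
```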
4aPPa36. Aging and the use of syntactic and talker consistency cues in a
temporally interleaved task. Karen S. Helfer, Sarah Poissant, and Gabrielle R. Merchant (Commun. Disord., Univ. of Massachusetts Amherst, 358
N. Pleasant St., Amherst, MA 01002, khelfer@comdis.umass.edu)
The purpose of the present study was to identify differences between
older and younger listeners in how they use talker consistency and word
order cues in both to-be-attended and to-be-ignored streams. A temporally interleaved method was used in which participants were asked to report the
first and then every other word within a stream of sounds, and ignore the
intervening sounds. The to-be-attended words were presented in either correct syntactic order (“Anne dropped three old mats”) or in random order
(“Six pink Dave zones lost”). The to-be-ignored sounds were either words
(in syntactically-correct or random order), steady-state noise, or environmental sounds. When stimuli were words, the talker was either consistent or
varied for all five words of the target and/or masker. Preliminary analyses
showed that listeners in both groups were only minimally affected by intervening stimuli that were samples of noise or environmental sounds. When
the interleaved sounds were words, younger adults were able to take greater
advantage of syntactic correctness of the to-be-attended stream, as compared to older adults. Both younger and older participants benefitted from a
consistent target talker when syntactic cues were unavailable. [Work supported by NIDCD R01 DC012057.]
4aPPa37. Three-dimensional ultrasound images of Polish high front
vowels. Steven M. Lulich (Speech and Hearing, Indiana Univ., Bloomington, IN), Malgorzata E. Cavar, and Max Nelson (Dept. of Linguist., Indiana
Univ., Ballantine Hall 844, 1020 E. Kirkwood Ave., Bloomington, IN
47405-7005, mcavar@indiana.edu)
The 3-D ultrasound method has been applied to collect data on Polish
high front vowels. In particular, Polish has one unambiguous high front
vowel and another one that in the phonological literature is variously
referred to as high central or back unrounded and transcribed as [ɨ]. While
oddball paradigm composed of four different pitch-evoking stimuli (i.e., sinusoidal tone, resolved complex, iterated rippled noise, and pulse train). Although the pitch-evoking cues in these stimuli were disparate, they evoked a fixed pitch of 200 Hz (or 283 Hz in a reversed oddball paradigm). Occasionally a deviant sound was encountered which evoked a different pitch (283 Hz, or 200 Hz in the reversed oddball paradigm). The results showed that no significant MMN response was elicited by the deviant, regardless of the type of pitch-evoking stimulus, suggesting the absence of a pitch center at the pre-attentive auditory processing stage.
there exists a sizeable body of articulatory research on Polish, including X-rays from as early as the 50s and 60s, the ultrasound data reveal more detail about the position of the tongue center and tongue root. The data evaluated so far support the view that the vowel transcribed as [ɨ] is a front vowel. The two front vowels differ in the position of the tongue root, relative raising of the tongue, and extent of the lip gesture, but do not differ substantially with regard to tongue body advancement on the front-back axis. The data
also capture the temporal aspect of speech, and together with time-aligned
audio recordings and video recordings of the lips, allow for fine-grained
analysis of the acoustic effects of these articulatory gestures.
children learned equally well in each condition. Further inspection of individual data, however, showed a subset of children acquiring more label-object pairs in the competitor condition. The subject pool was split into two
subgroups of children, those who performed > 5% better or worse in the
competitor condition. Independent t-tests showed that children who performed better in the competitor condition had lower scores on an executive
function test. Though counterintuitive, results suggest that children with
poorer inhibitory control of irrelevant stimuli are better at fast mapping
novel label-object pairs in the presence of acoustic competition. Cognitive
demands associated with the fast-mapping task will be discussed.
4aPPa38. Effect of relative envelope periodicity on speech-on-speech
masking. Frederic Apoux (Otolaryngol. - Head & Neck Surgery, The Ohio
State Univ., 915 Olentangy River Rd., Columbus, OH 43212, fred.apoux@
gmail.com), Brittney Carter, and Eric Healy (Speech and Hearing Sci., The
Ohio State Univ., Columbus, OH)
4aPPa41. Comparing musicians’ and non-musicians’ ability to make
use of F0 differences between competing talkers in natural and monotone speech. Sara M. Madsen (Auditory Percept. and Cognition Lab, Dept.
of Psych., Univ. of Minnesota, Ørsteds Plads, Bldg. 352, Lyngby 2800, Denmark, samkma@elektro.dtu.dk) and Andrew J. Oxenham (Auditory Percept.
and Cognition Lab, Dept. of Psych., Univ. of Minnesota, Minneapolis,
MN)
Speech intelligibility generally increases as the difference in fundamental frequency (F0) between two simultaneous talkers increases. Vocoded
speech shows no such effect. However, as previous work reported a contribution of envelope periodicity to masking release, an effect of F0 separation
should be observed with vocoded speech. In a first experiment, the effect of
F0 separation in vocoded speech was directly evaluated by presenting pairs
of simultaneous sentences from the same male talker. The background sentence was time-reversed and its F0 was manipulated to range from 0 to 1
octave above that of the target sentence. Although limited, an effect was
observed for large F0 separations. In a second experiment, target and background sentences were from different talkers and differed in envelope cutoff
rates. The envelope low-pass cut-off of one signal was manipulated independently, ranging from 4 to 400 Hz, while the other was fixed at 400 Hz.
As expected, decreasing the cut-off of the target resulted in a decrease in
intelligibility. Inversely, decreasing the cut-off of the background improved
intelligibility by as much as 50 percentage points at 4 Hz. These findings show a
potential contribution of envelope periodicity to speech-on-speech masking
but only for very large F0 separations.
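The envelope cut-off manipulation described above can be sketched as follows; the half-wave-rectification envelope, brick-wall FFT filter, and test signal are simplifying assumptions for illustration, not the authors' exact processing:

```python
import numpy as np

def lowpass_envelope(x, fs, cutoff_hz):
    """Half-wave rectify a signal and low-pass the result via an FFT mask."""
    env = np.maximum(x, 0.0)                  # crude envelope: half-wave rectify
    spec = np.fft.rfft(env)
    freqs = np.fft.rfftfreq(len(env), d=1.0 / fs)
    spec[freqs > cutoff_hz] = 0.0             # brick-wall low-pass
    return np.fft.irfft(spec, n=len(env))

# A 1 kHz carrier with a 100 Hz amplitude modulation, sampled at 16 kHz:
fs = 16000
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 1000 * t) * (1 + np.cos(2 * np.pi * 100 * t))
slow = lowpass_envelope(x, fs, 4.0)    # 4 Hz cut-off: fluctuations removed
fast = lowpass_envelope(x, fs, 400.0)  # 400 Hz cut-off: 100 Hz AM retained
```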
4aPPa39. Allophonic variation of Polish vowels in the context of prepalatal consonants. Malgorzata E. Cavar (Linguist, Indiana Univ., Dept. of
Linguist., Ballantine Hall 844, 1020 E. Kirkwood Ave., Bloomington, IN
47405-7005, mcavar@indiana.edu), Steven M. Lulich (Speech and Hearing,
Indiana Univ., Bloomington, IN), and Max Nelson (Linguist, Indiana Univ.,
Bloomington, IN)
Phonetic studies of Polish mention allophonic variation in Polish vowels, in that there is a systematic effect of the prepalatal consonant context. In
particular, [u] is fronted, [ɛ] is raised, [a] is fronted and sometimes raised, and [ɔ] is fronted following a prepalatal consonant. Additionally, phonemic [i] is excluded after non-palatalized consonants, and the phonemic [ɨ] does
not occur after prepalatals. Although X-ray data for Polish speech production exists from the 50s and 60s, no X-ray images are available of the contextual variants of vowels adjacent to prepalatal consonants. In this study,
we present 3-D tongue shapes of the vowels in neutral and prepalatal contexts. The data show a combination of raising and fronting of the tongue
body for all front vowels, and also a consistent effect of tongue root
advancement.
4aPPa40. Preschool children’s fast mapping of novel words in the presence of speech competitors. Tina M. Grieco-Calub, Tiffany W. Fang, and
Katherine M. Simeon (Northwestern Univ., 2240 Campus Dr., FS, 2-231,
Evanston, IL 60208, tinagc@northwestern.edu)
Children frequently learn new vocabulary in the presence of competing
sounds. This study was designed to test the influence of speech competitors
on children’s ability to learn novel label-object pairs. Preschool-aged children (N = 16) performed a fast-mapping task in both quiet (target speech =
60 dB SPL) and speech competitor conditions (58 dB SPL). In each condition, children were familiarized with three novel label-object pairs on a computer screen. Children were subsequently tested on whether they mapped
each pair with a closed-set, three-alternative-forced-choice test: children
pointed to the object that corresponded to the spoken label. On average,
3820
Recent studies disagree on whether musicians have an advantage over
non-musicians in understanding speech in noise. This study tested the hypothesis that better fundamental-frequency (F0) discrimination enables
musicians to make better use of F0 differences between competing talkers.
Sentence intelligibility was measured in a background of noise or two competing talkers, where the target and background speech were natural or
monotonized. Participants were also tested with the Vocabulary and Matrix
Reasoning subtests of the Wechsler Abbreviated Scale of Intelligence. As
expected, speech intelligibility improved with increasing F0 difference
between the target and the two maskers for both natural and monotone
speech. However, no significant speech intelligibility advantage was
observed for musicians over non-musicians in any condition. F0 discrimination was significantly better for musicians than for non-musicians. Scores in
the IQ test did not differ significantly between groups. Overall, the results
do not support the hypothesis that musical training leads to improved speech
intelligibility in speech or noise backgrounds. [Work supported by NIH
grant R01DC005216, the Carlsberg, Augustinus, P.A. Fisker, and Knud
Højgaard Foundations.]
4aPPa42. Presentation method as air- and bone-conducted speech for
delayed auditory feedback. Teruki Toya and Masashi Unoki (School of
Adv. Sci. and Technol., Japan Adv. Inst. of Sci. and Technol., 1-1, Asahidai,
Nomi-shi 923-1292, Japan, yattin_yatson@jaist.ac.jp)
Speakers perceive their own voices as “auditory feedback” during
speech production. Effects of delayed auditory feedback (DAF) on their
speaking styles have been investigated to clarify relationships between
speech production and perception. Previous work has investigated whether DAF presented not only as air-conducted (AC) but also as bone-conducted (BC) speech affects speaking styles. However, the presence of speakers’ own natural voices via bone conduction could not be ignored. This study investigates speaking styles under DAF presented as AC and BC speech in the presence of a masker, to isolate the effects of the delayed stimuli alone as AC and
BC speech. Speech duration and the number of dysfluent episodes under
delay conditions were measured to quantify the effect of delayed stimuli on
the speaking styles. Under noiseless and AC-masker conditions, longer speech durations and more dysfluencies were observed under DAF as BC speech than under DAF as AC speech. In contrast, under the BC-masker condition, the opposite trend was observed. Speakers’ introspective reports suggested that their own natural voices were masked better by the AC masker than by the BC masker. These results indicate that an AC masker is effective for masking speakers’ own natural voices during DAF as either AC or BC speech.
4aPPa43. The independent contribution of glimpse properties to speech
recognition in speech-modulated noise. Bobby E. Gibbs and Daniel Fogerty (Dept. of Commun. Sci. and Disord., Univ. of South Carolina, Columbia, SC 29208, artfull.mind@gmail.com)
During fluctuating noise, temporal speech fragments (i.e., glimpses) at
sufficient signal-to-noise ratios (SNRs) contribute to speech recognition.
Different acoustic properties of these glimpses, related to rate and
Acoustics ’17 Boston
4aPPa44. Effects of wide dynamic range compression on speech signals
with respect to reverberation. Ruksana Giurda (Lab. for Experimental
Audiol., Dept. of Otorhinolaryngology, University Hospital, Zürich, Switzerland, ruksy89@gmail.com), Eleftheria Georganti (Sonova AG, Staefa,
Switzerland), Henrik G. Hassager, and Torsten Dau (Hearing Systems
Group, Elec. Eng., Tech. Univ. of Denmark, Kgs. Lyngby, Denmark)
Sound perception in enclosed spaces is dominated by the room-acoustic properties of the enclosure. Reverberation generated by walls and obstacles is known to challenge hearing-impaired people, even with hearing aids, and several studies have been conducted to address this problem. However, relatively little is known about how various signal-processing blocks (e.g., beamforming, wide dynamic range compression) within hearing aids affect the reverberation content of speech signals. The aim of
this work was to investigate and quantify the effects of wide dynamic range
compression on the reverberant component of speech signals employing
both subjective and objective methods. Several objective metrics that correlate with reverberation were applied to the speech signals before and after compression. Moreover, a listening test with 14 normal-hearing participants was performed to assess whether the changes in the reverberation content of the compressed signals were perceivable. The perceptual results show that the gain model changed the perceived reverberation of the speech signals tested. Finally, the correlation between the objective metrics and the perceptual results of the listening test was investigated, indicating increased reverberation content for the compressed signals.
4aPPa45. Spatial perception and speech intelligibility with hearing aids.
Jens Cubick (Hearing Systems Group, Tech. Univ. of Denmark, Ørsteds
Plads, Bldg. 352, Lyngby 2800, Denmark, jecu@elektro.dtu.dk), Jorg M.
Buchholz (National Acoust. Labs., Chatswood, NSW, Australia), Virginia
Best (Dept. of Speech, Lang. and Hearing Sci., Boston Univ., Boston, MA),
Mathieu Lavandier (Laboratoire Génie Civil et Bâtiment, Univ Lyon,
ENTPE, Vaulx-en-Velin, France), and Torsten Dau (Hearing Systems
Group, Tech. Univ. of Denmark, Kgs. Lyngby, Denmark)
Cubick and Dau (2016) showed that speech reception thresholds (SRTs)
in noise, obtained with normal-hearing (NH) listeners, can be significantly
higher with hearing aids (HAs) than in the corresponding unaided condition.
Some of the listeners reported a change in their spatial perception of the
sounds due to the HA processing, with auditory images often being broader
and closer to the listener or even internalized. The current study investigated
whether worse speech intelligibility with HAs might be caused by a
“shrunken” acoustic scene and thus a reduced ability to spatially separate
the target speech from the interferers. SRTs were measured in normal-hearing listeners with or without “ideal” HAs (with broadband, linear, flat gain)
in the presence of three interfering talkers or speech-shaped noises. The
interferers were presented either at ±90 and 180 degrees azimuth or were
colocated with the target sentence at 0 degrees. Consistent with the previous
study, SRTs were found to be increased by 2-2.5 dB with HAs when the
interferers were spatially separated, but only by 0.5-1 dB when they were
colocated. This 1.5 dB difference indicates that at least some of the
disruption to speech intelligibility caused by HAs can potentially be attributed to degraded spatial separation.
4aPPa46. Linguistic contributions to frequency importance functions in
children with normal hearing. Ryan W. McCreery, Adam K. Bosen, and
Marc A. Brennan (Audiol., Boys Town National Res. Hospital, 555 North
30th St., Omaha, NE 68131, ryan.mccreery@boystown.org)
Frequency-importance functions represent the contribution of an individual frequency band to speech recognition and are used to generate
weighted estimates of audibility in the Speech Intelligibility Index (SII). To
date, nearly all frequency-importance functions have been derived from
adult speech perception data. Estimates of audibility based on these functions are increasingly being applied to children who wear hearing aids. Frequency-importance functions based on perceptual data from children and
adults do not differ on average (McCreery & Stelmachowicz, 2011), but importance functions derived from children are more variable than those
obtained from adults. Numerous factors could influence individual variability in frequency-importance functions for children. The purpose of this study
was to examine the influence of language abilities on individual differences
in importance functions obtained from children. We hypothesized that children with stronger vocabulary knowledge would have importance functions
that were more adult-like than peers with more limited vocabulary abilities.
Frequency-importance functions were derived from a group of 105 children
with normal hearing who were between 5 and 12 years of age and a group of 20
adults with normal hearing. Children with higher vocabulary abilities had
frequency-importance functions that were more adult-like than peers with
lower vocabulary abilities, after controlling for age.
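The band-importance weighting described above can be made concrete with a small sketch: an SII-style audibility estimate is a sum of per-band audibilities weighted by band-importance values. The code below is illustrative only; the function name and the numeric importance weights are invented for the example and are not the standardized ANSI S3.5 band-importance functions.

```python
# Illustrative sketch of an SII-style weighted audibility estimate.
# The band-importance values used below are hypothetical, NOT ANSI S3.5 weights.

def weighted_audibility(audibility, importance):
    """Importance-weighted audibility in [0, 1].

    audibility: per-band audibility values in [0, 1]
    importance: per-band importance weights (normalized internally)
    """
    if len(audibility) != len(importance):
        raise ValueError("band counts must match")
    total = sum(importance)
    return sum(a * w for a, w in zip(audibility, importance)) / total

# Example: five bands, with mid-frequency bands weighted most (hypothetical).
importance = [0.10, 0.20, 0.30, 0.25, 0.15]
audibility = [1.0, 1.0, 0.5, 0.2, 0.0]  # high-frequency bands barely audible
print(weighted_audibility(audibility, importance))
```

A child-derived importance function would simply substitute a different weight vector, which is why variability in those weights translates directly into variability in weighted audibility estimates.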
4aPPa47. Relating verbal and non-verbal auditory spans to language
comprehension performance. Jeffrey J. DiGiovanni, Travis L. Riffle
(Commun. Sci. and Disord., Ohio Univ., Grover W151c, Athens, OH
45701, digiovan@ohio.edu), and Aurora J. Weaver (Commun. Sci. and Disord., Auburn Univ., Auburn, AL)
Working memory capacity has often been assessed by various forms of
the span task (e.g., reading span, digit span, etc.). Simple span tasks involve
storage, whereas complex span tasks involve both storage and processing,
and can include verbal and/or non-verbal stimuli. Arguably, daily activities,
such as engaging in conversation or understanding a news article, require
more storage and processing than recalling a list of numbers. The objective
of this study was to determine the relationship of verbal and non-verbal auditory spans to language comprehension performance. Twenty-two normal-hearing adults participated in the study, consisting of the following four
experiments: forward and reverse digit span, Working Memory Span Task,
Pitch Pattern Span Task, and the LISN (Lecture, Interview, and Spoken Narratives) listening comprehension task. Results revealed no significant relationship between non-verbal spans and language comprehension. Also,
there was no significant relationship between forward and reverse digit spans and language comprehension. There was a significant correlation
between the working memory span task and language comprehension. This
suggests that complex language-based tasks that require more storage and
processing will be better predictors of language comprehension than simple
span tasks or tasks that do not involve language.
4aPPa48. Perception of keywords elicited under various adverse acoustic environments. Nandini Iyer (Air Force Res. Lab., 2610 Seventh St.,
Bldg. 441, Wpafb, OH 45433, Nandini.Iyer.2@us.af.mil), Eric R. Thompson (Air Force Res. Lab., Wright-Patterson AFB, OH), Zachariah N. Ennis
(Oak Ridge Inst. for Sci. and Education, Wright-Patterson AFB, OH), Abigail Willis (Oak Ridge Inst. for Sci. and Education, WPAFB, OH), and
Brian Simpson (Air Force Res. Lab., Wright-Patterson AFB, OH)
In a recent study, keywords were elicited in spontaneous speech samples
from eight pairs of interlocutors who completed the “spot-the-difference”
Diapix task in one of three acoustic environments: quiet, 2-talker babble, or
8-talker babble. In some conditions, talkers were in the same acoustic environment and in others they were in disparate environments. In all cases, the
acoustic environments were presented over headphones in order to obtain
clean recordings of the speech but maintain the influence of the environment
proportion, reflect the temporal distribution of available speech cues.
Glimpse metrics may be used to define different aspects of this temporal distribution. However, these measures are often highly correlated, limiting
interpretation of how different glimpse properties contribute independently
to speech recognition. The present study investigates how the local SNR
cutoff (LC) influences the correlation between glimpse metrics and affects
related associations with speech recognition. Speech recognition was
assessed in the presence of speech-modulated noise that was temporally
manipulated through time compression and presented at different SNRs.
Optimization analyses identified LCs that yielded glimpse metrics that were
most correlated with the perceptual data. Stimulus manipulations of the
noise modulation spectrum and noise level were most related to changes in
the glimpse rate and proportion of glimpsed speech, respectively. At an LC
of -2 dB, the correlation between these glimpse parameters was minimized
while the combined association with speech recognition performance was
maximized. Results suggest two glimpse mechanisms, with the relative importance of either mechanism determined by the acoustic noise properties
and the analyzed LC.
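The two glimpse properties in question, rate and proportion, can be sketched from a frame-wise local SNR track and a local criterion (LC). The following is a hypothetical illustration, not the authors’ implementation; the function name, frame duration, and example values are all invented for the sketch.

```python
import numpy as np

# Hypothetical sketch: a "glimpse" is a maximal run of time frames whose
# local SNR exceeds the local criterion (LC), e.g., -2 dB.

def glimpse_metrics(local_snr_db, lc_db=-2.0, frame_dur=0.01):
    """Return (glimpse_rate_per_s, proportion_of_frames_glimpsed)."""
    above = np.asarray(local_snr_db) > lc_db
    # A glimpse onset is a frame above LC whose predecessor was not.
    onsets = int(above[0]) + int(np.sum(above[1:] & ~above[:-1]))
    total_dur = len(above) * frame_dur
    return onsets / total_dur, float(np.mean(above))

# Example: 1 s of 10-ms frames containing two supra-criterion regions.
snr_track = [-10] * 30 + [5] * 20 + [-10] * 30 + [3] * 20
rate, proportion = glimpse_metrics(snr_track)
print(rate, proportion)
```

Because both metrics derive from the same thresholded track, moving the LC tends to shift them together, which is one way to see why such metrics are often highly correlated and why an LC that decorrelates them (reported above as -2 dB) is informative.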
on production. The current study investigated the perception of the keywords from each of the three acoustic environments at different signal-to-noise ratios, when they were presented in either the same adverse environment in which they were elicited or a disparate one. Results from the current experiment will be discussed with regard to the intelligibility obtained when words were presented in the same vs. different environments; the differences obtained in perception might indicate whether speech modifications made in adverse environments are specific to a particular environment or more global in nature. The correlations between the perception results and acoustic measures of these keywords in the various acoustic environments will also identify key factors that might improve communication effectiveness in adverse environments.
4aPPa49. Articulation and timing in mouse ultrasonic vocalizations.
Gregg A. Castellucci (Linguist, Yale Univ., 333 Cedar St., Rm. I-404, New
Haven, CT 06511, gregg.castellucci@yale.edu), Daniel Calbick, and David
A. McCormick (Neurosci., Yale Univ., New Haven, CT)
Male mice produce ultrasonic courtship vocalizations (UCVs) which
have been used as a model in the study of various human social and vocal
behaviors. However, little is known about the features of UCVs that are important for their perception and that would therefore be expected to show meaningful variability following experimental manipulations. Here, we show that
the temporal structure of UCV bouts is remarkably consistent across
C57Bl6/J mice. Specifically, we find that mice produce two UCV subtypes
which arise from unique respiratory articulations, and three discrete classes
of silent boundaries which also result from distinct respiratory activity patterns. In addition, we demonstrate that these UCV and boundary types occur
in a highly consistent pattern across mice, and that UCV temporal structure
displays evidence of rudimentary motor planning. Finally, while adult mice
produce their UCVs via an egressive pulmonic phonation mechanism, neonatal mice were found to utilize an ingressive pulmonic airstream in the production of their ultrasonic isolation calls. In conclusion, future
investigations of mouse UCVs should include analyses of rhythmic structure. Also, the study of adult UCV production may prove to be a useful
model for the study of basic aspects of general orofacial coordination and
speech motor coordination.
1:20
4aPPa50. Sensitivity to interaural timing differences using the advanced
combination encoder strategy. Alan Kan, Ruth Litovsky (Univ. of Wisconsin-Madison, 1500 Highland Ave., Madison, WI 53705, ahkan@waisman.wisc.edu), and Zachary M. Smith (Cochlear Americas, Centennial,
CO)
When listening through clinical processors, sound localization in bilateral cochlear implant (BiCI) users is highly variable, and some patients perform rather poorly. While this is often ascribed to lack of access to temporal
fine-structure interaural timing differences (ITDs), the potential role of envelope ITDs is often disregarded. Furthermore, sensitivity to ITDs alone has
not been closely examined with the commonly used Advanced Combination Encoder (ACE) strategy. In theory, envelope ITDs could be encoded
with ACE and some BiCI users may be sensitive to this cue to locate
sounds, which may explain why some BiCI users have better localization
performance. ITD just noticeable differences (JNDs) were measured in 9
participants, using a novel setup of presenting whole-waveform-delayed
words through the auxiliary port of clinical processors. Listening with ACE,
ITD JNDs ranged from 105–680 μs (unmeasurable in one listener). Localization root-mean-squared error ranged from 19–68 degrees. However, ITD
sensitivity and sound localization were not significantly correlated (ρ = 0.5,
p = 0.17). The unexplained range of sound localization performance in BiCI
users is likely due to distorted and inconsistent binaural cues that arise from
uncoordinated device programming and signal processing across the two
ears. [Work supported by NIH-NIDCD (R03DC015321 to AK and
R01DC003083 to RYL) and NIH-NICHD (P30HD03352 to Waisman
Center).]
WEDNESDAY MORNING, 28 JUNE 2017
ROOM 311, 8:15 A.M. TO 12:20 P.M.
Session 4aPPb
Psychological and Physiological Acoustics: History of Psychoacoustics in the Period 1900-1950
Jont B. Allen, Cochair
ECE, Univ. of IL, Beckman Inst., 405 N. Mathews, Urbana, IL 61801
Armin Kohlrausch, Cochair
Human Technology Interaction, Eindhoven University of Technology, TU Eindhoven, PO Box 513, Eindhoven 5600MB,
Netherlands
Chair’s Introduction—8:15
Invited Papers
8:20
4aPPb1. Sound source localization: 1850s to 1950s. William Yost (Speech and Hearing, Arizona State Univ., PO Box 870102, Tempe,
AZ 85287, william.yost@asu.edu)
While scientists and philosophers have been interested in sound source localization since the time of the ancient Greeks, the modern
study of this topic probably began in the late 19th century. Because sound has no spatial dimensions, there were many arguments at this
time as to how humans localize a source based on the sound it produces. Lord Rayleigh conducted a “garden experiment” and concluded
that a binaural ratio of sound level at each ear could account for his ability to identify the location of people who spoke in the garden.
This type of experiment began the modern investigation of the acoustic cues used for sound source localization. In the first half of the
20th century, psychoacousticians such as Licklider, Jeffress, Mills, Newman, Rosenzweig, Stevens, von Hornbostel, Wallach, Wertheimer, and many others (documented by Boring in Sensation and Perception, 1942 and by Blauert in Spatial Hearing, 1997) added
seminal papers leading to our current understanding of sound source localization. This presentation will briefly review some of this history. [Partially supported by a grant from the National Institute on Deafness and Other Communication Disorders (NIDCD).]
8:40
4aPPb2. Psychoacoustics in the pre-electronic, electronic, and early digital eras. Harry Levitt (Adv. Hearing Concepts, Inc, PO Box
610, 998 Sea Eagle Loop, Bodega Bay, CA 94923, harrylevitt@earthlink.net)
The history of psychoacoustics can be subdivided into three eras: Pre-electronic, Electronic, and Digital. We are currently in the middle of the Digital era. In the Pre-electronic era, psychoacousticians displayed remarkable ingenuity in furthering our understanding of auditory perception using
the limited technology of the day. The brilliant use of spinning discs, innovative pulse generators, and tuning forks provided fascinating new
insights into the nature of hearing. The Electronic era began with the invention of the electronic amplifier. The high gain provided by
electronic amplifiers opened the door to new areas of investigation. Fundamental new findings regarding non-linear distortion and protective auditory mechanisms resulted, as did the emergence of the field of audiology. Psychoacoustic research during the Electronic
era focused more on the spectral properties of hearing, presumably because of the greater difficulty in manipulating pulsatile stimuli. The
invention of the digital computer provided the means for generating, analyzing, and manipulating pulsatile stimuli with the same degree of
ease as spectral stimuli. The early Digital era thus resulted in major new findings relating to both the temporal and spectral properties of
hearing. More importantly, it changed our thinking with respect to the measurement of hearing.
9:00
4aPPb3. History of speech perception psycho-physics. Jont B. Allen (ECE, Univ. of IL, Beckman Inst., 405 N. Mathews, Urbana, IL
61801, jontallen@ieee.org)
The history of speech psycho-physics is relatively recent, due to the complexity of quantifying information measures. All of this
research required the invention of the Telephone (Bell, 1874). The first research was by Lord Rayleigh (1908), who used an Acousticon
LT commercial PA system (c1905), followed by George Campbell (1910), who introduced the confusion matrix. In 1921, Harvey
Fletcher drew on thousands of hours and dozens of different types of experimental data, resulting in the Articulation Index. Fletcher retired
in 1950, and the speech work at Bell Labs, led by James Flanagan, switched to vocoder research. Work at Haskins began about the same
time, with Liberman, Cooper, and colleagues; their main research question concerned perceptual speech cues. There were more theories than
facts. George Miller (1955) used Shannon’s Theory of Information (1947) as the inspiration for experiments on the speech information transferred. Between 1994 and 2005, the author repeated the work of Fletcher and Miller, using “modern computer methods,” and came to some
new conclusions. Online demos that reveal the consonant speech cues will be presented; they may be viewed at http://auditorymodels.org/ (Click Demos).
9:20
4aPPb4. A history of loudness before 1950. Stephen T. Neely (Hearing Res., Boys Town National Res. Hospital, 555 North 30th St.,
Omaha, NE 68131, Stephen.Neely@boystown.org)
Loudness is a perceptual correlate of the physical intensity of a sound, but the relationship between loudness and intensity is not simple. The derivation from loudness-matching data of equal-loudness contours for tones across frequency was described as early as 1927.
However, the most influential work on loudness prior to 1950 was the formulation by Fletcher and Munson (1933) of a method for calculating the loudness of any steady sound from the intensities of its frequency components. In a subsequent publication, Fletcher and Munson (1937) described how the loudness of a tone changes in the presence of another tone, which is a phenomenon known as masking.
Stevens (1936) made extensive contributions to the theory of loudness scaling and methods for loudness measurement. Miller (1947)
discovered that “just detectable increments in intensity” were of the same order of magnitude for noise as for pure tones. Dix et al.
(1948) suggested a clinical use, in the differential diagnosis of hearing loss, for the phenomenon known as loudness recruitment, which is
defined as abnormally rapid loudness growth. In part because of its relevance to the remediation of hearing loss, the measurement,
quantification, and prediction of loudness continue to be active areas of psychophysical research.
9:40
4aPPb5. The dawn of psychoacoustics in Japan. Tatsuya Hirahara (Faculty of Eng., Toyama Prefectural Univ., 5180 Kurokawa, Imizu
939-0398, Japan, hirahara@pu-toyama.ac.jp) and Yôiti Suzuki (Res. Inst. of Elec. Commun., Tohoku Univ., Sendai, Japan)
In the early 1900s, two scholars opened the door to psychoacoustics in Japan. Han’ichi Muraoka was sent to the University of Strasbourg in 1878, where he studied physics under August Kundt and received a doctoral degree in 1881. He published his study of the discrimination threshold of Japanese-harp timbre in 1919. Matataro Matsumoto went to Yale University in 1896, studied psychology under Edward Scripture, and received a Ph.D. degree in 1899 with his thesis “Researches on Acoustic Space.” In the 1930s, several psychologists actively conducted psychoacoustical research, and two psychoacoustics textbooks were published. In 1936, the Acoustical Society of Japan was founded. Three years later, Shuji Yagi edited a book titled “Acoustical Sciences,” of which psychoacoustics occupied 23 pages out of 428. Psychoacoustical research, however, gradually became inactive until after WWII. The Journal of the Acoustical Society of Japan resumed in 1950, and several psychoacoustics textbooks were published in the 1950s. From around 1960, not only psychologists, such as those at Osaka University, but also electrical engineers at Tohoku University, NHK-STRL, and NTT-ECL began psychoacoustical studies, particularly related to timbre, spatial hearing, and speech perception. This movement signaled the second dawn of current Japanese psychoacoustics: the fusion of psychology and technology.
10:00–10:20 Break
10:20
4aPPb6. Psychoacoustics in the United Kingdom in the period 1900 to 1950. Brian C. Moore (Experimental Psych., Univ. of Cambridge, Downing St., Cambridge CB3 9LG, United Kingdom, bcjm@cam.ac.uk)
Much research in psychoacoustics during this period was conducted in the “Psychological Laboratory” in Cambridge (H. Bannister,
A. F. Rawdon-Smith) and in Manchester (T. S. Littler, A. W. Ewing). Although Lord Rayleigh had published his influential duplex
theory of sound localization in 1907, lingering doubts remained about whether interaural phase could truly be discriminated—rather, it
had been proposed that somehow interaural phase differences were converted into interaural intensity differences. Bannister conducted a
series of experiments to assess this idea and concluded that “binaural phase differences are appreciated in some manner which is distinct
from the appreciation of binaural intensity differences.” Bannister also conducted experiments on time-intensity trading in sound localization and found that sometimes two images were perceived. Rawdon-Smith and, independently, Littler and Ewing studied the phenomenon of auditory fatigue (temporary threshold shift, TTS). They showed that the maximum TTS produced by an intense fatiguing tone
occurred for test tones with frequencies well above that of the fatiguing tone, a phenomenon that was repeatedly confirmed in later studies and that has only recently been explained. A common feature of the papers describing this work was a detailed description of the (often ingenious) apparatus that was developed to conduct the experiments.
10:40
4aPPb7. On the developments of psychoacoustics in the Netherlands in the first half of the 20th century. Hendrikus Duifhuis (ENT
Dept., Univ. of Groningen, Zonnebloemweg 21, Paterswolde 9765 HW, Netherlands, h.duifhuis@rug.nl)
Formally, psychoacoustics started after ~1950, but many of its basic elements emerged during the preceding period. Traditionally, developments in the Netherlands were most directly linked to European mainland schools, but in the 1940s British and American contacts took over most of these links. The great wars of this period stimulated an interest in information transmission (telephone and radio) and later in (de)coding. The development of smaller electronic devices had a spin-off for the hearing impaired: portable hearing aids (Philips). Two researchers active in these fields were E. ter Kuile and J. F. Schouten. Ter Kuile received his Ph.D. degree in Amsterdam (1904), Schouten his in Utrecht (1937). Ter Kuile was the first investigator with a theory about the function of the tectorial membrane and the cochlear hair cells, but he was also interested in the roles of waveform and spectrum in timbre and speech properties. Schouten, after his Ph.D. in vision, switched to hearing at the Philips “Nat.Lab” in Eindhoven and worked on the case of “the missing fundamental,” for which he introduced the term “residue”; again a problem in which spectral and temporal coding compete. Whereas ter Kuile built strongly on Helmholtz, after 1945 Schouten came into contact with colleagues at MIT and Bell Labs.
11:00
4aPPb8. Psychoacoustics in Germany in the period 1900 to 1950. Armin Kohlrausch (Human Technol. Interaction, Eindhoven Univ.
of Technol., TU Eindhoven, PO Box 513, Eindhoven 5600MB, Netherlands, a.kohlrausch@tue.nl)
Psychoacoustic research in Germany in this period was performed in the context of various academic disciplines (experimental psychology, musicology, physics, communication engineering) and a range of institutions, covering industrial laboratories (AES, Siemens & Halske), (technical) universities (Berlin, Breslau, Dresden), and (semi-)governmental institutions like the Heinrich-Hertz-Institut für Schwingungsforschung (Oscillation Research) and the imperial mail research institutions. Prominent researchers were Heinrich Barkhausen (appointed professor at the TU Dresden in 1911 for the field of communications engineering); Erwin Waetzmann (from 1912 professor at the University of Breslau); Ferdinand Trendelenburg (from 1922 with Siemens & Halske, Berlin, and in addition from 1935 adjunct professor at the University of Berlin); and Erwin Meyer, from 1928 at the Heinrich-Hertz Institute in Berlin. The research questions were often derived from the technical needs of the emerging communication and radio industry, or addressed consequences of innovations like the car. In 1926, Barkhausen introduced a loudness meter (based on comparison with a standardized reference), which was taken into production by Siemens a year later. Waetzmann’s habilitation thesis (1907, published as a book in 1912) extended Helmholtz’s studies on the damping characteristics of “ear resonators.” Erwin Meyer started binaural research in 1924 because he recognized its relevance for stereophonic sound reproduction.
11:20
4aPPb9. The nature and import of the legacy of Binaural Hearing research from 1900 to 1950. Constantine Trahiotis and Leslie R.
Bernstein (Neurosci., UConn Health Ctr., 263 Farmington Ave., Farmington, CT 06030, tino@uchc.edu)
The goal is to present a historical picture of the research that serves as the foundation of Binaural Hearing as we know it today. The focus will be on the scientists and students who conducted that research and the institutions at which they resided. The knowledge composing the foundation of the field of binaural hearing is commonly identified with several individual pioneering scientists whose publications formed its basis. What is not so well appreciated is that the modern literature stems from succeeding generations of doctoral and post-doctoral students related to the pioneers, and to each other, via a rich set of institutional and personal interrelationships. In today’s field of Binaural Hearing it has become commonplace to be trained by more than one mentor, in laboratories at more than one institution, in more than one discipline spanning anatomy, physiology, behavior, or acoustical engineering, and, often, to perform mathematical modeling using information gleaned from any or all of them. Today, regardless of one’s particular background, it is possible to be employed (often doing similar research) in any of a variety of settings within universities and industry. The primary motivation for this talk stems from interest expressed by many of our younger colleagues. Over many years, they have shown a delight in learning about “who trained and worked with whom, and where.”
11:40
4aPPb10. The German acoustical journal Akustische Zeitschrift (1936–1944). Armin Kohlrausch (Human Technol. Interaction, Eindhoven Univ. of Technol., TU Eindhoven, PO Box 513, Eindhoven 5600MB, Netherlands, a.kohlrausch@tue.nl)
A few years after the foundation of JASA in 1929, the “Akustische Zeitschrift” (Hirzel Verlag, Leipzig) started to appear in 1936. This journal was edited by Martin Grützmacher and Erwin Meyer, on behalf of the German Science Foundation (DFG) and with support from the German Imperial Mail and the Imperial Physical Technical Institute (whose first director, appointed in 1887, was von Helmholtz). The motivation for this journal was the explosive growth of acoustics, mainly enabled by the development of electroacoustics, the improved scientific foundations of room and building acoustics, and the renewal of psychological and physiological acoustics. Right from the beginning, articles from the area of P&P had a prominent place. The first article was “Theoretical and experimental comparison of measurements of the absolute threshold of hearing” (Waetzmann and Keibs), followed by the first of many articles by Georg von Békésy: on the physics of the middle ear and on hearing with a missing tympanic membrane. One important feature was the regular German summaries of articles from English-language journals, together with a review of patents and acoustic news. In the first issue, we can read that Erwin Meyer had been invited by the ASA to talk at their annual meeting in New York, October 29 to 31.
12:00
4aPPb11. The trouble with reading machines: Exploring acoustic alphabets at Haskins Laboratories post World War II. Gabrielle O’Brien (Otolaryngology, Univ. of Washington, 1417 N.E. 42nd St., Box 354875, Seattle, WA 98105-6246, andronovhopf@gmail.com)
At the end of World War II, the American Office of Scientific Research and Development turned to Franklin Cooper and Caryl Haskins, the co-founders of Haskins Laboratories, to build working reading machines that would assist blind veterans. Reading machines were devices that converted text to sound, using acoustic alphabets to assign each orthographic character its own sound. It was expected that, given sufficient user training and an optimized acoustic alphabet, anyone could learn to use a reading machine. But the Haskins group quickly discovered that even the most proficient users could only “read” a few words a minute, and speeding up the device caused letters to blur as the temporal resolving limits of the ear were reached. The failure of acoustic alphabets, even employing a range of signal parameters, led the researchers to begin analyzing speech signals with the spectrograph, then a recent invention. This series of experiments demonstrated the infeasibility of rapid word recognition by acoustic alphabets, but serendipitously set the stage to reveal mechanisms by which speech is efficiently perceived.
3825
J. Acoust. Soc. Am., Vol. 141, No. 5, Pt. 2, May 2017
Acoustics ’17 Boston
3825
4a WED. AM
Contributed Papers
WEDNESDAY MORNING, 28 JUNE 2017
ROOM 304, 9:15 A.M. TO 12:20 P.M.
Session 4aPPc
Psychological and Physiological Acoustics: Physiology Meets Perception I
Antje Ihlefeld, Cochair
Biomedical Engineering, New Jersey Institute of Technology, 323 Martin Luther King Blvd., Fenster Hall,
Room 645, Newark, NJ 07102
Sarah Verhulst, Cochair
Dept. Information Technology, Ghent Univ., Zwijnaarde 9052, Belgium
Chair’s Introduction—9:15
Invited Papers
9:20
4aPPc1. Cochlear tuning and phase-locking assessed with complementary techniques within the same species, and extrapolation
to humans. Philip X. Joris and Eric Verschooten (Lab of Auditory NeuroPhysiol., Univ. of Leuven, Herestraat 49 box 1021, Leuven B3000, Belgium, philip.joris@med.kuleuven.be)
The cochlea decomposes sound into bands of its constituent frequencies and encodes the temporal waveform of these bands. These
two properties generate frequency-tuning and phase-locking in individual neurons of the auditory nerve. The relative roles of frequency
tuning and phase-locking in important aspects of perception, e.g., the coding of speech and pitch, are heavily debated. Characterization of the limits of these processes in humans may help to clarify these relative roles. We studied threshold and suprathreshold frequency tuning, as well as phase-locking to the fine structure of sound, by recording from single fibers in the auditory nerve of different
species, including chinchilla, cat, and macaque monkey. We also developed stimulus and analysis paradigms to study frequency tuning
and phase-locking via mass potentials recorded near the cochlear round window, and applied these techniques to these same species as
well as to normal hearing human subjects. The comparison of bandwidths of single fibers with mass potentials within the same species
allows us to infer bandwidths of single fibers from human mass potential data. Combined with behavioral and otoacoustic emission data,
the evidence suggests that the human auditory nerve is unusual in its sharpness of frequency tuning but not in its range of phase-locking.
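Phase-locking of the kind measured here is conventionally quantified by vector strength; as a minimal illustration (a generic sketch, not the authors' analysis code):

```python
import numpy as np

def vector_strength(spike_times, freq):
    """Vector strength (Goldberg & Brown): 1 = perfect phase locking to the
    stimulus frequency, ~0 = no locking."""
    phases = 2.0 * np.pi * freq * np.asarray(spike_times)
    return np.hypot(np.cos(phases).sum(), np.sin(phases).sum()) / len(spike_times)

# Spikes locked to every cycle of a 500-Hz tone give vector strength 1:
locked = np.arange(100) / 500.0
print(round(vector_strength(locked, 500.0), 6))  # 1.0

# Uniformly spread spike times (no locking) give a value near 0:
rng = np.random.default_rng(0)
print(vector_strength(rng.uniform(0.0, 1.0, 10000), 500.0) < 0.05)  # True
```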
9:40
4aPPc2. Exploring auditory frequency selectivity using otoacoustic emissions. Karolina Charaziak (Caruso Dept. of Otolaryngol.,
Univ. of Southern California, 1540 Alcazar St., Los Angeles, CA 90033, KarolinaCharaziak2013@u.northwestern.edu) and Jonathan H.
Siegel (Roxelyn and Richard Pepper Dept. of Commun. Sci. and Disord., Northwestern Univ., Evanston, IL)
Frequency selectivity—the ability of the auditory system to separate one stimulus out from others on the basis of frequency—originates in the cochlea where outer hair cells (OHCs) provide sharp mechanical tuning at low signal levels. Whether humans have sharper
cochlear tuning compared to common laboratory species has been a matter of scientific debate. Whereas cochlear tuning can be measured directly in animal models, similar tests cannot be performed in humans due to their invasiveness. However, cochlear tuning can be
gauged from measurements of other OHC-dependent phenomena, such as otoacoustic emissions (OAEs). Here, we compare frequency
selectivity derived from stimulus-frequency (SF)OAE suppression tuning curves measured in both humans and chinchillas. SFOAE suppression tuning curves have previously been shown to be as sharply tuned as compound-action-potential tuning curves in chinchillas and
behavioral tuning curves in humans (when using simultaneous masking). These earlier findings support the ideas that (1) OAE tuning curves reflect aspects of cochlear tuning and (2) sharp frequency selectivity is primarily established at the level of the cochlea. After correcting for interspecies differences in the apical-basal transition frequency and in cochlear length, we find that SFOAE tuning curves
are twice as sharp in humans as in chinchillas.
10:00
4aPPc3. Characterizing hidden and overt sensorineural hearing loss: Assays of peripheral pathophysiology and histopathology.
Sharon G. Kujawa (Dept. of Otology and Laryngology, Harvard Med. School and Massachusetts Eye and Ear Infirmary, Massachusetts
Eye and Ear Infirmary, 243 Charles St., Boston, MA 02114, sharon_kujawa@meei.harvard.edu)
The study of acquired sensorineural hearing loss (SNHL) has relied heavily on assessments of hair cell injury and loss and the threshold sensitivity losses that accompany them. This strategy is based on the view that, for most acquired SNHL etiologies, sensory hair cells
are the most vulnerable cochlear elements and auditory neurons (ANs) degenerate if, and only long after, loss of their peripheral inner
hair cell (IHC) targets. Recent work in our laboratories has suggested that, in aging and after noise, the synaptic connections between
IHCs and ANs are actually the most vulnerable elements, and that massive cochlear synaptopathy can be seen before hair cell loss or
threshold elevation. In characterizing these outcomes, our investigational algorithms have relied on assessments of hair-cell and early-stage neural generators of evoked potentials, coupled with otoacoustic emissions, tests of middle-ear and medial olivocochlear reflexes,
and electrophysiological assays of neural temporal fine structure processing. Responses recorded non-invasively facilitate translation of
findings in the animal models to clinical diagnostics. We couple these with assessments of histopathology of hair cells, neurons and the
synapses that connect them, which, at present, provide the only truly definitive assay for the synaptopathy. [Work supported by NIH/
NIDCD, DoD, and ONR.]
10:20–10:40 Break
10:40
4aPPc4. Discrimination of Schroeder-phase complex stimuli: Physiological, behavioral, and modeling studies in the Mongolian
gerbil. Laurel H. Carney (Univ. of Rochester, 601 Elmwood Ave., Box 603, Rochester, NY 14642, Laurel.Carney@Rochester.edu),
Henning Oetjen, and Georg Klump (Cluster of Excellence Hearing4all and Dept. for Neurosci., School of Medicine and Health Sci.,
Univ. of Oldenburg, Oldenburg, Germany)
Schroeder-phase tone complexes have been studied psychophysically, behaviorally, and physiologically in several species. These
stimuli are composed of equal-amplitude harmonics, defined by fundamental frequency (F0), the sign and slope (C) of the phase spectrum, and the number of harmonics. Perceptual differences between positive (C+) and negative (C−) phase spectra (time-reversed versions of
each other) challenge the power-spectrum model and must be explained by temporal cues, either fine structure or the low-frequency fluctuations of peripherally filtered stimuli. Here, physiological, behavioral, and modeling studies of discrimination of Schroeder complexes
in Mongolian gerbil will be presented. Behavioral discrimination of complexes with different C values or signs decreased as F0
increased from 50 to 400 Hz. Here we investigate whether peripheral neural fine structure or changes in low-frequency fluctuations of
rate underlie discrimination. Modeling results show that both synaptic adaptation and frequency-dependent asymmetry of tuning create
differences in the F0-related vector strength of peripheral responses. Because inferior colliculus (IC) neurons are sensitive to low-frequency fluctuations, we recorded responses in the midbrain to the behavioral stimuli. Discharge rates of many IC cells varied significantly with the value and sign of C. The direction of rate differences depended upon best frequency and type of modulation transfer
function.
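Schroeder-phase complexes are easy to synthesize; a sketch under one common phase convention, theta_n = C·pi·n(n+1)/N (conventions vary across studies, and all parameter values below are illustrative):

```python
import numpy as np

def schroeder_complex(f0, n_harmonics, c, dur, fs):
    """Equal-amplitude harmonic complex with Schroeder phases
    theta_n = c * pi * n * (n + 1) / N (one common convention)."""
    t = np.arange(int(dur * fs)) / fs
    n = np.arange(1, n_harmonics + 1)[:, None]       # harmonic numbers
    theta = c * np.pi * n * (n + 1) / n_harmonics    # phase of each harmonic
    return np.sin(2 * np.pi * n * f0 * t + theta).sum(axis=0)

pos = schroeder_complex(100.0, 20, +1.0, 0.05, 48000)   # C+ complex
neg = schroeder_complex(100.0, 20, -1.0, 0.05, 48000)   # C- complex
# C+ and C- differ only in phase, so their magnitude spectra are identical:
print(np.allclose(np.abs(np.fft.rfft(pos)), np.abs(np.fft.rfft(neg)), atol=1e-6))  # True
```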
11:00
4aPPc5. Influence of behavioral state on auditory processing in the marmoset midbrain. Stephen V. David, Luke A. Shaheen, and
Sean J. Slee (Otolaryngol., Oregon Health and Sci. Univ., 3181 SW Sam Jackson Pk Rd., L335A, Portland, OR 97239, davids@ohsu.
edu)
Recent work shows that neurons in the inferior colliculus (IC) undergo receptive field changes during auditory behavior in ferrets
(Slee & David, 2015. J. Neurosci.). In this study, we tested for similar effects in the marmoset monkey, a highly vocal primate species.
We trained two marmosets to detect a pure tone target embedded in a background of random spectral shape (RSS) distractor stimuli. We
recorded single-unit activity in the central nucleus of the IC (ICC). Neural responses to targets and distractors were compared between
conditions when the marmoset performed the detection task or listened passively. When target frequency was near the neuronal best frequency (BF), responses to distractors were suppressed in about half of the neurons during behavior. Target responses were modulated in
some neurons but suppression or enhancement was equally likely. We also measured effects of task engagement in non-central divisions
of the IC (NCIC). There, target responses were strongly enhanced during behavior in about half of neurons in this region (median
change = 70%). We also found a subset of NCIC neurons that responded with increased firing following the target sound, possibly encoding a reward-related signal. This study replicates our previous finding that distractor responses are suppressed during auditory behavior
in the ferret IC. In addition, we found an area in the NCIC with large enhancement of target responses during behavior, suggesting that
task engagement produces distinct effects across subdivisions of the IC.
11:20
4aPPc6. Sustained periodic phase locking during a perceptual streaming task using resolved and unresolved harmonic tones.
Dorea Ruggles, Alexis N. Tausend, and Andrew J. Oxenham (Psych., Univ. of Minnesota, 75 East River Rd., Minneapolis, MN 55455,
druggles@umn.edu)
Perceptual object formation and streaming are critical aspects of auditory processing. Previous studies have found cortical markers
of streaming and attention, but it is unclear if these depend on tonotopic separation or whether other features, such as fundamental frequency (F0), are represented by these cortical markers. In this study, we used complex tones comprised of resolved or unresolved harmonics and simultaneously measured cortical and subcortical steady-state EEG responses during a behavioral streaming task. Subjects
attended to either a fast stream of high-F0 harmonic complexes or a slower stream of lower-F0 complexes, filtered into the same spectral
region. Subjects maintained directed attention during 1-minute blocks while reporting level oddballs in the attended stream and ignoring
oddballs in the unattended stream. During the task, sustained envelope following responses phase locked to the complex F0 (nominally
subcortical) and to the presentation rates (nominally cortical) were recorded. Phase-locking analyses suggest that directed attention does
not alter subcortical sustained responses or cortical sustained responses to streams comprised of unresolved harmonics. Attentional
effects in cortical responses to streams of resolved harmonics suggest that the cortical markers of attention and segregation reflect
enhancement based on tonotopic segregation. [Work supported by NIH grants R01DC007657 and R01DC005216.]
11:40
4aPPc7. Measuring speech-in-noise abilities as part of the standard audiometric protocol. Matthew Fitzgerald (Dept. of Otolaryngol. - Head and Neck Surgery, Stanford Univ., 550 1st Ave., NBV-5E5, New York, NY 10016, fitz.mb@gmail.com), Austin Swanson,
and Steven Losorelli (Dept. of Otolaryngol. - Head and Neck Surgery, Stanford Univ., Palo Alto, CA)
Many aspects of the audiologic test battery have remained unchanged for decades and do not assess the real-world communicative abilities of the patient. For example, word-recognition in quiet is the default test of speech recognition, even though difficulty understanding speech in noise is the primary complaint of most patients. Our goal is to adjust the audiologic test protocol to better assess the communicative abilities of the patient. Toward that goal, we are devising a new clinical protocol in which speech-in-noise testing, rather
than word-recognition in quiet, is the default speech test in the audiologic test battery. We have data on over 1500 adults, which indicate
normal audiometric thresholds with abnormal speech in noise results for many patients. Perhaps more important, our data also suggest
that most instances in which word-recognition in quiet is excellent can be predicted with a combination of audiometric thresholds and
speech-in-noise abilities. These data have been used to create clinical recommendations in which speech in noise testing becomes the
default clinical test, with guidelines for when word-recognition in quiet is likely to have diagnostic significance and should be conducted. Making subtle, but fundamental shifts of this sort in clinical testing may have significant research and clinical implications.
Contributed Paper
12:00
4aPPc8. On the relationships between otoacoustic emissions, auditory
evoked potentials, and psychoacoustical performance. Dennis McFadden, Edward G. Pasanen, Mindy M. Maloney (Psych., Univ. of Texas, Austin, 108 E. Dean Keeton A8000, Austin, TX 78712-1043, mcfadden@psy.
utexas.edu), Erin M. Leshikar, Michelle H. Pho, and Craig A. Champlin
(Commun. Sci. and Disord., Univ. of Texas, Austin, TX)
Performance was measured on several common psychoacoustical tasks
for about 70 subjects. The measures included simultaneous and temporal
masking, masking by tones and by complex sounds, critical bandwidth,
release from masking, and detection in the quiet. Also measured were spontaneous, click-evoked, and distortion-product otoacoustic emissions (OAEs)
and auditory evoked potentials (AEPs, short and middle latency). Of interest
were the correlations between psychoacoustical performance and the
various physiological measures as well as any mean differences by sex and
by menstrual cycle. Subjects were tested behaviorally in same-sex crews of
4–8 members, and behavioral testing required from 8 to 10 weeks for each
crew. Correlation and effect size were the primary measures of interest.
Resampling was used to determine implied significance for the various comparisons studied. Some correlations between physiological measures were
moderately high, but the correlations between psychoacoustical tasks and
the different physiological measures were generally low, although there
were some unexpected differences by racial background. That is, the individual differences observed in psychoacoustical performance generally
were not related to the individual differences in the various physiological
measures. For these subjects, at least, psychoacoustical performance seemed
unrelated to the mechanisms underlying OAEs and AEPs. [Work supported
by NIDCD (DC000153).]
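The resampling approach mentioned above can be illustrated with a permutation test for a single correlation; all data below are synthetic and the procedure is a generic sketch, not the authors' implementation:

```python
import numpy as np

def permutation_p(x, y, n_perm=2000, seed=0):
    """Two-sided permutation test for a Pearson correlation: shuffle y,
    recompute r, and count how often |r_perm| >= |r_observed|."""
    rng = np.random.default_rng(seed)
    r_obs = np.corrcoef(x, y)[0, 1]
    exceed = sum(
        abs(np.corrcoef(x, rng.permutation(y))[0, 1]) >= abs(r_obs)
        for _ in range(n_perm)
    )
    return r_obs, (exceed + 1) / (n_perm + 1)

rng = np.random.default_rng(1)
x = rng.normal(size=70)               # e.g., a physiological measure
y = 0.6 * x + rng.normal(size=70)     # a correlated psychoacoustical score
r, p = permutation_p(x, y)
print(p < 0.01)  # True
```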
WEDNESDAY MORNING, 28 JUNE 2017
ROOM 201, 8:00 A.M. TO 12:20 P.M.
Session 4aSAa
Structural Acoustics and Vibration, Biomedical Acoustics, Signal Processing in Acoustics, and Physical
Acoustics: Novel Techniques for Nondestructive Evaluation I
Brian E. Anderson, Cochair
N145 Esc, Brigham Young Univ., MS D446, Provo, UT 84602
Marcel Remillieux, Cochair
Los Alamos National Laboratory, Geophysics Group (EES-17), Mail Stop: D446, Los Alamos, NM 87545
Sylvain Haupert, Cochair
Laboratoire d’Imagerie Biomédicale, UPMC Sorbonne Universités, CNRS, INSERM, 15 rue de l’école de médecine,
Paris 75006, France
Invited Papers
8:00
4aSAa1. Near surface ultrasonic imaging. Anthony Croxford and Jack Potter (Mech. Eng., Univ. of Bristol, Queens Bldg., University
Walk, Bristol BS8 1TR, United Kingdom, a.j.croxford@bristol.ac.uk)
Ultrasonic phased arrays offer excellent performance for detecting and classifying defects; however, when inspecting near the array, there is typically a dead zone where electrical cross-talk saturates the response, making it impossible to measure. In many situations this can be mitigated through the use of a physical standoff; however, for permanently installed systems, or for in-situ inspection of components in access-restricted areas such as gas turbines, such a solution is impossible. In addition, such a standoff typically reduces the amplitude of the received signals, degrading the signal-to-noise ratio (SNR). This paper reports on an approach that allows ultrasonic measurements to be made of the near-surface region. Specifically, by measuring the diffuse response of the system it is possible to reconstruct the Green’s function between any pair of transducers. As this is created from data that is not saturated, there is no dead zone in the resulting image. When combined with advanced sampling techniques using Hadamard coding, the region immediately in front of the array can be imaged with performance similar to that seen in the bulk material. In this paper, the reconstruction and sampling techniques are explained and demonstrated, with images shown for defects within 0.5 mm of the sample surface in both aluminium and composite components. The advantages of the sampling approach are shown for immersion-coupled measurements to demonstrate the SNR gains achievable.
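The diffuse-field reconstruction relies on a standard result: cross-correlating the diffuse responses at two transducers retrieves the travel time of the Green's function between them. A minimal single-path sketch (the delay value and record length are arbitrary assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
n, delay = 4000, 25                 # samples; assumed travel time in samples
field = rng.normal(size=n)          # unsaturated diffuse wavefield at transducer A
rec_a = field
rec_b = np.roll(field, delay)       # transducer B records the same field, delayed

# Cross-correlating the two diffuse records peaks at the inter-transducer travel
# time, i.e., the ballistic arrival of the reconstructed Green's function:
xcorr = np.correlate(rec_b, rec_a, mode="full")
lag = int(xcorr.argmax()) - (n - 1)
print(lag)  # 25
```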
8:20
4aSAa2. Imaging challenging media by full waveform inversion of ultrasonic signals. Ludovic Moreau, Romain Brossier, and
Ludovic Métivier (ISTerre, Université Grenoble Alpes, Maison des GéoSci., 1381 rue de la Piscine - CS 40700, Grenoble Cedex 9 38058, France, ludovic.moreau@univ-grenoble-alpes.fr)
Austenitic welds are important parts of the cooling system in nuclear power plants, which undergo extreme temperature and pressure
variations that may cause defects to appear in the welded zone. If left unattended, these defects may lead to leakage of radioactive liquids. It is
therefore crucial to assess the structural integrity of austenitic welds by detecting and imaging such defects. Ultrasonic methods are one
of the reference methods in this matter. Current imaging techniques rely on phased array technology to focus the ultrasonic energy based
on beamforming approaches such as the total focusing method. Because the elastic properties and exact geometry of austenitic welds are
unknown, these methods fail to produce a reliable image of the inspected area. We introduce full waveform inversion (FWI) of ultrasonic signals as a promising alternative imaging method. Adapted from geophysics, FWI uses a numerical model to simulate the
experiment, and iterates by updating the model until the error between experimental and synthetic data is minimized. The model inputs
are elastic properties in the inspected area. Eventually, it converges towards a reliable representation of the actual weld, including its geometry, elastic properties, and defects.
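The iterate-until-the-misfit-is-minimized loop can be caricatured with a one-parameter model; real FWI updates a full elastic model via adjoint-based gradients, so everything below (forward model, path lengths, speeds) is an illustrative toy:

```python
import numpy as np

def invert_speed(t_obs, distances, c0=4000.0, n_iter=20):
    """Caricature of the FWI loop: iteratively update the model (here a single
    wave speed) until synthetic travel times match the observed ones."""
    c = c0
    for _ in range(n_iter):
        resid = distances / c - t_obs     # synthetic minus observed data
        jac = -distances / c ** 2         # sensitivity of the data to the model
        c -= (jac @ resid) / (jac @ jac)  # Gauss-Newton model update
    return c

d = np.array([0.05, 0.08, 0.12])          # toy path lengths through the weld (m)
t_obs = d / 5900.0                        # "measured" times for a 5900 m/s medium
print(round(invert_speed(t_obs, d)))      # 5900
```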
8:40
4aSAa3. Estimating the penetration depth of stress corrosion cracks using time-reversal acoustics—A Numerical Study. Lukasz
J. Pieczonka (Robotics and Mechatronics, AGH Univ. of Sci. and Technol., Al. A. Mickiewicza 30, Krakow 30059, Poland, lukasz.pieczonka@agh.edu.pl), Marcel Remillieux (Geophys. group, EES-17, Los Alamos National Lab., Los Alamos, NM), Brian E. Anderson
(N145 Esc, Brigham Young Univ., Provo, UT), Timothy J. Ulrich, and Pierre-yves Le Bas (Geophys. group, EES-17, Los Alamos
National Lab., Los Alamos, NM)
It has been shown experimentally that it is feasible to use time-reversal elastic nonlinearity diagnostics (TREND) to estimate the penetration depth of stress corrosion cracks (SCC) in steel samples. TREND is a method to focus elastic waves energy to a point in space in
order to probe that point for damage. High frequencies are used to probe near the surface, while low frequencies are used to probe deeper
into the material. Depth estimation of a crack is important to determine when a critical size of defect is reached. This paper aims at a
parametric analysis of the time reversal focusing process by means of numerical simulations. The influence of excitation frequency and
waveform on the focal spot size and penetration depth are analyzed. An in-house simulation framework based on a commercial Finite
Element (FE) solver is used to perform a series of numerical simulations of time-reversal focusing in a steel sample with a curved crack.
Results of this study could help to understand the relationship between the vibration responses measured at the sample’s surface and
crack-wave interactions at different penetration depths. Ultimately, this knowledge can be used to optimize the experimental setup and
facilitate the interpretation of experimental results.
9:00
4aSAa4. Acoustic surface waves measurements by self mixing array transducers for nondestructive testing and structure health
monitoring applications. Roberto Longo, Ladji Adiaviakoye, Romain Feron, Mathieu Feuilloy, Alain LeDuff, and Guy Plantier
(ESEO Group, 10 Boulevard Jean Jeanneteau, Angers 49000, France, roberto.longo@eseo.fr)
In the Non-Destructive Testing (NDT) and Structural Health Monitoring (SHM) field, Surface Acoustic Waves (SAWs) are widely employed to detect defects and/or to monitor the integrity of the structure under test. In this context, contactless SAW measurements offer several advantages over the traditional use of a recording piezoelectric transducer fixed to the structure under test: they can be easily employed when access to the structure is difficult, and they avoid the generation of undesired reflections due to the mass of a second transducer. The gold-standard technique for this kind of measurement is the scanning Laser Doppler Vibrometer (sLDV), which performs contactless measurements with high spatial resolution and a good signal-to-noise ratio. Its main drawbacks are its high cost and the long acquisition time that a 2D grid of measurement points can require. Moreover, to ensure an optimal reflection of the incident light, the surface of the structure under test is often treated with reflective paint or stickers. The purpose of this article is to build a large-bandwidth array transducer for NDT/SHM applications that overcomes the disadvantages of sLDV measurements. This technique is based on the use of laser diodes. The principle behind this approach is the self-mixing effect that occurs when the optical beam is back-scattered into the active cavity of the diode itself.
9:20–9:40 Break
9:40
4aSAa5. Measurement of texture in polycrystalline materials using ultrasonic wave speeds. Bo Lan, Michael J. Lowe, and Fionn P.
Dunne (Mech. Eng., Imperial College London, South Kensington, London SW7 2AZ, United Kingdom, m.lowe@imperial.ac.uk)
Manufacturing of metal components often results in significant texture, that is to say preferred orientations of their polycrystals.
Since each crystal can be strongly anisotropic, this can give the component orientation-dependent material properties, affecting stiffness,
thermal expansion, ultimate strength, and fatigue and creep resistance. So it is important to be able to measure texture, especially for
high-value safety-critical components. Typically, the Orientation Distribution Function (ODF) of the crystals can be measured on
exposed surfaces using EBSD, or in thin samples using neutron diffraction. But both are expensive, while there is currently no means to
measure ODFs internally in real components. The authors have developed a method to determine internal texture from measurements of
wave speeds at selected angles through the material. This is based on a convolution of wave speeds in a single crystal with the ODF, giving the resultant polycrystal wave speed angular function. Thus knowledge of the single crystal properties, together with wave speed
measurements in the polycrystal, enables the ODF to be extracted. The method has been validated experimentally on flat samples of hexagonal and cubic materials, using ultrasound measurements in a water bath, and comparing the findings with neutron diffraction measurements, showing excellent agreement.
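The convolution idea can be caricatured in one dimension: the polycrystal wave speed at a given angle is the ODF-weighted average of the single-crystal speed curve. A toy sketch (the cos 2θ anisotropy law and all numerical values are illustrative, not the authors' model):

```python
import numpy as np

def single_crystal_speed(theta):
    """Toy single-crystal wave speed vs. propagation angle (illustrative)."""
    return 6000.0 + 300.0 * np.cos(2.0 * theta)  # m/s

def polycrystal_speed(phi, odf_angles, odf_weights):
    """Polycrystal speed at angle phi: ODF-weighted average of the
    single-crystal curve over crystal orientations (a 1-D caricature of the
    convolution described in the abstract)."""
    w = odf_weights / odf_weights.sum()
    return float(np.sum(w * single_crystal_speed(phi - odf_angles)))

alphas = np.linspace(0.0, np.pi, 36, endpoint=False)  # orientation grid
uniform = np.ones_like(alphas)                        # texture-free material
textured = np.exp(-((alphas / 0.1) ** 2))             # orientations clustered near 0

# A uniform ODF averages the anisotropy away; a textured ODF preserves it:
print(round(polycrystal_speed(0.0, alphas, uniform)))   # 6000
print(polycrystal_speed(0.0, alphas, textured) > 6200)  # True
```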
Contributed Papers
10:00
4aSAa6. Ultrasonic imaging of nonlinear scatterers buried in a medium.
Sylvain Haupert, Guillaume Renaud (Laboratoire d’Imagerie Biomédicale, UPMC Sorbonne Universités, CNRS, INSERM, 15 rue de l’école de médecine, Paris 75006, France, sylvain.haupert@upmc.fr), and Andreas
Schumm (EDF R&D, MORET SUR LOING, France)
An ultrasonic technique for imaging nonlinear scatterers, such as cracks, buried in a medium has recently been proposed. The method, called amplitude modulation, consists of a sequence of three acquisitions for each line of the image and has been implemented on conventional phased-array ultrasonic devices. The first acquisition is obtained by transmitting with all elements of the
phased array, while the second and third acquisitions are obtained by transmitting with odd elements only and even elements only, respectively. An image revealing nonlinear scattering from the medium is reconstructed line by line by subtracting the responses measured with the second and third acquisitions (odd elements and even elements) from the response obtained with all elements transmitting. A proof of concept performed on a single sample showed that amplitude modulation achieves higher detection specificity and better crack contrast than the conventional ultrasound image. The goal of this study is to gain better knowledge of the capabilities of the amplitude modulation method by determining its sensitivity and robustness, compared to conventional imaging, for several stainless steel samples with different grain sizes as well as different crack sizes and orientations.
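The three-acquisition subtraction is simple to express; a toy sketch with a quadratic "crack" nonlinearity standing in for the real medium (the nonlinearity law and coefficient are assumptions for illustration):

```python
import numpy as np

def medium_response(p, beta=0.05):
    """Toy medium: linear propagation plus a quadratic 'crack' nonlinearity."""
    return p + beta * p ** 2

t = np.linspace(0.0, 1e-3, 500)
element = np.sin(2 * np.pi * 5e3 * t)    # transmit waveform of one sub-aperture
full = medium_response(2.0 * element)    # all elements firing together
odd = medium_response(element)           # odd elements only
even = medium_response(element)          # even elements only
residual = full - (odd + even)           # linear parts cancel exactly
print(np.allclose(residual, 2 * 0.05 * element ** 2))  # True
```

Only the nonlinear term survives the subtraction, which is why the residual image highlights cracks rather than linear scatterers.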
4aSAa7. Diffuse ultrasound monitoring of stress and damage development on large scale concrete structures. Eric Larose, Yuxiang Zhang,
Ludovic Moreau (ISTerre, CNRS, CS 40700, Grenoble Cedex 9 38058,
France, eric.larose@univ-grenoble-alpes.fr), Thomas Planes (Dept of Earth
Sci., Univ. of Geneva, Genève, Switzerland), and Anne Obermann (SED,
ETH-Zurich, ZURICH, Switzerland)
This paper describes the use of an ultrasonic imaging technique (Locadiff [1]) for the NDT&E of concrete structures. By combining coda wave
interferometry and a sensitivity kernel for diffuse waves, Locadiff can monitor the elastic and structural properties of a heterogeneous material with a
high sensitivity, and can map changes of these properties over time when a
perturbation occurs in the bulk of the material. The applicability of the technique to life-size concrete structures is demonstrated through the monitoring of two reinforced concrete structures [2,3]. Locadiff was able to detect and locate cracking zones in the core of the concrete and to monitor the internal stress level in both the temporal and spatial domains by mapping the variation in velocity caused by the acoustoelastic effect. The mechanical behavior of the
concrete structure is also studied using conventional techniques such as
acoustic emission, vibrating wire extensometers, and digital image correlation. [1] E. Larose et al., “Locating a small change in a multiple scattering
environment,” Appl. Phys. Lett. 96(20), 204101 (2010). [2] Y. Zhang et al., “Diffuse ultrasound monitoring of stress and damage development on a 15-ton concrete beam,” J. Acoust. Soc. Am. 139, 1691–1701 (2016). [3] Y. Zhang et al., “3D in-situ imaging of cracks in concrete using diffuse ultrasound,” Struct. Health Monitoring, under review (2017).
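Coda wave interferometry commonly estimates velocity change with the stretching technique; a generic sketch on synthetic coda (not the Locadiff implementation, and all waveform parameters are illustrative):

```python
import numpy as np

def stretching_dv_v(ref, cur, t, eps_grid):
    """Coda-wave-interferometry stretching: find the dilation eps (= dv/v)
    that best maps the current coda onto the reference coda."""
    best_eps, best_cc = 0.0, -np.inf
    for eps in eps_grid:
        stretched = np.interp(t * (1.0 + eps), t, cur)  # cur resampled at t*(1+eps)
        cc = np.corrcoef(ref, stretched)[0, 1]
        if cc > best_cc:
            best_eps, best_cc = eps, cc
    return best_eps

t = np.linspace(0.0, 1.0, 4000)
ref = np.sin(2 * np.pi * 40 * t) * np.exp(-2.0 * t)    # synthetic decaying coda
true_eps = 2e-3                                        # a 0.2% velocity change
cur = np.sin(2 * np.pi * 40 * t * (1 - true_eps)) * np.exp(-2.0 * t * (1 - true_eps))
est = stretching_dv_v(ref, cur, t, np.linspace(-5e-3, 5e-3, 201))
print(abs(est - true_eps) < 1e-4)  # True
```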
10:40
4aSAa8. The defect detection algorithm that combined spectrum entropy with vibrational energy ratio for acoustic inspection method.
Kazuko Sugimoto (Graduate School of Eng., Toin Univ. of Yokohama,
1614 Kurogane-cho, Aoba-ku, Yokohama 225-8503, Japan, kazukosu@
toin.ac.jp), Tsuneyoshi Sugimoto (Graduate School of Eng., Toin Univ. of
Yokohama, Yokohama, Kanagawa, Japan), Noriyuki Utagawa (Sato Kogyo
Co.,Ltd., Atsugi, Japan), and Kageyoshi Katakura (Meitoku Eng., Tokyo,
Japan)
We have studied the non-contact and non-destructive acoustic inspection
method using airborne sound wave and a laser Doppler vibrometer. Internal
defects (crack, peeling etc.) of concrete near the surface are excited by
vibrational energy of acoustic radiation. The resonance frequency of flexural
vibration of the defect (down to a depth of 8–10 cm inside the concrete) can be measured. Our technique makes non-destructive testing possible from long distances, from more than 5 m up to a maximum of 30 m. The two-dimensional vibration velocity
data are measured and processed numerically. An image of the defect is
reconstructed from the vibrational energy ratio. A problem arose from optical noise resulting from leakage in the light reception, depending on the surface state of the concrete. To solve it, we introduced “spectrum entropy.” Data influenced by optical noise show a frequency characteristic similar to white noise. Spectrum entropy is calculated as an information entropy and expresses the whiteness of a signal. Therefore, we propose an algorithm
that combines spectrum entropy with vibrational energy ratio. Our technique
was applied to an experiment with a real concrete structure (bridge). The defective part was extracted as an image and the validity of our method was
confirmed.
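Spectrum entropy as described (an information entropy expressing the whiteness of a signal) can be sketched as follows; the normalization and the test signals are illustrative assumptions:

```python
import numpy as np

def spectrum_entropy(x):
    """Normalized spectral entropy: treat the power spectrum as a probability
    distribution; ~1 for white noise, small for resonance-dominated spectra."""
    psd = np.abs(np.fft.rfft(x)) ** 2
    p = psd / psd.sum()
    p = p[p > 0]                         # skip empty bins to avoid log(0)
    return -(p * np.log(p)).sum() / np.log(psd.size)

rng = np.random.default_rng(0)
fs, n = 8000, 4096
t = np.arange(n) / fs
tone = np.sin(2 * np.pi * 440 * t)       # defect-resonance-like signal
noise = rng.normal(size=n)               # optical-noise-like (white) signal
print(spectrum_entropy(tone) < 0.5 < spectrum_entropy(noise))  # True
```

Thresholding this quantity is one plausible way to flag pixels dominated by optical noise before forming the vibrational-energy-ratio image.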
11:00
4aSAa9. Determination of the acetabular cup implant stability using an
acoustic method based on the impact between the hammer and the ancillary. Giuseppe Rosi, Vu-Hieu Nguyen, Antoine Tijou (Multiscale Modeling and Simulation Lab., CNRS, Créteil, France), Romain Bosc (Université Paris-Est Créteil, INSERM U955, IMRB, Créteil, France), and Guillaume Haiat (Multiscale Modeling and Simulation Lab., CNRS, Laboratoire MSMS, Faculté des Sci., UPEC, 61 Ave. du gal de Gaulle, Créteil 94010, France, guillaume.haiat@univ-paris-est.fr)
This work assesses the feasibility of retrieving the acetabular cup (AC) implant stability based on impact signal
analyses. AC implants with various sizes were inserted in 12 cadaveric hips
following the same protocol as the one employed in the clinic. An instrumented hammer was then used to measure the variation of the force as a
function of time produced during the impact between the hammer and the
ancillary. Then, an indicator I was determined for each impact based on the
impact momentum. A significant correlation (R² = 0.69) was found between I and the pull-out force. Moreover, a three-dimensional axisymmetric finite element model was developed to simulate the insertion process of the AC
implant into bone tissue during impacts. The variation of the force applied
between the hammer and the ancillary was analyzed and the numerical
results were compared with the experimental results. The results show the
potential of impact analysis to retrieve the bone-implant contact properties.
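The impact momentum underlying the indicator I is the time integral of the measured force; a generic sketch (the abstract does not give the exact definition of I, and the pulse parameters below are illustrative):

```python
import numpy as np

def impact_momentum(force, dt):
    """Impulse of one hammer blow: time integral of the measured force signal
    (trapezoidal rule) — one plausible basis for an impact-momentum indicator."""
    f = np.asarray(force, dtype=float)
    return dt * (f.sum() - 0.5 * (f[0] + f[-1]))

# Toy half-sine force pulse: 1 kN peak, 2 ms contact, sampled at 100 kHz.
dt = 1e-5
t = np.arange(0.0, 2e-3, dt)
force = 1000.0 * np.sin(np.pi * t / 2e-3)
# Analytic impulse of A*sin(pi*t/T) over one contact is 2*A*T/pi ≈ 1.273 N·s
print(round(impact_momentum(force, dt), 3))  # 1.273
```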
11:20
4aSAa10. Optical absorption effect on laser-generated resonances in
semi-transparent solid. Jérôme Laurent, Daniel Royer, and Claire Prada
(Institut Langevin, ESPCI Paris, CNRS (UMR 7587), PSL Res. Univ., 1 rue
Jussieu, Paris 75005, France, jerome.laurent@espci.fr)
We studied the effect of optical absorption on the Lamb wave thermoelastic generation in isotropic plates. For metallic plates, high optical
absorption results in a surface source and Lamb mode amplitudes depend on
the total deposited energy and laser source shape. In the case of moderately
absorbing plates, the laser beam can penetrate the sample to some optical
depth, producing a bulk source. Consequently, the radiation characteristics
of laser-ultrasound are significantly different. The displacement amplitudes
can be controlled by the incidence angle of the laser beam and depend on
the material absorption. We used a laser-based ultrasonic setup for the generation and detection of guided waves in semi-transparent plate-like structures. In a free elastic plate, resonances occur at zero-group-velocity (ZGV) points, where the group velocity vanishes while the phase velocity remains finite. To study the transition from a surface source to a bulk source, we performed local measurements in neutral density filters of different absorbance. ZGV resonance amplitudes increase with absorption until a maximum is reached, depending on the order of the mode. Furthermore, these measurements allow discrimination of the so-called thickness resonances, associated with the precursors, from the ZGV resonances. Finally, the experimental results are compared with semi-analytical simulations based on the Spicer model.
11:40
4aSAa11. High speed non-contact acoustic inspection method using long
distance acoustic irradiation-induced vibration. Tsuneyoshi Sugimoto,
Kazuko Sugimoto (Graduate School of Eng., Toin Univ. of Yokohama,
1614, Kurogane-cho, Aoba-Ku, Yokohama, Kanagawa 2258503, Japan, tsugimot@toin.ac.jp), Noriyuki Utagawa (SatoKogyo Co., Ltd., Atsugi, Japan),
and Kageyoshi Katakura (Meitoku Eng., Tokyo, Japan)
The non-contact acoustic inspection method using airborne sound can detect cavity defects and cracks inside concrete near the surface. A feature of this technique is that, by exploiting the physical phenomenon of flexural vibration of defects, we succeeded in improving the energy efficiency with which a defect is vibrated. All-day measurement is possible using a small dynamo. Moreover, using a low-output laser Doppler vibrometer (LDV), a high signal-to-noise (S/N) ratio is achieved by our proposed broadband "Tone Burst Wave" and "Time & Frequency gate (T-F gate)." Although the possible distance from the sound source and the LDV to the target depends on the output power of the acoustic radiation, we have already confirmed that measurement from more than 30 m away is possible, provided that the target is excited efficiently. In order to shorten the measurement time, we propose the "Multi Tone Burst Wave," which fully utilizes the principle of the T-F gate. After many experiments using concrete test objects and several experiments on real concrete structures (bridges, etc.), we confirmed the validity of the high-speed measurement technique based on the "Multi Tone Burst Wave."
3831
J. Acoust. Soc. Am., Vol. 141, No. 5, Pt. 2, May 2017
Acoustics '17 Boston
12:00
4aSAa12. Rayleigh wave method of measuring the frequency-dependent
shear modulus. Marius J. Muller and Luc Mongeau (Mech. Eng., McGill
Univ., 845 Rue Sherbrooke, Montreal, QC H3A 0G4, Canada, marius.muller@mail.mcgill.ca)
The viscoelastic characterization of biomaterials is needed to develop
compatible injectable implants for versatile medical applications. The present study is an investigation of injectable hydrogels for use in healing
scarred soft tissues. A Rayleigh wave method was used to measure the
frequency-dependent shear modulus of viscoelastic materials. A block of
synthetic material was cast and excited by a shaker over a wide frequency
range. The transverse velocity of the surface was recorded via an accelerometer and Laser Doppler Vibrometry (LDV). The linear phase delay validated
the use of a transfer function method. The complex elastic modulus and the
loss factor were obtained from the measured wave speed, and compared to
data from a torsional rheometer. The benefit of the wave propagation approach over conventional parallel-plate rheometry is that material properties are acquired over a greater bandwidth. There is also the possibility of applying the same technique in vivo and in cell-seeded materials.
WEDNESDAY MORNING, 28 JUNE 2017
BALLROOM C, 8:00 A.M. TO 10:20 A.M.
Session 4aSAb
Structural Acoustics and Vibration: Topics in Structural Acoustics and Vibration (Poster Session)
Benjamin Shafer, Chair
Technical Services, PABCO Gypsum, 3905 N 10th St., Tacoma, WA 98406
All posters will be on display from 8:00 a.m. to 10:20 a.m. To allow contributors in this session to see the other posters, authors of
odd-numbered papers will be at their posters from 8:00 a.m. to 9:10 a.m. and authors of even-numbered papers will be at their posters
from 9:10 a.m. to 10:20 a.m.
Contributed Papers
4aSAb1. Effect of microannulus on ultrasonic pulse-echo resonance,
flexural, and extensional Lamb-wave cement-evaluation measurements.
Christoph Klieber (Schlumberger, 1 rue Henri Becquerel, Clamart 92140,
France, cklieber@slb.com) and Thilo Brill (Schlumberger, Paris,
France)
Subterranean wells are usually constructed by cementing steel tubes,
called casings, inside the borehole. The cement quality is typically verified
through ultrasonic measurements deployed from inside the casing. Environmental effects such as cement shrinkage or changes in static pressure can alter the bonding properties between casing and cement with significant
effects on the acoustic measurement response. The cement may detach from
the casing, opening a gap, called a microannulus. This microannulus is sized
from submicrometer to hundreds of micrometers and filled with either gas
or liquid. The subwavelength nature of the microannuli does not allow a
direct, unambiguous characterization through an ultrasonic measurement.
We studied the measurement signature of ultrasonic-pulse-echo resonance,
and flexural and extensional Lamb waves for air- and liquid-filled microannuli for various annulus materials and steel-casing thicknesses. This characterization allows statistically linking measured results to microannulus
widths. The highest-precision measurements used in-situ laser interferometry through transparent annulus samples to characterize the microannulus
thickness.
4aSAb2. A simplified compression test for the estimation of the Poisson’s ratio of viscoelastic foams. Paolo Bonfiglio and Francesco Pompoli
(Eng. Dept., Univ. of Ferrara, Via Saragat 1, Ferrara 44122, Italy, paolo.
bonfiglio@unife.it)
The present paper describes a simplified procedure for determining the
Poisson’s ratio of homogeneous and isotropic viscoelastic materials. To that
end, a cylindrically shaped sample is axially excited by an electromagnetic
shaker and displacement waves are investigated. Using a frequency sweep
as an excitation signal, the frequency domain displacement response is
measured upstream and sideways of the sample itself and, by using a plane
cross section analytical model of the experimental setup, the Poisson’s ratio
is estimated using a minimization based procedure applied on radial displacement once the complex modulus has been directly determined under
the assumption of spring-like behavior of the axial displacement. The results
are presented and discussed for different materials and compared with well-established quasi-static and finite element simulations.
4aSAb3. Mapping of ultrasonic Lamb-wave field in elastic, layered
structures using acoustic and laser probes. Christoph Klieber (Schlumberger, 1 rue Henri Becquerel, Clamart 92140, France, cklieber@slb.com)
and Thilo Brill (Schlumberger, Paris, France)
In the oil and gas industry, the nondestructive evaluation of cemented
steel pipes in subterranean wells commonly involves the use of ultrasonic,
guided Lamb waves. These measurements help ensure that the cement
annulus between the rock formation and the steel pipes provides hydraulic
isolation between different depth zones of the well. Such techniques
employ the excitation of leaky flexural and extensional waves inside the
highly contrasting steel layer to distinguish solids from liquids in the inaccessible region outside the pipe. Furthermore, the annular cement layer
may exhibit defects such as cracks or channels which may compromise
zonal isolation. We present laboratory measurements using piezo-electric
needle probes and laser interferometry, as well as comparative modeling
results along spatial and temporal dimensions to visualize and quantify
leaky-Lamb-wave propagation for a variety of homogeneous liquid and
solid layers behind a steel sheet in planar and cylindrical geometries. We
characterize several annular materials with compressional velocities,
which are higher or lower than the Lamb phase speed to demonstrate the
effect on mode dispersion and attenuation. Furthermore, we study the
effects on transmission and reflection of Lamb waves at discontinuities
such as conduits in the inaccessible layer.
4aSAb4. Linear broadband nonreciprocity in waveguides using distributed feed forward control. Aritra Sasmal and Karl Grosh (Mech. Eng., Univ. of Michigan, 2350 Hayward St., Ann Arbor, MI 48109, asasmal@umich.edu)
Nonreciprocal waveguides can exhibit different properties depending on
the direction, amplitude, and frequency content of an incident wave. These
systems are the focus of intense recent research activity in engineering and
in acoustics in particular driven by exciting applications such as cloaking,
subwavelength anechoic termination, and full-duplex communication. In
biology, nonreciprocity has been identified in cochlear mechanics and is
hypothesized to play a role in its filtering and nonlinear processing of sound.
Although there has been a surge in the study of unidirectional energy propagation in acoustic media, most of the work has been limited to non-linear
effects and/or narrowband non-reciprocity. However, nonlinear effects generate additional harmonics, complicating the approach, and narrowband
non-reciprocity constrains the bandwidth that can be used. We have developed the theory for linear nonreciprocal broadband acoustic waveguides
using a distributed feed-forward control scheme that exhibits significant broadband nonreciprocity while maintaining stability. We discuss the different systems for which the paradigm may be adopted, along with the theoretical formalism to predict the stability and dispersion relations for this
class of waveguides.
4aSAb5. Load identification by coherence analysis of structural
response. Silvia Milana, Giorgia Sinibaldi, Luca Marino, and Antonio Culla
(Dept. of Mech. and Aerosp. Eng., Univ. of Rome La Sapienza, via Eudossiana 18, Rome, Italy, silvia.milana@uniroma1.it)
The aim of this paper is the identification of uncorrelated forces acting on a structure based on a coherence analysis of the structural response, performed entirely under operating conditions. In order to identify the position and amplitude of the applied load, only the responses of the structure and the experimental FRFs are required. The proposed procedure consists of three steps. First, the number of acting loads is established by analysis of the response coherence; second, the position of the acting forces is identified using an index obtained from the experimental FRFs and the responses of the structure; then, the amplitude of the acting forces is computed at the excited points. The procedure is tested in two experiments. The first experiment consists of exciting a complex structure in several places with an instrumented hammer; the identification is performed from the accelerations measured at a set of points on the structure itself. The second experiment is carried out by exciting a structure with a fluid flow in a wind tunnel. The accelerations of the structure vibrations and the structure-borne acoustic noise are measured and used to identify the fluid pressure acting on the structure.
4aSAb6. Characterization of natural and recoil-induced vibration of an
AR-15 rifle at the cheekbone-stock interface. Timothy J. Cyders (Mech.
Eng., Ohio Univ., 261 Stocker Ctr., Athens, OH 45701, cyderst@ohio.edu),
Jeffrey J. DiGiovanni (Commun. Sci. and Disord., Ohio Univ., Athens,
OH), and Jay Wilhelm (Mech. Eng., Ohio Univ., Athens, OH)
Effects of sound pressure and shock on hearing loss have been widely
studied. Studies involving direct assessment of sound pressure levels and
influencing variables related to rifle discharge, especially with respect to
standard military small arms have mostly focused on the effects of external
pressure on hearing. Other studies have characterized physiological effects
of external vibration on animals and humans. Shock phenomena and highpressure waves have been linked to effects from gradual changes in tissue
thickness to traumatic brain injury and other central nervous system maladies. Rifle shooters, including soldiers, law enforcement officers, and hunters, typically shoulder rifle-type firearms in a way that puts the buttstock in
direct contact with the cheekbone, known as the “cheek weld.” This work
experimentally characterizes the vibrations experienced by the shooter at
the cheek/buttstock interface, and discusses expected physiological and
acoustical effects as a result of conduction into the skull.
4aSAb7. Vibration control with a bistable nonlinear absorber. Volodymyr
Iurasov and Pierre-Olivier Mattei (Laboratoire de Mecanique et d’Acoustique,
Apt. 16, 25 rue d’Orient, Marseille 13005, France, iurasov@lma.cnrs-mrs.fr)
The goal of our research is to develop a nonlinear absorber that will be
effective at low frequencies and low amplitudes of vibration, notably in the
domain of vibroacoustics. From this perspective, the usual Nonlinear Energy
Sinks (NESs) based on the idea of the internal resonance have certain limitations due to the relatively high activation threshold. In order to decrease this
threshold we developed the idea of a bistable absorber. The bistability provides a chaotic regime in the absorber dynamics that leads to effective dissipation by the NES. While this energy transfer is completely different from the
well-known energy pumping, the two types share the main features such as
the activation threshold and the functioning region. The bistable NES that we
propose consists of a clamped-clamped buckled beam and an attached mass.
The experimental and numerical tests performed for our particular realization
of a bistable absorber have shown its high efficiency and robustness.
4aSAb8. Vibro-acoustic beamforming technique for the detection of a
monopole source inside a fluid-filled cylinder with turbulent flow: Numerical and experimental investigations. Souha Kassab (CEA Cadarache INSA LYON, 25 bis av. Jean Capelle, Villeurbanne 69621, France, souha.kassab@insa-lyon.fr) and Maxit Laurent (INSA Lyon, Villeurbanne, France)
This study is part of the R&D framework on sodium-water steam generators, mainly used for cooling sodium fast reactors. The aim is to develop a vibro-acoustic monitoring technique that can detect a leak of water into sodium from a defective tube inside the heat exchanger. In the presence of a significantly high background noise and a rather large bandwidth for the leak signature, threshold-detection methods may be insufficient. A beamforming technique is therefore considered in order to enhance the leak's signature against the nuclear facility's background noise. The present study is carried out on a laboratory test case composed of a cylindrical shell filled with water, coupled to a hydraulic circuit, with two axisymmetric stiffeners. Two kinds of excitation are considered: the first is a monopole radiating inside the fluid-filled cylinder; the second is a well-established turbulent boundary layer in the neighborhood of the shell's walls. Numerical
simulations are performed to estimate the shell acceleration field used as
input of the beamforming technique. Additionally, an experiment is carried
out on a mock-up to study the correlations between the numerical and experimental results associated with each excitation. The theoretical and experimental performances of the beamforming are also compared.
4aSAb9. Physical implementation of immersive boundary conditions in
one dimension. Theodor S. Becker, Dirk-Jan van Manen, Carly Donahue, and
Johan O. Robertsson (Earth Sci., ETH Zurich, Sonneggstrasse 5, Inst. of Geophys., NO H 41.1, Zürich 8092, Switzerland, theodor.becker@erdw.ethz.ch)
Immersive boundary conditions (IBCs) are a novel approach to target-oriented numerical wavefield modeling. When implemented physically, IBCs
allow the construction of anechoic chambers that actively suppress the reflection of wavefields from the boundaries of a physical domain, such as a wave
propagation laboratory. Moreover, IBCs can be used for immersive wave
propagation experimentation by linking the wave propagation in the physical
domain with the propagation in a virtual domain enclosing the physical domain. In this case, the IBCs correctly account for all wavefield interactions
between the two domains. The physical implementation of IBCs is achieved
by densely populating the boundary surrounding the physical domain with
transducers that enforce the necessary boundary conditions. To estimate the
signals that need to be emitted at the injection boundary, a second surface of
transducers slightly inside the physical domain records the propagating
wavefield. The recorded wavefield is extrapolated to the injection boundary
by evaluating a Kirchhoff-Helmholtz integral in real time using an FPGA-enabled data acquisition and control system. A recently constructed one-dimensional acoustic wave propagation laboratory provides an ideal setup
for the physical installation of IBCs in one dimension. In this work, we demonstrate the implementation of IBCs on one side of this laboratory.
4aSAb10. Sensitivity of the radiated sound power to amplitude dependent damping. Mario Wuehrl, Matthias Klaerner, and Lothar Kroll (Lightweight Structures, Chemnitz Univ. of Technol., Reichenhainer Str. 31/33,
Chemnitz 09126, Germany, mario.wuehrl@mb.tu-chemnitz.de)
Using low-shear-resistance cores in metal-plastic composites offers the possibility of increased damping and thereby improved acoustic properties of large sheet-metal structures that are sensitive to vibrations. The damping of the individual components of the composite shows an amplitude dependency, which established numerical material models in the finite element method do not consider. Therefore, the influence
of the damping on the radiated sound power is evaluated numerically for
monolithic rectangular steel plates and subsequently extended for the metal
plastic composite. The results of the experimental characterization of the
amplitude dependency for the components of the composite are described.
Relevant parameter ranges are identified and their effect on the sound radiation is outlined.
4aSAb11. Improvement of acoustic and vibration models by temporal
error metrics. Alyssa T. Liem and James G. McDaniel (Mech. Eng., Boston Univ., 110 Cummington Mall, Boston, MA 02215, atliem@bu.edu)
Analyses and examples are presented that explore the limits and accuracies of a technique for improving acoustic and vibration models by temporal
comparisons to experimental data. In a previous presentation, the authors
proposed the use of impulsive excitations followed by time windowing of
responses. This approach allows comparisons between experimental data
and model predictions over an isolated spatial region whose volume is a
fraction of the entire system volume. The advantage of this spatial isolation
is that it significantly reduces the number of model parameters that must be
varied to bring the model predictions into agreement with the experimental
data. In the present work, the method is analyzed in detail to quantify the
limits and accuracies of the method relative to window size, number of measurement locations, and excitation. Two types of examples will be presented
to illustrate these findings. The first involves the improvement of material
properties for a homogeneous region. The second involves the improvement
of the coupling conditions between two homogeneous regions. Results of
these examples will be reviewed in the context of existing and emerging
measurement technologies, such as digital image correlation.
4aSAb12. Design of tunable acoustic metamaterials using 3D computer
graphics. Mark J. Cops, James G. McDaniel (Mech. Eng., Boston Univ.,
110 Cummington Mall, Boston, MA 02215, mcops@bu.edu), and Elizabeth
A. Magliula (NUWCDIVNPT, Newport, RI)
The goal of this work is to investigate how the combination of 3D computer graphics and finite element software can be used to rapidly design
materials with tunable properties for noise and vibration mitigation applications. Algorithms and software that create three-dimensional objects, known
collectively as 3D computer graphics, are widely used artistically for rendering, animation, and game creation. These approaches allow for the design of
complex topological structures such as cellular solids. This presentation
describes the use of 3D computer graphics to design cellular structures,
which can be imported into finite element software in order to determine
effective vibrational properties. This approach is advantageous for several
reasons. It allows for quick variation of parameters of cellular solids such as
porosity and void fraction. It also is time efficient compared to alternative
methods such as performing computed tomography scans on physical samples and analyzing the imaged files. Furthermore, resulting designs can be
easily fabricated by direct 3D printing. This presentation will include analysis and discussion of designs generated by the proposed approach.
4aSAb13. Validation of a hybrid boundary and finite element method
for the simulation of large membrane-based piezoelectric transducer
arrays in immersion. Bernard Shieh, Karim G. Sabra, and F. Levent Degertekin (Mech. Eng., Georgia Inst. of Technol., 771 Ferst Dr., Love Bldg.,
Rm. 311B, Atlanta, GA 30332, levent.degertekin@me.gatech.edu)
Membrane-based piezoelectric ultrasonic transducers, especially piezoelectric micromachined ultrasonic transducers (PMUTs), are a promising
technology for the realization of large transducer arrays for use in integrated
imaging, sensing, and actuation where a broadband response is desirable. In
this work, a hybrid boundary and finite element method is proposed for the
transmit simulation of large PMUT arrays in immersion. Finite element software (COMSOL) readily handles the simulation of single membrane structures, from which static deformation (stiffness) and harmonic displacement
data is extracted. A boundary element formulation based on these inputs
handles the membrane-to-membrane acoustic cross-coupling through the
calculation of a mutual impedance matrix. For arrays consisting of hundreds
of membranes or more, the problem of quadratic storage and cubic time
complexity of the boundary element method is avoided by employing a multi-level fast
multipole algorithm (Shieh et al., IEEE Trans. UFFC, 63, 1967-1979). We
validate this hybrid method for common membrane geometries, including
square and circular membranes with varying degrees of electrode coverage.
4aSAb14. The effect of classroom capacity and size on vocal fatigue as
quantified by the vocal fatigue index. Russell E. Banks, Pasquale Bottalico, and Eric J. Hunter (Communicative Sci. and Disord., Michigan State
Univ., 2232 Rolling Brook Ln., East Lansing, MI 48823, russbanks88@
gmail.com)
Previous research has concluded that teachers are at higher than normal
risk for voice issues that can cause occupational limitations. While some
risk factors have been identified, there are still many unknowns. To gain
more understanding regarding some of these unknowns, a self-reported survey was distributed electronically with more than 500 female respondents.
The survey quantified vocal fatigue using the Vocal Fatigue Index. The
areas investigated with the survey included the amount of potential risk
involved for teachers of varying classroom sizes. Teachers’ responses from
several different school districts throughout the United States were analyzed
to compare the effects of grade level and classroom size on the teachers' reported experience of vocal fatigue. Results indicated a significant effect of the physical size and capacity of classrooms on teachers' reported amounts of vocal fatigue. Teachers of larger classrooms experienced significantly more vocal
fatigue. Age related factors were also examined in this study. These research
discoveries will have a great effect on the precautions taken by educators
and school administrators to avoid vocal fatigue, and, thus, occupational
risk from short- and long-term voice issues. Many additional factors which
may affect perceived vocal fatigue must be explored in future research.
WEDNESDAY MORNING, 28 JUNE 2017
ROOM 204, 10:40 A.M. TO 12:20 P.M.
Session 4aSAc
Structural Acoustics and Vibration: General Topics in Structural Acoustics and Vibration III
Benjamin Shafer, Chair
Technical Services, PABCO Gypsum, 3905 N 10th St., Tacoma, WA 98406
Contributed Papers
10:40
4aSAc1. Two methods of ship structure health monitoring sample expansion: A comparison study. Linke Zhang, Hejun Li, and You Tan (School of Energy and Power Eng., Wuhan Univ. of Technol., 1178 Heping Ave., Wuhan, Hubei 430063, China, lincol_zhang@126.com)
The fault diagnosis of ship structure-borne sound transfer paths is mainly based on binary data classification, which requires a sufficient number of transfer functions or training samples. In practice, the data collection sensors installed on a ship are limited, resulting in a lack of real training samples. It is therefore of great significance to solve this small-sample problem for ship structural acoustic fault identification and location. In this paper, theoretical studies were conducted to compare two methods of structural acoustic fault sample expansion: extracting the local Green's function from cross-correlations of ambient vibration, and dynamic monitoring based on the transmissibility function. A novel model of fault sample expansion was then put forward and verified through a simulation example. The research shows that the two proposed methods have their respective pros and cons. Suggestions on how to employ these two methods for ship structure health monitoring are also given.
11:00
4aSAc2. Experimental characterization of short-glass-fiber polymer composite for vibroacoustic applications. Mehdi Zerrad, Nicolas Totaro (Laboratoire Vibrations Acoustique, INSA Lyon, Campus LyonTech-la Doua, bâtiment St. Exupéry, 25 bis Ave. Jean Capelle, Villeurbanne, Rhône-Alpes 69621, France, mehdi.zerrad@insa-lyon.fr), Renaud G. Rinaldi (MATEIS, INSA Lyon, Villeurbanne, France), Quentin Leclerc (Laboratoire Vibrations Acoustique, INSA Lyon, Villeurbanne, France), and Benjamin Eller (Renault SAS, Lardy, France)
In order to design vehicles with lower CO2/km emission levels, car manufacturers aim to reduce the weight of their vehicles. One solution advocated by automotive engineers consists in replacing metallic parts with lighter systems made of polymer composites. Unfortunately, the numerical simulations set up to evaluate the vibratory and acoustic performance of systems made of such materials are often not sufficiently effective and robust, so that convincing test/simulation correlations are rarely achieved. Indeed, for polymer-based materials, numerous parameters affect the vibroacoustic behavior of the system. For the present study, focusing on glass-fiber-reinforced polyamide 6 plates (PA6-GF35), it is demonstrated using DMA (Dynamic Mechanical Analysis) and FAT (Force Analysis Technique) that the viscoelastic properties depend not only on temperature and frequency but also on humidity content. We compare the FAT method, which identifies the equivalent complex Young's modulus of a flat structure as a function of frequency, with DMA measurements, which give access to the complex modulus of the intrinsic constitutive materials at low frequency (<20 Hz). Finally, the anisotropy of the PA6-GF35 has been evidenced using X-ray tomography images and confirmed by DMA and FAT analyses.
11:20
4aSAc3. Detection of structural bolt detorquing using direct acoustic measurement. Joe Guarino (Mech. and Biomedical Eng., Boise State Univ., Boise, ID) and Robert Hamilton (Civil Eng., Boise State Univ., 1910 University Dr., Boise, ID 83725, rhamilton@boisestate.edu)
A method for detecting loosened bolts in a structural joint based upon open-loop acoustic measurement is presented. The acoustic measurement is taken directly on the bolt head. The response of the bolt to a proximal hammer impact is evaluated and characterized using wavelet decomposition of the signal measured from the bolt head. Results are presented from a set of structural bolts in several conditions of preload and looseness. The study could enable a quick and simple method for detecting and evaluating detorqued bolts in structural joints.
11:40
4aSAc4. Influence of geometrical parameters of an acoustic black hole on sound radiation. Chenhui Zhao and Marehalli Prasad (Dept. of Mech. Eng., Stevens Inst. of Technol., 519 Hudson St., Hoboken, NJ 07030, czhao1@stevens.edu)
Vibration and noise control of mechanical structures play an important role in the design of many industrial systems. Recently, the Acoustic Black Hole (ABH), a new passive structural modification approach for controlling vibration and noise from mechanical structures, has been developed and studied. An ABH usually has a power-law taper profile along which the wave velocity gradually reduces to zero. In addition, vibration energy is concentrated at the ABH locations due to the reduction of wavelength. The exponent and parameter of the power-law curve define the geometry of an ABH. This paper presents an investigation of the influence of the ABH geometry on the sound radiation from vibrating structures, with both numerical and experimental work on the near-field sound radiation from vibrating cantilever beams containing an ABH.
12:00
4aSAc5. Vibration of a baffled piezoelectric ceramic circular disk in lower radial modes, with near- and farfield sound radiation. Espen Storheim (Nansen Environ. and Remote Sensing Ctr., Thormøhlensgate 47, Bergen 5006, Norway, Espen.storheim@nersc.no), Per Lunde, and Magne Vestrheim (Dept. of Phys. and Technol., Univ. of Bergen, Bergen, Norway)
Accurate knowledge of the sound field generated by an ultrasonic transducer is important in certain applications, such as custody transfer oil and gas flow measurement. Near- and farfield diffraction effects may influence the measurement result, demanding control of magnitude and phase influences. In many situations, the simplified model of a planar, uniformly vibrating baffled piston is used to approximate the transducer's sound field, and the Rayleigh distance of the piston model is often used as a measure of the near- to farfield transition range. Real transducers, however, typically exhibit a non-uniform vibration distribution at the front, with significant side and rear vibration. It is therefore of interest to investigate the accuracy of the traditional piston and Rayleigh distance approaches for real transducer sound fields. A circular, baffled Pz27 piezoelectric ceramic disk with diameter 12.7 mm and thickness 2.0 mm is studied, operating in air. Finite-element modeling, supported by analytical and numerical modeling, is used to investigate the near- and farfield sound pressure field over the 0-500 kHz range, including radiation at the two lowest radial modes of the piezoelectric disk. Significant nearfield effects are observed well beyond the Rayleigh distance, both on and off the acoustic axis.
WEDNESDAY MORNING, 28 JUNE 2017
BALLROOM A, 8:00 A.M. TO 12:20 P.M.
Session 4aSC
Speech Communication: Speech Perception and Production in Clinical Populations (Poster Session)
Wendy Herd, Chair
Mississippi State University, 2004 Lee Hall, Drawer E, Mississippi State, MS 39762
All posters will be on display from 8:00 a.m. to 12:20 p.m. To allow contributors in this session to see the other posters, authors of
odd-numbered papers will be at their posters from 8:00 a.m. to 10:10 a.m. and authors of even-numbered papers will be at their posters
from 10:10 a.m. to 12:20 p.m.
Contributed Papers
4aSC1. Fricative and affricate productions of Mandarin-speaking children with cerebral palsy: A view from spectral moment analysis. Chin-Ting Jimbo Liu, Li-mei Chen (Foreign Lang. and Lit., National Cheng
Kung Univ., Tainan City, Taiwan), Katherine C. Hustad (Commun. Sci. &
Disord., Univ. of Wisconsin-Madison, Madison, WI), Ray D. Kent (Waisman Ctr., Univ. of Wisconsin-Madison, Madison, WI), Wan-Chen Wang,
and Hsin-yu Li (Foreign Lang. and Lit., National Cheng Kung Univ., No.1,
University Rd., Tainan, Taiwan, ilsewang1201@gmail.com)
Spectral moment analysis has been adopted as a means of quantifying
the differences in the place of articulation among fricatives and affricates
produced by typically developing populations and individuals with language
disorders in Mandarin Chinese (Jiang, Whitehill, McPherson & Ng, 2015,
2016; Lee, Zhang & Li, 2014). This study therefore compares fricative/affricate productions in Mandarin Chinese by typically developing children (TDs) and children with cerebral palsy (CPs) and dysarthria, focusing on spectral moment analysis (i.e., M1: mean, M2: standard deviation, M3: skewness, and M4: kurtosis). Results from nine CPs
(Mean: 7;0) and ten TDs (Mean: 5;7) indicated that: 1) The average M2 values of all fricatives and affricates were significantly higher among CPs; 2)
For individual segments, the M2 values of voiceless alveolo-palatal fricatives, aspirated/unaspirated voiceless dental affricates and the M3 values of
voiceless velar fricatives from CPs were significantly higher than those
from TDs. Based on the articulatory interpretations of the spectral moment
analysis (Li, Edwards & Beckman, 2009; Li, 2012), it was concluded that
CPs tended to have a more anterior place of articulation for certain fricatives
and affricates. Future cross-linguistic studies might help identify if current
observations are language-specific or universal.
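The four spectral moments named above have standard definitions: treating the power spectrum of a frication interval as a probability distribution over frequency, M1-M4 are its mean, standard deviation, skewness, and (excess) kurtosis. A minimal NumPy sketch; the synthetic spectrum and windowing choices below are illustrative, not those of the study:

```python
import numpy as np

def spectral_moments(freqs, power):
    """First four spectral moments of a power spectrum treated as a
    probability distribution over frequency: M1 mean (centroid), M2
    standard deviation, M3 skewness, M4 excess kurtosis."""
    p = power / power.sum()
    m1 = np.sum(freqs * p)
    m2 = np.sqrt(np.sum((freqs - m1) ** 2 * p))
    m3 = np.sum(((freqs - m1) / m2) ** 3 * p)
    m4 = np.sum(((freqs - m1) / m2) ** 4 * p) - 3.0
    return m1, m2, m3, m4

# Example on a synthetic frication-like spectrum; a real recording would be
# analyzed via np.fft.rfft of a windowed frication interval instead.
freqs = np.linspace(0, 11000, 1101)
power = np.exp(-0.5 * ((freqs - 6000) / 1500) ** 2)
m1, m2, m3, m4 = spectral_moments(freqs, power)
```

A backed (more posterior) constriction lowers M1, and a flatter, more diffuse spectrum raises M2, which is the direction of the group differences reported above.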
4aSC2. Hearing thresholds and the use of delayed auditory feedback provided by SpeechEasy® in patients with Parkinson’s disease. Emily Wang, Eric S. Hulse (Commun. Disord. and Sci., Rush Univ., Rush Univ. Medical Ctr., 1611 West Harrison St., Ste. 530, Chicago, IL 60612, emily_wang@rush.edu), Leonard A. Verhagen Metman (Neurological Sci., Rush Univ. Medical Ctr., Chicago, IL), and Valeriy Shafiro (Commun. Disord. and Sci., Rush Univ., Rush Univ. Medical Ctr., Chicago, IL)
The wearable in-the-ear SpeechEasy® device is registered with the FDA as an anti-stuttering device. It was first investigated as a therapeutic option for hypokinetic dysarthria in Parkinson’s disease (PD) by our group in 2008.
While the results demonstrated positive treatment effects on hypokinetic dysarthria in PD, the question remains whether long-term use of the device could lead to detrimental effects on hearing in a typically aging population. Since the SpeechEasy® device can provide various levels of increased volume to the wearer, the two main goals of the current study were to determine 1) whether long-term use of this device caused a significant shift in our subjects’ hearing thresholds; and 2) the various output levels of the SpeechEasy® CF-BTE devices used in the study. All fifteen subjects who wore the SpeechEasy® device for one year in the Wang et al. 2008 study were included. Seven met inclusion criteria for determining whether a potential standard threshold shift occurred during device usage. The results indicate that the SpeechEasy® device will not cause hearing loss if programmed correctly. However, because the SpeechEasy® device is capable of producing potentially high output levels if programmed incorrectly, a standardized device-fitting protocol must be considered.
4aSC3. Protophone codings for early speech development of infants
with high risk of cerebral palsy in the first year of life. Wan Chen Wang,
Li-mei Chen, Yung-Chieh Lin, and Li-Wen Chen (National Cheng Kung
Univ., 1 University Rd., Tainan, Taiwan, leemaychen1@gmail.com)
Protophones (Oller, 2000) are regarded as precursors of human speech. Deviation in the onset of protophone patterns is associated with a variety of developmental disorders. This study investigated the early speech development of three infants at high risk of cerebral palsy (CP) in the first year of life, based on six categories of protophones: vocant, growl, squeal, cry, laugh, and others (including whisper, ingressive sound, raspberry, other consonant alone, and yell) (Buder et al., 2012). The infants had been identified early as at high risk of CP, with disorders detected by pediatricians. Three recordings from each infant, made between 4 and 12 months of age, were perceptually categorized. The major findings were: (1) the frequency of vocants increased over time; (2) "others" dominated the vocalizations; (3) squeals (high pitch) occurred less frequently than growls (low pitch). These preliminary findings display a profile of early vocalization in children with CP. Future studies should include data from typically developing children to reveal the deviation of disordered speech, along with more data from more participants, to trace how these six categories of protophones vary over the first year of life and develop toward mature speech categories.
4aSC4. Identification of the spectrotemporal modulations that support speech intelligibility in hearing-impaired and normal-hearing listeners. Jonathan Venezia (Auditory Res., VA Loma Linda Healthcare System, 11201 Benton St., Loma Linda, CA 92357, jonathan.venezia@va.gov), Allison-Graham Martin, and Virginia Richards (Cognit. Sci., Univ. of California, Irvine, Irvine, CA)
Accurate encoding of the spectrotemporal envelope of speech is essential for intelligible perception. We have recently developed a psychophysical procedure that identifies the relative contributions of particular spectrotemporal modulations (STMs) to intelligibility. Here, we use the procedure to test the hypothesis that intelligibility is supported by different patterns of STMs in hearing-impaired versus normal-hearing listeners. A group of 20 hearing-impaired listeners and an age-matched group of 13 normal-hearing (≤25 dB HL from 0.25-4 kHz) listeners performed a speech recognition task in which acoustically degraded sentences presented over headphones were repeated back verbally and scored for keywords correctly identified. Different patterns of STMs were removed on each trial by applying a randomly shaped filter to the 2-D modulation power spectrum. Reverse correlation was used to identify STMs that predicted performance (i.e., intelligibility) across trials. Three main findings describe the results: (1) the group-average patterns of STMs supporting intelligibility did not differ between hearing-impaired and normal-hearing listeners; (2) greater individual variability in STM patterns was observed within the hearing-impaired group; and (3) hearing-impaired listeners required more overall STM information to perform the task. The results suggest hearing-impaired listeners rely on the same STM information as normal-hearing listeners but encode this information less efficiently.
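The reverse-correlation logic described above, randomly filtering the modulation power spectrum and then relating filter shapes to trial-by-trial performance, can be illustrated with a toy simulation. Everything below (mask geometry, the "important" region, the decision rule) is invented for illustration; only the classification-image step mirrors the general technique:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the psychophysical procedure: each trial retains a random
# subset of cells in a coarse 2-D modulation power spectrum (spectral x
# temporal modulation bins).
n_trials, shape = 2000, (6, 6)
masks = rng.random((n_trials, *shape)) > 0.5      # True = STM region retained

# Simulated ground truth: keywords are recognized only when most of the
# low-rate temporal-modulation region (first two columns) survives filtering.
important = np.zeros(shape, dtype=bool)
important[:, :2] = True
correct = (masks & important).sum(axis=(1, 2)) >= int(important.sum() * 0.75)

# Reverse correlation: the classification image is the mean mask over correct
# trials minus the mean mask over incorrect trials; large positive weights
# mark the STMs that predicted intelligibility.
cimg = masks[correct].mean(axis=0) - masks[~correct].mean(axis=0)
```

In this toy case the classification image recovers the two "important" columns; in the actual study the analogous image shows which STM regions support keyword recognition for each listener group.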
4aSC5. Automated analysis of syllable complexity in children as an indicator of speech disorder. Marisha Speights (Commun. Disord., Auburn
Univ., 1199 Haley Ctr., Auburn, AL 36849, mls0096@auburn.edu),
Suzanne E. Boyce (Commun. Sci. and Disord., Univ. of Cincinnati, Cincinnati, OH), Joel MacAuslan (Speech Technol. & Appl. Res. Corp., Bedford,
MA), and Noah H. Silbert (Commun. Sci. and Disord., Univ. of Cincinnati,
Cincinnati, OH)
Speech disorders affecting intelligibility commonly occur in young children. One way to differentiate normal from delayed speech development is to measure the ability to articulate increasingly complex syllables. We present a computer-assisted approach, Syllabic Cluster Analysis (SCA), as an objective measure of syllabic complexity. SCA uses clusters of acoustic landmarks to detect articulatory complexity in the production of syllables. Although most research using landmarks focuses on the lexical content of speech, SCA focuses on non-lexical differences, which makes it well suited for analysis of speech with decreased intelligibility. The feasibility of this system for predicting disordered speaker group is tested. Words recorded by normal adult (n = 10) and typical child (n = 20) speakers and by children with speech disorders (n = 10) were ranked using a published word complexity measure to establish a high-complexity word list (n = 20). Multinomial logistic regression models are fit with Landmarks per Syllabic Cluster (LM/SC) and Syllabic Clusters per Utterance (SC/UTT) counts as speech complexity predictors. LM/SC counts are a significant predictor of disordered status in the model when measuring a short set of complex words (p < .001). Future work toward the development of automated diagnostic tools is discussed.
4aSC6. Acoustic cues to distinctive features are modified in the speech
of typically-developing versus atypically developing children. Tanya
Talkar, Jennifer Zuk, Maria X. Guerrero, Jeung-Yoon Choi, and Stefanie
Shattuck-Hufnagel (Massachusetts Inst. of Technol., 70 Pacific Ave., Cambridge, MA 02139, tjtalkar@mit.edu)
Non-word repetition tasks have been used to diagnose children with various developmental difficulties with phonology, but these productions have
not been phonetically analyzed to reveal the nature of the modifications produced by children diagnosed with SLI, autism spectrum disorder or dyslexia
compared to those produced by typically-developing children. In this study,
we compared the modification of predicted acoustic cues to distinctive features of manner, place and voicing for just under 30 children (ages 5-12),
for the CN-Rep word inventory, in an extension of the earlier analysis in
Levy et al. 2014. Feature cues, including abrupt acoustic landmarks (Stevens 2002) and other acoustic feature cues, were hand-labeled and analysis
of factors that may influence feature cue modifications included position in
the word, position in the syllable, word length measured in syllables, lexical
stress, and manner type. Results suggest specific patterns of modification in
specific contexts for specific clinical populations. These findings set the
foundation for understanding how phonetic variation in speech arises in
both typical and clinical populations, and for using this knowledge to develop tools to aid in more accurate and insightful diagnosis as well as
improved intervention methods.
4aSC7. Automated screening for speech disorders using acoustic landmark detection. Marisha Speights (Commun. Disord., Auburn Univ., 1106
Haley Ctr., Auburn, AL 45239, mls0096@auburn.edu), Keiko Ishikawa
(Commun. Sci. and Disord., Univ. of Cincinnati, Mason, OH), Suzanne E.
Boyce (Commun. Sci. and Disord., Univ. of Cincinnati, Cincinnati, OH),
and Joel MacAuslan (STAR Analytical Services / Speech Technol. & Appl.
Res. Corp., Bedford, MA)
Most children effortlessly learn how to coordinate movements for normal speech production. About one in twelve preschool-aged children, however, show delays in speech production capability that may put them at risk
for academic and behavioral difficulties, if not identified and treated. Automated tools that can distinguish between children with and without speech
and language impairments could serve as a useful clinical tool for early
identification of speech related disorders in young children. In this study, we
consider measures based on detecting sequences of acoustic landmarks characteristic of normal speech production over multisyllabic words and continuous speech samples. Ten normal adults, ten typical children, and ten children with speech disorders recorded twenty multisyllabic words and thirty-three sentences. Acoustic landmark patterns within utterances and syllabic clusters are examined to characterize differences in landmark sequences between normal and disordered speech. Shannon’s entropy and ROC analysis are used to evaluate the landmark patterns as potential diagnostic measures of atypical speech production. We discuss these results and our future work
toward developing a fully automated clinical screening tool.
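Shannon's entropy over landmark patterns, as used above, can be illustrated on symbolic landmark sequences. The landmark labels and example strings below are invented; only the entropy computation itself is standard:

```python
import math
from collections import Counter

def sequence_entropy(seq, order=2):
    """Shannon entropy (bits) of the n-gram distribution of a symbolic
    landmark sequence; higher values mean less stereotyped patterns."""
    ngrams = [tuple(seq[i:i + order]) for i in range(len(seq) - order + 1)]
    counts = Counter(ngrams)
    total = sum(counts.values())
    return -sum(c / total * math.log2(c / total) for c in counts.values())

# Invented landmark strings over three symbols: a repetitive (typical-like)
# sequence vs. a more variable (disordered-like) one.
typical = list("gs+gs+gs+gs+gs+gs+")
disordered = list("gs+g+sgsg++ss+g+sg")
```

The repetitive sequence yields a lower bigram entropy than the variable one, which is the direction of difference such a measure would exploit diagnostically.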
4aSC8. Nasality patterns in word productions of children with cochlear
implants: Evidence from Greek. Areti Okalidou (Educational and Social
Policy, Univ. of Macedonia, 156 Egnatia St., P.O. Box 1591, Salonika 540
06, Greece, okalidou@uom.edu.gr), Laura L. Koenig (Commun. Sci. and
Disord., Long Island Univ., Brooklyn, NY), and George Psillas (1st University ENT Clinic, Ahepa Univ. Hospital, Salonika, Greece)
Typical speech production development requires adequate auditory
input. Children who are born deaf or become deaf in early childhood display
atypical speech patterns, especially for articulatory actions that cannot be
seen. Such actions include tongue backing, laryngeal actions, and velar
function. Several previous authors have discussed how inadequate auditory
input affects anterior-posterior tongue positioning and laryngeal parameters,
compromising phonetic contrasts (e.g., vowel and voicing distinctions), but
few studies have directly assessed velar control in hearing-impaired children. This work presents nasometer data for Greek-learning children with
cochlear implants [CI] and age- and sex-matched normal hearing [NH] controls, ages 5-16 years. Participants produced single words, elicited in
response to visual and auditory prompts. Bisyllabic word types of varying
stress pattern were contrasted in nasality and position of the target consonant. Based on the two nasometer microphone signals (oral and nasal), word
and segment boundaries were defined, and nasometer values over these
intervals were extracted. We will present whole-word data, comparing
nasalance values between NH and CI children, and examples showing atypical oral-nasal timing in the CI population. This preliminary analysis shows
some of the ways in which hearing-impaired children with cochlear implants
may show unusual patterns of velar control.
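Nasalance, the quantity a nasometer reports, is conventionally the nasal signal's energy as a percentage of combined nasal-plus-oral energy. A minimal frame-wise sketch (the window length and RMS energy definition are illustrative choices, not the study's processing):

```python
import numpy as np

def nasalance(nasal, oral, fs, win_s=0.025):
    """Frame-wise nasalance (%): nasal RMS energy as a percentage of the
    summed nasal and oral RMS energy, per non-overlapping analysis window."""
    n = int(win_s * fs)
    vals = []
    for i in range(0, min(len(nasal), len(oral)) - n + 1, n):
        en = np.sqrt(np.mean(nasal[i:i + n] ** 2))
        eo = np.sqrt(np.mean(oral[i:i + n] ** 2))
        vals.append(100.0 * en / (en + eo + 1e-12))  # avoid divide-by-zero
    return np.array(vals)
```

Averaging such frame values over word or segment intervals, defined from the two microphone signals, gives the whole-word nasalance comparisons described above.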
4aSC9. Relating ambulatory voice measures with self-ratings of vocal
fatigue in individuals with phonotraumatic vocal hyperfunction. Daryush Mehta, Jarrad Van Stan (Ctr. for Laryngeal Surgery and Voice Rehabilitation, Massachusetts General Hospital, One Bowdoin Square, 11th Fl.,
Boston, MA 02114, daryush.mehta@alum.mit.edu), Maria Lucia Masson
(Speech-Lang. Pathol. and Audiol. Dept., Federal Univ. of Bahia, Vale do
Canela, Salvador, Brazil), Marc Maffei (Dept. of Commun. Sci. and Disord.,
MGH Inst. of Health Professions, Boston, MA), and Robert E. Hillman (Ctr.
for Laryngeal Surgery and Voice Rehabilitation, Massachusetts General
Hospital, Boston, MA)
Advancements in mobile and wearable technologies continue to enhance
ambulatory voice monitoring for the improved assessment and treatment of
behaviorally based voice disorders. Phonotraumatic vocal hyperfunction is
one common behaviorally based voice disorder associated with faulty patterns of chronic vocal behavior that result in vocal fold tissue trauma, such
as nodules or polyps. As a result, individuals often exhibit dysphonia and
elevated levels of vocal fatigue. This study investigated the relationships
between self-ratings of vocal fatigue and ambulatory voice measures in 44
patients with vocal fold nodules or polyps and a control group of individuals
with normal voices matched for sex, age, and occupation. Using a smartphone-based ambulatory voice monitor, self-ratings were provided on a visual analog scale at five-hour intervals during the day, and data were
continuously recorded from a subglottal neck-surface accelerometer. Voice
dosimetry metrics and summary statistics of ambulatory voice measures
were derived from accelerometer-based estimates of sound pressure level,
fundamental frequency, and spectral and cepstral properties. Given the variance inherent in perceptual judgments, the analyses focused on comparisons between time periods that exhibited differences in self-ratings that
were determined to be significant (>19.7 points on a 100-point scale).
4aSC10. Application of laryngeal landmarks for characterization of
dysphonic speech. Keiko Ishikawa (Dept. of Commun. Sci. and Disord.,
Univ. of Cincinnati, 322 Eden Ave., P.O. Box 670379, Cincinnati, OH
45267-0379, ishikak@mail.uc.edu), Joel MacAuslan (Speech Technol. and
Appl. Res., Bedford, MA), and Suzanne E. Boyce (Dept. of Commun. Sci.
and Disord., Univ. of Cincinnati, Cincinnati, OH)
Dysphonia is often a result of laryngeal pathology, which elicits greater
aperiodicity and instability in a speech signal. These acoustic abnormalities
likely contribute to the intelligibility deficit reported by speakers with dysphonia. Acoustic analysis is commonly used in dysphonia evaluation; however, currently available algorithms focus on describing aspects of the
signal that are relevant to perception of voice quality. Signal abnormalities
contributing to the intelligibility deficit may be better described by a linguistically-motivated approach. One such approach, landmark-based analysis, describes a speech signal with acoustic markers that are relevant to speech
production and perception. The analysis further denotes onset and offset of
speech events. This study examined the utility of acoustic markers specifically designed to detect laryngeal events for differentiating normal and dysphonic speech signals. In particular, we examined three markers: two
markers that detect periodic moments with different acoustic rules, [g] (glottal) and [p] (periodicity); and one marker that detects moments of abrupt F0
change, [j] (jump). The analysis was performed on recordings of the first
sentence of the Rainbow Passage from 33 normal and 36 dysphonic speakers. Results suggest that, for the same speech materials, counts of these
markers differentiate dysphonic from normal speech.
4aSC11. Vocal turn-taking between mothers and their hearing-impaired infants with cochlear implants. Maria V. Kondaurova (Dept. of
Psychol. & Brain Sci., Univ. of Louisville, 317 Life Sci. Bldg., Louisville,
KY 46292, maria.kondaurova@louisville.edu), Jessa Reed (Dept. of Otolaryngology-Head and Neck Surgery, The Ohio State Univ. Medical Ctr.,
Columbus, OH), and Qi Zheng (Dept. of Biostatistics, University of Louisville, Louisville, KY)
Normal-hearing (NH) infants participate in social exchanges soon after
birth. What does the temporal organization of vocal turn-taking (VTT) look
like in infants with hearing loss with cochlear implants? This study examined VTT during spontaneous play in eight dyads of mothers and their hearing-impaired (HI) or age-matched NH infants (mean age 17.4 m at time 1).
Dyads came to two sessions, corresponding to 3 and 12 months post-implantation. Analyses demonstrated that although HI infants vocalized less than
NH infants, the proportion of vocalizations involved in VTT exceeded that
from NH infants at time 1 and was equal at time 2. For vocalizations
involved in VTT, there was a higher proportion of simultaneous speech in
HI dyads compared to NH dyads at time 1, but the direction reversed at time
2. At time 1, the number of turns was greater in the HI group but decreased
compared to the NH group at time 2. Duration of between-speaker pauses
(BSP) was shorter in infant-mother compared to mother-infant turns in the
NH group only. At time 2, the duration of BSP was shorter in infant-mother
compared to mother-infant turns in both groups. The results suggest that
infant hearing status affects temporal characteristics of VTT.
4aSC12. Predicting intelligibility of dysphonic speech with automatic
measurement of vowel related parameters. Keiko Ishikawa, Meredith
Meyer (Dept. of Commun. Sci. and Disord., Univ. of Cincinnati, 3202 Eden
Ave., P.O. Box 670379, Cincinnati, OH 45267-0379, ishikak@mail.uc.edu),
Joel MacAuslan (Speech Technol. and Appl. Res., Bedford, MA), and
Suzanne E. Boyce (Dept. of Commun. Sci. and Disord., Univ. of Cincinnati,
Cincinnati, OH)
Speakers with dysphonia frequently report difficulty with being understood in noisy places. Perception of vowels plays an important role in intelligibility. Dysphonic speech is typically characterized both by shorter
periodic intervals and an increased proportion of noise vs. harmonic components, potentially obscuring acoustic cues for identification of vowels. This
study examined whether vowel-related acoustic measures would predict
intelligibility of dysphonic speech. Sentences from the Consensus Auditory-Perceptual Evaluation of Voice (CAPE-V) were recorded from 18 dysphonic speakers (6 adult females, 6 adult males, and 6 children) and 3 normal speakers (1 adult female, 1 adult male, and 1 child). These sentences were analyzed acoustically by SpeechMark® (WaveSurfer plug-in version), a speech analysis tool based on the landmark theory of speech perception.
The sentences were presented to 45 listeners for perceptual rating of intelligibility at 3 S/N ratios. The following output parameters were compared:
vowel space area, number of vowels whose F1 and F2 values fell within a
standard vowel quadrilateral (defined based on existing literature), and number of vowels with normal formant bandwidth. Preliminary results indicated
that number of vowels with normal formant bandwidth falling within the
quadrilateral significantly predicted the intelligibility ratings of dysphonic
speech at all noise levels.
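Two of the parameters above, vowel space area and whether a vowel's (F1, F2) point falls within a standard quadrilateral, reduce to elementary computational geometry. A sketch with hypothetical corner-vowel formant values (not the literature-based reference quadrilateral the study used):

```python
import numpy as np

def polygon_area(pts):
    """Shoelace area of a polygon given as vertices in loop order."""
    x, y = np.asarray(pts, dtype=float).T
    return 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))

def in_quadrilateral(point, quad):
    """True if an (F1, F2) point lies inside a convex vowel quadrilateral:
    the edge cross products around the vertex loop all share one sign."""
    q = np.asarray(quad, dtype=float)
    p = np.asarray(point, dtype=float)
    edges = np.roll(q, -1, axis=0) - q
    rel = p - q
    cross = edges[:, 0] * rel[:, 1] - edges[:, 1] * rel[:, 0]
    return bool(np.all(cross >= 0) or np.all(cross <= 0))

# Hypothetical corner vowels as (F1, F2) in Hz, listed in loop order;
# the study's published reference values would be substituted here.
quad = [(270, 2290), (660, 1720), (730, 1090), (300, 870)]
```

Counting measured vowel tokens for which `in_quadrilateral` is true, restricted to tokens with normal formant bandwidths, yields the predictor the abstract reports as significant.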
4aSC13. Prosodic bootstrapping of syntax from cochlear implant-simulated speech. Kara E. Hawthorne (Commun. Sci. and Disord., Univ. of MS,
2-40 Assiniboia Hall, University of AB, Edmonton, AB T6G 2E7, Canada,
khawthor@olemiss.edu)
It has been well-documented that prosodic boundaries often align with
syntactic boundaries, and that both infants and adults capitalize on prosodic
cues to bootstrap knowledge of syntax. However, it is less clear which prosodic cues—pre-boundary lengthening, pauses, and/or pitch resets across
boundaries—are necessary for this bootstrapping to occur. It is also
unknown how syntax acquisition is impacted for listeners who do not have
access to the full spectrum of prosodic information. These questions were
addressed using noise vocoded speech, which simulates speech perceived
through a cochlear implant. While pre-boundary lengthening and pauses are
well-transmitted through noise vocoded speech, pitch is not. In two experiments, adults listening to noise vocoded speech performed similarly to
adults listening to unmanipulated speech in syntax acquisition tasks. This
suggests that lengthening and pause cues alone are sufficient to facilitate acquisition of some syntactic structures, and that listeners with cochlear
implants may be able to bootstrap syntax using prosody in a similar way as
individuals with normal hearing.
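Noise vocoding, as used above, replaces each frequency band's fine structure with noise while preserving its amplitude envelope, so durational cues (lengthening, pauses) survive while pitch largely does not. A deliberately crude sketch (brick-wall FFT bands and an unsmoothed envelope; real vocoders use filter banks and envelope smoothing):

```python
import numpy as np

def noise_vocode(signal, fs, n_channels=8):
    """Crude n-channel noise vocoder: split the signal into log-spaced
    brick-wall FFT bands, take each band's rectified amplitude envelope,
    and use it to modulate band-matched noise."""
    rng = np.random.default_rng(0)
    edges = np.geomspace(100.0, 0.9 * fs / 2, n_channels + 1)
    freqs = np.fft.rfftfreq(len(signal), 1.0 / fs)
    spec = np.fft.rfft(signal)
    nspec = np.fft.rfft(rng.standard_normal(len(signal)))
    out = np.zeros(len(signal))
    for lo, hi in zip(edges[:-1], edges[1:]):
        keep = (freqs >= lo) & (freqs < hi)
        band = np.fft.irfft(np.where(keep, spec, 0), len(signal))
        nband = np.fft.irfft(np.where(keep, nspec, 0), len(signal))
        out += np.abs(band) * nband   # envelope of the band x band noise
    return out
```

Because only per-band envelopes are retained, pre-boundary lengthening and pauses pass through largely intact while F0-based pitch resets are lost, which is exactly the cue asymmetry the experiments above exploit.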
4aSC14. Perception of voice gender in children’s voices by cochlear implant users. Daniel R. Guest, Michelle R. Kapolowicz, Vahid Montazeri, and Peter F. Assmann (School of Behavioral and Brain Sci., Univ. of Texas at Dallas, GR41 The University of Texas at Dallas, Box 830688, Richardson, TX 75083, daniel.guest@utdallas.edu)
Previous research [Assmann et al., J. Acoust. Soc. Am. 138, 1811 (2015)] investigated normal-hearing (NH) listeners’ ability to discriminate age and gender in children’s speech. Speech stimuli (/hVd/ syllables from 140 speakers between 5 and 18 years of age) were processed using STRAIGHT to simulate a change in perceived gender. Experimental conditions involved swapping the fundamental frequency (F0) contour and/or formant frequencies (FF) to the opposite-sex average at each age level. This research was extended by presenting the stimuli to cochlear implant (CI) users. Preliminary results from two CI users have led to two main conclusions. First, whereas NH listeners used both F0 and FF to discriminate voice gender, CI users relied primarily on F0. Second, NH listeners and CI users demonstrated differential patterns of voice gender misclassification, particularly in the case of young children. NH listeners, while frequently making errors, identified a majority of young boys as male and young girls as female. In contrast, CI users identified most young children as female.
4aSC15. Perception of coarticulation in listeners with cochlear implants
and other spectrally degraded conditions. Steven P. Gianakas and Matthew Winn (Speech & Hearing Sci., Univ. of Washington, 1417 NE 42nd
St., Seattle, WA 98105, spgia5@uw.edu)
Hearing loss can lead to not only decreased overall word recognition,
but also poor access to cues that help us identify words more quickly in context. One such cue is coarticulation, the overlap of articulatory gestures in neighboring sounds, which listeners use to identify an upcoming word more quickly. This study measures the benefit of coarticulation when the
incoming speech signal is spectrally degraded, as with the use of a cochlear
implant or other degradation. In a visual world eye-tracking paradigm, listeners looked to four pictures of named objects while listening to speech
stimuli in which the vowel preceding a target word contained natural cooperating coarticulation cues, conflicting cues (for a different word), or neutral
cues (no coarticulation). The benefit of coarticulation was measured as
reduction of latency of eye movements elicited by the cooperating cue compared to neutral cues, or the increase in latency resulting from conflicting
cues; both situations would show evidence of sensitivity to coarticulation.
Preliminary results suggest that coarticulation perception is partially robust
to degradation for normal-hearing listeners but is highly variable/deficient
in cochlear implant listeners, suggesting a disadvantage in the speed of
word recognition that would not be evident in conventional word recognition scores.
4aSC16. Dyslexia limits listener responsiveness to indexical cues in
speech. Robert A. Fox, Ewa Jacewicz, and Gayle Long (Dept. of Speech and Hearing Sci., The Ohio State Univ., 110 Pressey Hall, 1070 Carmack
Rd., Columbus, OH 43210-1002, fox.2@osu.edu)
Recent research has shown that the underlying phonological impairment
in dyslexia (DYS) is associated with a deficit in recognizing indexical features in voices of multiple talkers, including talker dialect. This study further examined sensitivity (measured in A-prime) to indexical information
using stimuli which varied the nature and the redundancy of acoustic cues.
Twenty DYS adults, 20 DYS children and 40 corresponding controls from
Ohio listened to three sets of stimuli: unaltered, low-pass filtered (LP) at
400 Hz (retaining voice information but little content), and eight-channel
noise-vocoded speech (VS) (eliminating all harmonic information). Listeners identified talker dialect (Ohio, North Carolina) and sex. Compared with
controls, performance of DYS listeners was significantly lower although the
overall pattern was consistent for both groups: For the degraded stimuli,
sensitivity to dialect was greater in VS than in LP, and sensitivity to talker
sex was greater in LP than in VS. However, DYS listeners were disproportionately less sensitive to talker sex in VS. Overall, children performed significantly worse than adults for all three stimulus types. The current results further reveal that individuals with dyslexia are deficient in utilizing indexical features, irrespective of the amount of acoustic information available in the stimulus speech.
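The A-prime sensitivity measure reported above is a standard nonparametric index computed from hit and false-alarm rates (Grier, 1971); a direct transcription of the formula:

```python
def a_prime(hit_rate, fa_rate):
    """Nonparametric sensitivity index A' (Grier, 1971): 0.5 is chance,
    1.0 is perfect discrimination; the second branch handles the
    below-chance case hit_rate < fa_rate."""
    h, f = hit_rate, fa_rate
    if h >= f:
        return 0.5 + (h - f) * (1 + h - f) / (4 * h * (1 - f))
    return 0.5 - (f - h) * (1 + f - h) / (4 * f * (1 - h))
```

For instance, a listener with a 0.9 hit rate and a 0.1 false-alarm rate on dialect identification scores well above a listener at 0.6 and 0.4, without assuming the equal-variance Gaussian model behind d'.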
4aSC17. Disability rights aspects of ambient noise for people with auditory disorders. Daniel Fink (The Quiet Coalition, P.O. Box 533, Lincoln, MA 01733, DJFink@thequietcoalition.org)
The United States and European Union (EU) guarantee people with disabilities certain rights, with goals of full enjoyment, active inclusion, and
equal participation in society. This approach is also found in the United
Nations Convention on the Rights of Persons with Disabilities, adopted by
the EU. Noise is a disability rights issue for individuals with hearing loss. Many
cannot understand speech in noisy places, with or without hearing aids. Ambient noise levels below 60 A-weighted decibels with a reverberation time
under 0.50 seconds are needed to allow those with partial hearing loss to follow normal conversations. Noise worsens symptoms for those with tinnitus
and hyperacusis. Noisy restaurants, stores, and other places deny full enjoyment and equal participation in public life to those with hearing loss, tinnitus, and hyperacusis. Legislative and regulatory action is needed to provide
quiet environments, with established noise standards vigorously enforced.
Technologies and environmental modifications to control noise are well
known, readily available, and relatively inexpensive. The simplest modification, which costs nothing, is merely turning down the volume of amplified
sound. Quiet facilitates communication for everyone and prevents development of hearing loss, tinnitus, and hyperacusis in those without auditory
disorders.
4aSC18. An optimized lumped model for tracheoesophageal speakers. Guilherme Zanotelli and Andrey R. da Silva (Dept. of Mech. Eng., Federal Univ. of Santa Catarina, Rua Monsenhor Topp, 173, Florianópolis, Santa Catarina 88020-500, Brazil, andrey.rs@ufsc.br)
Since the advent of the first lumped model of the phonatory system by
Flanagan and Landgraf (1968), many improvements have been made in
order to capture more realistic features of the human phonation in glottal
speakers. Although lumped models are quite limited when it comes to representing realistic structural and fluid dynamic aspects, they are very important to understand fundamental features of human phonation. Nevertheless,
only a few lumped models have been proposed in order to investigate the
dynamics of the pharyngoesophageal segment in tracheoesophageal speakers. This work presents improvements to a lumped model of the pharyngoesophageal segment first proposed by Schwarz et al. (2011). The
improvements have been achieved by conducting an optimization process
that involves the glottal volume flow as the optimization function, using as
reference the experimental data obtained from tracheoesophageal speakers.
The results suggest that the lumped model of the esophageal region can be
extended to capture important features, such as the pressure threshold for
the onset of self-sustained oscillation.
4aSC19. Covert acoustic markers of alveolar-velar stop contrasts in the
speech of two-year-old children with and without repaired cleft palate.
Marziye Eshghi (Speech, Lang. and Hearing Sci., Univ. of North Carolina
at Chapel Hill, 002 Brauer Hall, Craniofacial Ctr., Chapel Hill, NC 27599,
marziye_eshghi@med.unc.edu) and David Zajac (Dental Ecology, Univ. of
North Carolina at Chapel Hill, Chapel Hill, NC)
The development of backed alveolar stops in children with repaired cleft
palate (CP) may be due to reduced maxillary arch dimensions (Zajac et al.,
2012; Eshghi et al., 2013). Shriberg et al. (2003) have also suggested that otitis media (OM) may be a marker for backed articulation in children without CP. The present study sought to explore acoustic markers of alveolar and velar stops in two-year-old children with and without repaired CP. All
children were from American-English speaking families, had competent VP
function as determined by nasal ram pressure (NRP) monitoring, and had
hearing within normal limits. Speech samples consisted of words with initial
alveolar and velar stops. Audio recordings of six children with repaired CP (2 males, 4 females), seven children with OM (6 males, 1 female), and 13
(2 males, 4 females), seven children with OM (6 males, 1 female), and 13
typically developing (TD) children (7 males, 6 females) were analyzed
acoustically. Results indicated that mean first spectral moment was smallest
(6.03 kHz) for the alveolar sounds produced by children with CP, followed
by children with OM (6.22 kHz), and then by TD children (6.61 kHz)
(p = 0.15). These preliminary trends suggest that both maxillary anomalies
and/or fluctuating hearing loss may contribute to the development of palatalized stops. [Research reported in this publication was supported by the
National Institute of Dental & Craniofacial Research of the National Institutes of Health under Award Number 1R01DE022566-01A1.]
4aSC14. Perception of voice gender in children’s voices by cochlear implant users. Daniel R. Guest, Michelle R. Kapolowicz, Vahid Montazeri, and Peter F. Assmann (School of Behavioral and Brain Sci., Univ. of Texas at Dallas, GR41 The University of Texas at Dallas, Box 830688, Richardson, TX 75083, daniel.guest@utdallas.edu)
Acoustics ’17 Boston
3839
4a WED. AM
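The first spectral moment reported for the stop bursts in 4aSC19 above is the spectral centroid of the burst spectrum. A minimal sketch follows; the window choice, function name, and test signal are illustrative assumptions, not the authors' analysis pipeline.

```python
import numpy as np

def first_spectral_moment(signal, fs):
    """First spectral moment (spectral centroid, Hz): the power-weighted
    mean frequency of a windowed segment sampled at rate fs."""
    windowed = signal * np.hanning(len(signal))
    power = np.abs(np.fft.rfft(windowed)) ** 2      # power spectrum
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    return np.sum(freqs * power) / np.sum(power)
```

For a pure tone the centroid sits at the tone frequency; for a stop burst it summarizes where the burst energy is concentrated, which is the quantity compared across the CP, OM, and TD groups.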
4aSC20. Measuring the “glottal waveform” in laryngectomized patients. Andressa Beckert Otto and Andrey R. da Silva (Dept. of Mech. Eng., Federal Univ. of Santa Catarina, Rua Monsenhor Topp, 173, Florianópolis, Santa Catarina 88020-500, Brazil, andrey.rs@ufsc.br)
There are several noninvasive and well-devised techniques for measuring the glottal waveform of human subjects, most of which rely on
source-filter separation by digital inverse filtering. A more direct and simple
approach relies on the use of the Sondhi tube technique to record the subjects’ voice, from which the glottal waveform is extracted, assuming that
the effect of the vocal tract is negligible on the recorded sound. Although
these techniques can capture the characteristics of the source waveform
from normal speakers with reasonable accuracy, they have limitations when
used to capture the source waveform of tracheoesophageal and esophageal
speakers. This is due to the fact that these types of speech are highly aperiodic, which may compromise the accuracy of inverse filtering techniques.
Moreover, the acoustic coupling between the vocal tract and the source in
tracheoesophageal and esophageal speech is not yet well understood, and the assumptions implied by the Sondhi tube technique cannot be safely made.
This work proposes a new experimental technique to capture the source
waveform of tracheoesophageal and esophageal speakers based on the
layer-peeling technique using an open-closed tube with two microphones and
a sound source. Results for the source waveform of both normal and tracheoesophageal speakers are captured with the new technique and compared
with traditional methods based on both inverse filtering and the Sondhi tube.
4aSC21. The impact of deictic gesture on vowel acoustics in childhood
apraxia of speech. Kathryn Connaghan (Commun. Sci. & Disord., Northeastern Univ., 360 Huntington Ave., Forsyth Bldg. - Rm. 226, Boston, MA
02115, k.connaghan@northeastern.edu) and Heather Rusiewicz (SpeechLang. Pathol., Duquesne Univ., Pittsburgh, PA)
A growing literature supports the notion that manual gestures and speech
are entrained, with one movement modulating the spatiotemporal properties
of the other. For instance, Krahmer and Swerts (2007) demonstrated that
gestures elicited unintentional prosodic stress production in healthy adult
speakers. Understanding this relationship in individuals with motor speech
disorders could inform both the underlying neuromotor impairment and the
design of efficacious interventions. For example, two of the hallmark characteristics of childhood apraxia of speech (CAS) are atypical prosody and
vowel distortions (ASHA, 2007). Given the impact of prosodic stress on
vowel formants (e.g., Hay et al., 2006), enhanced prosody elicited through
gestures may improve vowel clarity and consistency. The current investigation was designed to explore the relationship between manual and speech
gestures in CAS by evaluating the consequences of deictic (pointing) gestures on vowel acoustics. Participants included children with CAS and
healthy controls (ages 6-12 years). Vowel formants (F1, F2) were extracted
from utterances produced with and without targeted stress and gestures.
Metrics of formant centralization, vowel distinctiveness, and consistency of
production were compared across stress and gesture conditions and between
speaker groups. Preliminary findings suggest the potential of manual gestures to facilitate vowel production in CAS.
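Formant centralization of the kind measured in 4aSC21 above can be quantified, for example, with the formant centralization ratio (FCR) of Sapir et al. (2010), which rises toward and above 1 as the corner vowels collapse toward the center of the vowel space. The choice of this particular metric, and the formant values in the test, are illustrative assumptions; the abstract does not specify which centralization metric was used.

```python
def formant_centralization_ratio(formants):
    """FCR = (F2u + F2a + F1i + F1u) / (F2i + F1a).
    `formants` maps the corner vowels 'i', 'u', 'a' to (F1, F2) pairs in Hz."""
    f1 = {v: f[0] for v, f in formants.items()}
    f2 = {v: f[1] for v, f in formants.items()}
    return (f2['u'] + f2['a'] + f1['i'] + f1['u']) / (f2['i'] + f1['a'])
```

A well-dispersed vowel space yields an FCR below 1; centralized (e.g., distorted) vowels push the numerator up and the denominator down.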
4aSC22. Effectiveness of an intervention program combining direct vocabulary instruction and individualized phonetic training for Mandarin-speaking young children with specific language impairment.
Yuchun Chen (Ctr. of Teacher Education, No. 510 Zhongzheng Rd, Xinzhuang Dist., New Taipei City 24205, Taiwan, 128162@mail.fju.edu.tw)
Children with specific language impairment (SLI) demonstrate deficits
in vocabulary development and novel word learning process, which have
been proposed to stem from their speech perception and phonological processing deficits. In this study, we designed an intervention program combining small-group direct vocabulary instruction with an individualized phonetic training computer program. Twenty-five 5- to 7-year-old children with SLI participated in the study, and another 15 children with SLI served as a control group. All participants in the experimental group attended a 50-minute direct vocabulary instruction class with interactive activities for ten weeks. The phonetic training computer games included AX discrimination and identification tasks of Mandarin consonant and lexical tone minimal pairs presented at single-word and sentence levels. Children were asked to play the phonetic training games at least 30 minutes a week at home and were encouraged to play more if they preferred. Results showed that children in the experimental group performed better than the control group in the post-test, including receptive and expressive vocabulary knowledge. In addition, there was a correlation between the gain in the post-test and the time spent in the phonetic training game. The results suggest that vocabulary intervention programs incorporating speech perception training promote children’s vocabulary abilities.
4aSC23. Acoustic consequences of speech treatment in children with
cerebral palsy. Younghwa M. Chang, Sarah Eldib (Columbia Univ., Teachers College, Columbia University, 525 W 120th St., New York, NY 10027,
ymc2111@tc.columbia.edu), Megan J. McAuliffe (Univ. of Canterbury,
Christchurch, New Zealand), and Erika Levy (Columbia Univ., New York,
NY)
Cerebral palsy (CP) is the most common motor disability in childhood
and can severely impact speech intelligibility. However, sparse evidence
exists on which to base treatment. The current study examined the acoustic
outcomes of Speech-Systems-Intelligibility-Treatment (Levy, 2014), a
speech treatment aimed at enhancing intelligibility by targeting increased
articulatory-working-space and vocal intensity through child-friendly
prompts. Thirteen American-English speaking children with spastic dysarthria due to CP were assigned to a speech treatment group or a control group.
Before and after treatment, participants repeated pre-recorded minimal pair
nonsense words contrasting only in the initial consonant. Acoustic analysis
of the children’s nonsense word productions indicated that on average, duration and sound-pressure-level (SPL) increased for the speech group post-treatment. Similar increases were observed in the control group’s SPL and
duration, but to a lesser extent. Thus, preliminary results suggest promising
treatment effects on speech acoustics for repeated words. Analyses of fricative-affricate contrasts and F2 slope changes shed additional light on stability and change in the speech of children with dysarthria in response to
treatment. Additionally, changes in connected speech without a model talker
may point to possible transfer of new skills to more spontaneous speech.
4aSC24. Vowel contrasts relative to schwa across tasks: Preliminary
findings for Parkinson’s disease. Christina Kuo (Commun. Sci. and Disord., James Madison Univ., 235 Martin Luther King Jr. Way, MSC 4304,
Harrisonburg, VA 22807, kuocx@jmu.edu)
The purpose of this study is to quantify acoustic contrasts of vowels in
relation to the mid-central unrounded vowel schwa produced by speakers
with Parkinson’s disease (PD) in different speaking tasks. The study is motivated by a hypothesis that schwa may serve as a speaker-specific reference
for vowel contrasts given its associated anatomical properties of a neutral
vocal tract. For the present study, a speaker-specific reference schwa is identified from averaged first and second formant (F1 and F2) frequencies of
unstressed article “a” productions in citation form. Two questions are
addressed. First, can speaking-task related changes in vowel contrasts be
expressed by vowel-schwa distances in the acoustic space consistently
across speakers? Second, do characteristics of schwa-referenced vowel contrasts differ for speakers with PD and healthy speakers? F1 and F2 frequencies of schwa and vowels /i/, /a/, and /u/ in three tasks including clearspeech, sentence reading, and passage reading are examined. The Euclidean
distance between each vowel and schwa is evaluated for a given speaker
across tasks. It is hypothesized that the within-speaker distances between
vowels and schwa are task-sensitive in the direction of clear-speech to passage reading. Preliminary findings will be discussed within the framework
of the acoustic theory and the Hyper- and Hypo-speech theory.
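The vowel–schwa distance described above is a Euclidean distance in the F1–F2 plane between a vowel and the speaker-specific reference schwa. A minimal sketch (the formant values in the test are illustrative):

```python
import math

def vowel_schwa_distance(vowel, schwa):
    """Euclidean distance (Hz) between a vowel and the reference schwa,
    each given as an (F1, F2) pair."""
    return math.hypot(vowel[0] - schwa[0], vowel[1] - schwa[1])
```

Comparing these distances across tasks for a given speaker expresses task-related changes in vowel contrast relative to that speaker's own neutral reference.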
4aSC25. Comparison of thumb-pressure vs. electromyographic modes
of frequency modulation for electrolaryngeal speech. Kathleen Nagle
(Speech-Lang. Pathol., Seton Hall Univ., MGH Dept. of Laryngeal Surgery,
One Bowdoin Square, 11th Fl., Boston, MA 02114, kfnagle@uw.edu) and
James T. Heaton (Ctr. for Laryngeal Surgery & Voice Rehabilitation, Massachusetts General Hospital, Boston, MA)
Speaking with natural prosodic patterns is tremendously challenging when using an electrolarynx (EL). Some ELs enable dynamic fundamental frequency (f0) variation to provide prosodic patterns and thereby improve speech naturalness. This study compares EL speech produced using thumb-pressure f0 variation (TruTone™ EL) versus an experimental method wherein f0 variation is derived from submental (under-chin) electromyographic (EMG) signals (EMG-EL). Eighteen laryngectomees provided sentence-length samples of speech using these two EL devices, and measures of f0 mean, SD, and range were made. The f0 coefficient of variation (f0CV; SD f0/mean f0) was also calculated as a measure of f0 variation relative to the mean (Cartei et al., 2012). Paired t-tests of f0 measures were used to compare EL devices within participants. Mean f0 range was significantly greater for the EMG-EL device than the thumb-pressure EL device (16 of 18 speakers; p < .05). Although mean f0 was roughly the same across devices, f0CV was significantly higher for the EMG-EL than the thumb-button EL (17 of 18 speakers; p < .05). The EMG-EL device enabled greater f0 variation than a thumb-pressure-controlled device, which is consistent with our prior findings that the EMG-EL supports more natural-sounding speech, even for EL users who own the thumb-pressure device.
4aSC26. The effect of hearing acuity on using semantic expectancy in degraded speech. Katherine M. Simeon (Commun. Sci. & Disord., Northwestern Univ., 2240 Campus Dr., Frances Searle Bldg. Rm. 2-381, Evanston, IL 60208, ksimeon@u.northwestern.edu), Klinton Bicknell (Linguist., Northwestern Univ., Evanston, IL), and Tina M. Grieco-Calub (Commun. Sci. & Disord., Northwestern Univ., Evanston, IL)
Speech is degraded by extrinsic factors (e.g., background noise), intrinsic
factors (e.g., hearing loss), or a combination of both. Listeners can compensate for degradation with semantic expectancy, which is the ability to predict
speech from surrounding linguistic information during spoken language
processing. Though listeners with hearing loss use expectancy for speech
understanding (Lash et al., 2013; Smiljanic & Sladen, 2013), little is known
about whether their reliance on expectancy competes with their processing
of acoustic input. This project examines how acoustic degradation, from
background noise and hearing loss, influences listeners’ use of expectancy
and how this processing affects speech perception. Adults were presented
with sentences containing concrete, monosyllabic words in sentence-final
position in speech-shaped noise in different SNRs (Bloom & Fischler, 1980).
These words were interchanged to create congruent expectancy sentences
(i.e., the final word was semantically related) and conflicting expectancy sentences (i.e., the final word was not semantically related). For conflicting expectancy sentences, normal-hearing listeners had poorer speech recognition
accuracy in noise, suggesting increased use of semantic expectations when
processing degraded speech. Preliminary data show listeners with hearing
loss have similar performance with no background noise present. This presentation will discuss results from adults with and without hearing loss.
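The f0 coefficient of variation used in 4aSC25 above is simply the f0 standard deviation normalized by the mean:

```python
import statistics

def f0_cv(f0_hz):
    """f0 coefficient of variation (f0CV): SD of f0 divided by mean f0,
    a scale-free measure of f0 variation (Cartei et al., 2012)."""
    return statistics.stdev(f0_hz) / statistics.mean(f0_hz)
```

Normalizing by the mean makes variation comparable across speakers and devices whose average pitch differs.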
WEDNESDAY MORNING, 28 JUNE 2017
ROOM 302, 8:15 A.M. TO 12:20 P.M.
Session 4aSP
Signal Processing in Acoustics, Underwater Acoustics, and Biomedical Acoustics: Sparse and Co-Prime
Array Processing I
Efren Fernandez-Grande, Cochair
Acoustic Technology, DTU - Technical University of Denmark, Ørsteds Plads, B. 352, DTU, Kgs. Lyngby DK-2800, Denmark
John R. Buck, Cochair
ECE, UMass Dartmouth, 285 Old Westport Road, North Dartmouth, MA 02747
Chair’s Introduction—8:15
Invited Papers
8:20
4aSP1. Bilinear problems and sparse arrays. Ali Koochakzadeh and Piya Pal (Elec. and Comput. Eng., Univ. of California, San
Diego, 9500 Gilman Dr., Mail Code 0407, La Jolla, CA 92093, pipal@eng.ucsd.edu)
Inverse problems in imaging with sensor arrays often involve estimation of desired parameters (associated with targets or sources) in
presence of undesired quantities such as modeling and calibration errors, or uncertainties in accurate modeling of the propagation medium. These errors and uncertainties introduce additional unknown parameters in the measurement model, giving rise to so-called
“bilinear problems.” In such models, the measurements collected at the sensor array are linear with respect to the desired parameters,
when the nuisance parameters are held constant, and vice versa. In this talk, we will address the question of solving such bilinear problems in a unified manner. Important distinctions will be made between gain/phase calibration errors, and sensor perturbation errors. We
will show that while the former gives rise to conventional bilinear models, the latter produces a mixed affine-linear model, with completely different ambiguity sets. The role of sparse arrays and the pattern of repetitive elements in their difference sets will be shown to
play crucial roles in determining the number of unknown parameters that can be provably inferred from such models, via clever elimination of nuisance parameters. We will also consider robust convex and non-convex algorithms for solving exact or relaxed versions of
bilinear problems and compare their relative performances.
8:40
4aSP2. Hourglass arrays: Planar sparse arrays with hole-free coarrays and reduced mutual coupling. Chun-Lin Liu and Palghat
Vaidyanathan (Elec. Eng., California Inst. of Technol., 1200 E California Blvd., MC 136-93, Pasadena, CA 91125, cl.liu@caltech.edu)
Linear (1D) sparse arrays such as nested arrays have hole-free difference coarrays with O(N²) virtual sensor elements, where N is the number of physical sensors. This property implies that O(N²) monochromatic and uncorrelated sources can be identified. For the 2D case, planar sparse arrays with hole-free coarrays having O(N²) elements have also been known for a long time. These include billboard arrays, open box arrays (OBA), and 2D nested arrays. Their merits are similar to those of the 1D sparse arrays mentioned above, although identifiability claims regarding O(N²) sources have to be handled with more care in 2D. In this presentation, we propose hourglass arrays, which have closed-form 2D sensor locations and hole-free coarrays with O(N²) elements just like the OBA. Furthermore,
the mutual coupling effect, which is the undesired interaction between sensors, is reduced since the number of sensor pairs with small spacings such as λ/2 decreases. Among the planar arrays mentioned above, simulations show that hourglass arrays have the best estimation performance in the presence of mutual coupling.
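The hole-free O(N²) coarray property can be checked numerically for the 1D nested array mentioned above. The sketch below is not the hourglass construction of this paper; it builds a nested array with N1 = N2 = 3 and verifies that its difference coarray is a gap-free run of 23 virtual elements obtained from only 6 physical sensors.

```python
import numpy as np

def nested_array(n1, n2):
    """Sensor positions of a 1-D nested array, in half-wavelength units:
    a dense ULA at 1..N1 plus a sparse ULA at multiples of (N1 + 1)."""
    inner = np.arange(1, n1 + 1)
    outer = (n1 + 1) * np.arange(1, n2 + 1)
    return np.union1d(inner, outer)

def difference_coarray(positions):
    """All distinct pairwise differences n_i - n_j (the virtual array)."""
    diffs = positions[:, None] - positions[None, :]
    return np.unique(diffs)

sensors = nested_array(3, 3)            # 6 physical sensors
coarray = difference_coarray(sensors)   # contiguous virtual aperture -11..11
```

With N1 = N2 = 3 the coarray spans every integer lag from -11 to 11 with no holes, which is what enables identifying more uncorrelated sources than physical sensors.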
9:00
4aSP3. A total generalized variation approach for near-field acoustic holography. Efren Fernandez-Grande (Acoust. Technol., DTU
- Tech. Univ. of Denmark, Ørsteds Plads, B. 352, DTU, Kgs. Lyngby DK-2800, Denmark, efg@elektro.dtu.dk)
Near-field methods based on microphone array measurements are useful to understand how a source radiates sound. Due to discretization errors, these methods are typically restricted to low frequencies. Sparse approaches have gained considerable attention, as they
can potentially recover a seemingly under-sampled signal with remarkable accuracy, extending the valid frequency range. However,
near-field problems are generally not spatially sparse, and it is more appropriate to promote block-sparse solutions (i.e. spatially
extended) rather than direct spatial sparsity. In this paper, a method is examined that promotes solutions with sparse spatial derivatives.
The method seeks spatially extended solutions, valid over a wide frequency range, and suitable to near-fields and extended sources. The
methodology is based on a Total Variation approach using higher order derivatives. The frequency range of validity is examined, as well
as the robustness to noise. The performance of different finite difference stencils is investigated. Numerical and experimental results are
presented, with particular focus on the estimated power radiated by the source. The method is benchmarked against conventional
approaches.
9:20
4aSP4. Sparse distributed sensor placement via statistical restricted isometry property. Jeffrey S. Rogers and Geoffrey F. Edelmann (Acoust. Div., Naval Res. Lab, 4555 Overlook Ave. SW, Code 7161, Washington, DC 20375, jeff.rogers@nrl.navy.mil)
Perfect sensor coverage of large ocean volumes is an intractable problem for small N systems. Instead, this paper presents optimal
placement of relatively few sensors in order to achieve coherent array processing. The sensor placement will simulate acoustic beam
steering despite being a severely underdetermined problem. Utilizing the statistical restricted isometry property (StRIP), with high probability, a stable (invertible) sparse array will be formed. Additionally, sensor placement will be shown with spatial constraints such as
improved field of view (resolution and side lobe suppression) in a direction of a priori interest. A comparison of the StRIP optimized network will be made with co-prime samplers, random arrays, and Wichmann and Golomb rulers. [This work was supported by ONR.]
9:40
4aSP5. Estimation of surface impedance using different types of microphone arrays. Antoine Richard (Acoust. Technol., Tech. Univ. of Denmark, Ørsteds Plads Bldg. 352, Kongens Lyngby 2800, Denmark, apar@elektro.dtu.dk), Efren Fernandez-Grande, Jonas Brunskog, Cheol Ho Jeong (Acoust. Technol., Tech. Univ. of Denmark, Kgs. Lyngby, Denmark), Karim Haddad (Brüel & Kjær Sound & Vib. Measurement A/S, Nærum, Denmark), Jorgen Hald (Brüel & Kjær Sound & Vib. Measurement A/S, Nærum, Denmark), and Woo-Keun Song (Brüel & Kjær Sound & Vib. Measurement A/S, Nærum, Denmark)
This study investigates microphone array methods to measure the angle dependent surface impedance of acoustic materials. The
methods are based on the reconstruction of the sound field on the surface of the material, using a wave expansion formulation. The
reconstruction of both the pressure and the particle velocity leads to an estimation of the surface impedance for a given angle of incidence. A porous type absorber sample is tested experimentally in anechoic conditions for different array geometries, sample sizes, incidence angles, and distances between the array and sample. In particular, the performances of a rigid spherical array and a double layer
planar array are examined. The use of sparse array processing methods and conventional regularization approaches is studied. In addition, the influence of the size of the sample on the surface impedance estimation is investigated using both experimental data and numerical simulations with a boundary element model. Results indicate that a small distance between the planar array and the sample favors
a more robust estimation.
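For a locally reacting surface under a plane-wave assumption, the angle-dependent surface impedance and the reflection coefficient are related in closed form. The sketch below shows only that textbook relation, not the wave-expansion sound-field reconstruction used in the paper; the impedance value in the test is illustrative.

```python
import numpy as np

RHO_C = 413.0  # approximate characteristic impedance of air at 20 °C [Pa·s/m]

def impedance_from_reflection(r, theta):
    """Surface impedance from the plane-wave reflection coefficient r at
    incidence angle theta (radians), for a locally reacting surface."""
    return RHO_C * (1 + r) / ((1 - r) * np.cos(theta))

def reflection_from_impedance(z, theta):
    """Inverse relation: reflection coefficient from surface impedance z."""
    return (z * np.cos(theta) - RHO_C) / (z * np.cos(theta) + RHO_C)
```

The two functions are exact inverses, which makes either form usable depending on whether the reconstruction yields pressure/velocity (hence impedance) or a reflection estimate.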
10:00–10:20 Break
10:20
4aSP6. Comparison of multiplicative and min processors for coprime and nested geometries using the Elba Island data set. Vaibhav Chavali and Kathleen E. Wage (Elec. Eng., George Mason Univ., 4400 University Dr., MSN 1G5, Fairfax, VA 22030, vchavali@
gmu.edu)
Vaidyanathan and Pal describe nested and coprime array geometries that provide significant sensor savings when compared to
densely populated Uniform Linear Arrays (ULAs) [IEEE Trans. Sig. Proc., 2010, 2011]. They show that a multiplicative processor can
reconstruct the spatial power spectrum from the sparse measurements made by a Nested Array (NA) or Coprime Sensor Array (CSA).
Multiplication of the beamformed outputs of two undersampled subarrays eliminates the ambiguity due to aliasing, but requires temporal
averaging to mitigate cross terms. Prior work analyzes a multiplicative CSA processor using data from an underwater vertical array
deployed near Elba Island [Chavali et al., Asilomar Conf. SS&C, 2014]. Compared to the conventional spectrum obtained with the
ULA, the multiplicative CSA spectrum for the Elba data contains endfire interference due to cross terms associated with coherent mode
arrivals. Liu and Buck propose an alternative processor, which uses the minimum of the two subarray outputs as the spectral estimate
[IEEE SAM, 2016]. This talk compares performance of the multiplicative and min processors using the NA and CSA subarrays designed
for the Elba experiment. The NA-min processor produces the best spectral estimate for this scenario. [Work supported by ONR Basic
Research Challenge Program.]
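The multiplicative and min processors compared here can be sketched for a coprime pair of undersampled subarrays: each subarray's conventional beampattern has grating lobes, but the coprimality of the spacings ensures the product (or pointwise minimum) of the two spectra keeps a unique peak at the true direction. The geometry (M = 3, N = 4) and source direction below are illustrative, not the Elba configuration.

```python
import numpy as np

def subarray_beampower(positions, look_u, source_u):
    """Conventional-beamformer power of a subarray (positions in units of
    lambda/2), scanned over direction cosines look_u, for a unit-amplitude
    plane wave arriving from direction cosine source_u."""
    d = np.asarray(positions, dtype=float)
    x = np.exp(1j * np.pi * d * source_u)                      # received snapshot
    steer = np.exp(1j * np.pi * d[:, None] * look_u[None, :])  # steering matrix
    return np.abs(steer.conj().T @ x / d.size) ** 2

u = np.linspace(-1.0, 1.0, 2001)
pa = subarray_beampower(4 * np.arange(3), u, 0.5)  # M=3 sensors, spacing 4*(lambda/2)
pb = subarray_beampower(3 * np.arange(4), u, 0.5)  # N=4 sensors, spacing 3*(lambda/2)
product_spec = pa * pb           # multiplicative processor
min_spec = np.minimum(pa, pb)    # min processor (Liu & Buck, 2016)
```

Each subarray alone is ambiguous, yet both combined spectra peak only at u = 0.5; with real snapshots the product requires temporal averaging to suppress cross terms, which is the weakness the min processor addresses.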
10:40
4aSP7. Spectrum-dependent bandpass beampattern modeling and spatial filtering with coprime linear microphone arrays. Dane
R. Bush and Ning Xiang (Architectural Acoust., Rensselaer Polytechnic Inst., 2609 15th St., Troy, NY 12180, danebush@gmail.com)
Coprime linear microphone arrays consist of subarrays, each with inter-element spacing of half the wavelength observed times an integer factor. These two factors, M and N, corresponding to each subarray, are coprime, which ensures that their sensitivity completely
overlaps only in the direction of the main beam. This implies a single observable wavelength, thus frequency; however, the grating-lobe-mitigating effect can also be achieved for broadband sources [D. Bush and N. Xiang, J. Acoust. Soc. Am., 138, 447-456 (2015)]. A
modified Laplacian function provides a phenomenological model for broadband noise array responses, but real-world signals vary in
spectral content making it prudent to develop a model which incorporates finer-resolution frequency dependence. This work also
explores spatial filtering/source separation techniques for coprime linear microphone arrays. Multichannel experimental impulse
response measurements with differing angles of incidence are convolved with independent speech signals. Subsequently, coprime beamforming is applied to the results in order to directionally filter, thus separating the source signals.
11:00
4aSP8. Statistical characterization of coprime sensor arrays: Array
gain vs. spatially correlated noise. Radienxe Bautista (Sensors and Sonar,
Naval Undersea Warfare Ctr., 1176 Howell St., Newport, RI 02841, rbautista@umassd.edu) and John R. Buck (Elec. and Comput. Eng., Univ. of
Massachusetts Dartmouth, North Dartmouth, MA)
Sensor arrays suppress spatially-uncorrelated noise through conventional beamforming (CBF). Array Gain (AG) quantifies the SNR improvement at the CBF output compared to the sensor-level SNR. The AG for a
CBF Uniform Line Array (ULA) in uncorrelated noise is the number of sensors. A Coprime Sensor Array (CSA) is a sparse array geometry interleaving
two undersampled ULAs, multiplying the subarray CBF outputs to achieve
the same resolution as a fully populated ULA of the same aperture using
fewer sensors [Vaidyanathan & Pal, 2010]. The CSA product process AG in
uncorrelated noise is asymptotically equal to the number of sensors for large
input SNR [Adhikari & Buck, 2015]. This research derives the AGs for
CSAs and ULAs using the traditional SNR definition and deflection statistics [Cox, 1973] for a spatial first-order autoregressive process. This process
introduces spatial correlation and is a simple model for turbulent flow noise
over a towed array. Although the CSA AG is lower than the ULA for uncorrelated noise, the CSA’s AG degrades more slowly than the ULA’s AG with
increasing noise correlation, due to larger spacing in the CSA subarrays.
The CSA is more robust to correlated noise than the ULA. [Funded by
NUWC & ONR.]
Invited Papers
11:20
4aSP9. Multi-frequency sparse Bayesian learning for matched field processing. Kay L. Gemba, Santosh Nannuru, Peter Gerstoft,
and William S. Hodgkiss (MPL/SIO, UCSD, University of California, San Diego, 8820 Shellback Way, Spiess Hall, Rm. 446, La Jolla,
CA 92037, gemba@ucsd.edu)
Compressive sensing has been applied to underwater acoustic problems. Using multi-frequency sparse Bayesian learning (SBL), we
present simulation and data results (SWellEx-96 Event S5) including mismatch. Mismatch is defined as a misalignment between the
actual source field observed at the array and the modeled replica vector. Results for a multiple-source scenario indicate that SBL outperforms WNC and MUSIC when localizing a quiet source in the presence of a stronger source. Furthermore, simulations (including snapshots not corresponding exactly to replicas) and data results demonstrate that SBL offers robustness to mismatch including array-tilt.
The array-tilt mismatch in the data varies over time and is especially pronounced at the closest point of approach, reaching 2 degrees.
Because of its computational efficiency and performance, SBL is practical for real time applications requiring an adaptive and robust
processor.
Contributed Paper
11:40
4aSP10. Wideband source enumeration and direction-of-arrival estimation using sparse array periodogram averaging in low
snapshot scenarios. Yang Liu and John R. Buck (Dept. of Elec. and Comput. Eng., Univ. of Massachusetts Dartmouth, 285 Old Westport Rd., Dartmouth, MA 02747, yang.liu@umassd.edu)
Sparse arrays, such as minimum redundancy arrays and coprime arrays, often exploit the second-order statistics of the propagating
field to localize more sources than sensors by constructing an augmented covariance matrix (ACM) from the estimated spatial correlation. The source localization performance largely depends on the number of snapshots available, which might be limited in many acoustical environments due to the propagation speed, large array aperture, and non-stationary field. This paper proposes a new approach for
wideband source enumeration and direction-of-arrival (DOA) estimation on any sparse array geometry. The proposed algorithm decomposes the wideband signals into multiple disjoint frequency bands, computes the narrowband spatial periodograms and averages them to
reinforce the sources’ spatial spectral information. The spatial correlation estimated from the wideband periodogram populates the diagonals of a Hermitian Toeplitz ACM. This ACM then goes through eigenvalue decomposition, where its eigenvalues are employed for
source enumeration through a new information based criterion and its eigenvectors for DOA estimation through the MUSIC algorithm.
Simulations show that the proposed algorithm achieves improved performance enumerating and estimating DOAs for more wideband
sources than sensors in low snapshot scenarios when compared to existing approaches. [Work supported by ONR grant N00014-13-10230.]
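The ACM population step described above can be sketched as follows: given correlation estimates at lags 0..L (here hypothetical values), each entry (i, j) of the Hermitian Toeplitz augmented covariance matrix takes the lag value at i - j, with negative lags conjugated. This shows only the matrix-building step, not the periodogram averaging or the enumeration criterion.

```python
import numpy as np

def augmented_covariance(lags):
    """Hermitian Toeplitz augmented covariance matrix whose (i, j) entry is
    r[i - j], with r[-k] = conj(r[k]); `lags` holds r[0..L]."""
    idx = np.arange(len(lags))
    diff = idx[:, None] - idx[None, :]          # lag index of each entry
    vals = np.asarray(lags)[np.abs(diff)]
    return np.where(diff >= 0, vals, np.conj(vals))
```

The resulting matrix is Hermitian by construction and can be passed directly to an eigendecomposition for source enumeration and MUSIC-style DOA estimation.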
Contributed Paper
12:00
4aSP11. Frequency-difference beamforming in inhomogeneous media.
Alexander S. Douglass and David R. Dowling (Mech. Eng., Univ. of Michigan, 2010 AL, 1231 Beal, Ann Arbor, MI 48109, asdougl@umich.edu)
Frequency-difference beamforming (Abadi et al., 2012, JASA, 132,
3018-3029) is an array signal processing technique that overcomes the limitations of the spatial Nyquist criterion by lowering the processing to out-of-band frequencies. This is accomplished using a quadratic product of complex signal amplitudes at different frequencies, resulting in wave propagation information at the out-of-band difference frequency. Acoustic waves
are susceptible to strong scattering in inhomogeneous media when the sizes
of the inhomogeneities are comparable to or larger than the signal
wavelength. Thus, conventional beamforming in random media at high frequencies with sparse arrays may be impossible, even in the presence of
small inhomogeneities. However, at lower frequencies, acoustic propagation
and beamforming in the same environment might not be significantly
impacted. Thus, frequency-difference beamforming in the presence of high-frequency scattering is expected to maintain robustness similar to that of a
low-frequency field. In this presentation, we present a theoretical framework
that supports this hypothesis using the Born approximation. The theory is
tested using experiments in a 1.07-m-diameter, 0.8-m-deep cylindrical water
tank with an array of up to 16 receivers and 100 kHz to 200 kHz signal
pulses that propagate through an inhomogeneous medium. [Sponsored by
NAVSEA through the NEEC, and by ONR.]
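The quadratic product at the heart of frequency-difference beamforming can be sketched per channel: multiplying the complex amplitude at f2 by the conjugate of the amplitude at f1 yields a surrogate field whose inter-channel phase varies at the difference frequency f2 - f1. The signal parameters and the two-channel example below are illustrative, not the tank-experiment configuration.

```python
import numpy as np

def frequency_difference_field(recordings, fs, f1, f2):
    """Per-channel quadratic product X(f2) * conj(X(f1)); its phase across
    channels behaves like a field at the difference frequency f2 - f1."""
    spectra = np.fft.rfft(recordings, axis=-1)
    freqs = np.fft.rfftfreq(recordings.shape[-1], d=1.0 / fs)
    k1 = np.argmin(np.abs(freqs - f1))
    k2 = np.argmin(np.abs(freqs - f2))
    return spectra[:, k2] * np.conj(spectra[:, k1])

# two channels receiving the same two-tone signal with a 1 ms relative delay
fs, n = 10000, 10000
t = np.arange(n) / fs
delays = np.array([0.0, 1e-3])
x = np.array([np.cos(2 * np.pi * 1000 * (t - d)) + np.cos(2 * np.pi * 1200 * (t - d))
              for d in delays])
q = frequency_difference_field(x, fs, 1000.0, 1200.0)
# inter-channel phase of q equals -2*pi*(f2 - f1)*delay = -0.4*pi
```

Because the product's inter-channel phase depends only on the 200 Hz difference frequency, conventional beamforming applied to q tolerates element spacings (and scattering) that would alias the original 1000-1200 Hz band.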
WEDNESDAY MORNING, 28 JUNE 2017
ROOM 306, 8:00 A.M. TO 10:20 A.M.
Session 4aUWa
Underwater Acoustics: Acoustical Interaction with Ocean Boundaries and Targets
Derek R. Olson, Chair
Acoustics, Penn State, 201 Applied Sciences Bldg., University Park, PA 16802
Contributed Papers
8:00
4aUWa1. Sound speed and attenuation in marine mud seabottom. Zhenglin Li and Renhe Zhang (State Key Lab. of Acoust., Inst. of Acoust., Chinese Acad. of Sci., No. 21 Beisihuan West Rd., Beijing 100190, China,
lzhl@mail.ioa.ac.cn)
Acoustic propagation in shallow water is greatly influenced by the properties of the bottom. While the acoustic characteristics of sandy or silty sediments are well constrained by inversion or by direct core measurement, the sound speed and attenuation of mud sediments still require further investigation. An experiment was performed in the Yellow Sea in 2002 to invert for the acoustic parameters. Six different sediment types are present in the experimental area: fine sand, silty sand, sandy silt, sand-silt-clay, silty clay, and mud. A hybrid geoacoustic inversion scheme, which combines several inversion methods to invert for the bottom parameters, is proposed based on the fact that the acoustic field has different sensitivities to the different physical parameters of the bottom. The inverted bottom parameters distinguish the bottom types marked in the sediment atlas quite well. The sound speed and attenuation of the marine sediments of silty clay and mud are the main focus. The nonlinear frequency dependence of attenuation in the mud is compared with that of silty sand. [Work supported by the National Natural Science Foundation of China under Grant Nos. 11434012 and 41561144006.]
3844
J. Acoust. Soc. Am., Vol. 141, No. 5, Pt. 2, May 2017
Acoustics ’17 Boston
8:20
4aUWa2. An effective surface loss model for shallow water propagation and reverberation at mid-frequency that accounts for surface forward scattering. Eric I. Thorsos and Brian T. Hefner (Appl. Phys. Lab., Univ. of Washington, 1013 NE 40th St., Seattle, WA 98105, eit@apl.washington.edu)
A transport theory approach has been developed for modeling shallow water propagation and reverberation at mid-frequencies, with emphasis on 1-3 kHz. With this approach, sea surface forward scattering can be taken into account in a 2-D (range-depth) approximation. The effects of surface forward scattering on reverberation level for typical conditions can be significant (>10 dB), even though bottom backscatter typically dominates reverberation in shallow water. However, the effects of surface forward scattering have not been treatable with traditional ray-based codes, since it is implicit in these codes that the surface interaction is a specular reflection combined with some loss, and the change in grazing angle due to forward scattering, crucial for reverberation modeling, is not naturally accounted for. A method to account for forward scattering with ray-based codes will be described. Transport theory results are used to develop an effective surface loss model, referred to as TOTLOS, for the surface loss of the total field, reflected plus scattered. While it is referred to as an effective surface loss model, it can yield either loss or gain, depending on the grazing angle and other parameters of the environment. [Work supported by PMW-120 and ONR Code 322.]
8:40
4aUWa3. Coherent matched-filter reflection loss from a moving rough
surface as a function of pulse duration and ensonified area. Douglas
Abraham (CausaSci LLC, PO Box 627, Ellicott City, MD 21041, abrahad@ieee.org)
The loss incurred when matched filtering a sonar pulse after reflection
from a rough surface depends on the ensonified area on the surface and the
motion of the surface throughout the pulse. Simple solutions exist for the
loss in the coherently reflected component when the ensonified area is either
very large or very small. The large ensonified area (LEA) result is generally
accurate for narrowband pulses while a small ensonified area (SEA) requires
broadband waveforms. By assuming the reflection comprises a finite number
of statistically independent surface heights and that the surface is Gaussian
distributed with a narrowband spectrum, the loss can be determined as a
function of both the pulse duration relative to the wave period and effective
ensonified area (which depends on pulse bandwidth) using the spatial correlation function of the surface. The result is shown to be a convex combination of the LEA and SEA results. A one-dimensional example illustrates
how surface reflection loss increases with pulse duration but saturates when
the duration exceeds the wave period. The loss is also shown to decrease as
pulse bandwidth increases because of the reduction in the effective ensonified area. [This research was sponsored by the Office of Naval Research.]
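The stated convex-combination structure can be illustrated with a deliberately simple toy model. The weighting function and the two limit losses below are invented for illustration only; they are not the paper's result, but they reproduce the qualitative behaviour described: loss grows with pulse duration and saturates once the duration exceeds the wave period.

```python
# Toy illustration of a convex combination of large-ensonified-area (LEA)
# and small-ensonified-area (SEA) reflection-loss limits.  The weight
# gamma and the limit values are assumptions for illustration.

def reflection_loss_db(T_pulse, T_wave, L_lea_db=9.0, L_sea_db=1.0):
    """Toy loss: gamma -> 1 (LEA limit) once the pulse spans a wave period."""
    gamma = min(T_pulse / T_wave, 1.0)
    return gamma * L_lea_db + (1.0 - gamma) * L_sea_db

ratios = [0.1, 0.5, 1.0, 2.0, 5.0]           # pulse duration / wave period
losses = [reflection_loss_db(r, 1.0) for r in ratios]
print(losses)   # loss increases with duration, then saturates at the LEA value
```

The same convex structure also captures the bandwidth trend noted in the abstract: a larger bandwidth shrinks the effective ensonified area, pushing the weight toward the (smaller) SEA loss.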
9:00
4aUWa4. A computational method for the time-domain intensity scattered
from rough interfaces and volume heterogeneities in deep water. Derek R.
Olson and Charles W. Holland (Appl. Res. Lab., The Penn State Univ., Appl.
Res. Lab., P.O. Box 30, State College, PA 16804, dro131@psu.edu)
A model for the time-domain scattered intensity from a heterogeneous
layered seafloor due to a point source has been recently developed by Tang
and Jackson [J. Acoust. Soc. Am. Suppl. 4, 140, 3363]. To accurately model measurements taken in the deep ocean, with source/receiver altitudes on the order of 5 km, the computational cost of this model becomes quite high, scaling as O((kH)^2), where k is the acoustic wavenumber and H is the source/receiver altitude. We present a fast method of numerical integration based on the work of Levin that greatly reduces the computational cost for deep-water scenarios. In the absence of a complex resonant structure in the sediment, the proposed numerical integration scheme reduces the scaling to O(1). If this resonant structure is present, then the scaling is approximately O((kD)^2), where D is the total sediment thickness. Model results from deep ocean environments, including turbidite seafloors, are presented and discussed.
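The O((kH)^2) cost arises because a naive quadrature rule must sample the oscillatory phase of the scattering integrand finely, and the phase range grows with kH. The sketch below (a reader's illustration on a model 1-D oscillatory integral, not the authors' code) shows the naive-sampling side of the argument; a Levin-type method instead solves a small collocation problem and needs a sample count independent of kH.

```python
import numpy as np

# Trapezoid-rule error for I = integral_0^1 exp(i*kH*x) dx, which has the
# closed form (exp(i*kH) - 1) / (i*kH).  With too few samples the
# oscillations are under-resolved and the quadrature is badly wrong.

def osc_error(kH, n):
    """Absolute trapezoid error for the model oscillatory integral."""
    x = np.linspace(0.0, 1.0, n)
    f = np.exp(1j * kH * x)
    h = x[1] - x[0]
    approx = h * (0.5 * f[0] + f[1:-1].sum() + 0.5 * f[-1])
    exact = (np.exp(1j * kH) - 1.0) / (1j * kH)
    return abs(approx - exact)

kH = 2000.0                  # stand-in for a large k*H phase parameter
coarse = osc_error(kH, 500)      # ~4 radians of phase per step: aliased
fine = osc_error(kH, 50000)      # ~0.04 radians per step: well resolved
print(coarse, fine)
```

Resolving the phase forces the sample count to grow linearly with kH in 1-D, hence quadratically for a 2-D surface integral, which is the scaling the fast method removes.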
9:20
4aUWa5. Acoustic scattering from stainless steel shells with varying wall thickness using a biomimetic click: Modeling and interpretation. Gang Qiao, Xin Qing, and Donghu Nie (College of Underwater Acoust. Eng., Harbin Eng. Univ., Harbin 150040, China, qingxin@hrbeu.edu.cn)
Using broadband click pulses, dolphins can discriminate cylindrical shells with subtle differences in wall thickness. In this study, finite element models were built to calculate the acoustic scattering from stainless steel shells of varying wall thickness. To further analyze the mechanism of interaction between the click pulse and the shells, the scattering field in the fluid and the stress distribution in the solid were calculated with a transient solver in the time domain. The simulation results show that the acoustic-solid interaction leads to object resonance, after which elastic waves radiate into the surrounding fluid. Furthermore, there are significant differences among the elastic waves from shells of different wall thickness. This suggests that the crucial information about a shell's wall thickness is carried in the elastic echo. The results point to a promising way to further understand target discrimination in the dolphin's biosonar.
9:40
4aUWa6. Forward acoustic scattering analysis from solid objects
immersed in water. David Soubsol (DCNS Res., 75, rue Bellot, Le
Havre, Haute-Normandie 76600, France, soubsoldavid@gmail.com), Fernand Leon, Dominique Decultot, Farid Chati, Gerard Maze (Laboratoire
Ondes et Milieux Complexes, Unite Mixte de Recherche Ctr. national de
Recherche Scientifique 6294, Universite Le Havre Normandie, Le Havre,
France), Ygaäl Renou, and Christian Audoly (DCNS Res., Ollioules,
France)
Forward acoustic scattering in water from an immersed solid LINE object (a cylinder bounded by hemispherical endcaps) is investigated in this study. The object is made of stainless steel and its L/2a ratio is equal to 2 (L: length of the cylindrical part; a: radius, equal to 60 mm). An impulse measurement method is used in the experiments. Most results are obtained experimentally in a bistatic configuration, with a mobile receiver transducer located at a position distinct from that of the emitter. The polar diagram patterns of the scattered pressure show a significant pressure amplitude on the shadow side of the object. Analysis of this phenomenon is based on theoretical and experimental results obtained for a sphere with the help of elasticity theory. Moreover, this study relies on a grey-level representation of the recorded time signals as a function of the angular position of the receiver. From the forward-scattered time signals it is thus possible to identify echoes due to the propagation paths of waves on the object.
10:00
4aUWa7. Accuracy of the far-field approximation for the sound radiated when an immersed steel pile, its toe made non-reflective (for
clarity), is driven by a harmonic 1-kHz axisymmetric force. Marshall V.
Hall ((retired), 9 Moya Crescent, Kingsgrove, NSW 2208, Australia, marshall.hall@hotmail.com)
A steel pile immersed in cold seawater, its toe made non-reflective (for clarity), is driven by a 1-kHz axisymmetric force. Axial and radial vibrations travel from head to toe. The radial vibration, which radiates sound into the surrounding medium, is computed using membrane thin-shell vibration theory. In cold seawater, 1-kHz Mach waves radiate at 73° from the axis of a typical construction steel pile. A Mach wave is received at any position provided a 73° line from that position intersects the pile (at the emitting point). The vibration energy at this emitting point has travelled from the head. For any slant range (R) to a receiver, there is a minimum colatitude (COMIN) below which a Mach wave is not received. For a typical steel pile it is found that as R increases beyond the pile length (L), COMIN increases rapidly from 0°, passes through 68° at 10 L, and asymptotes to 73° as R increases further. Radiated SPLs were calculated using both far-field and all-field radiation theories as functions of colatitude, at slant ranges from 10 to 1000 m. The far-field approximation (which omits the Mach wave) underestimates SPL by up to 20 dB if the receiver's colatitude exceeds COMIN.
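The quoted COMIN behaviour follows from the stated reception geometry and can be checked numerically. The sketch below is a reader's reconstruction of that geometry (receiver at slant range R and colatitude theta from the pile head; Mach ray at 73° from the axis must intersect the pile between depths 0 and L), not the author's code.

```python
import math

# Minimum colatitude COMIN(R/L) at which a 73-degree Mach ray from the
# receiver still intersects the pile.  The emitting depth of the ray
# reaching a receiver at (R sin t, R cos t) is
#     z_e = R cos t - R sin t / tan(73 deg),
# and COMIN solves z_e = L, i.e.  cos t - cot(73) sin t = L / R.

THETA_MACH = math.radians(73.0)

def comin_deg(R_over_L):
    """Minimum colatitude (degrees) for Mach-wave reception."""
    cot = 1.0 / math.tan(THETA_MACH)
    A = math.hypot(1.0, cot)              # rewrite as A*cos(t + phi) = L/R
    phi = math.atan(cot)
    return math.degrees(math.acos(1.0 / (R_over_L * A)) - phi)

print(round(comin_deg(10.0), 1))   # ~67.5, near the abstract's 68 deg at R = 10 L
```

The same expression gives COMIN near 0° at R = L and approaches 73° as R grows, matching the trend described in the abstract.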
WEDNESDAY MORNING, 28 JUNE 2017
ROOM 309, 8:35 A.M. TO 12:00 NOON
Session 4aUWb
Underwater Acoustics, Acoustical Oceanography, and ASA Committee on Standards: Underwater Noise
From Marine Construction and Energy Production I
James H. Miller, Cochair
Ocean Engineering, University of Rhode Island, 215 South Ferry Road, Narragansett Bay Campus URI,
Narragansett, RI 02882
Paul A. Lepper, Cochair
EESE, Loughborough University, Loughborough LE113TU, United Kingdom
Chair’s Introduction—8:35
Invited Paper
8:40
4aUWb1. A brief history of underwater construction noise. James A. Reyff (Illingworth & Rodkin, Inc., 505 Petaluma Blvd. South,
Petaluma, CA 94952, jreyff@illingworthrodkin.com)
Underwater sound from marine construction became an important issue around the year 2000 because of the potential impacts to marine mammals and fish that are protected by the Endangered Species Act. While protections were in place for marine mammals, there
was no guidance for protecting fish. Early on, pile driving was found to visibly injure fish, leading to efforts to reduce sounds, develop
protective regulatory thresholds, and research the effects of sound on aquatic species. Over the last 16 years, extensive measurements
have been conducted and compiled, protective thresholds have been developed and updated, noise attenuation systems or methods have
been improved and tested, and research has been conducted. This paper provides an overview of the different types of sounds characterized, thresholds for protecting animals and the types of noise mitigation strategies employed. Sounds characterized include various types
of pile driving in and near water, use of explosives in and near water, mechanical demolition, and dredging-type sounds. The various
types of thresholds and methodology for applying those thresholds are discussed. Finally, sounds measured from the various types of
attenuation systems are summarized.
Contributed Paper
9:00
4aUWb2. Standards for processing and reporting metrics of underwater sound for use in risk assessment. Michael A. Ainslie, Christ A. de
Jong (Acoust. and Sonar, TNO, P.O. Box 96864, The Hague 2509JG, Netherlands, michael.ainslie@tno.nl), Michele B. Halvorsen (CSA Ocean Sci.,
Inc., Stuart, FL), Darlene R. Ketten (Woods Hole Oceanographic Inst.,
Perth, Western Australia, Australia), and Mark K. Prior (Acoust. and Sonar,
TNO, The Hague, Netherlands)
Anthropogenic underwater sounds create a potential risk to aquatic
organisms. Many regulators require this risk to be assessed before allowing
a sound-producing activity to proceed. Regulators typically set allowable
exposure criteria for a range of acoustic parameters and require the assessment to address whether a given acoustic metric would exceed its specified
threshold. While the value of the threshold is usually clear, the procedure
required to calculate the metric is sometimes unspecified or described in
insufficient detail, leading to ambiguity in interpretation. Processing and
reporting procedures are described that enable intra- and inter-project consistency for processing and reporting of metrics. Quantities derived from
sound pressure and sound particle motion are considered, resulting in metrics relevant to fish, aquatic invertebrates, and aquatic mammals. Specific
metrics for which procedures are described include transient duration, zero-to-peak quantities, mean-square and time-integrated-squared quantities
(e.g., sound exposure), and their spectral densities. The processing is relevant to ambient sound as well as sound from specified activities such as drilling, pile-driving, seismic imaging, or dynamic positioning. Specific issues
addressed include the specification of one-third-octave bands, and the standardization of units and reference values used for reporting. [Work sponsored
by the E&P Sound and Marine Life Joint Industry Programme.]
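The kinds of quantities the proposed procedures would pin down can be sketched for a synthetic pressure record. The signal below is purely illustrative; the reference values follow the underwater-acoustics conventions named in the session (1 µPa for pressure, 1 µPa²·s for sound exposure).

```python
import numpy as np

# Zero-to-peak level, mean-square (rms) level, and sound exposure level
# (time-integrated squared pressure) for a synthetic decaying tone burst.

fs = 48000.0                       # sample rate, Hz (assumed)
n = 4800                           # 100 ms record
t = np.arange(n) / fs
p = 50.0 * np.exp(-t / 0.01) * np.sin(2 * np.pi * 500.0 * t)   # pressure, Pa

p_ref = 1e-6                       # 1 micropascal
spl_pk = 20 * np.log10(np.max(np.abs(p)) / p_ref)        # zero-to-peak, dB re 1 uPa
spl_rms = 10 * np.log10(np.mean(p**2) / p_ref**2)        # mean-square, dB re 1 uPa
sel = 10 * np.log10((np.sum(p**2) / fs) / p_ref**2)      # exposure, dB re 1 uPa^2 s

print(round(spl_pk, 1), round(spl_rms, 1), round(sel, 1))
```

For a fixed-duration record the exposure and mean-square levels differ by exactly 10·log10 of the duration, which is one reason unambiguous reporting of averaging/integration times matters for intra- and inter-project consistency.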
Invited Papers
9:20
4aUWb3. An international standard for the measurement of underwater sound radiated from marine pile-driving. Stephen P.
Robinson (Acoust., NPL, Hampton Rd., Teddington TW11 0LW, United Kingdom, stephen.robinson@npl.co.uk)
The pile-driving of marine foundations radiates substantial levels of low-frequency impulsive noise into the water column, which can
propagate over large distances. Concern over the potential for impact on marine fauna often results in a regulatory requirement to measure
the radiated noise level over distances which may extend to tens of kilometres. Furthermore, if the transmission loss is to be established or
validated it is necessary to measure as a function of range from the pile at a number of positions and, because of the variation in acoustic
output during the pile-driving, particularly if a soft start is employed, it is desirable to perform the measurement at a fixed range for the duration of the pile-driving operation. The noise generation mechanisms are complex, and a number of factors influence the noise radiated into the water column, including the water depth (which exposes a different amount of the pile's surface area), the seabed properties, the penetration depth of the pile into the seabed, the pile dimensions, and the hammer energy. A standard method for the measurement of
underwater radiated sound from percussive pile driving has been developed and is described here. The method includes a combination of
fixed-range recordings and range-dependent hydrophone deployments. The method was established by Working Group 3 of Technical
Committee 43 (Sub-Committee 3) of the International Organization for Standardization (ISO), and was published in 2017 as ISO 18406.
9:40
4aUWb4. Underwater noise assessment for energy extraction and production systems using unmanned aerial vehicles (UAVs).
Paul A. Lepper, Steven Lloyd, and Simon Pomeroy (Wolfson School, Loughborough Univ., Loughborough LE113TU, United Kingdom, p.a.lepper@lboro.ac.uk)
Traditional operations such as oil and gas exploration and production have long undergone sound-field and environmental impact assessments of underwater acoustic noise. More recently, emerging industries such as renewables (wind, wave, and tidal energy production) have also required scrutiny in terms of underwater noise sound fields. To make these assessments, sound fields are typically measured using hydrophones deployed from boats, drifting systems, or moored acoustic data loggers. These measurements are often complex and expensive, requiring complicated equipment deployments, boat operations, and personnel in often dangerous or hazardous environments. Unmanned Aerial Vehicle (UAV), or drone-based, technologies offer the opportunity for rapid deployment of smart hydrophone arrays over a large spatial area with significantly less operator and boat interaction, improving deployment flexibility, reducing cost, and minimising the safety concerns of boat-based deployments. Results presented are from open-water tests of a prototype multi-rotor system capable of flying to site, landing on the water, deploying a wideband hydrophone for underwater noise assessment, and then returning to base. These developments and trials have demonstrated the overall feasibility of wide-scale rapid hydrophone deployment using UAV-based sensors and its potential application to underwater sound-field assessment across a variety of industries.
10:00
4aUWb5. Radiated noise levels from marine geotechnical drilling and standard penetration testing. Christine Erbe (JASCO Appl.
Sci., Kent St., Bentley, Western Australia 6102, Australia, c.erbe@curtin.edu.au) and Craig McPherson (JASCO Appl. Sci., Capalaba,
QLD, Australia)
4a WED. AM
Geotechnical drilling is a common part of site investigations prior to marine construction. A small, solid core is extracted from shallow depth for examination at the surface. During standard penetration testing (SPT), a sample tube is hammered into the ground at the
bottom of the borehole. The number of blows needed for the tube to penetrate a fixed depth relates to the hardness of the ground and is
termed the standard penetration resistance. Recordings of the drilling and SPT operations of a jack-up rig situated in 12 m of water were
obtained with a mobile recording system at 10-50 m range and 10 m depth below the sea surface. Geotechnical drilling (120 kW, 83 mm
diameter drill bit, 1500 rpm, 16-17 m drill depth below the seafloor, consisting of sand then mudstone) had a radiated noise level (using geometric spreading) of 142-145 dB re 1 µPa rms @ 1 m (30-10,000 Hz). SPT (50 mm outer diameter of the test tube, 15 mm wall thickness, 100 kg hammer, and 1 m drop height) exhibited a radiated noise level of 152-160 dB re 1 µPa²·s @ 1 m.
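A "radiated noise level ... @ 1 m (using geometric spreading)" is obtained by back-propagating a level received at range with an N·log10(r) spreading term. The sketch below assumes spherical spreading (N = 20) and illustrative numbers; neither the coefficient nor the levels are taken from the measurements.

```python
import math

# Back-propagate a received level to a nominal 1 m source level using
# an assumed N*log10(r) geometric-spreading law.

def radiated_level_at_1m(received_db, range_m, spreading_coeff=20.0):
    """Radiated (source) level at 1 m from a level received at range_m."""
    return received_db + spreading_coeff * math.log10(range_m)

rl = 116.0   # received level, dB re 1 uPa rms, at 30 m (illustrative)
print(radiated_level_at_1m(rl, 30.0))
```

The choice of spreading coefficient (20 for spherical, 10 for cylindrical, or an empirical value in between) dominates the back-propagated result, which is why the abstract states the spreading assumption alongside the level.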
10:20–10:40 Break
10:40
4aUWb6. Noise reduction of pile driving and unexploded ordnance detonations at offshore wind farm installation sites. Mark S.
Wochner (AdBm Technologies, 3925 W. Braker Ln., Austin, TX 78759, mark@adbmtech.com), Kevin M. Lee, Andrew R. McNeese
(Appl. Res. Labs., The Univ. of Texas at Austin, Austin, TX), and Preston S. Wilson (Mech. Eng. Dept. and Appl. Res. Labs., The Univ.
of Texas at Austin, Austin, TX)
Pile driving noise during the installation of monopile foundations for offshore wind farms can produce very high noise levels, and
strict regulations on this underwater noise exist around the world. In addition, the controlled detonation of unexploded ordnance (UXO) at offshore wind farm installation sites in European waters poses a further challenge, and reduction of this noise is a major concern.
This paper discusses a tunable acoustic resonance-based underwater noise abatement system for use on marine pile driving, controlled
UXO detonations, and other applications. The system consists of arrays of underwater air-filled resonators, which surround the noise
source and are tuned to optimally attenuate noise in a frequency band of interest. System demonstrations that were conducted at two offshore wind farm sites in the North Sea will be discussed, in which peak sound pressure level reduction of nearly 40 dB was measured,
and almost 20 dB sound exposure level reduction was measured. Laboratory testing on noise generated by a combustive sound source,
used to simulate UXO noise, will also be discussed. The method of deploying these resonator arrays in a simple collapsible framework,
operational advantages of this approach, and future projects using this technology will be shared.
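A first-order way to see how an air-filled underwater resonator can be tuned to low pile-driving frequencies is the Minnaert resonance of an air volume at depth. The actual system uses engineered resonators, so this is only an order-of-magnitude sketch with assumed values, not the abatement system's design equation.

```python
import math

# Minnaert resonance frequency of an air bubble of radius a at depth,
# f0 = (1 / (2*pi*a)) * sqrt(3*gamma*P / rho), with P the hydrostatic
# pressure.  All parameter values below are assumptions.

def minnaert_freq_hz(radius_m, depth_m, rho=1025.0, gamma=1.4,
                     p_atm=101325.0, g=9.81):
    """Resonance frequency (Hz) of an air volume of given radius at depth."""
    p = p_atm + rho * g * depth_m        # absolute hydrostatic pressure, Pa
    return math.sqrt(3.0 * gamma * p / rho) / (2.0 * math.pi * radius_m)

# A ~10 cm radius air volume at 20 m depth resonates at tens of Hz,
# i.e., in the band where pile-driving energy is concentrated:
print(round(minnaert_freq_hz(0.10, 20.0)))
```

Resonance frequency scales inversely with resonator size and rises with depth through the hydrostatic pressure, which is the intuition behind tuning an array of air-filled elements to the band of interest.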
Contributed Papers
3847
J. Acoust. Soc. Am., Vol. 141, No. 5, Pt. 2, May 2017
Acoustics ’17 Boston
3847
11:00
4aUWb7. Explosive offshore structure removal noise measurements.
Adam S. Frankel (Marine Acoust., Inc, 4350 Fairfax Dr., Ste. 600, Arlington, VA 22203, adam.frankel@marineacoustics.com), Mary Barkaszi, Jeffrey Martin (Continental Shelf Assoc., Stuart, FL), William Poe (Explosive
Service Intl. Ltd., Baton Rouge, LA), Jennifer Giard, and Ken Hunter (Marine Acoust., Inc, Middletown, Rhode Island)
The underwater structures that support wind turbines and oil drilling rigs
must eventually be removed. Explosive severing is a commonly used removal method in which charges are inserted into a pile and placed below
the seafloor to sever the pile. The open-water source level of an explosive
charge is readily determined from its composition and weight (Urick 1986).
However, the sediment and pile absorb much of the explosion’s energy. A
recent study (BSEE project M13PX00068) measured explosive removals in
the Gulf of Mexico. Peak pressure, impulse and energy flux density metrics
were measured with a 12-element, two-dimensional array spanning 90 ft
vertically and at distances out to 200 ft. Peak amplitudes, compared with
theoretical open-water predictions, were reduced from 76% (75 lb charges)
to 54% (200 lb charges). Measured results were also compared to predictions from the UnderwaterCalculator (Dzwilewski and Fenton 2003) that
included pile and sediment attenuation effects. It accurately predicted the
propagation from the current study and somewhat overestimated propagation for earlier data collected in shallower water with smaller charges. These
data suggest that the model can be improved. Nevertheless, the model is further validated for its primary purpose of estimating ranges to designated
safety thresholds.
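The open-water prediction step the abstract references (Urick 1986) is commonly written as a similitude fit of the form P = K·(W^(1/3)/R)^alpha. The TNT coefficients below are the commonly quoted textbook values and are an assumption here, not taken from the study; the measured-reduction factor is likewise illustrative.

```python
# Open-water peak-pressure similitude fit for an underwater explosion:
#   P_peak = K * (W^(1/3) / R)^alpha,  W in kg (TNT equivalent), R in m.
# K ~ 52.4 MPa and alpha ~ 1.13 are the commonly quoted TNT constants
# (assumed, not from the paper).

def peak_pressure_pa(charge_kg, range_m, K=52.4e6, alpha=1.13):
    """Open-water peak pressure (Pa) from a similitude fit."""
    return K * (charge_kg ** (1.0 / 3.0) / range_m) ** alpha

# Example: a 75 lb (~34 kg) charge at 200 ft (~61 m).  A measured peak
# 76% below the open-water value would illustrate the energy absorbed
# by the pile and sediment (illustrative factor, not a measured result).
open_water = peak_pressure_pa(34.0, 61.0)
attenuated = 0.24 * open_water
print(open_water, attenuated)
```

The cube-root-of-weight scaling is why the comparison in the abstract is naturally organized by charge weight and measurement range.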
11:20
4aUWb8. The noise of rock n’ roll: Incidental noise characterization of
underwater rock placement. Rute Portugal (Marine Wildlife and Environ.
Dept., Gardline Geosurvey Ltd., Gardline Environ., Endeavour House, Admiralty Rd., Great Yarmouth NR30 3NG, United Kingdom, rute.portugal@
gardline.com), Sei-Him Cheong (Marine Wildlife and Environ. Dept., Gardline Geosurvey Ltd., Great Yarmouth, Norfolk, United Kingdom), James
Brocklehurst (Royal Boskalis Westminster N.V., Papendrecht, Netherlands),
and Breanna Evans (Marine Wildlife and Environ. Dept., Gardline Geosurvey Ltd., Great Yarmouth, United Kingdom)
Underwater noise is a growing concern to conservation and stock management efforts, to which supra-national organizations (e.g., OSPAR or the European Union) and governments (e.g., the USA) are beginning to respond by
building catalogues of the noise introduced in the marine environment by
human activity. Rock placement is a construction activity for which there is
scarcely any data available. In order to fill the knowledge gap, opportunistic
recordings were taken while the Gardline Mk 3 hydrophone array was
deployed for Passive Acoustic Monitoring and mitigation for marine mammals. The recordings were analyzed for their spectral and temporal characteristics; a correlation analysis between the amount of rock placed and the intensity of the sound produced was performed; and the suitability of the hydrophone array for the collection of this type of data was assessed.
11:40
4aUWb9. Acoustic ground truthing of seismic noise in Chatham Rise,
New Zealand. Sei-Him Cheong (Marine Wildlife, Gardline Geosurvey,
Endeavour House, Admiralty Rd., Great Yarmouth, Norfolk NR30 3NG,
United Kingdom, sei-him.cheong@gardline.com)
Noise generated by seismic surveys is widely recognised to have ecological consequences for marine ecosystems. Between 31 January and 21 March 2016, a geophysical research survey was conducted on the Chatham Rise, New Zealand, to collect seismo-acoustic data using a Sercel seismic streamer in order to ground-truth the underwater noise impact assessment conducted according to the DOC (NZ) Seismic Survey Code of Conduct. Data were analyzed to determine the received sound level at distances of up to 3 km from the source array. This paper establishes the method used to predict the impact radii in order to validate the results obtained with the Gardline 360M predictive model. The aim was to provide confidence in the capability of predictive modeling for estimating the impact zone of a seismic sound source. Data showed that multipath reflections can fluctuate significantly according to the seafloor topography; however, a very consistent trend can be obtained from direct propagation to confidently establish mitigation radii. Results show that the employment of a seismic streamer for the establishment of effective mitigation radii is technically feasible and may be used as a tool to ground-truth predictive modelling as part of mitigation plans to reduce the potential risk of acoustic trauma.
WEDNESDAY MORNING, 28 JUNE 2017
ROOM 306, 10:35 A.M. TO 12:20 P.M.
Session 4aUWc
Underwater Acoustics: Unmanned Vehicles and Acoustics I
Erin M. Fischell, Cochair
Mechanical Engineering, MIT, 77 Massachusetts Ave., 5-204, Cambridge, MA 02139
Martin Siderius, Cochair
ECE Dept., Portland State Univ., P.O. Box 751, Portland, OR 97207
Chair’s Introduction—10:35
Invited Papers
10:40
4aUWc1. Environmentally adaptive acoustic sensing, communication, and navigation in distributed undersea networks. Henrik
Schmidt (Mech. Eng., Massachusetts Inst. of Technol., 77 Massachusetts Ave., Rm. 5-204, Cambridge, MA 02139, henrik@mit.edu)
The explosive improvement in the capabilities of both hardware and software for unmanned underwater vehicles (UUVs) is rapidly transforming the concept of operations for ocean science, exploration, surveillance, and warfare. Autonomous underwater vehicles, including autonomous gliders and propelled vehicles, are now standard tools on research vessels, cabled observatories, and naval assets for environmental assessment on local, regional, and global scales. As was the case for manned undersea platforms, underwater acoustics is critical to the operation of such systems, providing the only means of communicating information to, from, and between undersea platforms beyond a few tens of meters, and as such forms the basis for command and control, navigation, and remote sensing in the ocean. This in turn makes the operation of such systems highly sensitive to the acoustic environment, with its convergence zones and extensive shadow regions. Robust operation therefore requires augmenting the network and platform control with an artificial intelligence framework which, using environmental acoustic modeling, allows the network to autonomously adapt its configuration for optimal sensing, communication, and navigation. Using the MOOS-IvP autonomy architecture, such a model-based environmental adaptation framework has been developed and demonstrated in simulation and in field deployments. [Work supported by ONR, DARPA, and Battelle.]
11:00
4aUWc2. Underwater sensing and surveying with autonomous vehicles. Martin Siderius, Lanfranco Muzi, Elizabeth T. Küsel (ECE Dept., Portland State Univ., P.O. Box 751, Portland, OR 97207, siderius@pdx.edu), and Peter L. Nielsen (STO-CMRE, La Spezia, Italy)
The increasing capabilities of autonomous underwater vehicles are leading to a variety of new possibilities for underwater sensing and surveying. Portland State University and the Centre for Maritime Research and Experimentation (CMRE) have been working for several years on developing capabilities for underwater sensing with autonomous vehicles, mainly gliders or hybrid glider/powered vehicles. These vehicles have long deployment durations (weeks to months) but move relatively slowly. Two applications have been the focus of the work: seabed characterization and marine mammal population density estimation. For seabed sensing, the ambient noise field (e.g., breaking wave sounds) is used as the sound source, but one of the challenges for this method has been the limitation on the size of the hydrophone receiver array that can be deployed from the vehicle. For the marine mammal population density studies, a glider has been customized with two hydrophones on the wings, and various species are localized in bearing using time-difference-of-arrival methods. Previous density estimation methods used a single, fixed hydrophone. In this presentation, experiments, challenges, and results will be described from several studies using autonomous vehicles for seabed characterization and marine mammal population density estimation.
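The two-hydrophone bearing estimate described above reduces to a time-difference-of-arrival (TDOA) computation. The sketch below assumes a hydrophone spacing and sound speed that are illustrative, not the glider's actual values; a real system would first estimate the delay by cross-correlating the two channels and must live with a left/right ambiguity about the baseline.

```python
import math

# Bearing (from broadside) of a plane wave from the arrival-time
# difference tau between two hydrophones separated by d.

c = 1500.0      # sound speed, m/s (assumed)
d = 1.0         # hydrophone separation, m (assumed)

def bearing_deg(tau):
    """Bearing from broadside, degrees, from TDOA tau in seconds."""
    s = max(-1.0, min(1.0, c * tau / d))   # clamp against noisy estimates
    return math.degrees(math.asin(s))

# A wave arriving 30 degrees off broadside reaches one phone earlier by
# d*sin(30 deg)/c:
tau = d * math.sin(math.radians(30.0)) / c
print(round(bearing_deg(tau), 1))   # recovers 30.0
```

The angular resolution degrades toward endfire (the arcsine flattens), which is one practical limit of a two-element baseline for density estimation work.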
11:20
4aUWc3. Near real-time passive acoustic detection and reporting of marine mammals from mobile autonomous platforms. Mark
F. Baumgartner (Biology Dept., Woods Hole Oceanographic Inst., 266 Woods Hole Rd., MS #33, Woods Hole, MA 02543, mbaumgartner@whoi.edu), Sofie M. Van Parijs (NOAA Northeast Fisheries Sci. Ctr., Woods Hole, MA), Cara F. Hotchkin (NAVFAC Atlantic,
Norfolk, VA), Keenan Ball, and Jim Partan (Appl. Ocean Phys. and Eng., Woods Hole Oceanographic Inst., Woods Hole, MA)
Over the past two decades, passive acoustic monitoring has proven to be an effective means of estimating the occurrence of marine
mammals. The vast majority of applications involve archival recordings from bottom-mounted instruments or towed hydrophones from
moving ships; however, there is growing interest in assessing marine mammal occurrence from autonomous platforms, particularly in
real time. The Woods Hole Oceanographic Institution has developed the capability to detect, classify, and remotely report in near real
time the calls of marine mammals via passive acoustics from a variety of long-endurance autonomous platforms, including Slocum
gliders, wave gliders, and moored buoys. The mobile Slocum and wave gliders provide marine mammal occurrence information in near
real-time over spatial scales of hundreds to thousands of kilometers and temporal scales of a few months. Buoys and Slocum gliders are
now being used regularly to accurately monitor baleen whales in near real-time off the east coasts of the U.S. and Canada. Our long-range goal is to incorporate this capability into regional and global ocean observatory initiatives to (1) improve marine mammal conservation and management and (2) study changes in marine mammal distribution over multi-annual time scales in response to climate
change.
11:40
4aUWc4. Bistatic continuous active sonar processing using arrays towed from unmanned underwater vehicles. Kevin D. LePage
(Res. Div., NATO Sci. and Technol. Organization - Ctr. for Maritime Res. and Experimentation, Viale San Bartolomeo 400, La Spezia,
SP 19126, Italy, kevin.lepage@cmre.nato.int), Gaetano Canepa (Eng. Div., NATO Sci. and Technol. Organization - Ctr. for Maritime
Res. and Experimentation, La Spezia, SP, Italy), Jeffrey Bates, Alessandra Tesei (Res. Div., NATO Sci. and Technol. Organization Ctr. for Maritime Res. and Experimentation, La Spezia, SP, Italy), Michele Micheli (Eng. Div., NATO Sci. and Technol. Organization Ctr. for Maritime Res. and Experimentation, La Spezia, SP, Italy), and Andrea Munafo (National Oceanogr. Ctr., Southampton, United
Kingdom)
Continuous active sonar (CAS) offers the possibility to continuously illuminate targets in underwater detection scenarios. The Cooperative Antisubmarine Programme at the Centre for Maritime Research and Experimentation has developed a real-time continuous
active sonar signal processing algorithm and tested it at sea during two recent sea trials. These sea trials, undertaken with partners of the
Littoral Continuous Active Sonar multinational project, were conducted in littoral environments to evaluate the effectiveness of CAS against a traditional pulse active sonar (PAS) benchmark. Data were collected during these sea trials on both ship-towed triplet arrays and smaller arrays deployed from AUVs. Some details of the algorithm are given, and results at the beamforming, detection, and tracking levels are presented for both CAS and PAS for an echo repeater target. [This work was made possible by the
LCAS Multi-National Joint Research Project (MN-JRP), including as Participants CMRE (NATO), DSTG (AUS), DRDC (CAN), Dstl
(GBR), CSSN (ITA), FFI (NOR), DTA (NZL), and ONR (USA). Funding for CMRE was provided by the NATO Allied Command
Transformation.]
12:00
4aUWc5. Transmission of side-scan sonar snippets from an underway unmanned underwater vehicle. Mae L. Seto (Defence R&D
Canada, #9 Grove St., Dartmouth, NS B2Y 3Z7, Canada, mae.seto@drdc-rddc.gc.ca) and Alice Danckaers (ENSTA Bretagne, Brest,
France)
A unique vector quantization compression methodology was used to compress and encode side-scan sonar snippets of mine-like
objects generated by automated target recognition tools on-board underway unmanned underwater vehicles (UUV). These compressed
and encoded images were then further formed into acoustic packets. The objective was to transmit these acoustic packets, underwater, as
representations of the sonar snippets (mugshots). The ability to transmit sonar snippets underwater while the UUV is underway is important as it allows the above-water operator to examine an image of mine-like objects, without recovering the UUV, for a timely decision
on whether the object is actually a mine. This vector quantization method was chosen for its terseness, which allows the packets to be transmitted by WHOI underwater micromodems integrated on IVER3 UUVs. This presentation describes the algorithm, its implementation, and
its initial in-water validation in local waters. This capability was also validated and demonstrated during the Royal Navy Unmanned
Warrior 2016 exercise. Results from this will also be presented and discussed.
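The compression step can be sketched as a generic vector quantizer: each fixed-size block of snippet pixels is replaced by the index of its nearest codebook vector, so only small integer indices need to fit into the acoustic packets. This is an illustrative sketch only; the actual DRDC codec, codebook training, and packet format are not described in the abstract.

```python
import numpy as np

def vq_encode(blocks, codebook):
    """Replace each block (row vector) by the index of its nearest
    codebook vector (Euclidean distance). One byte per block suffices
    for codebooks with up to 256 entries."""
    dists = np.linalg.norm(blocks[:, None, :] - codebook[None, :, :], axis=2)
    return np.argmin(dists, axis=1).astype(np.uint8)

def vq_decode(indices, codebook):
    """Reconstruct an approximation of the blocks from their indices."""
    return codebook[indices]

# Toy example: 4-entry codebook of 2x2 pixel blocks (flattened to length 4).
codebook = np.array([[0, 0, 0, 0],
                     [255, 255, 255, 255],
                     [0, 255, 0, 255],
                     [255, 0, 255, 0]], dtype=float)
blocks = np.array([[10.0, 5.0, 0.0, 8.0],        # near all-dark
                   [250.0, 240.0, 255.0, 255.0]])  # near all-bright
idx = vq_encode(blocks, codebook)
print(idx)  # each block is now a single byte
```

The payload shrinks from one vector of pixels per block to one index byte per block, which is what makes transmission over a low-rate underwater acoustic modem feasible.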
WEDNESDAY AFTERNOON, 28 JUNE 2017
ROOM 207, 1:15 P.M. TO 5:40 P.M.
Session 4pAAa
Architectural Acoustics and Noise: Architectural Acoustics and Audio: Even Better Than the Real Thing II
K. Anthony Hoover, Cochair
McKay Conant Hoover, 5655 Lindero Canyon Road, Suite 325, Westlake Village, CA 91362
Alexander U. Case, Cochair
Sound Recording Technology, University of Massachusetts Lowell, 35 Wilder St., Suite 3, Lowell, MA 01854
Wolfgang Ahnert, Cochair
Ahnert Feistel Media Group, Arkonastr. 45-49, Berlin D-13189, Germany
Chair’s Introduction—1:15
Invited Papers
1:20
4pAAa1. Modern enhancement systems substitute room acoustic design? Wolfgang Ahnert (Ahnert Feistel Media Group, Berlin,
Germany) and Tobias Behrens (ADA-AMC GmbH, Arkonastr. 45-49, Berlin 13189, Germany, tbehrens@ada-amc.eu)
The acoustic properties of halls and theatres are traditionally achieved through acoustic design of the primary shape and the secondary structure of the interior. Acousticians rely on experience together with various prediction tools: physical scale-model measurements and, since the 1990s, computer simulation as a standard method. The resulting acoustic properties are later evaluated subjectively by visitors and objectively by engineers using specially developed measures. In parallel to this traditional approach, electronic enhancement systems have been developed that allow, for example, a multipurpose hall to be converted into a space for concert performances. Since the 1970s, systems with different technical approaches have emerged, and today electronically generated acoustic properties in halls can be achieved at a quality that satisfies both musicians and audiences. After a short overview of this development, the pros and cons of relying increasingly on electronic enhancement systems, which could reduce the effort invested in traditional high-end room acoustic design, are discussed. Finally, the limits of substituting room-acoustic design with enhancement systems are considered.
1:40
4pAAa2. Constellation: A hybrid active acoustic system. Roger W. Schwenke and Melody Parker (Meyer Sound Labs., 2832 San
Pablo Ave, Berkeley, CA 94702, rogers@meyersound.com)
Constellation is a hybrid active acoustic system that has been used in rooms ranging in size from a virtual reality cave to a sports
arena. Constellation has been shown to control both the effective absorption of a room and its effective cubic volume. This paper
will review how early reflections, effective absorption, and effective cubic volume have been controlled in rooms of various sizes.
Recently, there has been a trend of larger classical ensembles performing in smaller venues. If the strength of the room is held constant,
but the cubic volume is cut in half, the reverberation time is also cut in half. If a room has exclusively physical acoustics, it is impossible
to replicate both the strength and reverberation time of a symphony hall in a space that is considerably smaller (such as a rehearsal
room). A special emphasis will be made on how Constellation is uniquely suited to address this challenge.
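The volume/reverberation trade-off stated above follows from classical diffuse-field relations (a sketch; the constant in the strength formula is the common diffuse-field approximation and is not taken from the abstract):

```latex
% Sabine: reverberation time from volume V (m^3) and total absorption A (m^2)
T = \frac{0.161\, V}{A}
% Diffuse-field strength grows with the ratio T/V,
% e.g., G \approx 10\log_{10}\!\left(\frac{T}{V}\right) + 45\ \mathrm{dB}.
% Holding G constant therefore fixes T/V, so
\frac{T}{V} = \text{const} \quad\Longrightarrow\quad
V \mapsto \tfrac{V}{2} \;\Rightarrow\; T \mapsto \tfrac{T}{2}.
```

This is why a physically small room cannot match both the strength and the reverberation time of a large hall, which is the gap an active system is claimed to fill.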
2:00
4pAAa3. Observations from the field: Active acoustic systems, architecture, performers, and audience. Magne Skalevik (AKUTEK and Brekke & Strand, Bolstadtunet 7, Spikkestad 3430, Norway, msk@brekkestrand.no) and Steve Ellison (Meyer Sound Labs,
Inc., Berkeley, CA)
Active acoustic systems provide the potential for acoustic adjustability with far greater parametric control than with architectural
systems. Settings and adjustments may be made and compared in extremely short time frames. With almost limitless combinations possible, how do we quantify and qualify these settings? Observations made during the course of tuning multiple Meyer Sound Constellation acoustic system installations are presented. The importance of adjusting spatial energy balance and early reflections and
reverberation parameters, subjective observations and objective measurements, performer and audience impressions, and user interface
will be discussed.
2:20
4pAAa4. Various applications of active field control (AFC). Takayuki Watanabe and Hideo Miyazaki (Yamaha Corp., 10-1 Nakazawa-cho, Nakaku, Hamamatsu 430-8650, Japan, takayuki.watanabe@music.yamaha.com)
AFC was developed as a tool for acoustical design and has been improved to meet various requirements in acoustic consulting projects. The current, seventh-generation system consists of a hybrid of regenerative and non-regenerative systems.
The system configuration is adjusted depending on the objectives for a venue. Two unique cases of AFC applications are discussed. (1)
Compensation of acoustics in the auditoria without orchestra shells. The objectives of the system are (a) the enhancement of early reflections for performers, (b) the extension of the reverberation time and the enhancement of the sound energy on the stage, and (c) the
enhancement of early reflections in the audience area. This system showed an improvement of about 1 to 2 dB in STearly and more than 2 dB in G in the audience area, performance equivalent to or better than that of a simple mobile orchestra shell. (2) Improvement of the
acoustics under the balcony in the auditoria. The system is based on a non-regenerative system which recreates the sound field above the
balcony area by using measured impulse responses. Subjective experiments show that the system's effect is significant.
2:40
4pAAa5. Carmen and Carmencita electroacoustic systems. Jan Jagla, Paul Chervin, and Jacques Martin (Div. Acoustique, Ctr. Scientifique et Technique du Bâtiment, 24 rue Joseph Fourier, Saint Martin d’Hères 38400, France, jan.jagla@cstb.fr)
During the last 20 years, CSTB has successfully installed many Carmen systems in prestigious venues all around Europe requiring variable acoustics. Electroacoustic systems like Carmen turn a performance hall with naturally dry acoustics into a multipurpose venue that
can host any type of performance from theatre to symphonic music. Carmen is based on the virtual wall principle, meaning that it simply
changes the perceived absorption of walls and ceilings and their perceived distance to the receiver by placing Carmen cells on them.
This principle enhances the natural acoustics of the hall without introducing any kind of electronic reverberation. Because of the large number of Carmen cells (typically over 30) required to homogeneously enhance a hall's reverberation time, Carmen is not suited to small venues (under 700 seats); moreover, the main limitation of assisted reverberation systems is their significant price. CSTB therefore developed a new system called Carmencita, which uses far fewer cells (8 to 16) and introduces a specific reverberation matrix to provide small venues with a wide range of acoustic presets. To meet the growing demand for spatially diffused sound effects in theatre and contemporary dance, Carmencita also features new real-time spatial diffusion algorithms.
3:00
4pAAa6. The future of high quality sound reinforcement. Gunter Engel (Müller-BBM, Robert-Koch-Str. 11, Planegg 82152, Germany, Gunter.Engel@mbbm.com)
Combining advanced sound reinforcement techniques and room enhancement features provides impressive new possibilities for a
wide variety of concerts, shows, and acoustic environment design. The resulting natural impression gives the sound designer the means to engage listeners at a level that acts partly below the threshold of consciousness, and is thereby even more effective than striking showmanship. Preserving correct localization in shows, or distributing sound sources around the listener instead of summing all signals at the same position, achieves more than a clear and transparent sound: it dramatically improves clarity, intelligibility, and the naturalness of perception. The lecture describes the underlying techniques used to achieve these effects. Various
examples of applying the concept with the acoustic enhancement system Vivace for opera and drama performances are explained in
detail.
3:20–3:40 Break
3:40
4pAAa7. Object based approach for electronic variable acoustics systems. Javier Frutos-Bonilla, Sandra Brix, Christoph Sladeczek,
Rene Rodigast (Acoust., Fraunhofer IDMT, Ilmenau, Germany), and Bjorn van Munster (Astro Spatial Audio, Vanmunster BV, Groesstraat 3, Odiliapeel, Netherlands, b.vanmunster@astroaudio.eu)
Demand for spatially accurate high-quality sound reproduction in the audio entertainment industry is now at an entirely new level. In
today’s world of immersive entertainment, sound reproduction is about more than just decibels—via the accurate localization of sound
sources, performances can be immeasurably enhanced in classical concerts, operas, theatres, and every field of live entertainment. The
days of two-dimensional PAs are over—a new era has arrived with 3D spatial sound and enhanced room acoustics. This paper describes
a fully object-based approach to immersive sound reproduction, combining 3D positioning of audio sources and acoustic enhancement
on the same unit by processing room reflections as audio objects. The algorithms combine the flexibility of traditional in-line digital
reverberation devices with the quality and natural impression of a regenerative reverberation. The result of this process is a subset of discrete, early and late reverberant reflections which are convolved with the direct sound to serve as input objects for further processing.
This unique object-based system approach offers many advantages over the traditional channel-based approach. The technological principles of this approach, along with results of successful applications throughout Europe, will be shown.
4:00
4pAAa8. Where not to install a reverberation enhancement system. David H. Griesinger (Res., David Griesinger Acoust., 221 Mt
Auburn St. #504, Cambridge, MA 02138, dgriesinger@verizon.net)
For hundreds (perhaps thousands) of years venues for live performance emphasized communication—the transfer of speech or musical information from performers to listeners—as paramount to success. The Greeks could perform drama to 15,000 listeners without
microphones, and halls used by Haydn, Mozart, and Beethoven were small and dry. Wagner built a fan-shaped opera hall with an
absorbent stage, a covered pit, and no lateral reflections. It is still considered perhaps the best in the world. Sabine and others made lecture and Vaudeville halls that worked. But in the 20th century suddenly early reflections were essential, and long reverberation times desirable. Wood stages and lateral reflections were mandatory. These ideas linger on. In this talk we share our experiences of often
misguided attempts to improve sound in venues of all types. The lessons we learned from a few perceptive musicians and artists have
changed our approach to acoustics. Attention is the key, not reverberation. Reverberation can be lovely if it enhances a listening experience, but it must not muddle the performance. There are ways to get it right, but doing so requires the right system, careful adjustment,
and close attention to the natural acoustics of the venue.
Contributed Paper
4:20
4pAAa9. The fictional environment meets acoustic reality—Simulating and creating acoustical conditions for laboratory testing. Steve Barbar (E-coustic Systems, 30 Dunbarton Rd., Belmont, MA 02478, steve@lareslexicon.com) and Russ Berger (Russ Berger Design Group, Carrollton, TX)
In recent years, we have installed electro-acoustic enhancement systems in specialized acoustic laboratories. These labs incorporate sound isolating environments with loudspeakers that enable multi-channel playback of soundscape recordings that reproduce the acoustical conditions of a variety of spaces. In addition, the systems use electronic architecture to carefully match the acoustics of the soundscape recordings, enabling live sources in the lab to interact with these environments in real time.
Invited Papers
4:40
4pAAa10. The Rediscovered Engineering Lab Notebooks (1936-1944) of Ben Bauer. Michael S. Pettersen (Corporate History, Shure
Inc., 5800 W. Touhy Ave., Niles, IL 60714, pettersen_michael@shure.com)
Benjamin B. Bauer (1913-1979) was an ASA Fellow and awarded the ASA Silver Medal (1978). He held over 100 patents for acoustical/audio technology, with his first patent being arguably the most significant: the invention of the Uniphase principle integral to the
Shure Unidyne model 55 microphone. Introduced in 1939 and still manufactured today, the Shure Unidyne was the first unidirectional
microphone using a single dynamic element. Today, the Uniphase principle is employed in the vast majority of directional microphones.
In September 2016, Bauer’s engineering lab notebooks dating from 1936 to 1944 were located; they had not been seen for over 50 years.
The presentation provides a peek into these Bauer notebooks as he discovers and refines the Uniphase principle, as well as numerous
other electro-acoustical concepts—some decades ahead of their time.
5:00
4pAAa11. Crosstalk cancellation using multiple synthesis sources. William M. Hartmann and Anthony J. Tropiano (Phys. and Astronomy, Michigan State Univ., Physics-Astronomy, 567 Wilson Rd., East Lansing, MI 48824, hartmann@pa.msu.edu)
The technique of crosstalk cancellation uses two synthesis loudspeakers in an attempt to produce well separated signals in a listener’s
ear canals. A variation on the technique can be applied in precise psychoacoustical experiments by using probe tubes in the ear canals
throughout the presentation. The probe-tube application exhibits remarkable immunity to amplitude and phase variations in the
responses of all transducers because the same system is used for presentation and for initial calibration. The self-correcting nature of the
method even confers immunity to variations in the depth of the probe tubes within the ear canals. Because the solution to the 2-ear-2-speaker problem involves the inverse of a 2×2 matrix, the solution can become unstable, leading to pathologically large amplitudes. This
problem can be almost entirely alleviated by using more than two synthesis speakers, and solving the resulting underdetermined inverse
problem through the Moore-Penrose pseudoinverse. Random perturbation calculations show that using three or four synthesis speakers
also reduces the ear-canal amplitude and phase sensitivity to inadvertent listener movements. However, more realistic calculations for
inadvertent head-rotations indicate an additional important role for synthesis speaker location. [Work supported by the AFOSR.]
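The stabilization step described above can be sketched with NumPy's Moore-Penrose pseudoinverse; the transfer-matrix values below are synthetic placeholders, not measured responses:

```python
import numpy as np

def crosstalk_drive_signals(H, desired):
    """Least-norm speaker drive signals d with H @ d = desired.

    H is the 2 x M matrix of complex gains from M synthesis speakers to the
    two ear canals at one frequency. For M > 2 the system is underdetermined;
    the Moore-Penrose pseudoinverse selects the minimum-energy solution,
    avoiding the pathologically large amplitudes an ill-conditioned 2x2
    inverse can produce.
    """
    return np.linalg.pinv(H) @ desired

# Two ears, four synthesis speakers at a single frequency bin (synthetic data).
rng = np.random.default_rng(0)
H = rng.normal(size=(2, 4)) + 1j * rng.normal(size=(2, 4))
desired = np.array([1.0, 0.0])  # signal at the left ear, silence at the right
d = crosstalk_drive_signals(H, desired)
# H @ d reproduces the desired ear signals while keeping ||d|| minimal
```

Repeating this per frequency bin yields the crosstalk-cancellation filters; the least-norm property is what keeps the speaker amplitudes bounded.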
5:20
4pAAa12. Amplified music and variable acoustics—Shorter reverberation times at low frequencies. Niels W. Adelman-Larsen
(Flex Acoust., Diplomvej 377, Kgs. Lyngby 2800, Denmark, nwl@flexac.com)
A survey from 2010 among professional musicians and sound engineers revealed that a long reverberation time at low frequencies in halls during concerts of reinforced music, such as pop and rock, is a common cause of an unacceptable-sounding event. Lower-frequency
sounds are, within the genre of popular music, rhythmically very active and loud, and a long reverberation leads to a situation where the
various notes and sounds including vocals cannot be clearly distinguished. This reverberant bass sound rumble often partially masks
even the direct higher-pitched sounds. In an article from 2011, it was hypothesized that mid- and high-frequency sound is seldom a cause of poor clarity and definition, because audience absorption is roughly six times higher at these frequencies than at low frequencies and loudspeakers are more directive there. A survey from December 2016 among 25 professional musicians and sound engineers confirms that a
longer reverberation at higher frequencies in the empty hall can advantageously be projected and that the 125 Hz octave band is probably
the single most important band to control. Details from this survey and results regarding the author’s most recent developments in the
field of variable and mobile absorption in favor of both amplified and classical musical genres are presented.
WEDNESDAY AFTERNOON, 28 JUNE 2017
ROOM 208, 1:20 P.M. TO 5:00 P.M.
Session 4pAAb
Architectural Acoustics: Simulation and Evaluation of Acoustic Environments II
Michael Vorländer, Cochair
ITA, RWTH Aachen University, Kopernikusstr. 5, Aachen 52056, Germany
Stefan Weinzierl, Cochair
Audio Communication Group, TU Berlin, Strelitzer Str. 19, Berlin 10115, Germany
Ning Xiang, Cochair
School of Architecture, Rensselaer Polytechnic Institute, Greene Building, 110 8th Street, Troy, NY 12180
Invited Papers
1:20
4pAAb1. Considerations regarding the equalization for spatial playback from spherical microphone array recordings. Jens
Meyer, Gary W. Elko, and Steven Backer (mh Acoust., 25A Summit Ave., Summit, NJ, jmm@mhacoustics.com)
The encoding of spatial sound fields captured by spherical microphone arrays into Eigenbeams (HOA signals) provides a compact
representation of the sound field that facilitates spatially realistic audio playback in Higher-Order Ambisonics (HOA) applications. For
spatial audio reproduction, a typical Ambisonics decoder generates the loudspeaker signals from the HOA signals by applying only simple scalar weights. However, since the HOA signals derived from physical microphone arrays are frequency dependent, such a frequency-independent scalar weighting scheme will not result in the desired loudspeaker sound field reproduction. An analysis of this frequency
dependency and some equalization schemes to mitigate detrimental effects resulting from frequency independent loudspeaker decoding
will be discussed.
1:40
4pAAb2. Room measurement setup for accurate objective analysis and realistic subjective comparison of concert halls. Matthew
T. Neal and Michelle C. Vigeant (Graduate Program in Acoust., Penn State Univ., 201 Appl. Sci. Bldg., University Park, PA 16802,
mtn5048@psu.edu)
When measuring a concert hall, a primary goal is to characterize the perceptual aspects of the room’s quality. Since subjective testing requires extensive time and resources, halls are often compared using objective metrics designed to correlate with perception. The
goal of the present work is to create a concert hall measurement database that allows for both accurate objective and realistic subjective
assessment of concert halls. A 32-element spherical microphone array is used to capture a spatial room impulse response (SRIR) within
a hall. A three-way omnidirectional sound source is used for objective measurements, exhibiting omnidirectional behavior up to 5 kHz.
The omnidirectional SRIR can be used to calculate typical room acoustic metrics and analyzed using beamforming techniques. For a realistic auralization that resembles an orchestral performance, an alternate sound source was used: a compact 20-element spherical loudspeaker. This loudspeaker can be moved around a stage and the measurement signal can be processed to mimic the directivity of
different instruments. These individual instrument SRIRs can be combined and processed using Ambisonics to recreate full orchestral
auralizations of measured halls. The measurement setup will be presented, along with initial measurement results, and plans for future
subjective testing. [Work supported by NSF Award 1302741.]
Contributed Paper
2:00
4pAAb3. Array source for fast sequential matched multiple input multiple output (MIMO) room impulse measurements. Johannes Klein, Marco
Berzborn, and Michael Vorlaender (Inst. of Tech. Acoust., RWTH Aachen
Univ., Kopernikusstr. 5, Aachen 52074, Germany, johannes.klein@akustik.rwth-aachen.de)
Multiple input multiple output (MIMO) room impulse responses are the
basis for directivity-related analysis and realistic reproduction of room
acoustics. To obtain measurements with high spatial resolution, specialized
loudspeaker and microphone arrays have to be used.
Due to the sensor size, high-order microphone arrays can be constructed as one physical element, whereas the respective loudspeaker arrays require sequential measurement techniques. Based on previous work
concerning a dodecahedron approach [Behler 2010] a high-order loudspeaker array for measurements in a wide frequency range has been developed [Klein 2012]. It has since been thoroughly investigated [Klein 2015]
and used in concert hall MIMO room impulse measurements [Noisternig
2016]. A major drawback of sequential room impulse measurements is their
susceptibility to even minor time variances of the medium [Guski 2015].
Systematic limitations in the joint source and receiver array design [Morgenstern 2017] confine specific source receiver combinations to a narrow
frequency range. Moreover, high-order wide frequency range sources
require a large mechanical support structure, leading to unwanted reflections. Considering past experience and these studies, a new source for fast sequential MIMO room impulse measurements in a receiver-matched
frequency range is developed. The considerations and the source will be
presented.
Invited Papers
2:20
4pAAb4. A comparison of modal and spatially matched-filter beamforming for rigid spherical microphone arrays in the context
of data-based binaural synthesis. Sascha Spors, Till Rettberg, and Fiete Winter (Inst. of Communications Eng., Univ. of Rostock,
Richard-Wagner-Strasse 31, Rostock 18119, Germany, Sascha.Spors@uni-rostock.de)
Several approaches to data-based binaural synthesis have been published that capture a sound field by means of a spherical microphone array. The captured sound field is decomposed into plane waves, which are auralized using (far-field) head-related impulse
responses (HRIRs). In practice, the decomposition into plane waves is performed by beamforming. A well-known technique for spherical microphone arrays is modal beamforming where the sound field is represented with respect to surface spherical harmonics. A
numerically stable approximation is the spatio-temporal matched-filter technique. Here the acoustic reciprocity theorem is used in conjunction with the matched filter technique for beamforming. This paper reviews the matched filter beamformer and compares its properties to the modal beamformer in a realistic scenario. The performance of the two beamforming techniques is compared in the context of
data-based binaural synthesis. Time-domain properties of the resulting ear-signals, as well as suitable binaural measures, are investigated
for simulated and captured scenarios.
2:40
4pAAb5. Spatial perception in binaural reproduction with sparse head-related transfer function measurement grids. Zamir Ben-Hur (Oculus & Facebook and Ben-Gurion Univ. of the Negev, Be’er Sheva, Be’er Sheva 8410501, Israel, zami@post.bgu.ac.il), David
L. Alon (Oculus & Facebook and Ben-Gurion Univ. of the Negev, Beer-Sheva, Israel), James Hillis (Oculus & Facebook, Redmond,
WA), Boaz Rafaely (Elec. & Comput. Eng., Ben-Gurion Univ. of the Negev, Beer Sheva, Israel), and Ravish Mehra (Oculus & Facebook, Redmond, WA)
Growing interest in virtual reality has led to greater demand for immersive virtual audio systems. High fidelity spatial audio requires
individualized head-related transfer functions (HRTFs). Individualized HRTFs are, however, typically unavailable as they require specialized equipment and a large number of measurements. This motivates the development of a simpler, more accessible HRTF estimation
process. Previous work has demonstrated that spherical-harmonics (SH) can be used to reconstruct the HRTF from a smaller number of
sample points, but this method has two known types of error: spatial aliasing and truncation error. Aliasing refers to the loss of the ability to represent high frequencies. Truncation error refers to the fact that the SH representation will be order-limited, which further limits spatial resolution. In this paper, the effect of sparse measurement grids on the reproduced binaural signal is studied by analyzing both types of errors.
The expected effect of these errors on the perceived location, externalization and source width, is studied using a theoretical model and
an experimental investigation. Preliminary results indicate relatively large effects of SH order on sound localization and smaller effects
due to aliasing errors.
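As a rough back-of-the-envelope illustration of the order limits discussed above (a rule-of-thumb sketch, not the authors' model), the number of measurement directions bounds the usable SH order, and the order in turn bounds the frequency up to which the representation is accurate via the common kr ≤ N criterion:

```python
import math

def max_sh_order(num_points):
    """Highest SH order N whose (N+1)^2 coefficients can be estimated from
    num_points measurement directions (a necessary condition only; the
    grid must also be well distributed over the sphere)."""
    return math.isqrt(num_points) - 1

def truncation_frequency_hz(order, head_radius_m=0.0875, c=343.0):
    """Frequency up to which an order-N representation is roughly accurate,
    from the rule of thumb k * r <= N, with k = 2*pi*f/c and r a nominal
    head radius (the 0.0875 m default is an assumed average, not measured)."""
    return order * c / (2 * math.pi * head_radius_m)

for q in (25, 100, 2702):  # from a sparse grid to a dense measurement grid
    n = max_sh_order(q)
    print(q, n, round(truncation_frequency_hz(n)))
```

For example, 25 directions support at most order 4, which by this rule covers only the low few kilohertz, illustrating why sparse grids degrade localization cues.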
3:00
4pAAb6. Efficient aliasing-free HRTF representation. David L. Alon (Oculus & Facebook and Ben-Gurion Univ. of the Negev, Redmond, Redmond, WA, davidalon@fb.com), Zamir Ben-Hur (Oculus & Facebook and Ben-Gurion Univ. of the Negev, Be’er Sheva,
Israel), Boaz Rafaely (Dept. of Elec. and Comput. Eng., Ben-Gurion Univ. of the Negev, Beer Sheva, Israel), and Ravish Mehra (Oculus
& Facebook, Redmond, WA)
Previous studies have shown that individualized head related transfer functions (HRTFs) provide improved localization performance
compared to generic HRTF filters, and are therefore considered preferable for binaural sound reproduction. However, individualized
HRTFs typically require a large number of measurements, which may extend to several hours. Therefore, they are currently unavailable
for the vast majority of users. One approach to lowering measurement complexity is to reduce the number of HRTF measurements. Although this approach simplifies the measurement process, it produces spatial aliasing, leading to interpolation error away from the measured
directions. In this study, a new method for the measurement of individualized HRTFs with a reduced spatial aliasing error is developed.
The reduced spatial aliasing error is achieved by incorporating a small number of individualized HRTF measurements with information
from a high-resolution HRTF database, which leads to an optimal interpolation process in the minimum mean-square-error sense. An
experimental investigation is conducted to validate the improved performance of the proposed method.
3:20–3:40 Break
3:40
4pAAb7. Omnidirectional measurement of room impulse response using a small bookshelf loudspeaker as a sound source.
Takayuki Hidaka and Shin-ichiro Koyanagi (Takenaka R&D Inst., 1-5-1, Otsuka, Inzai, Chiba, Tokyo 270-1395, Japan, hidaka@pep.ne.jp)
A dodecahedral loudspeaker is commonly used in room acoustical measurements as an omnidirectional sound source. However,
inverse filtering to obtain an ideal room impulse response (RIR) is not easy because of its irregular frequency and temporal characteristics. When this RIR is convolved with music signals, the reproduced sound does not possess adequate fidelity to evaluate the acoustical
quality of the concert hall. Hence, a new method of measuring the RIR with an omnidirectional source is proposed: RIRs are measured by a small bookshelf loudspeaker rotated about its acoustic center to face multiple directions, and the RIRs are averaged to obtain an omnidirectional response. Physical characteristics of the RIR determined with this procedure are given, and the room acoustical parameters are
compared with those by the dodecahedral method. Finally, application to subjective experiments is described.
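The averaging step of the proposed method can be sketched as follows (illustrative only; the paper's alignment and weighting details are not given in the abstract):

```python
import numpy as np

def omnidirectional_rir(rirs):
    """Average RIRs measured with the loudspeaker rotated to several
    orientations. Time-domain averaging assumes the responses are already
    time-aligned, which holds when the rotation is about the acoustic
    center so the source-receiver path length is unchanged."""
    stack = np.stack(rirs)  # shape: (num_orientations, num_samples)
    return stack.mean(axis=0)

# Toy check: opposite orientation-dependent deviations cancel in the average.
t = np.arange(8)
common = np.exp(-t / 4.0)            # direction-independent part of the RIR
rirs = [common + 0.1, common - 0.1]  # two orientations with opposite deviation
omni = omnidirectional_rir(rirs)
```

The averaged response retains the direction-independent energy while the directional lobes of the bookshelf loudspeaker average out.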
4:00
4pAAb8. Best approaches to modeling audience absorption. Ana M. Jaramillo and Bruce C. Olson (Olson Sound Design, LLC, 8717
Humboldt Ave. N, Brooklyn Park, MN 55444, ana.jaramillo@afmg.eu)
Multiple studies have looked into the best methods for measuring, estimating, and predicting audience and seat absorption, most notably those by L. Beranek. With simulation software, the question remains how best to build the audience sections geometrically and what type of absorption coefficient is appropriate. We created a series of model variations to understand the differences between the proposed methods and their influence on predicted room acoustic results, both statistical and from ray tracing.
4:20
4pAAb9. First international round robin on auralization: Concept and database. Fabian Brinkmann (Audio Commun. Group,
Tech. Univ. Berlin, Einsteinufer 17c, Berlin 10787, Germany, fabian.brinkmann@tu-berlin.de), Lukas Aspöck (Inst. of Tech. Acoust.,
RWTH Aachen Univ., Aachen, Germany), David Ackermann (Audio Commun. Group, Tech. Univ. Berlin, Berlin, Germany), Rob
Opdam, Michael Vorlaender (Inst. of Tech. Acoust., RWTH Aachen Univ., Aachen, Germany), and Stefan Weinzierl (Audio Commun.
Group, Tech. Univ. Berlin, Berlin, Germany)
For evaluating the performance of room acoustical simulations or numerical simulations in general, these are usually compared to
corresponding measurements as a benchmark. Previous studies indicated that differences may result from neglecting wave effects (scattering, diffraction, attenuation at grazing incidence). However, it also proved to be a challenge to provide a precise representation of the
primary and secondary structure (geometry, source and receiver characteristics, absorption and scattering coefficients) of the measured
ground truth to be re-modeled in the simulation. The round robin on auralization aimed to overcome such shortcomings by generating a
ground truth database of room acoustical environments provided to developers of room simulation software. The database includes a
selection of acoustic scenes such as “single reflection,” or “coupled rooms” which isolate single acoustic phenomena, as well as three
complex “real-world” environments of different size. Simulated monaural and binaural impulse responses were evaluated by comparing
them to the corresponding measurements on the basis of acoustical parameters as well as perceptual qualities. We introduce the concept
of the round robin along with the description of the acoustic scenes, the acquisition of monaural and binaural impulse responses, and the
identification of the boundary conditions.
4:40
4pAAb10. First international round robin on auralization: Results of the acoustical evaluation. Lukas Aspöck (Inst. of Tech.
Acoust., RWTH Aachen, Kopernikusstr. 5, Aachen D-52074, Germany, las@akustik.rwth-aachen.de), Fabian Brinkmann, David Ackermann (Audio Commun. Group, Tech. Univ. Berlin, Berlin, Germany), and Michael Vorlaender (Inst. of Tech. Acoust., RWTH Aachen,
Aachen, Germany)
The first round robin on auralization allows developers of room acoustical simulation software to provide results of their simulation
model for typical scenarios of sound propagation and room acoustics. In contrast to the past round robins on room acoustics simulation,
it includes a perceptual evaluation of the simulation results in addition to the evaluation based on room impulse responses and room
acoustic parameters. In this talk, we will present the acoustical analysis of the monaural simulation results, based on differences in the
temporal and spectral features of the transfer function as well as differences in standard room acoustical parameters for the nine scenes
of the round robin. These indicate shortcomings in the corresponding simulation algorithms, provide indications which effects and which
acoustical situations are particularly challenging for the different simulation approaches, and predict potential perceptual differences in
the listening tests.
WEDNESDAY AFTERNOON, 28 JUNE 2017
ROOM 206, 1:20 P.M. TO 2:20 P.M.
Session 4pAAc
Architectural Acoustics: Topics in Architectural Acoustics Related to Measurements II
Ian B. Hoffman, Chair
Judson Univ. - Dept. of Architecture, 1151 N State Street, Elgin, IL 60123
Contributed Papers
1:20
4pAAc1. Recording and implementing multi-source anechoic material
for use in auralizations. Gregory A. Miller and Marcus R. Mayell (Threshold Acoust., LLC, 53 W. Jackson Boulevard, Ste. 815, Chicago, IL 60604,
gmiller@thresholdacoustics.com)
Live theater or musical performance rarely involves a single performer on
stage. Indeed, many of the acoustic cues leading to one’s impression of a performance space are the result of hearing multiple performers interacting with
one another across a stage. When using an auralization as a design tool, the
interplay of different sources can be extremely valuable both for the acoustic
designers and building owners. Some multi-source anechoic material is publicly available, but the material is limited both in the quality of performance
and the range of genres. The authors sought to improve upon available material by recording professional actors, singers, and instrumentalists for use in
auralizations. This paper will report on the recording techniques, strategies
for allowing musicians to play in unison (in tempo, tonality, and dynamics),
and implementation in computer modeling and ambisonic playback.
1:40
4pAAc2. Hybrid modeling and auralization for complex structures. Jon W. Mooney and Hannah D. Knorr (Acoust. & Vib., KJWW Eng. Consultants, 623 26th Ave., Rock Island, IL 61201, mooneyjw@kjww.com)
When a design is too complex for direct computer simulation, a hybrid physical/computer model can be an effective tool. Such was the case of the concert stage for Grand Junction Park in Westfield, Indiana. The stage façade comprises an opus spicatum of 6x6-inch stone caps on square aluminum tubing. In addition to the dramatic look, the complicated surface provides an opportunity to control diffusion, reflection, and absorption. But the effect that the irregular 6-inch tier construction would have on the stage acoustics was a troubling unknown. With nearly 10,000 closely packed surfaces, each with dimensions straddling audible wavelengths, direct computer modeling and auralization of the pavilion’s acoustics was felt to be unreliable. Therefore, a 1/12th-scale physical model was built and used to obtain high-frequency impulse responses for the design. Post-processing of the responses showed a dip in the frequency response, most notably for the flute section as heard at the conductor’s position. Additional post-processing was used to create auralizations of a 25-piece orchestra to demonstrate the magnitude of the effect. This presentation will describe in detail the hybrid modeling and auralization used for this project, the results of the testing, and the recommendations made to the architect.
2:00
4pAAc3. The model versus the room: Parametric and aural comparisons of modeled and measured impulse responses. Kelsey Hochgraf, Jonah Sacks, and Benjamin Markham (Acentech, 33 Moulton St., Cambridge, MA 02138, khochgraf@acentech.com)
Computer modeling and auralization have proven their value in the acoustical design of performance venues. However, achieving and evaluating parametric accuracy and perceptual plausibility continue to be a challenge. Advances in our measurement techniques have given us new opportunities to test our models against reality in several recently completed spaces. Impulse responses were collected from the built spaces using a B-format (multichannel) microphone, and these were analyzed in terms of numerical acoustical parameters and for the directionality and timing of reflected sound arrival. These multichannel impulse responses were also convolved with speech and music, and listening comparisons were made with the same speech and music convolved with simulated impulse responses. This session will present the results of these tests and lessons learned regarding the strengths and limitations of these techniques.
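The auralization workflow described in these abstracts rests on one core operation: convolving a dry (anechoic) recording with a measured or simulated room impulse response. A minimal sketch of that step follows; the signals and values are purely illustrative, not the authors’ data.

```python
# Hypothetical sketch of the auralization step: a "dry" anechoic signal is
# convolved with a room impulse response (RIR) so the recording is heard
# "through" the room. All values here are toy examples.

def convolve(dry, rir):
    """Direct linear convolution; output length is len(dry) + len(rir) - 1."""
    out = [0.0] * (len(dry) + len(rir) - 1)
    for i, x in enumerate(dry):
        for j, h in enumerate(rir):
            out[i + j] += x * h
    return out

# A toy RIR: direct sound followed by two decaying reflections.
rir = [1.0, 0.0, 0.5, 0.0, 0.25]
dry = [1.0, -1.0]          # a short "dry" test signal

wet = convolve(dry, rir)   # the auralized ("wet") signal
```

In practice this is done with FFT-based convolution (e.g., overlap-add) on full-length B-format or binaural impulse responses; the direct form above only shows the operation itself.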
WEDNESDAY AFTERNOON, 28 JUNE 2017
ROOM 206, 2:35 P.M. TO 5:40 P.M.
Session 4pAAd
Architectural Acoustics: Recent Developments and Advances in Archeo-Acoustics and Historical
Soundscapes II
David Lubman, Cochair
DL Acoustics, 14301 Middletown Ln., Westminster, CA 92683-4514
Miriam A. Kolar, Cochair
Architectural Studies; Music, Amherst College, School for Advanced Research, 660 Garcia St., Santa Fe, NM 87505
Elena Bo, Cochair
DAD, Polytechnic Univ. of Turin, Bologna 40128, Italy
Chair’s Introduction—2:35
Invited Papers
2:40
4pAAd1. How they changed: Therukoothu performance in a noisy city. Umashankar Manthravadi (East Villa, 46/5 Hutchins Rd., Bangalore, Karnataka 560084, India, umashanks@yahoo.com)
When it first evolved several hundred years ago, the Kerala Koothambalam, a theatre attached to a temple, heard performances of Sanskrit drama in quiet, almost rural surroundings. The mizhavu, a very loud and important part of the performance, did not drown out the words. But now it does. Many performances these days use microphones and loudspeakers to overcome the cacophony outside. The space we are measuring is in the heart of Thrissur, a thriving city in Kerala. This is a study of the acoustics of the Koothambalam, a wooden structure which has not changed much in the last several centuries. We will also be recording a koothu, one of the plays performed in this space, as well as taking accurate measurements and photographs. We will also record the performance in a studio, to process with the ambisonic impulse responses recorded in the theatre, to recreate the sound of the performance in quieter times. The presentation will be a talk with brief audio and visual samples.
3:00
4pAAd2. Acoustic virtual reconstruction of the Roman theater of Posillipo, Naples. Gino Iannace (Dept. of Architecture and Industrial Design, Università della Campania “Luigi Vanvitelli”, Borgo San Lorenzo, Aversa, Caserta 81031, Italy, gino.iannace@unina2.it) and Umberto Berardi (Dept. of Architectural Sci., Ryerson Univ., Toronto, ON, Canada)
On the hill of Posillipo, during Imperial times, a sumptuous villa overlooking the sea was built. This villa contained an odeon and a theater. Over the centuries, the villa was destroyed, and only towards the end of the 1800s did the discovery of some of the archaeological heritage of this site begin. Today, the theater retains the lower part of the cavea, while the scene and the stage have not yet been reconstructed. In the central part of the orchestra, there is a pool that was perhaps used for aquatic events. The purpose of this work is to reconstruct the acoustics of the theater as it appeared during the Imperial period. For this purpose, room acoustics software was used. Acoustic measurements were carried out in situ to evaluate the acoustic characteristics in the current state, and the virtual model was then tuned to the results of the impulse response measurements. The acoustic analysis was finally carried out using the acoustic properties of the auditorium in its present state.
3:20
4pAAd3. Toward opera houses: Acoustic analysis of the ancient literature. Dario D’Orazio (DIN, Univ. of Bologna, Viale Risorgimento, 2, Bologna 40128, Italy, dario.dorazio@unibo.it) and Elena Bo (DAD, Polytechnic Univ. of Turin, Turin, Italy)
From the 16th century to the present, the demand for buildings where opera took place followed the development of the melodrama. Several authors wrote about the optimal design of an opera house. While some of them did not cross the boundaries of theoretical research, such as Milizia, others went on to build theaters following their particular theories, such as Nicolini with the San Carlo theater in Naples. Early authors focused on merely architectural systems and principles, following the classical approach, as Carini-Motta did in relation to Vitruvius. Later authors, from the 19th century on, widened the scope to include social reasons in theater design. The aim of this article is to focus on the publications regarding theater design by the early authors, analyzing them by highlighting the consistencies and discrepancies with respect to contemporary literature findings.
3:40–4:00 Break
4:00
4pAAd4. Falling coins, striking matches, and whispering voices to demonstrate the acoustics of an open air amphitheatre. Constant Hak, Remy Wenmaekers, Niels Hoekstra, and Bareld Nicolai (Bldg. Phys. and Services, Eindhoven Univ. of Technol., Den Dolech
2, Eindhoven 5600 MB, Netherlands, c.c.j.m.hak@tue.nl)
Part of the Ancient Acoustics research project was to measure a dense grid of impulse responses in three ancient amphitheatres in accordance with the ISO 3382 standard. In this conference paper, the measurements performed in the theatre of Epidaurus at 264 receiver
positions are used to study the “signal” to “noise” ratios (SNR) for typical low energy sound sources such as falling coins, striking
matches and whispering voices, described in travel guides and used by tourists to confirm or disclaim the exceptional acoustical quality
of open air amphitheatres. The “signal” is obtained from sound power measurements of these sources in a reverberation room, combined
with the measured Sound Strength G at every position in the theatre. To determine G, asynchronous impulse response measurements
were carried out using two dodecahedron sound sources playing stimulus signals simultaneously. The measured background noise was
used as the “noise” in the SNR. In addition, speech intelligibility measurements were carried out in accordance with IEC 60268-16, using a special “speech source” that could be used to determine the SNR of a whispering voice. Using the sound sources mentioned, no evidence of the often-claimed exceptional sound transmission was found.
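The SNR estimate described in this abstract combines a source’s sound power level with the Sound Strength G measured at each seat. Since G (ISO 3382-1) compares the received level with the free-field level of the same omnidirectional source at 10 m, which is L_w − 31 dB, the received level is L_p = L_w − 31 + G. A hedged sketch of that arithmetic follows; all numbers are invented for illustration and are not the Epidaurus data.

```python
import math

# Illustrative sketch (not the authors' code) of estimating the SNR of a
# weak source (e.g., a falling coin) from its sound power level L_w and the
# Sound Strength G measured at a listener position.

def received_level(lw_db, g_db):
    """L_p = L_w - 31 + G.

    G compares the received level with the free-field level of the same
    omnidirectional source at 10 m, which is L_w - 10*log10(4*pi*10**2),
    i.e., roughly L_w - 31 dB.
    """
    free_field_10m_offset = 10 * math.log10(4 * math.pi * 10**2)  # ~31 dB
    return lw_db - free_field_10m_offset + g_db

def snr(lw_db, g_db, noise_db):
    """Signal-to-noise ratio in dB against the measured background level."""
    return received_level(lw_db, g_db) - noise_db

# Made-up values: a quiet source (L_w = 60 dB), G = 15 dB at the seat,
# and 40 dB of background noise.
print(round(snr(60.0, 15.0, 40.0), 1))  # prints 4.0
```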
4:20
4pAAd5. Tracing ancient soundscapes. Pamela Jordan (Eng. Acoust., Tech. Univ. Berlin, Technische Universität Berlin, Institut für Strömungsmechanik und Technische Akustik, Fachgebiete der Technischen Akustik, Sekr. TA7, Einsteinufer 25, Berlin 10587, Germany, pam.f.jordan@gmail.com)
What if the most intact aspect of an archaeological site is actually its historic soundscape? How could researchers and visitors learn
about a place from sonic “ruins” when the built environment is significantly deteriorated? This paper discusses the case study of Mt.
Lykaion, an ancient Greek sanctuary and athletic complex in near-ground-level ruins. Despite these physical conditions, visitors can experience distinct “sound-lines” throughout the site, where conversations can be held across fifty or one hundred meters in certain locations.
Ongoing field explorations are attempting to trace these acoustic connections and determine whether they align with the original site layout.
A modified impulse response test was employed based on recent architectural and archaeological findings, and the results were recorded
using binaural microphones; the methodology and analysis of the recording data will be presented in detail, with rich possibilities for other
ancient outdoor landscapes as well as historic sites without a surviving written record or physical trace. Using psychoacoustic metrics such
as perceived clarity and loudness for analysis, the recording results demonstrate how sonic relationships may be identified in open-air settings, and present a new way to visualize how ancient sites may have functioned based on the remains of their original soundscape.
4:40
4pAAd6. Why can’t I hear that “still small voice”: Did Neolithic noise growth result in human “alienation from nature”? David
Lubman (DL Acoust., 14301 Middletown Ln., Westminster, CA 92683-4514, dlubman@dlacoustics.com)
Did human consciousness change in the transition from the Paleolithic to the Neolithic era? Paleolithic humans lived with nature in small, isolated groups of perhaps 30 to 150 people. For survival, Paleolithic hunter-gatherers focused their auditory attention on recognizing emerging opportunities and threats. Consequently, Paleolithic people were very attentive to weak background sounds. Sound levels often reached “din” levels, but the din was not noise. The soundscape was signal-rich, reflecting a healthy and resource-rich environment. Neurons
of predator and prey evolved to facilitate recognition of the identity, condition, and intention of their natural sources. The Neolithic era
that followed emphasized large-scale agriculture and herding. Because of the need for organized labor, Neolithic farmers were gradually
distanced from nature and its sounds. Background sounds no longer signified major threats. With the emergence of cities, over time
background sounds became noise. The Neolithic soundscape consisted largely of nearby human and agricultural sounds. So, Neolithic
farmers gradually shifted their auditory attention to foreground sounds. Thus, Neolithics gradually lost aural contact with nature. Noise
may be an important cause for “alienation from nature,” and for inattention to that “still small voice.”
5:00–5:40 Panel Discussion
WEDNESDAY AFTERNOON, 28 JUNE 2017
ROOM 313, 1:20 P.M. TO 5:40 P.M.
Session 4pAB
Animal Bioacoustics: Fish Bioacoustics II: Session in Honor of Anthony Hawkins and Arthur Popper
Joseph A. Sisneros, Cochair
Psychol. Dept., Univ. of Washington, 337 Guthrie Hall, Seattle, WA 98195
Michaela Meyer, Cochair
Neurology, Boston Children’s Hospital, Harvard Medical School, 3 Blackfan Circle, Center for Life Sciences, 14th Floor,
Room 14021, Boston, MA 02115
Invited Papers
1:20
4pAB1. Acoustic communication in fishes: Effects of temperature. Friedrich Ladich (Dept. of Behavioural Biology, Univ. of Vienna,
Althanstrasse 14, Vienna 1090, Austria, friedrich.ladich@univie.ac.at)
Ambient temperature affects peripheral and central mechanisms of signal production and detection in ectothermic animals. The
effects of temperature on sound production have been investigated in representatives of at least 8 families of teleosts, namely, piranhas,
thorny catfishes, toadfishes, gurnards, sculpins, gobies, croakers, and gouramis, mostly under laboratory conditions. Temperature affects
calling behaviour and characteristics of vocalizations, but only a few general trends are evident. Calling activity may increase with rising
temperature (toadfish, sculpins, catfish), or may not be affected (triglids) or even drop (toadfish), indicating that behavioral contexts and
seasonal factors other than temperature influence activity. Temperature affects sound duration differently in different species. In contrast, pulse period usually decreases and the fundamental frequency of drumming sounds therefore increases as temperature rises. The
dominant frequency increases as well, whereas sound pressure level may increase or remain unaffected. Studies in otophysines (cyprinids, catfishes) showed that auditory sensitivities increase at higher temperature in both eurythermal (temperate zone) and stenothermal
(tropical) species. Sensitivities at higher frequencies typically increased by at most 10 dB, with one exception (36 dB, ictalurid catfish). A temperature-dependent sensitivity was described in toadfishes during the breeding season and, together with hormonal changes,
presumably influences the attraction to conspecific calls.
1:40
4pAB2. Evidence for dopaminergic forebrain neurons as modulators of seasonal peripheral auditory sensitivity in a vocal fish.
Paul Forlano (Biology, City Univ. of New York, Brooklyn College and Graduate Ctr., 2900 Bedford Ave., Brooklyn, NY 11210, pforlano@brooklyn.cuny.edu)
Plasticity in sensory and motor circuitry underlies dramatic changes in seasonal reproductive behaviors across vertebrates. In female
midshipman fish, seasonal, steroid-dependent plasticity in the auditory periphery functions to better encode frequencies of the male advertisement call. While seasonal changes in number of hair cells and expression of large-conductance, calcium-activated potassium
(BK) channels contribute to this change in high frequency encoding, efferent input may also play a significant role in seasonal increases
in hair cell sensitivity. We found diencephalic dopamine (DA) neurons directly innervate both the saccule of the inner ear and the cholinergic auditory efferent nucleus in the hindbrain. Ultrastructural investigations using immuno-TEM suggest that DA release occurs in
a paracrine fashion in the saccule while inhibitory and excitatory-like synapses are located on auditory efferent neurons. Based on seasonal changes in DA innervation of both these areas, we hypothesized an inhibitory effect of DA on the peripheral auditory system.
Pharmacological studies combined with hair cell microphonic recordings confirm a significant inhibition of frequency sensitivity by DA
in the saccule. These data suggest that an increase in peripheral auditory sensitivity during the reproductive period results from a release
of DA inhibition in the saccule. Our studies support diencephalic DA neurons as important neuromodulators of adaptive seasonal
changes in audition for enhanced reproduction in midshipman fish.
2:00
4pAB3. Comparison of the saccules and lagenae in six macrourid fishes from different deep-sea habitats. Xiaohong Deng
(CFSAN, U.S. Food and Drug Administration, 5001 Campus Dr., CFSAN ORS Rm. 3E026, College Park, MD 20740, xiaohong.deng@fda.hhs.gov), Hans-Joachim Wagner (Anatomisches Institut, Universitaet Tuebingen, Tuebingen, Germany), and Arthur N. Popper
(Biology, Univ. of Maryland, College Park, MD)
The structures of inner ears were compared between six species of Macrouridae (grenadiers and rattails) that live at different ocean
depths ranging from 200 to 5000 meters. The goal of this comparison is to find out if there are structural differences in their inner ears
related to the depth of habitat. The size of the saccular otolith relative to fish head length varies considerably among the six species, with the largest otolith found in Nezumia aequalis and the smallest in N. parini, a mesopelagic species. N. aequalis is a species with potential sound production and auditory dominance, given the drumming muscle on its swim bladder. Its saccular sensory area is four times
larger than those of the two species that live at shallower depths and have vision as their dominant sense (N. parini and Coryphaenoides rupestris). The differences found in the saccule and lagena of these species reflect the sensory advantages of habitats that are related to the benefits and constraints at different depths. They also reflect each fish’s particular lifestyle and the trade-offs among different sensory systems. The most obvious trade-off among sensory systems is found between vision and hearing.
2:20
4pAB4. Complex acoustic signaling in the toadfish, Opsanus tau. Allen F. Mensinger (Biology Dept., Univ. of Minnesota Duluth,
Biology Dept., 207 SSB, 1035 Kirby Dr., Duluth, MN 55812, amensing@d.umn.edu) and Jacey Van Wert (Univ. of California, Berkeley, Berkeley, CA)
Male oyster toadfish, Opsanus tau, produce long duration (250 to 650 ms) sexual advertisement calls or “boatwhistles” during the
breeding season. When males are in close proximity, they employ multiple vocalization strategies. They normally alternate the production of boatwhistles to avoid temporal overlap. When the fundamental frequencies of multiple callers vary by more than 10 Hz, the frequency is correlated strongly with water temperature. However, when conspecifics with similar frequencies vocalized, the individuals
shifted their fundamental frequencies independent of water temperature, in a jamming avoidance response. Males also emit a single, short-duration pulse or “grunt” (~100 ms), which is emitted almost exclusively during the boatwhistle of a conspecific male and can jam the signal. The disruptive grunt specifically targeted the second stage or tonal portion of the boatwhistle, believed to be the primary
acoustic attractant for females, and its brevity and precision may allow its emitter to remain undetectable. While the acoustic repertoire
of teleost fishes may be less diverse compared with terrestrial species, the toadfish can detect conspecific signals and employ different
strategies to avoid temporal and frequency overlap, thus displaying a capacity for complex acoustic interactions.
2:40
4pAB5. First attempt to visualize otolith motion in-situ using synchrotron radiation imaging techniques. Tanja Schulz-Mirbach
(Dept. Biology II, Zoology, Ludwig-Maximilians-Univ. Munich, Großhadernerstr. 2, Planegg-Martinsried 82152, Germany, schulz-mirbach@biologie.uni-muenchen.de), Alberto Mittone, Alberto Bravin (ID 17 Biomedical Beamline, European Synchrotron Radiation Facility (ESRF), Grenoble, France), Alexander Rack (ID 19 Microtomography Beamline, European Synchrotron Radiation Facility
(ESRF), Grenoble, France), Friedrich Ladich (Dept. of Behavioural Biology, Univ. of Vienna, Vienna, Austria), and Martin Heß (Dept.
Biology II, Zoology, Ludwig-Maximilians-Univ. Munich, Planegg-Martinsried, Germany)
Regarding the basics of ear structure-function relationships in fishes, there is still a substantial lack of knowledge of functional morphology. In particular, the actual motion of the solid otolith relative to the underlying sensory epithelium in the ear has rarely been investigated. To date, analyses of otolith motion have been mainly based on mathematical modeling, which has yielded conflicting results.
Outcomes of a recent modeling study suggest that otolith motion is not a simple oscillation, but depends on the 3D shape of the otolith,
among other factors. Our study thus aims to provide experimental data to test previous models of otolith motion focusing on the relationship between different species-specific otolith shapes and their respective influence on otolith motion. As in-situ investigation of the basic parameters of otolith motion requires an approach with high spatial and temporal resolution, we used synchrotron radiation imaging
techniques. In our comparative approach including the anatomically well studied and closely related cichlid species Steatocranus tinanti
(vestigial swimbladder) and Etroplus maculatus (swimbladder-inner ear connection), we studied the relative motion of the saccular otolith and surrounding tissues provoked by sound stimuli at 0.2 and 0.5 kHz. We will discuss first results as well as methodological
aspects.
3:00
4pAB6. Fish bioacoustics, big data, and changing ocean soundscapes. Aaron N. Rice (BioAcoust. Res. Program, Cornell Univ., BioAcoust. Res. Program, Cornell Lab of Ornithology, 159 Sapsucker Woods Rd., Ithaca, NY 14850, arice@cornell.edu)
For many species of vocalizing fishes, reproductive calls form sustained choruses lasting for weeks to months, and are the dominant
biological sounds in marine and freshwater habitats worldwide. The timing, duration, and magnitude of these choruses are strongly influenced by environmental factors and may further be influenced by increases in anthropogenic noise. Decades of pioneering work on fish hearing have established a solid scientific foundation to inform what and how fishes hear. By combining this foundation on the auditory
system with long-term acoustic recordings of wild populations, we can link organismal-level and population-level processes in fishes to
evaluate how they produce, perceive, and respond to sound. The scientific community is now in a place to begin understanding mechanisms and magnitudes of anthropogenic noise impacts on fishes and evaluate how human activities may influence fish populations. An
integrative and comparative approach to the study of fish bioacoustics demonstrates how fundamental principles of auditory physiology,
behavior, and ecology can inform our interpretation of changing ocean soundscapes.
3:20–3:40 Break
Contributed Papers
3:40
4pAB7. Changes in fish sound production in the northern Gulf of Mexico over six years. Ana Sirović (Scripps Inst. of Oceanogr., 9500 Gilman Dr., MC 0205, La Jolla, CA 92093-0205, asirovic@ucsd.edu), Jigarkumar Patel (Univ. of California San Diego, La Jolla, CA), John Hildebrand (Scripps Inst. of Oceanogr., La Jolla, CA), and Sarah Friedman (Univ. of California Davis, Davis, CA)
Passive acoustics were used to evaluate changes in the presence of soniferous fishes in the northern Gulf of Mexico. Recordings were collected at a
site on the continental shelf at 90 m depth, approximately 40 miles east of
the Mississippi River delta, from July 2010 through 2016. Seven distinct
sounds were detected in 2010 during the first recordings. The occurrence of
these sounds over subsequent years was analyzed using automatic detection
and classification methods based on convolutional neural networks. Croaks
(likely produced by the Atlantic croaker, Micropogonias undulatus) and
“jet-ski” calls (likely produced by the Atlantic midshipman, Porichthys
plectrodon) were among the most common in 2010 with unique diel patterns. The monthly patterns of calling changed over the six years, likely
indicating a change in the fish community structure at this site across the
years. Factors that could explain this change include interannual variability
due to the extent of anoxic zones, variability in ocean temperature, circulation, or possible impact of the oil spill on fishes in the area. Further investigation on the behavioral context of these sounds would provide insight into
likely consequences of the change in calling to these soniferous fishes.
4:00
4pAB8. The effect of anthropogenic and biological noise on fish behavior and physiology: A meta-analysis. Francis Juanes, Kieran Cox, Lawrence Brennan (Biology, Univ. Victoria, 3800 Finnerty Rd., Victoria, BC
V8P 5C2, Canada, juanes@uvic.ca), and Sarah Dudas (Vancouver Island
Univ, Nanaimo, BC, Canada)
Aquatic noise has the potential to travel extreme distances, and as such many marine species rely on the soundscape for auditory information
regarding habitat selection, predator or prey locations, and communication.
These species not only take advantage of the prevailing sounds but also contribute to the soundscape through their own vocalizations. Certain sounds
have the potential to negatively affect marine species, resulting in unbalanced predator-prey interactions and disrupted communication. In an
attempt to determine the implications that changes to the soundscape may
have on fishes, we conducted a meta-analysis focusing on how anthropogenic and biological noises may alter fish behavior and physiology. We
reviewed 3,174 potentially relevant papers of which 44 met our criteria and
were used in the analysis. Results indicated that anthropogenic noise has an
adverse effect on marine and freshwater fish behavior and physiology. Alternatively, biological and environmental noises did not significantly alter fish
behavior and physiology. These findings suggest that although certain species may be more susceptible to anthropogenic noise than others, the vast
majority of fish have the potential to be negatively affected by noise pollution, while biological noises may not have the same negative consequences
for fish behavior and physiology.
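Meta-analyses like the one described above typically reduce each study to a standardized effect size before pooling. A common choice is Hedges’ g, a bias-corrected standardized mean difference between an exposed (noise) group and a control group. The sketch below is illustrative only; it is not the authors’ code, and the example numbers are invented.

```python
import math

# Illustrative sketch of the core effect-size computation behind such a
# meta-analysis: Hedges' g, a bias-corrected standardized mean difference.

def hedges_g(mean_t, sd_t, n_t, mean_c, sd_c, n_c):
    """Hedges' g for a treatment (t) vs. control (c) comparison."""
    # Pooled standard deviation across the two groups
    sp = math.sqrt(((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2)
                   / (n_t + n_c - 2))
    d = (mean_t - mean_c) / sp            # Cohen's d
    j = 1 - 3 / (4 * (n_t + n_c) - 9)     # small-sample bias correction
    return d * j

# Made-up example: noise-exposed fish show a higher stress measure
# than controls (means 12 vs. 10, common SD 2, n = 20 per group).
g = hedges_g(mean_t=12.0, sd_t=2.0, n_t=20, mean_c=10.0, sd_c=2.0, n_c=20)
print(round(g, 3))  # prints 0.98
```

In a full meta-analysis, each study’s g would be weighted by its inverse variance and pooled under a fixed- or random-effects model; only the per-study step is shown here.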
4:20
4pAB9. Application of broadband sound for deterrence of Bighead Carp Hypophthalmichthys nobilis and Silver Carp H. molitrix. James
Wamboldt, Kelsie A. Murchy, Marybeth K. Brey, and Jon J. Amberg
(Aquatic Ecosystem Health, U.S. Geological Survey, 2630 Fanta Reed Rd.,
La Crosse, WI 54603, jwamboldt@usgs.gov)
Bigheaded carps, Bighead Carp Hypophthalmichthys nobilis and Silver
Carp Hypophthalmichthys molitrix, were first imported to the U.S. in the
early 1970s, subsequently escaped and have become a threat to the ecology
and economy of the invaded region. Range expansion throughout the Mississippi River drainage threatens the Laurentian Great Lakes and has
prompted the development of control systems to alter their movement and
behavior. Bigheaded carps’ response to the sound of an outboard motor
(0.06-10 kHz) has led researchers to investigate the use of an acoustic deterrent at points of interest (e.g., lock chambers). However, Bigheaded carp
habituation to repeated, prolonged exposures of sound remains unknown.
Therefore, we tested the ability of Bigheaded carps to habituate to continuous sound in 13.5 × 30 m concrete ponds. To simulate a navigational lock chamber, a 2.5 × 5 m channel was constructed at the center of each pond.
Broadband sound (160 dB) was broadcast continuously for 30 minutes from
two speakers mounted in the center of the channel. Using an acoustic telemetry array, Bigheaded carp behavior was monitored in response to chronic sound exposure. We will present the results from this study and their potential implications for Bigheaded carp management.
4:40
4pAB10. Auditioning fish for sound production in captivity to contribute to a catalogue of known fish sounds to inform regional passive
acoustic studies. Amalis Riera (Biology, Univ. of Victoria, School of EOS,
Bob Wright Ctr. A405, UVic, Victoria, BC V8P5C2, Canada, ariera@uvic.ca), Rodney A. Rountree (Waquoit, MA), and Francis Juanes (Biology,
Univ. of Victoria, Victoria, BC, Canada)
Passive Acoustic Monitoring (PAM) is increasingly used as a method to
characterize underwater soundscapes and the impacts of noise on marine
ecosystems. The natural sounds produced by marine mammals have been
widely studied, enabling the use of PAM as an effective conservation tool.
However, much less is known about fish sound production, particularly in
the northeast Pacific. This lack of information makes it difficult to identify fish sounds that are present in long-term recordings, thus precluding accurate determination of fish species composition and evaluation of the effects of noise on fishes. To identify fish sounds in British Columbia soundscapes, we need catalogues of validated fish sounds from Pacific species. These catalogues will help in comparing validated examples to
unknown sounds found in long-term autonomous recordings. Our goal is to
contribute to building such catalogues and to fill knowledge gaps in fish
acoustic behaviour to support studies on the impact of anthropogenic noise
on Pacific fishes. We are collaborating with aquaria, ocean-based aquaculture facilities and commercial fisheries to monitor, audition, and record the
acoustic behavior of captive and semi-captive fish species. Here, we will
present the results of these studies.
5:00
4pAB11. Classification of pelagic fish using wideband echosounders. Jeroen van de Sande, Benoit Quesson, Peter Beerens (Acoust. & Sonar, TNO The
Netherlands, Oude Waalsdorperweg 63, Den Haag, Zuid-Holland 2597AK,
Netherlands, jeroen.vandesande@tno.nl), and Sascha Fassler (IMARES,
Ijmuiden, Netherlands)
The landing obligation for pelagic fisheries was introduced in January
2015 by the European Commission and will gradually be extended to all
fisheries in 2019. This regulation follows the trend for sustainable fisheries
in Europe and states that all catches must be landed and counted against the
quotas. Pre-catch acoustic fish classification can remotely provide information about fish school composition in terms of species and size distribution.
This technique has been tested using three commercial wideband transceivers mounted on a commercial freezer trawler. Together, the three SIMRAD EK80 transceivers cover a frequency range of 45 kHz to 260 kHz.
Acoustic data were collected for four months during fishing operations in
the North Sea and the English Channel. Custom, through-the-sensor image
processing algorithms have been developed to perform automatic school
detection under severe noise, interference and ship motion conditions.
Dynamic Factor Analysis was applied to exploit common trends in the spectra of the species, resulting in classification scores of 95-100% for the set of
38 homogeneous schools of herring, mackerel and horse mackerel. This
data set is too limited to draw firm conclusions on potential operational performance. However, since the observed spectral trends match theoretical expectations, the approach is promising.
Acoustics ’17 Boston
5:20
4pAB12. Passive acoustic localization of fish using a compact hydrophone array. Xavier Mouy (JASCO Appl. Sci., 2305–4464 Markham St., Victoria, BC V8Z7X8, Canada, Xavier.Mouy@jasco.com), Rodney A. Rountree (Waquoit, MA), Francis Juanes (Biology, Univ. of Victoria, Victoria, BC, Canada), and Stan E. Dosso (School of Earth and Ocean Sci., Univ. of Victoria, Victoria, BC, Canada)
Passive acoustic monitoring of fish in their natural environment is a research field of growing interest and importance. Although many fish species are soniferous, the characterization and biological understanding of their sounds are largely lacking. Many underwater acoustic recordings contain sounds likely produced by fish, but little information can be extracted from them due to the lack of fundamental knowledge about the behaviors they represent. Deploying small hydrophone arrays can help fill some of these knowledge gaps. Passive acoustic localization using fish calls received on multiple hydrophones can be used to estimate swimming speed, calling rate of individual fish, and source level of their calls. This paper focuses on the three-dimensional localization of fish using a compact array of six hydrophones, using both simulated and measured data. Fish sounds were detected manually on one of the hydrophones. Time differences of arrival (TDOAs) were then estimated by cross-correlating the detected signal with signals from the other hydrophones. Linearized Bayesian inversion was employed to localize fish sounds from the measured TDOAs. Localization uncertainties were below 10 cm inside the hydrophone array. Simulated annealing optimization was used to determine the hydrophone configuration providing the smallest localization uncertainties.
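The TDOA estimation step this abstract relies on (cross-correlating a detected signal with the signals from the other hydrophones) can be sketched as follows. This is a minimal illustration with a synthetic delayed call; the function name, sample rate, and toy signal are assumptions, not the study's code:

```python
import numpy as np

def estimate_tdoa(sig_ref, sig_other, fs):
    """Estimate the time difference of arrival (seconds) of sig_other
    relative to sig_ref from the peak of their cross-correlation."""
    xc = np.correlate(sig_other, sig_ref, mode="full")
    lag = int(np.argmax(xc)) - (len(sig_ref) - 1)
    return lag / fs

# Toy example: the same windowed tone arriving 12 samples later
# on a second hydrophone.
fs = 16000
t = np.arange(0, 0.05, 1 / fs)
call = np.sin(2 * np.pi * 300 * t) * np.hanning(t.size)
h1 = np.concatenate([call, np.zeros(64)])
h2 = np.concatenate([np.zeros(12), call, np.zeros(52)])
tdoa = estimate_tdoa(h1, h2, fs)  # 12 / 16000 = 7.5e-4 s
```

In the study, TDOAs of this kind from six hydrophones feed a linearized Bayesian inversion for the three-dimensional source position; that inversion step is not sketched here.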
WEDNESDAY AFTERNOON, 28 JUNE 2017
ROOM 310, 1:15 P.M. TO 5:20 P.M.
Session 4pAO
Acoustical Oceanography, Animal Bioacoustics, and Underwater Acoustics: Acoustics and Acoustic Ecology
of Benthic Communities
Jean-Pierre Hermand, Cochair
LISA - Environmental Hydroacoustics Lab, Université libre de Bruxelles, av. F.D. Roosevelt 50, CP165/57, Brussels 1050, Belgium
Preston S. Wilson, Cochair
Mech. Eng., Univ. of Texas at Austin, 1 University Station, C2200, Austin, TX 78712-0292
Chair’s Introduction—1:15
Invited Paper
1:20
4p WED. PM
4pAO1. Mapping of benthic biophony at ecologically relevant scales. Cédric Gervaise (Inst. of Research CHORUS, 22 Rue du pont noir, Saint Egreve 38120, France, cedric.gervaise@chorusacoustics.com), Julie Lossent (Inst. of Research CHORUS, Grenoble cedex 1, France), Cathy Anna Valentini Poirier, Pierre Boissery (Agence Eau RMC, Marseille, France), and Lucia D. Iorio (Inst. of Research CHORUS, Grenoble, France)
Benthic communities emit a variety of sounds associated with movement and feeding activities. Snaps are wide-band signals whose waveforms allow reliable detection and estimation of time-of-arrival differences between sensors. Because of the snaps’ bandwidths, a compact 2 m hydrophone array is adequate to locate emissions. Merging the localization results of successive emissions allows mapping the benthic biophony using two metrics as a function of position: (1) the number of snaps per minute and square meter and (2) their mean source level (dB re 1 μPa @ 1 m). Algorithms to easily detect and locate benthic snaps using two hydrophones are detailed, and their accuracy and resolution are assessed. With data from artificial reefs, we were able to locate and image the shape of each artificial reef within a 100 m radius around a fixed array with a resolution of 5 meters. For the nearest artificial reef, 3 m away, the image shows intra-reef details with a resolution of 20 cm. To map a larger area, the array is carried by a drifting buoy. With data from a seagrass meadow on a rocky substrate, we image the biophony over a 1 km² surface and demonstrate its very high spatial variability, mainly driven by the topography of the rocky substrate.
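The first mapping metric, snaps per minute and square meter, amounts to gridding the localized snaps and normalizing by recording time and cell area. A minimal sketch follows; the grid extent, cell size, and snap positions are invented for illustration, not values from the study:

```python
import numpy as np

def snap_density_map(xy, minutes, extent=50.0, cell=5.0):
    """Snap rate density (snaps / minute / m^2) on a square grid of
    half-width `extent` metres and resolution `cell` metres, from an
    (N x 2) array of localized snap positions `xy`."""
    n = int(2 * extent / cell)
    counts, _, _ = np.histogram2d(xy[:, 0], xy[:, 1], bins=n,
                                  range=[[-extent, extent]] * 2)
    return counts / (minutes * cell ** 2)

# 30 snaps all localized near (1 m, 1 m) during a 10-minute recording:
snaps = np.ones((30, 2))
dens = snap_density_map(snaps, minutes=10.0)
# the occupied 5 m cell holds 30 / (10 * 25) = 0.12 snaps/min/m^2
```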
Contributed Paper
1:40
4pAO2. A convolutional neural network based approach for classifying Delphinidae vocal repertoire. Genevieve E. Flaspohler (Appl. Ocean Phys. and Eng., Woods Hole Oceanographic Inst., 266 Woods Hole Rd., Mailstop 07, Blake 209, Woods Hole, MA 02543, geflaspo@mit.edu), Tammy Silva (Biology, Univ. of Massachusetts Dartmouth, Dartmouth, MA), T. Aran Mooney (Biology, Woods Hole Oceanographic Inst., Woods Hole, MA), and Yogesh Girdhar (Appl. Ocean Phys. and Eng., Woods Hole Oceanographic Inst., Woods Hole, MA)
Delphinidae are known to produce a rich variety of vocalizations whose function and form are not yet completely understood. The ability to reliably identify the vocal repertoire of Delphinidae species will permit further exploration of the role acoustic signals play in Delphinidae social structure and behavior. This work applies contemporary techniques from human speech recognition and computer vision to automatically classify the vocal repertoire of wild spotted dolphins (Stenella attenuata), using acoustic data collected from hydrophones mounted directly on free-swimming animals. The performance of a variety of machine learning algorithms, including SVMs, random forests, dynamic time warping, and convolutional neural networks, is quantified using data features from both the fields of cetacean acoustics and human speech recognition. We demonstrate that maximal performance is achieved using acoustic spectrogram features in conjunction with a convolutional neural network (CNN), a machine learning technique known to produce state-of-the-art results on image classification tasks. Even on a relatively small dataset (~350 labeled vocalizations), the CNN achieves 86% classification accuracy on eleven unique vocalization types, with ground-truth labels provided by domain scientists. This work demonstrates that a complex, parametric model, such as a CNN, can be effectively applied to small-data domains in ecology to achieve state-of-the-art performance in acoustic classification problems.
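The spectrogram features that feed such a CNN come from a short-time Fourier transform. The sketch below shows the feature-extraction step only (the CNN sits on top of it); the frame length, hop size, and toy tone are illustrative assumptions, not the study's settings:

```python
import numpy as np

def log_spectrogram(x, nfft=256, hop=128):
    """Log-magnitude spectrogram (frames x frequency bins), the kind
    of image-like feature a CNN classifier consumes."""
    frames = [x[i:i + nfft] * np.hanning(nfft)
              for i in range(0, len(x) - nfft + 1, hop)]
    spec = np.abs(np.fft.rfft(np.array(frames), axis=1))
    return np.log10(spec + 1e-12)

# A 1-s, 300 Hz tone sampled at 1600 Hz concentrates energy in
# frequency bin 300 / (1600 / 256) = 48.
fs = 1600
x = np.sin(2 * np.pi * 300 * np.arange(fs) / fs)
S = log_spectrogram(x)  # shape (11, 129)
```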
Invited Paper
2:00
4pAO3. Comparative analysis of /kwa/ fish sounds recorded during an early ecoacoustics experiment in a Posidonia oceanica seagrass meadow (Ustica Island, 1999). Xavier Raick (Inst. of Res. CHORUS, Chaire CHORUS, Grenoble INP Foundation 46, Ave. Felix Viallet, Grenoble 38031, France, xavierraick@hotmail.com), Cédric Gervaise (Inst. of Res. CHORUS, Saint Egreve, France), and Jean-Pierre Hermand (LISA Environ. HydroAcoust. Lab, Brussels, Brussels Capital, Belgium)
Passive acoustics enables studying marine habitats thanks to the sound production of their inhabitants. A key question is: “Can fish sounds be used as environmental proxies?” Posidonia oceanica (L.) Delile 1813 seagrass meadows constitute an important ecosystem of the Mediterranean Sea which protects many species of invertebrates and fishes, some of which produce sounds with distinctive features. An arched frequency modulation centred at 747 Hz, which aurally sounds like /kwa/, is being investigated as an environmental proxy (Raick et al. 2017). Remarkably, these sounds were already noted as a distinctive feature of the soundscape recorded during an early acoustic ecology experiment, USTICA 99, which measured simultaneously photosynthetic gaseous exchange and fish migration at the scale of a Posidonia meadow (Hermand 2003). We revisit that dataset to describe the specific characteristics and variability of the early recorded /kwa/ sounds and make a comparison with our recent study carried out in France, about 500 km north of the Sicilian island and 15 years later. The sounds have been analysed both individually and all together, i.e., considering the entire choruses. Our comparative analysis based on a dictionary confirms a high homogeneity of the /kwa/’s, but with slight variations which are examined in detail.
Contributed Papers
2:20
4pAO4. Not all snaps are from shrimp: Broadband, impulsive sounds from different benthic organisms can provide information on animal presence and behavior. Joseph Warren, Kayla Hartigan, and Colin Wirth (Stony Brook Univ., 239 Montauk Hwy, Southampton, NY 11968, joe.warren@stonybrook.edu)
Snapping shrimp are a dominant source of sound in many marine environments; however, there are several other benthic organisms which produce broadband, impulsive sounds as part of their daily activity. We used a passive acoustic recorder with a video camera to identify and categorize the soniferous behaviors of several different species (fish and bivalves) in both laboratory and field environments. Bay scallops (Argopecten irradians) in shallow-water estuaries in New York “cough” when they move via jet propulsion as well as to clean their filter-feeding apparatus. Additionally, the substrate they are “scooting” on can alter the characteristics of their “cough.” Parrotfish (multiple species) “chomp” as they forage on seagrasses, and surgeonfish (Acanthuridae) “scrape” algae from hard substrates in Caribbean lagoons. We investigated whether these signals can be distinguished from other broadband sounds occurring in the marine environment and, if so, how longer-duration passive acoustic recordings can be used to monitor the abundance or behaviors of these animals.
2:40
4pAO5. Who’s making all that racket? Seasonal variability in kelp forest soundscapes. Jack Butler, Ed Parnell, and Ana Sirović (Scripps Inst. of Oceanogr., UCSD/SIO/0205, La Jolla, CA 92093-0205, j7butler@ucsd.edu)
Kelp forests off the coast of southern California harbor a myriad of soniferous organisms, creating diverse and complex soundscapes. However,
the temporal and spatial variability of kelp forest soundscapes remain poorly
documented. Here, the seasonal variability in the soundscapes from two
areas, one inside and the other outside a marine protected area off the coast
of La Jolla, CA, is described. The din of snapping shrimp snaps was constant in both areas and throughout the seasons, and many distinct, likely fish
calls in the 80 Hz–1000 Hz band were heard year-round. Most notably, a
late spring/early summer chorus of two putative fish calls (one with most
energy at ~90 Hz, the other ~380 Hz) dominated the dusk soundscape of
both areas. Many stressors (e.g., pollution run-off, anthropogenic noise, and
ocean warming) buffet and degrade the kelp forests of southern California.
Documenting how these soundscapes change as the kelp forest conditions
change might provide a way to rapidly assess the quality of an area, as well
as provide a way to measure the success of conservation techniques and recovery within protected areas.
3:00
4pAO6. The ecoacoustics of a kelp forest: Measurement and prediction
(Ecklonia radiata). Jean-Pierre Hermand, Jo Randall (LISA - Environ.
HydroAcoust. Lab, Université libre de Bruxelles, av. F.D. Roosevelt 50,
CP165/57, Brussels, Brussels Capital 1050, Belgium, jhermand@ulb.ac.be),
Jeff Ross, and Craig Johnson (Inst. for Marine and Antarctic Studies, Univ.
of Tasmania, Hobart, TAS, Australia)
Despite over a century of intensive research on kelp (brown macroalgae of the order Laminariales), very few acoustic studies have been undertaken, and much remains unknown about their species-specific acoustic properties, which are nevertheless essential for surveying and managing the respective habitats. In this paper, we discuss results of an in-situ acoustic and ecological investigation of the species Ecklonia radiata, which forms dense aggregations at Canoe Bay, Tasmania (FORTES 12). The experiment measured extensively the time-varying Green’s function (0.2–20 kHz) between pairs of distant sources and receivers, together with physical and biogeochemical variables of the ecosystem. Propagation and scattering features are
interpreted in the light of other results obtained by the authors including
ray-based estimation of the channel impulse response envelope, finite-element scattering model of a whole E. radiata thallus, and a study of production from diel oxygen modelling, oxygen exchange, and electron transport
rate in the kelp. Comparison will be made with similar experiments carried
out in Mediterranean seagrass meadows [Handbook of Scaling Methods in
Aquatic Ecology: Measurement, Analysis, Simulation, CRC Press, 2003].
[Work supported by ONR.]
3:20–3:40 Break
3:40
4pAO7. Predicting the cuescape from the reef soundscape and its role
in larval fish settlement. Andria K. Salas (Integrative Biology, The Univ.
of Texas at Austin, 205 W. 24th St. A6700, Austin, TX 78712, aksalas@
utexas.edu), Preston S. Wilson (Mech. Eng. and Appl. Res. Labs., The
Univ. of Texas at Austin, Austin, TX), Andrew H. Altieri (Smithsonian
Tropical Res. Inst., Panama City, Panama), Megan S. Ballard (Appl. Res.
Labs., The Univ. of Texas at Austin, Austin, TX), and Timothy H. Keitt
(Integrative Biology, The Univ. of Texas at Austin, Austin, TX)
The combined acoustic behavior of soniferous organisms living on
benthic habitats from coral to oyster reefs produces habitat-specific soundscapes. These soundscapes are predicted to have a role in the settlement
behavior of fish and invertebrate larvae searching for a benthic habitat on
which to settle at the conclusion of their pelagic phase. Given the frequency-specific sound detection abilities of these organisms, only a portion
of the soundscape will have the frequency and amplitude characteristics to
serve as potential navigational and habitat selection cues. We recorded the
soundscapes of four coral reefs in Caribbean Panama for six weeks and predicted the sounds most likely to be used as cues by larval fishes by using
knowledge of their hearing sensitivity. Next we used an individual-based
model to test the relationship between the temporal characteristics of the
acoustic cues and settlement success. To do this we created cue time-series
that represented the temporal variability observed at the four reef sites. We
found that even short-range, temporally variable cues produced at a low rate improved settlement success, suggesting these cues may improve the probability of survival under a broader range of conditions than has typically been considered.
4:00
4pAO8. Sound speed and attenuation in seagrass from the water column into the seabed. Kevin M. Lee, Megan S. Ballard, Andrew R.
McNeese (Appl. Res. Labs., The Univ. of Texas at Austin, 10000 Burnet
Rd., Austin, TX 78758, klee@arlut.utexas.edu), and Preston S. Wilson
(Dept. of Mech. Eng. and Appl. Res. Labs., The Univ. of Texas at Austin,
Austin, TX)
Biological processes and physical characteristics associated with seagrass can greatly affect acoustic propagation in coastal regions. An important acoustical effect is due to bubble production by the plants, which can
have significant impact on both object detection and bottom mapping sonars
by increasing clutter through reflection, absorption, and scattering of sound.
In addition to photosynthesis bubbles and gas-bearing leaf tissue in the
water column, the plant rhizomes also contain aerenchyma (gas-filled channels), which allow for diffusion of oxygen into the surrounding sediment.
To study these effects, in situ acoustic measurements were conducted in a bed of Thalassia testudinum in east Corpus Christi Bay, TX. Direct measurements of sound speed and attenuation were obtained in the water column above the seagrass canopy, inside the seagrass canopy, and at discrete depths within the sediment. A complementary set of measurements was obtained in a bare sediment region located a few meters away. In addition to
standard measurements of geoacoustic properties (sediment density, grain
size, etc.), biomass was also estimated from cores collected at each site. The
sediment beneath the seagrass bed had significantly lower wave speed and
higher attenuation compared to the bare sediment. [Work supported by
ARL:UT IR&D program and ONR.]
4:20
4pAO9. Anthropogenic noise generated by mobile vehicles used to survey rockfishes in the Channel Islands, California. Brijonnay Madrigal,
Alison K. Stimpert (Vertebrate Ecology Lab, Moss Landing Marine Labs, 8272 Moss Landing Rd., Moss Landing, CA 95039, bmadrigal@mlml.calstate.edu), Mary M. Yoklavich (Fisheries Ecology Div.,
Southwest Fisheries Sci. Ctr. National Marine Fisheries Service, Santa
Cruz, CA), and W. Waldo Wakefield (Fishery Resource Anal. and Monitoring Div. Northwest Fisheries Sci. Ctr., National Marine Fisheries Service,
Newport, OR)
Impacts of ambient noise in the ocean are a concern for fish populations,
as well as for other marine vertebrates. The influence of noise associated
with mobile equipment, such as autonomous vehicles and occupied submarines used to survey demersal rockfishes (genus Sebastes), has not previously
been quantified. Such noise likely occurs within the same low frequency
range as sound produced by soniferous rockfish species, whose calls have source levels ranging from 103–113 dB re 1 μPa. A digital acoustic monitoring (DMON) instrument was deployed in association with optical and ultrasonic surveillance cameras in October 2016 off the Channel Islands in
Southern California, to quantify mobile vehicle noise and to monitor
changes in rockfish behavior during these surveys. The DMON sampled
over 5 days for a total of 45.5 h. Analysis of a bandpass-filtered (100–500 Hz) data subset showed ambient noise levels of approximately 100 dB RMS re 1 μPa, with vehicle activities generating spikes in noise levels of 10–20 dB above baseline. These preliminary results indicate that the DMON captured a diversity of both abiotic and biotic sources of sound, and may indicate local masking of rockfish sounds by survey vehicle noise.
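The levels reported here follow the standard underwater convention of decibels re 1 μPa. As a minimal illustration of that convention (the pressure values below are made up, not the survey data):

```python
import math

def spl_db(p_rms_pa, p_ref=1e-6):
    """Sound pressure level in dB re 1 micropascal (underwater reference)."""
    return 20 * math.log10(p_rms_pa / p_ref)

# An RMS pressure of 0.1 Pa corresponds to 100 dB re 1 uPa; a spike of
# +10 to +20 dB above that baseline means a ~3.2x to 10x pressure increase.
baseline = spl_db(0.1)   # 100.0 dB
spike = spl_db(1.0)      # 120.0 dB, i.e. +20 dB above baseline
```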
4:40
4pAO10. Use of acoustic and stereo camera systems for assessing
demersal fish of Robinson Crusoe Island (Juan Fernández Archipelago, off central Chile). Billy Ernst, Pablo Rivara, Braulio Tapia (Oceanogr.,
Univ. of Concepcion, Barrio Universitario s/n, Concepcion, Bio Bio
410000, Chile, biernst@udec.cl), Stephane Gauthier (DFO, Inst. of Ocean
Sci., Sidney, BC, Canada), Francisco Santa Cruz (Oceanogr., Univ. of Concepcion, Concepcion, Bio Bio, Chile), and Esteban Molina (Instituto de
Fomento Pesquero, Valparaiso, Valparaiso, Chile)
The Juan Fernández Archipelago, constituted by Robinson Crusoe, Santa Clara, and Selkirk Islands, comprises peaks of a continuous submarine ridge extending in the east-west direction off the coast of central Chile. It supports a rich
endemic fish community, which maintains the livelihood of hundreds of fishermen and their families. In recent years there has been a growing interest in
assessing the demersal fish species composition, distribution, and abundance in
this system. To accomplish this task, traditional trawl surveys are inappropriate because of the complexity and fragile nature of the seafloor; therefore, we used
scientific echosounders combined with a stereo camera system. Two acoustic
surveys (2015 and 2016) were conducted using various configurations of transducers and cameras from ROV and tow-bodies. Results indicate a strong association of demersal fish to bottom reefs and species compositions varying with
depth. Using ex-situ target strength (TS) estimates combined with species composition and size data provided by stereo cameras, we estimated biomass of
Breca (morwong), the main commercial fish species. Because of the mixed nature of fish assemblages in the reefs, species composition remains an important
source of uncertainty.
5:00
4pAO11. A new multifrequency acoustic method for the discrimination of biotic components in pelagic ecosystems: Application in a high-diversity tropical ecosystem off Northeast Brazil. Gary Vargas, Flavia Fredou (Pós-Graduação em Recursos Pesqueiros e Aquicultura, Universidade Federal Rural de Pernambuco (UFRPE), Rua Dom Manoel de Medeiros, s/n, Dois Irmãos, Recife, Pernambuco 52171-900, Brazil, garyrvc@gmail.com), Gildas Roudaut, Anne Lebourges-Dhaussy, Jeremie Habasque, and Arnaud Bertrand (Institut de Recherche pour le Développement (IRD), Brest, France)
Underwater acoustics has an unrealized potential for multicomponent ecosystem characterization. Various methods are used for multifrequency classification. To improve scatterer discrimination, we propose a new method based on the distribution of scatterers on multifrequency spatial planes. Groups of scatterers are defined a priori (from in situ sampling). Then we apply a sensitivity analysis to maximize energy allocation within a given group when compared to the nearest one. This method was applied to data at four frequencies (38, 70, 120, and 200 kHz) collected in the coastal and oceanic ecosystems of Northeast Brazil, an area characterized by high biodiversity. By applying the method we discriminated six groups: fish-like; two types of high-resonant at 38 kHz, associated with gelatinous organisms; fluid-like, associated with crustacean macrozooplankton; high-resonant at 70 kHz, associated with algae; and a group of unclassified echoes. Results are coherent in terms of the distribution pattern of each group. Among others, our results reveal that gelatinous organisms are the dominant group close to oceanic islands, where they form a dense layer above the thermocline. These results open new perspectives to improve knowledge on the patterns of distribution and the interaction of a variety of functional groups in different ecosystems.
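A common baseline for this kind of multifrequency discrimination is to compare each echo's relative frequency response against group signatures. The sketch below classifies by nearest centroid; the group names and centroid values are invented for illustration, and this is not the sensitivity-analysis method of the abstract:

```python
FREQS = (38, 70, 120, 200)  # kHz, the four survey frequencies

# Hypothetical group signatures: mean volume backscatter (Sv, dB)
# expressed relative to the 38 kHz channel.
CENTROIDS = {
    "fish-like": (0.0, -1.0, -2.0, -3.0),
    "fluid-like": (0.0, 3.0, 6.0, 9.0),
}

def classify(sv_db):
    """Assign a 4-frequency Sv measurement (dB) to the nearest group
    signature after normalizing to the 38 kHz channel."""
    rel = [v - sv_db[0] for v in sv_db]
    return min(CENTROIDS,
               key=lambda g: sum((a - b) ** 2
                                 for a, b in zip(rel, CENTROIDS[g])))

flat_spectrum = classify((-60.0, -61.0, -62.5, -63.0))   # "fish-like"
rising_spectrum = classify((-70.0, -66.5, -64.0, -61.0)) # "fluid-like"
```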
WEDNESDAY AFTERNOON, 28 JUNE 2017
BALLROOM B, 1:15 P.M. TO 4:40 P.M.
Session 4pBAa
Biomedical Acoustics, Physical Acoustics, and Underwater Acoustics: Session in Honor of Edwin
Carstensen II
David T. Blackstock, Cochair
Applied Research Labs, University of Texas at Austin, PO Box 8029, Austin, TX 78713-8029
Gail ter Haar, Cochair
Physics Department, Institute of Cancer Research, Royal Marsden Hospital, Sutton SM2 5PT, United Kingdom
Chair’s Introduction—1:15
Invited Papers
1:20
4pBAa1. Ed Carstensen, advisor and mentor to the shockwave lithotripsy program project group. James McAteer, Andrew P.
Evan, James E. Lingeman, Lynn R. Willis, Philip M. Blomgren, James C. Williams, Rajash Handa, Bret A. Connors (Indiana Univ.
School of Medicine, 635 Barnhill Dr., MS-5065, Indianapolis, IN 46202, jmcateer@iupui.edu), Lawrence Crum, Michael Bailey, Tom
Matula (Ctr. for Industrial and Medical Ultrasound, Appl. Phys. Lab., Univ. of Washington, Seattle, WA), Vera A. Khokhlova, Oleg A.
Sapozhnikov (Phys., Moscow State Univ., Seattle, Washington), Robin Cleveland (Inst. of Biomedical Eng., Univ. of Oxford, Oxford,
United Kingdom), Tim Colonius (Eng. and Appl. Sci., California Inst. of Technol., Pasadena, CA), and Yuri A. Pishchalnikov (Burst
Labs., Grass Valley, CA)
In the 1980s shockwave lithotripsy emerged as a revolutionary advancement for the treatment of kidney stones. Initial studies with
patients showed SWL to be highly effective. The technology was elegant, outcomes exceptionally positive and early tests suggested
treatment was safe. As experience with SWL grew, limitations surfaced. A key finding was that SWs have the potential to induce significant trauma to the kidney. Our group convinced the NIH it was time to conduct a rigorous assessment to characterize the adverse effects
of SWL and determine the mechanisms of SW action in stone breakage and tissue injury. The NIH Program Project Grant mechanism
mandated we establish a panel of external advisors to help guide our work. We needed expertise in physical acoustics, cavitation and
animal models of ultrasound exposure. We wanted a leading expert. We were extremely fortunate to land Ed Carstensen. Ed worked
with us for nearly 15 years, well into our third renewal cycle. He was a brilliant scientist, a man dedicated to the highest standards of conduct in research. Ed taught us a great deal; he inspired by example and had an exceptional influence on our work and on the greater
field of lithotripsy research. [Work supported by NIH-DK43881.]
1:40
4pBAa2. Innovative strategies for improved outcomes in nephrolithiasis. Michael Bailey (Ctr. for Industrial and Medical Ultrasound, Appl. Phys. Lab, Univ. of Washington, 1013 NE 40th St., Seattle, WA 98105, mike.bailey.apl@gmail.com), Julianna C. Simon
(Ctr. for Industrial and Medical Ultrasound, Appl. Phys. Lab, Univ. of Washington, University Park, Pennsylvania), Wayne Kreider,
Barbrina Dunmire, Lawrence Crum (Ctr. for Industrial and Medical Ultrasound, Appl. Phys. Lab, Univ. of Washington, Seattle, WA),
Adam D. Maxwell (Dept. of Urology, Univ. of Washington, Seattle, WA), Vera Khokhlova, Oleg A. Sapozhnikov (Phys. Faculty, Moscow State University, APL, Univ. of Washington, Seattle, WA), Robin Cleveland (Inst. of Biomedical Eng., Univ. of Oxford, Oxford,
United Kingdom), Tim Colonius (Dept. of Mech. Eng., Caltech, Pasadena, CA), James E. Lingeman (Dept. of Urology, Indiana Univ.
School of Medicine, Indianapolis, IN), James McAteer, James C. Williams (Dept. of Anatomy and Cell Biology, Indiana Univ. School
of Medicine, Indianapolis, IN), and Jonathan Freund (Mech. Sci. and Eng., Aerosp. Eng., Univ. of Ilinois, Urbana, IL)
Edwin Carstensen, Ph.D., was an advisor of NIH NIDDK Program Project Grant DK043881, created to investigate shock wave lithotripsy (SWL). We now develop solutions to improve all aspects of the management of stone disease. Our goal in this paper is to report
progress built on Dr. Carstensen’s advice and inspiration. The work ranges from numerical simulation to clinical trials and from device
development to bioeffects and metrology. Much of our work involves bubbles and cavitation. This work has contributed to the body of
knowledge defining limits for the safe use of ultrasound which Dr. Carstensen worked hard to establish. Specifically, an update will be
given on the development of ultrasound to image, fragment, trap, and reposition stones. In particular, we demonstrated that bubbles contribute to the twinkling artifact used by NASA and others to image stones, and we drew on Dr. Carstensen’s paper [UMB, 19(2), 147-165, 1993] to demonstrate that breathing the elevated carbon dioxide levels present in NASA vehicles suppresses this signal, making stone imaging more difficult. We have since developed imaging countermeasures, as well as pushing and breaking technologies that appear less dependent on cavitation than SWL. [Work supported by NIH P01DK043881, K01DK104854, R01EB007643, and NSBRI through NASA
NCC 9-58.]
2:00
4pBAa3. Biliary lithotripsy and what we learned from Carstensen. E. Carr Everbach (Eng., Swarthmore College, 500 College Ave.,
Swarthmore, PA 19081, ceverba1@swarthmore.edu)
In the early 1990s, kidney-stone lithotripsy was a new, burgeoning field, and applications to gallstone destruction were being investigated. I came as the 1989-90 Hunt Fellow to work on the mechanisms of lithotripsy and related areas with Edwin Carstensen and his collaborators at the University of Rochester. The composition and liquid environment of gallstones are very different from those of kidney stones, however, and successful fragmentation of gallstones was rare. Comparing a Diasonics piezoelectric lithotripter and a Dornier HM-3 lithotripter provided insights, as did studies of the mechanical and acoustical properties of gallstones. More generally, biliary lithotripsy led to fundamental studies of mechanisms of fragmentation and the interaction of acoustic shock waves outside and inside stones.
While eventually the success of laparoscopic surgery spelled the end of biliary lithotripsy as a competing technology, Ed Carstensen and
his collaborators added knowledge that later paid dividends in related fields of biomedical acoustics, such as atherosclerotic plaque removal, sonothrombolysis, and cavitation-based imaging and therapy.
2:20
4pBAa4. Edwin Carstensen’s contributions to early research at the Navy’s Underwater Sound Reference Laboratory. David A.
Brown (Elec. Eng., Univ. of Massachusetts Dartmouth, 151 Martine St., Fall River, MA 02723, dbAcoustics@cox.net)
4p WED. PM
Edwin Carstensen enrolled at the Case School of Applied Science to study acoustics under Dayton Miller but upon Miller’s passing
in 1941, he took up study under Robert Shankland. The influence of World War II reached deeply into academia in those years and Carstensen’s advisor Shankland became the Director of Underwater Acoustic Metrology at Columbia University—Division of War
Research (DWR). Thus Carstensen was first introduced to practical problems in underwater acoustics of critical importance to the Navy
as a graduate student and employee (of DWR). At that time, the Underwater Acoustic Metrology center was headquartered in New York
City at the Empire State building but established a remote calibration and research facility near Orlando, Florida, which subsequently
became the Navy’s Underwater Sound Reference Laboratory (USRL). Carstensen began his life-long study of propagation of sound in
bubbly media and the effects of cavitation. He also worked on diffraction and calibration of hydrophones by self-reciprocity, the subject
of his Master's thesis, which was delayed until 1947 due to his war-effort service. This paper summarizes some of his early work, publications, and collaborations in underwater acoustics.
2:40–3:00 Break
3:00
4pBAa5. Low frequency sound scattering from a submerged bubble cloud: The Seneca Lake experiment. Ronald Roy (Eng. Sci.,
Univ. of Oxford, Parks Rd., Oxford OX1 3PJ, United Kingdom, ronald.roy@hmc.ox.ac.uk)
Acoustic backscatter from the sea surface is governed by the roughness of the surface and subsurface microbubble distributions. At
low frequencies, scattering results primarily from coherent and/or collective scatter from bubbles entrained by the subsurface vorticity
or carried to depth by Langmuir circulation and thermal convection. In 1947, Carstensen and Foldy published seminal work on sound
scattering and attenuation from bubble screens (JASA 19, 481-501, 1947) in which they employed an effective medium approximation
to model the problem. Drawing insights from this work, W. Carey postulated that resonance scattering from submerged bubble clouds
can be described by a Minnaert formula modified to account for the enhanced compressibility of the mixture. In 1990, experiments to
test this concept were conducted at the US Navy Sonar Test Facility at Seneca Lake, New York (JASA 92, 2993-2996, 1992). Measurements of frequency-dependent backscatter from a submerged bubble cloud proved consistent with model predictions based on independent measurements of bubble size distribution, volume fraction and cloud size. This paper presents a brief overview of the relevant
theory, a description of the lake experiment, and representative results, which serve to further affirm the validity of the effective medium
approximation employed in Carstensen and Foldy’s work.
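For reference, the resonance scaling invoked here can be made explicit. The single-bubble relation is the standard Minnaert formula, and Carey's modification replaces the gas stiffness seen by the cloud with the much softer effective compressibility of the bubbly mixture; the cloud-scale form below is a schematic sketch of that idea from the standard literature, not an equation quoted from the talk:

```latex
% Minnaert resonance of a single bubble of radius a, for ambient pressure p_0,
% liquid density \rho, and gas ratio of specific heats \gamma:
f_0 = \frac{1}{2\pi a}\sqrt{\frac{3\gamma p_0}{\rho}}
% Carey's collective-resonance picture: a cloud of radius a_c behaves like one
% large, compliant "effective bubble," its resonance set by the greatly reduced
% sound speed c_m of the air-water mixture:
f_{\text{cloud}} \sim \frac{c_m}{2\pi a_c}
```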
3867
J. Acoust. Soc. Am., Vol. 141, No. 5, Pt. 2, May 2017
Acoustics ’17 Boston
3867
3:20
4pBAa6. The acoustic characteristics of bubbly liquids near the individual bubble resonance frequency. Preston S. Wilson (Mech.
Eng. Dept. and Appl. Res. Labs., The Univ. of Texas at Austin, 1 University Station, C2200, Austin, TX 78712-0292, pswilson@mail.
utexas.edu) and Ronald Roy (Eng. Sci., Univ. of Oxford, Oxford, United Kingdom)
In the biomedical, naval, and oceanographic arenas, the acoustic characteristics of bubbly liquids have simultaneously confounded and
been exploited by acoustic tools used in the respective fields. The work of Carstensen and Foldy [J. Acoust. Soc. Am. 19, 481-501
(1947)] served as a cornerstone of the post-World War II understanding of this system, but at high bubble concentrations near the individual bubble resonance frequency (IBRF), the quantitative prediction and measurement of bubbly liquid sound speed and attenuation
remains an open topic. In 1989, Commander and Prosperetti [J. Acoust. Soc. Am. 85, 732-746 (1989)] advanced the field with a rigorous
model valid at the IBRF but it was limited to low void fractions. The data available in 1989 did not corroborate their model at IBRF, but
at the time, it was unclear whether this was due to model or experimental deficiencies. This paper reviews an impedance tube measurement [J. Acoust. Soc. Am. 117, 1895-1910 (2005)] that did validate the model for bubble volume fractions up to about 10⁻⁴, and also reviews more recent work, associated with an industrial underwater noise mitigation system, that validates their model up to void fractions exceeding 10⁻², significantly higher than expected.
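The low-frequency effective-medium limit underlying both models above is commonly written via Wood's equation, reproduced here from the standard literature (not from the abstract) as context for why even very small void fractions matter:

```latex
% Wood's equation: sound speed c_m of a liquid/gas mixture with void fraction \beta.
% Subscripts l and g denote the liquid and gas phases.
\frac{1}{\rho_m c_m^{2}} = \frac{1-\beta}{\rho_l c_l^{2}} + \frac{\beta}{\rho_g c_g^{2}},
\qquad \rho_m = (1-\beta)\,\rho_l + \beta\,\rho_g
```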
Contributed Papers
3:40
4pBAa7. Ultrasound in air: Today’s guidelines have an insufficiently
solid basis for today’s exposures. Tim Leighton (Inst. of Sound and Vib.
Res., Univ. of Southampton, Highfield, Southampton, Hampshire SO17
1BJ, United Kingdom, tgl@soton.ac.uk)
Today we see a proliferation of technology that purports to use the low
ultrasonic frequency range (~20-40 kHz) in air for a wide variety of purposes: pest deterrents; through-air electrical charging; haptic feedback;
acoustic spotlights; etc. These exposures are in addition to the inadvertent
exposures of humans to ultrasound in air from cleaning baths, dental treatments etc. that have occurred for decades. New forms of exposure might
possibly occur in future as more technology is introduced into homes, workplaces and classrooms. Whilst the vast majority of humans have not reported
ill effects from this, some have, although there have not been the resources
for widespread testing of the validity of these claims. However, the dozens of national and international guidelines for such exposures are not currently adequate for the task of offering guidance for public exposures: they are based on a very sparse dataset (of observations of primarily adult males), and all but one are for occupational exposure, and so cannot cover the unmonitored exposures of, say, infants taken into public locations by adults; such groups potentially have different susceptibilities to possible adverse effects.
4:00
4pBAa8. Exposure measurements for ultrasound in air. Craig N. Dolder,
Sarah R. Dennison, Michael Symmonds (Inst. of Sound and Vib. Res.,
Univ. of Southampton, Highfield Campus, Southampton SO17 1BJ, United
Kingdom, C.N.Dolder@soton.ac.uk), and Tim Leighton (Inst. of Sound and
Vib. Res., Univ. of Southampton, Southampton, Hampshire, United
Kingdom)
Every day we are exposed to a world of sounds that we do not hear. Many of these sounds lie at the edge of our hearing range, and some are even audible to a small part of the population. This presentation relates experimental measurements of sound exposures in the high-frequency regime that we are not intended to hear. There has been recent speculation about whether the regulations in the near-audible (or audible for a small population) regime are sufficient. Whether it is annoyance or some other mechanism that affects some part of the population is unknown, but these data present an idea of what exposure we receive at the edge of our hearing range.
4:20
4pBAa9. Ultrasonic activated stream cleaning of a range of materials.
Tim Leighton (Inst. of Sound and Vib. Res., Univ. of Southampton, Highfield, Southampton, Hampshire SO17 1BJ, United Kingdom, tgl@soton.ac.
uk), Thomas Secker (Ctr. for Biological Sci., Univ. of Southampton, Southampton, Hampshire, United Kingdom), Craig N. Dolder, Mengyang Zhu
(Inst. of Sound and Vib. Res., Univ. of Southampton, Southampton, United
Kingdom), David Voegeli (Faculty of Health Sci., Univ. of Southampton,
Southampton, United Kingdom), and William Keevil (Ctr. for Biological
Sci., Univ. of Southampton, Southampton, United Kingdom)
Despite decades of routine use (starting from the industrial setting but
now also with domestic products available), ultrasonic cleaning faces technical challenges that have never been overcome, and the root of many of these lies in understanding the interaction between the bubble population and the sound field. Ultrasonically Activated Stream (UAS) technology is designed to produce ultrasonic cleaning, and in this paper it does so
for scenarios for which an ultrasonic cleaning bath would be unsuitable,
e.g., removing key contaminants (such as biofilms) from delicate substrates
(tissues, etc.), without damaging that substrate.
WEDNESDAY AFTERNOON, 28 JUNE 2017
BALLROOM C, 1:20 P.M. TO 4:20 P.M.
Session 4pBAb
Biomedical Acoustics: Biomedical Acoustics Best Student Paper Competition (Poster Session)
The ASA Technical Committee on Biomedical Acoustics offers a Best Student Paper Award to eligible students who are presenting at
the meeting. Each student must defend a poster of her or his work during the student poster session. This defense will be evaluated by a
group of judges from the Technical Committee on Biomedical Acoustics. Additionally, each student will give an oral presentation in a
regular/special session. Up to three awards will be presented to the students with USD $500 for first prize, USD $300 for second prize,
and USD $200 for third prize. The award winners will be announced at the meeting of the Biomedical Acoustics Technical Committee.
Below is a list of students competing, with abstract numbers and titles. Full abstracts can be found in the oral sessions associated
with the abstract numbers.
All posters will be on display and all authors will be at their posters from 1:20 p.m. to 4:20 p.m.
1aBAa5. Passive acoustic mapping in aberrating media with the angular spectrum approach
Student author: Scott Schoen
1aBAb2. Optimizing gold nanorod volume for minimum cell toxicity and maximum photoacoustic response
Student author: Oscar Knights
1aBAb3. Ultrasound-mediated blood-brain barrier disruption: Correlation with acoustic emissions
Student author: Miles Aron
1aBAb5. Focused ultrasound for augmenting convection-enhanced delivery of nanoparticles in the brain
Student author: Ali Mohammadabadi
1pBAa1. Transcranial acoustic imaging for real-time control of ultrasound-mediated blood-brain barrier opening using a
clinical-scale prototype system
Student author: Ryan Jones
1pBAa2. Toward transcranial focused ultrasound treatment planning: A technique for reduction of outer skull and skull
base heating in transcranial focused ultrasound
Student author: Alec Hughes
1pBAa4. Optimizing passive cavitation mapping by refined minimum variance-based beamforming method: Performance
evaluations in Macaque models
Student author: Tao Sun
1pBAa9. Passive acoustic mapping of extravasation for vascular permeability assessment
Student author: Catherine Paverd
1pBAb1. Ex vivo testing of basal cell carcinomas and melanomas with high-frequency ultrasound
Student author: Christine Dalton
1pBAb4. The frequency-dependent effects of low-intensity ultrasound exposure on human colon carcinoma cells
Student author: Chloe Verducci
1pBAb5. Simulating Fibrin clot mechanics using finite element methods
Student author: Brandon Chung Yeung
1pBAb6. Numerical investigation of the subharmonic response of a cloud of interacting microbubbles
Student author: Hossein Haghi
1pBAb8. Toward the accurate characterization of the shell parameters of microbubbles based on attenuation and sound
speed measurements
Student author: Amin Jafari Sojahrood
2aBAa7. Focusing ultrasound through the skull for neuromodulation
Student author: Joseph Blackmore
2aBAa8. Imaging cortical bone using the level-set method to regularize travel-time and full waveform tomography
techniques
Student author: Jonathan Fincke
2aBAb2. Real-time feedback control of high-intensity focused ultrasound thermal ablation using echo decorrelation imaging
Student author: Mohamed Abbass
2aBAb4. Real-time acoustic-based feedback for histotripsy therapy
Student author: Jonathan Macoskey
2aBAb5. Characterization of cavitation-radiated acoustic power using single-element detectors
Student author: Kyle Rich
2pBA1. Fast compressive pulse-echo ultrasound imaging using random incident sound fields
Student author: Martin Schiffner
3aBAa6. Passive acoustic mapping of cavitation during shock wave lithotripsy
Student author: Kya Shoar
3aBAa7. Interaction between lithotripsy-induced surface acoustic waves and pre-existing cracks
Student author: Ying Zhang
3pBAa2. A multi-element HIFU array system: Characterization and testing for manipulation of kidney stones
Student author: Mohamed Ghanem
3pBAa4. Investigation of stone damage patterns and mechanisms in nano pulse lithotripsy
Student author: Chen Yang
3pBAb2. Shear waves in pressurized poroelastic media
Student author: Navid Nazari
3pBAb3. Magnetoelastic waves in a soft electrically conducting solid in a strong magnetic field
Student author: Daniel Gendin
3pBAb4. Shear elasticity and shear viscosity imaging in viscoelastic phantoms
Student author: Yiqun Yang
5aBAa2. Ultrafast ultrasound localization microscopy
Student author: Claudia Errico
5aBAa5. The dynamic of contrast agent near a wall under the excitation of ultrasound wave
Student author: Nima Mobadersany
5aBAa8. In vitro acoustic characterization of echogenic polymersomes with PLA-PEG and PLLA-PEG shells
Student author: Lang Xia
5aBAa9. Acoustic vaporization threshold of lipid coated perfluoropentane droplets
Student author: Mitra Alibouzar
5aBAb7. Full 3D dynamic functional ultrasound imaging of neuronal activity in mice
Student author: Claire Rabut
5aBAb9. Model-based ultrasound attenuation estimation
Student author: Natalia Ilyina
5aBAa10. Study of acoustic droplet vaporization using classical nucleation theory
Student author: Krishna Kumar
5pBAb6. Enhanced delivery of a density-modified therapeutic using ultrasound: Comparing the influence of micro- and
nano-scale cavitation nuclei
Student author: Harriet Lea-Banks
5pBAb8. Integration of focused ultrasound with nanomedicine for brain drug delivery
Student author: Dezhuang Ye
5pBAb9. Comparative lytic efficacy of rt-PA and intermittent ultrasound in porcine versus human clots
Student author: Shenwen Huang
5pBAb10. Sonobactericide: An ultrasound-mediated adjunct treatment for bacterial infective endocarditis—In vitro proof-of-principle
Student author: Kirby Lattwein
5pBAb11. The use of a novel microfluidics system for in vitro ultrasound-mediated drug delivery
Student author: Ines Beekers
5pBAc4. Ultrasound exposure during collagen polymerization produces pro-migratory fiber structures
Student author: Emma Grygotis
5pBAc6. Toward using acoustic waves as a therapeutic tool for osteogenic differentiation
Student author: Lucas Shearer
5pBAc8. Noninvasive and localized acoustic micropumping—An in vitro study of an ultrasound method that enhances drug
distribution through a physiologically-relevant material
Student author: Ahmed Elghamrawy
5pBAc11. Simulation of phased array ultrasound propagation for fluid flow regulation in enhancement of bone adaptation
Student author: Eashan Saikia
WEDNESDAY AFTERNOON, 28 JUNE 2017
ROOM 205, 1:20 P.M. TO 4:00 P.M.
Session 4pEA
Engineering Acoustics: Micro-Perforates II
J. S. Bolton, Cochair
Ray W. Herrick Laboratories, School of Mechanical Engineering, Purdue University, Ray W. Herrick Laboratories,
177 S. Russell St., West Lafayette, IN 47907-2099
Mats Åbom, Cochair
The Marcus Wallenberg Laboratory, KTH-The Royal Inst of Technology, Teknikringen 8, Stockholm 10044, Sweden
Invited Papers
1:20
4pEA1. Sound attenuation in a curved duct using a microperforated panel. Cheng Yang (Mech. and Aerosp. Eng., Hong Kong
Univ. of Sci. and Technol., Clear Water Bay, Hung Hom, Hong Kong n/a, Hong Kong, chengyang@ust.hk)
A microperforated panel (MPP) sound absorption structure consists of a microperforated panel with a backing air cavity. The coupling between the two components allows efficient sound absorption around the resonance frequencies of the coupled system. In this paper, an attempt is made to achieve sound attenuation using an MPP without the commonly used backing air cavity. The underlying concept is to generate air motion inside the perforates, driven by the pressure difference between the two faces of the MPP, so that the acoustic energy can be dissipated. The acoustic environment in which such a pressure difference occurs is found in a curved duct, where the axial wavenumber of the duct field depends on the radius of the curve. An MPP inserted into the duct results in a pressure difference between the two sub-curved duct domains, enabling the vibration of the air mass inside the perforates. Results show that such a treatment can achieve broadband sound attenuation even below the first cut-off frequency of the duct.
1:40
4pEA2. The application of microperforated panel in duct systems. Seungkyu Lee, Thomas P. Hanschen (3M Co., 3M Ctr., Bldg.
280-03-W-33, St. Paul, MN 55144-1000, sklee@mmm.com), and J. S. Bolton (Ray W. Herrick Labs., School of Mech. Eng., Purdue
Univ., West Lafayette, IN)
Microperforated panels (MPPs) are usually considered to be an alternative sound absorbing surface treatment in architectural acoustics applications, where they can serve as effective, fiber-free replacements for the more traditional glass fibers and other porous materials. However, MPPs are not limited to replacing conventional sound absorbing materials; they can also be used in spaces where more conventional materials cannot be conveniently used. For example, heating,
ventilation, and air conditioning (HVAC) duct application is one area where MPPs can be used as a stand-alone material, in contrast
with glass fiber, which typically must be covered by perforated surface treatments to prevent erosion of the fibers due to the flow. In this
study, it will be shown how a suitably designed MPP can be an effective noise control element in ducts by reducing fan and flow noise
while at the same time maintaining the flow delivery performance of the duct.
2:00
4pEA3. Lightweight absorption and barrier systems comprising N-layer microperforates. Nicholas N. Kim and J. S. Bolton (Ray
W. Herrick Labs., School of Mech. Eng., Purdue Univ., Ray W. Herrick Labs., 177 S. Russell St., West Lafayette, IN 47907-2099, bolton@purdue.edu)
Since the concept of microperforated panels (MPPs) was introduced by Maa, there have been continuing efforts to apply MPPs, primarily as fiber-free sound absorbing materials, typically wall-mounted. The objective of the present work was to demonstrate that multilayer MPPs can also be effective functional absorbers and lightweight barrier systems. The acoustical properties of lightweight MPPs
depend on hole diameter, thickness, porosity, mass per unit area, and air cavity depth. In the case of a single layer, it is possible to find a
combination of these parameters that results in good performance over one or two octaves. However, to be effective for noise control
over a broader range of frequencies, it is necessary to design multi-layer MPPs. Thus here the focus was on the optimal design of multilayer MPPs in the speech interference range, 500 to 4000 Hz. In the case of functional absorbers, the total absorption of the system was
optimized, while in the case of barriers, a high transmission loss was desired, without necessarily sacrificing the absorption of the system. In the latter case, in particular, it was possible to create systems having transmission losses well in excess of the mass law over a
broad range of frequencies.
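The design parameters enumerated above (hole diameter d, panel thickness t, porosity σ, cavity depth D) enter through Maa's impedance model for a single MPP layer; the widely quoted form below is reproduced from the standard literature as orientation, not taken from this abstract:

```latex
% Maa's normalized impedance of an MPP in air (density \rho, sound speed c,
% dynamic viscosity \eta), with perforate constant k = d\sqrt{\omega\rho/(4\eta)}:
z_{\text{MPP}} = \frac{32\eta t}{\sigma\rho c d^{2}}
  \left[\sqrt{1+\frac{k^{2}}{32}} + \frac{\sqrt{2}}{32}\,\frac{k d}{t}\right]
  + \mathrm{j}\,\frac{\omega t}{\sigma c}
  \left[1 + \left(9+\frac{k^{2}}{2}\right)^{-1/2} + 0.85\,\frac{d}{t}\right]
% A rigidly backed cavity of depth D contributes -\mathrm{j}\cot(\omega D/c);
% an N-layer design cascades such layer and cavity impedances.
```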
2:20
4pEA4. Techniques to improve the low frequency and broadband performance of microperforated panel absorbers. David Herrin, Weiyun Liu (Dept. of Mech. Eng., Univ. of Kentucky, 151 Ralph G. Anderson Bldg., Lexington, KY 40506-0503, dherrin@engr.
uky.edu), and Emanuele Bianchini (American Acoust. Products, Holliston, MA)
Microperforated panel absorbers can be considered as the combination of the perforated panel and backing airspace. It has been demonstrated that partitioning the airspace improves performance. Additional performance improvements can be obtained by varying the
depth of the backing cavity, adding resonators behind the panel, or using double leaf panels. Each of these configurations is first considered for a single cell backing using plane wave simulation. Comparisons are made to impedance tube measurements with acceptable
agreement. Following this, the diffuse field sound absorption for each configuration was measured in a small reverberation room. Each
test sample consisted of a 0.6 m x 0.4 m panel with a backing cavity partitioned into 24 equal size cells. For the double leaf configuration, the cavity between panels was honeycomb partitioned but the backing cavity was unpartitioned.
2:40
4pEA5. Practical applications of transparent and opaque microperforated and microslit absorbers. Peter D’Antonio and Jeffrey
Madison (RPG Acoust. Systems LLC, 99 South St., Passaic, NJ 07055, pdantonio@rpgacoustic.com)
architectural acoustics. Currently, the architectural acoustic community faces ever changing challenges that require innovative
approaches to absorb sound. Due to the importance of day lighting in sustainable design certification, such as LEED, and the concern
about particulate material from fiberglass and mineral wool, there has developed a need for transparent and translucent absorptive materials that do not require the use of fibrous porous absorption. One approach has been the development of microperforated and microslit
foils and panels with sub-millimeter openings that provide significant viscous losses when spaced from a boundary. In addition, opaque
FSC certified, AWI Premium Grade, Class A fire rated microperforated veneered wooden panels have also been enthusiastically
accepted by architects, since the microperforations are relatively invisible at normal viewing distances. The theoretical absorption mechanism of microperforated and microslit surfaces is well documented in the literature and comparisons between calculated and measured
absorption coefficients will be illustrated. This presentation will also review practical applications of various transparent foil and plastic
panel materials, as well as microperforated veneered wooden panels.
3:00
4pEA6. A double-layer acoustic absorber as potential substitute for traditional micro-perforated elements. Fabio Auriemma
(Dept. of Mech. and Industrial Eng., Tallinn Univ. of Technol., Ehitajate tee, 5, Tallinn 19086, Estonia, fabio.auriemma@ttu.ee)
A double-layer fibre-less absorber is presented in this paper as a possible substitute for the Micro-Perforated Element (MPE), at least in room acoustics applications. This absorber is called a Micro-Grooved Element (MGE), since grooves are present on one of the contact surfaces of the two mating layers. These grooves generate a number of micro-channels when the element is assembled. Despite the necessity of having two layers, it is possible to retain a certain control of the weight because the layers are provided with wide slits. The MGE offers the advantage of requiring only a basic manufacturing process (typically milling) instead of the less cost-effective laser cutting used for MPEs. Substituting micro-perforation with engraving allows the generation of micro-apertures that are smaller in cross section and length than in the case of traditional MPEs, while it is still possible to preserve adequate porosity. In this work, the acoustic performance of MPEs and MGEs is tested and compared in terms of absorption coefficient and transfer impedance in both linear and non-linear regimes. In the case of the MGE, the presence of more minute apertures results in an increase of the absorption coefficient of up to 30%.
3:20
4pEA7. Absorption of thin micro-perforated partitions lined with anisotropic fibrous materials. Teresa Bravo (Consejo Superior
de Investigaciones Cientificas, Serrano 144, Madrid, Madrid 28006, Spain, teresa.bravo@csic.es), Cedric Maury (Lab. of Mech. and
Acoust. (LMA), Aix Marseille Univ, CNRS, Centrale Marseille, Marseille, France), and Carlos de la Colina (Consejo Superior de Investigaciones Cientificas, Madrid, Spain)
Recent trends in broadband noise reduction have considered the design of bio-inspired multi-layer sound absorbers. The layout usually comprises a suitable combination of thin micro-perforates, fibrous materials, and airspaces with a view to mimicking the remarkably low acoustic emissions of owl flight. One of the main features is the anisotropic texture of the highly porous material beneath the micro-perforate. This study examines how the absorption properties of rigidly backed micro-perforated panels (MPPs) are modified when lined with anisotropic fibrous materials with a specified inclination of the parallel fibers within the material thickness. The effects of the material constitutive parameters, such as the bulk density and the flow resistivities and structure factors along and normal to the fiber axis, on the MPP air-frame relative velocity are discussed. The model of propagation in the anisotropic medium accounts for energy losses and is compared against a multiple scattering approach in the case of fibers at grazing angle. The partition absorption properties are also studied with respect to the nature of the forcing field, either an acoustic plane wave or an additive aeroacoustic excitation in which both acoustic and turbulent components coexist.
3:40
4pEA8. Optically transparent sound absorber with micro-perforation. Christian Nocke (Akustikbuero Oldenburg, Sophienstr. 7,
Oldenburg 26121, Germany, nocke@akustikbuero-oldenburg.de)
More than 20 years after the first applications of micro-perforated sound absorbers in architectural acoustics, there is still a growing demand for optically transparent sound absorbers. Based on the theory of micro-perforated panel sound-absorbing constructions published by D.-Y. Maa in 1975, various materials have been used as micro-perforated sound absorbers. Fully transparent sound absorbers as well as printed and translucent materials allow a combination of acoustic and lighting design. 3D shapes used as lamps and other applications have become available. Measurements for different set-ups will be presented, and applications in various projects will be discussed. Metal, wood, polycarbonate plates, and foils as well as other sheets have been micro-perforated. In this contribution, a short review of the applications of these various materials as transparent micro-perforated sound absorbers will be presented.
WEDNESDAY AFTERNOON, 28 JUNE 2017
ROOM 200, 1:20 P.M. TO 5:40 P.M.
Session 4pMU
Musical Acoustics and Psychological and Physiological Acoustics: Musical Instrument Performance,
Perception, and Psychophysics II
Edgar J. Berdahl, Cochair
Music, Louisiana State University, 102 New Music Building, Baton Rouge, LA 70803
Peter Rucz, Cochair
Dept. of Networked Systems and Services, Budapest University of Technology and Economics, Magyar Tudósok körútja 2,
Budapest H-1117, Hungary
Thomas Moore, Cochair
Department of Physics, Rollins College, 1000 Holt Ave., Winter Park, FL 32789
Claudia Fritz, Cochair
Institut Jean le Rond d’Alembert, UPMC, 4 place Jussieu, Paris 75005, France
Contributed Papers
1:20
4pMU1. Minimum bow force measured with a high-precision bowing pendulum. Robert Mores (Design Media Information, Univ. of Appl. Sci. Hamburg, Finkenau 35, 104, Hamburg 22081, Germany, robert.mores@haw-hamburg.de)
The minimum bow force Fmin necessary to establish Helmholtz motion on stringed instruments was widely believed to be proportional to the bow velocity vB, to the reciprocal of the bridge resistance R, and to the reciprocal of the relative bow-bridge distance ß squared [J. Schelleng, J. Acoust. Soc. Am. 53, 26-41 (1973)]. More recently, a study reported independence from vB and an overproportional reciprocal effect of R while confirming the 1/ß² proportionality [E. Schoonderwaldt, Acta Acustica united with Acustica 94, 604-622 (2008)]. Here, a bowing pendulum is used which facilitates precise control and measurement of the related parameters. The string excitation at the contact point is recorded to instantly classify Helmholtz motion (HM) versus non-Helmholtz motion with one (nHM-1) or more slips during the stick phase. Monitoring of the classification supports the control of bowing parameters during measurement. These are directed towards the regions of transition between HM and nHM-1 to reveal the parameters related to Fmin. The empirical data gained from cello strings suggest that HM requires a baseline force Fmin,0 even for very low vB. Fmin,0 depends on R and ß. Fmin then grows proportionally with vB and with 1/ß rather than with 1/ß².
1:40
4pMU2. A platform for performance: Making the case for an improved cello podium. David J. Tagg (Audio Eng. and Sound Production, Indiana Univ. Jacobs School of Music, 555 Sherbrooke St., Montreal, PQ H2X 2A4, Canada, jamietagg@gmail.com)
Stringed instruments such as the cello have been evaluated extensively from structural and acoustical perspectives, but the riser on which a cello sits within the context of a concerto performance has long been overlooked. Because the podium is traditionally some variation of a hollow box, this riser or "podium" in some regard becomes part of the instrument and can play a productive or even counterproductive role in its acoustic projection. While a cellist's performance is understood to be enriched and enhanced by this piece of furniture, no one has completed a significant study examining the physics, dimensions, or capabilities of an ideally constructed cello podium. This first installment in a two-part study investigates the mechanical interaction between the musician, instrument, and supporting surface, measuring useful or harmful energy transmitted through the legs of the chair, the musician's feet, and most importantly the endpin of the instrument.
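The competing scalings discussed in the 4pMU1 abstract can be summarized compactly; the expressions below simply restate its verbal description (the constant c and the exact form of the offset term are illustrative, not taken from the paper):

```latex
% Classical Schelleng scaling of the minimum bow force with bow velocity v_B,
% bridge resistance R, and relative bow-bridge distance \beta:
F_{\min} \propto \frac{v_B}{R\,\beta^{2}}
% Empirical result reported here: a baseline force plus a weaker 1/\beta growth,
F_{\min} \approx F_{\min,0}(R,\beta) + c\,\frac{v_B}{\beta}
```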
Invited Papers
2:00
4pMU3. Guitarists’ evaluation and discrimination of steel-string acoustic guitars built with back/side woods of varying price,
prestige, and sustainability. Samuele Carcagno (Dept. of Psych., Lancaster Univ., Lancaster LA1 4YF, United Kingdom, s.carcagno@
lancaster.ac.uk), Roger Bucknall (Fylde Guitars, Penrith, United Kingdom), Jim Woodhouse (Dept. of Eng., Univ. of Cambridge, Cambridge, United Kingdom), Claudia Fritz (Lutheries-Acoustique-Musique, Institut Jean Le Rond d’Alembert, Universite Pierre et Marie
Curie, Paris, France), and Christopher J. Plack (Dept. of Psych., Lancaster Univ., Manchester, United Kingdom)
The different woods used for the back plates of acoustic guitars are often compared by guitarists for their sound qualities, but these
comparisons are rarely done under blinded conditions. For this experiment, six steel-string acoustic guitars were built to be as similar as
possible except for the woods used for the backs and sides. Bridge admittance measurements and spectral analyses of acoustic recordings
revealed small differences between the guitars in their low-frequency modes. Fifty-two experienced guitar players rated the guitars for
sound quality, playability, and other perceptual attributes while wearing welder’s goggles to prevent visual identification. The guitars
received on average similar ratings for sound quality and playability. A factor analysis showed that the other perceptual attributes clustered around the dimensions of “clarity,” “warmth,” and “loudness,” which were all positively related to perceived sound quality, and
did not differ significantly between the guitars. An ABX discrimination test performed by a subset of 31 guitarists showed that guitarists
could not easily distinguish the guitars by their sound. Overall, the results suggest that the woods used for the back plate of a steel-string
acoustic guitar have only a minimal influence on its perceived sound, despite varying greatly in price, prestige, and sustainability.
2:20
4pMU4. Investigating multimodal perception during the musical performance: The case of harpsichord voicing. Arthur Pate
(LAM/d’Alembert, UPMC, UMR CNRS 7190, MCC, PO box 100, 61 Rte. 9w, Palisades, New York 10964-1000, pate@ldeo.columbia.
edu), Arthur Givois, Jean-Loïc Le Carrou, Michèle Castellengo (LAM/d'Alembert, UPMC, UMR CNRS 7190, MCC, Paris, France), Sandie Le Conte, and Stéphane Vaiedelich (CRC, USR 3224 - ECR, Musée de la Musique, Cité de la Musique, Philharmonie de Paris,
Paris, France)
The plectrum/string interaction appears to be the main phase during which harpsichord players control the instrument's sound quality. This is why both harpsichord players and makers commit themselves to the "voicing process," a necessary step prior to performance, in which the plectra are shaped in order to provide the instrument with good sound quality and homogeneity over the whole tessitura. As part of a project aimed at understanding the makers' gestures, this paper will present the results of a free playing and verbalization task. This experimental protocol allows (a) the musicians to produce evaluations within an ecologically valid situation, and (b) the experimenters to access the strongly multimodal nature of these evaluations. Two sets of plectra were designed by two professional makers and given to eight professional musicians to play in a blind test. Each musician freely played both plectrum sets in succession while verbally expressing her/his impressions. The psycholinguistic analysis led to a perceptual description of each set of plectra; the sets were found to differ in the perceived hardness of the plectra, the perceived loudness and resonance of the produced tones, as well as the judgment of the sound quality homogeneity and its evolution over the tessitura.
2:40
4pMU5. Piano tone control through variation of "weight" applied on the keys. Caroline Traube (Faculty of Music, Univ. of Montreal, Université de Montréal, Faculté de musique, C.P. 6128, succursale Centre-ville, Montreal, QC H3C 3J7, Canada, caroline.traube@umontreal.ca)
To control the tone of their instrument, piano teachers at the University of Montreal recommend acting on the double escapement action by modifying the "weight" applied to the keys. When the pianist uses more weight, the key is pressed to the bottom of the keyboard and the pianist feels a bump when passing the escapement threshold. When using less "weight," they play more at the surface of the keyboard. The present study aims to verify whether this variation of weight has an impact on the piano tone. Two series of recordings were analyzed. In the first series, pianists played a short musical phrase while varying several control parameters (with/without weight, with/without pedal) at several intensity levels. In the second series, isolated notes were played under the same conditions. Simultaneously with the recording of the piano tones, a video image of the double escapement grand piano action was captured with a camera placed inside the piano. The analysis of the data shows that piano tones produced with and without weight differ along several acoustical descriptors (temporal and spectral features). The main parameters that are modified relate to the quality of the attack.
Contributed Papers
3:00
4pMU6. Playability in flute-like instruments: Investigating the relation between flute making and instrumental control. Patricio de la Cuadra (Chaire thématique Sorbonne Universités, Pontificia Universidad Católica, Jaime Guzmán Errázuriz 3300, Providencia, Santiago 07866, Chile, pcuadra@uc.cl), Augustin Ernoult (LAM, Institut Jean le Rond d'Alembert, Université Pierre et Marie Curie, Paris, France), Cassandre Balosso-Bardin (Chaire thématique Sorbonne Universités, Univ. of Lincoln, Lincoln, United Kingdom), and Benoît Fabre (LAM, Institut Jean le Rond d'Alembert, Université Pierre et Marie Curie, Paris, France)
The musician’s ability to modify the pitch of his instrument is a common
feature for most instruments from the flute family. Musicians can alter a
given note by adjusting the airjet velocity or by changing the resonator impedance through the modification of the embouchure (opening or closing
the open end of the resonator where the embouchure is). A trained musician
can adapt to a wide variety of instruments and even correct the pitch of
poorly built instruments. Flute makers, on the other hand, propose a pitch
structure (diapason) by placing the tone holes, adapting their size and height
and by choosing the bore’s internal geometry. The choices made by the
maker take into account the musician’s evaluation, creating a loop that
through centuries of iterations has produced optimal instruments following
a range of cultural and technological constraints. Throughout this article,
the relationship between flute manufacture and the musician’s control will
be discussed and analyzed. Identifying, modeling and quantifying the possibilities the musician has to control the instrument as well as the parameters
the flute maker can modify to close this optimization loop, we propose a
methodology to determine the geometry of the instrument for a given control strategy and vice versa.
3:20–3:40 Break
3:40
4pMU7. On the efficiency of vocalization in humans and other vertebrates. Ingo R. Titze (National Ctr. for Voice and Speech, Univ. of Utah, 136 South Main St., Ste. 320, Salt Lake City, UT 84101-3306, ingo.titze@utah.edu)
Human vocalization is an inefficient process in terms of energy expended to produce acoustic output. A traditional measure of vocal efficiency, the ratio of acoustic power radiated from the mouth to aerodynamic power produced by the lungs, ranges between 0.001% and 0.1% in speech-like vocalization. Non-human vertebrates like birds cannot afford to operate with such low efficiency when calling or singing over long distances. A hypothesis is given here that humans have traded away efficiency to maximize phonetic contrast in speech. A Navier-Stokes solution of non-steady compressible airflow from trachea to lips was used to calculate steady aerodynamic power, acoustic power, and combined total power at seven strategic locations along the airway. At low (speech-like) frequencies, it is shown that little power is radiated from the mouth because acoustic wave reflection at the mouth is very high. Wall vibration and kinetic pressure losses consume on the order of 99.99% of the power produced. With higher frequency and greater mouth opening, the efficiency for calling, singing, or screaming can be increased by several orders of magnitude, approaching the efficiency of high-intensity animal calls.
4:00
4pMU8. Observation of the vocal tract configuration while playing a woodwind instrument. Tokihiko Kaburagi (Kyushu Univ., Shiobaru 4-9-1, Minami-ku, Fukuoka 815-8540, Japan, kabu@design.kyushu-u.ac.jp) and Yuri Fukuda (Kyushu Univ., Fukuoka City, Fukuoka Pref., Japan)
Skillful players of brass instruments and reed woodwinds are able to control the vocal tract effectively and perform with intended pitches and sound qualities. The player's vocal tract is, to a first approximation, connected acoustically in series with the instrument bore. If the magnitude of the vocal tract impedance is comparable with the bore impedance, the acoustic pressure in the mouth can influence the vibrating reed. However, reliable studies confirming the configuration of the vocal tract during playing are limited. In this study, a magnetic resonance imaging (MRI) device was used to scan the vocal tract in three dimensions. The participant was an amateur saxophone player, and the instrument consisted of a reed, a mouthpiece, and a short uniform tube. The image data revealed how the vocal tract was formed when she blew a number of different notes. The data were also used to determine the cross-sectional area and the input impedance of the vocal tract. The results were compared with trumpeter data taken and published previously, and also with data taken for the act of whistling. [This work was supported by JSPS KAKENHI Grant Number JP16K00242.]
4:20
4pMU9. Tonal characteristics of saxophone mouthpieces: A comparison. Charles Kinzer (Dept. of Music, Longwood Univ., 201 High St., Dept. of Music, Farmville, VA 23901, kinzerce@longwood.edu), Stanley A. Cheyne, and Walter C. McDermott (Phys. & Astronomy, Hampden-Sydney College, Hampden-Sydney, VA)
The tonal characteristics of five different types of mouthpieces for the tenor saxophone will be compared. The mouthpieces are all widely available and used by present-day musicians; some are designed and marketed for general use and others are intended for specific musical styles or performance situations. Frequency spectra will be presented for each mouthpiece, and results will be discussed relative to the design features of each. Analytical data are in turn compared to the results of a survey among listeners seeking descriptive terms for the tonal characteristics produced.
Invited Paper
4:40
4pMU10. Touchscreen based music instruments. M. Ercan Altinsoy (Chair of Acoust. and Haptic Eng., Technische Universitaet Dresden, Helmholtzstr. 18, Dresden 01062, Germany, ercan.altinsoy@tu-dresden.de)
The usage of touchscreen devices, such as smartphones, tablet computers, personal computers, and game consoles, is increasing globally. They offer users various gesture-based interaction possibilities rather than the limited mouse-based human-computer interaction. Accordingly, various apps have been developed that turn a touchscreen device into a musical instrument. In this study, user experience interviews were conducted to evaluate the suitability of touchscreen devices for musical experience. Besides ease and joy, the attractiveness, simplicity, stimulation, pleasantness, and practicality are important parameters. Therefore, to evaluate the pragmatic and hedonic qualities, the AttrakDiff semantic differential was used.
5:00
4pMU11. Perception of a virtual violin radiation in a wave field synthesis system. Leonie Böhlke and Tim Ziemer (Inst. of Systematic Musicology, Univ. of Hamburg, Neue Rabenstr. 13, Hamburg 20354, Germany, leonieboehlke@gmx.de)
A method to synthesize the sound radiation characteristics of musical
instruments in a wave front synthesis system is proposed and tested. Radiation patterns of a violin are measured with a circular microphone array
which consists of 128 pressure receivers. For each critical frequency band
one exemplary radiation pattern is decomposed to circular harmonics of
order 0 to 64. So the radiation characteristic of the violin is represented by
25 complex radiation patterns. On the reproduction side, these circular harmonics are approximated by 128 densely spaced monopoles by means of
128 broadband impulses. An anechoic violin recording is convolved with
these impulses, yielding 128 filtered versions of the recording. These are
then synthesized as 128 monopole sources in a wave front synthesis system
and compared to a virtual monopole playing the unfiltered recording. The
subjects perceive the tone color of the recreated virtual violin as being dependent on the listening position and report that the two source types have a
different “presence.” The test persons rate the virtual violin as less natural,
sometimes remarking that the filtering is audible at high frequencies. Further
studies with a denser spacing of the virtual monopoles and a presentation in
an anechoic room are planned.
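The circular-harmonic decomposition described above can be sketched numerically. The following is a hypothetical illustration, not the authors' code; the function names are invented, and only the 128-sample circular array and maximum order 64 follow the abstract:

```python
import numpy as np

def circular_harmonic_coeffs(pattern, max_order=64):
    """Decompose a complex radiation pattern sampled at M equally spaced
    angles into circular-harmonic coefficients c_n for |n| <= max_order,
    using c_n = (1/M) * sum_m pattern[m] * exp(-i * n * theta_m)."""
    M = len(pattern)
    theta = 2 * np.pi * np.arange(M) / M
    return {int(n): np.sum(pattern * np.exp(-1j * n * theta)) / M
            for n in range(-max_order, max_order + 1)}

def reconstruct(coeffs, theta):
    """Resynthesize the pattern at angles theta from the coefficients."""
    return sum(c * np.exp(1j * n * theta) for n, c in coeffs.items())
```

With 128 microphones, orders up to 64 are at the angular Nyquist limit, which is consistent with the abstract's choice of decomposing to order 0 to 64.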
5:20
4pMU12. Applying information theory to investigate music performance with continuous electronic sensors. Edgar J. Berdahl, Michael Blandino, and Daniel Shanahan (Music, Louisiana State Univ., 102 New Music
Bldg., Baton Rouge, LA 70803, eberdahl@ccrma.stanford.edu)
A wide variety of electronic sensors can be used for designing new digital musical instruments and other human-computer interfaces. However,
presently human abilities for continuously controlling such sensors are not
well quantified. The field of information theory suggests that a human together with a user interface can be modeled as a communication channel.
Previously, Fitts’ Law used a discrete communications channel to model information conveyed by a human pointing at discrete targets. In contrast, the
present work employs a continuous communications channel to model a
human continuously controlling an analog-valued sensor. The Shannon-Hartley theorem implies that the channel capacity (i.e., HCI throughput) can be estimated by asking human subjects to perform gestures that match idealized, bandlimited Gaussian "target gestures" across a range of bandwidths. The signal-to-noise ratio of the recorded gestures then determines the channel capacity. This approach is tested on human users alternately operating simple analog sensors. Suggestions are made for designing user interfaces that could transmit more information to a computer.
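The capacity estimate in the abstract above can be sketched as follows. This is a hypothetical illustration, not the authors' code: here the SNR is taken as the ratio of target-gesture power to the power of the tracking error, and the Shannon-Hartley formula C = B log2(1 + SNR) converts it to a throughput in bits per second.

```python
import numpy as np

def hci_throughput(target, recorded, bandwidth_hz):
    """Estimate channel capacity (bits/s) via Shannon-Hartley,
    C = B * log2(1 + SNR), with SNR taken as the ratio of the
    target-gesture power to the power of the recorded-minus-target error."""
    target = np.asarray(target, dtype=float)
    error = np.asarray(recorded, dtype=float) - target
    snr = np.mean(target ** 2) / np.mean(error ** 2)
    return bandwidth_hz * np.log2(1.0 + snr)
```

In practice the target would be a bandlimited Gaussian gesture and the bandwidth swept over a range, with throughput taken from the best-performing bandwidths.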
WEDNESDAY AFTERNOON, 28 JUNE 2017
ROOM 203, 1:15 P.M. TO 5:00 P.M.
Session 4pNSa
Noise, Structural Acoustics and Vibration, Architectural Acoustics, Speech Communication, and
Psychological and Physiological Acoustics: E-Mobility–Challenge for Acoustics
Klaus Genuit, Cochair
HEAD acoustics GmbH, Ebertstr. 30a, Herzogenrath 52134, Germany
Steve Sorenson, Cochair
3M, 7911 Zionsville Road, Indianapolis, IN 46268
Chair’s Introduction—1:15
Invited Papers
1:20
4pNSa1. The new sounds of electric vehicles—Quieter but really better? Klaus Genuit (HEAD Acoust. GmbH, Ebertstr. 30a, Herzogenrath, NRW 52134, Germany, Klaus.Genuit@head-acoustics.de)
The development of new vehicles—cars and scooters—with electric powertrains is not a simple evolution but a revolution with respect to sound. After listening for more than 150 years to the specific sound of combustion engines, electric vehicles provide the opportunity for completely new sound experiences. Not only is the sound level lower, but the whole character of the sound is changed. Several questions arise with the new generation of powertrains: Does the sound fit the vehicle? What are the expectations of the customers and of the people around the vehicles? Which information should be conveyed by the sound? Do we need new regulations with respect to sound? All these topics and their implications must be discussed carefully. It is evident that effective sound design must consider diverse perceptual needs. This paper gives an introduction and an overview of the new challenges arising from E-Mobility, considering the interior and exterior sound as well as the environmental and safety aspects.
1:40
4pNSa2. The investigation of a consumer satisfaction metric for electric vehicle sound signatures. Daniel J. Swart and Anriëtte Bekker (Stellenbosch Univ., Cnr of Banghoek, Stellenbosch 7602, South Africa, djswart@sun.ac.za)
The attributes of electric vehicle sound signatures have been investigated for some time, especially considering the sound quality
and warning features of the exterior sound. The interior vehicle sound quality influences both the driving pleasure and market uptake of
these vehicles. However, to date, minimal research has been published on consumer satisfaction and on objective metrics that could predict satisfying electric vehicle sound. Conventional subjective methods of determining consumer satisfaction are tedious and time-consuming. The possibility of a simple and efficient method for determining perceived consumer satisfaction of interior wide-open-throttle
sound is investigated. Several existing objective psychoacoustic metrics are used to establish and propose a new consumer satisfaction
metric for electric vehicle sound signatures. The metric is proposed through the statistical correlation of six psychoacoustic metrics and
the subjective responses of 31 subjects. Two original and three modified sound signatures are used as test stimuli. The results showed a
high correlation with several psychoacoustic metrics, which were then used to formulate a metric that could approximate perceived consumer satisfaction. The proposed metric was evaluated against an additional dataset of electric vehicle sound signatures and conclusions
were drawn as to the accuracy and potential implementation of the proposed metric.
2:00
4pNSa3. Evaluation of electric vehicle sounds and new concepts regarding speed-dependency. Lisa Steinbach, M. Ercan Altinsoy,
Serkan Atamer, and Robert Rosenkranz (Chair of Acoust. and Haptics, TU Dresden, Helmholtzstr. 18, Dresden 01069, Germany, lisa.steinbach@tu-dresden.de)
In today's urban environment, inhabitants are permanently exposed to elevated noise levels, mostly dominated by traffic noise. The current electrification of vehicles might affect the traffic noise in city centers. The aim of this work was to determine pedestrians' reactions, the annoyance, and the warning effect of electric vehicle sounds. For this purpose, the differences in perceived annoyance, warning effect, and detection time were investigated in perception studies. Furthermore, with full speed-scaling, the sound level of an approaching vehicle starting from 0 km/h is, at the critical distance, nearly 10 dB below the level at a constant speed of 10 km/h. Therefore, variants of electric vehicle sounds were generated in which a constant level is used below 5 or 10 km/h. The results show that the change in speed-scaling influences the detection time enormously. In this study, an artificial neural network (ANN) is used as an indexing tool to imitate subjective perceptions, because in related work the results of artificial neural networks showed strong correlation with the assessments of subjects in listening tests. Through the use of an ANN, a flexible model can be developed which can predict the annoyance or the warning effect of future electric vehicle sounds.
2:20
4pNSa4. Quiet cars and blind pedestrians. Robert Wall Emerson (Mech. and Aeronautical Eng., Western Michigan Univ., 1903 West
Michigan Ave., Kalamazoo, MI 49008-5218, robert.wall@wmich.edu), Dae Shik Kim (Blindness and Low Vision Studies, Western
Michigan Univ., Kalamazoo, MI), and Koorosh Naghshineh (Mech. and Aeronautical Eng., Western Michigan Univ., Kalamazoo, MI)
People who are blind use traffic sounds to determine alignment, verify relative position to a street, identify their location, and decide
an appropriate time to cross a street. With quieter vehicles, these pedestrians often do not have the amount or kind of acoustic information needed to make good, reliable travel decisions. Blind participants listened to internal combustion vehicles and hybrid vehicles (with
and without added artificial sounds) approaching while moving forward, approaching while backing, and turning at an intersection. Participants indicated when they heard a vehicle and the direction of travel. Participants also aligned themselves to passing vehicles or indicated when crossable gaps between vehicles existed. Performance degraded most when ambient sound levels increased. However, even
under extremely low ambient conditions, some tasks were not performed well. Some artificial sounds improved performance of particular tasks, but no artificial sound adequately enhanced performance in all tasks. Quieter vehicles have a potentially severe safety impact
on pedestrians who are blind. Blind pedestrians may not be aware how their travel is impacted by less acoustic information. Addressing
the impact by adding artificial sounds promises only a partial solution and should be augmented by other efforts.
2:40
4pNSa5. Observation and evaluation model of warning sound in an electric vehicle warning index based on whine index. Sang
Kwon Lee and Man Uk Han (Mech. Eng., Inha Univ., 253 Yonghyun Dong, Incheon 402 751, South Korea, sangkwon@inha.ac.kr)
Electric vehicles (EVs) generate lower sound pressure than internal combustion engine (ICE) vehicles at low speed. Pedestrians familiar with internal combustion engines find it difficult to recognize such quiet vehicles. Therefore, an additional warning sound generator is required. The warning sound must have high detectability and low annoyance for pedestrians, while complying with the warning-sound legislation of various countries. In this paper, the validity of the index was verified using previous research results. In previous research, an AI (Annoyance Index) and a DI (Detectability Index) for electric vehicle warning sounds were obtained with consideration of the masking effect, using 12 signals synthesized from four warning sounds and three background noises. In this study, a warning sound generator was installed in an electric vehicle, and jury test results were obtained. The jury members, distances, and speeds relative to the electric vehicle were kept the same as in the previous study.
3:00–3:20 Break
3:20
4p WED. PM
4pNSa6. Relationship between frequency characteristic of fluctuated sound and detectability of warning sounds for electrical
vehicle. Nozomiko Yasui (National Inst. of Technol., Matsue College, 14-4 Nishi-ikuma-cho, Matsue, Shimane 690-8518, Japan,
n_yasui@matsue-ct.jp) and Masanobu Miura (Faculty of Sci. and Technol., Ryukoku Univ., Otsu, Japan)
The sound of an electric vehicle is quiet at low speeds, so pedestrians have difficulty recognizing approaching electric vehicles. These vehicles were designed to play a warning sound to solve this problem. However, the problem has not been solved yet, and the introduction of warning sounds is broadly discussed at different political levels. It is necessary to design warning sounds with attention to the detectability of an approaching electric vehicle, in order to make it easier for pedestrians to recognize the vehicle. Our previous studies investigated the detectability of fluctuating sounds and found that the fluctuation frequency, non-periodic fluctuation, and shape of the envelope are effective in enabling people to recognize approaching vehicles. However, the frequency characteristic of the fluctuating sound used in those studies was limited to one pattern, namely, a motor sound. Here, we investigate the relationship between the frequency characteristic of a fluctuating sound and the detectability of warning sounds for electric vehicles. Investigations were carried out using different fluctuating sounds designed to have periodic and non-periodic fluctuations, and their detectability by pedestrians was assessed. The results revealed that the frequency characteristic of a fluctuating sound influences the ability with which people detect approaching electric vehicles.
3:40
4pNSa7. Humanoid EV sound design and communications. Norio Kubo (Yokohama Inst. of Acoust., Inc., Level 10 TOC Minato
Mirai 1-1-7 Sakuragi-cho, Naka-ku, Yokohama-shi, Kanagawa-ken 2310062, Japan, kubo@yokohama-onkyo.jp)
Since an electric vehicle has no combustion engine, the extreme quietness of newly developed cars is often discussed, especially regarding exterior sound design: with an additional engine-like sound, what is a suitable sound for pedestrians? This paper focuses on the opposite side of that question, namely, EV sound design for the car driver, and thus aims at interior sound design. It also concerns better communication between automobile and driver. Given recent improvements in AI (artificial intelligence) technology, people will communicate with future automobiles. Therefore, EV sound design for communication is discussed.
4:00
4pNSa8. A modeling approach in simulating vibro-acoustic responses of electric motors. Samaneh Arabi, Glen Steyer, and Zhaohui
Sun (NVH Eng., American Axle & Manufacturing, Inc., 2965 Technol. Dr., Rochester Hills, MI 48309, samaneh.arabi@aam.com)
The trend toward electric and hybrid-electric vehicles in today's automotive industry has brought new challenges to science and engineering, including noise and vibration issues, which usually involve both airborne and structure-borne noise. The electromagnetic force typically plays a significant role in the acoustic noise radiated by an electric motor. This paper describes an innovative approach to modeling the physics of the noise radiated by the electric motor. The dynamic response of the structure is obtained by modal decomposition of the electromagnetic radial force. The stator structure is treated as an equivalent cylinder subjected to sinusoidally distributed modal loads. The mathematical formulations of the modal forces are implemented in a finite element model. An acoustic FE model is then used, taking the vibration velocity of the housing structure as a boundary condition for calculating the noise radiation. This methodology provides a better fundamental understanding of how the electric motor behaves as a noise and vibration subsystem, and it facilitates parametric design studies and optimization. The approach is validated by comparing the predicted vibration and noise of an electric motor with experimental test data, and excellent agreement is obtained.
4:20–5:00 Panel Discussion
WEDNESDAY AFTERNOON, 28 JUNE 2017
ROOM 202, 1:20 P.M. TO 5:40 P.M.
Session 4pNSb
Noise: Measuring, Modeling, and Managing Transportation Noise II
Matthew Kamrath, Cochair
Acoustics, Pennsylvania State University, 717 Shady Ridge Road, Hutchinson, MN 55350
Lisa Lavia, Cochair
Noise Abatement Society, 8 Nizells Avenue, Hove BN3 1PL, United Kingdom
Invited Papers
1:20
4pNSb1. Quantifying uncertainties in predicting aircraft noise in real-world situations. Manasi Biwalkar and Victor Sparrow
(Graduate Program in Acoust., The Penn State Univ., 201 Appl. Sci. Bldg., University Park, PA 16802, mxb1096@psu.edu)
A relatively new project in the ASCENT Center of Excellence at Penn State and Purdue is focused on validating aircraft noise models and quantifying uncertainties of both model prediction and measurement. Multiple data sets are being identified for the validation
efforts, and these will be described. This presentation will particularly focus on the BANOERAC and Vancouver Airport Authority data
sets. Meteorological influences are very important in assessing the uncertainties on received sound levels from aircraft, so having
weather data in conjunction with sound level measurements is essential. An ultimate outcome of the project will be a better understanding of the needs for acoustic and meteorological data to validate modern aircraft noise propagation tools such as the FAA’s Aviation
Environmental Design Tool (AEDT) and the parabolic equation (PE) algorithm. [Work supported by the FAA. The opinions, findings,
conclusions, and recommendations expressed in this material are those of the authors and do not necessarily reflect the views of
ASCENT FAA Center of Excellence sponsor organizations.]
1:40
4pNSb2. A multipole expansion technique for predicting en-route aircraft noise. Yiming Wang (Mech. Eng., Purdue Univ., 2120
McCormick Rd., Apt. 711, West Lafayette, IN 47906, wymchihiro@gmail.com) and Kai M. Li (Mech. Eng., Purdue Univ., West Lafayette, IN)
This study investigates the noise emitted by aircraft overflights. A multipole expansion method is used to model the noise source generated by propeller-driven aircraft. The multipole expansion method is frequently used in the study of electromagnetic wave propagation, where the far-field potential due to a localized charge distribution is approximated. The same technique is equally applicable in acoustics, where the directivity of the aircraft noise source is modeled by summing a series of multipoles: the zeroth-, first-, and second-order terms represent a monopole, dipole, and quadrupole, respectively. The monopole term is omnidirectional, and higher-order multipole terms show increasingly strong dependence on the receiver's angular position. The multipole expansion method leads to a more accurate model for evaluating the directivity effect of aircraft flyovers. In the current study, comprehensive experimental data were used to determine the strength of each multipole term by means of non-linear least squares optimization. Our preliminary results have demonstrated that higher-order multipoles, up to and including the quadrupole terms, are needed to give reasonable predictions of the sound fields. [Work sponsored by the Federal Aviation Administration.]
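As a simplified illustration of the fitting step (not the authors' method, which applies non-linear least squares to measured flyover data), multipole amplitudes can be estimated from directivity samples. In this sketch the angular factors are assumed to be Legendre polynomials P_n(cos θ) of an axial multipole series, for which the fit in the amplitudes is linear:

```python
import numpy as np

def fit_multipole_amplitudes(theta, levels, order=2):
    """Least-squares fit of far-field directivity samples to a sum of
    axial multipole angular factors P_n(cos theta): n = 0 monopole,
    n = 1 dipole, n = 2 quadrupole. Returns the fitted amplitudes."""
    x = np.cos(theta)
    # Design matrix whose columns are P_0..P_order evaluated at cos(theta).
    basis = np.polynomial.legendre.legvander(x, order)
    coeffs, *_ = np.linalg.lstsq(basis, levels, rcond=None)
    return coeffs
```

The real problem becomes non-linear once complex source strengths, phases, or source positions are also unknown, which is presumably why the study uses non-linear optimization.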
2:00
4pNSb3. Effects of modifications of the crackle percept in high-amplitude noise on sound quality and statistical metrics. S. Hales
Swift (Phys. and Astronomy, Brigham Young Univ., 2286 Yeager Rd., West Lafayette, IN 47906, hales.swift@gmail.com), Kent L.
Gee, Tracianne B. Neilsen (Phys. and Astronomy, Brigham Young Univ., Provo, UT), Micah Downing, and Michael M. James (Blue
Ridge Res. and Consulting, Asheville, NC)
Three methods of altering a waveform that exhibits the crackle percept in high-amplitude noise were previously considered [Swift,
Gee, Neilsen, 2014]. Two alterations—expanding shocks and transforming the derivative of the waveform to a Gaussian distribution—
were found to eliminate crackle, while a third modification—transforming the waveform pressure to exhibit a Gaussian distribution—
did not eliminate the crackle percept. In this paper, a fourth alteration, altering the phase of a crackling signal in the frequency domain
in the manner of an all-pass filter in selected frequency bands, is considered, and its effects on the crackle percept are compared to those of the previous methods. In addition to waveform playback, metrics such as the skewness of the pressure waveform derivative, loudness, and sharpness, along with their time-varying forms, are used in the comparison. [Work supported by AF SBIR.]
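The derivative-skewness metric mentioned above can be computed directly from a sampled waveform. The sketch below is a generic implementation, not the authors' code; large positive values indicate shock-like steepening of the waveform:

```python
import numpy as np

def derivative_skewness(pressure, fs):
    """Skewness of the time derivative of a pressure waveform, a
    statistical indicator of acoustic shock content associated with the
    crackle percept. Symmetric waveforms give values near zero; waveforms
    with abrupt compressive rises give large positive values."""
    dpdt = np.gradient(np.asarray(pressure, dtype=float), 1.0 / fs)
    d = dpdt - dpdt.mean()
    return np.mean(d ** 3) / np.mean(d ** 2) ** 1.5
```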
2:20
4pNSb4. Enabling noise engineering methods to model complex geometries. Matthew Kamrath, Philippe Jean (CSTB, 24 Rue Joseph Fourier, Saint-Martin-d'Hères 38400, France, kamrath64@gmail.com), Christophe Langrenne (LMSSC/CNAM, Paris Cedex 03, France), and Judicaël Picaut (AME, LAE, IFSTTAR, Bouguenais, France)
Many governments use standard engineering methods (e.g., CNOSSOS or Nord2000) to model outdoor noise propagation from cars,
trains, and trams. These methods provide efficient approximations of long-term averaged noise levels based on the geometrical divergence, atmospheric absorption, ground effect, refraction, reflection, and diffraction. However, these methods are limited to diffraction
around simple, box-shaped geometries; for example, engineering methods cannot model a T-barrier. In addition, reference methods like
boundary elements or finite differences are too computationally expensive for many realistic cases. Thus, a hybrid method has been
developed to improve the accuracy of the engineering methods for complex geometries and surfaces. This approach interpolates a table
of 2.5D boundary element results to estimate the influence of the complex objects. The hybrid method produces more accurate results
than the standard engineering methods compared to a fast-multipole boundary element method for a small test case using a T-barrier up
to almost 2 kHz. However, significant differences between the hybrid and boundary element methods remain because the boundary element method sums the pressures coherently and the hybrid method sums them incoherently like the engineering methods do for simplicity and efficiency.
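The table-interpolation step of the hybrid method can be illustrated with a generic bilinear lookup. This is a sketch under assumed conventions (the actual implementation, grid axes, and table contents are not described in the abstract): given precomputed 2.5-D BEM correction levels on a two-parameter grid, e.g., frequency versus receiver position, intermediate values are estimated as:

```python
import numpy as np

def bilinear_lookup(xs, ys, table, x, y):
    """Bilinearly interpolate a precomputed correction table.
    xs, ys are sorted 1-D grid coordinates; table[i, j] holds the
    precomputed value at (xs[i], ys[j]). Returns the estimate at (x, y)."""
    i = int(np.clip(np.searchsorted(xs, x) - 1, 0, len(xs) - 2))
    j = int(np.clip(np.searchsorted(ys, y) - 1, 0, len(ys) - 2))
    tx = (x - xs[i]) / (xs[i + 1] - xs[i])
    ty = (y - ys[j]) / (ys[j + 1] - ys[j])
    return ((1 - tx) * (1 - ty) * table[i, j]
            + tx * (1 - ty) * table[i + 1, j]
            + (1 - tx) * ty * table[i, j + 1]
            + tx * ty * table[i + 1, j + 1])
```

Interpolating a precomputed table trades the cost of a full BEM solve per configuration for a cheap lookup, which is the efficiency gain the hybrid method relies on.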
2:40
4pNSb5. On the sound fields due to a high-speed line source moving above an extended reaction ground. Kai M. Li and Yiming
Wang (Mech. Eng., Purdue Univ., 140 South Russell St., West Lafayette, IN 47907-2031, mmkmli@purdue.edu)
Predicting sound fields due to a moving sound source is of great importance in assessments of road traffic noise and the study of en-route aircraft noise. In a recent publication, Dragna and Blanc-Benon [J. Sound Vib. 349, pp. 259-275] developed an asymptotic solution for predicting the sound fields due to a moving line source placed above a locally reacting ground surface. However, for many outdoor ground types, such as forest floors and snow-covered grounds, the assumption of a locally reacting surface is not accurate enough for many practical applications. In the present paper, an asymptotic solution for the evaluation of sound fields above an extended reaction ground is derived for a line source moving with a constant speed. Based on the boundary conditions developed by the time-domain approach, a Fourier transform method and its associated conformal mapping are used to give an integral expression for the frequency-domain solution. The steepest descent method is then applied in the contour integration to yield a closed-form asymptotic approximation for the sound field due to a moving source placed above an extended reaction ground.
Contributed Paper
3:00
4pNSb6. A simulation platform for interior railroad noise. Roalt Aalmoes (Environ. Dept., Netherlands Aerosp. Ctr., Anthony Fokkerweg 2, Amsterdam 1059CM, Netherlands, roalt.aalmoes@nlr.nl), Paul de Vos (SATIS, Weesp, Netherlands), and Henk Lania (Environ. Dept., Netherlands Aerosp. Ctr., Amsterdam, Netherlands)
High railroad noise levels are a challenge for both the exterior and interior environment of a train. The DESTINATE project (partially funded by the European Union’s Shift2Rail programme) aims to find cost-effective mitigation measures to reduce railroad noise. Although most focus is placed on exterior noise, reducing noise inside the cabin increases cabin comfort for passengers and thus the attractiveness of this mode of transport. An analysis is made of different interior noise sources. To evaluate mitigation measures for railroad noise, a simulation platform is presented that provides both audible as well as visual stimuli for test subjects. In this platform, source sounds are converted to the corresponding sound at the receiver by taking into account propagation through materials and the atmosphere. These can be combined with sound from the environment to create a realistic simulation of train noise inside the cabin. In addition to the audio experience, a visual experience is also presented to make the simulation realistic for the test subjects. Experience from the use of a similar simulator for exterior noise sources shows that a visual representation of the environment increases the realism of the simulation. It also allows for better comparison between different mitigation alternatives.
3:20–3:40 Break
Invited Paper
3:40
4pNSb7. Sounding out smart cities: Auralization and soundscape monitoring for environmental sound design. Alex Southern
(Acoust., AECOM Ltd., 7th Fl., Aurora, 120 Bothwell St., Glasgow G2 7JS, United Kingdom, alex.southern@aecom.com), Francis Stevens, and Damian Murphy (AudioLab, Dept. of Electronics, Univ. of York, York, United Kingdom)
Auralization is key in developing a better understanding of how significant changes or infrastructure planning in our urban environment can have an impact on our related environmental soundscape. It allows consultants, planners and other stakeholders to hear the
potential acoustic changes that might result, so that designs might be better optimized; it is also a valuable dissemination tool for informing the public as to the nature of such changes. Auralization also facilitates subjective soundscape assessment of proposed developments
at the design stage; once construction is complete, smart sensor networks enable soundscape monitoring and objective evaluation on an ongoing basis. Of particular interest is sound emitted from transportation, as it is generally considered unwanted sound and hence defined as noise. Transportation noise, and road traffic noise in particular, is considered a concern for public health by the WHO, and
annoyance with some aspect of our daily soundscape is not uncommon. This work presents an overview of how auralization has been
used in the context of some recent transportation noise related case studies. The complete auralization chain is presented including
source measurement and soundscape monitoring, sound propagation modeling using numerical simulation, soundfield rendering, and the
potential for immersive multimodal presentation.
Contributed Paper
4:00
4pNSb8. Sound propagation methods for road traffic noise auralization.
Matthew Muirhead (Noise and Acoust., AECOM, Midpoint, Alençon Link,
Basingstoke, Hampshire RG21 7PP, United Kingdom, matthew.muirhead@
aecom.com) and Alex Southern (Noise and Acoust., AECOM, Glasgow,
United Kingdom)
When changes to an area involving an upgrade to a road network (new
or existing) are planned, an assessment of the potential for change in the
sound environment is carried out. However, this assessment usually only considers local residents’ daily noise exposure, and it can be difficult to
understand what an increase of 2 dB, for example, actually means in terms
of what you hear. Further to this, perceived changes in sound character in
relation to road traffic and associated noise mitigation measures are not
assessed by current methodologies. There has been increased interest
recently in auralization as a potential environmental sound design tool.
In the absence of standardized guidance on the application of existing sound propagation models for auralization, there is a potential to obtain a
wide range of perceptually different results, depending on the adopted methodology. This work discusses important considerations for the auralization
of road traffic noise. Accurately creating these experiences requires bringing
together existing sound recordings with the latest and most advanced methods for traffic and environmental noise modeling. Here we discuss some
proposed algorithms and methods for achieving this and potential improvements to the nascent methodology for the future.
Invited Paper
4:20
4pNSb9. A method for plausible road tyre noise auralization. Alex Southern (Acoust. Dept., AECOM Ltd., 7th Fl., Aurora, 120
Bothwell St., Glasgow G2 7JS, United Kingdom, alex.southern@aecom.com) and Damian Murphy (AudioLab, Dept. of Electronics,
Univ. of York, York, United Kingdom)
Interest in research relating to the impact and abatement of road traffic noise has increased since the Environmental Noise Directive
was introduced. The World Health Organisation has recognized road traffic noise as a serious problem for public health, and annoyance
with some aspect of our daily soundscape is well recognized as a common complaint. Auralization tools can allow designers, planners
and relevant stakeholders to listen and converse on human response to a planned development or mitigation strategy. An ideally detailed
road traffic noise auralization system would render the acoustic emission of every vehicle on the road network at any desired receptors.
This work focuses on the sound emission from the road-tyre interaction for the purpose of auralization and extends previous work on synthesising road tyre noise from a small dataset of roadside recordings. The proposed method is discussed in comparison to a recently published approach, and its plausibility is verified.
Contributed Papers
4:40
4pNSb10. Shape optimization of reactive mufflers using Threshold Acceptance and FEM. Abdelkader Khamchane (Département de Génie mécanique, Laboratoire de Mécanique, Matériaux et Énergétique (L2ME), Université A/Mira de Béjaïa, Rte. de Targa Ouzemour, Bejaia 06000, Algeria, abdelkader.khamchane@yahoo.fr), Youcef Khelfaoui, and Brahim Hamtache (Département de Génie mécanique, Laboratoire de Mécanique, Matériaux et Énergétique (L2ME), Université A/Mira de Béjaïa, Bejaia, Algeria)
In this paper, the acoustic performance of three different expansion-chamber mufflers with extended tube under a space constraint is presented. A shape optimization analysis is performed using a scheme called Threshold Acceptance (TA); the best designs obtained by the shape optimization method are then analyzed by the Finite Element Method (FEM). The numerical approach is based on maximization of the sound transmission loss (STL) using the Transfer Matrix Method (TMM), a modeling method based on the plane-wave propagation model. The FEM solution used to analyse the STL of the shape-optimized mufflers is based on the acoustic power method; the standard computational code COMSOL Multiphysics is used to analyse the sound attenuation of the mufflers in 3D by the FE method. The acoustical performance of the resulting mufflers is then assessed by comparing the FEM solution with the analytical method. Results show that the maximal STL is precisely located at the desired targeted tone. In addition, the acoustical performance of the muffler with an outlet extended tube is found to be superior to that of the others. Consequently, this approach provides a quick and novel scheme for the shape optimization of reactive mufflers.
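The general Threshold Acceptance scheme named in the abstract can be sketched in a few lines; this is a minimal illustration under stated assumptions, with a toy one-dimensional objective standing in for the STL, and all function names are illustrative rather than from the paper.

```python
import random

def threshold_accept(objective, x0, step=0.5, threshold0=1.0, decay=0.999,
                     iters=5000, seed=1):
    """Minimal Threshold Acceptance (maximization): accept a neighbor unless
    it is worse than the current solution by more than a shrinking threshold."""
    rng = random.Random(seed)
    x, best = x0, x0
    threshold = threshold0
    for _ in range(iters):
        cand = x + rng.uniform(-step, step)   # random neighbor
        if objective(cand) > objective(x) - threshold:
            x = cand
            if objective(x) > objective(best):
                best = x
        threshold *= decay                    # tighten acceptance over time
    return best

# Toy stand-in for an STL objective with a single peak at x = 3.
stl = lambda x: -(x - 3.0) ** 2
x_opt = threshold_accept(stl, x0=0.0)
```

Unlike simulated annealing, the acceptance rule is deterministic given the candidate: no Boltzmann probability is evaluated, only a worsening threshold that decays toward zero.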
5:00
4pNSb11. Evaluation of an ANSI standard for predicting transportation
noise-induced sleep disturbance. Sanford Fidell (Fidell Assoc., Inc., 23139
Erwin St., Woodland Hills, CA 91367, sf@fidellassociates.com)
An ANSI Working Group has completed its review of a noise-induced
sleep disturbance standard (Part 6 of ANSI/ASA S12.9-2008) by deciding
not to re-affirm it. The Working Group’s decision was based on the relatively small and non-representative corpus of field observations of noise-induced behavioral awakenings available for analysis; on the poor
generalizability of predicted awakening rates from airport to airport; on
practical experience with the limited utility of predictions of “at least one
behavioral awakening per night” for purposes of assessing environmental
noise impacts; on the statistical assumptions of convenience and post hoc
analysis methods used to generate predictions of awakenings; on information published subsequent to adoption of the original standard; and on the
findings of peer-reviewed re-analyses of the findings on which the original
standard was based. The Working Group further recommended that no further analytic effort intended to develop alternate awakening prediction
methods be undertaken until a substantial body of new peer-reviewed field
observations has been published.
5:20
4pNSb12. Simplified airport noise models for integrated environmental
impact assessment of aviation. Antonio J. Torija, Rod H. Self, and Ian H.
Flindell (ISVR, Univ. of Southampton, ISVR, Southampton SO17 1BJ,
United Kingdom, A.J.Martinez@soton.ac.uk)
With the projected significant increase in air traffic demand over the next few years, to ensure the sustainability of the aviation sector and to avoid further deterioration of the quality of life of communities near airports, the various stakeholders (manufacturers, airlines, airports, and governments) are required to address projections of future aviation scenarios in an integrated manner, in which noise as well as air quality and carbon emissions are considered. Our research group is currently developing simplified airport
noise models to overcome the substantial input-data requirements and computational complexity of existing airport noise models for computing precise noise contours (e.g., FAA’s Integrated Noise Model, INM), and also to ensure compatibility with the input and output requirements of climate and
air quality models. These models are intended to be incorporated into an
overall architecture for the strategic environmental assessment of aviation
scenarios where different technology and fleet composition options (considering retirements and market penetrations) and growth rates are examined.
This paper presents the results of a series of case studies illustrating the
applicability of simplified airport noise models for analysing strategic aviation noise impact. Both the unavoidable limitations and advantages of simplified models for computing noise outputs under complex scenarios are
discussed.
WEDNESDAY AFTERNOON, 28 JUNE 2017
BALLROOM A, 1:20 P.M. TO 3:40 P.M.
Session 4pNSc
Noise: Urban Environment and Noise Control (Poster Session)
Bennett M. Brooks, Cochair
Brooks Acoustics Corporation, 30 Lafayette Square - Suite 103, Vernon, CT 06066
Greg Watts, Cochair
Engineering and Informatics, University of Bradford, Chesham, Richmond Road, Bradford BD7 1DP, United Kingdom
All posters will be on display from 1:20 p.m. to 3:40 p.m. To allow contributors in this session to see the other posters, authors of
odd-numbered papers will be at their posters from 1:20 p.m. to 2:30 p.m. and authors of even-numbered papers will be at their posters
from 2:30 p.m. to 3:40 p.m.
Contributed Papers
4pNSc1. Case-study evaluation of a combined noise barrier and solar
panel in an urban area. Dag Glebe (Bldg. Technology/Sound & Vib.,
RISE Res. Institutes of Sweden, Borås, Sweden), Krister Larsson (Bldg.
Technology/Sound & Vib., RISE Res. Institutes of Sweden, Box 857, Boras
SE-50115, Sweden, krister.larsson@sp.se), and Xuetao Zhang (Bldg. Technology/Sound & Vib., RISE Res. Institutes of Sweden, Borås, Sweden)
The LIFE+ project NOISUN, concluded in February 2016, had as its main
objective to demonstrate an innovative noise barrier that produces solar
energy for distribution to local district heating systems. Specially adapted
solar collectors were installed at a major transport thoroughfare for both
road (the E20 motorway) and railway traffic in the Swedish municipality of
Lerum, and the effect of the intervention was evaluated in several ways. In
the beginning of the project, calculations of expected noise mitigation were
performed. Before and after the noise barrier was erected two actions were
performed: Questionnaires were sent out to concerned households, and
sound level measurements were performed in combination with recordings
of noise events. The insertion loss of the noise barrier proved to be very close to the estimates. The noise reduction was most pronounced for loud train pass-bys and less so for road traffic noise from the E20 motorway (contributions from other streets also left less room for improvement in road traffic noise). The questionnaire results are presented and discussed in relation to the measurement results. The analysis shows a clear improvement in the perceived noise situation, but also points to some of the problems inherent in analyses of field projects.
4pNSc2. Binaural sound map of Malaga. Carmen Rosas and Salvador
Luna (Univ. of Malaga, Av. de Cervantes, 2, Malaga 29016, Spain, crosas@
uma.es)
Streets, squares, parks, and even buildings and businesses, present a
sonic footprint which characterises every city at a certain period. Despite
this, in the field of acoustics these sounds are usually analysed as noises to
be measured and reduced. This fact may have caused the study of city
sounds to be deemed secondary and may also have led to prioritising the resolution of conflicts brought about by undesired noises. As it happens with
the natural environment, however, taking proper care of an acoustic environment stems from its knowledge and appreciation. This sound map aims at
creating a practical tool that collects all the most distinctive soundscapes of
Malaga so that they can be listened to by people from all over the world,
become part of the city’s cultural heritage and be archived and catalogued
for their conservation. Besides, in contrast with noise maps, a sound map
allows for the characterisation of the territory from a different perspective,
in which the identity of the depicted area is defined by all its sounds. All the
recordings are being made with binaural microphones, so the sounds produce a more immersive experience when using headphones. These microphones were built for this project, and a series of HRTF measurements was
obtained and applied to different audio signals for the realization of a psychoacoustic test, in order to assess the spatiality provided by the system.
Another series of audio samples was generated from the MIT’s HRTFs, and
both results have been compared.
4pNSc3. Effects of train and truck traffic on noise levels in urban communities. Inkyu Han (Epidemiology, Human Genetics, and Environ.
Sci., Univ. of Texas Health Sci. Ctr. School of Public Health, 1200 Pressler
St., RAS W-642, Houston, TX 77030, inkyu.han@uth.tmc.edu), Lara
Samarneh (Biomedical Eng., Univ. of Texas Austin, Austin, TX), and
Elaine Symanski (Epidemiology, Human Genetics, and Environ. Sci., Univ.
of Texas Health Sci. Ctr. School of Public Health, Houston, TX)
The objective of this study was to measure noise pollution in urban communities near industrial facilities in Houston, Texas and to evaluate the
impact of train and truck (e.g., 18-wheeler) traffic associated with industrial
activities on noise pollution. In this pilot study, we monitored noise levels
as A-weighted decibels (dBA) using a sound level meter in two communities adjacent to industrial facilities for 11 days between 9 a.m. and 1 p.m.
Our preliminary results showed that average noise levels in residential communities were 58.4±5.9 dBA without any trains or trucks passing by, as compared to noise levels of 65.6±7.0 dBA and 64.1±7.1 dBA with train or truck traffic, respectively. Railroad and roadway traffic resulted in increases in background noise levels of 8.6% and 3.6%, respectively. Compared to background
noise levels, railroad traffic significantly increased low (<500 Hz), medium
(500-1000 Hz), and high frequencies (> 1000 Hz) whereas truck traffic significantly increased only low and medium frequencies. Additional studies
would be needed to address other potential sources of noise such as major
road traffic during morning and evening rush hour periods that might be
also responsible for elevation of noise pollution in these urban communities.
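The arithmetic behind comparing background and traffic-affected levels can be sketched briefly; decibel levels combine energetically rather than arithmetically, so two equal sources raise the level by 3 dB, not by a factor of two. The helper names are illustrative, not from the study.

```python
import math

def db_sum(levels):
    """Energetic (incoherent) combination of sound pressure levels, in dB."""
    return 10.0 * math.log10(sum(10.0 ** (L / 10.0) for L in levels))

# Two equal 60 dBA sources combine to about 63 dBA, not 120 dBA.
combined = db_sum([60.0, 60.0])   # ~63.01 dBA

# With the abstract's rounded values, train traffic raised the average
# level from 58.4 dBA to 65.6 dBA, i.e. an increase of 7.2 dB.
delta = 65.6 - 58.4
```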
4pNSc4. Identification of potential noise conflict zones on the construction and operation of the new Chile-Argentina Aguas Negras Interconnection Road. Sebastian Fingerhuth (School of Elec. Eng., Pontificia Universidad Católica de Valparaíso, Av. Brasil 2147, 3er piso, Valparaíso 2362804, Chile, sebastian.fingerhuth@pucv.cl) and Alejandro Araya (Faculty of Eng., Pontificia Universidad Católica de Valparaíso, Valparaíso, Chile)
As part of the infrastructure integration between Chile and Argentina, there is a plan to build the Aguas Negras Interconnection Road. It seeks, ultimately, to improve the physical and commercial connectivity of the zones between the ports of Porto Alegre (Brazil) and Coquimbo (Chile). It crosses the Andes Mountains and includes the construction of a tunnel for vehicular traffic as well as the extension of the access road D-41CH on the Chilean side. This paper presents the results of a study done to establish those cities or communities that may be affected during construction or future operation of the route. It also presents the mitigation actions and solutions that could be used to reduce the noise impact. It was established that there are cities or towns linked to the current route of the D-41CH that will be affected directly or indirectly due to their proximity to the current route layout. Finally, it was concluded that the construction and future operation of the D-41 route will cause diverse sound impacts in the different communities along the route.
4pNSc5. Identification of acoustic moving sources in the context of a
road vehicle at pass-by: A numerical comparison. Remi Cousson, Marie-Agnes Pallas (AME/LAE, IFSTTAR, 25 Ave. Francois Mitterrand, Bron 69675, France, remi.cousson@ifsttar.fr), Quentin Leclerc (LVA, INSA Lyon, Villeurbanne, France), and Michel C. Berengier (AME/LAE, IFSTTAR, Bouguenais, France)
The problem of the identification of acoustic sources has been widely treated in static or quasi-static source contexts and, more recently, for moving sources, in a wide range of applications. Different methods have been developed in this endeavor. In the context of identifying sources on a passing-by vehicle, the beamforming method is a reference that has well-known limitations: poor resolution at low frequencies makes it difficult to discriminate sources that are close together or have very different levels, and the convolution of the point spread function with the source signal makes it difficult to evaluate the level precisely. Deconvolution methods are used to overcome those limitations. Mostly developed at first in static contexts, extensions to moving sources have been proposed in the transportation field, mainly in air transportation and underwater acoustics. In the present investigation, these methods are numerically tested with parameters fitted to the road vehicle context. Their performance is assessed at different speeds, with varying additional noise levels. Beamforming, as the reference method, and several deconvolution methods (DAMAS, CLEAN, NNLS) extended to moving sources are compared using performance metrics including location of the maximum, source gravity center, maximum level, and extended level. Illustrations involving an academic pendulum setup are given.
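The conventional (delay-and-sum) beamformer used here as the reference method can be sketched for the simplest static, narrowband case; this is an illustrative minimal example, not the moving-source formulation of the paper, and all variable names are assumptions.

```python
import numpy as np

# Narrowband delay-and-sum beamforming on a uniform linear array:
# estimate the arrival angle of a single far-field plane wave.
c = 343.0                 # speed of sound, m/s
f = 1000.0                # analysis frequency, Hz
x = 0.1 * np.arange(8)    # 8 microphones, 0.1 m spacing (< half a wavelength)

theta_true = np.deg2rad(30.0)
# Received phasors for a unit-amplitude plane wave from theta_true.
a = np.exp(-2j * np.pi * f * x * np.sin(theta_true) / c)

angles = np.deg2rad(np.arange(-90, 91))
# Steering vectors for each scan angle; beam power is |w^H a|^2.
w = np.exp(-2j * np.pi * f * np.outer(np.sin(angles), x) / c)
power = np.abs(w.conj() @ a) ** 2

est = np.rad2deg(angles[np.argmax(power)])   # peak of the beam map
```

The low-frequency resolution limitation mentioned in the abstract follows directly: the main lobe of `power` widens as `f` decreases, so nearby sources merge.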
4pNSc6. Classifying type of vehicles on the basis of data extracted from
audio signal characteristics. Karolina Marciniuk (Multimedia Systems
Dept., Gdansk Univ. of Technol., Gdansk, Poland), Bozena Kostek (Audio Acoust. Lab., Gdansk Univ. of Technol., Narutowicza 11/12, Gdansk 80-233, Poland, bokostek@audioakustyka.org), and Andrzej Czyżewski (Multimedia Systems Dept., Gdansk Univ. of Technol., Gdansk, Poland)
The aim of this study is to find and optimize a feature vector, extracted from an audio signal, for the automatic recognition of vehicle type.
First, the influence of weather-based conditions of road surface on spectral
characteristic of the audio signal recorded from a passing vehicle in close
proximity to the road is discussed. Next, parameterization of the recorded
audio signal is performed. For that purpose, the MIRtoolbox, designed for
music parameter extraction, is used to obtain a vector of parameters. Correlation analyses are performed to check whether the extracted parameters enable separation of selected types of vehicle-associated noise, e.g., car, truck, and motorcycle. The Behrens-Fisher statistic is used to find the most suitable parameters that may be contained in the optimized feature vector. The last step
is to build a decision system that allows for the automatic classification of a
vehicle type. The results of automatic classification of prepared vehiclenoise related samples are shown and discussed. [Research was supported by
the Polish National Centre for Research and Development within the grant
No. OT4-4B/AGH-PG-WSTKT.]
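The pipeline described above (extract spectral features, then threshold or classify) can be sketched with a single illustrative feature; the spectral centroid here is one of the descriptors MIRtoolbox-style extractors compute, but the signals, threshold, and labels below are toy assumptions, not the study's data.

```python
import numpy as np

def spectral_centroid(signal, fs):
    """Spectral centroid (Hz): amplitude-weighted mean frequency of the spectrum."""
    spec = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    return float(np.sum(freqs * spec) / np.sum(spec))

fs = 8000
t = np.arange(fs) / fs
# Synthetic stand-ins: low-frequency engine rumble vs. higher-pitched noise.
truck_like = np.sin(2 * np.pi * 100 * t)
motorcycle_like = np.sin(2 * np.pi * 2000 * t)

# A one-feature "classifier": label by thresholding the centroid.
label = lambda sig: "truck" if spectral_centroid(sig, fs) < 500 else "motorcycle"
```

A real feature vector would stack many such descriptors and feed a trained decision system rather than a fixed threshold.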
4pNSc7. Warning sound generation system of an electric vehicle system.
Man Uk Han and Sang Kwon Lee (INHA Univ., 100 INHA-RO, Incheon
22212, South Korea, nanoneo@naver.com)
Electric vehicles (EVs) generate lower sound pressure than internal combustion engine (ICE) vehicles at low speed. Pedestrians familiar with internal combustion engines find it difficult to notice such quiet vehicles. Therefore, an additional warning sound generator is required. The warning sound must have high detectability and low annoyance to pedestrians, while complying with the legislation on warning sounds in various countries. In this paper, we design an electric warning sound with high detectability and low annoyance. An active generation system is also designed, based on active sound design, and is installed in the electric vehicle to generate this sound. The sound is validated with a subjective test under running conditions on the road.
4pNSc8. Critical frequencies for the annoyance judgments of noises
with tonal component, to develop an equal-annoyance contour. Matias
A. Pace, Nicolas Urquiza, Florent Masson (Universidad Nacional de Tres de Febrero, Mosconi 2736, Sáenz Peña, Buenos Aires 1674, Argentina, matiasalejopace@gmail.com), and Shin-ichi Sato (Universidad Nacional de Tres de Febrero, Caseros, Provincia de Buenos Aires, Argentina)
The influence of tonal components on noise annoyance has been the focus of much previous research. It has been found that the annoyance varies when the frequency of the tone changes. In this work, subjective tests on noise annoyance were performed using environmental, urban, and industrial tonal noises. Stationary signals were used and modified by adding tones. The tonal noise was defined according to ISO 1996-2. An 11-point numerical scale test was conducted at the same sound pressure level (SPL) in dBA. The results of the tests showed that the perceived annoyance varied mainly with the center frequency of the test stimuli rather than with other psychoacoustic parameters of the signals. These results establish the critical frequencies of the spectrum at which the annoyance of tonal noises varies considerably, toward the development of an equal-annoyance contour.
4pNSc9. Estimation of incident energy angular distribution at the building envelope in urban environment. Miodrag Stanojević, Milos Bjelic, Dragana Sumarac Pavlovic, and Miomir Mijić (School of Elec. Eng., Univ. of Belgrade, Bulevar kralja Aleksandra 73, Belgrade, Serbia, miodragstanojevic@bitprojekt.co.rs)
This paper presents preliminary results of research dedicated to noise analysis at the building envelope using microphone arrays. The motivation for such research is the dependence of the sound insulation characteristics of façade elements on the incidence angles of noise impinging on a façade. The goal is to determine the influence of different building and street configurations on the discrepancies between laboratory and in-situ values of façade element sound insulation, and to develop a model for predicting in-situ façade performance depending on the building surroundings. A microphone array mounted on a façade can provide insight into the distribution of impinging noise energy over angles of arrival. This distribution is examined for a number of different scenarios, including buildings in canyon and non-canyon streets, different mounting heights of the array, different street widths, etc. A planar array consisting of 24 microphones with a diameter of 2.2 m is used for the measurements. Results for the given scenarios are presented and compared, with a discussion of the influence of building surroundings on the performance of façade elements.
4pNSc10. Dredge noise control. Arno S. Bommer and Adam Young (CSTI
Acoust., 16155 Park Row, Ste. 105, Houston, TX 77084, arno@cstiacoustics.com)
When dredges operate near residences, noise can be a concern. The
major noise sources of a dredge can include diesel engines (casing and
exhaust), generators, transformers, ventilation fans, electric motors, pumps,
and winches. Some equipment is enclosed and some is on the open decks.
Both airborne and structure-borne noise can be issues. As the dredge operates, its location and orientation relative to residences continuously change.
Noise data and treatments are discussed from several dredging projects.
4pNSc11. Performance analysis of the low-cost acoustic sensors developed for the DYNAMAP project: A case study in the Milan urban area.
Francesc Alías, Rosa M. Alsina-Pagès, Joan Claudi Socoró, Ferran Orga
(Grup de Recerca en Tecnologies Mèdia, La Salle - Universitat Ramon
Llull, C/Quatre Camins, 30, Barcelona 08022, Spain, falias@salleurl.edu),
and Luca Nencini (BlueWave Acoust., Follonica, Italy)
Dynamic noise maps are expected to accurately represent in real time the
noise levels captured by acoustic sensors in urban environments. Thus, the
measurement precision of the employed sensors becomes of paramount importance to obtain a reliable picture of the urban noise levels. In this work, a
case study designed to validate the accuracy of the Leq values of the low-cost acoustic sensor developed for the DYNAMAP project in real-life conditions is presented. To that effect, simultaneous acoustic measurements have been conducted at the same locations in the project’s Milan pilot area using a Class I sound level meter and the low-cost sensor. The analysis of the parallel acoustic data collected during the recording campaign compares both measuring devices, accounting for traffic noise, background city noise, and non-traffic noise events.
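The Leq quantity being compared between the two instruments is just the mean-square pressure expressed in decibels; a minimal sketch (illustrative names, synthetic signal, not the campaign data):

```python
import numpy as np

P0 = 2e-5  # reference pressure, Pa

def leq(pressure):
    """Equivalent continuous sound level of a pressure time series, in dB."""
    return 10.0 * np.log10(np.mean(pressure ** 2) / P0 ** 2)

fs = 48000
t = np.arange(fs) / fs
# A 1 Pa, 1 kHz tone has an RMS of 1/sqrt(2) Pa, i.e. about 91 dB.
tone = np.sin(2 * np.pi * 1000 * t)
level = leq(tone)

# Two instruments sampling the same field should agree in Leq; a fixed
# gain error in the cheaper sensor appears as a constant dB offset.
offset = leq(1.02 * tone) - level   # ~0.17 dB for a 2% gain error
```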
4pNSc13. Transportation noise exposure is strongly correlated with
increased morbidity and mortality. Daniel Fink (The Quiet Coalition, The
Quiet Coalition, P.O. Box 533, Lincoln, MA 01733, DJFink@thequietcoalition.org)
Transportation noise has long been considered just a nuisance but
numerous studies now show that transportation noise exposure is strongly
correlated with increased morbidity and mortality. Noise interferes with
human activity at 45 decibels. Nighttime noise interferes with sleep,
needed for optimal human health and function. At 55 decibels daily timeweighted average (TWA), noise causes increases in stress hormone levels,
correlated with changes in lipid and glucose metabolism, hypertension,
stroke and heart disease, and hospitalization and death. At 70 decibels
TWA, noise causes hearing loss. Other than death, the non-auditory effects
of noise on health are small for each individual, but great in aggregate.
Approximately 100 million Americans are exposed to noise loud enough to
damage health. The loss of the nighttime quiet period in cities is a major
contributor to this problem. Technologies for a quieter America are well
known and, with the exception of those needed to reduce aircraft noise, neither difficult nor costly to implement. The goal for both human exposure
and noise from all sources, adapted from the Nuclear Regulatory Commission, should be ALARA, As Low As Reasonably Achievable. Acoustics
professionals must work with physicians and other health professionals to
accomplish this.
4pNSc12. Acoustic potentials of urban surfaces. Wolfgang Herget and
Peter Brandstätt (Acoust., Fraunhofer Inst. for Bldg. Phys. IBP, Nobelstrasse
12, Stuttgart 70569, Germany, wolfgang.herget@ibp.fraunhofer.de)
Noise levels in larger cities have been increasing tremendously due to their growth and the concentration of population. These noise levels have to be controlled properly in order to reduce health hazards, lower the annoyance for nearby inhabitants and workers, and improve quality of life. For that purpose, the treatment of ambient sound by means of controlling sound transmission, diffraction, refraction, and absorption from sources to receivers is of great importance. Besides reducing the sound radiation of sources, we can analyse and map the resulting sound fields and exploit the walls and surfaces of buildings or streets to reduce the sound level. It is, however, a challenge to employ currently existing surfaces intelligently with innovative concepts and technologies. Such acoustic urban design and insights gained from acoustic measurements in the laboratory are shown in the paper. As a perspective for the future, related ideas, technologies, and products will be introduced.
WEDNESDAY AFTERNOON, 28 JUNE 2017
ROOM 204, 1:20 P.M. TO 4:20 P.M.
Session 4pPAa
Physical Acoustics and Signal Processing in Acoustics: Outdoor Sound Propagation II
Philippe Blanc-Benon, Cochair
Centre acoustique, LMFA UMR CNRS 5509, Ecole Centrale de Lyon, 36 avenue Guy de Collongue,
Ecully 69134 Ecully Cedex, France
Sandra L. Collier, Cochair
U.S. Army Research Laboratory, 2800 Powder Mill Rd, RDRL-CIE-S, Adelphi, MD 20783-1197
Invited Papers
1:20
4pPAa1. A wide-angle topography-capable parabolic equation using a non-transformed coordinate system. Michelle E. Swearingen, Michael J. White (US Army ERDC, Construction Eng. Res. Laboratory, P.O. Box 9005, Champaign, IL 61826, michelle.e.swearingen@usace.army.mil), and Mihan H. McKenna (US Army ERDC, Vicksburg, MS)
Low-frequency acoustic propagation for long ranges (up to 200 km) is of significant military interest, particularly for the purposes of
persistent surveillance of denied areas and infrastructure/activity sensing. A deep understanding of the natural environment’s influence
on the signal is critical for accurate interpretation of received signals at monitoring stations. Influenced by the underwater acoustics community, a flexible, wide-angle, finite-difference parabolic equation model has been developed. This model handles discontinuities and
gradual variations in density and wavenumber, allowing terrain/topography to be represented as range-dependent density and wavenumber profiles. Traditional coordinate-transforming methods for propagation over topography require interpolations and/or extrapolations
near the top of the computational boundary when transformed back into the original coordinate system, introducing potentially significant errors. These methods work well for shorter distances, where interpolations/extrapolations are minimized, and bounded systems,
such as certain ocean conditions, but potentially produce questionable results as the propagation distances, and influences from the upper
domain, increase. While not as computationally efficient, the method presented here does not require a coordinate transformation. An
3884
J. Acoust. Soc. Am., Vol. 141, No. 5, Pt. 2, May 2017
Acoustics ’17 Boston
overview of the mathematical development, comparisons to benchmark cases, and a discussion of this method are presented. (Distribution Statement A—Approved for Public Release; distribution unlimited.)
1:40
4pPAa2. On the atmospheric-driven variability of impulse sounds. Sylvain Cheinet, Markus Christoph (ISL, 5 Rue General Cassagnou, Saint-Louis 68300, France, sylvain.cheinet@isl.eu), Sandra L. Collier (Army Res. Lab., Adelphi, MD), Matthias Cosnefroy, Loic
Ehrhardt, Florian Königstein (ISL, Saint-Louis, France), Vladimir E. Ostashev (U.S. Army Engineer Res. and Development Ctr., Hanover, NH), Winfried Rickert (WTD91, Meppen, Germany), Alexandre Stefanovic (ISL, Saint-Louis, France), Thomas Wessling
(WTD91, Meppen, Germany), and D. K. Wilson (U.S. Army Engineer Res. and Development Ctr., Hanover, NH)
Sound propagation is largely affected by near-surface conditions (ground type, turbulence, and weather). The presentation reports
on a joint research effort aimed at improving fundamental understanding of these propagation effects on impulsive sounds. Data collections will be presented, consisting of transient acoustic signals measured at a distance from a reproducible sound source under a variety of acoustic-atmospheric conditions. Analysis of these data, supported by theoretical studies and numerical simulations, is conducted
to reveal how the environmental conditions drive the observed variations of acoustic pulses. Challenges in measurements, simulations,
and analysis will be discussed.
2:00
4pPAa3. Effects of atmospheric nonstationarity on models of low-frequency surface wind noise. Gregory W. Lyons (National Ctr.
for Physical Acoust., The Univ. of MS, ERDC/CERL, 2902 Newmark Dr., Champaign, IL 61822, gregory.w.lyons@usace.army.mil)
Measurements of sound propagation in the atmosphere are often limited by wind noise, particularly at low frequencies. The spectrum
of wind noise at the ground surface, i.e., the turbulent static pressure, can be modeled by the dominant shear-turbulence mechanism within
the atmospheric boundary layer. This model assumes statistically stationary boundary-layer turbulence and, as in the atmospheric sciences, quasi-stationarity is hypothesized to justify the assumption. However, this hypothesis is not uniformly valid, due to the diurnal cycle
and large-scale atmospheric dynamics, and should be tested. This study applies several existing tests of nonstationarity to atmospheric
measurements from a field experiment near Laramie, Wyoming, and characterizes their relevance to simultaneous recordings of wind
noise by flush-mounted infrasound sensors. By estimating the parameters for a mirror flow model of boundary-layer turbulence from the
anemometer records, surface pressure model spectra are compared with the experimental wind noise. The performance of both the
atmospheric turbulence and wind noise models are assessed with respect to each nonstationarity test.
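The specific nonstationarity tests applied in the study are not named in the abstract; as a minimal illustration of the idea, a segment-wise variance-ratio check (the function name, segment count, and thresholds below are hypothetical) flags records whose statistics drift over the averaging window:

```python
import numpy as np

rng = np.random.default_rng(2)
# Toy records: one stationary, one with a slow variance trend
stationary = rng.standard_normal(8000)
trending = rng.standard_normal(8000) * np.linspace(1.0, 3.0, 8000)

def variance_ratio(x, n_seg=8):
    """Ratio of max to min segment variance; near 1 for stationary data."""
    segs = np.array_split(x, n_seg)
    v = np.array([np.var(s) for s in segs])
    return v.max() / v.min()

print(variance_ratio(stationary), variance_ratio(trending))
```

A record failing such a test would be excluded from (or windowed before) spectral averaging.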
2:20
4pPAa4. Broadband transmission loss in the audible regime with a 3D sonic crystal by use of resonance overlap. Jean-Philippe
Groby (LAUM, UMR 6613 CNRS, Av. Olivier Messiaen, Le Mans F-72085, France, Jean-Philippe.Groby@
univ-lemans.fr), Alexandre Lardeau (Laboratoire DRIVE-ISAT, F-58027 Nevers cedex, France), and Vicente Romero-García (LAUM,
UMR6613 CNRS, Le Mans cedex 9, France)
The acoustic properties of a three-dimensional sonic crystal made of square-rod rigid scatterers incorporating a periodic arrangement
of quarter wavelength resonators are theoretically and experimentally reported in this work. The periodicity of the system produces
Bragg band gaps that can be tuned in frequency by modifying the orientation of the square-rod scatterers with respect to the incident
wave. In addition, the quarter wavelength resonators introduce resonant band gaps that can be tuned by coupling neighboring resonators.
Bragg and resonant band gaps can overlap, allowing control of wave propagation inside the periodic resonant medium. In particular, we
theoretically and experimentally show that this system can produce a broad frequency band gap exceeding two and a half octaves (from
590 Hz to 3220 Hz) with transmission lower than 3%. Finite element methods were used to calculate the dispersion relation of the
locally resonant system. The visco-thermal losses were accounted for in the quarter wavelength resonators to simulate the wave propagation in the semi-infinite structures and to compare the numerical results with the experiments performed in an echo-free chamber. The
simulations and the experimental results are in good agreement. This work motivates interesting applications of this system as acoustic
audible filters.
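As a back-of-the-envelope check on the reported band-gap onset, the quarter-wavelength resonance f = c/(4L) can be evaluated directly; the resonator length below is hypothetical (the abstract does not state it), chosen to land near 590 Hz:

```python
# Quarter-wavelength resonance: f = c / (4 * L)
c = 343.0    # speed of sound in air, m/s
L = 0.145    # hypothetical resonator length, m
f = c / (4 * L)
print(round(f, 1))  # → 591.4 (Hz)
```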
Contributed Papers
2:40
4pPAa5. Green’s function retrieval by crosscorrelation and multidimensional deconvolution: Application to atmospheric acoustics. Max
Denis (RDRL-CIE-S, U.S. Army Res. Lab., 1 University Ave., Lowell, MA
01854, max_f_denis@hotmail.com), Sandra L. Collier, John Noble, W. C.
Kirkpatrick Alberts, David Ligon, Leng Sim, and Christian G. Reiff
(RDRL-CIE-S, U.S. Army Res. Lab., Adelphi, MD)
In this work, the Green’s function of an atmospheric acoustic propagation channel is extracted from experimental measurements. Of particular interest is the accuracy of Green’s function retrieval methods in ideal and
non-ideal situations. To this end, an emitting acoustic source (impulsive and
audible) is placed at a distance of more than 90 meters from receiving triaxis microphone arrays in open (ideal) and wooded areas (non-ideal) in
southern Maryland. Green’s function retrieval methods are employed and
investigated on the collected data. Green’s function retrieval by crosscorrelation has been successful in various applications despite the limitations of
its lossless medium and equipartitioned wavefield assumptions. To overcome the violation of these assumptions, the multidimensional deconvolution has been proposed. Comparisons of the results between impulsive and
audible sources using the two methods will be presented. The effects of multipath, source-receiver distance, temperature, and wind will be discussed.
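The contrast between the two retrieval methods can be sketched in a few lines; the synthetic signals, delay, and regularization constant below are illustrative only, not the experimental processing chain:

```python
import numpy as np

n = 4096
rng = np.random.default_rng(0)
a = rng.standard_normal(n)                     # "ambient" record at receiver A
true_delay = 25                                # inter-receiver traveltime, samples
b = np.roll(a, true_delay) + 0.1 * rng.standard_normal(n)  # record at receiver B

A, B = np.fft.rfft(a), np.fft.rfft(b)
# Green's function retrieval by crosscorrelation: peak lag ~ traveltime
xcorr = np.fft.irfft(B * np.conj(A), n)
# Deconvolution variant: dividing out the source spectrum (regularized)
decon = np.fft.irfft(B * np.conj(A) / (np.abs(A) ** 2 + 1e-3), n)

print(int(np.argmax(xcorr)), int(np.argmax(decon)))  # → 25 25
```

Deconvolution removes the source autocorrelation imprint that crosscorrelation leaves in place, which is one motivation for the multidimensional version discussed above.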
3:00
4pPAa6. An atmospheric turbulence model plugin for the NASA auralization framework. Aric R. Aumann (Sci. Applications Int. Corp., 2 North
Dryden St., Hampton, VA 23681, aric.r.aumann@nasa.gov) and Stephen A.
Rizzi (NASA Langley Res. Ctr., Hampton, VA)
Temporal and spatial fluctuations in wind and temperature affect sound
propagation, causing both amplitude and phase modulations, or scintillations.
The amplitude modulations can be heard, e.g., when listening to a distant aircraft. The phase modulations cause additional decorrelation between different
propagation paths. NASA has been developing a software framework for the
purposes of auralizing sound. The framework allows users to synthesize aircraft noise at a moving source and propagate it to a listener. This propagation
occurs in the time domain through the application of gain, time delay, and filters. The framework includes a plugin architecture for rapid integration of
new modules. A model has been developed to simulate the scintillation
effects of acoustic propagation through a turbulent atmosphere. Further, this
model has been implemented as a plugin to the NASA auralization framework. The turbulence model plugin generates a time-varying filter along each
propagation path that describes the frequency dependent amplitude and phase
modulations. The framework converts the filter descriptions to finite impulse
responses, which are appended to other propagation filters, including atmospheric absorption and ground impedance. Because real scintillations can be
heard, inclusion of the model is demonstrated to result in higher fidelity auralizations than those generated without the model.
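The plugin itself is not described in implementable detail; a toy sketch of the amplitude (log-amplitude) part of scintillation, with hypothetical fluctuation strength and control-point rate, is:

```python
import numpy as np

rng = np.random.default_rng(3)
fs = 8000
t = np.arange(0, 2.0, 1 / fs)
tone = np.sin(2 * np.pi * 440 * t)   # stand-in for a propagated signal

# Slowly varying Gaussian log-amplitude fluctuations (chi); sigma_chi and
# the number of control points are hypothetical tuning parameters
n_ctrl = 50
chi = rng.normal(0.0, 0.3, n_ctrl)
gain = np.interp(t, np.linspace(0, 2.0, n_ctrl), np.exp(chi))
out = gain * tone                    # audibly "shimmering" amplitude
print(out.shape == tone.shape)
```

A full implementation would make the modulation frequency-dependent (a time-varying filter per path), as the abstract describes.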
3:20–3:40 Break
3:40
4pPAa7. Detection and localization of impulsive sound events for environmental noise assessment. Peter W. Wessels (TNO, Oude Waalsdorperweg 63, The Hague 2597 AK, Netherlands, peter.wessels@tno.nl), Jeroen v.
Sande (TNO, Den Haag, Zuid-Holland, Netherlands), and Frits Van der Eerden (TNO, The Hague, Netherlands)
At military training areas, events are created that can be heard kilometers
away. The noise levels are subject to variation, mostly due to changes in
meteorological conditions. The variation of these events is of importance
for environmental noise assessment. Results are presented on the design of a
system that automatically detects and localizes these events. It is based on a
coarse grid of measurement units that detect relevant acoustic events within
the training area. The locations of the events are obtained with a time
difference of arrival method and detections from multiple measurement
units, including dynamic corrections for meteorological effects. The performance of the system is evaluated with a large set of augmented audio
files for different ranges. This set is based on a collection of source measurements of several heavy weapon types. A model that includes the attenuation
due to ground and the meteorology is used to extrapolate the audio to more
distant locations. The audio is also mixed with relevant background noise
for a broad range of wind speeds. With the obtained detection and localization information it is possible to estimate the source levels of events and to
extrapolate the measured sound levels to other locations of interest.
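A minimal time-difference-of-arrival localizer, ignoring the meteorological corrections described above, can be sketched as a grid search; the sensor layout, grid, and noise-free delays are illustrative assumptions:

```python
import numpy as np

c = 343.0                                   # nominal sound speed, m/s
sensors = np.array([[0.0, 0.0], [1000.0, 0.0],
                    [0.0, 1000.0], [1000.0, 1000.0]])
src = np.array([400.0, 250.0])              # hypothetical event location

# Noise-free arrival-time differences relative to sensor 0
d = np.linalg.norm(sensors - src, axis=1)
tdoa = (d - d[0]) / c

# Grid search: pick the point whose predicted TDOAs best match the data
xs, ys = np.meshgrid(np.linspace(0, 1000, 201), np.linspace(0, 1000, 201))
pts = np.stack([xs.ravel(), ys.ravel()], axis=1)
dist = np.linalg.norm(pts[:, None, :] - sensors[None, :, :], axis=2)
pred = (dist - dist[:, :1]) / c
err = np.sum((pred - tdoa) ** 2, axis=1)
best = pts[np.argmin(err)]
print(best)  # → [400. 250.]
```

In practice the delays are noisy and the effective sound speed varies with the meteorology, which is why the system applies dynamic corrections.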
4:00
4pPAa8. Numerical analysis of atmospheric sound propagation in the
littoral zone. Diego Turo (Mech. Eng., The Catholic Univ. of America, 620
Michigan Ave., N.E., Washington, DC 20064, diegoturo@gmail.com),
Teresa J. Ryan (Eng., East Carolina Univ., Greenville, NC), and Joseph F.
Vignola (Mech. Eng., The Catholic Univ. of America, Washington, DC)
The propagation of sound over long distances is subject to many effects
that are deemed negligible over short distances or in situations where idealized estimates are sufficient. This work describes a numerical model of
acoustic propagation in the littoral zone. The model must therefore include such
effects for the purpose of investigating transmission loss during long-range
propagation over the complex sea surface. Prior work by the investigators
has presented the basic approach, which is a Crank-Nicolson parabolic
equation that accounts for surface geometry, spreading loss, wind, and
ground impedance. That work evaluated moderate propagation distances
(up to 500 meters) and used a uniform temperature distribution. The current
work will investigate considerations required for adding a temperature gradient component to the numerical model and explore the sensitivity of transmission loss to temperature gradient.
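For readers unfamiliar with the Crank-Nicolson parabolic equation mentioned above, a bare-bones range march for the narrow-angle PE in a homogeneous medium can be sketched as follows; all grid parameters are hypothetical, and no surface geometry, wind, or impedance model is included:

```python
import numpy as np

# Narrow-angle Crank-Nicolson PE march, homogeneous medium (n = 1)
f, c0 = 100.0, 343.0
k0 = 2 * np.pi * f / c0
dz, dr, nz = 0.5, 1.0, 400
z = np.arange(nz) * dz

# Gaussian starting field centered at 100 m height
psi = np.exp(-((z - 100.0) ** 2) / (2 * 5.0 ** 2)).astype(complex)

# Second-difference operator with Dirichlet (field = 0) boundaries
D2 = (np.diag(np.full(nz - 1, 1.0), -1) - 2 * np.eye(nz)
      + np.diag(np.full(nz - 1, 1.0), 1)) / dz**2
A = 1j * dr / (4 * k0) * D2
# Cayley-form (Crank-Nicolson) one-step propagator
step = np.linalg.solve(np.eye(nz) - A, np.eye(nz) + A)

norm0 = np.linalg.norm(psi)
for _ in range(50):                 # march 50 range steps of dr = 1 m
    psi = step @ psi
print(round(np.linalg.norm(psi) / norm0, 6))  # → 1.0 (CN step is unitary here)
```

Adding a refraction term (a temperature- or wind-dependent index of refraction on the diagonal) is precisely where the temperature-gradient extension discussed above would enter.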
WEDNESDAY AFTERNOON, 28 JUNE 2017
ROOM 210, 1:20 P.M. TO 4:40 P.M.
Session 4pPAb
Physical Acoustics, Biomedical Acoustics, and Structural Acoustics and Vibration: Propagation in
Inhomogeneous Media II
Valerie J. Pinfield, Cochair
Chemical Engineering Department, Loughborough University, Loughborough LE11 3TU, United Kingdom
Olga Umnova, Cochair
University of Salford, The Crescent, Salford M5 4WT, United Kingdom
Josh R. Gladden, Cochair
Physics & NCPA, University of Mississippi, 108 Lewis Hall, University, MS 38677
Contributed Papers
1:20
4pPAb1. Finite element modeling of thermal and shear decay fields
around particles in liquids in an ultrasound field. Derek M. Forrester and
Valerie J. Pinfield (Chemical Eng., Loughborough Univ., Loughborough,
Loughborough LE11 3TU, United Kingdom, d.m.forrester@lboro.ac.uk)
Ultrasonic multiple scattering models of wave propagation through inhomogeneous particulate media frequently assume only acoustical interactions between particles, neglecting thermal and hydrodynamic (shear-mediated) interactions. This assumption is, however, only valid at low concentrations of particles where the particle separation distances are large.
Each particle has the propensity to have a short-ranged thermal or shear field
around it, caused by the acoustic field, and at moderate to high concentrations these can overlap. Our recent work has shown that, in these conditions
in the long acoustic wavelength regime, when the shear or thermal wavelengths are similar to the particle radius, the overlapping shear and thermal
fields can result in a significant reduction in effective attenuation. The effect
can be described as the conversion of compressional waves to shear/thermal
waves and back again at particle/liquid boundaries. Herein, we use finite
element modelling of single and groups of particles to show the thermal and
shear wave fields and the interactions of these fields that lead to the reconversion effect. Thus, we demonstrate the mechanisms of thermo-acoustic
and shear-acoustic interaction for various particulate inhomogeneous
media.
1:40
4pPAb2. Generation and measurement of shear waves in micellar fluids using ultrasound-based methods. Matthew W. Urban (Dept. of Radiology, Mayo Clinic College of Medicine and Sci., 200 First St. SW, Rochester, MN 55905, urban.matthew@mayo.edu), Carolina Amador (Dept. of Physiol. and Biomedical Eng., Mayo Clinic College of Medicine and Sci., Rochester, MN), Bruno Otilio (Siemens Healthineers, Pirituba, Brazil), and Randall R. Kinnick (Dept. of Physiol. and Biomedical Eng., Mayo Clinic College of Medicine and Sci., Rochester, MN)
Wormlike micellar fluids are viscoelastic and their mechanical properties have been evaluated with mechanically generated shear waves combined with optical detection as well as rheological testing. In this work, we describe the use of acoustic radiation force and ultrafast ultrasound imaging to generate shear waves and measure their propagation, respectively. This shear wave elastography (SWE) method has been used to measure the viscoelastic mechanical properties in tissue-mimicking phantoms and soft tissues. We tested micellar fluids made from cetrimonium bromide (CTAB) and sodium salicylate (NaSal) with a 5:3 ratio at different concentrations (100, 200, 300, and 400 mM). We fit an extended Maxwell model to the rheological test data (0.001–15.91 Hz) and used the model parameters to calculate the shear wave phase velocity in the range of the SWE data (100–500 Hz). The phase velocities calculated from the rheology testing compared well with the SWE results. The mean absolute error was less than 0.02 m/s for all the micellar fluids tested. The SWE method is nondestructive and can be used for characterization of the viscoelasticity of micellar fluids, which could be used as a model for biological tissues. [This work was supported in part by grant R01DK092255.]
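The phase-velocity calculation described in 4pPAb2 can be sketched for a single-element Maxwell model; the modulus, relaxation time, and density below are hypothetical placeholders, not the fitted values from the study:

```python
import numpy as np

rho = 1000.0   # density, kg/m^3 (water-like; illustrative)
G = 2.5e3      # Maxwell spring modulus, Pa (hypothetical)
tau = 1e-3     # relaxation time, s (hypothetical)

f = np.linspace(100.0, 500.0, 5)                   # SWE band from the abstract
w = 2 * np.pi * f
G_star = G * (1j * w * tau) / (1 + 1j * w * tau)   # complex shear modulus
k = w * np.sqrt(rho / G_star)                      # complex shear wavenumber
c_phase = w / np.real(k)                           # phase velocity, m/s
print(np.round(c_phase, 3))
```

For a Maxwell fluid the phase velocity rises with frequency toward the elastic limit sqrt(G/rho), which is the dispersion the rheology-to-SWE comparison exploits.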
Invited Paper
2:00
4pPAb3. What do ultrasound velocity and attenuation tell us about crystal nucleation? Malcolm J. Povey, Mel Holmes, and Fei
Sheng (School of Food Sci. and Nutrition, Univ. of Leeds, Leeds, West Yorkshire LS2 9JT, United Kingdom, m.j.w.povey@leeds.ac.uk)
Akulichev and Bulanov showed that a coupling between ultrasound and crystals in equilibrium with their melt would occur, leading
to high attenuation. This phenomenon is well established. However, what happens in circumstances such as in the induction period
where critical fluctuations occur and crystal embryos appear and disappear, i.e. under non-equilibrium circumstances? Evidence is presented that the mean solution adiabatic compressibility is altered by these transient phenomena, providing a unique insight into a poorly
understood and industrially crucial process step. Fluctuations in adiabatic compressibility observed as a result of the high measurement
rates available with high-frequency pulsed ultrasound provide further insight into the early stages of crystal growth phenomena. Later
on, when individual crystals begin to interact and sinter, a frame modulus emerges which necessitates a semi-phenomenological
interpretation.
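The quantity tracked in such measurements is connected to sound speed by kappa = 1/(rho c^2); for example, with textbook values for water near 25 °C:

```python
# Adiabatic compressibility from sound speed and density: kappa = 1/(rho c^2)
rho = 997.0     # kg/m^3
c = 1497.0      # m/s
kappa = 1.0 / (rho * c * c)
print(kappa)    # ~4.48e-10 Pa^-1
```

Transient changes in measured c therefore map directly onto the compressibility fluctuations attributed above to crystal embryos.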
2:20
4pPAb4. A quantitative ultrasound method to estimate dental implant stability. Romain Vayron (Multiscale Modeling and Simulation Lab., CNRS, Creteil, France), Vu-Hieu Nguyen (Multiscale Modeling and Simulation Lab., CNRS, Creteil, France), Romain Bosc (Universite Paris-Est Creteil, INSERM U955, IMRB, Creteil, France), and Guillaume Haiat (Multiscale Modeling and Simulation Lab., CNRS, Laboratoire MSMS, Faculte des Sci., UPEC, 61 Ave. du gal de Gaulle, Creteil 94010, France, guillaume.haiat@univ-paris-est.fr)
Dental implants are widely used clinically. However, implant failures, which may have dramatic consequences, still occur and remain difficult to anticipate. Accurate measurements of implant biomechanical stability are of interest since they could be used to improve the surgical strategy. The aim of this paper is to determine whether quantitative ultrasound can be used to determine dental implant stability. Firstly, an implant was completely inserted in the proximal part of a bovine humeral bone sample. The 10 MHz ultrasonic response of the implant was then measured and a quantitative indicator was derived. Then, the implant was unscrewed by 2π radians and the measurement was repeated. The value of the indicator significantly increases as a function of the number of rotations. The results show that bone quantity in contact with the implant has a significant influence on its ultrasonic response. Secondly, a 3D finite element model was employed and the geometrical configuration was assumed to be axisymmetric. The numerical results show that the implant ultrasonic response changes significantly when a liquid layer is located at the implant interface. This study paves the way for the development of an ultrasonic tool to be used for the monitoring of osseointegration.
2:40
4pPAb5. A numerical study on effect of evaporation and condensation of water vapor on acoustic oscillations of a gas parcel with a Lagrangian approach in a thermoacoustic prime-mover. Kyuichi Yasui and Noriya Izu (National Inst. of Adv. Industrial Sci. and Technol. (AIST), 2266-98 Anagahora, Shimoshidami, Moriyama-ku, Nagoya 463-8560, Japan, k.yasui@aist.go.jp)
Acoustic oscillations of a gas parcel in a wet stack of a thermoacoustic prime-mover are numerically simulated with a Lagrangian approach, taking into account the Rott equations and the effect of non-equilibrium evaporation and condensation of water vapor at the stack surface. A gas parcel gradually drifts to the higher or lower temperature side of a stack in traveling-wave and standing-wave prime-movers, respectively, owing to a shift of the phase angle of the particle velocity with the movement of a parcel according to the Rott equations, even without evaporation and condensation. This is an alternative form of acoustic streaming in a narrow tube with a temperature gradient along the tube. In a traveling-wave prime-mover, the volume oscillation amplitude of a gas parcel always increases due to evaporation and condensation, and the pV work done by a gas parcel is accordingly enhanced. On the other hand, in a standing-wave prime-mover, the volume oscillation amplitude sometimes decreases due to evaporation and condensation, and the pV work is suppressed. The presence of a tiny traveling-wave component, however, results in the enhancement of pV work by evaporation and condensation. In other words, the thermoacoustic effect is enhanced by evaporation and condensation under most conditions, which is consistent with experimental results.
Contributed Papers
3:00
4pPAb6. Acoustical properties of nanoporous activated carbon fibres. Hugo Karpinski, Olga Umnova (Univ. of Salford, The Crescent, Salford M5 4WT, United Kingdom, o.umnova@salford.ac.uk), Rodolfo Venegas (ENTPE, Univ. of Lyon, Lyon, France), Jonathan A. Hargreaves (Univ. of Salford, Salford, United Kingdom), and Mohamad Nahil (Univ. of Leeds, Leeds, United Kingdom)
This work continues a series of studies on the link between the microstructure of multiscale materials and their acoustical properties [1]-[3]. Granular activated carbons are excellent low frequency sound absorbers. Two factors contribute to this. (i) They have three scales of heterogeneities: millimetric grains and micrometric and nanometric inner-grain pores. (ii) The presence of sorption in nanometric pores leads to a decrease of the static bulk modulus and, consequently, of the effective low-frequency sound speed. Activated carbon felts also show promising low-frequency sound absorption but have a simpler microstructure. They do not contain inner-fibre micrometric pores, but still have inner-fibre nanometric pores. This, combined with a relatively regular fibre arrangement, makes them ideal for studying the effect of sorption on their acoustical properties. In this work, parameters describing the microstructure and sorption kinetics are measured independently for several types of felts with different levels of activation. The predictions of a microstructure-based model [3] are compared with measurements of sound absorption, characteristic impedance, and propagation constant. It is concluded that sound absorbers with controlled properties can be designed by varying the level of activation, fibre size, and porosity. References: [1] Venegas, Umnova, J. Acoust. Soc. Am. 2011. [2] Venegas, Umnova, J. Acoust. Soc. Am. 2016. [3] Venegas, Boutin, Wave Motion, 2017.
3:20–3:40 Break
3:40
4pPAb7. The influence of the synthesis parameters of nanoporous composite on the sound absorption coefficient. Ivana Ristanovic (Univ. of Belgrade, School of Elec. Eng., Bulevar Kralja Aleksandra 73, Belgrade 11000, Serbia, ivana.ristan@gmail.com), Aleksa Maricic (Univ. of Kragujevac, Faculty of Tech. Sci. Cacak, Cacak, Serbia), Dragana Sumarac Pavlovic, and Miomir Mijic (Univ. of Belgrade, School of Elec. Eng., Belgrade, Serbia)
Metal materials with a nanopore structure have multifunctional properties: they retain some properties of the metal and also possess the characteristics of porous structures, such as good sound absorption. This paper investigates the sound absorption characteristics of porous metal composites obtained by the mechanical mixing of polycrystalline powders. The mixture consists of (100−x)% Cu and x% Zn (1 ≤ x ≤ 5). During the sample preparation process, several parameters were varied: activation time in the planetary mill in the non-oxide environment, pressing duration of the activated powders, and sintering time of the samples at the specified sintering temperature. Depending on the activation time, nanopowders with different sizes of nanocrystals are obtained. The activated powders were pressed into disk-shaped samples of different thickness, with a diameter of 30 mm. The normal incidence sound absorption coefficient of the sintered samples was measured in an impedance tube. The obtained results showed that this material possesses very good absorption properties. These results lead to the conclusion that this kind of porous material may potentially be used for some specific absorber applications.
4:00
4pPAb8. Effect of a unidirectional compression on sound absorbing properties of porous materials. Lei Lei, Nicolas Dauchez, and Jean-Daniel Chazot (Laboratoire Roberval, UTC, Ctr. de Recherches de Royallieu, CS 60319, 60203 Compiegne Cedex, France, lionel0823@gmail.com)
The addition of acoustic shields in the engine bay has shown the potential to reduce powertrain exterior noise in the automotive industry. These shields are usually made of several layers of thermo-compressed porous materials. Some research concerning the effect of compression on the acoustic properties of porous materials can be found in the literature. The effect of compression has been taken into account in the Biot-Allard model by modifying the physical parameters (porosity, resistivity, tortuosity, characteristic lengths) as functions of the compression rate n, defined as the ratio between nominal and compressed thickness. However, these laws have been pointed out to be limited to low compression rates (n<2). In our work, the compression rate can reach 10 for a porous material. New analytical laws are derived from the literature for the above-mentioned physical parameters. The static thermal permeability is also given as a function of compression rate in this paper. These laws take into account microscopic geometric changes, in particular the angular distribution of the fibers for fibrous materials. A good agreement is obtained between direct measurements and the proposed laws for each parameter and for the absorption coefficient at normal incidence for a glass fiber and a melamine foam.
4:20
4pPAb9. Influence of porosity, fiber radius, and fiber orientation on anisotropic transport properties of random fiber structures. Hoang Tuan Luu (Groupe d’Acoustique de l’Universite de Sherbrooke (GAUS), Departement de Genie Mecanique, Universite de Sherbrooke, Sherbrooke, QC, Canada), Camille Perrot (Laboratoire Modelisation et Simulation Multi Echelle, MSME UMR 8208 CNRS, Universite Paris-Est Marne-la-Vallee, 5, Boulevard Descartes, Champs-sur-Marne 77454, France, camille.perrot@univ-paris-est.fr), and Raymond Panneton (Groupe d’Acoustique de l’Universite de Sherbrooke (GAUS), Departement de Genie Mecanique, Universite de Sherbrooke, Sherbrooke, QC, Canada)
The ability of fibrous media to mitigate sound waves is controlled by their transport properties, which are themselves greatly affected by the geometrical characteristics of their microstructure such as porosity, fiber radius, and fiber orientation. Here, the influence of these geometrical characteristics on the anisotropic transport properties of random fiber structures is investigated. First, representative elementary volumes (REVs) of random fiber structures are generated for different triplets of porosity, fiber radius, and fiber orientation. The fibers are allowed to overlap and are motionless (rigid-frame assumption). The fiber orientation is derived from a second order orientation tensor. Second, the transport equations are numerically solved on the REVs, which are seen as periodic unit cells (PUCs). These solutions yield the transport properties governing the sound propagation and dissipation in the respective fibrous media. These transport properties are the tortuosity, the viscous and thermal static permeabilities, and the viscous characteristic length. Finally, relations are proposed to estimate the transport properties and the thermal characteristic length when the geometry of the fiber structures is known.
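The second-order orientation tensor used in 4pPAb9 can be illustrated with synthetic fiber directions; the planar bias applied below is an arbitrary illustrative choice:

```python
import numpy as np

rng = np.random.default_rng(4)
# 10,000 synthetic unit fiber directions biased toward the x-y plane
n = 10000
v = rng.standard_normal((n, 3))
v[:, 2] *= 0.3                      # flatten the out-of-plane component
p = v / np.linalg.norm(v, axis=1, keepdims=True)

# Second-order orientation tensor a_ij = <p_i p_j>: trace 1, and its
# eigenvalues quantify the degree of fiber alignment
a_orient = (p[:, :, None] * p[:, None, :]).mean(axis=0)
print(np.round(np.diag(a_orient), 2))
```

A triplet of eigenvalues (1/3, 1/3, 1/3) corresponds to isotropy; strongly unequal eigenvalues signal the anisotropy that drives directional permeability and tortuosity.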
WEDNESDAY AFTERNOON, 28 JUNE 2017
ROOM 311, 1:15 P.M. TO 5:40 P.M.
Session 4pPPa
Psychological and Physiological Acoustics: Perceptual Weights and Cue Integration in Hearing: Loudness,
Binaural Hearing, Motion Perception, and Beyond
Daniel Oberfeld, Cochair
Experimental Psychology, Johannes Gutenberg-Universität Mainz, Wallstrasse 3, Mainz 55122, Germany
Bernhard U. Seeber, Cochair
Audio Information Processing, Technische Universität München, Arcisstrasse 21, Munich 80333, Germany
Virginia Richards, Cochair
Cognitive Sciences, Univ. of California, Irvine, Irvine, CA 92697
Chair’s Introduction—1:15
Invited Papers
1:20
4pPPa1. Effect of hearing loss on spectral weighting of broadband signals. Suyash N. Joshi (Hearing Systems Group, Tech. Univ. of
Denmark, Bldg. 352, Ørsteds Plads, Kgs. Lyngby 2800, Denmark, suyashnjoshi@gmail.com) and Walt
Jesteadt (Boys Town National Res. Hospital, Omaha, NE)
Several studies have shown that normal hearing (NH) listeners place greater weight on the edge frequencies of broadband sounds
when they are asked to pick the louder of two intervals in a sample discrimination task. However, little is known about the effects of
hearing loss on spectral weighting for broadband sounds. In a classic study of sample discrimination, Doherty and Lutfi (1996) found
that listeners with hearing loss (HL) placed greater weight on frequencies in the region of the HL and proposed that components that
were less audible were weighted higher. The current study investigated the effect of audibility and hearing loss on spectral weighting for
loudness judgments using two spectrally overlapping 18-tone complexes (208 to 8708 Hz). Four NH and nine HL listeners judged
the loudness of tone complexes in two conditions where the mean level of each tone in the complex was set to either 10 dB SL or 75 dB
SPL. The results show that listeners placed greater weight on the high-frequency edges of both tone complexes in the 10-dB SL condition
but not in the 75-dB SPL condition, suggesting that the HL listeners place less weight on portions of the spectrum near the threshold of audibility. [Work supported by NIH.]
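The weight-estimation logic behind such sample-discrimination studies can be sketched with a simulated listener; the trial count, band count, and "true" weights below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
n_trials, n_bands = 2000, 6
true_w = np.array([0.30, 0.10, 0.05, 0.05, 0.15, 0.35])  # edge-weighted

# Per-band level perturbations (dB) drawn independently on each trial
pert = rng.normal(0.0, 2.0, size=(n_trials, n_bands))
# Simulated "louder" decision: weighted sum of perturbations plus internal noise
decision = (pert @ true_w + rng.normal(0.0, 0.5, n_trials)) > 0

# Correlational weight estimate: point-biserial correlation per band,
# normalized to sum to one
est = np.array([np.corrcoef(pert[:, j], decision)[0, 1] for j in range(n_bands)])
est /= est.sum()
print(np.round(est, 2))
```

The recovered profile mirrors the edge-weighting pattern built into the simulated decision rule.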
1:40
4pPPa2. Towards the individualized estimation of the band importance function for Speech Intelligibility Index. Yi Shen (Speech
and Hearing Sci., Indiana Univ. Bloomington, 200 S Jordan Ave., Bloomington, IN 47405, shen2@indiana.edu)
Speech Intelligibility Index (SII, ANSI S3.5-1997) is a widely used tool for predicting speech recognition performance based on the
audibility of the speech stimulus across frequencies. The core of the SII model is the band importance function (BIF), which represents
the contributions of different frequency regions to speech understanding. Potential problems associated with the classic approach to
assess the BIF have recently been pointed out by a number of researchers, and novel psychophysical techniques have been developed
consequently. This presentation describes adaptive psychophysical procedures for the efficient estimation of the BIF. These procedures
estimate the spectral weights using either Bayesian or correlational approaches and optimize the stimulus presentation iteratively on a
trial-by-trial basis. Initial experimental verification of the procedures using both word and sentence materials will be presented.
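The role of the band importance function in an SII-style prediction reduces to a weighted sum of band audibilities; the weights and audibilities below are made-up illustrative numbers, not the ANSI S3.5-1997 tables:

```python
import numpy as np

# SII-style prediction: sum of band-importance weights times band audibility
importance = np.array([0.10, 0.15, 0.25, 0.25, 0.15, 0.10])  # sums to 1
audibility = np.array([1.0, 1.0, 0.8, 0.5, 0.2, 0.0])        # 0..1 per band
sii = float(importance @ audibility)
print(round(sii, 3))  # → 0.605
```

Individualizing the BIF, as proposed above, amounts to replacing the standard importance vector with listener-specific estimates.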
2:00
4pPPa3. Auditory bubbles: An application of relative weights to explore perceptual and neural representations of continuous
speech. Jonathan Venezia (Auditory Res., VA Loma Linda Healthcare System, 11201 Benton St., Loma Linda, CA 92357, jonathan.
venezia@va.gov)
Several labs have recently developed relative weights procedures for estimating crucial time-frequency regions of the speech spectrogram. Due to the large degree of acoustic variability in natural speech, these spectrogram-based methods work best with rather limited
sets of brief (e.g., syllable-length) utterances. We have developed a procedure that measures relative weights on the modulation power
spectrum (MPS), which is obtained from the 2D Fourier transform of the speech spectrogram. Modulation patterns are relatively stable
across utterances, which allows measurement of relative weights in the MPS domain using more ecologically-valid sets of continuous
utterances (e.g., sentences). Our procedure works by applying randomly-shaped filters to the MPS over many trials and relating the filter
patterns to an outcome measure using reverse correlation. Here, we estimate relative weights in the MPS domain for two outcome measures: keyword recognition/intelligibility (perceptual weights) and physiological activity measured with blood-oxygen-level-dependent
fMRI (neural weights). Perceptual weights indicate that a circumscribed region of the MPS supports intelligible speech perception. Patterns of neural weights differ across regions of the auditory cortex—namely, primary auditory areas encode a broad range of the MPS
while downstream regions converge toward a circumscribed representation that strongly resembles the pattern observed in perceptual
weights.
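The MPS computation described above (the 2D Fourier transform of the speech spectrogram) can be sketched as follows; this is an illustrative Python sketch using a synthetic stand-in signal and arbitrary analysis parameters, not the authors' code.

```python
import numpy as np
from scipy.signal import spectrogram

fs = 16000
t = np.arange(0, 1.0, 1 / fs)
# Stand-in "speech": a 200-Hz tone with 4-Hz amplitude modulation.
x = (1 + 0.8 * np.sin(2 * np.pi * 4 * t)) * np.sin(2 * np.pi * 200 * t)

# Log-magnitude spectrogram of the signal.
f, ts, S = spectrogram(x, fs=fs, nperseg=256, noverlap=192)
logS = np.log(S + 1e-12)

# MPS: power of the 2D FFT over (frequency, time) of the spectrogram.
mps = np.abs(np.fft.fftshift(np.fft.fft2(logS - logS.mean()))) ** 2

# Axes: temporal modulation (Hz) and spectral modulation (cycles/Hz).
temp_mod = np.fft.fftshift(np.fft.fftfreq(logS.shape[1], d=ts[1] - ts[0]))
spec_mod = np.fft.fftshift(np.fft.fftfreq(logS.shape[0], d=f[1] - f[0]))
print(mps.shape)
```

Randomly shaped filters applied in this (spectral modulation, temporal modulation) plane, related to trial outcomes by reverse correlation, yield the relative weights the abstract describes.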
2:20
4pPPa4. How pitch dynamically drives social judgments in speech. Emmanuel Ponsot (STMS Lab (Ircam / CNRS / UPMC), 1 Pl.
Igor Stravinsky, Paris 75004, France, ponsot@ircam.fr), Juan Jose Burred (Paris, France), Pascal Belin (Institut de NeuroSci. de la Timone, CNRS UMR 7289 et Aix-Marseille Université, Marseille, France), and Jean-Julien Aucouturier (STMS Lab (Ircam / CNRS /
UPMC), Paris, France)
Human listeners in social interaction possess the remarkable ability to continuously form high-level social representations from very
thin slices of behaviors, and notably from others’ voices. However, it turns out to be particularly challenging to design experiments that
allow uncovering mental representations of such complex dynamic auditory signals. Here we were able to show how voice’s pitch
dynamically drives judgements of social dominance and trustworthiness, two of the most salient traits in first-impression personality evaluation. We used a specifically developed voice-processing algorithm to create, from a few recordings of brief utterances such as the word
“hello,” thousands of novel voices presenting random intonations. These stimuli were then presented in several psychophysical experiments and reverse-correlation was deployed to assess the temporal processing of voice’s pitch underlying the judgments. We found that
both dominance and trustworthiness rely on dynamic, not static, pitch contour processes, and that these processes were strikingly similar
across both stimulus and observer gender. These results suggest that humans have developed a unique cross-gender dynamic code to go
beyond the dimorphic characteristic of the voice and be able to infer social traits from intonation. The present approach constitutes a
starting point for providing mechanistic accounts in populations with impaired emotion recognition.
2:40
4pPPa5. The use of accurate versus heuristic auditory and visual cues for time-to-collision judgments. Daniel Oberfeld (Experimental Psych., Johannes Gutenberg-Universität Mainz, Wallstrasse 3, Mainz 55122, Germany, oberfeld@uni-mainz.de), Patricia R.
DeLucia (Psychol. Sci., Texas Tech Univ., Lubbock, TX), Behrang Keshavarz, and Jennifer L. Campos (Dept. of Res., Toronto Rehabilitation Inst., Toronto, ON, Canada)
Estimating time-to-collision (TTC) is needed when pedestrians cross a road and a vehicle is approaching. An accurate auditory cue
to judge the TTC of a sound source approaching at constant velocity is provided by the ratio of an object’s instantaneous sound intensity
to its instantaneous rate of change in sound intensity (auditory τ). However, heuristic-based auditory cues might also be used, as suggested by research in vision. We presented auditory and visual simulations of approaching objects. Auditory and visual TTC cues were
decorrelated by slightly shifting auditory TTC against visual TTC. This permitted the estimation of cue weights for auditory cues (e.g.,
auditory τ, final sound pressure level) and visual cues (e.g., visual τ, final optical size) in three sensory conditions: auditory-only, visual-only, and audiovisual. Results showed that TTC estimates in the auditory-only condition were primarily based on an auditory heuristic cue (final sound pressure level) rather than on auditory τ. In the visual-only condition, visual τ was more important than the heuristic
cues. In the audiovisual condition, participants relied more strongly on visual cues than auditory cues. We discuss the need for more
refined auditory simulations to gain further insight into the cue weighting in everyday situations.
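The auditory τ cue can be checked with a short numerical sketch (assumptions mine: inverse-square-law intensity and constant approach velocity). Under those assumptions, τ = I/(dI/dt) equals half the remaining time-to-collision.

```python
import numpy as np

# A source approaches at constant velocity v and collides at time T.
# With inverse-square-law intensity, I(t) = k / r(t)**2 where
# r(t) = v * (T - t), so tau = I / (dI/dt) = (T - t) / 2, and the
# remaining time-to-collision is 2 * tau.
v, T, k = 10.0, 5.0, 1.0
t = np.linspace(0.0, 4.0, 4001)        # stop 1 s before impact
I = k / (v * (T - t)) ** 2

dIdt = np.gradient(I, t)
tau = I[1:-1] / dIdt[1:-1]             # interior points only
ttc_est = 2.0 * tau
ttc_true = T - t[1:-1]
print(float(np.max(np.abs(ttc_est - ttc_true))))  # tiny numerical error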
3:00–3:20 Break
3:20
4pPPa6. Consistent and inconsistent interaural cues don’t differ for tone detection but do differ for speech recognition. Frederick
J. Gallun (National Ctr. for Rehabilitative Auditory Res., VA Portland Health Care System, 3710 SW US Veterans Hospital Rd., Portland, OR 97239, Frederick.Gallun@va.gov), Rachel Ellinger (School of Commun., Northwestern Univ., Evanston, IL), and Kasey
Jakien (Dept. of Otolaryngology/Head and Neck Surgery, Oregon Health and Sci. Univ., Portland, OR)
In 1965, Dr. Colburn and his advisor, Dr. Durlach, whose wide and deep influence on the field of acoustics we honor in another Special Session at this meeting, conducted a seminal experiment on binaural hearing. This experiment showed that detection of a 500-Hz
tone in broadband noise is improved as much by a “consistent” arrangement, in which an interaural difference in time (ITD) favors the same ear as a simultaneously presented interaural difference in intensity (IID), as by an “inconsistent” arrangement, in which
ITD favors one ear and IID the other. In 2008, Gallun et al. extended these results by showing that consistent and inconsistent differences are equally effective for tone detection in multitone maskers, even though the source of masking is clearly different. Here we present
data from a speech recognition task performed in the presence of speech maskers where inconsistent interaural differences produced less
release from masking than did consistent cues. These results argue for the importance of a perceived location cue for speech recognition
tasks (at least of this type) that is not needed for tone detection.
3:40
4pPPa7. System identification of auditory processes using noise probes in isolation or embedded into soundscapes. Peter Neri
(Laboratoire des Systèmes Perceptifs, École Normale Supérieure, 29 rue d’Ulm, Paris 75005, France, neri.peter@gmail.com)
This talk will cover large experimental datasets spanning different levels of auditory analysis from low-level to higher-level, with
specific focus on two inter-related problems: (1) how to exploit perceptual weights and their derivatives to infer the structure of the
underlying auditory process; (2) how to transfer these tools from identification of isolated processes to situations where the process is
embedded within complex sounds (e.g., speech). I will highlight connections with recent developments in both the auditory and vision literatures, for example (in relation to points (1) and (2) above, respectively) a detailed understanding of how important nonlinearities are
reflected in identifiable modulations of perceptual weights, and construction of noise probes for smooth embedding within structured
contexts together with associated tasks specifically designed to meaningfully probe contextual phenomena. These developments demonstrate very substantial progress over the past two decades and offer exciting new directions for further characterization of human sensory processes via tight integration of experimental and computational tools.
4:00
4pPPa8. Acoustic-phonetic encoding in the human auditory cortex. Edward F. Chang (Neurosurgery, Univ. of California at San Francisco,
505 Parnassus Ave, M779, UCSF NEUROSURGERY, San Francisco, CA 94134, changed@neurosurg.ucsf.edu)
In this talk, I will address the functional organization of the human auditory cortex and its role in speech processing. I will describe
new evidence for global partitioning of the auditory cortex into domains relevant for encoding the temporal dynamics of connected
speech. I will discuss new evidence demonstrating the neural representations of prosodic intonation, phoneme, and speaker identity.
4:20
4pPPa9. Dual rate codes for the pitch of harmonic complex tones in the auditory midbrain. Bertrand Delgutte (Eaton-Peabody
Labs., Massachusetts Eye & Ear, Massachusetts Eye & Ear, 243 Charles St., Boston, MA 02114, Bertrand_Delgutte@meei.harvard.edu)
and Yaqing Su (Dept. of Biomedical Eng., Boston Univ., Boston, MA)
Harmonic complex tones contained in speech and music evoke a pitch at their fundamental frequency (F0). This pitch can be derived either from the pattern of harmonics individually resolved by cochlear frequency analysis or from the periodicity in the amplitude envelope that results from beating between neighboring harmonics. To investigate how these two pitch cues are coded in the auditory midbrain,
we recorded from single units in unanesthetized rabbits in response to harmonic complex tones with varying F0 so as to create conditions
where harmonics were likely resolved or unresolved. For F0 above 600 Hz, some neurons showed local maxima in firing rate when a
low-order (<6-8) harmonic of F0 coincided with the neuron’s best frequency (BF), thereby providing a rate-place code to resolved harmonics similar to that found in the auditory nerve. For F0 below 800 Hz, some neurons showed rate tuning to a particular F0 unrelated
to the BF. Altering the phase relationships among the harmonics showed that this nontonotopic rate code was sensitive to envelope repetition rate and therefore likely derived from temporal cues in the auditory nerve. Thus there are two complementary rate codes for distinct pitch cues, but neither of these codes provides a direct representation of pitch.
4:40
4pPPa10. Temporal weighting of azimuthal cues in the free field and in headphone listening. G. Christopher Stecker (Hearing and
Speech Sci., Vanderbilt Univ., 1215 21st Ave. South, Rm. 8310, Nashville, TN 37232, g.christopher.stecker@vanderbilt.edu), Ervin
Hafter (Psych., Univ. of California, Berkeley, CA), Andrew D. Brown (Physiol. & Biophys., Univ. of Colorado School of Medicine, Aurora, CO), and Travis M. Moore (Hearing and Speech Sci., Vanderbilt Univ., Nashville, TN)
Dynamic aspects of binaural cue processing can be captured via several types of psychophysical measurements in human listeners.
These include sensitivity measures with spatially dynamic stimuli, temporal integration measurements, and perceptual weighting measurements with stochastic spatial-cue variation. The latter approach has been adopted in a series of papers and ongoing studies. These
assessed the time course of binaural lateralization and azimuthal localization as a function of stimulus frequency, bandwidth, modulation
rate, temporal regularity, duration, envelope shape, and reverberation. The detailed temporal profile of perceptual weights, or “temporal
weighting function,” can be captured psychophysically and through Monte Carlo simulations based on computational binaural model
outputs. Among the results that will be reviewed in this talk are (1) demonstration of sparse temporal sampling of binaural cues at
moments of positive envelope slope, (2) impacts of cochlear interference on low-frequency binaural cues, (3) evidence for leaky temporal integration of binaural information in late-arriving sound, (4) effects of temporal irregularity on enhanced ongoing-cue sensitivity,
(5) distinct patterns in temporal weighting of interaural time and level differences, and (6) enhanced onset dominance in reverberation.
[Work supported by R01DC011548.]
5:00
4pPPa11. Spectral weighting of interaural time- and level differences for broadband signals. Bastian Epp (Elec. Eng. - Hearing
Systems group, Tech. Univ. of Denmark, Ørsteds Plads, Bldg. 352, Rm. 118, Lyngby 2800, Denmark,
bepp@elektro.dtu.dk), Axel Ahrens, and Suyash Narendra Joshi (Elec. Eng. - Hearing Systems group, Tech. Univ. of Denmark, Kgs.
Lyngby, Denmark)
An important ability of the auditory system is to localize sound sources in complex acoustical environments. Two important cues for
localization are interaural time- and level differences (ITD, ILD). The sensitivity to these cues differs across frequency and has previously been estimated through frequency-specific detection thresholds. Detection thresholds of ITDs/ILDs are, however, affected by stimulus energy in remote spectral regions, referred to as binaural interference. In this study, the spectral weights of ITD- and ILD cues in
the lateralization of a broadband signal were investigated using regression analysis. The stimuli consisted of eleven 1-ERB-wide noise
bands (442 Hz-5544 Hz) containing ITD or ILD cues. In experiment 1, ITDs or ILDs were applied to the noise bands and roved independently on every trial. In experiment 2, the noise bands centred at 442 Hz and 5544 Hz were removed to investigate the effect of stimulus bandwidth. In experiment 3, the same two noise bands were present, but contained uncorrelated noise, reducing the effective
bandwidth of binaural information. The results show that the cue bands with the lowest and highest centre frequencies received the highest weights, while the remaining bands were weighted approximately equally. This indicates that these edge frequency bands play an important role in lateralizing sounds.
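The regression analysis used to estimate spectral weights from roved cues can be sketched roughly as follows (illustrative Python; the simulated listener, rove magnitude, and noise level are invented for the example, not the study's parameters).

```python
import numpy as np

rng = np.random.default_rng(1)
n_bands, n_trials = 11, 3000

# Hypothetical listener: lateralization is a weighted sum of per-band
# ITDs, with the spectral edge bands weighted three times as strongly.
true_w = np.ones(n_bands)
true_w[0] = true_w[-1] = 3.0
true_w /= true_w.sum()

itd = rng.normal(0.0, 300.0, size=(n_trials, n_bands))   # per-trial ITD rove
response = itd @ true_w + rng.normal(0.0, 50.0, n_trials)

# Regression analysis: least-squares estimate of the per-band weights
# from the roved cues and the lateralization responses.
w_hat, *_ = np.linalg.lstsq(itd, response, rcond=None)
w_hat /= w_hat.sum()
print(np.round(w_hat, 2))
```

Independent roving of each band's cue is what makes the per-band weights identifiable by regression.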
5:20–5:40 Panel Discussion
WEDNESDAY AFTERNOON, 28 JUNE 2017
ROOM 304, 1:20 P.M. TO 4:20 P.M.
Session 4pPPb
Psychological and Physiological Acoustics: Physiology Meets Perception II
Tom Francart, Cochair
Neurosciences, KU Leuven, Herestraat 49 bus 721, Leuven 3000, Belgium
Jan Wouters, Cochair
Neurosciences, KU Leuven, Leuven 3000, Belgium
Invited Papers
1:20
4pPPb1. Neural encoding of vowel formant frequency in normal-hearing listeners. Mario Svirsky (Otolaryngology-HNS, New
York Univ., 550 First Ave., NBV-5E5, New York, NY 10010, mario.svirsky@nyumc.org), Jong-Ho Won (Speech & Hearing Sci.,
Univ. of Washington, Seattle, WA), Christopher G. Clinard (Commun. Sci. and Disord., James Madison Univ., Harrisonburg, VA),
Richard Wright (Linguist, Univ. of Washington, Seattle, WA), Elad Sagi (Otolaryngology-HNS, New York Univ., New York, NY), and
Kelly Tremblay (Speech & Hearing Sci., Univ. of Washington, Seattle, WA)
Physiological correlates of speech acoustics are particularly important to study in humans because it is uncertain whether animals
process speech the same way humans do. Studying the physiology of speech processing in humans, however, typically requires the use
of noninvasive physiological measures. This is what we attempted in a recent study (Won, Tremblay, Clinard, Wright, Sagi, and Svirsky,
JASA 2016) which examined the hypothesis that neural representations of formant frequencies may help predict vowel recognition. To
test the hypothesis, the frequency-following response (FFR) and vowel recognition were obtained from 38 normal-hearing listeners
using four different vowels. This allowed direct comparisons between behavioral and neural data in the same individuals. FFR was used
because it reflects temporal encoding of formant frequencies below about 1500 Hz. Four synthetic vowels with formant frequencies
below 1500 Hz were used. Duration was 70 ms for all vowels to eliminate temporal cues and to make identification more difficult. A
mathematical model (Sagi et al., JASA 2010) was used to predict vowel confusion matrices based on the neural responses. The mathematical model was successful in predicting good vs poor vowel identification performers based exclusively on physiological data.
1:40
4pPPb2. Envelope-following responses elicited by modified speech sounds for estimating temporal processing dysfunction. Steven
J. Aiken (School of Human Commun. Disord., Psych. and Surgery, Dalhousie Univ., 5850 College St., Box 15000, Halifax, NS B3H
4R2, Canada, steve.aiken@dal.ca), David Purcell (School of Commun. Sci. and Disord., Western Univ., London, ON, Canada), and Jian
Wang (School of Human Commun. Disord., Dalhousie Univ., Halifax, NS, Canada)
Speech is an ideal stimulus for eliciting envelope-following responses (EFR) given its inherent periodicities and biological importance. However, the speech EFR may reflect multiple aspects of temporal coding in the auditory nerve and brainstem driven by phase-locking to temporal fine-structure (TFS; carried predominantly by low-frequency speech harmonics) and the periodicity envelope (carried predominantly by unresolved high-frequency harmonics). This limits its utility as a measure of specific encoding deficits as a function of frequency. Multiple-fundamental frequency (multi-f0) speech sounds give rise to EFR related to narrow ranges of speech
harmonics and thus may allow for assessment that is more specific with respect to frequency and type of temporal coding. This study investigated the relationship between the EFR obtained from multi-f0 speech and individual differences in release from masking thought to
reflect poor TFS coding. Multi-f0 EFR was measured in adults across a wide age range with normal and near-normal hearing. Speech-in-noise thresholds were measured adaptively while manipulating talker f0 and spatial location using a virtual sound field. Results indicate that release from masking based on talker f0 is associated with the EFR elicited from only low-frequency speech harmonics, suggesting that speech-in-noise difficulties reflect distinct deficits in the encoding of TFS in the auditory periphery.
2:00
4pPPb3. Intracranial electrophysiology of human auditory and auditory-related cortex during speech categorization tasks. Kirill
V. Nourski (Dept. of Neurosurgery, The Univ. of Iowa, 200 Hawkins Dr., 1815 JCP, Iowa City, IA 52242, kirill-nourski@uiowa.edu),
Mitchell Steinschneider (Departments of Neurology and Neurosci., Albert Einstein College of Medicine, Bronx, NY), and Ariane E.
Rhone (Dept. of Neurosurgery, The Univ. of Iowa, Iowa City, IA)
Speech perception engages a large cortical network encompassing multiple regions within temporal and frontal lobes. This study
examined spatiotemporal dynamics of speech processing using direct electrophysiological recordings from patients undergoing invasive
monitoring for surgical treatment of refractory epilepsy. Words were presented in target detection tasks that required acoustic, phonemic
and semantic processing. High gamma (70-150 Hz) activity was recorded directly from Heschl’s gyrus (HG), superior, middle temporal
and supramarginal gyri (STG, MTG, SMG), and prefrontal cortex (PFC). Analysis focused on task effects (responses to non-target words
in a control tone detection task vs. semantic categorization tasks) and target effects (target vs. non-target words in a semantic task).
Responses within posteromedial HG (auditory core cortex) represented acoustic stimulus attributes, but did not show task or target
effects. Non-core auditory cortex (anterolateral HG and lateral STG) primarily exhibited sensitivity to task. Auditory-related areas
(MTG and SMG) and PFC showed both target and, to a lesser extent, task effects. Task and target effects were more prominent in the
language-dominant hemisphere. Findings support hierarchical organization of speech processing at the cortical level, wherein acoustic,
phonemic, and semantic processing are primarily subserved by core, non-core, and auditory-related cortex, respectively.
2:20
4pPPb4. Neural representations of restored acoustic rhythm in noise. Francisco Cervantes Constantino and Jonathan Z. Simon
(Univ. of Maryland, College Park, 2267 A.V. Williams, College Park, MD 20742, fcc@umd.edu)
Factors influencing the dynamics of auditory restoration, where acoustically missing information is nevertheless perceived, remain
unresolved despite their important contributions to sensory cognition. We present the case of perceptual restoration of acoustic rhythms
subject to masking and removal. Neurally, in addition to expected acoustically-driven cortical rhythms, cortical rhythms are also
observed in response to absent but expected acoustic rhythms, reflecting purely endogenous neural processes. Experimentally, brief
noise masker probes were added to a prolonged 5-Hz rhythmic pulse train, and in half those cases the underlying rhythm was also
removed. Listeners continually reported whether probes were perceived as rhythmic or not. Analysis of neural responses obtained by
magnetoencephalography (MEG) shows that for cases where an absent rhythm was nonetheless perceived as present, the responses contained greater evoked rhythmic power than when the absent rhythm was perceived as absent. Moreover, such percept-specific neural
modulations (at the target rate and others) predict behavioral sensitivity, suggesting that this neural variability in evoked rhythmic power
is directly tied to variability in perception. Given the relevance to human communication in noisy environments, it is proposed that sufficiently synchronized neural dynamics may underlie the subjective experience of a sound as modulated, even if unsupported by sensory
data.
2:40
4pPPb5. EEG in response to running speech: Relation with intelligibility. Tom Francart, Jonas Vanthornhout, Lien Decruy, and Jan
Wouters (NeuroSci., KU Leuven, Herestraat 49 bus 721, Leuven 3000, Belgium, tom.francart@med.kuleuven.be)
It has recently been shown that the envelope of a running speech signal can be decoded from the EEG signal. The correlation
between the decoded signal and the original envelope yields a measure of the acuity of envelope coding (g). As the speech envelope
is an important cue for speech intelligibility, we hypothesise that its neural representation is a prerequisite for speech intelligibility. So
when factors related to higher order speech processing are ruled out, g should be highly correlated with speech intelligibility. We measured the EEG evoked by running speech masked by speech spectrum weighted noise at a number of signal-to-noise ratios (SNR), and
(1) verified that g monotonically increased with SNR, and (2) compared a measure derived from g with behaviourally measured speech
intelligibility. This was done for normal- and impaired-hearing listeners. To investigate the method’s suitability for audiometry, we systematically varied the level of attention of the subject to the speech material, and found that the results were not strongly influenced by
the level of attention when the right parameters are chosen for the decoder. We additionally measured listening effort using a new dual
task paradigm and evaluated its effect on g.
3:00–3:20 Break
3:20
4pPPb6. Toward a cognitively controlled hearing aid. Torsten Dau (Ørsteds Plads., Bldg. 352, Kgs. Lyngby, 2800 Denmark, tdau@
elektro.dtu.dk) and Alain de Cheveigné (CNRS / ENS / UCL, 29 rue d’Ulm, Paris 75230, France)
The healthy auditory system can attend to weak sounds within complex acoustic scenes, a skill that degrades with aging and hearing
loss. Recent technology such as microphone array processing should alleviate such impairment, but its uptake is limited by the lack of
means to steer the processing towards one source among many. Within our auditory brain, efferent pathways put peripheral processing
stages under the control of central stages, and ideally we would like such cognitive control to extend to the external device. Recent progress in the field of Brain Computer Interfaces (BCI) and some promising attempts at joint decoding of streams of audio and ECoG,
EEG, or MEG suggest that such control might be possible. Is it? What scientific and technological hurdles need to be overcome to produce a “Cognitively Controlled Hearing Aid”? I will speak more specifically about our efforts to determine the reliability of EEG attention decoding in realistic acoustic scenes.
3:40
4pPPb7. Auditory attention detection: Application in neuro-steered hearing aids. Neetha Das, Simon Van Eyndhoven, Alexander
Bertrand, and Tom Francart (Dept of NeuroSci. and Dept of Elec. Eng., KU Leuven, ExpORL, Onderwijs en Navorsing 2, Herestraat
49, Leuven B-3000, Belgium, neetha.das@student.kuleuven.be)
State-of-the-art hearing prostheses are equipped with acoustic noise reduction algorithms to improve speech intelligibility. However,
cocktail party scenarios with multiple speakers pose a major challenge since it is difficult for the algorithm to determine which speaker
it should enhance. To address this problem, electroencephalography (EEG) signals can be used to perform auditory attention detection
(AAD), i.e., to detect which speaker the listener is attending to. Taking a step further towards realization of a neuro-steered hearing prosthesis, we worked on AAD-assisted noise suppression in a competing-speakers scenario in the presence of babble noise. We use an
EEG-informed AAD module in combination with a blind source separation algorithm to extract the per-speaker envelopes from the
microphone recordings, as well as a multi-channel Wiener filter to extract the denoised speech signal(s). Using a new algorithm pipeline,
we obtain better AAD accuracies, and a better robustness to variations in speaker positions and signal-to-noise ratios (SNR), compared
to previously reported results. Furthermore, the algorithm allows switching more swiftly to the other speaker’s stream when there is a switch in attention.
Contributed Paper
4:00
4pPPb8. Physiologically motivated individual loudness model for normal hearing and hearing impaired. Iko Pieper, Manfred Mauermann
(Medizinische Physik and Cluster of Excellence Hearing4All, Universität
Oldenburg, Carl-von-Ossietzky Straße 9-11, Oldenburg 26129, Germany,
iko.pieper@uni-oldenburg.de), Dirk Oetting (Project Group Hearing,
Speech and Audio Technol. of the Fraunhofer IDMT and Cluster of Excellence Hearing4all, Oldenburg, Germany), Birger Kollmeier, and Stephan D.
Ewert (Medizinische Physik and Cluster of Excellence Hearing4All, Universität Oldenburg, Oldenburg, Germany)
One consequence of sensorineural hearing loss is an altered loudness
perception with a typically steeper progression of loudness as a function of
stimulus level (loudness recruitment). Existing loudness models aim to
explain altered loudness functions in hearing-impaired (HI) listeners effectively by
means of an attenuation and compression component. Here the physiologically motivated loudness model of Pieper et al. [J. Acoust. Soc. Am. 139,
2896 (2016)], which simulates the nonlinear inner-ear mechanics (transmission-line model, TLM), is used and extended to help distinguish the roles of peripheral factors, such as damage to the outer hair cells (reduction of cochlear gain), and of higher stages of auditory processing in loudness perception.
Individual hearing thresholds were simulated by cochlear gain reduction in
the TLM and linear attenuation (damage of inner hair cells) prior to an internal threshold. Hearing threshold and cochlear gain loss were estimated from
individual loudness scaling data for narrowband noise. It was demonstrated
that existing loudness models fail to predict individual loudness functions
for HI. The current model showed better agreement with the data and
accounted for individual loudness functions in HI and normal hearing using
a linear weighting above the internal threshold (referred to as post gain).
WEDNESDAY AFTERNOON, 28 JUNE 2017
BALLROOM A, 1:20 P.M. TO 5:20 P.M.
Session 4pPPc
Psychological and Physiological Acoustics: Attention, Learning, Perception, and Physiology Potpourri
(Poster Session)
Elin Roverud, Chair
Dept of Speech, Language & Hearing Sciences, Boston University, 635 Commonwealth Avenue, Boston, MA 02215
All posters will be on display from 1:20 p.m. to 5:20 p.m. To allow contributors in this session to see the other posters, authors of
odd-numbered papers will be at their posters from 1:20 p.m. to 3:20 p.m. and authors of even-numbered papers will be at their posters
from 3:20 p.m. to 5:20 p.m.
Contributed Papers
4pPPc1. Neural correlates of modulation masking release: The role of
sound deprivation. Antje Ihlefeld, Matthew Ning, Sahil S. Chaubal, and
Nima Alamatsaz (Biomedical Eng., New Jersey Inst. of Technol., 323 Martin Luther King Blvd, Fenster Hall, Rm. 645, Newark, NJ 07102, ihlefeld@
njit.edu)
4pPPc2. Psychometric amplitude-modulation detection thresholds in
chinchillas before and after moderate noise exposure. Amanda C. Maulden, Michael K. Walls, and Michael G. Heinz (Speech, Lang., and Hearing
Sci., Purdue Univ., 715 Clinic Dr., West Lafayette, IN 47907, mkwalls@
purdue.edu)
It is well documented that for tone detection in background noise, normally-hearing (NH) listeners have better behavioral thresholds when that
noise is temporally modulated as compared to temporally unmodulated, a
perceptual phenomenon referred to as Modulation Masking Release
(MMR). However, hearing impaired listeners often do not show a dramatic
difference in performance across these two tasks. Behavioral evidence from
Mongolian gerbils (Meriones unguiculatus) with conductive hearing loss
(CHL) supports the idea that sound deprivation alone can reduce MMR.
Here, MMR was assessed in core auditory cortex in three NH animals, and
one animal with CHL. Trained, awake gerbils listened passively to a target
tone (1 kHz) embedded in modulated or unmodulated noise while a 16-channel chronically implanted microelectrode array recorded multi-unit
neural spike activity in core auditory cortex. Results reveal that rate code
correlates with behavioral thresholds at positive, but not negative, signal-to-noise ratios. The effect of sound deprivation on MMR will be discussed using a
Wilson-Cowan neural network model of cortical function. [Work supported
by NIH R03 DC014008.]
Recent findings from animal studies suggest that moderate-level acoustic overexposure can produce permanent cochlear synaptopathy while not
significantly affecting hearing thresholds in quiet. It has been hypothesized
that this hidden hearing loss may underlie difficulties some listeners have in
noisy situations even with normal audiograms. However, this hypothesis has
not been tested directly due to difficulties measuring behavior in animal
models for which cochlear synaptopathy has been demonstrated, and the
inability to measure cochlear synaptopathy directly in humans. We recently
established a mammalian model (chinchilla) that has corresponding neural
and behavioral amplitude-modulation (AM) detection thresholds in line
with human behavioral thresholds, and which shows cochlear synaptopathy
following moderate noise exposure. Here, behavioral AM-detection thresholds were measured in six chinchillas, before and after noise exposure, using
the method of constant stimuli. Animals were trained to discriminate a sinusoidal AM (SAM) tone (4-kHz carrier) from a pure tone, in the presence of
a notched-noise masker. Behavioral thresholds before noise exposure were
consistent across individual animals and were in the range from -25 to -15
dB. Preliminary data collected following noise exposure do not show a substantial effect on AM-detection thresholds in this simple task. [Work supported by NIH grant R01-DC009838.]
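Thresholds from the method of constant stimuli are typically obtained by fitting a psychometric function to proportion-correct data; a generic sketch follows (the data points and logistic form are invented for illustration, not the study's data).

```python
import numpy as np
from scipy.optimize import curve_fit

# Invented constant-stimuli data: proportion correct vs. modulation
# depth (dB re: 100% modulation) for a two-alternative AM-detection task.
depth = np.array([-30.0, -25.0, -20.0, -15.0, -10.0])
p_correct = np.array([0.52, 0.61, 0.78, 0.93, 0.98])

def psychometric(x, mu, sigma):
    """Logistic rising from chance (0.5) to 1.0; mu is the 75% point."""
    return 0.5 + 0.5 / (1.0 + np.exp(-(x - mu) / sigma))

(mu, sigma), _ = curve_fit(psychometric, depth, p_correct, p0=(-20.0, 3.0))
print(f"AM-detection threshold (75% correct): {mu:.1f} dB")
```

Comparing such fitted thresholds before and after exposure is one standard way to quantify a (lack of) noise-exposure effect.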
Understanding the physiological mechanisms that underlie the exquisite
frequency discrimination abilities of listeners remains a central problem in
auditory science. We describe a computational model of the cochlea and auditory nerve that was developed to evaluate the frequency analysis capabilities of a system in which the output of a basilar membrane filter, transduced
into a probability-of-firing function by an inner hair cell, is encoded on the
auditory nerve as the instantaneous sum of firings on a critical band of fibers
surrounding that filter channel and transmitted to the central nervous system
for narrow-band frequency analysis. Performance of the model on vowels
over a wide range of input levels was found to be robust and accurate, comparable to the Average Localized Synchronized Rate results of Young and
Sachs [J. Acoust. Soc. Am. 66, 1381–1403 (1979)]. Model performance in
perceptual threshold simulations was also evaluated. The model succeeded
in replicating psychophysical results reported in classic studies of critical
band masking.
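A minimal sketch of the population-coding step described above, with assumed parameters (fiber count, firing-probability scaling, and a toy 500-Hz filter output); it is not the authors' implementation:

```python
# Toy model: the critical-band signal sent centralward is the
# instantaneous sum of independent firings across a band of fibers.
import numpy as np

rng = np.random.default_rng(0)

def critical_band_sum(p_fire, n_fibers=50):
    """Sum of spikes across fibers; each fires with probability p_fire[t]
    in each time bin (a Bernoulli approximation to Poisson firing)."""
    spikes = rng.random((n_fibers, p_fire.size)) < p_fire
    return spikes.sum(axis=0)

# probability of firing from a half-wave-rectified filter output
t = np.arange(0, 0.01, 1 / 16000)
bm_output = np.sin(2 * np.pi * 500 * t)      # toy basilar-membrane filter
p = np.clip(bm_output, 0.0, None) * 0.2      # assumed scaling
population = critical_band_sum(p)
```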
4pPPc4. Examining the role of a medial olivocochlear reflex elicitor on
the attentional modulation of cochlear function. Jordan A. Beim, Andrew
J. Oxenham (Psych., Univ. of Minnesota, N218 Elliott Hall, 75 East River
Rd., Minneapolis, MN 55455, beimx004@umn.edu), and Magdalena Wojtczak (Psych., Univ. of Minnesota, New Brighton, MN)
Selective attention has been shown to modulate cortical and subcortical
neural representations of sound in the auditory systems of humans and
research animals. Neuroanatomy of the auditory system suggests that cortical
activity is capable of modulating cochlear responses to sound via corticofugal
projections to the medial olivocochlear (MOC) efferent system. Several
human studies using otoacoustic emissions (OAEs) suggest that selective
attention can modulate cochlear responses to sound, but results across studies
typically show small and inconsistent effects. Recent work in our laboratory
has demonstrated much larger effects of cross-modal selective attention on
cochlear processing than previously reported, likely due to improved methods
for measuring MOC effects. One unanswered question is whether selective
attention can modulate cochlear function directly, or whether it only modulates stimulus-elicited MOC activity. Our current study compares OAE magnitudes measured while participants attend to auditory or visual stimuli to
perform a behavioral task in the presence or absence of a MOC eliciting
noise. Preliminary results indicate that the presence or absence of the MOC
elicitor did not change the attentional effects on OAE magnitude.
4pPPc5. Visually-guided auditory adaptation and reference frame of
the ventriloquism after effect. Peter Loksa (Faculty of Sci., Pavol Jozef
Safarik Univ., Jesenna 5, Kosice 04001, Slovakia, peter.loksa@gmail.com)
and Norbert Kopco (Faculty of Sci., Pavol Jozef Safarik Univ., Kosice,
Slovakia)
Ventriloquism aftereffect (VA) is observed as a shift in the perceived
locations of auditory stimuli, induced by repeated presentation of audiovisual signals with incongruent locations of auditory and visual components.
Since the two modalities use different reference frames (RF), with audition head-centered (HC) and vision eye-centered (EC), the representations
have to be aligned. A previous study examining RF of VA found inconsistent results: the RF was a mixture of HC and EC for VA induced in the center of the audiovisual field, while it was predominantly HC for VA induced
in the periphery [Lin et al., JASA 121, 3095, 2007]. In addition, the study
found an adaptation in the auditory space representation even for congruent
AV stimuli in the periphery. Here, a computational model examines the origins of these effects. The model assumes that multiple stages of processing
interact: (1) the stage of auditory spatial representation (HC), (2) the stage
of saccadic eye responses (EC), and (3) some stage at which the representation is mixed (HC + EC). Observed results are most consistent with a suggestion that the neural representation underlying spatial auditory plasticity
incorporates both HC and EC auditory information, possibly at different
processing stages. [Work supported by VEGA-1/1011/16.]
4pPPc6. Figure/ground segregation enhanced by spatially directed
attention. Darrin K. Reed (Dept. of Biomedical Eng., Boston Univ., 677
Beacon St., Boston, MA 02215, dkreed@bu.edu), Brigitta Toth (Inst. of
Cognit. Neurosci. and Psych., Hungarian Acad. of Sci., Budapest, Hungary),
Maria Chait (Ear Inst., Univ. College London (UCL), London, United Kingdom), and Barbara Shinn-Cunningham (Dept. of Biomedical Eng., Boston
Univ., Boston, MA)
We recently reported that detecting a “figure” of repeated chords amidst
“background” tones was worse when the figure was spatially separated from
the background, a counterintuitive result given the published studies of spatial unmasking. Because of the way we blocked trials and provided instructions/feedback, listeners may have directed attention to the front where most
figures appeared. Here, we investigated whether spatial cues would improve
figure detection when listeners were instructed that spatial cues could aid
performance. Figure detection was recorded for a condition where both the
background and figure were presented diotically and for two binaural conditions. In one binaural condition, the figure occurred as a brief token at a lateral position separated from the background. In the other binaural condition,
the background was composed of both a diotic stream and a lateralized
stream, which contained the target (if present). Figure detection accuracy
was better for both binaural conditions than for the diotic condition. Accuracy was best for the binaural condition in which the background was diotic
and the figure was a brief lateralized token. These results reinforce the idea
that binaural cues can be used to direct attentional focus, enhancing detectability of a target object amidst competing sounds.
4pPPc7. An investigation of selective attention for roughness and loudness. Alison Tan (Cognit. Sci., Univ. of California Irvine, Dept. of Commun. Sci. and Disord., Univ. of Wisconsin, Madison, Wisconsin 53706,
alisonytan@gmail.com) and Bruce G. Berg (Cognit. Sci., Univ. of California Irvine, Irvine, CA)
A novel method for measuring selective attention to different perceptual
dimensions in a discrimination task is evaluated. The standard consists of
three equal-intensity tones centered at 1000 Hz with 10 Hz separations. The
signal is an intensity increment for the central tone. Discriminations can be
based on either differences in loudness or roughness. In three conditions, listeners are instructed to attend to roughness when the signal and standard are
adjusted to have the same energy, attend to roughness without the equal-energy constraint, or attend to loudness. Compliance with instructions is
assessed by adding small level perturbations to stimuli and estimating a set
of decision weights. Theoretically, attention to loudness yields three positive
weights with equal magnitude, whereas attention to roughness yields a positive weight for the central tone and negative weights with half the magnitude for the two side tones. Estimated weights in the loudness and equal-energy roughness conditions are consistent with model simulations. In the
roughness condition without the equal-energy constraint, the patterns of decision weights
are a linear combination of the two other sets, indicating that listeners have
difficulty in attending exclusively to the roughness cue in the presence of a
valid intensity cue.
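The perturbation analysis described above can be sketched as follows. The simulated "listener" is a linear observer with the theoretical roughness weights (+1 center, -1/2 sides); the trial count, perturbation size, and internal-noise level are assumptions for illustration.

```python
# Simulated decision-weight estimation: add per-tone level perturbations,
# record binary decisions, and regress decisions on the perturbations.
import numpy as np

rng = np.random.default_rng(1)
true_w = np.array([-0.5, 1.0, -0.5])              # roughness-listener weights
n_trials = 5000
perturb = rng.normal(0.0, 1.0, (n_trials, 3))     # per-tone perturbations (dB)
decision = (perturb @ true_w + rng.normal(0.0, 0.5, n_trials)) > 0

# linear regression of (mean-centered) decisions on the perturbations
w_hat, *_ = np.linalg.lstsq(perturb, decision - decision.mean(), rcond=None)
w_hat = w_hat / np.abs(w_hat).max()               # normalize to largest weight
```

The recovered weights are proportional to the observer's true weights, which is what licenses reading the regression coefficients as "decision weights."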
4pPPc8. Development of an open source audio processing platform.
William Audette, Odile Clavier (Creare LLC, 16 Great Hollow Rd., Hanover, NH 03755, wea@creare.com), Daniel Rasetshwane, Stephen T. Neely
(Boys Town National Res. Hospital, Omaha, NE), and Joel Murphy (Solutions Design & Prototyping, LLC, Brooklyn, NY)
The Open-Source software movement has demonstrated the value and
quality of open-source tools, and includes well-established systems such as the Android Operating System. The movement is now expanding to hardware; when designed properly, open-source hardware combines the user-friendliness and dependability of commercial products with the high performance and flexibility of scientific tools developed by researchers in their own laboratories. In this research, we present an entirely open-source hardware and software platform designed to spur innovation in sound- and speech-processing algorithms. The first prototype includes a Teensy 3.6 board, which
leverages the Arduino development environment while providing powerful
computational capabilities. This processor board is paired with custom-designed electronics for audio control, power management, and wireless
communication. The software base is designed for multiple users: (1)
4pPPc3. Multi-fiber coding on the auditory nerve and the origin of critical-band masking. Robert A. Houde (Ctr. for Communications Res., 35 Rensselaer Dr., Rochester, NY 14618, rahoude@gmail.com), James Hillenbrand (Western Michigan Univ., Kalamazoo, MI), Robert T. Gayvert (Ctr.
for Communications Res., Rochester, NY), and John F. Houde (Univ. of
California San Francisco, San Francisco, CA)
“experts” who optimize new algorithms directly in firmware; and (2)
researchers who interact with existing algorithms to modify parameters and
evaluate their performance in different conditions. Hearing aid research and
other audio signal processing fields will greatly benefit from the open sharing of algorithms on one common platform that is easily obtained from
existing vendors. Here we present the current functionality of the platform,
as well as how to access it and contribute to this exciting Open Source
project.
4pPPc9. Robustness to real-world background noise increases between
primary and non-primary human auditory cortex. Alexander J. Kell and
Josh McDermott (MIT, 43 Vassar St., Bldg 46-4078, Cambridge, MA
02139, alexkell@mit.edu)
In everyday listening, the sounds from sources we seek to understand
are frequently embedded in background noise, which often profoundly alters
auditory nerve spiking. To recognize sources of interest the brain must be
somewhat robust to the effects of these background noises. To study the
neural basis of listening in real-world background noise, we measured fMRI
responses in human auditory cortex to a diverse set of thirty natural sounds,
presented in quiet as well as embedded in thirty different everyday background noise textures (e.g., a bustling coffee shop, crickets chirping, etc.).
We quantified the noise-robustness of neural responses by correlating each
voxel’s response to the natural sounds in quiet with its response to those
same sounds superimposed on background noise. Responses in core regions
(commonly identified with primary auditory cortex) were substantially
affected by background noise. However, noise-robustness increased with
distance from primary auditory cortex: nearby non-primary areas were
slightly more robust, while more distal areas were hardly affected by the
background noises. Our results provide a neural correlate of the noise
robustness of real-world listening, and offer evidence of a hierarchical organization in human auditory cortex.
4pPPc10. Objective determination of backward masking. Silas Smith
and Al Yonovitz (The Univ. of Montana, Dept. of Communicative Sci. and
Disord., Missoula, MT 59812, al.yonovitz@umontana.edu)
Backward Masking (BM) functions have been shown to relate to age and lead toxicity, and are differentiated in children with language disorders.
These functions may be indicative of auditory processing deficits. This
study investigated if Evoked Potentials (EP) could be utilized to obtain BM
functions. The EP stimulus was a tonal stimulus followed by an inter-stimulus interval (ISI) and a noise masker; each component was also studied individually in the appropriate temporal alignment. ISIs of various durations were used to derive the BM function from middle and late auditory evoked potentials. This study randomly presented four different stimulus conditions: 1) tone alone, 2) noise alone, 3) tone and noise, and 4) silence. With a long inter-trial interval (1 s) and a high sample rate (31,250 Hz), EPs were obtained for 4000 trials. The stimuli were pure tones (1000 Hz, 10-ms duration, gated with a Blackman function) and noise bursts of varying intensity. Agreement was found between the behavioral and electrophysiological tasks. Results indicated that EPs could be arithmetically combined to observe the differential electrophysiological
responses and neurologic loci of evoked potentials during the BM effect.
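The abstract does not give the combination formula; one standard derivation (an assumption here) isolates the tone-noise interaction by subtracting the individual-stimulus responses from the combined response:

```python
# Interaction residual: zero whenever the tone and noise responses add
# linearly, so a nonzero residual reflects a BM-related interaction.
import numpy as np

def ep_interaction(r_tone_noise, r_tone, r_noise, r_silence):
    return r_tone_noise - r_tone - r_noise + r_silence

t = np.linspace(0.0, 0.3, 300)
r_tone = np.sin(2 * np.pi * 10 * t)        # toy evoked responses
r_noise = 0.5 * np.sin(2 * np.pi * 7 * t)
r_silence = np.zeros_like(t)

# linear case: responses superpose and the interaction term vanishes
residual = ep_interaction(r_tone + r_noise, r_tone, r_noise, r_silence)
```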
4pPPc11. Using visual cues to perceptually extract sonified data in collaborative, immersive big-data display systems. Wendy Lee, Samuel Chabot, and Jonas Braasch (School of Architecture, Rensselaer Polytechnic
Inst., 110 8th St., Troy, NY 12180, leew14@rpi.edu)
Recently, multi-modal presentation systems have gained much interest for studying big data with interactive user groups. One challenge for these systems is providing a venue for both personalized and shared information. In particular, sound fields containing parallel audio streams can distract users
from extracting necessary information. The way spatial information is processed in the brain allows humans to take complicated visuals and focus on
details or the whole. However, temporal information, which can be better
presented through audio, is processed differently, making dense sound environments difficult to segregate. In Rensselaer’s CRAIVE-Lab, sounds are
presented spatially using an array of 134 loudspeakers to address individual
participants who are working on analyzing data together. In this talk, we
3896
J. Acoust. Soc. Am., Vol. 141, No. 5, Pt. 2, May 2017
will present and discuss different methods to improve the ability of participants to focus on their designated audio streams using co-modulated visual
cues. In this scheme, the virtual reality space is combined with see-through,
augmented reality glasses to optimize the boundaries between personalized
and global information. [Work supported by NSF #1229391 and the Cognitive and Immersive Systems Laboratory (CISL).]
4pPPc12. Sound pressure distribution in natural or artificial human ear
canals in response to mechanical ossicular stimulation. Michael E. Ravicz, Jeffrey Tao Cheng, and John J. Rosowski (Eaton-Peabody Lab, Mass.
Eye & Ear Infirmary, 243 Charles St., Boston, MA 02114, mike_ravicz@
meei.harvard.edu)
Otoacoustic emissions (OAEs) in the ear canal (EC) are produced by tympanic membrane (TM) and ossicular motion driven by the cochlea. Measurement of OAEs is complicated by nonuniformities in EC sound pressure,
which may lead to misinterpretation. Human temporal bones were prepared
by removing most of the cartilaginous EC or replacing the bony EC with an
artificial EC. The incus was stimulated mechanically through the facial recess
by a small piezoelectric actuator to produce TM motion. Sound pressures
resulting from broadband stimuli were measured at ~70 locations along the
TM surface (Ptm), in the tympanic ring plane 4-6 mm distal to the umbo
(Pec), and along the EC axis. Ptm showed considerable transverse spatial variation in narrow frequency bands at frequencies as low as 5 kHz, in contrast to
the simpler distribution when sound enters the EC from its lateral end. These
transverse variations generally had dissipated at the tympanic ring plane. Longitudinal sound pressure variations along the EC axis were consistent with a
simple uniform tube model. The choice of the best location to measure OAEs
above a few kHz involves a tradeoff between longitudinal and transverse Pec
variations. [Funding: NIDCD R01 DC00194 and MEEI.]
4pPPc13. Effect of ossicular-joint flexibility on bone conduction hearing—A finite-element model analysis. Xiying Guan and Sunil Puria (Dept.
of Otolaryngol., Harvard Med. School; Massachusetts Eye and Ear, 243
Charles St., Boston, MA 02114, xiying_guan@meei.harvard.edu)
The mammalian ossicular chain contains three distinct bones typically
connected by synovial joints, forming a flexible coupling path
between the eardrum and cochlea. It is hypothesized that one role for ossicular joint flexibility is to reduce the ossicular input to the cochlea in response
to self-generated bone-conducted sounds, but without significantly affecting
the normal air-conduction pathway. This hypothesis is tested using a finite-element (FE) model of the human middle ear terminated in a cochlear load
impedance. The FE model of the middle ear with accurate representations
of the ossicles and ossicular-joint flexibility was developed from micro-CT
images. The model was validated using experimental data taken with air-conduction stimulation. Calculations of 3D ossicular motion and intracochlear sound pressure in response to bone-conduction stimulation from 0.1
to 20 kHz were performed with the natural ossicular chain as well as with
modifications in which one or both ossicular joints were fused. Comparison
between the normal and modified ossicular chains helps to clarify the function of the ossicular-joint flexibility in bone conduction hearing. [Work supported by NIDCD R01 DC05960 and Hearing Health Foundation Emerging
Research Grant 2016.]
4pPPc14. Motion of intracochlear structures measured with a commercial optical coherence tomography (OCT) system. Michael E. Ravicz,
Nam H. Cho, Nima Maftoon, and Sunil Puria (Eaton-Peabody Lab, Mass.
Eye & Ear Infirmary, 243 Charles St., Boston, MA 02114, mike_ravicz@
meei.harvard.edu)
Most knowledge of the motion of cochlear structures has been limited to
measurements through the round window at the extreme base of the cochlea
or through a hole made in the cochlear capsule, which can modify cochlear
mechanics. Optical coherence tomography (OCT) provides the ability to
measure shape or motion of structures through a thin layer of tissue or bone.
The motion of cochlear structures has been measured in the mouse cochlear
apex without making an opening into the cochlea, using a custom OCT system. Here we describe intracochlear vibrometry using a commercial OCT
system. Specimens were prepared by opening the middle ear while
4pPPc15. A non-traditional interpretation of cochlear mechanics of the
auditory system. Amitava Biswas (Speech and Hearing Sci., Univ. of
Southern MS, 118 College Dr. #5092, USM-CHS-SHS, Hattiesburg, MS
39406-0001, Amitava.Biswas@usm.edu)
Cochlear mechanics is known to be energized by motility of outer hair
cells. This study explored the significance of diametric changes of outer hair cells, which are usually neglected in cochlear mechanics. Several different mathematical models will be presented to show that diametric changes of outer hair cells can affect deflections of the basilar membrane in a very sensitive
pattern under certain naturally possible situations.
4pPPc16. Effect of adult aging on pupillary response to auditory stimuli
of varying levels of complexity. Eriko Atagi (Northeastern Univ., 226 FR,
360 Huntington Ave., Boston, MA 02467, e.atagi@northeastern.edu), Austin Luor, Max Bushmakin, and Arthur Wingfield (Brandeis Univ., Waltham,
MA)
Pupil dilation as a measure of listening effort has been well documented
(Kuchinsky et al., 2013), as has the association between adult aging and
increased effort in listening tasks (Wingfield et al., 2015). However, age-related differences in the dynamics of pupillary response over time while
attending to auditory stimuli have received less study; nor has the nature of
the pupillary response to simple stimuli (tones) versus more complex speech
stimuli (words, sentences) been systematically explored. In this study we
examined multiple parameters of young adult and older adult listeners’
changes in pupil size, including peak amplitude and latency to peak, elicited
while making decisions in response to auditory stimuli that varied in acoustic or linguistic complexity. Results showed that the latency to peak pupil
size was slower for older adults and for increasingly complex stimuli. The
shape of pupillary response curve also changed dramatically with the nature
of the stimuli: attending to briefer stimuli (tones, words) resulted in a more
peaked pupillary response curve than for sentence-length stimuli, which
took a more complex form. These results suggest that both adult aging and
stimulus complexity influence the dynamics of the pupillary response as an
index of processing effort for acoustic stimuli.
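Two of the pupillometry parameters named above can be computed as sketched below, on a simulated trace; the Gaussian-shaped pulse and the sampling grid are purely illustrative assumptions.

```python
# Peak amplitude relative to a pre-stimulus baseline, and latency to peak.
import numpy as np

def peak_params(trace, times, baseline_end=0.0):
    """Return (peak amplitude re: baseline, latency of the peak in s)."""
    baseline = trace[times < baseline_end].mean()
    i = int(np.argmax(trace))
    return trace[i] - baseline, times[i]

times = np.linspace(-0.5, 3.0, 701)              # 5-ms sampling grid
trace = np.exp(-((times - 1.2) ** 2) / 0.2)      # peak of 1.0 at 1.2 s
amp, lat = peak_params(trace, times)
```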
4pPPc17. Rippled spectrum discrimination in noise: Manifestation of
compression. Alexander Supin, Olga Milekhina, and Dmitry Nechaev
(Institute of Ecology and Evolution, 33 Leninsky Prospect, Moscow
119071, Russian Federation, alex_supin@mail.ru)
In psychophysical experiments, cochlear compression function can be
derived by comparison of on- and off-frequency masking, assuming that in
the signal representation, the responses to both the signal and on-frequency
masker are equally compressed whereas the response to the off-frequency
masker is not compressed. In the present study, this approach was used to
assess the influence of compression on discrimination of complex signal
spectra. The signal was rippled noise, 0.5-oct wide, centered at 2 kHz, 40 to
90 dB SPL. Ripple-density discrimination limit was measured using the ripple-phase reversal test. Simultaneous maskers were 0.5-oct wide noise centered either at the signal frequency (on-frequency) or 0.75 oct below the
signal (off-frequency). Increasing the masker level decreased the ripple-density discrimination limit. The growth of on-frequency masking was approximately 1:1. The growth of off-frequency masking was close to
1:1 at signal levels below 50 dB and 5:1 at signal levels above 60 dB SPL.
The results indicate compression of the ripple-pattern signal by approximately 1:5. The observed manifestation of compression implies that the rippled-spectrum discrimination is little susceptible to lateral suppression and
off-frequency listening. [Work supported by Russian Science Foundation.]
4pPPc18. Lossy compression of uninformative stimuli in the auditory
system. Wiktor Mlynarski and Josh McDermott (Brain and Cognit. Sci.,
Massachusetts Inst. of Technol., 77 Massachusetts Ave. 46-4078, Cambridge, MA 02139, mlynar@mit.edu)
Despite its temporal precision, the auditory system does not encode fine
detail of some classes of natural sounds. For example, sounds known as
“auditory textures” seem to be encoded and retained with a lossy, compressed representation consisting of time-averaged statistics. One explanation is that the auditory system compresses stimuli that exceed its
informational bandwidth. Decreased sensitivity to temporal detail of sound
would reflect a limit of the auditory system to transmit sensory information
above a certain rate. Here we instead propose a normative explanation. We
assume that to minimize energy expenditure, the auditory system compresses stimuli that do not carry novel information about the environment.
We developed practical measures of stimulus coding cost (the number of
simulated auditory nerve spikes required to encode the sound) and stationarity (degree of change to the sound spectrum across successive time windows). We found that coding cost is not predictive of the ability to
discriminate exemplars of a sound. In a second experiment we found that
human listeners are sensitive to temporal detail of sounds with high coding
cost provided they are unexpected. Our results are consistent with the hypothesis that perceptual compression of auditory textures is a manifestation
of an adaptive coding strategy.
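The stationarity measure can be sketched as below; this windowed-spectrum implementation is an assumed operationalization of "degree of change to the sound spectrum across successive time windows," not the authors' exact code.

```python
# Mean magnitude-spectrum change across successive analysis windows.
import numpy as np

def windowed_spectra(x, win=256):
    n = len(x) // win
    frames = x[: n * win].reshape(n, win)
    return np.abs(np.fft.rfft(frames * np.hanning(win), axis=1))

def spectral_change(x, win=256):
    """Near zero for a spectrally stationary signal."""
    s = windowed_spectra(x, win)
    return float(np.mean(np.abs(np.diff(s, axis=0))))

n = 8192
tone = np.sin(2 * np.pi * 0.05 * np.arange(n))              # fixed spectrum
sweep = np.sin(2 * np.pi * np.cumsum(np.linspace(0.01, 0.2, n)))
```

A steady tone scores near zero on this measure, while a frequency sweep, whose spectrum shifts from window to window, scores much higher.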
4pPPc19. Cochlear gain reduction in listeners with borderline normal
quiet thresholds. Elizabeth A. Strickland, Alexis Holt, and Hayley Morris
(Speech, Lang., and Hearing Sci., Purdue Univ., 715 Clinic Dr., West Lafayette, IN 47907, estrick@purdue.edu)
The medial olivocochlear reflex (MOCR) decreases the gain of the cochlear active process in response to sound. Evidence of the MOCR has been
measured in humans using physiologic and psychoacoustic techniques. We
have focused on psychoacoustic techniques, which convey the behavioral
effects of gain reduction. We have modified forward masking paradigms
understood to measure frequency selectivity and the input/output function at
the level of the cochlea so that the stimuli (masker and signal) should be too
short to evoke the MOCR. With this paradigm, a longer sound (precursor) is
presented before these stimuli to evoke the MOCR. The present study examines the relationship between quiet threshold and the magnitude of the
threshold shift (interpreted as gain reduction) produced by a pink noise precursor, as a function of precursor level. Gain reduction was measured at 2,
4, and 8 kHz in listeners with clinically normal quiet thresholds who had
borderline normal quiet thresholds and low OAEs at one or more of these
frequencies. Maximum gain was estimated by measuring the threshold
masker level for a masker at the signal frequency and a masker well below
the signal frequency. The relationship between quiet threshold, maximum
gain, and gain reduction will be discussed.
4pPPc20. Nonlinear response of human middle ear to high level sound.
Jeffrey Tao Cheng, Aaron Remenschneider, Elliott Kozin (Massachusetts
Eye and Ear Infirmary, Harvard Med. School, Eaton-Peabody Lab., 243
Charles St., Boston, MA 02114, tao_cheng@meei.harvard.edu), Cosme Furlong (Worcester Polytechnic Inst., Worcester, MA), and John J.
Rosowski (Massachusetts Eye and Ear Infirmary, Harvard Med. School,
Boston, MA)
The human middle ear functions as an acoustic transformer that converts
environmental sound energy into mechanical drive to stimulate the cochlea
for hearing. This transformation has been demonstrated to be linear at sound
levels at least as high as 125 dB. While this behavior can be expected to
break down at some higher level, that level has not been demonstrated. In
this study, we use cadaveric human ears to study middle ear nonlinear
responses to high levels of sound. Tones between 200 and 10000 Hz with
levels as high as 160 dB SPL are generated by a customized horn speaker
and delivered to the ear canal opening. Sound pressure levels near the tympanic membrane surface are monitored by a calibrated pressure sensor.
Simultaneously, Laser Doppler Vibrometry is used to record sound induced
vibrations of the umbo, in the center of the tympanic membrane, and the
stapes, at the interface of the middle and inner ear. These data allow us to
calculate the middle ear transfer function: the ratio of the stapes to umbo
maintaining part of the cartilaginous ear canal. A Thorlabs 905-nm Ganymede III-HR OCT system with a 100-kHz camera frame rate was used to
measure cochlear anatomy in a 2-D radial slice (B-scan) and dynamic displacements along a line (A-line) that intersected several cochlear structures
in response to tones presented to the ear canal. Differences in the magnitude
and phase of the displacements along the A-line show differences in the
motion of cochlear structures. These data will provide information for validating 3D finite element models of the cochlea. [Work supported by NIDCD
R01 DC07910.]
vibration. We use these measurements to determine the threshold
sound pressure above which the human middle ear response is no longer
linear.
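The transfer-function computation described here can be sketched as follows, using a simulated single-tone measurement in place of the laser data (the amplitudes and phase below are arbitrary assumptions):

```python
# Stapes/umbo transfer function at one stimulus frequency, read from the
# corresponding FFT bin of each velocity signal.
import numpy as np

fs, f0, n = 48000, 1000, 4800                     # integer number of cycles
t = np.arange(n) / fs
umbo = 1.00 * np.sin(2 * np.pi * f0 * t)          # toy umbo velocity
stapes = 0.25 * np.sin(2 * np.pi * f0 * t - 0.6)  # attenuated, delayed

k = f0 * n // fs                                  # FFT bin of the tone
U = np.fft.rfft(umbo)[k]
S = np.fft.rfft(stapes)[k]
H = S / U                                         # complex transfer function
magnitude, phase = abs(H), np.angle(H)
```

Departures of this ratio from its low-level value as the stimulus level rises would mark the onset of nonlinearity.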
4pPPc21. The decodability of sound sources and categories from their
acoustic representations as they develop over time. Mattson Ogg and L.
Robert Slevc (Neurosci. and Cognit. Sci., Univ. of Maryland, College Park,
Biology/Psych. Bldg., Rm. 3150, 4094 Campus Dr., College Park, MD
20742, mogg@umd.edu)
The temporal dynamics of sound impose constraints on how listeners
can so rapidly and effectively identify objects in their environment. However, it is unclear what physical features distinguish acoustic sources, and
how the roles of those features change as a sound unfolds. Potential differences between the optimal acoustic features for distinguishing sounds and
those used by listeners could reveal new insights into how the auditory system functions and which aspects of sound it prioritizes. Thus, we investigated 216 high-quality sound tokens decomposed into a set of acoustic features derived from an ERB filter bank in 5-millisecond increments. Support vector machine classifiers were iteratively trained and tested on sound
categories (instrument, speech, environmental) and sound sources within
those categories (e.g. instruments, speakers) at each time point. Decoding
analyses were conducted on acoustic features (e.g. spectral centroid, aperiodicity) and the raw ERB filter output. Categories were decoded better using
high level acoustic features, whereas individual exemplars were decoded
better from the filter bank output, although these patterns varied over time.
By examining how the classifiers weight individual features, these data
reveal the relative contributions of specific acoustic features to sound discrimination as a function of time.
4pPPc22. Auditory perception of object properties as inverse acoustics.
Maddie Cusimano, James Traer, and Josh McDermott (Brain and Cognit.
Sci., MIT, 43 Vassar St., Cambridge, MA 02139, mcusi@mit.edu)
Perception relies on regularities in sensory data that are caused by physical laws. In audition, physical laws governing object interactions constrain
the structure of sounds. However, it remains unclear to what extent these
constraints have been internalized by the brain to support auditory inferences about the world. To investigate whether physical constraints are modeled by the auditory system, we developed a Bayesian ideal observer for a
physics-inspired generative model of impact sounds, and tested whether its
judgments of the mass of a ball falling onto a board accorded with those of
human listeners. To generate audio with the model, the time-varying impact
force was convolved with measured impulse responses of the ball and board.
The force varied parametrically with mass to simulate impacts involving
balls of different masses. This relationship was fitted to measurements of a
set of wooden balls with equal hardness. Inference was performed by Markov chain Monte Carlo sampling. The model accurately predicted most
human judgments in a two alternative forced choice task using recorded
audio. However, the model underestimated mass when the ball material was
harder than wood. The results suggest that additional physical parameters
(e.g., hardness) must be modeled to account for human perception.
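The generative step described above can be sketched as a convolution of a mass-dependent force pulse with an impulse response; the raised-cosine pulse shape and the mass scaling below are illustrative assumptions, not the authors' fitted relationship.

```python
# Toy impact synthesis: heavier ball -> larger, longer force pulse.
import numpy as np

def impact_sound(mass, ir, fs=16000):
    """Convolve a raised-cosine force pulse with the board/ball IR."""
    dur = 0.002 * mass ** 0.3                  # assumed contact-time law
    n = max(int(dur * fs), 2)
    force = mass * 0.5 * (1.0 - np.cos(2 * np.pi * np.arange(n) / n))
    return np.convolve(force, ir)

rng = np.random.default_rng(0)
ir = rng.normal(size=2000) * np.exp(-np.arange(2000) / 300.0)  # toy decaying IR
light = impact_sound(0.1, ir)
heavy = impact_sound(1.0, ir)
```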
4pPPc23. Increased reaction time adherence to response criterion in a
sample discrimination task. Bruce G. Berg and Michael Bellato (Cognit.
Sci., Univ. of California, Irvine, 2201 SBSG, Irvine, CA 92697-5100,
bgberg@uci.edu)
In a signal detection task with equal-variance distributions and an
unbiased criterion, the distribution of reaction times typically has a peak
near the point of overlap between the two distributions. A sample discrimination task is used to determine whether slower reaction times are associated with a likelihood ratio near unity or to the placement of the decision
criterion. Listeners decide whether a single presented tone was sampled
from a normal distribution with a mean of 1000 Hz or a mean of 1100 Hz,
with each distribution having a standard deviation of 100 Hz. Base rates are
varied across conditions so that the optimal decision criterion corresponds
to respective likelihood ratios of 0.25, 0.67, 1, 1.5, or 4. In all conditions, the
peak of the reaction time distribution as a function of frequency is near the
optimal criterion for maximizing percent correct. In conclusion, slower
reaction times are found when the frequency of the tone is near the selected
criterion rather than near a point where stimuli from the two distributions
are most similar. The results suggest that increases in reaction time are associated with the decision stage rather than an earlier perceptual processing
stage.
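A worked check of the criterion geometry described above: for two equal-variance Gaussians with means 1000 and 1100 Hz and SD 100 Hz, the likelihood ratio equals 1 exactly midway between the means (1050 Hz), and unequal base rates shift the optimal criterion away from that point.

```python
# Likelihood ratio of the two frequency distributions at frequency f.
import math

def likelihood_ratio(f, mu0=1000.0, mu1=1100.0, sd=100.0):
    """p(f | 1100-Hz distribution) / p(f | 1000-Hz distribution)."""
    p0 = math.exp(-((f - mu0) ** 2) / (2 * sd ** 2))
    p1 = math.exp(-((f - mu1) ** 2) / (2 * sd ** 2))
    return p1 / p0

# With base rates, the optimal rule responds "1100 Hz" whenever the
# ratio exceeds P(1000-Hz distribution) / P(1100-Hz distribution).
```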
4pPPc24. Exponential modeling of frequency-following responses in
American neonates and adults. Kristin M. Stump, Fuh-Cherng Jeng, and
Brandie Nance (Commun. Sci. and Disord., Ohio Univ., 1 Ohio University
Dr., Grover Ctr. W141A, Athens, OH 45701, ks276909@ohio.edu)
Frequency-following response (FFR) has been widely used to assess the
mechanisms of speech processing for speakers of tonal and non-tonal languages. For Chinese speaking adults, the characteristics of speech processing with increasing number of sweeps have been described with an
exponential curve-fitting model; however, these characteristics for non-tonal
language speakers have yet to be described in a similar manner. This study
examined the characteristics of speech processing for both adults and neonates who speak a non-tonal language, to determine the goodness-of-fit of
an exponential model to neonatal and adult FFRs, and compared the results
between groups to determine if any differences exist. Twelve American neonates and 12 American adults were recruited for this study. Participants
were native English speakers. The FFR was elicited using the English vowel
/i/ with a rising pitch contour for a total of 8000 sweeps from each participant. From the three indices (Frequency Error, Tracking Accuracy and Pitch
Strength) computed, the FFR trends were fit to an exponential curve-fitting
model to estimate the frequency tracking acuity and neural phase-locking
magnitude as the number of sweeps in the averaged waveform increased.
Significant differences between groups were found for the objective indices.
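The abstract does not give the exact form of the exponential model; a common choice for such saturating trends is y = A(1 − exp(−n/τ)), whose time constant τ can be recovered by log-linearizing the data. The sketch below is a hypothetical illustration (function names, parameters, and data are ours, not the study's):

```python
import math

def fit_tau(sweeps, values, asymptote):
    """Estimate tau in y = A * (1 - exp(-n / tau)) by least-squares
    regression of log(1 - y / A) on sweep count n (line through origin)."""
    xs, ys = [], []
    for n, y in zip(sweeps, values):
        residual = 1.0 - y / asymptote
        if residual > 0:                  # guard against log of a non-positive value
            xs.append(n)
            ys.append(math.log(residual))
    slope = sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)
    return -1.0 / slope

# synthetic trend: an FFR index saturating as more sweeps are averaged in
A, tau = 1.0, 2000.0
sweeps = [500, 1000, 2000, 4000, 8000]
values = [A * (1 - math.exp(-n / tau)) for n in sweeps]
print(round(fit_tau(sweeps, values, A), 1))
```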
4pPPc25. Envelope following responses to Schroeder-phase harmonic
stimuli. Ganesh Attigodu Chandrashekara, Steven J. Aiken, and Jian Wang
(School of Human Commun. Disord., Dalhousie Univ., Sir Charles Tupper
Medical Bldg. 5850 College St. 2nd Fl., Halifax, NS B3H 4R2, Canada,
ganesh.attigoduchandrashekara@dal.ca)
Recent studies suggest that peripheral nonlinearities only partially
account for the envelope-following response (EFR) recorded at the scalp,
with additional contributions from broadly tuned neurons in the central
nervous system. The present study investigated this by measuring EFR to
low and high-frequency stimuli designed to (1) optimize within-channel
phase synchrony to produce maximal envelope modulation depth on the basilar membrane, or (2) optimize phase synchrony across multiple frequency
channels to maximize envelope-related activity arising in the central nervous system. EFR was recorded in normal-hearing listeners using Schroeder-phase stimuli with harmonic components constrained to low and high-frequency ranges, as well as to multi-frequency sinusoidally amplitude-modulated
tones as a function of component phase. Estimates of basilar membrane
phase curvature varied across individuals. Response amplitudes and phase-locking values for the EFR varied as a function of carrier frequency range
and component phase. The present results are consistent with an understanding of scalp-recorded EFR as a measure of central envelope processing
extending beyond the initial encoding of the stimulus envelope arising in
the periphery. The scalp-recorded EFR is thus not a direct index of envelope
encoding in the auditory nerve.
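Schroeder-phase complexes set each component's phase to a quadratic function of harmonic number; one common form is φ_n = ±π n(n+1)/N, which yields a low crest factor. The sketch below builds such a complex (all parameter values are illustrative, not taken from the abstract):

```python
import math

def schroeder_complex(n_harmonics, f0, fs, dur, sign=1):
    """Harmonic complex with quadratic (Schroeder) component phases,
    phi_n = sign * pi * n * (n + 1) / N, normalized by component count."""
    N = n_harmonics
    out = []
    for i in range(int(fs * dur)):
        t = i / fs
        s = sum(math.cos(2 * math.pi * n * f0 * t + sign * math.pi * n * (n + 1) / N)
                for n in range(1, N + 1))
        out.append(s / N)
    return out

sig = schroeder_complex(n_harmonics=20, f0=100.0, fs=8000.0, dur=0.05)
print(len(sig), round(max(abs(x) for x in sig), 2))
```

Flipping `sign` reverses the direction of the within-period frequency sweep, which is what makes positive- and negative-Schroeder stimuli useful probes of basilar-membrane phase curvature.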
4pPPc26. Ripple-spectrum resolution in humans: Evoked-potential
study. Dmitry Nechaev and Evgeniya Sysueva (Inst. of Ecology and Evolution, 33 Leninsky Prospect, Moscow 119071, Russian Federation, dm.nechaev@yandex.ru)
Resolution of rippled spectra as a function of spectral bandwidth was investigated in normal listeners using the phase-reversal test in conjunction with recording of slow auditory evoked potentials. The test signal had a band-limited rippled spectrum. The center frequency of the spectrum was 2 kHz, and the equivalent rectangular bandwidth ranged from 0.5 to 5 oct. The signal level was 80 dB SPL. The principle of the test was to find the maximum ripple density (rip/oct) at which the slow auditory evoked potentials (N1-P2 complex) to a ripple phase reversal could be recorded. Increasing the ripple density decreased the evoked-potential amplitude. The ripple-density resolution was 8.5 rip/oct at spectrum bandwidths from 5 to 1 oct, in agreement with psychophysical data. At a bandwidth of 0.5 oct, the ripple-density resolution was 6.9 rip/oct, lower than has been shown in previous psychophysical studies. The difference between psychophysical and evoked-potential data is discussed. [Work supported by the Russian Foundation for Basic Research, grant No. 16-34-00742.]
4pPPc27. On the perception of audified seismic data. Lapo Boschi (Inst.
of Earth Sci., UPMC, 4 Pl. Jussieu, Paris 75005, France, larryboschi@
gmail.com), Arthur Pate (LDEO, Columbia Univ., Palisades, NY), Laurianne Delcor (LVA, Lyon, France), Jean-Loïc LE CARROU, Daniele Dubois, Claudia Fritz (Musical Acoust. Lab, IJLRA, UPMC, Paris, France),
and Ben Holtzman (LDEO, Columbia Univ., Palisades, NY)
Auditory display of scientific data has been applied successfully to several disciplines. Seismic recordings can be “audified” by accelerating them
before playing them through an audio reproduction system. We present a
suite of experiments designed to help determine whether auditory display
of seismic data can find practical research applications. Experiments consist
of asking listeners to (i) categorize audified seismograms that are presented
to them binaurally, and (ii) describe with a few lines of text (later analyzed semantically) the features of audified signals that helped them complete this task. In a first experiment, listeners categorize freely, i.e., they
form their own categories: they are found to follow similar categorization
strategies, which suggests the existence of auditory cues perceived by most of them. From these results it is, however, hard to identify a simple correlation between such cues and the geophysical parameters that should control
the data. In a second round of experiments, listeners are assigned a constrained categorization task, and ad-hoc data sets are selected to isolate
some specific geophysical parameters: this way, the sensitivity of the auditory system, via our display method, to such parameters can be evaluated.
Results suggest that listeners are able to categorize signals according to the
parameters we had chosen, and that their performance can be improved by
training. Our work opens the way to new applications of auditory display to
seismology.
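Audification, as described above, amounts to replaying seismic samples directly at an audio sample rate, which compresses time and scales every frequency by the same factor. A minimal sketch of that bookkeeping (all numbers are illustrative, not from the study):

```python
def audify(n_samples, seismic_rate_hz, playback_rate_hz):
    """Frequency scaling and time compression obtained by replaying
    seismic samples directly at an audio sample rate."""
    factor = playback_rate_hz / seismic_rate_hz   # speed-up / pitch-shift factor
    original_dur_s = n_samples / seismic_rate_hz
    audio_dur_s = n_samples / playback_rate_hz
    return factor, original_dur_s, audio_dur_s

# one hour of 100-Hz seismic data replayed at 44.1 kHz: a 441x speed-up,
# so a 0.1-Hz seismic oscillation maps to an audible 44.1-Hz tone
factor, orig_s, audio_s = audify(360000, 100.0, 44100.0)
print(factor, orig_s, round(audio_s, 2))
```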
4pPPc28. Learning-related improvements in auditory detection sensitivities correlate with changes in sensory- and decision-related components of the event-related potential. Natalie J. Ball (Univ. at Buffalo,
373H Park Hall, Buffalo, NY 14260, njball@buffalo.edu), Matthew G. Wisniewski (U.S. Air Force Res. Lab., Wright-Patterson AFB, OH), Alexandria C. Zakrzewski (Univ. of Richmond, Richmond, VA), Nandini Iyer (U.S. Air Force Res. Lab., Wright-Patterson AFB, OH), Brian Simpson, Eric R. Thompson (U.S. Air Force Res. Lab., Wright-Patterson AFB, OH), and Nathan Spencer (U.S. Air Force Res. Lab., Wright-Patterson AFB, Dayton, OH)
Listener performance in an auditory detection task can improve with
practice (Zwislocki, Maire, Feldman, & Rubin, 1958). This could result
from a selective attention process and/or sensory plasticity (e.g., if trained
stimuli receive increased cortical representation). Here, listeners were
trained to detect either an 861-Hz or 1058-Hz tone (counterbalanced across
participants) presented in a noise masker. On the following day, high-density EEG was collected while listeners: 1) attempted to detect 861-Hz and
1058-Hz tones in noise at an SNR of -21 dB, and 2) passively heard the
same tones presented in quiet. Listeners were significantly better at detecting tones at their trained frequency. In addition, P3 amplitudes were larger
for trained than for untrained tones during the detection task. During passive
exposure to the same tones, P2 amplitudes were similarly larger for trained
than for untrained tones. The difference in P3 amplitudes suggests that training leads to more efficient decisional processing, perhaps related to an expectation for tones at the trained frequency. Differences in P2 amplitudes
may reflect training-induced sensory cortical plasticity that is frequency specific. Further, the difference in P2 amplitudes suggests that training-induced
improvements in detection are not merely related to attentional
modulations.
4pPPc29. Repeatability of non-invasive physiological measures from the
early auditory pathway. Hari M. Bharadwaj (Speech, Lang., & Hearing
Sci., and Biomedical Eng., Purdue Univ., 715 Clinic Dr., Lyles-Porter Hall,
West Lafayette, IN 47907, hbharadwaj@purdue.edu), Lenny A. Varghese,
and Barbara Shinn-Cunningham (Biomedical Eng., Boston Univ., Boston,
MA)
Objective physiological measures such as otoacoustic emissions (OAEs)
and evoked potentials (e.g., auditory brainstem responses, envelope-following responses) provide a non-invasive window into the function of specific
early portions of the auditory pathway. However, both the clinical utility of
such measures and our ability to reproducibly relate such measures to individual behavior are limited by many sources of variability. Indeed, variability may be introduced by confounding physiological and anatomical factors
(e.g., individual differences in cochlear dispersion, head and brain tissue geometry, efferent effects), by the non-standardized nature of calibration techniques for acoustic stimulation and in-ear measurements (e.g., taking into
account individual ear-canal properties), and by measurement-related factors (e.g., contact impedance, and noise from unrelated brain activity). Over
the last five years, we have conducted several experiments to explore individual differences in suprathreshold hearing, which has forced us to address
some of these sources of variability. In this presentation, summarizing the
results from several unpublished “mini” experiments, we describe some
methodological choices that we have employed, or plan to employ, to
improve the test-retest reliability of physiological measures.
4pPPc30. Investigating brain stimulation during amplitude modulation
detection: A pilot study. Daniel Fogerty, Julius Fridriksson, Rachel E.
Miller, Alexandra Basilakos (Commun. Sci. and Disord., Univ. of South
Carolina, 1224 Sumter St., Columbia, SC 29208, fogerty@sc.edu), Chris
Rorden (Psych., Univ. of South Carolina, Columbia, SC), and Leonardo
Bonilha (NeuroSci., Medical Univ. of South Carolina, Charleston, SC)
Positive effects of transcranial brain stimulation have been observed
across a number of cognitive and sensory domains. This non-invasive external electric stimulation technique has been shown to induce changes in patterns of brain activity, and these changes have been suggested to be
responsible for affecting sensory perception. Recent findings have suggested
transcranial alternating current stimulation (tACS) can entrain neural oscillations, meaning the externally applied alternating current can affect firing
rates by inducing oscillatory coherence. Thus, tACS introduces a methodology to causally investigate the effect of neural oscillations on auditory perception. The purpose of this study was to determine if tACS improves
amplitude modulation detection for individuals with normal hearing. In a
crossover design, young healthy adults with normal hearing were presented
with sinusoidally amplitude-modulated (SAM) tones while undergoing
tACS. SAM tones were presented with a modulation rate of 4 Hz or 8 Hz;
the modulation depth was set to each participant’s modulation detection
threshold. The frequency of tACS was varied across two sessions to match
the modulation rate of one of the SAM tones. Preliminary results suggest
variable responses to tACS across the participants tested. Considerations for
auditory tACS research will be discussed.
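A sinusoidally amplitude-modulated (SAM) tone, the stimulus used above, is defined by y(t) = [1 + m sin(2πf_m t)] sin(2πf_c t). The sketch below generates one; only the 4-Hz modulation rate comes from the abstract, while the carrier frequency and depth are illustrative:

```python
import math

def sam_tone(fc, fm, depth, fs, dur):
    """Sinusoidally amplitude-modulated tone:
    y(t) = (1 + m * sin(2*pi*fm*t)) * sin(2*pi*fc*t)."""
    return [(1 + depth * math.sin(2 * math.pi * fm * i / fs))
            * math.sin(2 * math.pi * fc * i / fs)
            for i in range(int(fs * dur))]

# 4-Hz modulation as in one of the study's conditions; the envelope
# swings between (1 - m) and (1 + m) around the carrier
y = sam_tone(fc=1000.0, fm=4.0, depth=0.5, fs=16000.0, dur=0.5)
print(len(y), round(max(abs(v) for v in y), 2))
```

Setting `depth` to a listener's modulation-detection threshold, as the study does, places the stimulus at the edge of detectability.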
4pPPc31. Effects of processing depth on lexically guided perceptual
learning. Julia R. Drouin, Jacqueline Ose, and Rachel M. Theodore (Univ.
of Connecticut, 850 Bolton Rd., Unit #1085, Storrs, CT 06269, julia.
drouin@uconn.edu)
Listeners use lexical information to modify the mapping to representations for individual speech sounds. This mechanism, termed lexically guided
perceptual learning (LGPL), results in long-lasting changes to speech sound
categories and may be modulated by attention mechanisms. The literature
on LGPL has demonstrated that learning may be influenced by explicit or
implicit attention towards anomalous aspects of the input and may be
affected by individual factors, such as cognitive abilities. The current study
used three experiments to examine the degree to which graded lexical
recruitment, indexed by explicit attention to the lexicon compared to other
stimulus features, influenced the magnitude of LGPL. Listeners completed
an exposure phase, where they heard an ambiguous phoneme, midway
between /s/ and /ʃ/. Attention was manipulated across experiments through
task instructions during exposure such that attention was shifted either
towards lexical information (lexical decision task), surface variation (amplitude judgment task), or syntactic knowledge (syntactic decision task). Following exposure, all participants completed a phoneme identification task
for members of an /s/-/ʃ/ continuum. Preliminary results indicate that LGPL
occurred in each experiment; however, the magnitude of learning differed
as a function of attention. The final results will be discussed in terms of constraints on the LGPL mechanism.
4pPPc32. Loudness perception of pure tones in Parkinson’s disease.
Defne Abur (Speech, Lang., and Hearing Sci., Boston Univ., 635 Commonwealth Ave., Boston, MA 02215, defneabur@gmail.com), Ashling A. Lupiani (Speech, Lang., and Hearing Sci., Boston Univ., Boston, MA), Ann E. Hickox (Decibel Therapeutics, Boston, MA), Barbara Shinn-Cunningham (Biomedical Eng., Boston Univ., Boston, MA), and Cara E. Stepp (Speech, Lang., and Hearing Sci., Boston Univ., Boston, MA)
Previous work, using variable methods, has shown evidence for atypical loudness perception of externally generated and self-generated speech in Parkinson's disease (PD). This study comprehensively examined loudness ratings of pure
tones in individuals with PD and healthy controls, controlling for hearing
status. Twenty individuals with PD and twenty-three controls rated the loudness of pure tones on a scale from 1, “Very Soft” to 7, “Uncomfortably
Loud.” Tones at 500, 750, 1000, 2000, and 4000 Hz were presented from 35
dB HL to 80 dB HL (or until a rating of 7). A mixed-model analysis of variance (ANOVA) was performed on the ratings to assess the effects of group,
frequency, sound pressure level, and ear. Mean loudness growth was determined for each group. There were no significant differences in ratings by
frequency. A small but significant difference was seen in mean loudness
growth: controls had a shallower slope compared to the PD group. Findings
suggest that individuals with PD have a steeper growth of loudness of externally generated tones, in contrast with the findings of previous studies of
externally generated and self-generated speech. The underlying causes for
impaired perception and production of loudness in PD require further
investigation.
4pPPc33. Understanding bone-conduction hearing: Measurements and
model development. Peter N. Bowers (Speech and Hearing BioSci. and
Technol., Harvard Med. School, 25 Shattuck St., Boston, MA 02115, peter_
bowers@meei.harvard.edu), Michael E. Ravicz, and John J. Rosowski
(Dept. of Otology and Laryngology, Harvard Med. School, Boston, MA)
Bone conduction (BC), the transmission of sound to the inner ear by
way of skull vibration, is thought to stimulate the inner ear by several mechanisms: external ear (vibration of the ear-canal walls), middle ear (ossicular
inertia), and inner ear (inertia of the cochlear fluids, compression of the
cochlear walls, and sound flow through “third windows”). In chinchilla, we
explore the inner-ear mechanisms with measurements of compound action
potentials and intra-cochlear sound pressures after manipulations of the middle and inner ear. These measurements suggest that while the contribution of
cochlear fluid inertia to BC is reduced by occlusion of both cochlear windows, effects of cochlear compression and/or sound pressure transmission
via “third window” pathways increase. We also explore external-ear mechanisms by applying an analytically-derived reverse input impedance of the
external-ear canal, pre-existing measurements of middle-ear input impedance, and new measurements of ear canal sound pressure. We define an
equivalent volume velocity source representative of the combined BC external-ear mechanisms. Estimates of the source magnitude are in agreement
with experimental data (Chhan et al., 2016), showing that during ear-canal
occlusion, the ear-canal source is a significant contributor to BC hearing at
mid-frequencies (1-3 kHz).
4pPPc34. Noise characteristics and their impact on working memory
and listening comprehension performance. Jeffrey J. DiGiovanni, Travis
L. Riffle (Commun. Sci. and Disord., Ohio Univ., Grover W151c, Athens,
OH 45701, digiovan@ohio.edu), and Naveen K. Nagaraj (College of Health
Professions, Univ. of Arkansas for Medical Sci., Little Rock, AR)
An emerging body of literature is demonstrating a significant relationship between cognition and listening performance. Noise has been shown to
be detrimental to maintaining focus of attention in cognitive tasks. Noise
characteristics, for a given SNR, may have differing impacts on these tasks.
Fourteen normal-hearing individuals participated in an experiment designed
to examine the effects of different background noises on auditory cognitive
tasks while still maintaining a high level of intelligibility (90%). Three tasks
were used: a working memory span task, an attention-switching task, and a
language comprehension task. The first two tasks were completed in quiet
and three different types of modulated background noise, and the language
comprehension task was performed in quiet and one modulated noise. Performance in both the working memory span task and the attention-switching task was correlated significantly with language comprehension, suggesting
that the cognitive resources tapped by these tasks are similar to those
required for a complex activity such as comprehending language. Also, all
three noise types had a significant effect on performance, which supports
the notion that the noises used in the experiment imposed an increase in
cognitive load, which may impair an individual's ability to perform in these
situations.
4pPPc35. Accounting for verbal and spatial working memory load in an
auditory Stroop task. Travis L. Riffle and Jeffrey J. DiGiovanni (Commun.
Sci. and Disord., Ohio Univ., W151a Grover Ctr., Ohio University, Athens,
OH 45701, tr240312@ohio.edu)
The Stroop task (Stroop, 1935) has been used extensively in cognitive
psychology research for the study of selective attention and processing
speed and accuracy. A recent experiment in our lab used an auditory-spatial
Stroop task where participants were required to determine the spatial location of congruent and incongruent directional words. Results of that experiment showed that incongruent trials had a significant effect on performance,
but only in the vertical plane. Special load theory (Park, Kim, & Chun,
2007) states that distractibility is a function of the relationship between
working memory (WM) load and the targets/distractors of the task. The auditory-spatial Stroop task was modified by using WM loads intended to
interfere specifically with the semantic or spatial aspects of the task. Given
that targets in the task are the spatial locations of the words, special load
theory predicts that a spatial WM load results in decreased performance
(increased interference) while a verbal WM load results in increased performance (decreased interference). Data presented will illuminate whether spatial
or verbal WM load has a greater impact on performance in an auditory-spatial Stroop task.
4pPPc36. Middle ear muscle reflex in children with auditory processing
and/or attention deficit/hyperactivity disorders. Nicole E. Johnson
(Psych., Villanova Univ., 800 East Lancaster Ave., Villanova, PA
19085, njohns23@villanova.edu), Thierry Morlet (Nemours/Alfred I.
duPont Hospital for Children, Wilmington, DE), Rachele Sklar (Linguist
and Cognit. Sci., Univ. of Delaware, Newark, DE), Laura Grinstead
(Audiol., Speech-Lang. Pathol. & Deaf Studies, Towson Univ., Towson, MD), Julianne Nemith (Linguist and Cognit. Sci., Univ. of Delaware, Newark, DE), and Kyoko Nagao (Nemours/Alfred I. duPont Hospital for Children, Wilmington, DE)
Previous studies suggest abnormal auditory efferent systems in children
with Auditory Processing Disorder (APD) and children with Attention-Deficit/Hyperactivity Disorder (ADHD). The current study examined the middle
ear muscle reflex (MEMR) of three groups of children. The APD group consisted of 32 children with auditory processing deficits and the ADHD group
consisted of 50 children. The children with ADHD were either taking
ADHD-related medication (n = 31) or not taking such medication (n = 19).
All subjects were selected from an existing database. Ipsilateral MEMR
responses at 0.5, 1, 2, and 4 kHz in each ear were categorized as normal or
abnormal (defined as any response over 90 dB HL, or no response). Contingency-table analyses and corresponding chi-square tests were used to compare the proportion of abnormal MEMR responses between groups and ears.
We found no significant difference between APD and ADHD groups. The
children with APD and children taking ADHD medication showed no significant ear difference (ps > 0.5), whereas children not taking ADHD medication
exhibited a marginally significant difference between the left and right ear
(p = 0.044). The results suggest abnormal asymmetry in auditory efferent
pathways in ADHD. Further studies should include contralateral pathways
to further evaluate asymmetrical patterns in these groups.
4pPPc37. Investigating audition with a generative model of impact sounds. James Traer and Josh McDermott (Brain and Cognit. Sci., MIT, 77 Massachusetts Ave., Cambridge, MA 02139, jtraer@mit.edu)
When objects collide they vibrate and emit sound. Physical laws govern
these collisions and subsequent vibrations. As a result, sound contains information about objects (density/hardness/size/shape), and the manner in which
they collide (bouncing/rolling/scraping). Everyday experience suggests that
human listeners have some ability to discern material and kinematics from
impact sounds. However, the accuracy of these perceptual inferences
remains unclear, and the underlying mechanisms are uncharacterized. Listeners could rely on stored templates for particular familiar objects. Alternatively, they could infer generative parameters for a sound via probabilistic
inference in an internal model of the generative process. To explore these
possibilities we constructed a generative model of impact sounds, modeling
sounds as the convolution of a time-varying impact force with the impulse
responses (IRs) of two objects. The force was modeled as a function of
mass, hardness and impact velocity. IRs were measured from a range of
objects using contact speakers and microphones to measure an object’s
effect on vibrational input. IRs for arbitrary objects were generated via interpolation between recorded examples. The model generates compelling renditions of impact sounds. Physically motivated alterations to the force and/
or IRs produced physically plausible synthetic impact sounds that can be
used in perceptual experiments.
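The core of the generative model above is a convolution of the impact force with the objects' impulse responses. A toy sketch of that operation (the force pulse and IR below are invented for illustration, not the study's measured IRs):

```python
import math

def convolve(force, ir):
    """Discrete convolution of an impact-force waveform with an object's
    impulse response; the generative model renders sound this way."""
    out = [0.0] * (len(force) + len(ir) - 1)
    for i, f in enumerate(force):
        for j, h in enumerate(ir):
            out[i + j] += f * h
    return out

# toy example: a brief half-sine force pulse exciting a ringing,
# exponentially decaying impulse response
force = [math.sin(math.pi * i / 4) for i in range(5)]
ir = [0.9 ** n * math.cos(0.5 * n) for n in range(50)]
sound = convolve(force, ir)
print(len(sound))  # -> 54
```

Varying the force parameters (mass, hardness, velocity) while holding the IR fixed, or vice versa, is what lets the model dissociate "manner of collision" from "object identity" in synthesized stimuli.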
4pPPc38. A novel method for quantifying the amplitude of the N1 peak
of the human compound action potential. Carolyn M. McClaskey, Judy
R. Dubno, and Kelly C. Harris (Otolaryngol., Medical Univ. of South Carolina, 135 Rutledge Ave., MSC 550, Charleston, SC 29425-5500, mcclaske@
musc.edu)
Human auditory nerve (AN) activity is commonly estimated via the
compound action potential (CAP), a population response generated by the
AN and measured using tympanic-membrane (TM) electrodes. The amplitude of the CAP’s first prominent negative peak (N1) is typically quantified
by calculating the voltage difference between the N1 and the maximum
voltage of the surrounding positive peaks (“peak-to-peak amplitude”). Despite the widespread use of this method, these peaks have different neural
generators, potentially confounding the interpretation of N1 amplitudes.
Variability in the amplitudes of these surrounding peaks may also introduce
variability to the N1 response, further obscuring findings and reducing reliability. An alternate method is measuring N1 amplitude relative to a time-averaged baseline ("absolute amplitude"), a practice ubiquitous throughout
cortical electroencephalography (EEG) studies. The current study recorded
CAPs from younger adults with normal hearing to either clicks or tone
bursts using TM electrodes placed on the tympanum and measured both
peak-to-peak and absolute amplitudes of the N1. For absolute amplitudes,
two baseline windows were evaluated: a pre-stimulus window and a whole-trial window. Advantages of measuring absolute N1 amplitudes and implications for assessing AN function will be discussed. [Work supported by
NIH/NIDCD.]
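The two amplitude measures contrasted above are easy to state in code. A toy sketch on an invented CAP-like waveform (indices and values are ours, purely for illustration):

```python
def peak_to_peak_n1(wave, n1_idx, peak_idxs):
    """Conventional CAP N1 measure: difference between the larger of the
    surrounding positive peaks and the N1 trough."""
    return max(wave[i] for i in peak_idxs) - wave[n1_idx]

def absolute_n1(wave, n1_idx, baseline):
    """Alternative measure: N1 relative to the mean of a baseline window,
    as is standard in cortical EEG."""
    return sum(baseline) / len(baseline) - wave[n1_idx]

# toy waveform: positive peak, N1 trough at index 3, positive peak
wave = [0.0, 0.2, 0.5, -1.0, 0.4, 0.1, 0.0]
print(peak_to_peak_n1(wave, 3, [2, 4]))       # -> 1.5
print(absolute_n1(wave, 3, [0.0, 0.0, 0.0]))  # -> 1.0
```

The two numbers differ exactly by the height of the surrounding positive peak, which is the confound the abstract argues against.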
4pPPc39. Mechanisms of contextual plasticity in sound localization.
Norbert Kopco, Beata Tomoriova, and Gabriela Andrejkova (Inst. of Comput. Sci., P. J. Safarik Univ. in Kosice, Jesenna 5, Kosice 04001, Slovakia,
gabriela.andrejkova@upjs.sk)
Contextual plasticity (CP) is a form of short-term adaptation in sound
localization, operating on time scales of tens of seconds to minutes. At least
two different mechanisms have been proposed to underlie this effect: 1) adaptation in auditory spatial representation that reflects a change in the stimulus range caused by the context (as the context typically included a
distractor presented from a new location outside the range of the experimental targets), and 2) precedence buildup-like mechanism activated by the context (because the context typically consisted of distractor-target click pairs
presented on majority of trials and interleaved with target-alone experimental trials). Here a new experiment was performed and previous experimental
data were analyzed with the goal of determining which of the two mechanisms is more likely to cause CP. The experiments manipulated the following aspects of the context: 1) whether the distractor and target were
presented as pairs 2) the distribution of distractor and target locations, and
3) the order of distractor and target. Results are partially consistent with the
adaptation mechanism. But in some conditions, the shifts predicted by the
adaptation mechanism were not observed, suggesting that the precedence
buildup or other mechanisms are also active. [Work supported by APVV0452-12.]
4pPPc40. Predicting performance across different psychoacoustic tasks.
E. C. Wilson, Zachary D. Perez, Louis D. Braida, and Charlotte M. Reed
(Res. Lab. of Electronics, Massachusetts Inst. of Technol., Rm. 36-751,
MIT, 77 Massachusetts Ave., Cambridge, MA 02139, cmreed@mit.edu)
How likely is it that listeners who are superior on one aural attribute are
also superior on a second aural attribute? The ability of listeners was measured on two psychoacoustic tasks: (1) detection of tones in noise and (2)
detection of amplitude modulation for masked tone complexes. We tested
nine listeners (4 M and 5 F) with clinically normal hearing and an age range
of 18-21 years using a transformed up-down method with trial-by-trial feedback, and averaged over three measurements. First we determined absolute
thresholds for 500-ms tones. Then, using three levels of tones spanning a 30
dB range, we determined both masked tone detection (F = 1, 2, 4, 8 kHz)
and masked three-tone discrimination (same center frequencies) with modulation frequencies of Fm = 3, 30, 300 Hz. We discuss how to analyze the
data obtained to determine whether a listener’s performance on a given task
can be used to predict performance on other tasks.
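The abstract names a transformed up-down method; a common instance is the 2-down/1-up rule, which converges on the 70.7%-correct point of the psychometric function (Levitt, 1971). The sketch below simulates such a track against a hypothetical logistic listener (all parameters are ours, not the study's):

```python
import math
import random

def two_down_one_up(threshold, start, step, n_trials, seed=1):
    """Simulate a 2-down/1-up transformed up-down track: the level drops
    after two consecutive correct responses and rises after any error,
    converging near the 70.7%-correct point."""
    rng = random.Random(seed)
    level, run, track = start, 0, []
    for _ in range(n_trials):
        track.append(level)
        # logistic psychometric function for the simulated listener
        p_correct = 1.0 / (1.0 + math.exp(-(level - threshold) / 2.0))
        if rng.random() < p_correct:
            run += 1
            if run == 2:
                level -= step
                run = 0
        else:
            level += step
            run = 0
    return track

track = two_down_one_up(threshold=50.0, start=70.0, step=2.0, n_trials=200)
print(round(sum(track[-50:]) / 50, 1))  # hovers just above the 50-dB midpoint
```

Averaging the final reversals (here approximated by the last trials) gives the threshold estimate; averaging three such measurements, as in the study, reduces track-to-track variability.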
4pPPc41. Sonification of a friction sensor for an acoustic characterization of the human skin. Jean-François Petiot (IRCCyN, Ecole Centrale de
Nantes, 1 rue de la noe, BP92101, NANTES 44321, France, jean-francois.
petiot@irccyn.ec-nantes.fr), Armelle Bigouret, Sonia GAGNAIRE, Alex
Nkengne (Laboratoires CLARINS, PONTOISE, France), Hassan
ZAHOUANI, Roberto Vargiolu (LTDS, UMR CNRS 5513, Ecully, France), and Ludovic FOUERE (IRCCyN, Ecole Centrale de Nantes, NANTES, France)
The friction properties of surfaces can be investigated via acoustic sensors that measure the noise generated during the sliding of the probe on the
surface. This work presents a characterization of the human skin using a
friction sensor, designed to measure the signal generated during the interaction between the finger of the experimenter and the skin of a subject. A
panel of subjects with different skin characteristics (age, hydration, firmness) was first measured and supervised learning algorithms were used to
infer the typology from the labeled data and the signals of the sensor. Next,
a sonification of the signals was designed to make audible the differences
between the skin categories. The sonification is based on a sound synthesis
of a bowed string musical instrument, the violin, which mimics the interaction of the finger on the skin and constitutes a relevant metaphor. From the
synthesized sounds corresponding to different experimental conditions
(before and after the application of a cosmetic product), hearing tests were
performed with a panel of experts. Results show that noticeable differences
between the sounds can be highlighted, which could make it possible to test the
efficacy of a product and to promote it with an original musical universe.
4pPPc42. Infants’ use of onset asynchrony in the segregation of concurrent vowels. Monika-Maria Oster and Lynne A. Werner (Speech and Hearing Sci., Univ. of Washington, 1417 North East 42nd St., Seattle, WA
98105, mmoster@uw.edu)
Separating speech from competing speech is a difficult task for infants
even though their ability to encode sounds appears to be mature. One possible explanation is that infants have greater difficulties extracting speech
from the complex sound mixture arriving at their ears. Adults use acoustic
cues such as differences in onset or harmonicity to group components that
belong to the same sound source and separate those that do not. The sparse
evidence on how infants segregate sounds indicates that they use some of
these acoustic cues. However, temporal cues, which are strong cues for
adults, have not been investigated in infants. This study examined infants’
ability to use onset asynchrony cues to separate simultaneous vowels using
a single-interval observer-based procedure. Three- and seven-month-old
infants and young adults were trained to identify one target vowel in a modified double-vowel paradigm. Listeners were tested in a simultaneous onset
condition and either a 100 or 200 ms asynchrony condition. Cue benefit was
defined as the improvement in sensitivity resulting from the addition of the
asynchrony cue. Pilot data indicate that an onset asynchrony of 100 ms
improves performance for all age groups and that the cue benefit increases
with age.
4pPPc43. Analysis of the decorrelation effect of audio signals in multichannel reproduction. Dan Rao (Acoust. Lab., School of Phys., South China Univ. of Technol., Tianhe District, Guangzhou, Guangdong 510641, China, phdrao@scut.edu.cn)
Decorrelation of an audio signal is a process that generates two or more incoherent signals from a single input signal; it has many applications in artificial auditory effects, such as broadening the apparent source width (ASW), enhancing subjective envelopment, and producing subjective diffusion in multichannel reproduction. A frequently used decorrelation method is convolving the audio signal with random (or pseudo-random) noise. In this work, we focus on the relationship between the reproduction zone of multichannel reproduction and the degree of decorrelation. Simulation results show that, by changing the decorrelation degree, which is measured by the interaural cross-correlation coefficient (IACC) of the reproduced signals, the timbre changes as the listener moves off the listening center. The listening zone with small timbre deviation relative to the center position (for example, binaural loudness spectra deviating by less than 1 dB/ERB) increases as the IACC decreases. However, as is well known, the lowest IACC value does not produce the best subjective diffusion; an optimal IACC lies in the range 0.2-0.4, depending on the audio signal. By adopting an optimal IACC value, a reproduction zone spanning different frequencies can be achieved.
4pPPc44. Does hearing loss affect the use of information at different frequencies? Results from a simultaneous tonal pattern discrimination
task in normal-hearing and hearing-impaired listeners. Elin Roverud,
Virginia Best (Boston Univ., 635 Commonwealth Ave., Dept of Speech,
Lang. & Hearing Sci., Boston, MA 02215, emroverud@gmail.com), Judy R.
Dubno (Otolaryngology- Head and Neck Surgery, Medical Univ. of South
Carolina, Charleston, SC), Christine Mason, and Gerald Kidd (Boston
Univ., Boston, MA)
This study extends previous work [Roverud et al., Trends Hear, 20, 117, 2016] reporting differences between normal-hearing (NH) and hearing-impaired (HI) listeners in their use of low- vs. high-frequency information.
In that study, listeners identified well-learned tonal patterns presented simultaneously at two center frequencies (CFs). The CFs corresponded to lesser
and greater hearing loss in the HI group. Despite matched identification performance at each CF in quiet, when patterns were presented simultaneously,
HI listeners were better able to identify patterns at the CF corresponding to
better hearing. Here, we extend that work using a discrimination rather than
identification task, with less reliance on recall. Each trial consisted of three
intervals, each containing simultaneous tonal patterns at two CFs. Individual
frequency discrimination thresholds were used to set the separation between
elements within each pattern, ensuring equal discriminability across listeners and CFs. The first (referent) interval contains two simultaneous random-frequency patterns, one at each CF. One of the two sequential comparison
intervals contains a match to the referent pattern at one of the CFs, and the
listener indicates the comparison interval containing the match. Results
examining differences for the two CFs across NH and HI listeners will be
presented. [Work supported by NIH/NIDCD.]
4pPPc45. Effects of induction sequences on the tendency to segregate
auditory streams: Exploring the stream biasing effects of constant- and
alternating-frequency inducers. Saima L. Rajasingam, Robert J. Summers,
and Brian Roberts (Psych., School of Life and Health Sci., Aston Univ., Birmingham B4 7ET, United Kingdom, b.roberts@aston.ac.uk)
The extent of stream segregation for a test sequence comprising high- (H) and low-frequency (L) pure tones, presented in a galloping rhythm (e.g., LHL-LHL-…), is much greater when preceded by a constant-frequency induction sequence matching one subset of the constituent tones (e.g., L-L-L-L-…) than by an inducer of the same duration configured like the test
sequence. This difference persists for several seconds after the test sequence
begins. The origin of this effect was explored using short (2 s) induction
sequences followed by long (16-20 s) test sequences. In experiment 1, LHL- sequences were used and one or the other subset of the inducer tones was attenuated (0-24 dB in 6-dB steps, and ∞). Greater attenuation of either subset
led to a progressive increase in the segregation of the subsequent test
sequence, towards that following the constant-frequency inducer. In experiment 2, HLH- sequences were used and the frequency of the L-subset of inducer tones was raised or lowered relative to their test-sequence
counterparts. Either change increased subsequent stream segregation. These
outcomes support the notion of stream biasing—that constant-frequency
inducers promote segregation by capturing the matching subset of test-sequence tones into an on-going, pre-established stream. [Work supported
by Aston University.]
4pPPc46. Auditory sequential integration of spectral cues revealed
using an informational masking paradigm. Yi Shen (Speech and Hearing
Sci., Indiana Univ. Bloomington, 200 S Jordan Ave, Bloomington, IN
47405, shen2@indiana.edu)
When a target tone is embedded in a six-tone, random-frequency
masker, a preview of the masker, i.e., a precursor, would provide a priori
knowledge with regard to the masker, leading to release from informational
masking. In the current study, the precursor was in the form of a sequence
of 50-ms, six-tone bursts, and the effect of the precursor was evaluated through
the detectability of the target. In Experiment 1, each burst in the sequence
matched the spectral content of the masker. As the number of bursts
increased, the precursor gradually became a more effective cue. On the
other hand, the effect of inter-burst interval was not significant. In Experiment 2, each of the six tones in each precursor burst was presented with probability p; therefore, the spectral information in each burst was
incomplete. As the number of bursts increased, a lower value of p was
required to maintain the detectability of the target tone, suggesting that listeners were able to integrate information over time. As the inter-burst interval increased from 0 to 200 ms, the sequential integration became less
efficient.
4pPPc47. Assessment of the human ability to reproduce known sounds
via a synthesis procedure. Jennifer Lentz (Speech and Hearing Sci., Indiana Univ., 200 S. Jordan Ave., Bloomington, IN 47405, jjlentz@indiana.
edu)
One method used to measure the properties of tinnitus perception is to allow listeners to “synthesize” their tinnitus by summing sounds of various
center frequencies, intensities, and bandwidths. The values of the parameters can be set by the listener, and the acoustic parameters of the resultant
sound are thought to describe the characteristics of an individual’s tinnitus.
However, the validity of this method is unknown. Here, we present data
establishing the abilities of normal-hearing listeners to reproduce the acoustics of known stimuli including pure tones, complex tones, and noises containing various bandwidths and frequencies. Of particular interest is the
validity of this method in estimating acoustic properties of high-frequency
sounds (>8 kHz). Results will be interpreted in the context of the limits of auditory perception and in the context of data collected on listeners reporting
tinnitus of tonal and noisy qualities.
4pPPc48. Effects of temporal coherence and signal-frequency uncertainty for tone detection in a random-frequency multi-tonal masker.
Emily Buss (UNC Chapel Hill, 170 Manning Dr., G190 Physicians, Chapel
Hill, NC 27599, ebuss@med.unc.edu) and Huanping Dai (The Univ. of Arizona, Tucson, AZ)
Detection thresholds tend to be lower when the spectral and/or temporal characteristics of the signal are predictable than when they are unpredictable, particularly
in an unpredictable masker. The present study evaluated the detrimental
effect of frequency uncertainty and the beneficial effect of temporal coherence for a pure-tone signal presented in a multi-tonal masker with unpredictable frequency sequences. Stimuli were composed of 80-ms tone bursts.
The signal was 1, 2, 4, or 8 sequential tone bursts, and burst frequency was
either fixed across trials or randomly varied across trials but fixed across
4pPPc49. A multi-second adaptive integration process underlies sound
texture perception. Richard McWalter (Hearing Systems, Tech. Univ. of
Denmark, Ørsteds Plads b. 352, Kgs. Lyngby 2800, Denmark, rmcw@elektro.dtu.dk) and Josh McDermott (Brain and Cognit. Sci., MIT, Cambridge,
MA)
Temporally homogenous sound textures—as produced by rain, fire or
insect swarms—are thought to be represented with time-average statistics
measured from early auditory representations. We explored the averaging
process involved in sound texture perception using “texture steps”—stimuli
whose statistics changed at some point in time. We reasoned that judgments
should be biased by the stimulus history included in the averaging process.
Listeners were presented with two sounds, a texture step and a probe texture
(with constant statistics). Listeners were asked to compare the endpoint of
the texture step to the probe and to select the sound that was most similar to
a familiar reference texture. Judgments were biased by the presence of a
step 1 second prior to the stimulus endpoint, but the bias was reduced when
the step occurred 2.5 seconds from the endpoint. In addition, the bias was
substantially larger for textures whose statistics were less homogeneous (ocean waves) than for textures whose statistics were more homogeneous (rain).
The results suggest a texture integration process operating over several seconds, but whose integration window is adapted to the homogeneity of the
sound signal, averaging over longer periods of time for more variable
textures.
4pPPc50. Power dissipation in the Organ of Corti. Srdjan Prodanovic,
Sheryl Gracewski, and Jong-Hoon Nam (Mech. Eng., Univ. of Rochester,
212 Hopeman Bldg, Mech. Eng., Rochester, NY 14627-0132, jong-hoon.
nam@rochester.edu)
In the cochlea, acoustic energy is transmitted toward the apex through
the vibrations of a viscoelastic partition known as the organ of Corti complex. The dimensions of the vibrating structures range from a few hundred
micrometers to a few micrometers. Vibrations of micro-structures in viscous
fluid are subjected to energy dissipation. Because viscous dissipation is considered detrimental to the functions of hearing (sound amplification and frequency tuning), the cochlea is believed to use cellular actuators to
overcome the dissipation. We have developed a computational model of the
cochlea that incorporates viscous fluid dynamics, organ of Corti microstructural mechanics, and electro-physiology of the outer hair cells. The
model is validated by comparing with experimental results in the literature,
such as the viscoelastic response of the tectorial membrane, and the cochlear
input impedance. Using the model, we investigated how dissipation components in the cochlea affect its function. Our results suggest that most energy
dissipation occurs within the organ of Corti complex, not in the scala fluids.
Our results suggest that appropriate dissipation enhances the tuning quality
by confining the spread of energy from the amplification site.
4pPPc51. Different vibration modes of the Organ of Corti due to mechanical and electrical stimuli. Wenxiao Zhou, Talat Jabeen, and JongHoon Nam (Mech. Eng., Univ. of Rochester, 140 Hutchison Rd, 235
Hopeman Bldg (Dep’t of Mech. Eng), Rochester, NY 14627, wzhou10@ur.
rochester.edu)
The organ of Corti complex (OCC) is highly organized with structurally
significant matrices such as the tectorial and basilar membranes and cells
such as the pillar cells, Deiters cells, and outer hair cells. It is becoming clearer that the fine structures in the OCC vibrate out of phase under certain conditions. The functional consequence of the complex vibration modes is
unclear. We present our computational and experimental results on different
vibration modes of the OCC. Using a custom-fabricated micro-chamber, the vibrations of the isolated OCC in response to mechanical and electrical impulses were measured. The same conditions were simulated with a computer model. When there is no outer hair cell feedback, the OCC fine structures vibrate in phase upon a mechanical impulse. With outer hair cell feedback, the initial response is similar to the passive case, but after the first couple of oscillatory
cycles, the top and bottom surfaces of the OCC vibrate out of phase, similar
to the response to an electrical impulse. Our measurements and model simulations show that the outer hair cell’s mechanical feedback modulates the
OCC vibration modes.
4pPPc52. Finite-element model of the nonlinear distortion and linear
reflection sources of the otoacoustic emissions within the mouse cochlea.
Hamid Motallebzadeh and Sunil Puria (Med. School, Harvard Univ., 243
Charles St., Boston, MA 02114, h.motallebzadeh@gmail.com)
It has been hypothesized that otoacoustic emissions (OAEs) are generated by at least two fundamental mechanisms: nonlinear distortion and linear reflection within the cochlea. Recent studies show that different components of the organ of Corti (OoC) vibrate asynchronously, differing in both phase and magnitude, and the difference is strongly frequency dependent. We have developed and validated a finite-element model of the mouse cochlea against two sets of measurements at the cochlear base and apex. Nonlinear distortion and linear reflection have been implemented in the model, the former by the active outer hair cells (OHCs) and the latter by introducing impedance irregularities in the baseline model components (e.g., stiffness, mass, and geometrical parameters). To differentiate the contributions of different OoC components to the total OAE response, the baseline parameters of the model have been adjusted to vary the vibration magnitudes of those components. Preliminary results show that the impedance irregularities generate more of the phase decay of the stimulus-frequency OAEs than the nonlinear distortion does (e.g., 1-2 cycles per 1 kHz with 10% random perturbations of the Young's modulus of the basilar membrane), consistent with data in the literature. Perturbations of other model parameters are under investigation.
4pPPc53. A novel dominant GJB2 (DFNA3) mutation in a Chinese family. Hongyang Wang (Eaton-Peabody Lab., Massachusetts Eye and Ear Infirmary, 243 Charles St., Boston, MA 02114, hongyang_wang@meei.harvard.edu)
To decipher the phenotype and genotype of a Chinese family with autosomal dominant non-syndromic hearing loss (ADNSHL) and a novel dominant missense mutation in the GJB2 gene (DFNA3), mutation screening of
GJB2 was performed on the propositus from a five-generation ADNSHL
family through polymerase chain reaction amplification and Sanger
sequencing. The candidate variation and the co-segregation of the phenotype were verified in all ascertained family members. Targeted genes capture and next-generation sequencing (NGS) were performed to explore
additional genetic variations. We identified the novel GJB2 mutation c.524C>A (p.P175H), which segregated with high-frequency-involved progressive sensorineural hearing loss. One subject with an
additional c.235delC mutation showed a more severe phenotype than did
the other members with single GJB2 dominant variations. Four patients
diagnosed with noise-induced hearing loss did not carry this mutation. No
other pathogenic variations or modifier genes were identified by NGS. In
conclusion, a novel missense mutation in GJB2 (DFNA3), affecting the second extracellular domain of the protein, was identified in a family with
ADNSHL.
4pPPc54. Assessment of hearing in coma patients employing auditory
brainstem response, electroencephalography, and eye-gaze-tracking.
Andrzej Czyzewski and Bozena Kostek (Gdansk Univ. of Technol., Narutowicza 11/12, Gdansk 80-233, Poland, ac@pg.gda.pl)
The results of the study conducted by Tagliaferri et al. in 12 European countries indicate that the rate of registered brain injury cases in Europe amounts to 150-300 per 100,000 people, with a European mean value of 235 cases per 100,000 people. The project presented in the paper assumes the
bursts. The masker was composed of four streams of tone bursts; the frequency of each burst was randomly drawn from a uniform distribution spanning 250-4000 Hz, with the caveat that synchronous masker and signal bursts were
separated by 1/5 oct or more. Signal and masker bursts were synchronously
gated, and the masker played continuously throughout a threshold estimation track. Thresholds tended to improve with increasing numbers of signal
bursts and worsen with increasing signal frequency uncertainty. These factors appeared to interact; signal frequency uncertainty was less detrimental
when the signal was composed of larger numbers of bursts. Preliminary
models of detection will be discussed.
development of a combined metric of the state of patients remaining in a coma through intelligent fusion of the GCS (subjective Glasgow Coma Scale or its derivatives) with objective data acquired using ABR (Auditory Brainstem Response), EEG (electroencephalography), and EGT (Eye-Gaze-Tracking). A variety of coma patients from cooperating medical care centers were examined. The sensory examination involved the assessment of function by a medical specialist, with special attention paid to hearing test results obtained with an ABR measuring device. The assessment included speech-based cognitive functions such as comprehension, phonemic hearing, and auditory gnosis. The results discussed in the paper show that most patients remaining in a coma after a severe brain injury have preserved the ability to receive sound stimuli. [The project was partially funded by the Polish National Science Centre on the basis of decision No. DEC-2014/15/B/ST7/04724.]
WEDNESDAY AFTERNOON, 28 JUNE 2017
ROOM 201, 1:20 P.M. TO 5:40 P.M.
Session 4pSAa
Structural Acoustics and Vibration, Biomedical Acoustics, Signal Processing in Acoustics, and Physical
Acoustics: Novel Techniques for Nondestructive Evaluation II
Brian E. Anderson, Cochair
N145 Esc, Brigham Young Univ., MS D446, Provo, UT 84602
Marcel Remillieux, Cochair
Los Alamos National Laboratory, Geophysics Group (EES-17), Mail Stop: D446, Los Alamos, NM 87545
Sylvain Haupert, Cochair
Laboratoire d’Imagerie Biomédicale, UPMC Sorbonne Universités, CNRS, INSERM, 15 rue de l’École de Médecine, Paris 75006, France
Invited Papers
1:20
4pSAa1. Toward real-time assessment of material properties using elastic guided waves. Nicolas Bochud, Jérôme Laurent, François Bruno, Aurélien Baelde, Daniel Royer, and Claire Prada (Institut Langevin, ESPCI Paris, CNRS (UMR 7587), PSL Res. Univ., 1 rue Jussieu, Paris 75005, France, nicolas.bochud@espci.fr)
Elastic guided waves are one of the most promising quantitative ultrasound techniques for the assessment of elastic properties in
plate-like structures. Measurements of guided waves, associated with suitable waveguide modeling, can yield accurate estimates of
waveguide properties like thickness and stiffnesses. Such a model-based approach requires solving a multi-parametric inverse problem
to match experimental data with guided modes. Sensitivity studies suggest that isolated areas of the dispersion curves have predominant
influence on specific model parameters. In particular, guided waves in a free plate exhibit a resonant behavior at frequencies where their
group velocity vanishes while their phase velocity remains finite (so-called zero-group-velocity Lamb modes). In this study, we investigate the feasibility of exploiting targeted data in the vicinity of these particular points, along with data associated with the cut-off frequencies. To this end, guided-wave measurements are performed on a series of materials (anisotropic plates and tri-layer structures) using a linear multi-element transducer array and dedicated signal processing. A genetic-algorithm-based inverse problem is then solved to recover the waveguide properties. Preliminary results indicate that, owing to the versatility of the measurements in terms of acquisition speed, this approach has the potential to infer reliable structural and material properties in real time.
1:40
4pSAa2. Acoustic emission to probe slow dynamics in complex materials. Mourad Bentahar, Xiaoyang Yue, Charfeddine Mechri,
and Silvio Montresor (Laboratoire d’Acoustique de l’Université du Maine, Ave. Olivier Messiaen, Le Mans cedex 09 72085, France,
mourad.bentahar@univ-lemans.fr)
Acoustic emission has already been applied to provide complementary information on the evolution of the nonlinear behavior of microcracked materials. In particular, the energies of elastic waves emitted during the creation and propagation of microcracks in composites were revealed to be in good correlation with the relaxation time [1]. However, it remains important to study the relaxation (and/or conditioning) of complex materials as a function of the mechanisms that lie behind the experimental observations. In this contribution, composite and concrete samples taken at microcracked states are submitted to slow-dynamics experiments. Acoustic emission hits recorded during conditioning are first presented. During the high-level excitation, the pump signal is at a very low frequency (a few hundred hertz) in order to clearly separate the acoustic emission activity (> 50 kHz) from the low-frequency excitation. Relaxation of complex materials has been realized
with and without a probing ultrasonic wave. Results revealed that, despite the weak acoustic emission activity, the recorded acoustic emission hits contain enough information to improve our understanding of the relaxation probed in complex materials. [1] M. Bentahar and R. El Guerjouma, “Monitoring progressive damage in polymer-based composite using nonlinear dynamics and acoustic emission,” J. Acoust. Soc. Am. 125, EL39 (2009).
2:00
4pSAa3. A non-collinear mixing technique to measure the acoustic nonlinearity parameter of an adhesive bond from one side of
the sample. Jianmin Qu (Tufts Univ., 200 College Ave, Medford, MA 02155, jianmin.qu@tufts.edu), Taeho Ju, Jan Achenbach (Northwestern Univ., Evanston, IL), and Laurence Jacobs (Georgia Tech, Atlanta, GA)
This work presents a non-collinear wave mixing technique to measure the Acoustic Nonlinearity Parameter (ANLP) of adhesive bonds. One of the most significant features of the new method is that it
requires only one-side access to the adhesive bond being measured, which significantly increases its utility in field measurements. To
demonstrate the effectiveness of the newly developed technique, an adhesively jointed aluminum sample was measured after different heat loading times, using the non-collinear mixing technique with a longitudinal and a shear wave as incident waves to obtain the ANLP
of the adhesive bond. The measured results clearly show that the ANLP varies with aging time. To verify that the signals received from
the shear wave receiver are indeed the mixed wave, the finite element method was used to simulate the wave motion in the test sample.
The simulation results clearly show that the signals recorded by the shear wave receiver are the desired mixed wave, whose amplitude is
proportional to the ANLP of the adhesive bond.
2:20
4pSAa4. Inversion of acoustic nonlinearity parameter (beta) from nonlinear resonance ultrasound spectroscopy measurements.
Sunil Kishore Chakrapani and Daniel J. Barnard (Ctr. for Nondestruct. Eval., Iowa State Univ., 127 ASC II, 1915 Scholl Rd., CNDE,
Ames, IA 50011, csk@iastate.edu)
The acoustic nonlinearity parameter (beta) is well known to be sensitive to lattice parameters and defects. Several techniques can measure this parameter, such as laser interferometry, capacitance microphones, piezoelectric methods, and dynamic acousto-elastic methods. However, the instrumentation required is expensive and complex for some of these techniques, and there are also limitations on the geometry and surface finish of the samples. Nonlinear resonance ultrasound spectroscopy has previously been used to measure the classical and non-classical nonlinearity of materials. The current work presents a model which can be used to invert the acoustic nonlinearity parameter from nonlinear resonance measurements. A diverse group of solids, including metals and composites, was chosen, and nonlinear resonance experiments were conducted to measure the frequency shift. A nonlinear vibration model built from first principles was used to relate the frequency shift to the acoustic nonlinearity parameter. It was observed that the sign, or phase, of the parameter can also be measured using this technique. The measured acoustic nonlinearity parameters were found to be in good agreement with reference values obtained from the literature.
2:40

4pSAa5. Study of the mechanical properties of thin films involved in ophthalmic glasses. Frederic Faese (INSP, UPMC, 4 Pl. Jussieu - Boite courrier 840, Paris cedex 05 75252, France, frederic.faese@insp.upmc.fr), Delphine Poinot (Adv. Characterization Group, Essilor R&D, Créteil, France), Philippe Djemia (LSPM, Université Paris XIII, Villetaneuse, France), Sebastien Chatel (Adv. Characterization Group, Essilor R&D, Créteil, France), and Laurent Belliard (INSP, UPMC, Paris, France)

The mechanical properties of thin films involved in the manufacturing of ophthalmic glasses have been investigated by two complementary laser-based techniques. On the one hand, picosecond ultrasonics (PU) measurements allow the acoustical longitudinal velocity in the sample to be determined, hence the elastic constant c11. On the other hand, Brillouin light scattering (BLS) measurements lead to the value of the transverse velocity, hence c44. In the case of isotropic materials, these two coefficients give direct access to the values of the Young's modulus and Poisson's ratio of the film. The influence of the film characteristics on its mechanical properties will be discussed, mainly regarding its thickness and the manufacturing process. Another issue concerning the effect of the film environment will be addressed. In particular, we will discuss the influence of the substrate on the mechanical properties of a single film, and the link between the mechanical properties of a single film and those of a stack of films.

3:00–3:20 Break

3:20

4pSAa6. Ultrasonic measurements by means of continuous waves in a foam saturated with air. Roberto Longo (ESEO Group - Laboratoire d’Acoustique de l’Université du Maine LAUM – UMR CNRS 6613, 10 Boulevard Jean Jeanneteau, Angers 49000, France, roberto.longo@eseo.fr), Aroune Duclos, and Jean-Philippe Groby (Laboratoire d’Acoustique de l’Université du Maine LAUM – UMR CNRS 6613, Le Mans, France)

The study of porous materials has always been of great interest. Several characterization methods have been developed by means of ultrasonic waves, mainly because of their non-invasive nature. A typical set-up involves transmission and reflection measurements through the test material using pulse signals. The received echoes are analyzed and compared with analytical models in order to estimate specific acoustic properties of the sample itself. The main drawback of this approach is the low signal-to-noise ratio recorded when testing highly attenuating materials. This disadvantage is even more pronounced for measurements in air. The present work aims to overcome these limitations by replacing the excitation signals with continuous multi-harmonic waves. These signals have been designed by optimizing the phase of each harmonic, resulting in a low crest factor and consequently in a better signal-to-noise ratio. Moreover, their frequency content can be easily adapted to the different transducers used during the test. The method has been used to test a foam saturated with air, performing ultrasonic measurements in transmission/reflection at different angles of incidence. The results show a significant improvement of the measurements, facilitating the estimation of the foam's acoustic properties.
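A low-crest-factor multi-harmonic excitation of the kind described above can be illustrated with Schroeder phases, a standard phase rule for equal-amplitude multisines. The abstract does not specify the authors' optimization method, so this is only an assumed stand-in with invented frequencies:

```python
import numpy as np

def multisine(freqs_hz, fs, duration_s, schroeder=True):
    """Sum of equal-amplitude harmonics. Schroeder phases spread the energy
    in time, giving a much lower crest factor than all-cosine phases."""
    t = np.arange(int(fs * duration_s)) / fs
    n = len(freqs_hz)
    x = np.zeros_like(t)
    for k, f in enumerate(freqs_hz, start=1):
        # Schroeder's rule for equal amplitudes: phi_k = -pi * k * (k - 1) / n
        phi = -np.pi * k * (k - 1) / n if schroeder else 0.0
        x += np.cos(2 * np.pi * f * t + phi)
    return x

def crest_factor(x):
    """Peak amplitude divided by RMS level."""
    return np.max(np.abs(x)) / np.sqrt(np.mean(x**2))

fs = 50000
freqs = 500.0 * np.arange(1, 21)               # 20 harmonics, 0.5-10 kHz
flat = multisine(freqs, fs, 0.1, schroeder=False)
schr = multisine(freqs, fs, 0.1, schroeder=True)
print(crest_factor(flat))   # all-cosine phases: peaks add coherently at t = 0
print(crest_factor(schr))   # Schroeder phases: substantially lower crest factor
```

A lower crest factor means more signal energy can be delivered at the same peak drive voltage, which is the source of the signal-to-noise improvement claimed in the abstract.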
Contributed Papers
3:40
4pSAa7. Psychoacoustic advancement of the tap test for wind turbine
blades. Gaetano Andreisek (Audio Information Processing, Technische Universität München, Arcisstrasse 21, Munich 80335, Germany, gaetano.andreisek@tum.de), Christian U. Große (Chair of Non-destructive Testing, Technische Universität München, Munich, Germany), and Bernhard U. Seeber (Audio Information Processing, Technische Universität München, Munich, Germany)
Wind power plants, and in particular their blades, have to withstand significant environmental stresses. Regular testing of the blades’ structural integrity is essential to ensure a lifetime of fifteen to twenty years. Such
testing is performed by experienced engineers with a tap test, which is a fast
and robust non-destructive technique. By tapping on the shell of the blade
and listening to the emitted sound, engineers can assess potential defects in
the composite material. This work aims at identifying acoustic features that
enable an automated algorithm to perform a tap test. Ten engineers familiar
with the inspection of blades participated in a listening experiment in which
audible differences between tap test recordings from intact and defective
material were rated using a set of defined adjectives. As a result, acoustic
features, such as statistical moments of the spectrum, could be correlated
with defect-indicating responses. For a more detailed acoustic assessment,
further acoustic features were associated with different types of defects and
the effect of bearing on the acoustic profile (full blade vs. cut-out) was
investigated. Consequently, an informed algorithm incorporating the knowledge of inspectors will be proposed to support the automated determination
of defects in blades of wind power plants.
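The "statistical moments of the spectrum" mentioned above can be sketched as follows. This is a hypothetical illustration; the specific moments, mode frequencies, and decay constants here are invented for the sketch, not taken from the study:

```python
import numpy as np

def spectral_moments(x, fs):
    """First four statistical moments of the magnitude spectrum, treated as a
    distribution over frequency: centroid, spread, skewness, and kurtosis.
    Such features are candidate defect indicators for tap-test recordings."""
    spec = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    p = spec / np.sum(spec)                      # normalize to a distribution
    centroid = np.sum(freqs * p)
    spread = np.sqrt(np.sum((freqs - centroid) ** 2 * p))
    skew = np.sum(((freqs - centroid) / spread) ** 3 * p)
    kurt = np.sum(((freqs - centroid) / spread) ** 4 * p)
    return centroid, spread, skew, kurt

# Two synthetic "taps": a damped mode at 400 Hz vs. 250 Hz, mimicking the
# duller sound often heard when tapping over delaminated material.
fs = 16000
t = np.arange(int(0.1 * fs)) / fs
intact = np.sin(2 * np.pi * 400 * t) * np.exp(-t / 0.02)
defect = np.sin(2 * np.pi * 250 * t) * np.exp(-t / 0.02)
print(spectral_moments(intact, fs)[0])   # higher spectral centroid
print(spectral_moments(defect, fs)[0])   # lower spectral centroid
```

In an automated tap test, a classifier would operate on a vector of such features rather than on a single moment.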
4:00
4pSAa8. Non-destructive evaluation of adhesion quality in bi-material structures. Lynda Chehami and Nico Declercq (Georgia Tech Lorraine, 2 rue Marconi, Metz Technopole, Laboratoire Georgia Tech Lorraine, Metz 57070, France, chehamily@hotmail.fr)
The difficulty of applying ultrasonic techniques to bi-material structures
lies in the fact that different phenomena coexist, such as the non-linear
effects caused by the adhesion properties between matrix-fibers and the diffraction effect caused by the periodicity of those structures. This work deals
with the study of the combined effect of non-linear effects on a 3D bi-materials structure “polymer-titanium.” For this purpose, a Snapscan system is
used which generates high amplitude pulses and receives signals composed
of the fundamental mode and the higher harmonics. The transducers used
are chosen according to the thickness and the periodicity scale of the sample. First, measurements in transmission are made where the transmitted ultrasonic waves are measured by changing the angle between the receiver
and the sample. For each angle, a spectrogram is realized. The experimental
observations on the spectrograms show that for small incident amplitudes
we measure only the fundamental frequency. When the amplitude is gradually increased, higher harmonics are generated due to the non-linear stress-strain relationship. Then, reflection measurements were made to measure
the Bragg spectrum. The results show that the internal defects (due to lack
of adhesion) disturb the periodicity of the structure and thus the Bragg spectrum. This non-destructive technique can be a valuable aid to the automotive industry, for example, to control products during the manufacturing process.
4:20
4pSAa9. Optimization of time reversal focusing in resonant and nonresonant systems through various signal processing techniques. Sarah
M. Young, Matthew L. Willardson, Michael Denison, Trent Furlong, and
Brian E. Anderson (Phys. and Astronomy, Brigham Young Univ., ESC
N283, Provo, UT 84601, sarahmyoung24@gmail.com)
This research optimizes a time reversal focus of energy in terms of amplitude and quality by exploring several signal processing techniques
applied to the reversed impulse response. Techniques explored here include
deconvolution or inverse filtering [M. Tanter et al., J. Acoust. Soc. Am.
108, 223-234 (2000)], one-bit time reversal [A. Derode et al., J. Appl. Phys.
85, 6343-6352 (1999)], and a new technique termed “clipping.” A parameterization study comparing the maximum focal amplitude and focal temporal quality of time reversal focal signals for these techniques with different
3906
J. Acoust. Soc. Am., Vol. 141, No. 5, Pt. 2, May 2017
settings will be presented. Focal signals are explored in two different systems. The first is a resonant system comprising a steel sample with a piezoelectric transducer and a scanning laser Doppler vibrometer (SLDV). The
second, nominally non-resonant system, comprises a studio monitor loudspeaker and a precision microphone in a reverberant space above the
Schroeder frequency. Comparing the focal quality of various time reversal
techniques in two starkly different systems shows the optimal conditions for
focusing in each system.
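As a rough illustration of the techniques being compared, the sketch below (Python; the synthetic impulse response and the 30% clipping threshold are our own assumptions, not the authors' data) drives a simulated reverberant channel with standard, one-bit, and clipped time-reversed signals at equal peak amplitude and compares the resulting focal amplitudes.

```python
import numpy as np

# Synthetic impulse response of a reverberant channel: decaying noise.
# This stands in for a measured response; it is not the paper's data.
rng = np.random.default_rng(1)
h = rng.standard_normal(2000) * np.exp(-np.arange(2000) / 400.0)

tr = h[::-1]                                   # standard time reversal
one_bit = np.sign(tr)                          # one-bit time reversal
clip_level = 0.3 * np.abs(tr).max()            # assumed "clipping" threshold
clipped = np.clip(tr, -clip_level, clip_level)

peaks = {}
for name, src in [("TR", tr), ("one-bit", one_bit), ("clipped", clipped)]:
    src = src / np.abs(src).max()              # equal peak drive level
    focus = np.convolve(src, h)                # focal signal at the receiver
    peaks[name] = np.abs(focus).max()

# For a fixed peak drive amplitude, one-bit and clipped reversal trade
# temporal focal quality for higher focal amplitude.
print(peaks)
```

The deconvolution (inverse filter) variant is omitted here since it requires the full system transfer function rather than a simple nonlinearity on the reversed response.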
4:40
4pSAa10. An investigation of detection metrics for the remote acoustic
sensing of mechanical changes in a vibrating plate. Tyler J. Flynn and
David R. Dowling (Mech. Eng., Univ. of Michigan, Ann Arbor, 1231 Beal
Ave., Ann Arbor, MI 48109, tjayflyn@umich.edu)
Acoustic radiation from a mechanical structure due to broadband forcing
is inherently dependent on the structure’s material, geometry, and boundary
conditions. Measurements of the radiated field can be used to detect mechanical changes (i.e., defects) when compared to known baseline measurements of the same structure. However, there are many available options for
evaluating these changes, and each presents benefits and liabilities. In this
presentation, several detection metrics are evaluated for the remote acoustic
detection of mechanical changes in a 0.3-m-square by 3-mm-thick aluminum plate with 100 to 2000 Hz broadband forcing. The radiated acoustic
field from the vibrating plate is recorded with a 15-element receiver array
used to reconstruct an estimate of the plate’s acoustic response. This
response is then quantitatively compared to previous baseline responses
using cross-correlation, spectral analysis, wavelet analysis, frequency
response function analysis, and other relevant detection techniques. The performance of each detection metric is evaluated and ranked by considering
the minimum detectable size of a change, required a priori knowledge of the
system, and ease of implementation. The feasibility of classification of the
detected changes is also considered. [Sponsored by NAVSEA through the
NEEC, and by the US DoD through an NDSEG Fellowship.]
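A minimal sketch of one of the detection metrics mentioned above, a normalized cross-correlation against a baseline response, using made-up signals (a small frequency shift standing in for a defect) rather than the authors' measurements:

```python
import numpy as np

# Hypothetical baseline and test responses; a "defect" is mimicked by a
# small frequency shift of one component. All signals are invented.
rng = np.random.default_rng(2)
t = np.linspace(0.0, 1.0, 4000)
baseline = np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 880 * t)
healthy = baseline + 0.01 * rng.standard_normal(t.size)
damaged = np.sin(2 * np.pi * 452 * t) + 0.5 * np.sin(2 * np.pi * 880 * t)

def correlation_metric(x, ref):
    # Normalized zero-lag cross-correlation: 1.0 for identical signals,
    # dropping toward 0 as the response deviates from the baseline.
    x = x - x.mean()
    ref = ref - ref.mean()
    return float(np.dot(x, ref) / (np.linalg.norm(x) * np.linalg.norm(ref)))

print(correlation_metric(healthy, baseline))   # near 1: no change detected
print(correlation_metric(damaged, baseline))   # markedly lower: change flagged
```

A detection threshold on this metric is one of the design choices the abstract evaluates against spectral, wavelet, and frequency-response-function alternatives.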
5:00
4pSAa11. Acoustic waves in fluid-solid nested cylindrically layered
structures: Theoretical investigation. Yang Liu, Bikash K. Sinha, and
Smaine Zeroug (Math & Modeling, Schlumberger-Doll Res., 1 Hampshire
St., Cambridge, MA 02139, liuyang5199@gmail.com)
Understanding the complex elastic-wave physics prevalent in fluid-elastic cylindrically-layered structures is of importance in many NDE fields,
and most pertinently in the domain of well integrity evaluation in the oil and
gas industry. Well construction requires lowering steel casings into the hole
and cementing them to the wellbore as well as to each other so as to provide
mechanical support and enable control over the channeling of the produced
fluids—thus preventing unwanted leakage of hydrocarbons to the surface or
shallow aquifers. Evaluating whether cement has cured and sealed the
desired annuli between casing strings and between the outer string and rock
formation is important to ascertain before putting the well in production. In
this talk, we establish a mathematical framework to analyze the guided
wave fields in a dual-string system embedded in infinite media—a configuration that is of great significance in oil fields. We introduce a novel Sweeping Frequency Finite Element Modeling (SFFEM) method to investigate the
dispersions and modal characteristics of the complex propagating signals
synthesized over an axial array of receivers. The SFFEM provides a flexible framework to study the modal sensitivities in a multi-string system with arbitrary eccentricity, azimuthal heterogeneities, and partially bonded
interfaces.
5:20
4pSAa12. Acoustic waves in fluid-solid nested cylindrically layered
structures: Experimental validation. Yang Liu, Ralph M. D’Angelo,
Larry McGowan, Bikash K. Sinha, and Smaine Zeroug (Math & Modeling,
Schlumberger-Doll Res., 1 Hampshire St., Cambridge, MA 02139,
liuyang5199@gmail.com)
This abstract describes an experimental study that accompanies the theoretical development reported in a companion abstract “Acoustic waves in
fluid-solid nested cylindrically layered structures: theoretical investigation.”
Acoustics ’17 Boston
Scaled laboratory experiments are conducted to acquire reference data used
to verify the modeling approach developed to predict the guided modal
characteristics of axially-propagating waves in concentric and non-concentric cylindrical structures immersed in fluid. These structures simulate the
geometries and environment encountered downhole when conducting acoustical measurements to evaluate the cementation of oil and gas cased wells.
Typically, cement is expected to fill the annular space between the two
nested steel casing strings and between the outer string and rock formation.
Measurements are made in concentric, and non-concentric, dual-string steel
pipes at 1/4 scale that are suspended in an immersion tank. The acquired data
set is then analyzed for modal content using both Slowness-Time-Coherence
and modified Matrix Pencil methods and compared to theoretical predictions. A comparison of the experimental and numerical results indicates that
the Sweeping Frequency Finite Element Modeling (SFFEM) method is capable of accurately
reproducing the higher order multiple wave fields observed experimentally
in the fluid-filled double string geometries.
WEDNESDAY AFTERNOON, 28 JUNE 2017
ROOM 205, 4:15 P.M. TO 5:40 P.M.
Session 4pSAb
Structural Acoustics and Vibration: Probabilistic Finite Element Analysis and Uncertainty Quantification
in Vibro-acoustic Problems
Micah R. Shepherd, Cochair
Applied Research Lab, Penn State University, PO Box 30, mailstop 3220B, State College, PA 16801
Kheirollah Sepahvand, Cochair
Mechanical, Technical University of Munich, Boltzmannstraße 15, Garching bei Munich 85748, Germany
Chair’s Introduction—4:15
Invited Papers
4:20
4pSAb1. Stochastic finite element on vibration analysis of composite plates having random damping parameters. Kheirollah
Sepahvand (Mech., Tech. Univ. of Munich, Boltzmannstraße 15, Garching bei Munich 85748, Germany, k.sepahvand@tum.de) and
Steffen Marburg (Mech., Tech. Univ. of Munich, Garching bei München, Germany)
Damping parameters of composite structures possess significant uncertainty due to the structural complexity of such materials. Considering the parameters as random variables, this work uses the generalized polynomial chaos (gPC) expansion to capture the uncertainty
in the damping and in the vibration responses of structures. A non-sampling based stochastic finite element formulation for damped
vibration analysis is developed in which the gPC expansion is used to represent damping uncertainty and stochastic responses with
unknown deterministic functions. The constructed gPC expansions for the parameters are used as random inputs to the FEM model to realize the responses at a small number of collocation points generated in the random space. The realizations are then employed to estimate the unknown deterministic functions of the gPC expansion approximating the responses. The proposed method is demonstrated on sample fiber-reinforced composite plates with random modal damping parameters. Using only a few random collocation points, the method shows very good agreement with sampling-based Monte Carlo simulations employing a large number of realizations.
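The collocation idea can be sketched for a single random parameter. The numbers below are illustrative assumptions (a Gaussian modal damping ratio and a resonant-amplification surrogate), not the paper's composite-plate model:

```python
import math
import numpy as np
from numpy.polynomial.hermite_e import hermegauss, hermeval

# Non-sampling gPC sketch for one random damping ratio
# zeta(xi) = zeta0 * (1 + sigma * xi), xi ~ N(0, 1); the response
# surrogate is the resonant amplification Q(xi) = 1 / (2 * zeta(xi)).
zeta0, sigma = 0.02, 0.1
def q(xi):
    return 1.0 / (2.0 * zeta0 * (1.0 + sigma * xi))

# Collocate at Gauss-Hermite points and project the realizations onto
# probabilists' Hermite polynomials He_k to get the gPC coefficients.
order = 8
nodes, weights = hermegauss(order + 1)
weights = weights / weights.sum()              # normalize to a probability rule
coeffs = [
    np.sum(weights * q(nodes) * hermeval(nodes, [0.0] * k + [1.0]))
    / math.factorial(k)                        # E[He_k^2] = k!
    for k in range(order + 1)
]

# The zeroth coefficient is the gPC estimate of the mean response; it
# should agree closely with a large Monte Carlo sample.
gpc_mean = coeffs[0]
mc_mean = float(np.mean(q(np.random.default_rng(3).standard_normal(200_000))))
print(gpc_mean, mc_mean)
```

The point of the non-sampling approach is visible here: nine deterministic solves replace the 200,000 Monte Carlo realizations.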
4:40
4pSAb2. Random matrix theory and complexity in structural acoustics and vibrations. Richard Weaver (Dept. of Phys., Univ. of
Illinois, 1110 West Green St., Urbana, IL, r-weaver@uiuc.edu)
I present a short synopsis of (the Gaussian Orthogonal Ensemble, or GOE, of) Random Matrix Theory, and its relevance for Structural Acoustics. It is argued that, if a structure is generic, lacking special symmetries and with dynamics that depends on uncontrolled or
unknown details, i.e., is complicated, then its stiffness matrix behaves—for many purposes—as if it were drawn from the Gaussian Orthogonal
Ensemble of matrices. Such matrices are symmetric and real with uncorrelated Gaussian entries, and have no preferred directions, that
is—it is an ensemble whose statistics are invariant under orthogonal transformations. The consequences are several predictions for the
statistics of high frequency responses, including those due to level repulsion, spectral rigidity, and Gaussian random modal amplitudes,
predictions that appear to be well satisfied in practice. We discuss why and when the GOE should be relevant to real structures whose
stiffness matrices are clearly not, to most appearances, typical members of this ensemble.
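A minimal numerical sketch of the GOE construction and the level repulsion it predicts (the matrix size and seed below are arbitrary choices of ours):

```python
import numpy as np

# Draw a matrix from the Gaussian Orthogonal Ensemble (GOE) by
# symmetrizing an i.i.d. Gaussian matrix, then inspect adjacent
# eigenvalue spacings, where level repulsion shows up.
def sample_goe(n, rng):
    a = rng.standard_normal((n, n))
    return (a + a.T) / 2.0          # symmetric and real by construction

rng = np.random.default_rng(0)
h = sample_goe(500, rng)

eigs = np.sort(np.linalg.eigvalsh(h))
spacings = np.diff(eigs)
mean_spacing = spacings.mean()

# Level repulsion: the Wigner surmise vanishes linearly at zero spacing,
# so nearly degenerate eigenvalue pairs are rare.
frac_tiny = np.mean(spacings / mean_spacing < 0.05)
print(frac_tiny)
```

Comparing the histogram of normalized spacings against the Wigner surmise, and against the eigenfrequency spacings of a measured structure, is the standard diagnostic for the GOE hypothesis discussed in the talk.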
5:00
4pSAb3. Modal analysis of vehicle engine-transmission unit: Finite element model and experimental investigation. Patrick Langer
(Chair of VibroAcoust. of Vehicles and Machines, Tech. Univ. of Munich, Boltzmannstraße 15, Garching bei München 85748, Germany, p.langer@tum.de), Christian Guist (BMW Group, Munich, Germany), Kheirollah Sepahvand (Chair of VibroAcoust. of Vehicles
and Machines, Tech. Univ. of Munich, Garching bei Munich, Germany), and Steffen Marburg (Chair of VibroAcoust. of Vehicles and
Machines, Tech. Univ. of Munich, Garching bei München, Germany)
The scope of this work is to enhance the reliability of three-dimensional finite element (FE) models for modal analysis of vehicle engine-transmission units, taking uncertainties into account. Uncertainty sources may stem from system characteristics such as geometry, material behavior, boundary conditions, and mesh density, or from assumptions made during modeling. To this end, the
natural frequencies of the entire unit are computed numerically using a 3D FE model and compared to experimental results obtained
from the non-contact experimental modal analysis using Laser Doppler Vibrometer (LDV). Essential experiences and knowledge are
collected from preliminary experimental investigations on single and bolted-joint beam-like components. This helps to identify uncertainty
sources and limitations in FE modeling. Finally, uncertainties due to the initial torque of bolted connections and boundary conditions
associated with the unit are considered. The FE model is then updated to achieve accurate results relative to the experimental data. Beyond yielding a reasonably accurate model of the investigated system, the experience and results gained in this paper can be
employed as general guidance and references for FE modeling of real complex engineering systems.
Contributed Paper
5:20
4pSAb4. Modal response uncertainty in structural acoustic systems
using generalized polynomial chaos expansion. Andrew S. Wixom, Sheri
Martinelli, Micah R. Shepherd, Stephen Hambric, and Robert Campbell
(Appl. Res. Lab., Penn State Univ., P.O Box 30, M.S. 3220B, State College,
PA 16801, axw274@psu.edu)
When knowledge of material design parameters is lacking, it is important to understand how this uncertainty affects the system response. Generalized polynomial chaos (gPC) expansions provide a means for quantifying
this uncertainty and typically show good agreement with Monte Carlo techniques at a much reduced cost [K. Sepahvand et al. / Applied Acoustics 87
(2015) 23-29]. In this work, we apply gPC expansions to study the effects of
design parameter uncertainty on a structure’s modal characteristics—both
eigenvalues and eigenfunctions. The gPC expansions permit the propagation
of uncertainty from the design parameters to the modes and natural frequencies, which then characterize the vibration response of the system as a function of the random parameters. The response uncertainty can then be
described using the solutions of the deterministic system sampled carefully
over the parameter space. Uncertainty in the forcing function can also be
included in this formulation. Numerical calculations demonstrate these techniques to predict the structural acoustic uncertainty of a vibrating plate with
the results validated against Monte Carlo simulations. The effects of uncertain forcing functions will also be discussed.
WEDNESDAY AFTERNOON, 28 JUNE 2017
ROOM 312, 1:15 P.M. TO 5:20 P.M.
Session 4pSC
Speech Communication and Animal Bioacoustics: Measuring Speech Perception and Production Remotely:
Telehealth, Crowd-Sourcing, and Experiments over the Internet
Benjamin Munson, Cochair
University of Minnesota, 115 Shevlin Hall, Minneapolis, MN
Sebastian M€oller, Cochair
Quality and Usability Lab, TU Berlin, Sekr. TEL-18, Ernst-Reuter-Platz 7, Berlin 10587, Germany
Chair’s Introduction—1:15
Invited Papers
1:20
4pSC1. Using smartphone apps to map phonetic variation in British English, German, and Swiss German. Adrian Leemann
(Dept. of Linguist and English Lang., Lancaster Univ., University of Cambridge, Dept. of Theor. and Appl. Linguist, Sidgwick Ave.,
Cambridge, Cambridgeshire CB3 9DA, United Kingdom, al764@cam.ac.uk)
Traditional data collection methods in dialectology have difficulty in gathering sufficient quantities of data from a sufficient range of
localities to map phonetic variation at a national scale. By contrast, online surveys, whether browser-based or in the form of smartphone
apps, allow researchers to gather much larger quantities of data very quickly. We present results from data collected through such smartphone apps for 100K+ speakers of British English, German, and Swiss German. Our apps ask users a set of questions about their language use—such as whether they pronounce the ’u’ in ‘butter’ as /ʌ/ or /ʊ/ (eliciting the FOOT-STRUT split)—and use their responses to
predict their locality of origin. When selecting their regional variant, users essentially perform a perception task, as they listen to prerecorded items to make their decision. Using this data we can perform analyses of language change, comparing contemporary data with
historical surveys, such as the Survey of English Dialects from the 1950s. The second functionality allows users to provide production
data—recording ten sentences per speaker. Both the prediction and the recording functionality enable large-scale, unprecedented analyses
of national and regional phonetic variation. In this contribution, we further discuss the challenges and potential of this paradigm of data
collection.
1:40
4pSC2. Obtaining subjective ratings of voice likability through in-lab listening tests as opposed to mobile-based crowdsourcing.
Laura Fernandez Gallardo and Rafael Zequeira Jimenez (Quality and Usability Lab, Technische Universität Berlin, Ernst-Reuter-Platz
7, Berlin 10587, Germany, laura.fernandezgallardo@tu-berlin.de)
Micro-task crowdsourcing has emerged as a powerful approach for rapid collection of user input from a large set of participants at
low cost. While previous studies have investigated the acceptability of the crowdsourcing paradigm for obtaining reliable perceptual
scores of audio or video quality, this work examines the suitability of crowdsourcing to collect voice likability ratings. Voice likability,
or voice pleasantness, can be viewed as a speaker social characteristic that can determine the listener’s attitudes and decisions towards
the speaker and their message. The collection of valid voice likability labels is crucial for a successful automatic prediction of likability
from speech features. This work presents different auditory tests that collect likability ratings of a common set of 30 voices. These tests
are based on direct scaling and on paired-comparisons, and were conducted in the laboratory under controlled conditions—the typical
approach—and via crowdsourcing using micro-tasks. Design considerations are proposed for adapting the laboratory listening tests to a
mobile-based crowdsourcing platform. The likability scores obtained by the different test approaches are highly correlated. This outcome motivates the use of crowdsourcing for future listening tests, reducing the costs involved in engaging participants and administering the test on-site.
2:00
4pSC3. Influence of environmental background noise on a speech quality assessment task in a crowdsourcing microtask platform.
Babak Naderi, Sebastian Möller, Frank Neubert, Victor Höller, Friedemann Köster, and Laura Fernandez Gallardo (Quality and Usability Lab, Technische Universität Berlin, Ernst-Reuter-Platz 7, Berlin 10587, Germany, babak.naderi@tu-berlin.de)
It is important to determine in which environments a speech quality evaluation test can be carried out to achieve reliable results outside
the laboratory. We report on our current activity on using microphone signals for evaluating environmental conditions in crowdtesting.
In order to analyze the impact of environmental noise, a two-phase experiment is conducted using a mobile crowdsourcing platform. A
speech quality assessment task is used with stimuli from the SwissQual 501 database from the ITU-T Rec. P.863 competition (kindly
provided by SwissQual AG). The first phase of the experiment is conducted in a laboratory, in which participants are assigned to either
silent or background noise simulation (with Cafeteria and Road Noise) study groups. In each case 5 seconds of environmental sound are
recorded by the participants before and after performing the task. Next, the same participants were asked to perform the same task in a
real living room, cafeteria, or any other environment (documenting it with photos) and again to record the environmental noise.
Environments are labelled based on their influence on the result of speech quality assessments. Later, different features like loudness,
noisiness, etc. will be extracted from the recorded audio files and used to predict if the environment is suitable for performing a crowdtesting speech quality assessment.
2:20
4pSC4. Crowd-sourcing prosodic annotation. Jennifer Cole (Dept. of Linguist, Northwestern Univ., 2016 Sheridan Rd., Evanston, IL
60208, jennifer.cole1@northwestern.edu), Timothy Mahrt (Laboratoire Parole et Langage, Aix-Marseille Universite, Aix-en-Provence,
France), and Joseph Roy (Linguist, Univ. of Illinois, Urbana, IL)
Much of what is known about prosody is based on native-speaker intuitions of idealized speech, or on prosodic annotations from
expert annotators trained to interpret a visual display of f0. These approaches have been deployed to study prosody primarily in languages accessible to university researchers, and largely based on small, homogeneous speech samples from college-aged adult speakers.
We describe an alternative approach, with coarse-grained annotations collected from a cohort of untrained annotators performing real-time Rapid Prosody Transcription (RPT) using LMEDS, an open-source software tool we developed to enable large-scale, crowdsourced prosodic annotation over the internet. We compared nearly 100 lab-based and crowd-sourced RPT annotations for a 300-word,
multi-talker sample of conversational American English, with annotators from the same (US) vs. different (Indian) dialect groups.
Results show greater inter-annotator agreement for same-dialect annotators, and the best overall reliability from crowd-sourced US
annotators. Statistical models show that a common set of acoustic and contextual factors predict prominence and boundary labels for all
annotator groups. Overall, crowd-sourced prosodic annotation is shown to be efficient, and to rely on established cues to prosody, supporting its use for prosody research across languages, dialects, speaker populations, and speech genres.
2:40
4pSC5. Not-quite-naïve listeners: Students as an audience for gamified crowdsourcing. Tara McAllister Byun (Communicative Sci.
& Disord., New York Univ., 665 Broadway, New York, NY 10012, tara.byun@nyu.edu), Daphna Harel (Ctr. for the Promotion of Res.
Involving Innovative Statistical Methodology, New York Univ., New York, NY), Elaine R. Hitchcock (Commun. Sci. & Disord., Montclair State Univ., Pompton Plains, NJ), and Melissa Lopez (Commun. Sci. & Disord., Montclair State Univ., Montclair, NJ)
Collecting independent listeners’ judgments of speech accuracy/intelligibility is an essential component of research on speech disorders. Raters can be trained clinicians, students in speech-language pathology, or naïve listeners (now commonly recruited online via
crowdsourcing platforms such as Amazon Mechanical Turk/AMT). However, limited comparison data exist to guide researchers in
determining which rater population to use. We describe a study (Hitchcock et al., in prep) in which 2,256 tokens of English /r/ at the
word level, produced by five children receiving intervention for /r/ misarticulation, were rated in a binary fashion using the online platform Experigen. Raters were certified clinicians (n = 3), students in speech-language pathology (n = 9 unique listeners per token), or
naïve listeners recruited on AMT (n = 9 unique listeners per token). Interrater reliability was higher when comparing modal ratings between clinicians and students (Cohen’s kappa = .73, CI = .7-.77) than between clinicians and naïve listeners (Cohen’s kappa = .64, CI = .6-.68). However, students offered none of the speed/efficiency advantages available through AMT. We posit that students would be more
motivated to complete ratings if they received feedback on accuracy. As a future direction, we propose a hybrid crowdsourcing platform/educational game, designed to sharpen important perceptual skills while also motivating students to contribute valid ratings for
research data.
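For readers unfamiliar with the reliability statistic quoted above, a minimal sketch of Cohen's kappa for binary accuracy ratings (the rating vectors below are invented, not the study's data):

```python
import numpy as np

# Cohen's kappa: observed agreement corrected for chance agreement
# computed from each rater's marginal rate of "correct" labels.
def cohens_kappa(r1, r2):
    r1, r2 = np.asarray(r1), np.asarray(r2)
    p_observed = np.mean(r1 == r2)
    p1, p2 = r1.mean(), r2.mean()
    p_chance = p1 * p2 + (1 - p1) * (1 - p2)
    return (p_observed - p_chance) / (1 - p_chance)

# Hypothetical binary ratings (1 = accurate /r/) from two raters.
clinician = [1, 1, 0, 1, 0, 0, 1, 1, 0, 1]
student   = [1, 1, 0, 1, 0, 1, 1, 1, 0, 1]
print(round(cohens_kappa(clinician, student), 2))
```

Values around .6-.8, as reported in the abstract, are conventionally read as substantial agreement.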
3:00–3:20 Break
3:20
4pSC6. Speakers adapt speech based on perceived miscommunication. Esteban Buz (Dept. of Psych., Princeton Univ., Princeton, NJ
08540, ebuz@princeton.edu)
Recent work suggests that speakers can exaggerate their speech based on perceived communicative success (Buz et al., 2016). However, the flexibility and sophistication of this speech adaptation is relatively under-explored. We thus investigate whether this adaptation
can be responsive to context-specific demands (e.g., exaggerating speech in contexts with increased potential for miscommunication),
whether adaptation can target specific phonetic/acoustic properties (e.g., specifically exaggerating aspects of speech that may reduce
miscommunication), and whether this adaptation involves inference about the potential causes of miscommunication. We investigate
these questions in a web-based spoken communication game where we manipulate speakers’ perceived communicative success. We find
that speakers “smartly” adapt to miscommunication so as to increase their likelihood of subsequent communicative success. The
results argue for a speech production system that incorporates past experience and can adapt fine-grained properties of speech to better
achieve communicative goals.
3:40
4pSC7. Wildcat Voices: Remote recording using open web standards. Kevin B. McGowan and Jennifer Cramer (Linguist, Univ. of
Kentucky, 1415 Patterson Office Tower, Lexington, KY 40506, kbmcgowan@uky.edu)
The University of Kentucky is conducting a speech production study with the goal of recording every member of our roughly 29,000
graduate, undergraduate, and professional students–the ‘Wildcat Voices’ project: http://voices.uky.edu/. This talk describes the technology supporting this project, the strengths and shortcomings of this technology, and the challenges of crowd-sourcing a large database of
recordings. The web site is written using open web standards including HTML5 Audio and Javascript. Participants can record using software already on their computer, tablet, or cell phone. This arrangement minimizes support costs and ensures interoperability with sites
like Amazon’s Mechanical Turk that do not allow software downloads. The site presents a prompt, records using the device’s microphone, stores recordings in a database, and runs forced alignment to create a TextGrid object. Finally, this talk will present production
data from the Wildcat Voices project and discuss the challenges of reaching all of the target participants, managing the server and
backup infrastructure, and issues with recording quality that one does not face in a laboratory (computer fans, background noise, echo,
etc.). Remote collection of speech production data is not without its problems, but the benefits of scalability offered by crowd-sourcing
production studies are tremendous.
4:00
4pSC8. Crowdsourcing speech intelligibility judgments. Maria K. Wolters and Karl B. Isaac (School of Informatics, Univ. of Edinburgh, Rm. 4.32A, 10 Crichton St., Edinburgh EH8 9AB, United Kingdom, maria.wolters@ed.ac.uk)
When we crowdsource judgements about the intelligibility of speech stimuli, the results are similar to those obtained under laboratory conditions, but contain substantially more noise. In this talk, we report on the extent to which self-reported variables such as background noise, headphone type, and hearing ability explain some of this variation in listener judgements. Our data comes from a total of
7 data sets, 4 studies conducted for a PhD thesis (Isaac, 2015; n = 276) and 3 data sets collected from Mechanical Turk for the Blizzard
Challenges in 2009, 2010, and 2011 (King and Karaiskos 2009, 2010, 2011; n = 247). The statistical analysis, using generalised linear
mixed models, focuses on two outcome variables, word error rate (WER) and perceived difficulty in understanding the sentences. We
will discuss the implications of our findings for the design and analysis of large-scale crowdsourced speech intelligibility studies. This
discussion will be framed with reference to current best practice in crowdsourcing perception studies.
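The word error rate (WER) outcome variable can be sketched as a word-level edit distance. This is a generic implementation for illustration, not the authors' analysis code:

```python
# WER: Levenshtein distance between reference and hypothesis word
# sequences, divided by the reference length.
def wer(reference, hypothesis):
    ref, hyp = reference.split(), hypothesis.split()
    # dynamic-programming edit distance over words
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution or match
    return d[-1][-1] / len(ref)

print(wer("the cat sat on the mat", "the cat sat on a mat"))  # 1 error / 6 words
```

In the study above, per-listener WERs like these become the response variable in the generalised linear mixed models.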
4:20
4pSC9. Use of crowdsourcing platforms to examine listener perception of disordered speech. Kaitlin L. Lansford (School of Commun. Sci. and Disord., Florida State Univ., 201 W. Bloxham, Tallahassee, FL 32306, klansford@fsu.edu) and Stephanie A. Borrie
(Communicative Disord. and Deaf Education, Utah State Univ., Logan, UT)
Crowdsourcing websites, such as Amazon’s Mechanical Turk (MTurk), offer a promising platform for researchers to examine the
perceptual consequences of disordered speech production from a diverse sample of listeners. Although data collection via crowdsourcing
mechanisms is largely unconstrained, and not without its limitations, it is cost-effective, time-efficient, and has been demonstrated to
yield similar results to the same experiments conducted in the laboratory under the direct supervision of the researcher. For example, in
our recently published perceptual learning experiment, MTurk and laboratory participants demonstrated equivalent intelligibility
improvements following familiarization (perceptual training) with moderate hypokinetic dysarthric speech. Thus, in addition to supporting the ecological validity of perceptual training as a listener-focused means to reduce the intelligibility burden of dysarthria, these
results support the continued use of crowdsourcing platforms to address empirical questions related to listener perception of disordered
speech. The current presentation will review our previous findings of data equivalence between laboratory and MTurk perceptual experiments and will introduce the preliminary findings of an ongoing project in which perceptual data were collected exclusively from MTurk
participants to examine generalization effects of perceptual training with different types and severities of dysarthric speech.
4:40–5:20 Panel Discussion
WEDNESDAY AFTERNOON, 28 JUNE 2017
ROOM 302, 1:20 P.M. TO 3:20 P.M.
Session 4pSPa
Signal Processing in Acoustics, Underwater Acoustics, and Biomedical Acoustics: Sparse and Co-Prime
Array Processing II
Efren Fernandez-Grande, Cochair
Acoustic Technology, DTU - Technical University of Denmark, Ørsteds Plads, B. 352, DTU, Kgs. Lyngby DK-2800, Denmark
John R. Buck, Cochair
ECE, UMass Dartmouth, 285 Old Westport Road, North Dartmouth, MA 02747
Contributed Papers
1:20
4pSPa1. Comparing the effect of aperture extension on the peak sidelobe level of sparse arrays. Ferdousi Sabera Rawnaque (Elec. and Comput.
Eng., Univ. of Massachusetts Dartmouth, 285 Old Westport Rd, Dartmouth,
MA 02747, frawnaque@umassd.edu) and John R. Buck (Elec. and Comput.
Eng., Univ. of Massachusetts Dartmouth, North Dartmouth, MA)
This paper compares the performance of Uniform Linear Arrays (ULA),
Minimum Redundancy Arrays (MRA) and Co-prime Sensor Arrays (CSA)
in terms of the Peak Sidelobe Level (PSL) of their beampatterns. A ULA
distributes its sensor elements equidistantly on a line, achieving a PSL of -13.5 dB [Van Trees, 2002]. Sparse arrays span the same aperture as a
fully populated ULA with fewer sensors providing cost and computational
advantages but with higher PSLs. To span a given aperture, MRAs [Moffet,
1968] require the fewest sensors to include all the spatial correlation lags in
their co-array [Johnson and Dudgeon, 1993]. A CSA interleaves a pair of
ULAs undersampled by co-prime factors [Vaidyanathan and Pal, 2011]. A
CSA can be conventionally processed as a single non-uniform array or by
product processing of its subarrays. This paper shows that only the product-processed CSA sharply decreases its PSL with increasing aperture, eventually
matching ULA PSLs [Adhikari et al., 2014]. The MRA and linearly processed CSA PSLs remain unaffected by aperture extension, nearly equal to
each other and much higher than the ULA PSLs. Thus, the product-processed CSA has the best PSL performance among the considered extended
sparse arrays. [Work supported by ONR grant N00014-13-1-0230.]
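A minimal numerical sketch of product processing for a CSA (co-prime factors 3 and 5; the aperture, scan grid, and PSL estimator below are our own choices) showing how the product of the two subarray beampatterns suppresses their individual grating lobes:

```python
import numpy as np

# Co-prime subarrays: undersampled ULAs spanning the same aperture.
m, n = 3, 5
length = m * n * 4                     # aperture in half-wavelength units
pos_a = np.arange(0, length + 1, m)    # subarray undersampled by M
pos_b = np.arange(0, length + 1, n)    # subarray undersampled by N

u = np.linspace(-1, 1, 4001)           # cos(angle) scan grid
def beampattern(pos):
    # Conventional (delay-and-sum) beampattern steered to broadside.
    a = np.exp(1j * np.pi * np.outer(u, pos))
    return np.abs(a.sum(axis=1)) / len(pos)

bp_product = beampattern(pos_a) * beampattern(pos_b)  # product processing

def peak_sidelobe_db(bp):
    # PSL: largest response outside the mainlobe, relative to the peak.
    main = np.argmax(bp)
    left = main
    while left > 0 and bp[left - 1] < bp[left]:
        left -= 1                       # walk down to the left null
    right = main
    while right < bp.size - 1 and bp[right + 1] < bp[right]:
        right += 1                      # walk down to the right null
    side = np.concatenate([bp[:left], bp[right + 1:]])
    return 20 * np.log10(side.max() / bp[main])

psl_product = peak_sidelobe_db(bp_product)
print(psl_product)
```

Either subarray alone has grating lobes essentially at the mainlobe level; because the factors are co-prime, those lobes never coincide, so the product pattern's PSL falls well below them.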
1:40
4pSPa2. Extending the usable bandwidth of an acoustic beamforming
array using phase unwrapping and array interpolation. Caleb B. Goates,
Blaine M. Harker, Kent L. Gee, and Tracianne B. Neilsen (Brigham Young
Univ., N283 ESC, Provo, UT 84602, calebgoates@gmail.com)
The response of a frequency-domain beamformer, which is based on the
beamforming array cross-spectral matrix, is usually limited by the spatial
Nyquist frequency. This paper presents a method for overcoming grating
lobes in beamforming on broadband signals using phase unwrapping and
array interpolation. When the phase of each cross spectrum is successfully
unwrapped across frequency, spatial array interpolation can be performed
on both the magnitude and unwrapped phase of the cross spectral matrix at
each frequency. This process can approximate the response of a dense array
from one that is undersampled. For cases where the cross-spectral magnitude and phase vary smoothly, interpolation is straightforward, even above
the spatial Nyquist frequency. Two example applications from anechoic
measurements are presented: localization of a single broadband source, and
characterization of a broadband source whose location changes with frequency. It is found that grating lobes are suppressed and the source is localized at frequencies up to at least eight times the spatial Nyquist frequency
for these cases. [Work supported by the National Science Foundation and
the Office of Naval Research.]
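The unwrap-then-interpolate idea rests on the cross-spectral phase varying smoothly with frequency. A minimal sketch with a hypothetical inter-sensor delay (the delay and frequency band are invented for illustration):

```python
import numpy as np

# For a propagating broadband source, the cross-spectral phase between
# two sensors is linear in frequency; the FFT only returns it wrapped
# into (-pi, pi]. Unwrapping recovers the linear trend, after which the
# cross-spectrum can be interpolated like any smooth function.
freqs = np.linspace(100.0, 5000.0, 200)
delay = 2.1e-3                                # hypothetical delay, seconds
phase_true = -2 * np.pi * freqs * delay
wrapped = np.angle(np.exp(1j * phase_true))   # what the cross spectrum gives

unwrapped = np.unwrap(wrapped)                # recover the linear trend
# With smooth magnitude and unwrapped phase, spatial interpolation of
# the cross-spectral matrix reduces to interpolating two real fields.
print(np.allclose(unwrapped, phase_true))
```

Unwrapping succeeds here because the phase step between adjacent frequency bins stays below pi; densely sampling in frequency plays the same enabling role in the measurement described above.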
2:00
4pSPa3. Synthesis of non-uniformly spaced linear antenna arrays using
a data-driven probabilistic model. Nicholas Misiunas (ECE, Univ. of
Massachusetts, Lowell, 125 Heath St., Tewksbury, MA 01854, nmisiuna@purdue.edu), Kavitha Chandra, and Charles Thompson (ECE, Univ. of Massachusetts, Lowell, Lowell, MA)
The problem of beam-forming from non-uniformly spaced antenna elements on linear and planar arrays is investigated. The objective is to design
a probabilistic model for identifying the locations of a minimal set of elements that collaboratively generate a beam that matches specified beam
width and side lobe level. A data-driven approach is undertaken with several
thousand viable element positions obtained computationally using the firefly
metaheuristic optimization algorithm, where each firefly models an antenna
array with a fixed number of elements. Analysis of the inter-element spacings of the
viable positions shows that whereas all spacings exhibit some degree of dependence on neighboring spacings, the region at the end of the array
exhibits a higher correlation radius. Accurate modeling of this section is important for minimizing variability in generated random beam forms. A sequential regression model is proposed to predict successive inter-element
spacings beginning with the boundary of the aperture. Application of the aggregate of previously predicted spacings as the predictor allows one to transform the non-linear structure to a linear predictive model. Arrays
synthesized by this model yield good performance, with nearly 90% of
generated array beam patterns satisfying required beam width and side lobe
level.
2:20
4pSPa4. Localization and tracking performance of a stationary compact
array of synchronized hydrophones. Ildar R. Urazghildiiev (JASCO Appl.
Sci. (Alaska) Inc., 19 Muriel St., Ithaca, NY 14850, ildar.urazghildiiev@
jasco.com), David E. Hannay (JASCO Appl. Sci., Victoria, BC, Canada),
and John Moloney (JASCO Appl. Sci., Dartmouth, NS, Canada)
Stationary compact arrays of synchronized hydrophones are an efficient
tool to evaluate the azimuth and elevation angles and positions of marine
mammals, vessels, and other sources that produce detectable sounds. A
compact array designed by JASCO Applied Sciences (Canada) Ltd. was
used to estimate the azimuth and elevation angles of ship noise and marine
mammal calls and to estimate positions and to track sources from bearing
only measurements provided by both single and multiple arrays. Array performance was tested with various sources that transmitted impulsive and
continuous sounds; GPS coordinates were known for all sources. Bearing
and position accuracy as functions of sound bandwidth, duration, and other
parameters, were estimated. Test results demonstrated that the array’s
accuracy came close to the Cramer-Rao bounds. In in situ tests, correlated
bearing errors were observed. Refraction, surface and bottom reflections and
other unpredictable sound propagation effects caused most of the bearing
errors. The source position and heading angle estimation accuracy was evaluated using the array deployed in the Strait of Georgia, BC, Canada. Test
results demonstrated that the array can provide the highest possible accuracy
and can be used in various applications involving long-term passive acoustic
monitoring of large areas.
2:40
4pSPa5. Wavefield separation projector processing with a prioriknowledge
for acoustic source localization in rooms. Julien de Rosny, Thibault Nowakowski, and Laurent Daudet (ESPCI Paris, PSL Res. Univ., CNRS, Institut
Langevin, 1 rue Jussieu, Paris 75005, France, julien.derosny@espci.fr)
In bounded media, the wavefield separation projector is an array processing
technique used to extract the direct path from the reverberated ones. Associated with a
simple sparse localization algorithm, it makes an attractive method for
source localization in rooms of unknown shape. However, the large number
of microphones required by the method can prevent its applicability. In this
work, we propose adding three different types of a priori knowledge to dramatically
decrease the number of microphones. First, an estimate of the critical distance reduces the required rank of the projection operator and therefore the number of receivers. The second method increases the number of
receivers thanks to “virtual measurements” on the boundaries, when the
room geometry is partially known. Finally, the last method requires a simple
calibration step based on the passive recovering of the Green’s functions
between all the pairs of microphones, which also extends the model to
weakly inhomogeneous propagation media. The properties of the three
methods are discussed. We show numerically and experimentally that these
methods lead to a precise source localization, with a moderate number of
microphones.
3:00
4pSPa6. Simulations of source localization in the deep ocean using frequency-difference matched field processing. David J. Geroski (Appl.
Phys., Univ. of Michigan – Ann Arbor, Randall Lab., 450 Church St., Ann
Arbor, MI 48109, geroskdj@umich.edu) and David R. Dowling (Mech.
Eng., Univ. of Michigan – Ann Arbor, Ann Arbor, MI)
Matched field processing (MFP) is an established technique for source
localization in multi-path acoustic environments that relies on correlating
array-recorded and simulated sound fields. However, when the recorded
sound field’s frequency is high enough, this field may be sensitive to details
of the acoustic environment that are uncertain or unknown, and therefore
not included in the simulated sound field. Thus, the actual and modeled
acoustic fields are mismatched, the severity of this mismatch increases with
frequency, and may cause MFP to fail at relevant frequencies and ranges in
the deep ocean. One remedy to this problem may be analyzing the frequency-difference autoproduct of the acoustic field instead of analyzing the
field itself. Thus, this presentation covers the simulated performance, and
likely limitations, of frequency difference MFP [Worthmann and Dowling
(2015). J. Acoust. Soc. Am. 138, 3549-3562] in a generic deep-ocean channel using a ray code (Bellhop) for the source-broadcast field in the signal
bandwidth and a mode code (Kraken) for the replica calculations in the difference-frequency bandwidth. Here, refractive index fluctuations in the
ocean are modeled by random time delays for each ray-path between the
source and each receiver. [Sponsored by ONR.]
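The benefit of working with the autoproduct can be illustrated with a toy two-path field (the delays and frequencies below are invented, not the Bellhop/Kraken setup of the abstract): a small common perturbation of the path delays rotates the field's phase in proportion to the signal frequency, but rotates the autoproduct's phase only in proportion to the much lower difference frequency.

```python
import numpy as np

# Toy two-path pressure field: P(f) = sum over paths of exp(-2*pi*i*f*t_path).
t_paths = np.array([0.70, 0.73])               # two ray-path delays, seconds

def field(f, dt=0.0):
    """Two-path field with a common delay perturbation dt (models mismatch)."""
    return np.sum(np.exp(-2j * np.pi * f * (t_paths + dt)))

f, df, dt = 2000.0, 50.0, 1e-5                 # carrier, difference freq, delay jitter

# Phase error the perturbation induces in the field itself at frequency f ...
err_field = np.angle(field(f, dt) / field(f))

# ... versus in the frequency-difference autoproduct at difference frequency df.
ap = field(f + df / 2, dt) * np.conj(field(f - df / 2, dt))
ap0 = field(f + df / 2) * np.conj(field(f - df / 2))
err_ap = np.angle(ap / ap0)
```

The autoproduct's phase error is smaller by the ratio df/f, which is why replica calculations at the difference frequency can tolerate environmental mismatch that defeats conventional MFP at the signal frequency.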
WEDNESDAY AFTERNOON, 28 JUNE 2017
BALLROOM C, 1:20 P.M. TO 4:20 P.M.
Session 4pSPb
Signal Processing in Acoustics: Topics in Signal Processing in Acoustics (Poster Session)
Hongya Ge, Cochair
Electrical & Computer Engineering, New Jersey Institute of Technology, University Heights, Newark, NJ 07102
Sean A. Fulop, Cochair
Linguistics, California State University Fresno, 5245 N Backer Ave, PB92, Fresno, CA 93740-8001
Contributed Papers
4pSPb1. Estimation of speech source direction for hearing aid application using a smartphone. Issa M. Panahi, Nasser Kehtarnavaz (Elec. Eng.,
Univ. of Texas at Dallas, EC33, 800 West Campbell Rd., Richardson, TX
75080, issa.panahi@utdallas.edu), and Linda Thibodeau (BBS, Univ. of
Texas at Dallas, Richardson, TX)
In our hearing aid research project funded by NIH-NIDCD, we utilize
the smartphone and its features as a powerful platform to implement complex
signal processing algorithms that can improve hearing aid applications.
Among the required algorithms is finding the direction of arrival of speech
source signals, which can improve the performance of hearing aid devices
for their users. In this poster presentation, a signal processing algorithm for
estimating the speech source direction and its real-time implementation on
an Android smartphone are demonstrated. The visual display of source
direction on the smartphone panel can be used to align the phone and its two
microphones in the direction of the source to enhance the signal-to-noise
ratio and improve the reception of the speech signal. A demo unit will be presented showing the real-time operation of the proposed method on an
Android smartphone.
4pSPb2. Automatic recognition of accessible pedestrian signals. Arturo
Camacho (School of Comput. Sci., Univ. of Costa Rica, San Pedro, San
Jose, Costa Rica), Sebastian Ruiz Blais, and Juan M. Fonseca Solís (Res.
Ctr. on Information and Commun. Technologies, Univ. of Costa Rica, San
Pedro, Montes de Oca, San Jose 11501, Costa Rica, juan.fonsecasolis@ucr.ac.cr)
Accessible pedestrian signals (APS) enhance accessibility in streets
around the world. Recent attempts to extend the use of APS to people with
visual and audible impairments have emerged from the area of audio signal
processing. Even though few authors have studied the recognition of APS
by sound, a comprehensive literature in biology has been published on recognizing other simple sounds such as bird and frog calls. Since these calls exhibit the same periodic and modulated nature as APS, many of the existing
approaches can be adapted for this purpose. We present an algorithm that
uses this approach. The algorithm was evaluated using a collection of 79 recordings gathered from streets in San Jose, Costa Rica, where
the solution will be implemented. Three types of sounds are available: a
low-pitch chirp, a high-pitch chirp, and a cuckoo-like sound. The results showed a
precision of 87%, a specificity of 83%, a sensitivity of 86%, and an F-measure of 85%.
All posters will be on display from 1:20 p.m. to 4:20 p.m. To allow contributors in this session to see the other posters, authors of odd-numbered papers will be at their posters from 1:20 p.m. to 2:40 p.m. and authors of even-numbered papers will be at their posters from
2:50 p.m. to 4:20 p.m.
4pSPb3. Toward the assessment of walls’ acoustic impedances from the
analysis of first and second order reflections based on room impulse
responses. Helena Peic Tukuljac, Thach Pham Vu, Hervé Lissek, and Pierre
Vandergheynst (École Polytechnique Fédérale de Lausanne, Rue Saint Laurent 33, Lausanne 1003, Switzerland, hela_su@yahoo.com)
In order to address optimal acoustic control in rooms, an accurate characterization of the room shape and wall properties is required. There are
only a few approaches that model the wall impedances. Most of them rely
on finite difference time domain methods, which are limited to shoebox-shaped rooms and only valid at low frequencies (non-rectangular rooms and
high frequencies lead to extremely high computational complexity). In order
to overcome these limits, we propose the estimation of walls’ acoustic impedances based on the analysis of the room impulse responses. In room
impulse responses, the early reflections’ amplitudes are proportional to the
reflection coefficient of the corresponding wall element. The location of
points on the walls (origins) for the first and second order echoes can be easily determined for a known room. The values of impedances at the origins
of the first order echoes are determined directly from the room impulse
response. The second order echoes’ represent a product of the influence of
wall points that belong to its path. Values of individual impedances are
extracted from these echoes by overlapping origins of first order echoes of
one receiver’s position with one origin of the second order echoes of another
receiver’s position.
4pSPb4. Experimental results on acoustic communication through drill
strings using a strain sensor receiver. Ali H. Alenezi and Ali Abdi (Elec.
and Comput. Eng., New Jersey Inst. of Technol., Dept. of Elec. & Comp.
Eng., 323 King Blvd., Newark, NJ 07102-1982, aha36@njit.edu)
Drilling oil wells is a costly, complex and high risk operation, especially
for many wells whose depths can be at least several thousand feet. To minimize the cost and risk involved in drilling, typically there are several sensors
mounted near the drill bit, to measure important parameters such as temperature and pressure around the drill bit. The collected information needs to
be sent to the surface, to assist the driller with controlling and steering the
drill bit. Compared to the use of highly expensive and vulnerable cables for
wired communication in deep oil wells, or very low rate communication
using pulses of mud in wells, transmission of information using acoustic signals through the drill string is a feasible and promising method. In this paper,
we use a drill string acoustic communication testbed, where the receiver is a
strain sensor. This sensor measures local fractional displacements due to
received vibrations. Using experimental data, we study key characteristics
of a drill string channel impulse response, sensed by a strain sensor. We also
demonstrate how the strain channel response structure can control the communication system performance. [This work was supported in part by the
National Science Foundation (NSF), Grant IIP-1340415.]
4pSPb5. Coherence-based phase unwrapping for broadband signals.
Mylan R. Cook, Kent L. Gee, Scott D. Sommerfeldt, and Tracianne B. Neilsen (Brigham Young Univ., N201 ESC, Provo, UT 84602, mylan.cook@gmail.com)
In this paper, a coherence-based method for unwrapping the relative
phase between microphones is investigated. For broadband signals, this
method has the potential to lead to more accurate intensity vector estimates using the Phase and Amplitude Gradient Estimator (PAGE) method
[D. C. Thomas et al., J. Acoust. Soc. Am. 137, 3366-3376 (2015)]. Simple
unwrapping methods function by detecting phase jumps above a threshold,
which works well for frequencies associated with high signal coherence.
However, since unwrapping for these methods is triggered by only one previous frequency data point, frequency regimes of low coherence are often
unwrapped incorrectly. By including coherence in a phase unwrapping algorithm, these errors can be reduced. Regions of relatively low coherence are
given less weight in phase unwrapping, and are checked for unwrapping
errors. For broadband signals with continuous relative phase, using both the
coherence and multiple data points to unwrap, frequencies associated with
low coherence result in fewer unwrapping errors. Phase values for jet noise
data with low coherence (<0.1) have been successfully unwrapped using
this method, and have resulted in more reliable PAGE intensity estimates.
This paper also investigates unwrapping in interference nulls produced by
coherent, radiating sources. [Funded by the NSF.]
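One way to sketch the idea (a simplification for illustration, not the authors' algorithm): trust only frequency bins whose coherence clears a threshold, unwrap across those bins, and interpolate that trend back onto the low-coherence bins rather than letting their noisy phases trigger spurious 2π jumps.

```python
import numpy as np

def coherence_weighted_unwrap(phase, coh, threshold=0.5):
    """Unwrap using only high-coherence bins; fill the rest by interpolation."""
    good = coh >= threshold
    unwrapped = np.unwrap(phase[good])          # unwrap across trusted bins only
    idx = np.flatnonzero(good)
    return np.interp(np.arange(phase.size), idx, unwrapped)

rng = np.random.default_rng(1)
n = 500
true_phase = np.linspace(0.0, 15.0, n)          # smooth underlying relative phase
coh = np.ones(n)
coh[200:260] = 0.05                             # a low-coherence region (e.g., a null)
noise = (1 - coh) * rng.uniform(-np.pi, np.pi, n)   # heavy phase noise where coh is low
measured = np.angle(np.exp(1j * (true_phase + noise)))

naive = np.unwrap(measured)                     # triggered by single-bin jumps
weighted = coherence_weighted_unwrap(measured, coh)

err_naive = np.abs(naive - true_phase).max()
err_weighted = np.abs(weighted - true_phase).max()
```

The naive unwrap is corrupted inside (and potentially after) the noisy region, while the coherence-gated version recovers the smooth trend; a real implementation would also re-check the filled bins for residual 2π errors, as the abstract describes.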
4pSPb6. Pitch pattern matching based speech enhancement. Dongmei
Wang and John H. L. Hansen (Elec. Eng., Univ. of Texas at Dallas, 800
West Campbell Rd., ECSN 4.414, Richardson, TX 75080, dongmei.wang@
utdallas.edu)
The problem of speech enhancement in diverse noisy conditions has historically focused on the vocal tract spectral magnitude. However, studies
have shown that improved quality, speaking style and speaker identity are
all impacted by reliable prosody/F0 (pitch) information for human listening.
In this study, we propose a speech enhancement algorithm based on pitch
pattern matching. It can be considered as an example-based method since
we attempt to replace the speech segment in the noisy speech with corresponding detected components from a dictionary which contains the clean
speech signals. The average pitch value as well as overall pitch dynamic
trends are used as features for pitch pattern matching. The speech segment
in the dictionary with the best matched pitch pattern feature will be used to
assist in the computation of an enhanced speech segment in the noisy speech
sample. Here, a Wiener filter is used to obtain the target baseline speech
from the noisy speech. The experimental results show that the pitch pattern
feature is more computationally efficient than a spectral-based feature alone
for speech enhancement, while obtaining similar speech enhancement performance in terms of both speech intelligibility and speech quality.
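For reference, the Wiener-filter baseline mentioned above can be sketched in a single frame. The tone "speech" signal and the assumption of a known noise PSD are illustrative simplifications, not the authors' system.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 512
clean = np.sin(2 * np.pi * 32 * np.arange(n) / n)   # toy "speech": one on-bin tone
noise = 0.5 * rng.standard_normal(n)
noisy = clean + noise

X = np.fft.rfft(noisy)
noise_psd = np.full(X.size, 0.25 * n)               # E|N_k|^2 = sigma^2 * n, known here

# Wiener gain from the power spectra, with a small spectral floor.
snr_prior = np.maximum(np.abs(X) ** 2 - noise_psd, 0.0) / noise_psd
gain = np.maximum(snr_prior / (1.0 + snr_prior), 1e-3)
enhanced = np.fft.irfft(gain * X, n)

err_noisy = np.mean((noisy - clean) ** 2)
err_enh = np.mean((enhanced - clean) ** 2)
```

In practice the noise PSD must itself be estimated frame by frame, which is where methods like the pitch-pattern matching above come in.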
4pSPb7. Acoustic transfer characteristics study through finger vibrations using a wrist vibrator. Hyung Woo Park and Myungjin Bae (IT,
SoongSil Univ., 1212 Hyungham Eng. Building, 369 Snagdo-Ro, Dongjak-Gu, Seoul 06978, South Korea, pphw@ssu.ac.kr)
These days, people are exposed to many kinds of noise, from machinery, aircraft, construction sites, or road traffic; noise has become part of
modern life. In this study, we confirm the characteristics of acoustic transmission through the finger in a noisy environment. To investigate how
sound is transferred to the ear, we measure the vibration from a wrist actuator at the fingertips to determine the mechanisms
of transmission. We use a bone-conduction equivalent circuit for
compensation relative to normal hearing and estimate the acoustic characteristics.
To test the proposed mechanism, we set up a noise environment with
monitor speakers in a semi-anechoic room and checked the acoustic transfer
performance.
4pSPb8. Parameter optimization for a kernel confidence measure of acoustic fault samples. Na Wei (Key Lab. of Modern Acoust. and Inst. of Acoust.,
Nanjing Univ., 22 Hankou Rd., Gulou District, Nanjing 210093, China,
weina1223@126.com) and Linke Zhang (School of Energy and Power Eng.,
Wuhan Univ. of Technol., Wuhan, Hubei, China)
Sample expansion is an effective way to solve the incomplete-samples
problem in acoustic fault source identification, and high-confidence
expanded samples contribute to classifier performance. Previously, a kernel-based confidence measure (KBCM) was introduced to measure the confidence of expanded
acoustic samples. The kernel parameter is key to maintaining
the performance of the KBCM algorithm. A novel parameter optimization criterion for the Gaussian function that takes sample confidence into account is proposed. First, the expanded samples with confidence greater than 0.5 are selected
and combined with the real samples into a new training set.
Then, based on this new set, an optimization objective function that maximizes
between-class separability and minimizes within-class separability is
put forward. Experiments on two-dimensional normal distribution data and
multi-dimensional noise source data verify the efficiency of the proposed
method.
4pSPb9. Bearing of unmanned aerial vehicles by a volumetric microphone array. Miriam Haege and Marc Oispuu (Sensor Data and Information Fusion, Fraunhofer Inst. for Commun., Information Processing and
Ergonomics, Fraunhoferstrasse 20, Wachtberg 53343, Germany, miriam.
haege@fkie.fraunhofer.de)
The technical abilities of Unmanned Aerial Vehicles (UAV, here also
denoted as drones) are steadily increasing. UAVs are often equipped with
state-of-the-art technology which enables illegal surveillance and mapping
of terrains and buildings as well as smuggling or even terrorism. For this
reason, sensor systems are required which detect the intrusion of incoming
drones in safety-relevant areas at an early stage. Beside other means, acoustical sensors can be used to detect approaching drones. In this paper we will
present a volumetric array consisiting of 12 microphones, which enables to
monitor the upper half sphere above the array. Efficient bearing algorithms
like coherent or incoherent beamforming are employed to determine the
direction of arrival (azimuth and elevation) of UAVs. These algorithms
have also been tested within the framework of practical experiments. For
this purpose, different types of UAVs flying on various courses around the
microphone array were used. The performance of the applied algorithms as
well as the microphone array could be successfully confirmed.
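A coherent (delay-and-sum) bearing scan over azimuth and elevation can be sketched as follows; the 12-element geometry, frequency, and grid below are hypothetical stand-ins for the array described above.

```python
import numpy as np

c, f = 343.0, 1500.0
rng = np.random.default_rng(3)
pos = rng.uniform(-0.5, 0.5, (12, 3))          # 12 mics scattered in a 1 m cube

def steering(az, el):
    """Narrowband steering vector toward azimuth az, elevation el (radians)."""
    u = np.array([np.cos(el) * np.cos(az), np.cos(el) * np.sin(az), np.sin(el)])
    return np.exp(-2j * np.pi * f * (pos @ u) / c)

true_az, true_el = np.deg2rad(40.0), np.deg2rad(24.0)
snap = steering(true_az, true_el)              # noiseless single-frequency snapshot

# Scan the upper half sphere and pick the beam with maximum output power.
az_grid = np.deg2rad(np.arange(0.0, 360.0, 2.0))
el_grid = np.deg2rad(np.arange(0.0, 90.0, 2.0))
power = np.array([[np.abs(np.vdot(steering(a, e), snap)) for a in az_grid]
                  for e in el_grid])
ei, ai = np.unravel_index(np.argmax(power), power.shape)
est_az, est_el = np.rad2deg(az_grid[ai]), np.rad2deg(el_grid[ei])
```

With real data the snapshot would be replaced by band-filtered microphone spectra, and an incoherent variant would sum beam powers across frequency bins.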
4pSPb10. Spatial focusing of a one-channel time-reversal acoustic mirror in an underground water storage tank at audible frequencies. Gaston Maffei, Gabriel Scoccola, Diego A. Wisniacki (Phys., Universidad de
Buenos Aires, Buenos Aires, Argentina), Ignacio Spiousas, Alejo Alberti,
and Manuel C. Eguia (LAPSo, Universidad Nacional de Quilmes, R S Pena,
352, Bernal, Buenos Aires 1876, Argentina, meguia@unq.edu.ar)
We implemented a one-channel time-reversal acoustic mirror for a wideband (30 Hz to 20 kHz) impulse in a 30 × 5 m (diameter × height)
empty cylindrical tank made of concrete, with columns and beams. The
time-reversed pulse (or equivalently, the autocorrelation of the impulse
response) peaks 30 dB above the temporal sidelobes. We measured the two-dimensional spatio-temporal focusing pattern of the time-reversal process
with a 7.5 cm grid resolution and observed a circular converging/diverging
wave. The wide-band time-reversed pulse peak decays at the level of the
sidelobes around 70 cm away from the focusing point. We also analyzed
this focal spot width as a function of frequency and compared the results
with a numerical simulation of the cavity using a finite element method.
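The temporal focusing contrast quoted above (peak versus temporal sidelobes) is the impulse response's autocorrelation contrast, which even a toy synthetic reverberant IR reproduces; the exponentially decaying noise IR below is illustrative, not measured data.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 8000
# Synthetic reverberant impulse response: white noise with exponential decay.
ir = rng.standard_normal(n) * np.exp(-np.arange(n) / 2000.0)

# One-channel time reversal re-emits ir reversed; at the source this is
# equivalent to the autocorrelation of the impulse response.
focus = np.correlate(ir, ir, mode="full")
peak = focus[n - 1]                            # zero-lag focusing peak
sidelobes = np.delete(np.abs(focus), n - 1).max()
psr_db = 20 * np.log10(peak / sidelobes)
```

The peak-to-sidelobe ratio grows with the number of effectively independent samples in the IR, i.e., with bandwidth times reverberation time, which is why the wideband experiment above achieves ~30 dB of temporal contrast.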
4pSPb11. Impact of random failures and tolerances on ensemble average
array beampattern and array gain. Evan F. Berkman (Appl. Physical Sci.
Inc., 49 Waltham St., Lexington, MA 02421, fberkman@aphysci.com)
Analytic expressions for expected array beampattern with random element failures and random perturbations in element amplitude and phase
response have been long available for arrays of omnidirectional elements.
This work extends prior results to address random failures and tolerances of
directive as well as omni-directional elements in curved as well as rectilinear
array geometries and to blocks or groups of elements as well as single elements. The analytic result for both failures and multiple types of tolerances is
concisely expressed in a single simple universal expression with explicit parametric dependencies. This expression is amenable to efficient numerical evaluation and quantitative results are trivially scaled to any combination of
failure probabilities and standard deviation of various types of tolerances.
Insightful interpretation of the results in terms of response of an ideal array
plus a perturbation array is provided. The impact of failures and tolerances on
array beampattern is translated into similarly simple expressions for array noise
response and array gain for an incident diffuse acoustic noise field of arbitrary
directionality. The emphasis on concise physically interpretable expressions
is due to my training as a young engineer by Ira Dyer and the group of outstanding scientists and engineers he assembled at BBN.
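For the omnidirectional special case, the flavor of such a closed-form expected beampattern can be checked numerically: with i.i.d. element survival probability p, the expected power pattern is E|B|² = p²(|B₀|² − N) + pN, which Monte Carlo trials reproduce. The ULA size and p below are arbitrary; the full result in the abstract also covers directive elements, curved geometries, and amplitude/phase tolerances.

```python
import numpy as np

rng = np.random.default_rng(5)
N, p = 16, 0.8
u = np.linspace(-1, 1, 181)
# Half-wavelength-spaced ULA element phasors for each look direction u.
phases = np.exp(1j * np.pi * np.outer(np.arange(N), u))

ideal = np.abs(phases.sum(axis=0)) ** 2        # |B0(u)|^2, no failures
analytic = p**2 * (ideal - N) + p * N          # classic expected power pattern

trials = 4000
acc = np.zeros_like(u)
for _ in range(trials):
    alive = rng.random(N) < p                  # independent Bernoulli(p) failures
    acc += np.abs(phases[alive].sum(axis=0)) ** 2
mc = acc / trials

rel_err = np.max(np.abs(mc - analytic) / analytic.max())
```

Note the failure term pN acts like an added flat "noise floor" on the pattern, which is exactly the ideal-array-plus-perturbation-array interpretation mentioned above.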
4pSPb12. The relative importance of static versus spectral change
acoustic features for automatic speaker identification. Stephen Zahorian,
Peter Guzewich, Xiao Chen (Dept. of Elec. and Comput. Eng., State Univ.
of New York at Binghamton, PO Box 6000, Binghamton, NY 13902, zahorian@binghamton.edu), Roozbeh Sadeghian (Dept. of Analytics, Harrisburg
Univ. of Sci. and Technol., Binghamton, New York), and Hao Zhang (Dept.
of Elec. and Comput. Eng., State Univ. of New York at Binghamton, Binghamton, NY)
For at least two decades, the primary acoustic features used for both
automatic speech recognition (ASR) and automatic speaker identification
(SID) have been Mel frequency cepstral coefficients (MFCCs) and their first
and second order difference terms, referred to as Delta and double Delta
terms. The MFCCs capture static spectral information, whereas the Delta
terms capture spectral change information. In this experimental paper, we
first reformulate the MFCCs and Delta terms as discrete cosine transform
coefficients (DCTCs), which take the place of the MFCCs, and discrete cosine series coefficients (DCSCs), which take the place of the Delta terms.
Low dimensionality DCSC spaces (spectral change) result in very poor
speaker discriminability, as compared to discriminability based on DCTCs.
However, reasonably accurate automatic speaker identification can be
achieved in a high dimensionality DCSC space. Combining DCTC terms
and DCSC terms results in only modest improvements in identification accuracy over what can be achieved with DCTC terms alone. We conclude,
for the purposes of automatic speaker identification, static spectral information is far more informative than spectral change information. The results of
this study, plus results in the literature, support the hypothesis that a similar
conclusion could be reached for human ability to recognize speakers.
4pSPb13. Experimental results on synchronization with chirp signals
using a vector sensor receiver. Erjian Zhang and Ali Abdi (Elec. Comput.
Eng., New Jersey Inst. of Technol., 323 Martin Luther King Boulevard,
Newark, NJ 07102, ez7@njit.edu)
Chirp signals, also known as linear frequency modulated signals, are
widely used for synchronization, signal acquisition, and frame detection in
underwater communication systems. This is due to the peak at the output of
the chirp matched filter at the receive side. In low signal-to-noise ratio
(SNR) scenarios, however, this peak can be buried in noise, which results in
major synchronization errors and system performance loss. While a scalar
array of spatially separated hydrophones can increase SNR to improve synchronization, the size of the array may not be suitable for small platforms.
Acoustic vector sensors, on the other hand, are small-size devices that can
serve as multichannel communication receivers. In this paper, performance
of a vector sensor receiver for synchronization using a chirp signal is studied. Our experimental results indicate that a compact vector sensor receiver
can significantly enhance the output of the filter matched to the chirp signal.
This is because the proposed vector matched filter significantly suppresses
the noise and provides a sharp peak at the output. This is particularly important for synchronization and signal acquisition in underwater communication systems operating in low SNR environments. [This work was supported
in part by the National Science Foundation (NSF), Grant IIP-1500123.]
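The chirp matched filter itself is compact to sketch (sample rate, band, and delay below are invented): correlate the received signal with the known chirp and take the correlation peak as the frame start, gaining roughly 10·log10 of the time-bandwidth product over the per-sample SNR.

```python
import numpy as np

rng = np.random.default_rng(6)
fs, T = 8000.0, 0.25
t = np.arange(int(fs * T)) / fs
f0, f1 = 200.0, 2000.0
# Linear chirp: instantaneous frequency sweeps f0 -> f1 over duration T.
chirp = np.sin(2 * np.pi * (f0 * t + 0.5 * (f1 - f0) / T * t**2))

delay = 3000                                   # true arrival sample (unknown to receiver)
rx = rng.standard_normal(12000)                # unit-variance noise background
rx[delay:delay + chirp.size] += chirp          # chirp power 0.5 -> about -3 dB SNR

mf = np.correlate(rx, chirp, mode="valid")     # matched filter output
est = int(np.argmax(np.abs(mf)))               # estimated frame start
```

A vector sensor receiver, as studied above, effectively combines several such matched-filter channels (pressure plus particle velocity components), sharpening this peak further in low-SNR conditions.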
4pSPb14. Towards automated dementia diagnosis based upon speech.
Roozbeh Sadeghian (Dept. of Analytics, Harrisburg Univ. of Sci. and Technol., 326 Market St., Harrisburg, PA 17101, rsadegh1@binghamton.edu), J.
D. Schaffer (Inst. for Multigenerational Studies, State Univ. of New York at
Binghamton, Binghamton, NY), and Stephen Zahorian (Electrical and Comput. Eng. Dept., State Univ. of New York at Binghamton, Binghamton,
NY)
The clinical diagnosis of Alzheimer’s disease and other dementias is
very challenging, especially in the early stages. Our hypothesis is that any
disease that affects particular brain regions involved in speech production
and processing will also leave detectable fingerprints in the speech. The
goal of this work is an easy-to-use, non-invasive, inexpensive diagnostic
test for dementia that can easily be applied in a clinician’s office or even at
home. Experimental evidence suggests that strong discrimination between
subjects with a diagnosis of probable Alzheimer’s versus matched normal
controls can be achieved with a combination of acoustic features from
speech, linguistic features extracted from a transcription of the speech, and
results of a mini mental state exam. Progress is reported toward a fully automatic speech recognition system tuned for the speech-to-text aspect of this
application. In addition to using state-of-the-art automatic speech recognition techniques such as Deep Learning, recurrent neural networks are used
to predict the punctuation in transcribed speech, which is later used for
extracting linguistic features. This fully automated system for 73 speakers is
combined with 140 manually transcribed speech samples and used for experimental testing of a system for automated detection of Alzheimer’s from
speech.
4pSPb15. A study of characteristics of underwater acoustic particle velocity channels measured by acoustic vector sensors. Erjian Zhang and
Ali Abdi (Elec. Comput. Eng., New Jersey Inst. of Technol., 323 Martin
Luther King Boulevard, Newark, NJ 07102, ez7@njit.edu)
Acoustic vector sensors measure orthogonal components of acoustic particle velocity. When used in underwater communication systems, they act as
multichannel receivers. One advantage of a vector receiver, compared to an
array of spatially-separated scalar receivers such as hydrophones, is its compact size. Some characteristics of particle velocity channels have been studied theoretically or via simulations (A. Abdi and H. Guo, “Signal correlation
modeling in acoustic vector sensor arrays,” IEEE Transactions on Signal
Processing, vol. 57, pp. 892-903, 2009; H. Guo, et al., “Delay and Doppler
spreads in underwater acoustic particle velocity channels,” J. Acoust. Soc.
Am., vol. 129, pp. 2015-2025, 2011). In this paper, we use data measured
by a vector sensor to study various key characteristics of underwater particle
velocity channels, including delay spreads, signal-to-noise ratios, and possible correlations among different channels. By inspecting the eigen structure
of channel matrices, we also investigate how various measured particle velocity channel impulse responses can affect the performance of an equalizer
to detect transmitted symbols. The results are useful for designing proper
vector sensor-based multichannel receivers in underwater communication
systems. [This work was supported in part by the National Science Foundation (NSF), Grant IIP-1500123.]
4pSPb16. Phase correction in extended towed array method. Yu Wang,
ZaiXiao Gong, and Renhe Zhang (State Key Lab. of Acoust., Inst. of
Acoust., Chinese Acad. of Sci., Haidian District, Beijing 100190, China,
wangyu313@mails.ucas.ac.cn)
An important issue in research on passive ASW operations is to improve
signal-to-noise ratio (SNR) and bearing resolution for targets emitting low
frequency signals. Passive Synthetic Aperture Sonar (PSAS) is one of the
techniques believed to improve these characteristics. In the past decades,
various passive synthetic aperture techniques have been investigated, such
as Extended Towed Array Method (ETAM), Fast Fourier Transform Synthetic Aperture (FFTSA), and Maximum Likelihood (ML) algorithm.
ETAM is shown to be superior in performance to the other two algorithms.
Estimation of the phase correction factor is the core idea of the original ETAM:
first, the cross-correlations of signals on multiple pairs of overlapped hydrophones are estimated; second, the average value of the cross-correlation
phase angles is taken as the least-squares estimate of the phase correction
factor. However, when the phase angle is close to the discontinuous point,
the above estimation method will lead to large estimation error. In this paper, a modified phase correction method in ETAM is proposed, and the discontinuity of phase angle is eliminated to ensure the accuracy of the phase
correction estimation. Simulation and experimental results show that the
modified method can effectively improve the stability of ETAM.
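The discontinuity problem in averaging phase angles, and a standard remedy (averaging unit phasors instead of raw angles, shown here as a generic illustration, not necessarily the specific modification of this paper), can be seen in a few lines:

```python
import numpy as np

rng = np.random.default_rng(7)
true_phi = np.pi - 0.05                        # true phase near the +/- pi discontinuity
# Noisy phase estimates, wrapped to (-pi, pi]: some samples wrap to ~ -pi.
samples = np.angle(np.exp(1j * (true_phi + 0.3 * rng.standard_normal(64))))

naive = samples.mean()                         # biased: mixes values near +pi and -pi
robust = np.angle(np.mean(np.exp(1j * samples)))   # circular mean of unit phasors

def ang_err(est):
    """Smallest angular distance between estimate and the true phase."""
    return np.abs(np.angle(np.exp(1j * (est - true_phi))))

err_naive = ang_err(naive)
err_robust = ang_err(robust)
```

Away from the discontinuity the two estimators agree; near it, the naive average collapses toward zero while the circular mean stays consistent, which is the kind of failure the modified phase correction above is designed to avoid.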
4pSPb17. Phase-locked loops in acoustic analysis. Jerome Helffrich
(Southwest Res. Inst., San Antonio, TX) and Sean A. Fulop (Linguistics, California State Univ. Fresno, 5245 N Backer Ave, PB92, Fresno, CA 93740-8001, sfulop@csufresno.edu)
Tracking of signal frequency and/or phase is made much more challenging by the presence of “rapid” frequency or phase changes in a signal—
occurring in a time short compared to the reciprocal of the frequency shift, i.e.,
Δf Δt ≪ 1. In these scenarios, the usual Fourier decomposition does not provide
detail sufficient to tell what is going on. Although there are time-frequency
tricks one can use (such as the reassigned spectrogram, or wavelet analysis),
these do not provide simple answers to the question of what is the phase as a
function of time, and they do not track the instantaneous frequency as effectively as is wanted. An interesting solution from the realm of electrical engineering is provided by the phase-locked loop (PLL) detector: a construct
that takes the input signal and tries to phase lock an on-board oscillator to it.
It turns out that this process is surprisingly effective at tracking instantaneous frequency/phase while discarding noise if an initial estimate of the target signal frequency is known. We present examples of the superior
frequency and phase tracking provided by a PLL approach, including the
extremely brief chirps produced by electric fish, and the forensic detection
of editing cuts in audio samples.
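As an illustration of the idea (not the authors' implementation), a minimal second-order digital PLL can track a tone given a rough initial frequency estimate; the gains and signal parameters below are hypothetical:

```python
import math

def pll_track(signal, fs, f0, kp=0.3, ki=0.02):
    """Track the instantaneous frequency of `signal` with a 2nd-order digital PLL.

    fs: sample rate (Hz); f0: initial guess of the signal frequency (Hz).
    Returns per-sample frequency estimates (Hz) from the loop's smoothed state.
    """
    w0 = 2.0 * math.pi * f0 / fs   # nominal NCO increment (rad/sample)
    phase = 0.0                    # NCO phase (rad)
    integ = 0.0                    # integrator state of the PI loop filter
    est = []
    for x in signal:
        err = x * -math.sin(phase)        # phase detector output
        integ += ki * err                 # integral path tracks frequency offset
        phase += w0 + integ + kp * err    # advance the NCO
        est.append((w0 + integ) * fs / (2.0 * math.pi))
    return est

# Track a 1.1 kHz tone starting from an initial guess of 1.0 kHz:
fs = 8000.0
sig = [math.cos(2.0 * math.pi * 1100.0 * n / fs) for n in range(4000)]
freqs = pll_track(sig, fs, 1000.0)   # settles near 1100 Hz within milliseconds
```

Because the loop is a recursive tracker rather than a block transform, it follows rapid frequency changes sample by sample, which is the property the abstract exploits.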
4pSPb18. Passive tracking of autonomous underwater vehicles (AUVs)
using a low-cost acoustic data collection system. Erin M. Fischell, Kristen
Railey, Oscar A. Viquez, and Henrik Schmidt (Mech. Eng., MIT, Rm. 5204, 77 Massachusetts Ave., Cambridge, MA 02139, krailey@mit.edu)
One challenge to harbor security is monitoring and tracking autonomous
underwater vehicle (AUV) activity. A self-contained low-cost acoustic data
collection system (acbox) has been developed and demonstrated for this purpose. The acbox consists of an 8-element configurable off-the-shelf hydrophone array, a data acquisition system, a GPS for timing and navigation,
and a computer for data logging and processing. Noise data on the Bluefin
SandShark and Bluefin 21-inch AUVs were collected using the configurable
array in the Charles River and Massachusetts Bay. From this experiment,
AUV noise characteristics were determined. The bearing to the AUV was
estimated based on beamformed and frequency filtered data, and compared
to logged AUV position. The bearing estimates based on propeller noise
were consistent with the reported AUV position. These range-estimation and bearing-filtering results were tested to provide improved vehicle localization. Performance in the presence of boat noise was also assessed. Moving forward, this work will be expanded to tracking AUVs from other AUVs, for collision avoidance and for behavioral analysis in security monitoring applications. [Work supported by Battelle, DARPA, and Draper.]
4pSPb19. Supervised learning in voice type discrimination using neck-skin vibration signals: Preliminary results on single vowels. Zhengdong Lei (Mech. Eng., McGill University, Rm. 364, MacDonald Bldg., 845 Sherbrooke St. West, Montreal, QC H3A 0G4, Canada, zhengdong.lei@mail.mcgill.ca), Nicole Y. Li-Jessen (Commun. Sci. and Disord., McGill Univ., Montreal, QC, Canada), and Luc Mongeau (Mech. Eng., McGill University, Montreal, QC, Canada)
Discrimination between normal and pathological voice is a critical component in laryngeal pathology diagnosis and vocal rehabilitative treatment.
In the present study, a portable miniature glottal notch accelerometer
(GNA) device with supervised machine learning techniques was proposed
to discriminate between three human voice types: normal, breathy, and
pressed voice. Fourteen native speakers of American English, each wearing a GNA device, produced five different English single vowels in each of
the three voice types. Acoustic features of the GNA signals were extracted
using spectral analysis. Preliminary assessments of feature discrepancy
among the voice types were made to identify physical cues for discrimination. Linear discriminant analysis was applied to reduce the dimensionality of the raw feature vector of the GNA signals, simultaneously maximizing the between-class distance and minimizing the within-class distance. The voice types were then classified using several supervised learning techniques, including linear discriminant, decision tree, support vector machine, and k-nearest neighbor classifiers. A classification accuracy of up to 91.0% was achieved. A mapping model from voice input to voice-type output was obtained from the training set, to be used for predictions on new data in future work.
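As a toy illustration of one of the classifiers named above, a k-nearest-neighbor rule on hypothetical two-dimensional feature vectors (the actual GNA features are not specified here) can be sketched as:

```python
import math
from collections import Counter

def knn_classify(train, query, k=3):
    """Classify `query` by majority vote among its k nearest training points.

    train: list of (feature_vector, label) pairs; query: a feature vector.
    """
    neighbors = sorted(train, key=lambda t: math.dist(t[0], query))[:k]
    votes = Counter(label for _, label in neighbors)
    return votes.most_common(1)[0][0]

# Hypothetical 2-D features (e.g., spectral tilt, energy) for three voice types:
train = [((0.2, 1.0), "normal"), ((0.3, 1.1), "normal"),
         ((0.8, 0.4), "breathy"), ((0.9, 0.5), "breathy"),
         ((0.1, 0.2), "pressed"), ((0.2, 0.3), "pressed")]
label = knn_classify(train, (0.85, 0.45))   # falls in the "breathy" cluster
```

In practice the feature vectors would come from the LDA-reduced spectral features described in the abstract, with k chosen by cross-validation.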
Acoustics ’17 Boston
3916
4pSPb20. Achievements and challenges in audio-based modeling of construction job sites. Abbas Rashidi (Georgia Southern Univ., 552 E Main St., Apt. 808, Statesboro, GA 30461, arashidi@georgiasouthern.edu), Mark A. Davenport, David V. Anderson, Chieh-Feng Cheng (Georgia Inst. of Technol., Atlanta, GA), and Chris A. Sabillon (Georgia Southern Univ., Statesboro, GA)

Construction job sites are noisy workplaces, and construction equipment and machines create discrete sound patterns while performing their daily operations. Construction engineers usually consider job-site noise a negative phenomenon, but if processed properly, the generated sound patterns can serve as a rich source of information for analyzing ongoing operations. This paper presents the authors' current research efforts toward initiating and developing an audio-based model for the analysis and modeling of construction operations. The model is based on placing one or more microphones at the job site, recording the sound patterns produced, and applying various techniques to process the recorded audio files and to detect and recognize the different operations taking place. The implemented techniques include noise removal and signal enhancement, source separation, signal processing, and machine learning algorithms. The paper also discusses the necessary hardware settings (number, type, and locations of microphones). The results of the proposed system can be used by construction managers for several purposes, including productivity analysis, project scheduling, and differentiating between idle and active times of machines. The authors also present several case studies from construction job sites to illustrate how the system works in real-world settings.
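One of the simplest ingredients of such a system, separating idle from active machine time, can be sketched with short-time RMS energy thresholding; the frame length and threshold below are illustrative, not the authors' settings:

```python
import math

def frame_rms(samples, frame_len):
    """Short-time RMS energy per non-overlapping frame."""
    return [math.sqrt(sum(s * s for s in samples[i:i + frame_len]) / frame_len)
            for i in range(0, len(samples) - frame_len + 1, frame_len)]

def active_frames(samples, frame_len, threshold):
    """Mark each frame as active (True) or idle (False) by RMS threshold."""
    return [rms > threshold for rms in frame_rms(samples, frame_len)]

# Synthetic recording: 1 s of loud machine noise, then 1 s of near-silence.
fs = 1000
loud = [0.5 * math.sin(2 * math.pi * 120 * n / fs) for n in range(fs)]
quiet = [0.01 * math.sin(2 * math.pi * 120 * n / fs) for n in range(fs)]
flags = active_frames(loud + quiet, frame_len=100, threshold=0.1)
```

A real system would precede this with the noise removal and source separation steps the abstract describes, and feed frame-level features into a trained classifier rather than a fixed threshold.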
4pSPb21. The research on the optimal bionic waveform design. Siyuan
Cang, Xueli Sheng (College of Underwater Acoust. Eng., Harbin Eng.
Univ., Heilongjiang 150001, China, cangsiyuan@hrbeu.edu.cn), Songhai Li (IDSSE, Sanya, China), Jintao Sun, Longxiang Guo, and Jingwei Yin
(College of Underwater Acoust. Eng., Harbin Eng. Univ., Harbin, Heilongjiang, China)
In this paper, the authors address the design of optimal waveforms for target detection and parameter estimation based on feature analysis of the whistles of Indo-Pacific humpback dolphins. Several types of waveforms exist in the sequences of whistle calls. Given the strong performance of nonlinear frequency modulation and multi-harmonic structure in suppressing range sidelobes and reverberation, several schemes for fusing these kinds of waveforms are explored, and the fusion results are studied analytically and by simulation. It is concluded that the bionic harmonic-fused signal shows good time resolution and improves reverberation suppression. The results provide a useful reference for underwater sonar based on bionic acoustic signals.
4pSPb22. Spatial-division-multiplexing (SDM) in time-variant channel
based on vector adaptive time-reversal technique. Dian Lu, Xueli Sheng,
Yali Shi, Hanjun Yu, Longxiang Guo, and Jingwei Yin (Underwater Acoust.
Eng. College, Harbin Eng. Univ., No.145, Nantong Rd., Nangang District,
Harbin, Heilongjiang Province 150001, China, bryce_era@sina.com)
In multistatic remote detection of underwater targets, the underwater acoustic channel is time-variant, multipath, and contaminated by environmental noise; it therefore limits the spatial sharing of the channel and poses a great challenge to spatial-division multiplexing (SDM) in multistatic detection. In this paper, a method for SDM in time-variant channels based on a vector adaptive time-reversal technique is proposed. Time reversal efficiently suppresses the multipath structure of the channel; adaptive filtering suppresses the time-variant characteristics of the channel; and with a single vector hydrophone, spatial focusing of target echoes and suppression of noise interference are accomplished by spatial filtering. Simulation results demonstrate excellent SDM performance in multistatic detection.
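The multipath-compression property that time reversal exploits can be illustrated in a few lines: retransmitting a time-reversed channel response through the same channel yields the channel autocorrelation, which concentrates the energy into a single central peak (a toy sketch, not the vector adaptive processing described above):

```python
def convolve(a, b):
    """Full linear convolution of two sequences."""
    out = [0.0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] += x * y
    return out

# A sparse multipath channel impulse response: direct path plus two echoes.
h = [1.0, 0.0, 0.6, 0.0, 0.0, 0.3]

# Time reversal: send the time-reversed response back through the channel.
focused = convolve(h[::-1], h)   # equals the autocorrelation of h
peak = max(focused)
center = focused.index(peak)     # energy concentrates at the central lag
```

The central peak carries the full channel energy (here 1.0² + 0.6² + 0.3² = 1.45), while every other lag is strictly smaller; this focusing is what restores spatial separability in the time-variant channel.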
3917
J. Acoust. Soc. Am., Vol. 141, No. 5, Pt. 2, May 2017
4pSPb23. Ultrasonic tomography for multiphase flow characterization.
Weichang Li, Max Deffenbaugh (Aramco Res. Ctr. - Houston, 16300 Park
Row, Houston, TX 77084, lwc@alum.mit.edu), and Mohamed Noui-Mehidi
(EXPEC Adv. Res. Ctr., Saudi Aramco, Dhahran, Saudi Arabia)
Characterizing multiphase flow dynamics and estimating phase composition are important in many applications, including oil/gas production. Conceptually, ultrasonic travel-time tomography is an appealing method: it estimates the flow phase composition by mapping out the sound speed distribution over a cross section or volume traversed by a large number of travel paths. However, the low sound speed contrast across multiphase mixtures imposes a high accuracy requirement on travel-time estimation, which can be especially challenging in the presence of dispersion and multipath interference. On the other hand, the high impedance contrast between phases, such as at gas/liquid interfaces, introduces significant losses and hence leads to a reduced signal-to-noise ratio. In this talk, we first characterize the signal structure based on both numerical wave propagation modeling and lab test data, then analyze the sensitivity and accuracy requirements for travel-time estimation using several different approaches, and finally evaluate the performance of flow imaging and phase estimation for several common compositional scenarios.
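The travel-time inversion idea can be illustrated with a deliberately tiny example: two cells and two ray paths, where the slowness (reciprocal sound speed) of each cell is recovered from a linear system. Real tomography solves a much larger, regularized version of this, and the cell sizes and fluid speeds below are hypothetical:

```python
def solve2x2(A, b):
    """Solve a 2x2 linear system A x = b by Cramer's rule."""
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    x0 = (b[0] * A[1][1] - A[0][1] * b[1]) / det
    x1 = (A[0][0] * b[1] - b[0] * A[1][0]) / det
    return [x0, x1]

# Two cells, 0.1 m wide each; path lengths (m) of each ray through each cell:
L = [[0.1, 0.1],   # ray 1 crosses both cells
     [0.1, 0.0]]   # ray 2 crosses only cell 1

# Simulated travel times (s): water (1500 m/s) in cell 1, oil (1300 m/s) in cell 2.
t = [0.1 / 1500 + 0.1 / 1300, 0.1 / 1500]

s = solve2x2(L, t)               # recovered slownesses (s/m)
speeds = [1.0 / si for si in s]  # recovers [1500, 1300] m/s
```

The small speed contrast between the two fluids is exactly why, as the abstract notes, the travel times must be estimated very accurately: a microsecond-level timing error here would be comparable to the entire water/oil travel-time difference.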
4pSPb24. Underwater pipeline leakage detection via multibeam sonar imagery. Wanyuan Zhang, Tian Zhou, Dongdong Peng, and Jiajun Shen (College of Underwater Acoust. Eng., Harbin Eng. Univ., No. 145 Nantong St., Nangang Dist., Harbin 150001, China, zhangwanyuan@hrbeu.edu.cn)

New developments in multibeam technology now permit multibeam echosounders (MBES) to collect and record acoustic data not only from the strongest return (normally the seabed) but also from echo returns along the complete travel path of the acoustic pulse through the water column. MBES are therefore now established as standard tools for the remote detection of targets in the water column, such as gas bubbles leaking from a pipeline. In this study, a multibeam sonar operating at 300 kHz is used to detect pipeline gas leakage based on acoustic backscatter imagery. Behavioral traits of the leaking gas bubbles are discussed, such as their shape, distribution pattern, and contour centroid characteristics. First, an adaptive beamforming algorithm is applied to sonar imaging to suppress background noise and sidelobe interference. These features are then extracted by mathematical morphological processing of the image sequences. Finally, a tank test with different leakage scales, produced by varying leakage pressures, amounts, and sizes, verified the validity and stability of the bubble characteristics. The proposed method is feasible for qualitative assessment in AUV pipeline inspection surveys.
4pSPb25. A weak target detecting method based on multistatic information fusion. Yang Chen (College of Underwater Acoust. Eng., Harbin Eng.
Univ., Harbin 150001, China, cy5311@hrbeu.edu.cn), Xueli Sheng, Siyuan
Cang, Dian Lu, Longxiang Guo, and Jingwei Yin (Harbin Eng. Univ., Harbin, Heilongjiang Province, China)
The detection performance of active sonar is degraded by weak target scattering and a severe noise background. If the threshold is artificially reduced to obtain high detection capability, the false alarm probability becomes very high. Therefore, this paper proposes a weak-target detection method based on multistatic information fusion. By gathering information about the target from different positions, extracting target feature vectors, and fusing them, better recognition performance is obtained. The target scattering centers are extracted by the CLEAN technique, which avoids extracting false scattering centers and works at low signal-to-noise ratio (SNR). A neural network classifier is used to fuse the features and recognize the target. Simulation results show that this method has better target identification performance and a lower false alarm probability, demonstrating that target feature fusion has high value for future active sonar detection.
4pSPb26. Acoustic analogs for the characterization of room electromagnetics. Pratik Gandhi, Charles Thompson, and Kavitha Chandra (Univ. of Massachusetts Lowell, FA 203, 1 University Ave., Lowell, MA 01854, pratik_gandhi1@student.uml.edu)

In indoor wireless communication, the radio channel comprises direct and diffuse multipath components. As in the acoustic case, the diffuse components are the result of non-specular and random scattering. This state of affairs is particularly problematic for short-range millimeter-wavelength radio systems. The Eyring model has recently been shown to be effective in modeling the electromagnetic power decay profile in rooms. In this work, the temporal decay rates of the energy in coupled spaces are considered. The roles that inter-room transmission and energy absorption play in the power decay profile will be examined. [Special session: Acoustics Network Protocol, Underwater Acoustic Communications. PACS numbers: 43.60.Dh, 43.38.Si.]

4pSPb27. Target tracking technology for reducing false alarms. Xiaoyu Wang, Xueli Sheng, Hanjun Yu, Longxiang Guo, and Jingwei Yin (Harbin Eng. Univ., No. 145, Nantong St., Harbin, Heilongjiang 150000, China, 18846423671@163.com)

Because of the complex marine environment, active sonar produces a large number of false alarms when detecting targets. Real targets become difficult to find, so the false negative rate increases significantly. To better suppress clutter and minimize the false alarm rate, this paper considers target detection via Multiple Hypothesis Tracking based Track Before Detect (MHT-TBD). MHT-TBD is a data association method that can remove wild points in a clutter environment while retaining useful data. For highly maneuvering targets, however, the tracking success rate of MHT-TBD is significantly reduced. An improved method, interactive multimode MHT-TBD (IMM-MHT-TBD), which combines an interacting multiple-model approach with multiple hypothesis tracking, is introduced in this paper. It shows superior performance in tracking highly maneuvering targets through track maintenance and data association in a clutter environment. Simulation results show that IMM-MHT-TBD can effectively solve the problem of tracking highly maneuvering targets.

4pSPb28. Characterization and exploitation of lucky scintillations in HLA and VLA data from the 2006 Shallow Water Experiment. Hongya Ge (Elec. & Comput. Eng., New Jersey Inst. of Technol., University Heights, Newark, NJ 07102, ge@njit.edu) and Ivars P. Kirsteins (NUWC, Newport, RI)

The detection and localization of signals using large arrays is challenging in low-coherence underwater environments. Poor spatial coherence is a consequence of signal wavefront distortions caused by time-dependent three-dimensional spatial fluctuations in the sound speed from internal waves, fronts, and random medium effects such as turbulence. In an earlier paper ["Lucky ranging with towed arrays in underwater environments subject to non-stationary spatial coherence loss," Proc. ICASSP 2016, March 2016], we proposed a new paradigm for array processing in poor-coherence environments, motivated by real data observations, which exploits lucky moments, or favorable scintillations, when the signal wavefront momentarily has little or no distortion. Here we examine the HLA and VLA data from the 2006 Shallow Water Experiment provided by the Woods Hole Oceanographic Institution to better understand and characterize the occurrence of lucky moments in actual data and how they can be utilized in array processing. Because of uncertainties in the HLA element positions, we developed a new empirical canonical correlation-based technique for the analysis, utilizing lucky moments to blindly estimate the array manifold.

4pSPb29. Excitation of leaky Lamb waves in cranial bone using a phased array transducer in a concave therapeutic configuration. Chris Adams (School of Electron. and Elec. Eng., University of Leeds, Leeds LS2 9JT, United Kingdom, elca@leeds.ac.uk), James R. McLaughlan (Div. of Biomedical Imaging, Univ. of Leeds, Leeds, United Kingdom), Luzhen Nie, David Cowell, Thomas Carpenter, and Steven Freear (School of Electron. Eng., University of Leeds, Leeds, United Kingdom)
Ultrasonic therapeutic transducers that consist of large numbers of unfocused, low power elements have begun to replace single, focused, high
power elements. This allows the operator to use phased array techniques to
change the focal position in the tissue during therapy. In transcranial therapy, this phased array configuration is essential to reduce local heating at
the highly attenuating bone. Recently, Dual Mode Ultrasound Arrays
(DMUAs) have been developed which leverage existing elements for imaging during therapy. DMUAs have the benefit of both the therapeutic and
imaging systems being co-registered. This improves upon the existing
approach of using a separate ultrasound system for guidance, as the acoustic
beam path is the same for both. Unfortunately, the highly reflective nature
of bone means that DMUAs have not been applied to transcranial therapy.
However, the recent near-field observation of Lamb waves in cranial bone
opens the possibility for DMUAs to be applied to a guided wave scan of the
skull. This would allow co-registration of the bone’s ultrasonic properties
with the therapeutic axis which would facilitate adaptive beamforming. In
this work, a beamforming scheme for the excitation of guided waves in cranial bone using a therapeutic phased array is described and demonstrated
experimentally.
4pSPb30. A novel method for cement placement diagnosis behind multiple strings based on sonic dipole cutoff mode processing. Maja Skataric,
Sandip Bose, Smaine Zeroug, and Bikash K. Sinha (Mathematics and Modeling, Schlumberger-Doll Res., 1 Hampshire St., Cambridge, MA 02139,
mskataric@slb.com)
Acoustic measurements are widely used to diagnose the condition and placement of cement in cased oil and gas wells, and its bond to the interfaces in contact with it. However, current methods, encompassing high-frequency sonic CBL-VDL and ultrasonic measurements, are designed for single steel casings and are insufficient to probe behind more than one casing; they therefore cannot address the diagnosis of the placement and bond of cement behind a second casing, which has become critical in the oil and gas industry. We therefore look at the use of lower-frequency sonic measurements, with deeper radial probing capability, to identify features appropriate for such a diagnosis. In particular, in this work we model the sonic monopole and dipole modes generated by existing tools and identify a set of cutoff modes that are indicative of the annular fill behind a second casing when the annulus behind the first is known. The acquisition and processing of such modes, in conjunction with traditional cement evaluation techniques, could therefore become a viable avenue for the acoustic diagnosis of cement placement in multiple-casing-string geometries. Modeling results for such scenarios will be presented, along with potential applications to experimental and field data, to indicate the feasibility of such a diagnosis using feature extraction.
WEDNESDAY AFTERNOON, 28 JUNE 2017
ROOM 302, 3:35 P.M. TO 5:20 P.M.
Session 4pSPc
Signal Processing in Acoustics, Architectural Acoustics, Biomedical Acoustics, and Physical Acoustics:
Extraction of Acoustic Signals by Remote Non-Acoustic Methods
Geoffrey H. Goldman, Chair
U.S. Army Research Laboratory, 2800 Powder Mill Road, Adelphi, MD 20783-1197
Chair’s Introduction—3:35
Invited Paper
3:40
4pSPc1. Measurement and analysis of a piezoelectric mirror shifter impulse response using self-correcting synthetic-heterodyne
demodulation based Michelson interferometer vibrometer with gain and phase control feedback. Michael J. Connelly (Dept. of Electron. and Comput. Eng., Univ. of Limerick, Limerick V94 T9PX, Ireland, michael.connelly@ul.ie), Jose H. Galeti, and Claudio Kitano (Dept. of Elec. Eng., Universidade Estadual Paulista, Ilha Solteira, Brazil)
Laser Doppler vibrometry is often employed for non-contact vibration measurements. A test beam scattered from a target is interfered with a reference beam on a photodiode and the resulting signal is processed to determine the target vibration. Most commercial
vibrometers use a phase modulator in one branch of the interferometer. The requirement for a phase modulator can be removed by the
use of synthetic-heterodyne demodulation. The optical source is a sinusoidal current modulated laser diode resulting in a frequency modulation of the input lightwave. The photocurrent has the form of the cosine of a carrier at the modulation frequency plus the dynamic
phase difference between the beams, the time differential of which is proportional to the vibration. The detected signal spectrum comprises bands centered at integer multiples of the modulation frequency, which are processed to retrieve the vibration. We describe a new
self-correcting synthetic-heterodyne technique employing phase and gain feedback, which is significantly less sensitive to the received
optical power. The system, which has a frequency range of 0.2-9 kHz, is used to measure the impulse response of a piezoelectric mirror
shifter. The vibration signal is analyzed using Hilbert transform techniques to determine the shifter resonant frequencies and decay rates.
Contributed Papers

4:00

4pSPc2. Capturing Bragg-scattered structural waves with Digital Image Correlation for underwater localization applications. Dagny Joffre, Alessandro Sabato, Christopher Niezrecki, and Peter Avitabile (UMass Lowell, 220 Pawtucket St., Lowell, MA 01854, Dagny_Joffre@student.uml.edu)

Bragg scattering is a well-known occurrence in structures containing periodically spaced obstructions. A urethane test panel with periodically spaced aluminum ribs was built to study the possibility of using Bragg-scattered responses to localize underwater acoustic signals. The presence of Bragg-scattered waves may allow the size of an array to be reduced without a loss in array gain. The measured structural response of the test panel to acoustic excitation can be converted into a wavenumber spectrum; the wavenumber spectrum can then be used to calculate the incident angle of the original acoustic excitation. Precise bearing estimates require high-spatial-resolution measurements. Digital Image Correlation (DIC), which measures full-field displacement of a structure, may provide the high spatial resolution necessary for improved bearing angle estimation. A study was conducted comparing the wavenumber spectra of the test panel calculated from DIC measurements, from laser vibrometer measurements, and from theoretical FEM results.

4:20

4pSPc3. Seismic response of fiber optic seismic sensors. R. D. Costley (U.S. Army Engineer Res. and Development Ctr., 3909 Halls Ferry Rd., Vicksburg, MS 39180, casa.costley@gmail.com), Kent K. Hathaway (U.S. Army Engineer Res. and Development Ctr., Kitty Hawk, NC), Darren C. Flynn (Naval Undersea Warfare Ctr., Newport, RI), Gustavo Galan-Comas (U.S. Army Engineer Res. and Development Ctr., Vicksburg, MS), Stephen A. Ketcham (U.S. Army Engineer Res. and Development Ctr., Alexandria, VA), and Clay K. Kirkendall (Optical Sci. Division, Naval Res. Lab., Washington, DC)

Fiber-optic seismic sensors (FOSS) consist of an optical fiber connected to an optical interrogator: electro-optic instrumentation that injects pulses of coherent light into the fiber and receives and demodulates the returned signals. The optical fiber is usually contained within a protective cable. Experiments were performed in a barrier island setting where several kilometers of cable had been buried at sub-meter depth. It has been hypothesized that seismic disturbances strain a segment of the optical fiber, causing its optical path length to change. An array of eight three-axis seismometers was buried alongside the cable at the same depth, with a spacing of 0.5 m and with one axis parallel to the cable. An electromagnetic shaker, buried at the same depth and in different configurations with respect to the array and the FOSS, was excited with a 10-cycle tone burst varying in frequency from 10 to 200 Hz. The resulting seismic waves were recorded simultaneously with the seismometers and the FOSS. Signals recorded with the seismometer array were processed to calculate the average strain along the direction of the fiber. The presentation illustrates the comparison between the computed strain outputs of the seismometers and that of the FOSS.
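The strain computation mentioned above reduces, in its simplest form, to a finite difference of axial displacements between neighboring sensors; the numbers below are illustrative, not measured values:

```python
def average_strain(u1, u2, dx):
    """Average axial strain between two points from their displacements.

    u1, u2: displacements (m) along the fiber axis at two points dx (m) apart.
    """
    return (u2 - u1) / dx

# Two seismometers 0.5 m apart record axial displacements of 1.0 and 1.2 um:
eps = average_strain(1.0e-6, 1.2e-6, 0.5)   # about 0.4 microstrain
```

In the experiment, displacements would be obtained by integrating the seismometer velocity outputs, and the per-pair strains averaged along the array to compare against the FOSS output.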
Invited Paper
4:40
4pSPc4. Optical methods for modeling and experimental detection of sound sources behind windows. Christoph Borel-Donohue,
Ed Habtour, and Geoffrey H. Goldman (SEDD, US Army Res. Lab., 2800 Powder Mill Rd, Bld 202 rm 3F068, Adelphi, MD 20783,
cborelc@gmail.com)
Detecting sound from a distance is not a novel concept and laser vibrometers are often used for this purpose. A potentially better
method is to measure the reflection of the usually stationary environment in a vibrating window. Experiments were conducted to capture
high-speed video of vibrating windows with sound sources located inside a building. Low frequency vibrations produced by a speaker
operating between 19 and 37 Hz were clearly visible on high-contrast reflections under a very limited set of conditions. To better understand the limitations of the method across the variability of potential scenarios, spanning sound, window, optical reflection, camera, and analysis methods, a parallel modeling effort was initiated. The detection of sound using optical methods can vary with the sound source parameters (e.g., frequency, spectrum, amplitude, …), the window structure (glass thickness, number of panes, elasticity, …), the character
of the reflected environment (contrast ratio, intensity,…), the measuring equipment (camera frame rate, spatial resolution, dynamic
range, integration time, noise level,…), and the analysis methods (short term Fourier analysis window size, apodization, preprocessing
steps,…). This talk will highlight some of the findings to determine the feasibility of optical detection of sound sources behind windows.
Contributed Paper
5:00
4pSPc5. Detection of vibrations on windows and doors excited by a
speaker using video cameras. Geoffrey H. Goldman and Christoph C.
Borel (U.S. Army Res. Lab., 2800 Powder Mill Rd., Adelphi, MD 20783-1197, geoffrey.h.goldman.civ@mail.mil)
Algorithms were developed and tested to estimate acoustic coupled
vibrations on doors and windows in a small building using data from commercial video cameras. The building was excited with a speaker that emitted
low frequency tones. Image processing based algorithms were developed to
estimate the frequency, amplitude, and signal-to-interference-plus-noise ratio (SINR) of the tones. Sensors such as laser Doppler vibrometers (LDVs) and accelerometers are much more sensitive than video cameras for measuring small vibrations and have a higher cutoff frequency. However, video
cameras are ubiquitous, passive, and can potentially monitor a large area.
Their performance is limited by the target range due to increased pixel size
and atmospheric turbulence, lighting conditions, the contrast of the object of
interest and the sample rate of the camera. Given these limitations, there are
still potential new applications and niches for using video cameras to
remotely measure vibrations of objects.
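For estimating the frequency of a low-rate tone from a pixel-intensity time series, a single-bin Goertzel-style spectral scan is one lightweight option (an illustration, not necessarily the authors' algorithm; the frame rate and tone below are hypothetical):

```python
import math

def goertzel_power(x, k):
    """Power of DFT bin k of sequence x via the Goertzel recurrence."""
    w = 2.0 * math.pi * k / len(x)
    coeff = 2.0 * math.cos(w)
    s_prev = s_prev2 = 0.0
    for sample in x:
        s = sample + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return s_prev2 * s_prev2 + s_prev * s_prev - coeff * s_prev * s_prev2

# Pixel-intensity time series from a 30 fps camera: 6 Hz vibration on a DC offset.
fps, n = 30.0, 300
series = [128 + 2.0 * math.sin(2.0 * math.pi * 6.0 * i / fps) for i in range(n)]
series = [s - 128 for s in series]                 # remove the DC offset first
bins = [goertzel_power(series, k) for k in range(1, n // 2)]
peak_hz = (bins.index(max(bins)) + 1) * fps / n    # recovers the 6 Hz tone
```

The camera frame rate caps the detectable vibration frequency at fps/2, which is the sample-rate limitation the abstract points out.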
WEDNESDAY AFTERNOON, 28 JUNE 2017
ROOM 309, 1:20 P.M. TO 4:40 P.M.
Session 4pUWa
Underwater Acoustics, Acoustical Oceanography, and ASA Committee on Standards: Underwater Noise
From Marine Construction and Energy Production II
James H. Miller, Cochair
Ocean Engineering, University of Rhode Island, 215 South Ferry Road, Narragansett Bay Campus URI,
Narragansett, RI 02882
Paul A. Lepper, Cochair
EESE, Loughborough University, Loughborough LE11 3TU, United Kingdom
Invited Papers
1:20
4pUWa1. Good noise, bad noise: A tricky case of balancing risk of physical injury against acoustic disturbance for marine mammals and tidal energy devices. Ben Wilson (Sci., SAMS-UHI, Oban, Argyll PA37 1QA, United Kingdom, ben.wilson@
sams.ac.uk), Brett Marmo (Xi Consulting, Edinburgh, United Kingdom), Paul A. Lepper (Mech., Elec. and Manufacturing Eng., Loughborough Univ., Loughborough, United Kingdom), Denise Risch, Steven Benjamins (Sci., SAMS-UHI, Oban, Argyll, United Kingdom),
Gordon Hastie (SMRU, St. Andrews, United Kingdom), and Caroline Carter (Scottish Natural Heritage, Battleby, United Kingdom)
Tidal-stream turbines are a promising source of renewable electricity worldwide. These technologies are sufficiently new that only single test devices have been deployed, with arrays imminent. Being new, their interactions with marine organisms are poorly understood, and the risk of large marine vertebrates colliding with their moving blades is a consenting and ecological concern. Operational noise is also considered a disturbance threat, but under what circumstances remains poorly defined. Further, the threats of collision and turbine noise may be inversely correlated, with animals needing to hear turbines in order to avoid them. Consequently, there have been proposals to add extra noise by fitting turbines with acoustic deterrents to warn or scare animals away. In this talk, we examine the acoustic interactions between marine mammals and tidal turbines. The interactions are complex and depend on turbine source levels, ambient sound, propagation in moving water, sensory abilities, swim speeds, and diving behaviour. In addition, the deployment of turbines in arrays adds further complexity, as responses to one turbine will affect collision risk with another. We then consider the options for, and implications of, adding warning sounds; such quick fixes might have unintended consequences that either increase collision risk or lead to undesirable avoidance.
1:40
4pUWa2. Operational noise from tidal turbine arrays and the assessment of collision risk with marine mammals. Brett Marmo
(Eng., Xi Eng. Consultants, CodeBase, Argyle House, 3 Lady Lawson St., Edinburgh EH3 9DR, United Kingdom, brettmarmo@xiengineering.com)
The ability for marine species to detect and thereby avoid potentially harmful collision with the moving parts of a tidal stream turbine depends on relative levels of the ambient sound with the acoustic emissions from the turbine. Tidal streams targeted for exploitation
by renewable energy converters are by their nature highly energetic environments often with high ambient sound levels. Commonly the
first time that the operational sound from new tidal turbines can be measured is after it has been installed and is already interacting with
animals in the marine environment. A modelling solution is therefore required to estimate whether marine animal will be able to hear
and avoid contact with turbines. An acoustic-structural interaction model is used to calculate the acoustic output of tidal turbines. The
cumulative sound of an array of tidal turbines and its dependence on bathymetry is calculated using a parabolic equation code. Modelled
and measured sound pressure levels give information on potential upstream warning distances/times for animals which help us consider
collision risk.
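The modelling chain described above ultimately reduces to comparing a predicted received level against ambient sound. As a toy illustration of that comparison (not the acoustic-structural or parabolic-equation models used in the talk), the range at which a turbine remains audible can be sketched with simple geometric spreading; every number below is hypothetical:

```python
def audible_range(source_level_db, ambient_db, detection_threshold_db=0.0,
                  spreading_coeff=15.0, max_range_m=10_000.0):
    """Range (m) at which a source's received level falls to the ambient level
    plus a detection threshold, assuming simple geometric spreading:
    RL(r) = SL - k*log10(r)."""
    excess = source_level_db - (ambient_db + detection_threshold_db)
    if excess <= 0:
        return 0.0                          # inaudible even at 1 m
    return min(10.0 ** (excess / spreading_coeff), max_range_m)

# A hypothetical turbine: SL = 150 dB re 1 uPa @ 1 m in 100 dB ambient noise,
# "practical" spreading (k = 15) -> audible out to roughly 2.2 km.
r = audible_range(150.0, 100.0)
```

Real assessments replace the spreading law with a full propagation model and a frequency-dependent hearing threshold; the structure of the comparison is the same.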
2:00
4pUWa3. Noise impact assessment on Indo-Pacific Humpback Dolphin in the habitat of the East Taiwan Strait during the first
two pile driving activities of demonstration offshore wind farm. Chi-Fang Chen (Eng. Sci. and Ocean Eng., National Taiwan Univ.,
No. 1 Roosevelt Rd. Sec. 4, Taipei 106, Taiwan, chifang@ntu.edu.tw), Wei-Jay Wang, Chih-Hao Wu, Wei-Chun Hu (Eng. Sci. and Ocean Eng., National Taiwan Univ., Taipei, Taiwan), Nai-Chang Chen (Ocean Technol. Res. Ctr., National Taiwan Univ., Taipei, Taiwan), Wei-Shien Hwang (Eng. Sci. and Ocean Eng., National Taiwan Univ., Taipei, Taiwan), Lien-Sian Chou (Inst. of Ecology
and Evolutionary Biology, National Taiwan Univ., Taipei, Taiwan), Shane Guan (Office of Protected Resources, NOAA/NMFS, Silver
Spring, MD), Sheng-Fong Lin (GEL, Industrial Technol. Res. Inst., Taipei, Taiwan), and Derrick Lin (Swancor Renewable Energy Co.,
Ltd, Taipei, Taiwan)
Foundation piles of the first two wind turbines were driven off the coast of Miaoli, Taiwan, in 2016. However, that area is also the habitat of the critically endangered Eastern Taiwan Strait (ETS) population of Indo-Pacific humpback dolphin (Sousa chinensis). To assess potential noise effects from pile driving on the humpback dolphins, we collected underwater noise data during the construction and found that the sound pressure level (Lrms) at 750 m was less than 180 dB re 1 μPa and the peak sound pressure (Lpk, flat) was less than 190 dB re 1 μPa. We also present recommendations for noise and marine mammal monitoring during future pile driving activities: an exclusion zone of 750 m radius around the piling location, a peak sound pressure (Lpk, flat) not to exceed 190–220 dB re 1 μPa at 750 m distance, and an observation zone extending to a 1500 m radius where both marine mammal observers and passive acoustic monitoring are posted. [This work was sponsored by the Ministry of Science and Technology of Taiwan (105-3113-E-002-002-CC2), the Taiwan Bureau of Energy and Industrial Technology Research Institute (05HZT56002), and the U.S. Marine Mammal Commission's Research and Conservation Grant.]
2:20
4pUWa4. Broad-scale acoustic monitoring for cetaceans and underwater noise in relation to offshore wind farm construction in
Scotland. Kate L. Brookes, Ewan Edwards (Marine Scotland Sci., Marine Lab., 375 Victoria Rd., Aberdeen AB11 9DB, United Kingdom, kate.brookes@gov.scot), Nathan D. Merchant (CEFAS, Lowestoft, Suffolk, United Kingdom), and Ian Davies (Marine Scotland
Sci., Aberdeen, United Kingdom)
Marine construction projects, such as offshore wind farms and port developments, often use techniques that produce significant levels
of noise underwater, which could have effects on marine wildlife. Marine Scotland is the government body responsible for regulating
these activities in Scottish waters and for ensuring that wildlife populations are protected in line with legislation. Large scale offshore
wind farm construction will begin to take place off the Scottish east coast in 2017, using piled foundations. To monitor for potential
broad scale changes in distribution of protected cetacean species during construction activities, Marine Scotland have deployed an array
of 30 click detectors and 10 broadband acoustic recorders across the Scottish east coast each summer since 2013. Here we present baseline distributions for dolphins and harbour porpoises, along with ambient noise levels recorded concurrently. Dolphin detections across
the monitored area are highly variable, with some locations that are clearly favoured. Harbour porpoises are ubiquitous and, at more than 60% of locations, are detected on 100% of monitored days. This likely means that there is more power to detect changes in porpoise
distribution in relation to offshore wind farm pile driving than for dolphins.
2:40
4pUWa5. Comprehensive summary of the impulsive pile driving sound exposure study series. Michele B. Halvorsen (CSA Ocean
Sci. Inc, 8502 SW Kansas Hwy, Stuart, FL 34997, mhalvorsen@conshelf.com), Brandon M. Casper, Arthur N. Popper (Dept. of Biology, Univ. of Maryland, College Park, MD), and Thomas J. Carlson (ProBioSound LLC, Holmes Beach, FL)
The high intensity controlled impedance fluid filled wave tube (HICI-FT) was used to expose fishes, in the laboratory, to impulsive
sound signals under controlled conditions. Fish species were exposed to pile driving signals under different experimental paradigms, followed by detailed investigation of their physiological tissue response (i.e., barotrauma injuries). Most often, the cumulative sound exposure level (SELcum) was held constant while the number of pile strikes and the single-strike sound exposure level (SELss) were varied. Altering these variables showed the equal-energy hypothesis to be inapplicable; higher SELss values with fewer strikes caused the highest injury levels. The first dose-response curve was generated for fish responses. Comparisons between species showed fish with no swim bladder at low injury risk, fish with a closed swim bladder (physoclists) at high injury risk, and fish with an open swim bladder (physostomes) at moderate injury risk. If fish find safe haven in the wild, they have the potential to heal from moderate injuries within 10 days. Tissue injury appears to occur before damage to hair cells. Most recently, fish showed injury after exposure to as few as eight impulsive signals with a high SELss value.
3:00–3:20 Break
3:20
4pUWa6. Acoustic characterization of wave energy converters. Brian L. Polagye, Paul Murphy (Mech. Eng., Univ. of Washington,
Box 352600, Seattle, WA 98195-2600, bpolagye@u.washington.edu), Patrick Cross, and Luis Vega (Hawai’i Natural Energy Inst.,
Univ. of Hawai’i, Honolulu, HI)
Wave energy converters produce sound as a consequence of their operation, but the specifics are not well understood. Here, we present observations of two point-absorbing wave energy converters deployed at the US Navy's Wave Energy Test Site in Kaneohe, HI. Measurements are obtained by free-drifting instrumentation packages which acoustically isolate the hydrophone from the surface expression to minimize masking by flow noise (i.e., pseudo-sound generated by relative motion between hydrophone and water) and self-noise
(i.e., propagating sound generated by the instrument package). Observations suggest that wave energy converters of different designs,
even within the general class of point absorbers, produce different stereotypical sounds. Further, during normal operation, sound signatures can change substantially with sea state as new sound generation mechanisms come into play. One example of this is wave breaking
around a shallow-hulled point absorber when waves exceed a critical steepness. The sound from bubble collapse is detectable up to
several hundred kHz, whereas, below this critical steepness, wave converter sound is only detectable up to ten kHz. Finally, wave energy
converter sounds are contextualized relative to other natural and anthropogenic sound sources to qualitatively explore their potential
effects on marine animals.
3:40
4pUWa7. Acoustic life cycle assessment of offshore renewables—Implications from a wave-energy converter deployment in Falmouth Bay, UK. Philippe Blondel, Jodi Walsh (Phys., Univ. of Bath, Claverton Down, Bath BA2 7AY, United Kingdom, p.blondel@bath.ac.uk), Jo K. Garrett, Philipp R. Thies (College of Eng., Mathematics and Physical Sci., Univ. of Exeter, Penryn, United Kingdom), Brendan J. Godley, Matthew J. Witt (Environment and Sustainability Inst., Univ. of
Exeter, Penryn, United Kingdom), and Lars Johanning (College of Eng., Mathematics and Physical Sci., Univ. of Exeter, Penryn, United
Kingdom)
Marine Renewable Energy is developing fast, with hundreds of prototypes and operational devices worldwide. Two main challenges
are assessing their environmental impacts (especially in near-shore, shallow environments) and ensuring efficient and effective maintenance (requiring specialised ships and fair weather windows), compounded by the lack of long-term measurements of full-scale devices.
We present here broadband measurements (10 Hz to 32/48 kHz) acquired at the Falmouth Bay Test site (FaBTest, UK) from 2010
onwards, for a 16-m ring-shaped Wave Energy Converter, in waters up to 45 m deep. This period covers baseline measurements, including shipping from the neighbouring English Channel, one of the busiest shipping lanes in the world (ca. 45,000 ship transits annually)
and the full period of installation and energy production, including maintenance episodes. Acoustic signatures are measured as Sound
Pressure Levels (e.g. for impacts) and time/frequency variations (for condition-based monitoring via Acoustic Emissions). They change
through time, depending on weather and modes of operation. Long-term measurements are compared with modelling of potential variations in this complex environment and with laboratory experiments. These are used to outline the varying acoustic contributions through
the life cycle of a typical wave energy converter, yielding insights for other wave devices in other environments.
Contributed Papers

4:00

4pUWa8. An uncertainty analysis of propagated sound from an array of marine hydrokinetic devices. Erin C. Hafla, Erick Johnson (Mech. Eng., Montana State Univ., 205 Cobleigh Hall, Bozeman, MT 59717-3900, erinhafla@gmail.com), and Jesse Roberts (Sandia National Labs., Albuquerque, NM)

Marine hydrokinetic (MHK) devices provide an alternate energy source from tidal, current, and wave motion; however, these devices introduce anthropogenic noise into the marine ecosystem and must meet regulatory guidelines. Paracousti is a 3D finite-difference, time-domain solution to the governing velocity-pressure equations and is used to predict the propagation of sound from an array of MHK sources in any environment. This solution allows for multiple sources, each with unique sound profiles, and for spatially varying sound speeds, bathymetry, and soil composition. However, fluctuations in the hydrodynamic field will introduce operational uncertainties in the sound profiles of the MHK devices, which may not be captured through idealized sound profiles. Preliminary results for a single device in a Pekeris waveguide indicate differences in broadband sound pressure levels of 25 and 60 dB re 1 μPa for peak amplitudes and frequencies of 1-20 m, 150 Hz and 1 m, 50-300 Hz, respectively. A Monte Carlo approach is presented for an array of sources to further characterize the range of uncertainties associated with variations in source amplitude and frequency. It is demonstrated that the idealized, deterministic solution vastly underestimates the compounding uncertainty in the final sound field.

4:20

4pUWa9. Underwater operational noise level emitted by a tidal current turbine and its potential impact on marine fauna. Julie Lossent (Res. Inst. Chorus, 46, Ave. Felix Viallet, Grenoble cedex 1 38031, France, julie.lossent@chorusacoustics.com), Cedric Gervaise (Chair CHORUS, Saint Egreve, France), Lucia D. Iorio (Chair CHORUS, Grenoble, France), Thomas Folegot, Dominique Clorennec (Quiet-Oceans, Plouzane, France), and Morgane Lejart (France Energies Marines, Brest, France)

Marine renewable energy development has raised concerns over the impact of underwater noise. We assessed the acoustic impacts of an operating tidal current turbine (Paimpol-Brehat site, France) on marine fauna. The turbine's source level (SL) was estimated using 19 acoustic drifting transects at distances between 100 m and 2400 m from the device. SL ranges from 118 to 152 dB re 1 μPa @ 1 m in the third-octave bands at frequencies between 40 and 8192 Hz, a noise level comparable to the one emitted by a 19-m boat travelling at 10 kt. The SL was used to estimate the impact of the turbine's noise based on acoustic propagation simulations. The overall acoustic footprint of the device corresponds to a disk of 350 m radius. Our results showed that within this footprint, physiological trauma is improbable, but behavioral disturbance may occur up to 350 m around the device for marine mammals (impact limited by the footprint area), and up to 55 m, 5 m, and 5 m for pollock, sea bass, and a shrimp species, respectively. Feedback from this study shows that the assessment of TTS and PTS risk areas for marine mammals is rather mature, but there are still many uncertainties about the assessment of risk areas for behavioral disturbance and masking for fishes and marine invertebrates.
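The Monte Carlo idea in 4pUWa8 can be illustrated in a few lines: sample the fluctuating source parameters, propagate each sample, and compare the spread of received levels against the single deterministic run. The Gaussian source-level model and simple spreading law below are illustrative stand-ins, not Paracousti itself:

```python
import math
import random

def received_level(sl_db, r_m, k=15.0):
    # simple geometric-spreading stand-in for a full propagation model
    return sl_db - k * math.log10(r_m)

def monte_carlo_rl(r_m, sl_mean=160.0, sl_sd=5.0, n=10_000, seed=0):
    """Median and 5th/95th-percentile received level at range r_m when the
    source level fluctuates (hypothetical Gaussian variability model)."""
    rng = random.Random(seed)
    rls = sorted(received_level(rng.gauss(sl_mean, sl_sd), r_m)
                 for _ in range(n))
    return rls[n // 2], rls[int(0.05 * n)], rls[int(0.95 * n)]

median, p5, p95 = monte_carlo_rl(1000.0)
deterministic = received_level(160.0, 1000.0)
# The deterministic run lands near the median, but the 5th-95th percentile
# band is roughly 16 dB wide -- spread the single idealized run cannot reveal.
```

With correlated uncertainty across many sources in an array, the compounding effect the abstract describes only grows.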
WEDNESDAY AFTERNOON, 28 JUNE 2017
ROOM 306, 1:20 P.M. TO 5:00 P.M.
Session 4pUWb
Underwater Acoustics: Unmanned Vehicles and Acoustics II
Erin M. Fischell, Cochair
Mechanical Engineering, MIT, 77 Massachusetts Ave., 5-204, Cambridge, MA 02139
Martin Siderius, Cochair
ECE Dept., Portland State Univ., Portland State University, P.O. Box 751, Portland, OR 97207
Invited Papers
1:20
4pUWb1. Autonomous underwater vehicle self-localization using a tetrahedral array and passive acoustics. Nicholas R. Rypkema
(Elec. Eng. and Comput. Sci., Massachusetts Inst. of Technol., 77 Massachusetts Ave., Rm. 5-223, Cambridge, MA 02139, rypkema@
mit.edu), Erin M. Fischell, and Henrik Schmidt (Mech. Eng., Massachusetts Inst. of Technol., Cambridge, MA)
The recent development of very low-cost, miniature autonomous underwater vehicles (AUVs) has lowered the barrier toward the
deployment of multiple AUVs for spatially distributed sensing. However, these AUVs introduce size, power, and cost constraints that
prevent the use of traditional approaches for vehicle self-localization, such as Doppler velocity log (DVL)-aided inertial navigation. In
this work, we describe a system that estimates the vehicle’s position relative to a single acoustic transmitter. The transmitter periodically
outputs a linear up-chirp that is synchronously recorded by a tetrahedral ultra-short baseline (USBL) hydrophone array on the AUV.
Real-time 3D phased-array beamforming and matched filtering is performed on-board the vehicle and integrated with AUV pitch, roll, and heading to calculate azimuth, inclination, and range measurements to the transmitter. Finally, a particle filter incorporates these measurements with vehicle speed estimates and a motion model to generate a positional likelihood for the acoustic transmitter relative to the
AUV. This system enables vehicle self-localization in the case where the transmitter is stationary, and is entirely passive on the vehicle,
allowing multiple AUVs to localize using a single transmitter. We describe the processing pipeline of our system, and present results
from AUV field experiments. [Work supported by Battelle, ONR, Lincoln Laboratory and DARPA.]
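The chirp detection at the heart of such a pipeline is a standard matched filter: cross-correlate the received signal with a replica of the transmitted up-chirp and take the correlation peak as the arrival time. A minimal sketch with synthetic data (sample rate, band, delay, and SNR are arbitrary choices, not the authors' parameters):

```python
import numpy as np

fs = 10_000.0                        # sample rate (Hz); all values illustrative
t = np.arange(0, 0.1, 1 / fs)        # 100-ms transmitted pulse
f0, f1 = 500.0, 1500.0               # linear up-chirp band
chirp = np.sin(2 * np.pi * (f0 * t + 0.5 * (f1 - f0) / t[-1] * t ** 2))

# Received signal: the chirp buried in noise at a delay unknown to the receiver
rng = np.random.default_rng(0)
delay = 2500                         # samples
rx = rng.normal(0.0, 1.0, 10_000)
rx[delay:delay + chirp.size] += 0.5 * chirp

# Matched filter: cross-correlate with the replica; the correlation peak marks
# the arrival sample, from which a range measurement follows on the vehicle.
mf = np.correlate(rx, chirp, mode="valid")
arrival = int(np.argmax(np.abs(mf)))
```

Repeating this per hydrophone of the tetrahedral array and comparing arrival phases is what yields the azimuth and inclination measurements described above.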
1:40
4pUWb2. Low cost underwater acoustic localization. Eduardo A. Iscar Ruland and Matthew Johnson-Roberson (Naval Architecture
and Marine Eng., Univ. of Michigan, 2600 Draper Dr, Ann Arbor, MI 48109, eiscar@umich.edu)
Over the course of the last decade, the cost of marine robotic platforms has significantly decreased, which has in part lowered the barriers to exploring and monitoring larger areas of the earth's oceans. However, these advances have been mostly focused on autonomous surface vehicles (ASVs) or shallow-water autonomous underwater vehicles (AUVs). One of the main drivers of high cost in the deep water domain is the challenge of localizing such vehicles using acoustics. Here we propose a novel low-cost underwater modem design to assist in localizing deep water submersibles. The system consists of location-aware anchor buoys at the surface and underwater nodes. We present a comparison of methods and simulation of the proposed algorithms, as well as experimental results, together with
details on the physical implementation to allow its integration into a novel deep sea AUV currently in development.
2:00
4pUWb3. Adaptation of acoustic transmission rates to optimize unmanned underwater vehicle communications. Mae L. Seto
(Defence R&D Canada, #9 Grove St., Dartmouth, NS B2Y 3Z7, Canada, mae.seto@drdc-rddc.gc.ca) and Dainis Nams (GeoSpectrum
Technologies Inc., Dartmouth, NS, Canada)
A framework for on-line characterization of in-water, in situ acoustic transmission conditions, and intelligent adaptation of transmission rates to these conditions, is implemented on-board an unmanned underwater vehicle (UUV). The objective is to optimize use of the acoustic communications channel during collaborative shallow water missions with other UUVs. The software database uses relatively little bandwidth to track the success of transmitted packets, providing operator data tracking in addition to communications-layer visibility into current channel conditions. The rate selector chooses the optimal transmission rate based on an adaptive distance-bin approach. Measurable results are improvements in bandwidth, reduction in modem power usage, and increased visibility into data success compared to traditional, constant-rate acoustic communication patterns. The algorithm, its implementation, and recent in-water validation results are presented.
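A distance-bin rate selector of this general kind can be sketched as a small table keyed on (range bin, rate): record packet outcomes, then pick the fastest rate whose observed success in the current bin clears a threshold. The bin width, rate set, and threshold below are illustrative assumptions, not values from the paper:

```python
from collections import defaultdict

class RateSelector:
    """Pick a modem bit rate per distance bin from observed packet outcomes."""
    RATES = [80, 240, 960, 2400]               # bit/s; slowest = most robust

    def __init__(self, bin_m=500.0, threshold=0.8):
        self.bin_m = bin_m
        self.threshold = threshold
        self.stats = defaultdict(lambda: [0, 0])   # (bin, rate) -> [acked, sent]

    def record(self, range_m, rate, acked):
        key = (int(range_m // self.bin_m), rate)
        self.stats[key][0] += int(acked)
        self.stats[key][1] += 1

    def choose(self, range_m):
        # fastest rate whose observed success in this bin clears the threshold;
        # fall back to the most robust rate when nothing qualifies yet
        b = int(range_m // self.bin_m)
        for rate in sorted(self.RATES, reverse=True):
            acked, sent = self.stats[(b, rate)]
            if sent and acked / sent >= self.threshold:
                return rate
        return min(self.RATES)

sel = RateSelector()
for _ in range(5):                     # the fast rate keeps failing at ~1.2 km
    sel.record(1200.0, 2400, acked=False)
    sel.record(1200.0, 960, acked=True)
```

After these observations, `sel.choose(1300.0)` returns the 960 bit/s rate for that bin, while an unobserved bin falls back to the most robust rate.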
2:20
4pUWb4. Noise cancellation for an autonomous underwater vehicle-towed thin line array through recursive adaptive filtering.
Chi Cheng and Venugopalan Pallayil (National Univ. of Singapore, Acoust. Res. Lab, Singapore, Singapore, cheng@arl.nus.edu.sg)
The digital thin line array (DTLA) developed at ARL, National University of Singapore, has been integrated with different autonomous underwater vehicles (AUVs) and tested in the field to detect, localize, and track underwater targets. For tow speeds not exceeding 4 knots, flow noise has been found not to be a major contributor to performance reduction. However, noise generated by the propulsion system of some of the AUVs presents a major interference to the detection capabilities of the DTLA, especially in low ambient-noise conditions. The method proposed in the literature for platform noise cancellation employs a single adaptive filter; the number of filter taps required in this case would run into the thousands, due to the multipath nature of the AUV noise in the environment of interest, and the solution is slow to converge. To mitigate this problem, we propose a recursive adaptive strategy that employs the least mean-square (LMS) algorithm. Our strategy exploits the sparsity of the underwater channel seen by the DTLA and uses a reference sensor close to the AUV propulsion system. The noise peaks are eliminated through recursive adaptive filter operations after cross-correlating the noise signal at each sensor with the reference. The proposed method has been verified through simulations, and we propose to apply it in a practical application.
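The single-filter baseline the abstract improves on can itself be written in a few lines; the recursive, sparsity-aware variant builds on the same LMS update. Below is the textbook LMS canceller with synthetic propulsion noise and a short multipath channel (filter length, step size, and channel taps are illustrative, not taken from the paper):

```python
import numpy as np

def lms_cancel(primary, reference, taps=32, mu=0.01):
    """Single-filter LMS noise canceller: adapt an FIR filter so the filtered
    reference (propulsion-noise sensor) predicts the noise reaching the
    primary (array hydrophone); the prediction residual is the cleaned signal."""
    w = np.zeros(taps)
    out = np.zeros_like(primary)
    for n in range(taps - 1, len(primary)):
        x = reference[n - taps + 1:n + 1][::-1]   # most recent reference samples
        e = primary[n] - w @ x                    # residual after noise estimate
        w += 2 * mu * e * x                       # LMS weight update
        out[n] = e
    return out

rng = np.random.default_rng(1)
noise = rng.normal(0.0, 1.0, 20_000)                 # propulsion noise (reference)
target = 0.1 * np.sin(2 * np.pi * 0.01 * np.arange(20_000))   # weak signal
primary = target + np.convolve(noise, [0.8, 0.4, 0.2])[:20_000]  # via multipath
cleaned = lms_cancel(primary, noise)
# After convergence, residual power sits far below the raw interference power.
```

With a realistic multipath channel the required `taps` grows into the thousands, which is exactly the convergence problem the recursive scheme addresses.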
Contributed Papers

2:40
4pUWb5. Deployment of a passive acoustic collision alarm for autonomous underwater vehicles. Oscar A. Viquez, Erin M. Fischell, and Henrik Schmidt (Massachusetts Inst. of Technol., 77 Massachusetts Ave., Bldg.
5-204, Cambridge, MA 02139, oviquezr@mit.edu)
Autonomous underwater vehicles (AUVs) operate in increasingly busy environments, and avoiding collisions with ships is an important aspect of the vehicle's decision framework. Advances in low-cost AUV technology have further raised interest in solutions that minimize the need for specialized equipment. A proposed solution is to passively detect and track boats based on their noise. We have developed and demonstrated an algorithm that uses changes in acoustic power to estimate time to collision. A cylindrical propagation model is used for shallow-water environments to relate measured power with range from a source. The time derivative of this result is computed to estimate the time to closest approach for a ship with unknown but constant acoustic source level. Experiments were performed with vessels of various characteristic frequencies and source levels. Acoustic measurements were recorded through off-the-shelf hydrophones. Estimates from the acoustic method were first compared with GPS-based measurements, and later with the AUV's on-board navigation data. Test missions were deployed to alter AUV behavior in response to approaching acoustic sources. Successful estimates and AUV responses were recorded, although improved signal processing and noise filtering are recommended for future implementations.
[Work supported by Lincoln Laboratory and DARPA.]
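The power-based estimate rests on one observation: under cylindrical spreading, received power scales as 1/r, so for a constant-speed approach 1/power is linear in time and extrapolates to zero at the moment of closest approach. A sketch with noiseless synthetic data (the head-on geometry and unit source level are simplifying assumptions for illustration):

```python
import numpy as np

def time_to_collision(times, powers):
    """Zero-crossing of a linear fit to 1/power: under cylindrical spreading
    (power ~ S/r) and a constant-speed head-on approach (r = r0 - v*t),
    1/power is linear in t and reaches zero at the collision time.  The
    source strength S cancels out and never needs to be known."""
    slope, intercept = np.polyfit(np.asarray(times),
                                  1.0 / np.asarray(powers), 1)
    return -intercept / slope

# Synthetic head-on approach: r0 = 1000 m, v = 10 m/s -> collision at t = 100 s
t = np.linspace(0.0, 30.0, 31)
power = 1.0 / (1000.0 - 10.0 * t)     # cylindrical spreading with S = 1
est = time_to_collision(t, power)
```

On real data the fit would run over band-limited, smoothed power estimates, and an oblique track makes the extrapolated zero a conservative bound rather than an exact collision time.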
3:00–3:20 Break
3:20
4pUWb6. Observer-feedback control based acoustic homing beacon for
autonomous underwater vehicles. Caitlin Bogdan, Sean Andersson, and
James G. McDaniel (Boston Univ., 110 Cummington Mall, Boston, MA
02215, cbogdan@bu.edu)
One of the visions for acoustically enabled autonomous underwater vehicles (AUVs) is a fleet of vehicles held on a ship, deployed for a specific mission, and then recovered by the ship. A component of this mission is finding the ship after deployment, since both the ship and the AUVs have likely drifted from their original locations and intended deployments. While many vehicles can surface and use radio and GPS to relocate the ship and navigate towards it, the method presented here allows the ship to be found by deploying an acoustic beacon, while the AUVs use an observer system to navigate towards it. The spatial decay of spreading sound waves provides a spatially dependent variable for the AUV to measure. Then, using a linear model of the dynamics, a linear observer is constructed. The absolute value of the sound pressure readings
are fit to a spatial decay model, which is then linearized to form the observer
equation. By tuning the observer gain, the AUV can directly feed its observations of the sound field into the observer algorithm and generate navigation controls which will allow it to find the beacon. [Sponsored by the
Raytheon Advanced Studies Fellowship.]
3:40
4pUWb7. Quiet micro boats: An inexpensive acoustic sensing suite. Caitlin Bogdan, Robert V. Palladino (Boston Univ., 110 Cummington Mall,
Boston, MA 02215, cbogdan@bu.edu), Elizabeth A. Magliula (NUWCDIVNPT, Newport, RI), and James G. McDaniel (Boston Univ., Boston,
MA)
The Quiet Micro Boat (QMB) is an inexpensive acoustic sensing platform constructed from off-the-shelf hardware and software components to allow for a fleet of acoustic sensors that can be deployed into real-world sensing scenarios at low cost. The QMBs operate using a BeagleBone Black (BBB) as a central processor, with jet propulsion generated by bilge pumps for robust ocean operation. Acoustic data are collected through a hydrophone which interfaces with the BBB, and data are centralized using an XBee/ZigBee radio mesh configuration. The talk will focus on the hardware costs and performance, discuss the design cycle and decisions, and feature some of the acoustics projects that have been tested on the devices.
[Work sponsored by the Naval Sea Systems Command (NAVSEA) Naval
Engineering Education Consortium (NEEC) Contract Number N00174-15C-0022 and by the Raytheon Advanced Studies Fellowship.]
4:00
4pUWb8. Automatic target recognition and geo-location for side scan
sonar imagery. Daniel Scarafoni, Alexander Bockman, and Michael Chan
(Massachusetts Inst. of Technol. Lincoln Lab., 244 Wood St., Lexington,
MA 02420, alexander.bockman@ll.mit.edu)
Ocean and lake floor surveys are most efficiently conducted by active sonar, owing to its favorable propagation range under water relative to other sensing modalities. Range, however, must be traded for resolution, yielding sonar return imagery that challenges both operator and machine to match
sonar return imagery that challenges both operator and machine to match
returns to specific objects. In the case of certain objects, time-range extent
in echo imagery may be exploited by machine learning detection and classification methods. This talk describes the success of such a method when
applied to data collected from a low cost autonomous platform. Further
capabilities such as decision support and detection geolocation are discussed. [This material is based upon work supported under Air Force Contract No. FA8721-05-C-0002 and/or FA8702-15-D-0001. Any opinions,
findings, conclusions, or recommendations expressed in this material are
those of the author(s) and do not necessarily reflect the views of the U.S.
Air Force. Presentation may contain distribution limited material.]
4:20

4pUWb9. The predictability of acoustic receptions on gliders in the Arctic. Lora J. Van Uffelen (Ocean Eng., Univ. of Rhode Island, 215 South Ferry Rd., 213 Sheets Lab., Narragansett, RI 02882, loravu@uri.edu), Sarah E. Webster, Craig M. Lee (Appl. Phys. Lab., Univ. of Washington, Seattle, WA), Lee E. Freitag (Woods Hole Oceanographic Inst., Woods Hole, MA), Peter F. Worcester, and Matthew Dzieciuch (Scripps Inst. of Oceanogr., Univ. of California, San Diego, La Jolla, CA)

Acoustic transmissions from shallow sources in the Arctic Ocean can propagate several hundred kilometers due to the presence of an Arctic acoustic duct. Receptions of these long-range transmissions are complex patterns of arrivals which are strongly dependent upon upper-ocean sound-speed structure. In addition to measuring sound-speed parameters, gliders equipped with acoustic recorders can measure these arrivals and can complement moored receptions, providing data at many ranges with respect to the moored sources. There is a higher degree of uncertainty in glider position compared with moored receivers, but localization can be improved in post-processing with enhanced acoustic predictability. Two acoustic Seagliders were deployed for a short pilot study in late summer 2016 in the vicinity of an array of acoustic tomography sources with frequencies on the order of 250 Hz in the Arctic Ocean, in anticipation of a longer deployment in summer 2017. Source receptions recorded on the gliders are compared with acoustic predictions based on sound-speed profiles generated from environmental data collected on the gliders themselves, to begin to understand the predictability of transmissions received on gliders in an Arctic environment. Acoustic predictions are analyzed for receptions both within the acoustic duct and at depths below the duct.

4:40

4pUWb10. Geoacoustic inversion to study spatial variability and uncertainty along a 14-km seabed survey on the Malta Plateau. Jan Dettmer (Dept. of GeoSci., Univ. of Calgary, 2500 University Dr. NW, Calgary, AB T2N 1N4, Canada, jan.dettmer@ucalgary.ca), Charles W. Holland (Appl. Res. Lab., Penn State Univ., State College, PA), Stan E. Dosso, and Eric Mandolesi (School of Earth and Ocean Sci., Univ. of Victoria, Victoria, BC, Canada)

Seabed variability of continental shelves is well understood at kilometer and centimeter scales; however, mesoscales of several meters are poorly understood. While vertical seismic profiling can provide insights into layer geometries, geoacoustic parameter values are not estimated, and structural images are averaged over ~100 m and typically distorted. Here, we apply automated sequential inversion to seabed reflectivity data recorded using an autonomous underwater vehicle (AUV) on the Malta Plateau. The AUV tows a 32-hydrophone array and a source emitting signals at regular intervals along a 14-km survey track in two frequency bands (900-1300 and 1900-3600 Hz). The reflection data are processed in terms of reflection coefficients, which results in ~1600 data sets, each with a seabed footprint of <20 m. For efficient uncertainty quantification, a particle filter is applied. The inversion provides rich seabed information, with resolution and geoacoustic parameter estimates significantly better than possible with vertical profiling. The survey reveals a low-velocity (<1500 m/s) wedge with low attenuation, initially 1.2 m thick, thinning towards the Sicilian coast and disappearing after 8 km. An erosional, high-velocity boundary is increasingly buried by low-velocity material towards the coast. This boundary is rougher in shallower water, and depressions are filled with material of lower velocity. [Data: CLUTTER JRP, a collaboration of ARL-PSU, DRDC, CMRE, and NRL. Research supported by SERDP and ONR ocean acoustics.]
WEDNESDAY EVENING, 28 JUNE 2017
8:00 P.M. TO 9:30 P.M.
OPEN MEETINGS OF TECHNICAL COMMITTEES
The Technical Committees of the Acoustical Society of America will hold open meetings on Monday and Wednesday. See the list below
for the exact schedule.
These are working, collegial meetings. Much of the work of the Society is accomplished by actions that originate and are taken in these
meetings including proposals for special sessions, workshops, and technical initiatives. All meeting participants are cordially invited to
attend these meetings and to participate actively in the discussion.
Committees meeting on Monday, 26 June

Committee                                     Start Time    Room
Acoustical Oceanography                       8:00 p.m.     310
Animal Bioacoustics                           8:00 p.m.     313
Architectural Acoustics                       8:00 p.m.     207
Engineering Acoustics                         8:00 p.m.     204
Physical Acoustics                            8:00 p.m.     210
Psychological and Physiological Acoustics     8:00 p.m.     311
Structural Acoustics and Vibration            8:00 p.m.     312

Committees meeting on Wednesday, 28 June

Committee                                     Start Time    Room
Biomedical Acoustics                          8:00 p.m.     312
Musical Acoustics                             8:00 p.m.     200
Noise                                         8:00 p.m.     203
Signal Processing in Acoustics                8:00 p.m.     302
Speech Communication                          8:00 p.m.     304
Underwater Acoustics                          8:00 p.m.     310
THURSDAY MORNING, 29 JUNE 2017
ROOM 207, 7:55 A.M. TO 12:20 P.M.
Session 5aAAa
Architectural Acoustics and ASA Committee on Standards: Uncertainty in Laboratory Building
Acoustic Standards
Matthew V. Golden, Cochair
Pliteq, 616 4th Street, NE, Washington, DC 20002
Daniel Urbán, Cochair
A & Z Acoustics, s.r.o., S. H. Vajanského 43, Nové Zámky, 94079, Slovakia
Chair’s Introduction—7:55
Invited Papers
8:00
5aAAa1. Remarks on the definition of airborne sound insulation and consequences for uncertainties. Volker Wittstock (Physikalisch-Technische Bundesanstalt, Bundesallee 100, Braunschweig 38118, Germany, volker.wittstock@ptb.de)
Airborne sound insulation is explicitly defined as the ratio between incident and transmitted sound power. Since sound power cannot
be measured directly, field quantities like sound pressure are measured to derive the desired sound power. The relation between sound
pressure and sound power depends on the nature of the sound field, i.e., to which extent it is a diffuse sound field. This is the main reason
why it is impossible to derive an analytic equation for the measurement uncertainty of a sound power and thus of a sound insulation.
The current practice is to define standardized test facilities for the measurement of airborne sound insulation. The uncertainty of measured sound insulations is then approximated by the standard deviation of reproducibility determined by interlaboratory tests. This is equivalent to changing the definition of airborne sound insulation: it is no longer the sound power ratio but the mean value of the sound insulation measured in many or all conceivable laboratories meeting the required specifications. Thus, laboratory specifications
become part of the definition of airborne sound insulation. The contribution highlights the background of the different definitions and
shows consequences for the uncertainty of airborne sound insulation.
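In practice the power ratio is approximated from pressure measurements through the diffuse-field working formula used by ISO 10140 and ASTM E90. The helper below evaluates that standard relation; the numeric example is made up:

```python
import math

def sound_reduction_index(L1_db, L2_db, S_m2, A_m2):
    """Diffuse-field working formula R = L1 - L2 + 10*log10(S/A):
    L1, L2 = average sound pressure levels in source/receiving rooms (dB),
    S = specimen area (m^2), A = receiving-room equivalent absorption area
    (m^2).  Exact only insofar as both rooms support diffuse fields -- which
    is precisely the assumption driving the uncertainty discussed here."""
    return L1_db - L2_db + 10.0 * math.log10(S_m2 / A_m2)

# Made-up example: 10 m^2 wall, A = 20 m^2, 40 dB level difference
R = sound_reduction_index(65.0, 25.0, 10.0, 20.0)
```

The departure of real test rooms from the diffuse-field ideal is what prevents an analytic uncertainty budget for R and motivates the interlaboratory approach described above.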
8:20
5aAAa2. Some practical issues affecting repeatability and reproducibility in laboratory transmission loss tests. Christoph Hoeller
and Jeffrey Mahn (Construction, National Res. Council Canada, 1200 Montreal Rd., Ottawa, ON K1A 0R6, Canada, christoph.hoeller@
nrc.ca)
The ASTM standard E90 defines the measurement of transmission loss, equivalent to the sound reduction index defined in ISO
10140. ASTM E90 and ISO 10140 specify requirements for the laboratory, the test procedure and conditions, and for preparation and
mounting of the specimen under test. Despite the strict requirements in ISO 10140 and the somewhat less strict requirements in ASTM
E90, transmission loss results for nominally identical specimens often vary if measured in different laboratories, and sometimes even if
measured again in the same laboratory. In practice, there are many factors that affect the repeatability or reproducibility of a transmission loss test for a given specimen. This presentation will not attempt to systematically cover all different sources of uncertainty, but
instead will highlight some practical issues commonly encountered in laboratory transmission loss tests. Examples will be presented for
a number of issues, including the effect of leakage through the specimen under test, the effect of varying temperature and humidity in
the test chambers, and the effect of re-using gypsum board.
8:40
5aAAa3. Cross-laboratory reproducibility of sound transmission loss testing with the same measurement and installation team.
Benjamin Shafer (Tech. Services, PABCO Gypsum, 3905 N 10th St., Tacoma, WA 98406, ben.shafer@quietrock.com)
Previous cross-correlative statistical research studies, combined with the results from past laboratory sound transmission loss round
robin testing, illustrate that the laboratory-to-laboratory reproducibility of sound transmission loss testing is inordinately and unacceptably low. Industry building construction professionals use the results of laboratory sound transmission loss testing to determine acoustics-related building code compliance. As such, a forensic analysis of laboratory sound transmission loss is needed to narrow potential
causes of cross-laboratory variability to a few primary sources. As a first step in this process, sound transmission loss measurements for
two different assemblies are compared between multiple laboratories, each with their own different technicians and installation crews.
Two different assemblies are then compared between multiple laboratory facilities with the same measurement and installation crew.
The use of the same measurement crew at two different facilities resulted in much better statistical reproducibility than all previous
reproducibility studies.
9:00
5aAAa4. Variations in impact sound level as a function of tapping machine position. John LoVerde and David W. Dong (Veneklasen Assoc., 1711 16th St., Santa Monica, CA 90404, wdong@veneklasen.com)
Impact insulation class testing per ASTM E 492 requires measurement of the sound field at exactly four tapping machine positions.
Previous research by the authors [J. Acoust. Soc. Am. 121, 3113 (2007), J. Acoust. Soc. Am. 122, 2955 (2007)] indicated that for field
tests, the variation between tapping machine positions was small. To our knowledge, a systematic investigation has not been performed
for tapping machine positions in the laboratory, and some recent results indicate that the variation may be larger than expected. Large
variation in sound level may be inherent to the method, or may point to problems in construction or installation of flooring materials.
The variations with tapping machine position are analyzed for a set of laboratory tests, and the previous field test studies are updated
with additional data. The authors investigate possible changes to the standards to mandate a maximum allowable variation between tapping machine positions, and to require additional positions as necessary.
Contributed Paper
9:20
5aAAa5. Importance of correlation between reverberation times for calculating the uncertainty of measurements according to ISO 354 and ISO 17497-1. Markus Müller-Trapet (National Res. Council, ISVR, Univ. of Southampton, Southampton SO17 1BJ, United Kingdom, M.F.Muller-Trapet@soton.ac.uk)
The calculation of measurement uncertainties follows the law of error propagation as described in the Guide to the Expression of Uncertainty in Measurement (GUM). The result can be expressed as a contribution of the variances of the individual input quantities and an additional term related to the correlation between the input quantities. In practical applications, the correlations are usually neglected. This has, e.g., led to the expression included in Annex A of ISO 17497-1 to calculate the precision of the measurement of random-incidence scattering coefficients. To determine whether it is actually justified to neglect the input correlations, this contribution investigates the correlations between the reverberation times used to determine the random-incidence absorption coefficient (ISO 354) and scattering coefficient (ISO 17497-1) in a reverberation chamber. The data used here are taken from measurements in a real-scale and a small-scale reverberation chamber. It is found that for ISO 354 correlations can be neglected. However, for ISO 17497-1, it is important to take correlations into account to obtain the correct measurement uncertainty using error propagation.
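The GUM propagation law this abstract refers to can be sketched in a few lines of Python. The reverberation-time difference and all numerical values below are illustrative assumptions, not data from the paper; the point is only how the correlation term enters the combined uncertainty.

```python
import math

def combined_uncertainty(c, u, r):
    """GUM law of propagation of uncertainty, including correlations:
      u_y^2 = sum_i c_i^2 u_i^2 + 2 sum_{i<j} c_i c_j r_ij u_i u_j
    c: sensitivity coefficients dy/dx_i
    u: standard uncertainties u(x_i)
    r: correlation matrix r(x_i, x_j)
    """
    n = len(c)
    var = sum(c[i] ** 2 * u[i] ** 2 for i in range(n))
    var += 2 * sum(c[i] * c[j] * r[i][j] * u[i] * u[j]
                   for i in range(n) for j in range(i + 1, n))
    return math.sqrt(var)

# Illustrative example: y = T1 - T2, two reverberation times with
# u = 0.05 s each. For a difference, neglecting a strong positive
# correlation (r = 0.9) overstates the combined uncertainty.
u_uncorr = combined_uncertainty([1, -1], [0.05, 0.05], [[1, 0], [0, 1]])
u_corr = combined_uncertainty([1, -1], [0.05, 0.05], [[1, 0.9], [0.9, 1]])
```

For quantities built from differences of correlated reverberation times, as in ISO 17497-1, the cross term can dominate, which is why the abstract finds it must not be neglected there.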
Invited Papers
9:40
5aAAa6. Addressing the lack of statistical control in acoustical testing laboratories. John LoVerde and David W. Dong (Veneklasen
Assoc., 1711 16th St., Santa Monica, CA 90404, jloverde@veneklasen.com)
In order to be useful in comparing products, evaluating assemblies, or performing research, acoustical laboratory tests must be precise. This means that the “chance” variation due to any external variables must be relatively small and randomly distributed. This defines
a measurement method that is in a state of statistical control, in which case the precision of the test method (the size of these small
chance variations) can be measured [J. Acoust. Soc. Am. 130, 2355 (2011)]. The authors have observed many airborne and impact insulation tests performed at accredited acoustical laboratories. While controlled behavior is sometimes seen, it is also sometimes observed
that a set of results shows unpredictable behavior, abrupt changes, excess scatter, or unexplained variations, which is symptomatic of a
loss of statistical control [J. Acoust. Soc. Am. 137, 2216 (2015)]. It is not merely that the precision in the measurement is larger than
desired; a lack of statistical control means that there are large unknown variables and the precision of the method cannot even be defined.
Experiences with laboratories in which this has occurred are shared, and procedures and safeguards to address the issue are discussed.
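The notion of statistical control discussed above can be illustrated with a generic Shewhart-style check: results are flagged when they fall outside control limits derived from a baseline period. The three-sigma rule and the STC-like numbers are standard control-chart practice invented for illustration, not the authors' procedure.

```python
import statistics

def out_of_control(baseline, new_values, k=3.0):
    """Flag indices of new_values lying outside mean +/- k*stdev
    of a baseline series (a basic Shewhart-style control check)."""
    m = statistics.mean(baseline)
    s = statistics.stdev(baseline)
    return [i for i, v in enumerate(new_values) if abs(v - m) > k * s]

# Hypothetical single-number results: a stable baseline, then one
# test that jumps well outside the historical chance variation.
baseline = [50, 51, 49, 50, 52, 48, 51, 50]
flagged = out_of_control(baseline, [50, 55, 51])  # flags index 1
```

The catch the abstract points to is that without a controlled baseline, the limits themselves are undefined, so no such check can even be constructed.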
10:00–10:20 Break
10:20
5aAAa7. Numerical study on the repeatability and reproducibility of laboratory building acoustic measurements. Arne Dijckmans, Lieven De Geetere, and Bart Ingelaere (Belgian Bldg. Res. Inst., Lombardstraat 42, Brussels B-1000, Belgium, arne.dijckmans@
gmail.com)
An important issue in building acoustics is the significant variability in laboratory test results that numerous round robin tests have
indicated. The current wish to include the frequency bands 50-80 Hz in the procedures to determine single-number quantities has
prompted new discussions. In this paper, wave based models are used to numerically investigate the fundamental repeatability and reproducibility. Regarding sound insulation measurements, both the pressure method (ISO 10140-2) and the intensity method (ISO 15186-1
and ISO 15186-3) are investigated in the frequency range 50-200 Hz. Flanking transmission measurements (ISO 10848) are also studied.
The investigation includes the repeatability of the different measurement procedures, which depends on the influence of the source and
receiver positions. The reproducibility in different test facilities is studied by looking at the influence of geometrical parameters like
room and plate dimensions. Increasing the number of source or receiver positions has little effect on the overall uncertainty as the reproducibility uncertainty is generally much larger than the repeatability uncertainty. For small-sized test elements, the reproducibility of the
intensity method is better. For heavy walls and lightweight double constructions, however, the predicted uncertainty is similar for the
three measurement methods.
Contributed Paper
10:40
5aAAa8. Reproducibility of a metric for sound reflectivity. Felicia Doggett and Sooch San Souci (Metropolitan Acoust., LLC, 40 W. Evergreen
Ave., Ste. 108, Philadelphia, PA 19118, f.doggett@metro-acoustics.
com)
Past attempts at developing a system able to quantify the sound reflectivity of a surface have been hindered by the sheer difficulty of repeatability
and reproducibility. Where and why these challenges arise, and how they
can be overcome, is discussed. Sound fields resulting from direct sound
energy projected at various incident angles toward materials and assemblies
are compared, as well as how laboratory setups and specimen changeovers
can affect outcomes. Can precision be increased and bias avoided? A proposed system based on software-driven microprocessors and robotic precision is very fast, performing with less than 0.1
mm error over 50 repeated runs. Its repeatability
and reproducibility are approximately equivalent to those of today's ink-jet printers.
As a recognized acoustic metric, the Sound Reflectivity Index could provide
vital data that would accompany all products. For acousticians, a higher
order of accuracy would be possible in designs involving envelopment,
early reflections, speech comprehension, speech privacy, sound enhancement, and general room acoustics.
Invited Papers
11:00
5aAAa9. Comparisons of laboratory repeatability and performance when increasing reverberation chamber volume. Douglas
Winker (ETS-Lindgren, Inc., 3502 Hamlet Cv, Round Rock, TX 78664, douglas.winker@ets-lindgren.com), Brian Stahke, and Michael
C. Black (ETS-Lindgren, Inc., Cedar Park, TX)
ETS-Lindgren/Acoustic Systems has operated an acoustics laboratory in Austin, Texas since 1985. The original laboratory consisted
of a reverberation chamber suite with a source chamber volume of 200 cubic meters and a receive chamber volume of 254 cubic meters.
In 2008, ETS-Lindgren/Acoustic Systems relocated and constructed a completely new laboratory. The new lab features a reverberation
chamber suite with a source chamber volume of 208 cubic meters and a receive chamber volume of 408 cubic meters. During the transition, all measurement equipment, test frames, and proficiency specimens were maintained and are constant between both laboratories.
Repeatability data for ASTM E90 and ASTM C423 will be discussed and compared for each laboratory reverberation chamber suite.
The changes in the results will be shown with respect to the different chamber sizes. Proficiency panels constructed to ASTM round robin guidelines have been maintained since original construction and form the baseline for this comparison. Comparison of low-frequency performance between the two chamber sizes will be emphasized with respect to current reverberation construction guidelines
and limits. Intra-laboratory uncertainties over time will also be discussed and compared to the published uncertainties of the ASTM E90
and C423 standards.
11:20
5aAAa10. Update on the current ASTM building acoustics inter-laboratory studies. Matthew V. Golden (Pliteq, 616 4th St., NE,
Washington, DC 20002, mgolden@pliteq.com)
In the last few years, ASTM E33 committee on Building and Environmental Acoustics has undertaken a program to improve the precision and bias statements in its laboratory and field standards. This is done through analysis of inter-laboratory studies. At the time of
writing, there are approximately 10 such inter-laboratory studies either currently in process or recently completed. This paper will give a
brief overview of these activities. It will highlight those studies involving laboratory building acoustics standards, including standards on
sound transmission loss, impact sound transmission, sound absorption in reverberation rooms, and sound attenuation between rooms
sharing a common ceiling plenum.
11:40
5aAAa11. On the uncertainty of measurement of dynamic stiffness of resilient materials. Krister Larsson (Bldg. Technology/Sound
& Vibrations, RISE Res. Institutes of Sweden, Box 857, Boras SE-50115, Sweden, krister.larsson@sp.se)
The apparent dynamic stiffness of resilient materials used for example under floating floors is measured according to the standard
ISO 9052-1:1989 (EN 29052-1:1992). Basically, the material under test is loaded by a load plate corresponding to 200 kg/m2 and the
resonance frequency of the first vertical mode of the mass-spring system formed by the load plate and the resilient material under test is
determined. The resonance frequency then gives the dynamic stiffness. The standard allows for several excitation techniques such as
swept sine, continuous noise, or impact excitation. Additionally, vibration excitation at the base as well as force excitation on the load
plate is allowed. In this study, the uncertainty because of excitation of multiple vibration modes is investigated in detail. A model for the
load plate on an elastic foundation representing the test setup is developed. The model is verified against measurements and a parameter
study is performed. The results show that additional modes may be excited depending on the excitation, which might lead to erroneous
results. Suggestions for procedures to take the excitation of multiple modes into account and to improve the uncertainty of the method
are given based on the findings.
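The mass-spring relation underlying the method can be written down directly. The formula below is the standard single-degree-of-freedom resonance result (apparent dynamic stiffness per unit area from the measured resonance frequency); the 20 Hz resonance is an invented example, not a value from the paper.

```python
import math

def dynamic_stiffness(f_res_hz, mass_per_area=200.0):
    """Apparent dynamic stiffness per unit area:
        s' = 4 * pi^2 * f_r^2 * m'   [N/m^3]
    with m' in kg/m^2. The default 200 kg/m^2 matches the load-plate
    loading described in the abstract."""
    return 4.0 * math.pi ** 2 * f_res_hz ** 2 * mass_per_area

# Invented example: a 20 Hz resonance gives roughly 3.16 MN/m^3.
s_prime = dynamic_stiffness(20.0)
```

Because s' scales with the square of the resonance frequency, any bias in identifying f_r (e.g., from exciting additional modes, as the abstract warns) is amplified in the reported stiffness.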
12:00
5aAAa12. Impact of sound insulation quality in dwellings on its financial value. Andrea Vargova (Faculty of Civil Eng., Dept. of
Bldg. Construction, STU Bratislava, Radlinskeho 11, Bratislava 81005, Slovakia, Andrea.Vargova@stuba.sk), Herbert Muellner (TGM
Wien, Vienna, Austria), Rudolf Exel (Exel, Vienna, Austria), and Monika Rychtarikova (Faculty of Architecture, KU Leuven, Gent,
Belgium)
A lot of research has already been done on the assessment and improvement of the sound insulation quality of partitioning elements in
dwellings. Fewer studies have addressed the impact of acoustic improvements on those elements. Very little information is available on
the impact of sound insulation properties on the global real estate value of dwellings as a whole. This contribution reports on the analysis
of questionnaires and interviews concerning the overall satisfaction of dwellers with the acoustic comfort of their homes. The importance that people living in apartment flats in Slovakia give to their acoustic comfort at home is addressed, using as a measure the part of their budget
that they would potentially consider spending to improve it.
THURSDAY MORNING, 29 JUNE 2017
ROOM 206, 8:00 A.M. TO 8:40 A.M.
Session 5aAAb
Architectural Acoustics: Topics in Architectural Acoustics Related to Materials and Modeling
Kenneth W. Good, Chair
Armstrong, 2500 Columbia Av, Lancaster, PA 17601
Contributed Papers
8:00
5aAAb1. Exploring novel beautiful, durable, hard, smooth, acoustically absorbent materials. Randall J. Rehfuss (Architecture, Virginia Tech, 224 Dunbar Ave., Dublin, VA 24084, rjrehfus@vt.edu), Michael Ermann, Martha Sullivan, Andrew Hulva (Architecture, Virginia Tech, Blacksburg, VA), and Alexander M. Kern (Mech. Eng., Virginia Tech, Darmstadt, Germany)
Acoustically absorbent materials are generally "fuzzy." This texture converts sound energy into heat through friction, thereby reducing the amount of sound that reflects back into the room.[1] In contrast, this research explores smooth-surface alternatives for speech-frequency sound absorption by retracing the efforts of Wallace C. Sabine and his partner, architect and thin-structural-tile builder Raphael Guastavino. The two, each a luminary in his respective field at the time, teamed up in the early 1900s to manufacture sound absorbent ceramic tiles. After attempting to recreate and test the Sabine-Guastavino tiles, we sought to improve upon the concept of a porous, fire-proof, and acoustically absorbent tile to include a smooth surface that is easily sanitized, for use in facilities like child care centers, schools, and hospitals. While there are materials on the market that provide a smooth surface of micro-pores that open to a "fuzzy" material behind the surface, this novel material would be a continuous unit that would enhance the aesthetics of a space. To achieve this, we used an impedance tube to test the absorption of contemporary materials ranging from uncooked ramen noodles to cementitious composites, while simultaneously applying innovative methods and technology to the firing of clay bodies to achieve the desired surface smoothness and acoustic absorption in the vocal frequency domain.
8:20
5aAAb2. Modeling the inhomogeneous reverberant sound field within the acoustic diffusion model: A statistical approach. Cedric Foy (Cerema, 11 rue Jean Mentelin, Strasbourg 67200, France, cedric.foy@cerema.fr), Vincent Valeau (Institut Pprime UPR - Bât. B17, Poitiers Cedex 9, France), Judicaël Picaut, Nicolas Fortin (Ifsttar, Bouguenais Cedex, France), Anas Sakout (Pôle Sci. et Technologie, LaSie, La Rochelle Cedex 1, France), and Christian Prax (Institut Pprime UPR - Bât. B17, Poitiers Cedex 9, France)
In room acoustics, starting from the sound particle concept, it is now well established that the reverberant field can be modeled by a diffusion equation governing the acoustic energy density together with a gradient equation for the acoustic intensity. The main works on the development of an acoustic diffusion model have highlighted the major role of one coefficient of the model, the so-called diffusion coefficient. Indeed, the main phenomena influencing the reverberant sound field can be modeled by proposing an appropriate expression of this diffusion coefficient. The work presented here deals with the modeling of inhomogeneous reverberant sound fields induced by geometric disproportions, and investigates, in particular, the case of long rooms. Previously, the ability of the acoustic diffusion model to adequately describe the spatial variations of the sound field along the room was demonstrated by considering a diffusion coefficient that is spatially dependent. We propose here to extend this work by determining an empirical law for the diffusion coefficient, depending on both the scattering and absorption coefficients of the walls of the room. The approach proposed here is statistical and is based on the least squares method. Several linear models are proposed, for which a rigorous statistical analysis makes it possible to assess their relevance.
THURSDAY MORNING, 29 JUNE 2017
ROOM 208, 8:20 A.M. TO 12:20 P.M.
Session 5aAAc
Architectural Acoustics: Simulation and Evaluation of Acoustic Environments III
Michael Vorländer, Cochair
ITA, RWTH Aachen University, Kopernikusstr. 5, Aachen 52056, Germany
Stefan Weinzierl, Cochair
Audio Communication Group, TU Berlin, Strelitzer Str. 19, Berlin 10115, Germany
Ning Xiang, Cochair
School of Architecture, Rensselaer Polytechnic Institute, Greene Building, 110 8th Street, Troy, NY 12180
Invited Papers
8:20
5aAAc1. The perceptual evaluation of acoustical environments I: Simulated environments. Stefan Weinzierl (Audio Commun.
Group, TU Berlin, Strelitzer Str. 19, Berlin, Berlin 10115, Germany, stefan.weinzierl@tu-berlin.de)
The successful design and development of numerical models for the sound propagation in rooms crucially depends on the existence
of appropriate measures for the quality assessment of the modeling results. In the perceptual domain, these should include single-number
measures for the overall assessment of the simulation as well as differential measures, enabling experts to identify specific shortcomings
in the selected modeling approach and its implementation. In the SEACEN consortium, the "authenticity" and the "plausibility" of virtual acoustic environments have been established, measuring the perceived identity with an external (given) or internal reference. Moreover, a Spatial Audio Quality Inventory (SAQI) was developed by a focus group of experts for virtual acoustics, as a metric for the
differential diagnosis of virtual environments.
8:40
5aAAc2. The perceptual evaluation of acoustical environments II: Natural environments. Stefan Weinzierl (Audio Commun.
Group, TU Berlin, Strelitzer Str. 19, Berlin, Berlin 10115, Germany, stefan.weinzierl@tu-berlin.de)
The perceptual evaluation of natural acoustic environments requires an assessment of the “room acoustical impression.” After some
theoretical considerations on the nature of this perceptual construct and the shortcomings of existing tools, we present a new, empirically substantiated approach for the development of a corresponding measuring instrument. It relies crucially on the room acoustical
simulation of a representative pool of acoustical environments for speech and music and their auralization for different acoustical sources. The resulting room acoustical quality inventory, together with a database of room acoustical models and their monaural and binaural
transfer functions, can be used as a ground truth for room acoustical analysis and perception.
9:00
5aAAc3. Tools for the assessment of sound quality and quality of experience in original and simulated spaces for acoustic performances. Jens Blauert (Inst. of Commun. Acoust., Ruhr-Universitaet Bochum, Bochum, North-Rhine Westphalia 44780, Germany,
jens.blauert@rub.de), Jonas Braasch (Ctr. for Cognition, Commun. and Culture, Rensselaer Polytechnic Inst., School of Architecture,
Troy, NY), and Alexander Raake (Inst. of Media Technol., Audio-visual Group, Tech. Univ. of Ilmenau, Ilmenau, Thuringa, Germany)
Sound quality and Quality of Experience are complex mental constructs. Their assessment, consequently, requires consideration of
multiple different aspects, each of which may demand different evaluation and measurement methods going beyond current standards
such as ISO 3382 for room acoustics. In this talk, relevant quality aspects are identified including signal-related, psychological, semiotic,
and further cognitive ones. For each of them, available evaluation and measurement methods will be discussed, using the amount of
abstraction involved as an ordering principle. Methods that can extract binaural cues from a running signal, such as those provided by
BICAM (Binaurally Integrated Cross-correlation/Auto-correlation Mechanism) for room-impulse responses, or those provided by the
TWO!EARS model framework (www.twoears.eu) will be highlighted, including consideration of the processing of cross-modal cues. A
special focus will be put on the question of to what extent human assessors can be replaced with current algorithmic (instrumental)
methods as based on computer models of the human hearing and subsequent cognitive processing. The task of collecting proper reference data will be considered, whereby results from the AABBA Initiative (Aural Assessment By means of Binaural Algorithms) and the
TWO!EARS project (Reading the world with TWO!EARS) will be incorporated. [Work supported by FET-Open FP7-ICT-2013-C618075 and NSF BCS-1539276.]
9:20
5aAAc4. Psycho-acoustic evaluation of physically-based sound propagation algorithms. Atul Rungta (Comput. Sci., Univ. of North
Carolina at Chapel Hill, 250 Brooks Bldg., Columbia St., Chapel Hill, NC 27599-3175), Roberta Klatzky (Carnegie Mellon Univ., Pittsburgh, PA), Ming C. Lin, and Dinesh Manocha (Comput. Sci., Univ. of North Carolina at Chapel Hill, Chapel Hill, NC, dmanocha@
gmail.com)
Recently, many physically accurate algorithms have been proposed for interactive sound propagation based on geometric and wave-based methods. For these applications, a key question is whether the improved physical accuracy of these algorithms offers
perceptual benefits over prior interactive methods? In this work, we present results from two studies that compare listeners’ perceptual
response to both accurate and approximate propagation algorithms that are used to simulate two key acoustic effects: diffraction and
reverberation. For each effect, we evaluate whether increased numerical accuracy of a propagation algorithm translates into increased
perceptual differentiation in interactive environments. Our results suggest that auditory perception indeed benefits from the increased accuracy, with subjects showing better perceptual differentiation when experiencing the more accurate propagation method. The diffraction experiment exhibits a more linearly decaying sound field (with respect to the diffraction angle) for the accurate diffraction method,
while the reverberation experiment shows that more accurate reverberation results in better assessment of room volume and acoustic distance perception. In case of room volume, accurate reverberation, after modest user experience, results in a near-logarithmic response to
increasing room volume. Finally, in the case of acoustic distance perception, accurate reverberation shows less distance compression as
compared to an approximate, filter-based reverberation method.
9:40
5aAAc5. A room acoustical quality inventory. Steffen Lepa and Stefan Weinzierl (Audio Commun. Group, TU Berlin, Einsteinufer
17, Berlin, Berlin 10587, Germany, steffen.lepa@tu-berlin.de)
In a two-step procedure, a new psychological measuring instrument has been developed for the acoustical perception of room acoustical environments for speech and music. As a first step, an expert focus group of room acoustical scholars and consultants was formed
in order to reach a consensus on a vocabulary as complete as possible to describe room acoustical qualities. In a second step, this inventory was used for the evaluation of 35 different simulated room acoustical environments presented by binaural synthesis to 190 subjects
of different age and expert level. Based on the ratings of this room sample, a comprehensive psychometric analysis was performed in
order to evaluate the preliminary item battery with respect to reliability, discriminative power, and redundancy. The resulting room
acoustical quality inventory, together with the database of room acoustical models as well as their monaural and binaural transfer functions and their perceptual evaluation, can be used as a ground truth for the validation of existing and newly developed room acoustical
parameters.
10:00–10:20 Break
10:20
5aAAc6. A new method for quantifying binaural decoloration based on parametrically altering spectral modulations. Andreas
Haeussler and Steven van de Par (Acoust. Group, Cluster of Excellence "Hearing4all," Univ. Oldenburg, Carl-von-Ossietzky-Straße 9-11, 26129 Oldenburg, Germany, andreas.haeussler@uni-oldenburg.de)
The reproduction of sound in a reverberant environment leads to spectral modifications that are typically perceived as coloration. It
is well known that a dichotic instead of a diotic presentation of such signals leads to reduced perception of coloration [Salomon (1995),
Ph.D. Thesis, TU Delft]. In this contribution, a new method is presented for quantifying the reduction in coloration due to dichotic presentation. In this method, the first part of a binaural room impulse response, predominantly responsible for the spectral envelope, is
extracted, and the spectral modulations are parametrically modified and reinstated on a minimum-phase impulse response that is convolved with pink noise and musical instruments and presented diotically to the listeners. Their task is to adaptively adjust the spectral
modulations until this diotically presented stimulus sounds equally colored as a dichotically presented binaural signal. Results show that
the spectral fluctuations, expressed as standard deviations in decibels, need to be reduced by 1-2 dB for the diotic presentation to sound
equally colored as the dichotic presentation.
10:40
5aAAc7. Inter-aural cross-correlation measured during symphony orchestra performance in big concert halls. Magne Skalevik
(AKUTEK and Brekke&Strand, Bolstadtunet 7, Spikkestad 3430, Norway, msk@brekkestrand.no)
Spatial impression in concert hall listeners is known to depend on lateral reflections causing differences between the sound at the left
ear and the right ear. Such differences can be measured, e.g., by the so-called inter-aural cross-correlation IACC in a binaural signal
pair, i.e., a signal pair from microphones placed at the ear canal entrances. IACC data from binaural impulse responses (BRIRs) are commonly reported. In contrast, little attention has been paid to running IACC, i.e., IACC(t), during music performance. Therefore, in 2011
this author launched the Binaural Project in order to collect binaural signal data from concerts with symphony orchestras. An
analysis of IACC(t) from more than 600 minutes of binaural recordings during concerts in many big concert halls is presented. Several
famous halls, including Boston Symphony Hall, are included in the data. Among the questions to be answered are: Can we observe from
the data that concert halls make a difference to IACC(t)? If so, is this variation small or big compared to the variation from one moment
to another, bar to bar, movement to movement, from one orchestra to another, and so on? On the other hand, if we cannot observe significant hall-to-hall differences, several new questions would arise, including: How can we maintain that listeners are able to perceive hall-to-hall differences in ASW and LEV? And why do the reported hall-to-hall differences in IACC from impulse responses (ISO 3382) not
make an observable difference to running IACC(t)?
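Running IACC(t) as described can be sketched as a frame-by-frame normalized cross-correlation of the two ear signals, maximized over interaural lags up to ±1 ms. The frame length and hop below are assumptions for illustration, not parameters taken from the abstract.

```python
import numpy as np

def running_iacc(left, right, fs, frame_ms=100.0, max_lag_ms=1.0):
    """IACC(t): per-frame maximum of the normalized interaural
    cross-correlation over lags |tau| <= max_lag_ms (sketch)."""
    frame = int(fs * frame_ms / 1000.0)
    max_lag = int(fs * max_lag_ms / 1000.0)
    iacc = []
    for start in range(0, len(left) - frame + 1, frame):
        l = left[start:start + frame]
        r = right[start:start + frame]
        denom = np.sqrt(np.sum(l * l) * np.sum(r * r))
        if denom == 0.0:
            iacc.append(0.0)
            continue
        best = 0.0
        for lag in range(-max_lag, max_lag + 1):
            # shift one channel against the other within the frame
            if lag >= 0:
                c = np.sum(l[lag:] * r[:frame - lag])
            else:
                c = np.sum(l[:frame + lag] * r[-lag:])
            best = max(best, abs(c) / denom)
        iacc.append(best)
    return np.array(iacc)
```

For identical left and right signals the per-frame value is 1 (fully correlated); lateral reflections decorrelate the ear signals and pull IACC(t) below 1, which is the quantity tracked over the concert recordings.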
11:00
5aAAc8. Influence of visual rendering on the acoustic judgments of a theater auralization. Barteld N. Postma (LIMSI, CNRS, Université Paris-Saclay, Rue John von Neumann, Campus Universitaire d'Orsay, Bât 508, Orsay
91403, France, bart.postma@limsi.fr) and Brian F. Katz (Lutheries - Acoustique - Musique, Inst. d'Alembert, UPMC/CNRS, Paris,
France)
Auralizations have become more prevalent in architectural acoustics. Auralizations in listening tests are typically presented in a unimodal fashion (audio only). However, in everyday life one perceives complex multi-modal information. Multi-sensory research has
shown that visuals can influence auditory perceptions, such as with the McGurk and ventriloquist effects. However, few studies have
investigated the influence of visuals on room acoustic perception. Additionally, in the majority of previous studies, visual cues were represented by photographs either with or without visuals of the source. Previously, a virtual reality framework combining a visible animated source in a virtual room with auralizations was conceived, enabling multi-modal assessments. The framework is based on the
BlenderVR scene graph for visual rendering, with Max/MSP for the real-time audio rendering of 3rd-order HOA room impulse responses
(RIRs) in tracked binaural. CATT-Acoustic TUCT was used to generate the HOA RIRs. Using this framework, two listening tests were
carried out: (1) a repeat of a prior audio-only test comparing auralizations with dynamic voice directivity to static orientation and (2) a
test comparing dynamic voice auralizations with coherent or incoherent visuals with respect to seating position. Results indicate that
judgments of several room acoustic attributes are influenced by the presence of visuals.
11:20
5aAAc9. Audio-visual room perception: Cross-modal and interaction effects put to test. Hans-Joachim Maempel and Michael Horn
(Staatliches Institut für Musikforschung, Tiergartenstraße 1, Berlin, Berlin 10785, Germany, m.horn@posteo.de)
The perception of rooms involves different modalities, particularly hearing and sight. Fundamental issues such as the acoustical and
optical shares in certain perceptual features have, however, not been experimentally addressed yet. We investigated to what extent the
acoustical and optical properties of performance rooms influenced auditory and visual features. Specifically, cross-modal effects and
interaction effects were a matter of particular interest. We also quantified the respective proportion of acoustical and optical information
accounting for the perceptual features. The main preconditions for such an undertaking are the dissociation of the acoustical and optical
components of the stimuli, the commensurability of these components, and rich cue conditions. We acquired binaural room impulse
responses and panoramic stereoscopic images of six rooms in order to recreate these rooms virtually, and added recordings of both a
music and a speech performance by applying dynamic binaural synthesis and chroma-key compositing. By the use of a linearized extra-aural headset and a semi-panoramic stereoscopic projection system, we presented the scenes to test participants and asked them to rate
unimodal features such as loudness, highs, lows, clarity, reverberance, and envelopment as well as brightness, contrast, color intensity,
and hue. The statistical analyses indicate a straightforward processing of low-level features.
11:40
5aAAc10. Binaural auralization of proposed room modifications based on measured omnidirectional room impulse responses.
Christoph Pörschmann and Philipp Stade (Technische Hochschule Köln, Betzdorfer Str. 2, Cologne 50679, Germany, christoph.poerschmann@th-koeln.de)
The auralization of rooms with dynamic binaural synthesis using binaural room impulse responses (BRIRs) is an established
approach in virtual audio. The BRIRs can be obtained either from simulations or from measurements. Up to now, changed acoustical properties, such as those that occur when a room is altered in a renovation, could not easily be considered in a measurement-based approach. This paper
presents a new method to auralize modifications of existing rooms. The authors have already shown in a previous publication that such
an auralization can be done by appropriately shaping the reverberation tail of an impulse response. Furthermore, the authors have presented an approach to synthesize BRIRs based on one omnidirectional room impulse response (RIR). In this paper, both methods are
combined: A single measured omnidirectional RIR is enhanced and adapted to create a binaural representation of a modified room. A listening experiment has been performed to evaluate the procedure and to investigate differences between synthesized and measured
BRIRs. The advantages of this method are obvious: Planned room modifications can be made audible without complex measurements or
simulations; just one omnidirectional RIR is required to provide a binaural representation of the desired acoustic treatment.
Contributed Paper
12:00
5aAAc11. Azimuthal localization in 2.5D near-field-compensated higher
order ambisonics. Fiete Winter, Nara Hahn (Inst. of Communications Eng.,
Univ. of Rostock, Universität Rostock - Institut für Nachrichtentechnik, R.-Wagner-Str. 31 (Haus 8), Rostock 18119, Germany, fiete.winter@uni-rostock.de), Hagen Wierstorf (Audiovisual Technol. Group, Technische
Universität Ilmenau, Ilmenau, Germany), and Sascha Spors (Inst. of Communications Eng., Univ. of Rostock, Rostock, Germany)
Sound Field Synthesis approaches aim at the reconstruction of a desired
sound field in a defined target region using a distribution of loudspeakers.
Near-Field Compensated Higher Order Ambisonics (NFCHOA) is a prominent example of such techniques. In practical implementations different artifacts are introduced to the synthesized sound field: spatial aliasing is caused
by the non-zero distance between the loudspeakers. Modal bandwidth limitation is a well-established approach to reduce spatial aliasing in 2.5D
NFCHOA, but introduces temporal and spectral impairments to the reproduced sound field which strongly depend on the relative position to the center of modal expansion. Also, the dimensionality mismatch in a 2.5D
synthesis scenario results in a different amplitude decay compared to the
desired sound field. Listening experiments have already investigated azimuthal localization in 2.5D NFCHOA. It is, however, unclear to what extent
individual artifacts caused by spatial sampling, modal bandwidth limitation,
and the 2.5D dimensionality mismatch contribute to these localization
impairments. Within this contribution, a mathematical framework is used together with binaural synthesis to simulate the individual
effect of each artifact on the ear signals. Human performance is approximated by a binaural model of azimuthal localization.
THURSDAY MORNING, 29 JUNE 2017
ROOM 206, 9:15 A.M. TO 12:20 P.M.
Session 5aAAd
Architectural Acoustics: Recent Developments and Advances in Archeo-Acoustics and Historical
Soundscapes III
David Lubman, Cochair
DL Acoustics, 14301 Middletown Ln., Westminster, CA 92683-4514
Miriam A. Kolar, Cochair
Architectural Studies; Music, Amherst College, School for Advanced Research, 660 Garcia St., Santa Fe, NM 87505
Elena Bo, Cochair
DAD, Polytechnic Univ. of Turin, Bologna 40128, Italy
Chair’s Introduction—9:15
Invited Papers
9:20
5aAAd1. Music and sound of the 17th Century: Athanasius Kircher and his Phonosophia anacamptica. Lamberto Tronchin (DIN CIARM, Univ. of Bologna, Viale del Risorgimento, 2, Bologna I-40136, Italy, lamberto.tronchin@unibo.it) and David J. Knight (Univ.
of Southampton, Guelph, ON, Canada)
In the 17th Century, many physicists, mathematicians, and musicians dealt with the experiences of harmony, music, and sound propagation in enclosed interior spaces. Among them, Athanasius Kircher was one of the most influential researchers of his time. Born in
Geisa, Thuringia (Germany), he became a Jesuit in 1618 and spent a large part of his life in Rome, where he died in 1680. During his
lifetime, he wrote several books spanning a wide range of topics, including sound, music, and acoustics. One of these, the Phonurgia
Nova, published in 1673, was almost ignored for hundreds of years. Phonurgia Nova was translated from the original Latin. It consists
of two different books, the Phonosophia nova and the Phonosophia anacamptica. The former deals with the influence of music on
human beings whereas the latter analyses sound propagation in enclosed spaces. In this paper, the Authors present new achievements
regarding some of the apparatuses that Kircher invented. Among all his marvelous sound machines, the Authors will describe some of
Kircher’s items, including the tuba stentorophonica (the “loud trumpet”), the statua citofonica (the “talking statue”), the obiectum phonocampticum (the “phonocentric object”), the Ruota Cembalaria (the “sounding wheel”), the ancient Egyptian singing statue of Memnon, the Aeolian Harp, and the hydraulis (hydraulic organ). Some of these apparatuses were also recently partially realized by the Polish
Pavilion during the Biennale of Venice in 2012, achieving a Special Mention from the international jury.
9:40
5aAAd2. Seeking the sounds of ancient horns. D. Murray Campbell (Acoust. and Audio Group, Univ. of Edinburgh, James Clerk
Maxwell Bldg., Mayfield Rd., Edinburgh EH9 3JZ, United Kingdom, d.m.campbell@ed.ac.uk), Joël Gilbert (Laboratoire d'Acoustique de l'Université du Maine - CNRS, Le Mans, France), and Peter Holmes (Designer in Residence, Middlesex Univ., London,
United Kingdom)
Recent archaeological discoveries, most notably at the Gallo-Roman site at Tintignac in the Corrèze district of France, have thrown
fresh light on the nature of some of the lip-excited wind instruments used in Europe around two thousand years ago. In particular, it has
been possible to reconstruct working copies of the Celtic horn known as the carnyx, and to experiment on the reproductions both scientifically and musically. A number of Etruscan and classical Roman brasswind instruments, including the lituus and the cornu, have also
been reproduced and tested under the auspices of the European Music Archaeology Project. This paper reviews some of this work, and
discusses the usefulness of acoustical modeling and measurement in interpreting the possible musical functioning of these ancient horns.
10:00
5aAAd3. Reconstructing human music with hard evidence. Jelle Atema (Biology, Boston Univ., 5 Cummington Mall, Boston, MA
02215, atema@bu.edu)
Philosophers have long discussed the origins of music and language and used them as an argument to set humans apart from other
animals. For evidence we rely on archaeology, anthropology and even biology, and on historical depictions and descriptions. In the case
of music we have some hard evidence in the form of ancient bone flutes. Their physical reconstructions are important tools in the quest
for origins, but open to interpretation and controversy. As a biologist-flutist I have been skeptical of many resulting “models” that have
been proposed. Here I submit some of the main obstacles I encounter in reconstructing 4,000- to 50,000-year-old bone flutes in hopes of
hearing their music. How do we define what is a flute? Which physical reconstruction is most credible? Which sounds can it make? Is
there a recognizable scale and is that scale credibly constrained? Which sounds constitute music? Is “our” music related to “their”
music? While flutes have been preserved in the pre-historic record and their finger holes suggest a musical scale, the oldest evidence is
sufficiently weak to allow for many interpretations and vociferous debates. I will demonstrate these questions with a few reconstructions
and different types of flutes.
10:20–10:40 Break
10:40
5aAAd4. Archaeological auralization as fieldwork methodology: Examples from Andean archaeoacoustics. Miriam A. Kolar (Res.
Associate, Five Colleges, Inc., School for Adv. Res., 660 Garcia St., Santa Fe, NM 87505, mkolar@fivecolleges.edu)
Auralization, the computational rendering of sound for listeners, enables archaeoacoustical reconstructions. In archaeoacoustics
research, computational tools and analyses frequently enmesh with human performance. Broadening the definition of archaeological auralization to encompass the investigative process of specifying and enacting the re-sounding of archaeological spaces, objects, and events
positions auralization as a methodology for the sensory exploration of anthropological research questions. A foundational tool for
archaeoacoustical and archaeomusicological fieldwork, auralization allows contextualized testing and measurement of spatial and instrumental acoustics, along with their perceptual evaluation. Case-study examples from Andean archaeoacoustics research include auralizations of reconstructed architectural acoustics, and in-situ loudspeaker playback of recorded performances of 3,000-year-old conch shell
horns, delivered as auditory perceptual experiment stimuli within the extant ceremonial architecture at Chavín de Huántar, Peru. Performed plaza auralizations at the Inka administrative city Huánuco Pampa re-enacted sound transmission dynamics in that Pre-Columbian public space, enabling present-day listeners to evaluate verbal intelligibility, among other tests. As a fieldwork methodology,
archaeological auralization is both process and product: the specification and physical sounding of concepts and data, to be observed and
evaluated in relationship with archaeological materials and knowledge.
11:00
5aAAd5. The acoustics and sound environments of Early Delta Blues. Mark H. Howell (MS Dept. of Archives and History, Winterville Mounds, 2415 Hwy. 1 North, Greenville, MS 38703, mhrabinal@gmail.com)
As part of a larger investigation into the archaeological origins of the blues, I seek to re-create the sound environments that surrounded and informed the initial emanations of this important American music genre. This study is warranted because the music has had
such a pronounced influence on global musics, as well as on broader sociological concerns like race, regionalism, nationalism, capitalism, and gender. It is also of value for archaeo-acoustic studies in that it concerns a study of an archaeo-historic site that is not in the
deep past. For the ASA meeting-forum I will report on two related acoustic processes: one is the digital reconstruction of the physical
structures where the blues was first heard, such as wood-framed shotgun-shaped performance spaces (juke joints, general stores), some
of which still exist in the Mississippi Delta (Po Monkeys near Merigold and Mary’s in Indianola), or can be archaeologically recovered
(the general store at Dockery Farms); and the second is a recreation of the sonic environment of the late 19th- and early 20th-century
South of the lower Mississippi River valley, including natural and anthropogenic sounds. These steps precede a future goal, the
archaeologically reconstructed instrumental and vocal sounds of incipient and early blues, allowing for a truer picture of this musical
phenomenon than currently exists.
11:20
5aAAd6. Üneholisunn: Proposal for a new descriptive language of sound. Jeff Benjamin (Columbia Univ., P.O. 42, West Shokan,
NY 12494, jlb2289@columbia.edu)
The apprehension of historic sound relies upon a philosophical shift from representation to presence: we do not need electronic capture to preserve sonic forms. In this paper, I will argue that most of the sounds we hear are old sounds, glazed with the patina of novelty.
The dialog that persists between different fundamental conceptions of sound is instructive, but the assertion of sound as artifact, or sonifact (as a material thing that endures through time), provides a useful way of thinking about historic sound in particular. This points to the
somewhat pressing need for an adequate descriptive language of sound, a colloquial way of expressing the abundance of sonifacts all
around us. In this paper, and specifically pertaining to landscapes of industrial ruination, I intend to offer some suggestions for the possible development of such a language. After an initial description of this project, I will read a series of poems using these words to demonstrate a possible method to convey sonic information in the vernacular. Many of the root words for this agglutinative sonic language will
be drawn from some of the world’s disappearing languages.
11:40–12:20 Panel Discussion
THURSDAY MORNING, 29 JUNE 2017
ROOM 313, 7:55 A.M. TO 12:20 P.M.
Session 5aABa
Animal Bioacoustics: Ecosystem Acoustics I
Susan Parks, Cochair
Biology, Syracuse University, Biology, 107 College Place, RM 114, Syracuse, NY 13244
Jennifer L. Miksis-Olds, Cochair
Center for Coastal and Ocean Mapping, Univ. of New Hampshire, 24 Colovos Rd., Durham, NH 03824
Denise Risch, Cochair
Ecology, Scottish Association for Marine Science (SAMS), SAMS, Oban PA371QA, United Kingdom
Chair’s Introduction—7:55
Invited Papers
8:00
5aABa1. Overview: Ecoacoustics for monitoring freshwater and marine biodiversity. Denise Risch (Ecology, Scottish Assoc. for
Marine Sci. (SAMS), SAMS, Oban, Argyll PA371QA, United Kingdom, denise.risch@sams.ac.uk) and Susan Parks (Biology, Syracuse
Univ., Syracuse, NY)
Global marine and freshwater ecosystems are experiencing an unprecedented loss and re-distribution of biodiversity due to the far-reaching effects of human activities, including accelerated climate change and over-exploitation. Such changes in aquatic diversity patterns will lead to shifting baselines with respect to species richness and distribution, which urgently need to be monitored. Because of its
applicability to surveying remote areas over extended timescales, ecoacoustics plays a vital part in monitoring such large-scale changes.
Aquatic ecoacoustics is a field that is expanding rapidly alongside emerging underwater technologies, including gliders and real-time
passive acoustic buoys, as well as analytical approaches for assessing ecosystem health. These tools can also be used to monitor changes
in abiotic environmental factors, including precipitation and wind events, as well as contributions of anthropogenic noise to the overall
soundscape. Concerns about the increasing impact of long-range and ubiquitous noise sources in particular, such as global shipping traffic
and seismic surveys, necessitate approaches to monitor their relative influence on aquatic soundscapes. This review will examine the use
of ecoacoustic approaches to monitor freshwater and marine environments, identify gaps in knowledge, and provide recommendations
for future applications of ecoacoustic tools to aid in the conservation of freshwater and marine biodiversity.
8:20
5aABa2. Implantation of marine ecoacoustic indices. Craig A. Radford (Inst. of Marine Sci., Univ. of Auckland, PO Box 349, Warkworth 0941, New Zealand, c.radford@auckland.ac.nz)
Diversity measurement techniques can present logistical and financial obstacles to conservation efforts. Ecoacoustics has recently
emerged as a promising solution to these issues, providing a mechanism for measuring diversity using acoustic indices, which have proven beneficial in terrestrial habitats. During summer in temperate northeastern New Zealand, acoustic and traditional biodiversity
surveys were conducted. Three ecoacoustic indices originally developed for terrestrial use were then compared with three species-assemblage diversity measures using Pearson correlations. The Acoustic Complexity Index (ACI) was significantly correlated with
Pielou's evenness (J') and Shannon's index (H'). Wind did not affect any of the acoustic indices. As anthropogenic noise was included
in these investigations, both ACI and H' were considered robust to its presence. However, all these relationships break down using
recordings taken in winter, even though the traditional diversity measures remain consistent. The development of marine acoustic indices is only in its early stages, and there are significant questions around trying to transplant what has been achieved in terrestrial ecosystems
rather than developing specific indices for the marine environment.
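The diversity measures named in the abstract above have standard definitions; the following sketch shows Shannon's H', Pielou's J', and a plain Pearson correlation that could relate an acoustic index to a diversity measure. The function names and toy counts are illustrative, not data from the study.

```python
import math

def shannon_h(counts):
    """Shannon diversity H' = -sum(p_i * ln p_i) from abundance counts."""
    total = sum(counts)
    return -sum((c / total) * math.log(c / total) for c in counts if c > 0)

def pielou_j(counts):
    """Pielou's evenness J' = H' / ln(S), S = number of species present."""
    s = sum(1 for c in counts if c > 0)
    if s <= 1:
        return 0.0
    return shannon_h(counts) / math.log(s)

def pearson_r(x, y):
    """Pearson correlation between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / math.sqrt(vx * vy)
```

A perfectly even community (equal counts for all species) gives J' = 1, the upper bound against which field surveys are judged.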
8:40
5aABa3. Do bioacoustic conditions reflect species diversity? A case study from four tropical marine habitats. Erica Staaterman
(Smithsonian Inst., 647 Contees Wharf Rd., Edgewater, MD 21037, staatermane@si.edu)
New tools, such as passive acoustic monitoring, can be helpful for measuring levels of biodiversity in habitats that are otherwise difficult to sample. Here, we tested the utility of acoustic measurements in shallow coastal waters by conducting simultaneous bioacoustic
and biodiversity surveys in four habitat types in Panama: mangrove, reef, seagrass, and sand. We found that acoustic measurements in
the “low band” (<1000 Hz) were positively correlated with cryptic fish species richness. However, our 24-h acoustic recordings revealed
a clear toadfish chorus at dusk, which masked other fish sounds and confounded results from newer acoustic indices such as acoustic
entropy and acoustic complexity. Band level in the "high band" (3,000-10,000 Hz) did not differ across habitat types and was not significantly correlated with biodiversity measurements. Our study demonstrates that bioacoustic surveys can help scientists identify certain
cryptic, soniferous species and should be used in tandem with traditional biodiversity surveys. Additional research is needed in the marine environment to validate the utility of the newer acoustic indices.
9:00
5aABa4. Soundscape analyses for ecosystem conservation. Amandine Gasc (Institut de Systématique, Evolution, Biodiversité, Muséum national d'Histoire naturelle, 45 rue Buffon, Paris 75005, France, amandine.gasc@gmail.com)
Large-scale spatial and temporal analyses help us to appreciate, and subsequently act to avoid or reduce, ecosystem disturbances. For
example, remote sensing imagery aids in understanding large-scale dynamics of ecosystems; however, capturing the animal community component of the ecosystem is challenging. Soundscape analyses provide new insight into the animal community and ecosystem ecological levels with a detailed temporal resolution, and are considered here as a complementary approach. The focus of this
presentation is therefore to: (1) summarize recent research advances in the estimation of the composition and dynamics of animal acoustic communities and (2) present recent research utilizing acoustic communities in disturbance detection and its potential for ecosystem restoration.
Initial results from the scientific community provide robust elements to support the use of soundscape measurements to evaluate disturbance impacts on animal communities and natural ecosystems. Additionally, interest from natural-area managers in the application of soundscape techniques is largely confirmed. However, additional research is still necessary to develop robust and calibrated methods and tools
for concrete biological conservation action. Three objectives are highlighted as future directions of this research: (1) develop soundscape
metrics and improve their interpretation, (2) improve the understanding of soundscape drivers, and (3) develop soundscape-based disturbance indicators.
9:20
5aABa5. How often is human auditory detection in natural environments limited by our absolute hearing thresholds? Kurt M.
Fristrup (Natural Sounds and Night Skies Div., National Park Service, 1201 Oakridge Dr., Ste. 100, Fort Collins, CO 80525, kurt_
fristrup@nps.gov) and Damon Joyce (Natural Sounds and Night Skies Div., National Park Service, Fort Collins, CO)
The National Park Service (NPS) has collected long-term sound level measurements from more than 800 sites, using equipment that
measures 1-second, 1/3-octave-band levels. One-third octave bands approximate the critical bands of the human auditory system, and
the initial motivation for collecting these data was to predict the levels at which incoming aircraft noise would be audible. Here, we will
compare the nominal human threshold of hearing—expressed in 1/3rd octave bands—with an appropriate summary of the background or
residual sound level of each environment. Several procedures have been recommended for estimating the residual sound level, or the
sound level that remains after energy from nearby and transient sound sources is removed. The merits of alternative procedures for estimating residual sound levels will be assessed. The NPS data show that human hearing has evolved to take advantage of the quietest conditions that occur in all but the very quietest environments, across a substantial portion of the audible spectrum. Species whose
hearing is up to 10 dB less sensitive than humans' will be operating in a masked hearing regime in the majority of locations and
hours.
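The band-by-band comparison described above, between a site's residual 1/3-octave levels and the nominal threshold of hearing, can be sketched as follows. The threshold values below are rough illustrative numbers for a handful of band centers, not the ISO reference data or the NPS summaries.

```python
# Illustrative 1/3-octave band centers (Hz) mapped to approximate
# human hearing thresholds (dB SPL). These are rough placeholder
# values for demonstration only.
NOMINAL_THRESHOLD_DB = {
    125: 22.0, 250: 11.0, 500: 4.0, 1000: 2.0,
    2000: -1.0, 4000: -4.0, 8000: 12.0,
}

def ear_limited_bands(residual_db):
    """Return band centers where the residual (background) level is at
    or below the nominal hearing threshold, i.e., bands in which
    detection is limited by the ear rather than by the environment."""
    return [f for f, thr in NOMINAL_THRESHOLD_DB.items()
            if residual_db.get(f, float("inf")) <= thr]
```

At a very quiet site every band can be ear-limited; at a noisy site none are, which is the distinction the NPS comparison draws across its 800+ monitoring locations.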
9:40
5aABa6. Large scale passive acoustic recording efforts improve our understanding of long term changes in marine mammal ecology and distribution. Sofie M. Van Parijs, Danielle Cholewiak, Genevieve Davis (NOAA Fisheries, 166 Water St., Woods Hole, MA
02543, sofie.vanparijs@noaa.gov), and Mark F. Baumgartner (Biology, Woods Hole Oceanographic Inst., Woods Hole, MA)
Collaborative efforts across a large number of scientists working throughout the Western Atlantic Ocean have led to the sharing of passive
acoustic data spanning over a decade. These data cast a 24/7 lens, providing a long-term temporal understanding of species presence. This collaborative approach has allowed for unprecedented research focusing on long-term patterns and changes in the distribution and movements of baleen whales and odontocetes. Results from these data show how baleen whale migration paths can be
defined and how changes in these paths can be detected over time. Additionally, they show that whales are present at times of the year and in
regions that were not previously documented, particularly during winter months. Beaked whale composition within and between each
shelf-break canyon where recordings are available varies considerably, demonstrating latitudinal as well as regional gradients in species
presence. The addition of ambient noise curves to this mix provides context and allows for the evaluation of anthropogenic noise. This
long-term, big-picture view of species presence and movements improves our capacity to infer whether observed changes are a result of
ecological, climatological, or anthropogenic factors.
10:00
5aABa7. Soundscape planning: An acoustic niche for anthropogenic sound in the ocean? Ilse Van Opzeeland and Olaf Boebel
(Ocean Acoust. Lab, Alfred-Wegener Inst. for Polar and Marine Res., Am Alten Hafen 26, Bremerhaven 27568, Germany, ilse.van.
opzeeland@awi.de)
In analogy to landscape planning, the concept of soundscape planning aims to reconcile potentially competing uses of acoustic space
by managing anthropogenic sound sources. We present here a conceptual framework to explore the potential of soundscape planning
in reducing (mutual) acoustic interference between hydroacoustic instrumentation and marine mammals. The basis of this framework is
formed by the various mechanisms through which acoustic niche formation occurs in species-rich communities that coexist acoustically while
maintaining hi-fi soundscapes, i.e., by partitioning the acoustic environment on the basis of time, space, frequency, and/or signal
form. Hydroacoustic measurements often exhibit a certain flexibility in timing, signal characteristics, and even instrument positioning,
potentially offering the opportunity to minimize their underwater acoustic imprint. We evaluate how the principle of acoustic niches (i.e.,
the partitioning of the acoustic space) could contribute to reduce potential (mutual) acoustic interference based on actual acoustic data
from various recording locations in polar oceans.
10:20–10:40 Break
10:40
5aABa8. Spatio-temporal distribution of beaked whales on Canada’s East Coast. Bruce Martin (Oceanogr., Dalhousie Univ., 32
Troop Ave., Ste. 202, Dartmouth, NS B3B 1Z1, Canada, bruce.martin@jasco.com), Julien Delarue, Katie Kowarski (JASCO Appl. Sci.,
Dartmouth, NS, Canada), Hilary Moors-Murphy (Bedford Inst. of Oceanogr., Dartmouth, NS, Canada), and Joanna Mills Flemming
(Mathematics and Statistics, Dalhousie Univ., Halifax, NS, Canada)
Beaked whales represent some of the least understood marine mammals worldwide with the movements and distribution of many
species largely unknown. Around eastern Canada current knowledge is limited to the eastern Scotian shelf and northern bottlenose
whales. The acoustic signals of beaked whale species are recognizable and sufficiently distinct to be candidates for passive acoustic
monitoring. Thirteen deep-water recorders located along the shelf break off Eastern Canada from Bonnécamps Canyon (42.5 N) to
Southern Labrador (55.3 N) collected acoustic data near-continuously from Aug 2015 to July 2016. A minimum of one of every 20
minutes was recorded at 250,000 samples per second to monitor for the presence of echolocation clicks of odontocetes, including beaked
whales. An automated detector, validated by manual analysts, identified the presence of the endangered northern bottlenose whale (Hyperoodon ampullatus), Cuvier's beaked whale (Ziphius cavirostris), and Sowerby's beaked whale (Mesoplodon bidens), a species of special concern. The presence
data were analyzed to determine the occurrence and residency durations of beaked whales throughout the geographic range studied. We
then studied the influence of currents, sea ice, surface temperatures, chlorophyll, distance to the 1000 m isobath, background noise and
anthropogenic noise on the whales’ acoustic occurrence. Acoustic studies such as this allow us to gain further insight into the occurrence
of these notoriously difficult-to-study species.
11:00
5aABa9. Seasonal acoustic ecology of beluga and bowhead whale core-use areas in the Pacific Arctic. Kathleen Stafford (Appl.
Phys. Lab., Univ. of Washington, 1013 NE 40th St., Seattle, WA 98105, stafford@apl.washington.edu), Manuel Castellote (JISAO,
Univ. of Washington, Seattle, WA), Melania Guerra (Appl. Phys. Lab., Univ. of Washington, Seattle, WA), and Catherine L. Berchok
(Marine Mammal Lab, NOAA, Seattle, WA)
The acoustic ecology of Arctic marine mammals is driven by anthropogenic, biotic, and abiotic factors, each of which may influence
the behavioral ecology of these species. The acoustic environment of bowhead (Balaena mysticetus) and beluga (Delphinapterus leucas)
whales in three core-use regions of the Pacific Arctic was examined during the months in which both species occur in these regions. The
Anadyr Strait region in winter was dominated by the signals of bowhead whales, walrus and bearded seals. In Bering Strait in late fall
and winter, wind noise predominated in November but once the region was ice-covered, bowhead and walrus were the main sources of
noise. Barrow Canyon in late summer and fall was the only region in which anthropogenic sources overlapped with both whale species.
Overall, ambient noise levels were low in the Pacific Arctic when compared to other ocean basins in which anthropogenic noise dominates low frequencies. However, climate change-driven increases in open water are leading to rising noise levels from increased human
use of the Arctic, increased storminess, and increased presence of vocal subarctic whales. These “new” sources of sound may be altering
the underwater soundscape and possibly influencing the acoustic ecology of Pacific Arctic cetaceans.
11:20
5aABa10. Variability in coral reef soundscapes, spatiotemporal differences, biophysical and behavioral drivers, and associations
with local biota. T. Aran Mooney, Ashlee Lillis, Maxwell B. Kaplan (Biology Dept., Woods Hole Oceanographic Institution, 266
Woods Hole Rd., Woods Hole, MA 02543, amooney@whoi.edu), Justin Suca (Biology Dept., Woods Hole Oceanographic Institution,
Falmouth, MA), and Marc Lammers (HIMB, Univ. of Hawaii, Kaneohe, HI)
Coral reefs harbor some of the highest biodiversity on the planet. Their rich ecoacoustic soundscape may provide a way to track both
animal activities and community level structure. To do so, it is critical to identify how reef soundscapes are influenced by biotic and abiotic parameters, and establish how soundscapes change over time and across habitats. Here we present results from 18 coral reefs in the
U.S. Virgin Islands and Maui, Hawaii, with the overall goals to quantify soundscape variability across multiple spatial and temporal
scales (days to years), test how soundscape parameters relate to local biological communities, and address how biophysical parameters
(light, temperature, and rugosity) influence these eco-soundscapes. Acoustic measurements were made in tandem with benthic and fish
visual surveys. Analyses were carried out using high- and low-frequency bands corresponding to the primary soniferous taxa on reefs,
snapping shrimp and fish. Overall, these results indicate that certain acoustic metrics can be linked to visual survey results. Snapping
shrimp exhibit complex spatiotemporal patterns, with strong diel rhythms shifting over time and varying substantially over short spatial
scales. Furthermore, long-term recordings are necessary to provide a robust baseline measurement of acoustic variability and better
quantify changes in coral reef ecosystems.
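The band-splitting step described above — analyzing a low-frequency fish band separately from a high-frequency snapping-shrimp band — can be sketched as below. The band edges used here (50–1200 Hz for fish, 2–20 kHz for shrimp) are illustrative assumptions, not values taken from the abstract.

```python
import numpy as np

def band_level_db(x, fs, f_lo, f_hi):
    """RMS level (dB, arbitrary reference) of signal x restricted to
    the band [f_lo, f_hi) Hz, computed from the FFT power spectrum."""
    spec = np.abs(np.fft.rfft(x)) ** 2 / len(x)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    band = (freqs >= f_lo) & (freqs < f_hi)
    return 10.0 * np.log10(spec[band].sum() + 1e-20)

# Synthetic one-second clip: a 300 Hz "fish-band" tone plus weak noise.
fs = 48_000
t = np.arange(fs) / fs
rng = np.random.default_rng(0)
x = np.sin(2 * np.pi * 300 * t) + 0.01 * rng.standard_normal(fs)

low = band_level_db(x, fs, 50, 1200)       # assumed fish band
high = band_level_db(x, fs, 2000, 20000)   # assumed snapping-shrimp band
print(low > high)
```

Tracking these two band levels over long deployments is one simple way to separate fish chorusing from shrimp snapping in the same recordings.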
3939
J. Acoust. Soc. Am., Vol. 141, No. 5, Pt. 2, May 2017
Acoustics ’17 Boston
3939
Contributed Papers
11:40
5aABa11. Rapid extraction of ecologically meaningful information from large-scale acoustic recordings. Megan F. McKenna (Natural Sounds and Night Skies Div., National Park Service, 1201 Oakridge Dr., Ste. 100, Fort Collins, CO, megan_f_mckenna@nps.gov), Rachel Buxton (Fish, Wildlife, and Conservation Biology Dept., Colorado State Univ., Fort Collins, CO), Mary Clapp (Evolution and Ecology Dept., Univ. of California, Davis, Davis, CA), and Erik Meyer (Sequoia & Kings Canyon National Parks, Three Rivers, CA)
Acoustic recordings have the potential to address a suite of important conservation questions, from assessing phenology shifts due to climate change, to examining the impact of anthropogenic noise on wildlife, to monitoring biodiversity at enormous spatio-temporal scales. However, consistent methods are required to extract meaningful information from these large datasets. Here we apply a method of calibrating recordings to standardize acoustic data collected at over 50 unique sites in a diversity of habitats across the continental U.S. using a variety of recording units and parameters. The calibration method results in a coarser data resolution, decreasing storage space and computation time of further analysis. We then apply recently developed acoustic indices to evaluate biodiversity in our recordings. A review of existing acoustic indices and their degree of correlation with bioacoustic activity, species richness, functional diversity, landscape attributes, and anthropogenic influence guided our decisions about which indices to implement. The resulting indices were compared with the diversity of birds from observer point counts, with animal vocalizations observed in the recording spectrograms, and with anthropogenic sounds observed in the recordings. The results provide important insight on the utility of each index, or group of indices, for investigating the dynamics of ecological communities across large scales.
12:00
5aABa12. Leveraging big data: How acoustic archives facilitate ecosystem research. Carrie C. Wall, Charles Anderson (Cooperative Inst. for Res. in Environ. Sci., Univ. of Colorado at Boulder, 216 UCB, Boulder, CO 80309, carrie.bell@colorado.edu), J. Michael Jech, Sofie M. Van Parijs (Northeast Fisheries Sci. Ctr., NMFS, Woods Hole, MA), Leila Hatch (Stellwagen Bank National Marine Sanctuary, NOS, Scituate, MA), and Jason Gedamke (Sci. & Technol., NMFS, Silver Spring, MD)
The National Oceanic and Atmospheric Administration’s (NOAA) National Centers for Environmental Information (NCEI) has developed archives for the long-term stewardship of active and passive acoustic data. Water column sonar data have been collected for fisheries and habitat characterization over large spatial and temporal scales around the world, and archived at NCEI since 2013. Protocols for archiving passive acoustic data are currently being established in support of the NOAA Ocean Noise Reference Station Network project and the monitoring of marine mammals and fish. Archives maintain data, but access to these data is a core mission of NCEI that allows users to discover, query, and analyze the data in new and innovative ways. Visualization products continue to be developed and integrated into the data access portal so that researchers of varying backgrounds can easily understand the quality and content of these complex data. Spatially and temporally contemporary oceanographic and bathymetric data are also linked to provide an ecosystem-wide understanding of the region. Providing access to and facilitating the utility of these data for ecoacoustics research are ongoing efforts at NCEI, and would benefit from input from the acoustics community.
THURSDAY MORNING, 29 JUNE 2017
BALLROOM A, 8:00 A.M. TO 12:20 P.M.
Session 5aABb
Animal Bioacoustics: Topics in Animal Bioacoustics (Poster Session)
Aaron Thode, Chair
SIO, UCSD, 9500 Gilman Dr., MC 0238, La Jolla, CA 92093-0238
All posters will be on display from 8:00 a.m. to 12:20 p.m. To allow contributors in this session to see the other posters, authors of
odd-numbered papers will be at their posters from 8:00 a.m. to 10:10 a.m. and authors of even-numbered papers will be at their posters
from 10:10 a.m. to 12:20 p.m.
Contributed Papers
5aABb1. Listening for right whales off Brazil: Present knowledge and
future research. Julia R. Dombroski, Susan Parks (Dept. of Biology, Syracuse Univ., 107 College Pl., Syracuse, NY 13210, jribeiro@syr.edu), Karina
R. Groch (Projeto Baleia Franca, Imbituba, Brazil), Paulo A. Flores (APA
Anhatomirim, Instituto Chico Mendes para Conservação da Biodiversidade,
Florianopolis, Brazil), and Renata S. Sousa-Lima (Departamento de Fisiologia, Universidade Federal do Rio Grande do Norte, Natal, Brazil)
In the Southwest Atlantic, a key southern right whale wintering ground
is found off southern Brazil. Aiming to collect reference information on the
acoustic ecology of right whale mother-calf pairs in the region, we used two
complementary passive acoustic monitoring methods. Recordings from autonomous archival devices were used to obtain the description of pairs’
vocal repertoire: call classes were established, temporal and frequency parameters of calls were measured and reported, and the existence of a diel
pattern of vocal activity was investigated. Calling rate and contextual call
usage were obtained through synchronized behavioral observations and
acoustic recordings made using a dipping two-unit linear array. Current
knowledge about the species’ vocal behavior off Brazil supports increased
use of PAM methods as a research tool. Therefore, our plans for
future research include the use of multistory tags, sound propagation and
playback experiments, and long-term deployment of autonomous devices
throughout the whales’ concentration area. Future results will contribute to
enhancing the knowledge on the species communication system and will aid
habitat management decisions.
The geography of biological sound is a largely unexplored topic that
exists at the boundary of ecoacoustics and biogeography. Identification and
characterization of patterns of biological sound and soundscape variation
across the planet may provide insight into the potential for acoustics as a
tool for biodiversity monitoring and conservation. Ecoacoustic indices are
audio signal measurements that can be useful for revealing, characterizing,
and comparing such soundscape patterns. Some of these indices, such as the
Acoustic Complexity Index (ACI), have been shown to correlate with biodiversity and avian vocalization activity in some systems. The goal of this
study was to investigate broad-scale trends in the ACI by testing whether
the acoustic complexity of dawn chorus soundscapes follows the known latitudinal gradient in avian biodiversity. The ACI was calculated for dawn
chorus recordings from 187 sites worldwide, spanning from −45.4° to 68.1°
latitude. Acoustic complexity was expected to be highest near the equator
and decrease with absolute latitude, tracking the general avian diversity gradient, because higher avian diversity should result in more elaborate dawn
chorus soundscapes. However, the ACI did not track the general avian latitudinal diversity gradient. The results and potential explanations for this
trend will be discussed.
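For readers unfamiliar with the index, a minimal sketch of the Acoustic Complexity Index in the Pieretti et al. (2011) formulation follows; the exact implementation used in the study (windowing, number of temporal steps) may differ, and the spectrograms here are synthetic.

```python
import numpy as np

def acoustic_complexity_index(spectrogram, n_steps=10):
    """Basic ACI from a magnitude spectrogram (freq_bins x time_frames):
    within each temporal step, sum the absolute differences between
    adjacent frames in each frequency bin, normalized by the total
    intensity in that bin, then accumulate over steps."""
    freq_bins, n_frames = spectrogram.shape
    step = n_frames // n_steps
    aci_total = 0.0
    for j in range(n_steps):
        block = spectrogram[:, j * step:(j + 1) * step]
        d = np.abs(np.diff(block, axis=1)).sum(axis=1)  # per-bin variation
        intensity = block.sum(axis=1)
        aci_total += (d / np.maximum(intensity, 1e-12)).sum()
    return aci_total

# Toy check: a modulated soundscape yields higher ACI than a constant one.
rng = np.random.default_rng(0)
flat = np.ones((64, 1000))
varying = 1.0 + rng.random((64, 1000))
print(acoustic_complexity_index(flat) < acoustic_complexity_index(varying))  # True
```

Because the numerator measures frame-to-frame intensity change, steady broadband noise contributes little to the ACI while pulsed biological sound raises it — the property that motivates its use as a biodiversity proxy.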
5aABb3. Temporal and spatial distribution patterns of toothed whales
in Massachusetts Bay using passive acoustics from ocean gliders.
Tammy Silva (Biology, Univ. of Massachusetts Dartmouth, 285 Old Westport Rd., Dartmouth, MA 02747, tsilva4@umassd.edu), T. Aran Mooney,
Laela Sayigh, and Mark F. Baumgartner (Biology, Woods Hole Oceanographic Inst., Woods Hole, MA)
Basic information on marine mammal habitat use is necessary for
informing ecosystem-based management plans and for mitigating human
impacts. Massachusetts Bay is an important marine mammal foraging area
in the Gulf of Maine and a region of high human activity, but little is known
about toothed whale habitat use, particularly during winter months. Passive
acoustic monitoring, particularly from autonomous platforms, provides
advantages in studying habitat use in unfavorable conditions. The goal of
this work is to use acoustic monitoring to investigate temporal and spatial
occurrence patterns of toothed whale species in Massachusetts Bay during
late fall/early winter using ocean gliders equipped with passive acoustic
recorders. Slocum gliders were deployed in western Massachusetts Bay in
2014, 2015, and 2016. Toothed whales were detected on 92 (72%) of 128
deployment days. Detections occurred more often at night. Ongoing work
includes acoustic identification of species and assessing relationships
between detections and environmental conditions. These data provide the
first evidence of a consistent presence of toothed whales in Massachusetts
Bay during late fall and winter, and demonstrate the potential for ocean
gliders as a tool to detect toothed whales in areas/times that are difficult to
survey with traditional visual methods.
5aABb4. Automated classification of Pacific white-sided dolphin (Lagenorhynchus obliquidens) pulsed calls for diel pattern assessment. Kristen
Kanes, Stan E. Dosso (Univ. of Victoria, 3800 Finnerty Rd., Victoria, BC
V8P 5C2, Canada, kristenk@uvic.ca), Tania I. Lado (Ocean Networks Canada, Victoria, BC, Canada), and Xavier Mouy (Jasco Appl. Sci., Victoria,
BC, Canada)
Diel patterns in marine mammal activity can be difficult to assess visually: ship time is costly, and limitations of daylight and weather during expeditions can bias sighting-based analyses. Acoustic data can
be a cost-effective approach to evaluate activity patterns in marine mammals. However, manual analysis of acoustic data is time consuming, and
impractical for large data sets. This study seeks to evaluate diel patterns in
Pacific white-sided dolphin communication through automated analysis of
one year of continuous acoustic data collected from the Barkley Canyon
node of Ocean Networks Canada’s NEPTUNE observatory, offshore of
Vancouver Island, British Columbia, Canada. In this study, marine mammal
acoustic signals are manually annotated in a sub-set of the data, and used to
train a random forest classifier targeting Pacific white-sided dolphin pulsed
calls. Binary and multiclass classifiers are compared, and the effects of different data-balancing methods are evaluated. The results from automated
classification of the full data set are used to determine whether a diel pattern
in Pacific white-sided dolphin communication exists in this region.
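A minimal sketch of the kind of random-forest classification workflow described above, using scikit-learn on synthetic feature vectors; the features, class imbalance, and the `class_weight`-based balancing shown here are illustrative assumptions, not the study's actual pipeline (which compares several data-balancing methods).

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

# Synthetic stand-in for spectral features of annotated clips:
# class 1 = pulsed call, class 0 = other sound (imbalanced, as is
# typical of marine acoustic datasets).
rng = np.random.default_rng(42)
X = np.vstack([rng.normal(0.0, 1.0, (900, 12)),
               rng.normal(1.5, 1.0, (100, 12))])
y = np.array([0] * 900 + [1] * 100)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# class_weight="balanced" reweights classes inversely to their frequency,
# one simple alternative to resampling for imbalanced training data.
clf = RandomForestClassifier(n_estimators=200, class_weight="balanced",
                             random_state=0).fit(X_tr, y_tr)
print(round(f1_score(y_te, clf.predict(X_te)), 2))
```

Once trained, such a classifier can be run over a full year of recordings, and detections binned by hour of day to test for a diel pattern.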
5aABb5. Analysis of fin whale vocalizations south of Rhode Island. Jennifer Giard, James H. Miller, Gopu R. Potty (Ocean Eng. Dept., Univ. of
Rhode Island, 215 S Ferry Rd., Narragansett, RI 02882, jennifer_giard@my.
uri.edu), Arthur Newhall, Ying-Tsong Lin, and Mark F. Baumgartner
(Woods Hole Oceanographic Inst., Woods Hole, MA)
Fin whale vocalizations were recorded south of Rhode Island during late
summer through early fall of 2015 using a number of underwater recording
systems. These systems were deployed to monitor broadband noise, including pile driving, from construction of the Block Island Wind Farm. Two vertical hydrophone array moorings were deployed in approximately 40 m of
water each with four hydrophones. Additionally, a tetrahedral array was
deployed in about 30 m of water just above the seabed. The tetrahedral array
consisted of four hydrophones spaced 0.5 m apart. The spacing between
each of these recording systems was approximately 7.5 km. The 20-Hz fin whale
vocalizations were recorded numerous times on all of the sensors both during and after construction was completed. An analysis and localization
effort of these signals was performed to estimate the source level, directionality and the track of the whale over the period of the vocalizations. The
results of this analysis will be discussed. [Work supported by the BOEM.]
5aABb6. LAMLA 2016: Results from the workshop listening for
aquatic mammals in Latin America. Renata S. Sousa-Lima (Physiol. and
Behavior, UFRN, Lab. of BioAcoust., Centro de Biociencias, Campus Universitario, Caixa Postal 1511, Natal, Rio Grande do Norte 59078-970, Brazil, sousalima.renata@gmail.com), Susannah Buchan (Universidad de
Concepción, Santiago, Chile), Artur Andriolo (UFJF, Juiz de Fora, Minas
Gerais, Brazil), and Julia R. Dombroski (Syracuse Univ., Syracuse, NY)
The field of bioacoustics and the applications of passive acoustic monitoring (PAM) methods to investigate aquatic mammals have grown worldwide. In Latin America, a number of researchers are using PAM to
investigate different species and habitats. However, due to the lack of a
proper venue to discuss research findings and priorities, collaboration within
the region is scarce. Considering the clear demand for an opportunity for
networking and exchange of information at a regional level, we proposed a
series of workshops entitled LAMLA—Listening for Aquatic Mammals in
Latin America. The aim of LAMLA is to bring together researchers, professionals, and graduate students working in bioacoustics to communicate their
research, network and interact, and discuss directions for a coordinated regional bioacoustics network in order to better utilize research resources. The
first edition of LAMLA was held in Natal, Brazil, in June, 2016 and the second edition was held in Valparaiso, Chile, during the XI SOLAMAC Reunion in November 2016. Outcomes and results of these two meetings will be
presented as well as our goals and expectations for the 2018 meeting.
[LAMLA was supported by PAEP CAPES, Office of Naval Research
Global, Cetacean Society International, FAPERN, UFRN, UFJF, UNAM,
University of Saint Andrews, Universidad de Concepcion, SOLAMAC, and
the Acoustical Society of America.]
5aABb7. Concurrent passive and active acoustic observations of high-latitude shallow foraging sperm whales (Physeter macrocephalus) and
mesopelagic prey layer. Geir Pedersen (Sci. and Technol., Christian
Michelsen Res. AS, P.O. Box 6031, Bergen 5892, Norway, geir.pedersen@
cmr.no), Espen Storheim (Nansen Environ. and Remote Sensing Ctr., Bergen, Norway), Lise D. Sivle, Olav Rune Godø (Marine Ecosystem Acoust.,
Inst. of Marine Res., Bergen, Norway), and Lars Alf Ødegaard (Norwegian
Defence Res. Establishment (FFI), Bergen, Norway)
Using echosounder and hydrophone data from the Lofoten-Vesterålen
Cabled Ocean Observatory (LoVe, N 68° 54.474’, E 15° 23.145’, 258 m
depth) collected in 2015, we are able to concurrently quantify sperm whale
(Physeter macrocephalus) shallow foraging behavior and the behavior of
the mesopelagic prey layer. Click rate and type were detected by the passive
5aABb2. Variation in dawn chorus acoustic complexity across a global
latitudinal gradient. Colin R. Swider, Susan Parks (Biology, Syracuse
Univ., 114 Life Sci. Complex, 107 College Pl., Syracuse, NY 13244,
cswider@syr.edu), and Mark V. Lomolino (Environ. and Forest Biology,
SUNY College of Environ. Sci. and Forestry, Syracuse, NY)
acoustics while active acoustics monitored the distribution and vertical and
horizontal movement of the prey organisms in the water column. In one
instance a diving sperm whale was also detected by the active acoustics
allowing TS measurements and estimation of diving speed and angle. Additional data such as ocean current and proximity of vessels, in addition to
vessel noise measurements, further allowed us to examine potential links
between oceanographic conditions and noise on sperm whale behavior and
foraging and the presence of prey and whales. The results demonstrate the
additional information obtained by combining data from active and passive
acoustic sensors. The first part of the LoVe cross-disciplinary ocean observatory was established in 2013, and the extension is planned for 2017/2018
covering the Norwegian shelf to approximately 2500 m depth. This will further expand the observatory’s capabilities for underwater acoustic monitoring and targeted scientific studies.
5aABb8. Mining noise affects Rufous-Collared Sparrow (Zonotrichia
capensis) vocalizations. Yasmin Viana (Laboratório de Bioacústica, Museu
de Ciências Naturais, Pontifı́cia Universidade Católica de Minas Gerais,
Belo Horizonte, MG, Brazil), Robert J. Young (School of Environment and
Life Sci., Univ. of Salford Manchester, Salford, United Kingdom), Renata
S. Sousa-Lima (Physiol. and Behavior, UFRN, Lab. of BioAcoust., Centro
de Biociencias, Campus Universitario, Caixa Postal 1511, Natal, Rio
Grande do Norte 59078-970, Brazil, sousalima.renata@gmail.com), and
Marina H. Duarte (Laboratório de Bioacústica, Museu de Ciências Naturais,
Pontifı́cia Universidade Católica de Minas Gerais, Belo Horizonte,
Brazil)
Mining activity generates noise through explosions, traffic, machinery,
alert signals, etc. Noise affects the behavior of many species that depend
on acoustic communication. Our objective was to verify whether noise produced by truck traffic affects rufous-collared sparrow vocalizations. Data
were collected in an Atlantic forest fragment located close to a mine at the
Peti Environmental Station, in Southeast Brazil. Two digital field recorders
(SM2—Wildlife Acoustics) were installed 150 m from each other and 25 m
from a mining road. The SM2 units were set to record at 44.1 kHz, from 05:00 to
09:00 during seven days in October 2012. Using Raven Pro 1.4, maximum and minimum frequencies, number of notes, and duration of the Z.
capensis songs were extracted from the recordings one minute before, one
minute after, and during the passage of trucks. Truck noise spectral measurements
were also extracted. The species decreased the duration (H = 17.8, df = 2,
p<0.05), the bandwidth (H = 36.28, df = 2, p<0.05), and the maximum frequency (H = 24.45, df = 2, p<0.05), and increased the minimum frequency of
the calls (H = 25.34, df = 2, p<0.05) during exposure to truck noise. These
results indicate that noise can affect the vocal behavior of the species and
reveal the need to address the acoustic impact of mining on animal species.
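The H statistics reported above are consistent with Kruskal–Wallis tests across the three exposure conditions (before, during, and after a truck passage). A sketch with `scipy.stats.kruskal` on simulated durations — the values below are made up for illustration, not the study's data:

```python
import numpy as np
from scipy.stats import kruskal

# Simulated song durations (s) for the three exposure conditions.
rng = np.random.default_rng(1)
before = rng.normal(2.0, 0.3, 40)
during = rng.normal(1.6, 0.3, 40)   # shorter songs under truck noise
after = rng.normal(2.0, 0.3, 40)

# Kruskal-Wallis is rank-based, so it needs no normality assumption;
# H is compared against a chi-squared distribution with df = k - 1 = 2.
H, p = kruskal(before, during, after)
print(f"H = {H:.2f}, p = {p:.4f}")
```

The same call would be repeated for each measured parameter (duration, bandwidth, maximum and minimum frequency).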
5aABb9. Non-linear analysis as a hierarchal classification scheme for
vocalizing marine biologics. Cameron A. Matthews (Panama City Div.,
Naval Surface Warfare Ctr., 110 Vernon Ave., Panama City, FL 32407,
cameron.matthews@navy.mil), Anthony Matthews (EPS Corp, Panama
City, FL), and Anthony Ceparano (Gumbo Limbo Res. Facility, Boca
Raton, FL)
Many ocean animals use acoustic communication. Breeding, territorial
aggression, and hunting often involve some form of vocalization, often complex,
that can provide actionable intelligence on the vocalizing animal’s intent.
As a means of considering the linear features of the time series and the corresponding spectral and cyclic frequency content as it pertains to different
classes of animals, a hierarchy termed periodic linear, periodic non-linear,
and aperiodic non-linear structural vocalization is considered. To demonstrate the application of such a hierarchy, known vocalizations from the red
hind grouper (Epinephelus guttatus), the four-lined grunter (Pelates quadrilineatus), and the spiny lobster (Panulirus argus) are considered and analyzed for
grouping according to the hierarchy based on their time, spectral, and cyclic
signal content. Finally, a vocalization collected from the invasive Lionfish
(Pterois volitans), understood at the time of publication to be the first such
recorded instance, is exercised against the hierarchy to show membership
identification.
5aABb10. Acoustic environment of North Atlantic right whales in the
Southeastern United States. Susan Parks (Dept. of Biology, Syracuse
Univ., Syracuse, NY), Andrew J. Read, and Douglas P. Nowacek (Nicholas
School of the Environment and Duke Univ. Marine Lab., Pratt School of
Eng., Duke Univ., Beaufort, NC, doug.nowacek@duke.edu)
North Atlantic right whales are an endangered species of baleen whale
that migrates along the east coast of the United States, with winter calving
grounds located in the coastal waters off Florida and Georgia. This study
investigated the acoustic environment experienced by individual right
whales swimming through this habitat through the use of suction cup
attached acoustic recording tags. Nineteen tag attachments were made
between 2014 and 2016. These tags documented a range of sounds from the
right whale acoustic environment, including calls produced by the tagged
whale, sounds produced by conspecifics, as well as sounds from other biological (fish and dolphin) and anthropogenic sources. The call rates of individual whales were relatively low, with calls typically produced in short
duration bouts. Sounds from other biological sources, particularly fish and
dolphin, and anthropogenic sources, particularly vessels, were common.
This project presents an initial step toward characterizing the acoustic environment experienced by individual whales to allow future comparisons to
stationary acoustic recordings in the same habitat.
5aABb11. Redefining species boundaries for acoustically and morphologically distinct species of swamp breeding frilled tree frogs (Kurixalus
appendiculatus) in the Southwestern Philippines. Taylor Broadhead (Forestry and Natural Resources, Purdue Univ., 203 S. Martin Jischke Dr.,
B066, West Lafayette, IN 47907, taylorbroadhead@gmail.com), Jesse
Grismer, and Rafe Brown (Ecology and Evolutionary Biology, Univ. of
Kansas, Lawrence, KS)
Combining analysis of male advertisement calls, multivariate analysis of
continuous morphological variation, biogeographic information, and a multilocus phylogenetic estimate of relationships, we reconsider species boundaries within Philippine populations of the frilled tree frogs Kurixalus
appendiculatus. Within the archipelago, the species spans several recognized biogeographic boundaries, with highly divergent genetic lineages isolated within formally recognized, geologically defined, faunal subregions.
Given this distribution, several taxonomic arrangements are
possible, recognizing from one to four evolutionary species. Simultaneous consideration of fixed external phenotypic character differences, continuously varying morphometric data, evolutionary relationships,
biogeography, and statistically significant differences in mating calls converges on a solution of two Philippine species. We advocate for more widespread, regular, and deliberate sampling of acoustic data to diminish
challenges for future studies, where we anticipate the validation of other
likely taxonomic arrangements by differences in advertisement calls.
5aABb12. Brazilian Cerrado nocturnal summer soundscape. Luane S.
Ferreira (Physiol., Universidade Federal do Rio Grande do Norte, Avenida
Senador Salgado Filho 3000 - Campus Universitário, Natal, Rio Grande do
Norte 59078-970, Brazil, fsluane@gmail.com), Eliziane G. Oliveira (Ecology, Universidade Federal do Rio Grande do Norte, Natal, Rio Grande do
Norte, Brazil), Luciana H. Rocha (Physiol., Universidade Federal do Rio
Grande do Norte, Natal, Rio Grande do Norte, Brazil), Flávio H. Rodrigues
(General Biology, Universidade Federal de Minas Gerais, Belo Horizonte,
Minas Gerais, Brazil), and Renata S. Sousa-Lima (Physiol., Universidade
Federal do Rio Grande do Norte, Natal, Rio Grande do Norte, Brazil)
The Brazilian Cerrado is one of the world’s biodiversity hotspots. Our
objective was to characterize its nocturnal soundscape. Twelve autonomous
recorders (Song Meter SM2+, Wildlife Acoustics) were deployed in Canastra
National Park (MG/Brazil) and recorded five consecutive nights during the
rainy season. Using Arbimon II soundscape builder we identified four frequency bands with higher activity levels. The lower band (0.3-1.3 kHz) is
acoustically occupied throughout the night. The second band (2.8-3.2 kHz)
is highly active around sunset and almost disappears after 10 PM. The third
band (3.8-6.6 kHz) splits into two near 9 PM, with the upper limit disappearing after 3 AM. The highest frequency band (9-16 kHz) is the broadest and
occupied in all recordings, consisting of unidentified background
noise. Insects (mainly crickets and cicadas) are present in the three upper
5aABb13. Passive acoustic monitoring finds concurrent use of an artificial reef in the New York Bight by foraging humans and odontocetes
(Tursiops truncatus). Colin Wirth and Joseph Warren (Marine and Atmospheric Sci., Stony Brook Univ., 239 Montauk Hwy., Southampton, NY
11968, colin.wirth@gmail.com)
Passive acoustic recordings collected during summer 2015 at an artificial
reef (sunken barge) south of Long Island, New York revealed regular visitation by groups of delphinid odontocetes. Detected signals included social
(whistles) and foraging (short-interval echolocation) signals of odontocetes,
as well as signals specific to bottlenose dolphins (Tursiops truncatus) and
two known prey species. Visual observations, high broadband noise levels,
and presence of acoustic signatures specific to boats indicated heavy use of
this site by recreational fishers. Boat detections were significantly more frequent on weekends and between sunrise and sunset. Dolphin detections did
not vary diurnally and were significantly lower on weekends, possibly due
to avoidance of persistent noise disturbance. Bottlenose dolphins produce
low-frequency, narrow-band signals that are highly susceptible to masking
by boat noise. However, no significant difference was observed in the duration, average peak frequency, or frequency range of these signals when boat
noise was present or absent. Our findings demonstrate the benefits of passive
acoustic techniques in monitoring soniferous users (including humans) of
artificial reef habitats in these waters. Attraction of both human fishers and
odontocetes to artificial reefs may increase direct human-predator interactions as well as indirect ecological competition.
5aABb14. Effects of noise on avian abundance and productivity at the
landscape scale. Stacy L. DeRuiter (Mathematics and Statistics, Calvin
College, Grand Rapids, MI 49546, sld33@calvin.edu), Amber Bingle (Biology, Calvin College, Grand Rapids, MI), Matthew Link (Mathematics and
Statistics, Calvin College, Grand Rapids, MI), Michael Pontius, and Darren
Proppe (Biology, Calvin College, Grand Rapids, MI)
Songbirds, with their reliance on acoustic communication, may be especially sensitive to potential population consequences of anthropogenic noise.
Regional studies suggest that many species avoid noisy areas, and in some
cases non-avoiding individuals experience reduced fitness. However, multiple studies of a species sometimes produce conflicting results, perhaps due
to localized processes that overpower or exacerbate noise effects. Many
studies also use abundance as an imperfect proxy for population persistence.
To address these issues, we paired large published datasets—from the
MAPS (Monitoring Avian Productivity and Survivorship) program and the
U.S. National Park Service noise map—to assess noise levels and bird
demographics across the continental United States. We modeled effects of
noise on songbird diversity, abundance, productivity, and physical condition
(fat score), accounting for temporal and spatial variation. At the continental
scale, diversity decreased with increasing noise, but other effects varied by
species. For example, least flycatchers become more abundant with increasing noise, and red-breasted nuthatches less abundant. Effects on fat and productivity also varied by species, with abundance trends not consistently
matching productivity. Landscape-scale models such as those presented
here may facilitate range-wide conservation measures and help identify species or groups that are most at risk.
5aABb15. Preliminary measurements of passive acoustics in Lake Superior. Jay A. Austin (Large Lakes Observatory, Univ. of Minnesota, Res.
Lab Bldg., Duluth, MN 55812, jaustin@d.umn.edu)
In July 2016, a hydrophone was deployed for eight days in 54 m of water
in the western arm of Lake Superior. This is, to the best of our knowledge,
the first recording of passive acoustic information in a large lake. The signal
is dominated by noise from passing ships (30-100 Hz) and by surface winds
(broad spectrum). Noise from passing ships drops off as approximately r^-6,
suggesting that there are significant transmission losses associated with
reflections off of the bottom. The signal associated with wind is highly correlated with wind speed as measured at a nearby buoy. Intermittent “clicks”
look similar to burbot calls previously observed, and appear to occur only in
the absence of ship noise. This suggests a potential behavioral response to
ambient acoustic energy. Ray tracing experiments suggest the acoustic environment within the lake will change drastically as the lake transitions from
summer stratified conditions to unstratified, and again when inverse winter
stratification sets in.
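The r^-6 fall-off above can be estimated by regressing received level on log range, since an intensity decay of r^-n is a straight line of slope −10n in log10(r). The sketch below uses synthetic levels with a built-in n = 6 decay; the source level, ranges, and noise are all assumed for illustration.

```python
import numpy as np

# Synthetic received levels (dB) of a passing ship versus range (m),
# generated with a true decay exponent of n = 6 plus 1 dB of scatter.
rng = np.random.default_rng(7)
r = np.linspace(500.0, 5000.0, 30)
n_true = 6.0
rl = 180.0 - 10.0 * n_true * np.log10(r) + rng.normal(0.0, 1.0, 30)

# RL = SL - 10 n log10(r): ordinary least squares on log10(r)
# recovers the decay exponent from the fitted slope.
slope, intercept = np.polyfit(np.log10(r), rl, 1)
n_est = -slope / 10.0
print(round(n_est, 1))
```

For comparison, spherical spreading corresponds to n = 2, so an estimate near 6 does point to strong additional loss such as repeated bottom interaction.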
5aABb16. Correlation of direct field observations of the endangered
Golden-cheeked Warbler (Dendroica chrysoparia) with long-term
acoustical measurements. Preston S. Wilson (Mech. Eng., Univ. of Texas
at Austin, 1 University Station, C2200, Austin, TX 78712-0292, pswilson@
mail.utexas.edu), David P. Knobles (Knobles Sci. and Anal., LLC, Austin,
TX), Lisa O’Donnell, Darrell Hutchinson, and William Reiner (Balcones
Canyonlands Preserve, City of Austin, Austin, TX)
We present elements of a bioacoustics study that correlated direct field
observations of the endangered Golden Cheeked Warbler (Dendroica chrysoparia) in the Balcones Canyonlands Preserve near Austin, TX with colocated long-term acoustical measurements. The goal is to eventually understand the effects of anthropogenic noise on the breeding success of the warblers. The anthropogenic component of the soundscape includes noise from
road traffic, jet aircraft, helicopters, and urban development and utilization.
During the 2016 and 2017 breeding seasons (March through May), acoustical recordings were made from sun up to sun down each day at four sites.
The acoustical measurements were correlated with contemporaneous direct
field observations that mapped the male territories and their degree of reproductive success. The study considered the interplay of the source levels of
the warblers, the noise, and sound propagation loss in the habitat. The key
result is the difference in the distribution of two song types sung by banded
birds with different degrees of breeding success. The relationship between
these distributions, the corresponding success rates, and the size of the
acoustic active space as determined by soundscape characteristics, is
discussed.
5aABb17. Spatial analysis of soundscapes of a Paleotropical rainforest.
Jack T. VanSchaik (Forestry and Natural Resources, Purdue Univ., 203 S.
Martin Jischke Dr., Mann Hall, B066, West Lafayette, IN 47906, jvanscha@
purdue.edu), Amandine Gasc (Forestry and Natural Resources, Purdue
Univ., Paris, France), Kristen M. Bellisario, and Bryan C. Pijanowski (Forestry and Natural Resources, Purdue Univ., West Lafayette, IN)
The world’s biodiversity is drastically decreasing due to human activity.
The paleotropical rainforests of Borneo contribute 10% of the world’s biodiversity but are at risk of destruction due to logging and other human interests. Soundscape ecology, the study of the composition of sounds in an
environment, is a new field that offers potential for biodiversity assessment.
Spatial dynamics are an important component of an ecosystem, yet the link
between spatial dynamics and soundscapes has not yet been studied. It
should be possible to assess disturbance of an ecosystem by analyzing the
spatial structure of the soundscape. Particularly, soundscapes in healthy ecosystems should exhibit more spatial autocorrelation than soundscapes in disturbed ecosystems. We calculated Alpha acoustic and Beta acoustic indices
for 13 recorders at each site that had identical spatial configurations. We
compared the resultant Alpha indices using Moran’s I, Geary’s C, and other statistical tests. We compared Beta indices using Mantel tests and a new technique, the Beta-index semivariogram, a traditional variogram computed on means of Beta indices. Spatial statistics on Alpha and Beta indices, and the Beta-index semivariograms, reveal more spatial autocorrelation at the undisturbed site. However, Beta indices detect disturbance better, presumably due to their comparative nature.
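The spatial-autocorrelation comparison described in this abstract can be sketched numerically; the Moran's I implementation, recorder grid, and index values below are illustrative assumptions, not the study's data:

```python
import numpy as np

def morans_i(values, coords, max_dist=1.5):
    """Moran's I for values at 2-D coordinates, using a binary
    contiguity weight: 1 if two recorders are within max_dist, else 0."""
    values = np.asarray(values, dtype=float)
    n = len(values)
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    w = ((d > 0) & (d <= max_dist)).astype(float)   # exclude self-pairs
    z = values - values.mean()
    num = n * np.sum(w * np.outer(z, z))
    den = w.sum() * np.sum(z ** 2)
    return num / den

# Hypothetical alpha acoustic index values on a 3x3 recorder grid.
coords = np.array([(x, y) for x in range(3) for y in range(3)], dtype=float)
smooth = coords.sum(axis=1)                             # spatially structured
noisy = np.random.default_rng(0).permutation(smooth)    # scrambled arrangement
print(morans_i(smooth, coords))  # → 0.3 (positive: neighbors are similar)
print(morans_i(noisy, coords))
```

A spatially structured index surface scores well above the random-arrangement expectation of -1/(n-1), which is the signature the abstract attributes to undisturbed sites.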
Acoustics ’17 Boston
3943
5a THU. AM
bands, anura in the two lower bands, and birds in the second and third near
dusk and dawn. Characterizing such protected soundscapes is vital for future
monitoring and identification of changes in this important Brazilian
preserve.
5aABb18. Musicological indices for soundscape ecological analysis.
Kristen M. Bellisario, Jack T. VanSchaik (Forestry & Natural Resources, Purdue Univ., 195 Marsteller St., 305 FORS Bldg., West Lafayette, IN 47906), Amandine Gasc (Forestry & Natural Resources, Purdue Univ., Paris, France), Carol Bedoya, Hichem Omrani, and Bryan C. Pijanowski (Forestry & Natural Resources, Purdue Univ., West Lafayette, IN 47906, bpijanow@purdue.edu)
Soundscape ecologists have collected sound recordings from large-scale
studies that are difficult to analyze with traditional approaches and tools.
Natural soundscapes are complex and contain a diverse mixture of biological, geophysical, and anthropogenic sources that span similar frequency
bands and often lack a discernible fundamental frequency. Selecting features that are responsive to signals without fundamental frequencies and that
are capable of classification for multi-layer signals, or polyphonic textures,
is a challenging task in soundscape ecology. Spectral timbral features in various combinations have been shown to discriminate in music classification
problems, and lend support to our hypothesis: timbral features in soundscape analysis may detect and identify patterns that are inherently related to order-specific communication in frequency bands shared by biological, geophysical, and anthropogenic sounds. Combined timbral feature extraction provides a new level of information about acoustic activity within a soundscape. Current soundscape metrics assess biodiversity, functional diversity, and acoustic complexity, but may miss crucial information available from musical analysis techniques used to identify genres and structures in music. This new method provides a relational approach to understanding sound event interactions within soundscapes, refining quantifiable soundscape data, and improving the resolution with which it is analyzed.
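As a rough illustration of the spectral timbral features discussed above, the sketch below computes centroid, bandwidth, and flatness for a single frame with NumPy; the feature set and test signals are hypothetical examples, not the authors' pipeline:

```python
import numpy as np

def timbral_features(frame, sr):
    """Spectral centroid, bandwidth, and flatness for one audio frame."""
    spec = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sr)
    p = spec / spec.sum()                      # normalized magnitude spectrum
    centroid = np.sum(freqs * p)               # "brightness"
    bandwidth = np.sqrt(np.sum(p * (freqs - centroid) ** 2))
    # Flatness: geometric / arithmetic mean; near 0 for tones, higher for noise.
    flatness = np.exp(np.mean(np.log(spec + 1e-12))) / (spec.mean() + 1e-12)
    return centroid, bandwidth, flatness

sr = 22050
t = np.arange(2048) / sr
tone = np.sin(2 * np.pi * 3000 * t)                      # tonal signal
noise = np.random.default_rng(1).standard_normal(2048)   # noise-like signal
print(timbral_features(tone, sr)[2] < timbral_features(noise, sr)[2])  # True
```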
5aABb19. Acoustic competition of Serranids at a fish spawning aggregation. Katherine Cameron, Brice Semmens (Marine Physical Lab., Scripps
Inst. of Oceanogr., 8622 Kennel Way, La Jolla, CA 92037, kccameron@
ucsd.edu), Christy V. Pattengill-Semmens (REEF, La Jolla, CA), Steve Gittings (National Marine Sanctuaries Program, NOAA, Silver Spring, MD),
Croy McCoy (Cayman Island Dept. of Environment, Little Cayman, Cayman Islands), and Ana Širović (Marine Physical Lab., Scripps Inst. of
Oceanogr., La Jolla, CA)
Many fish species are known to produce stereotyped calls during spawning activities. It has been hypothesized that these calls play a vital role in
coordination during this critical period. Competition for acoustic space can
result in masking of calls and, as a result, may limit their function. Acoustic
niche separation could be a solution to avoid acoustic competition for species that co-occur in a geographic area. The data from passive acoustic
arrays deployed in Little Cayman, Cayman Islands, between 2015 and 2017
during the spawning of one of the largest aggregations of Nassau grouper
(Epinephelus striatus) have been analyzed to explore acoustic niches in Serranids. At least three other vocal Serranid species are known to spawn at
this location during the same time as Nassau grouper: red hind (E. guttatus),
black grouper (Mycteroperca bonaci), and yellowfin grouper (M. venenosa).
Call characteristics, such as the peak frequency, bandwidth, source level,
and pulse period, were analyzed for all distinct calls from these species. In
addition, temporal and spatial patterns of those calls were evaluated, allowing a detailed understanding of the communication space and the potential for acoustic competition among these cohabiting species.
5aABb20. Sparsified nightly fish chorusing in the Dry Tortugas during
elevated background noise levels due to a tropical storm. Benjamin S.
Gottesman, Dante Francomano, Taylor Broadhead, and Bryan C. Pijanowski
(Forestry and Natural Resources, Purdue Univ., Ctr. for Global Soundcapes,
331 Smiley St., West Lafayette, IN 47906, bgottesm@purdue.edu)
Determining how fish respond to naturally occurring noise disturbances
can provide insight into the biological mechanisms underlying the response
of fish to anthropogenic noise. Data collected from passive acoustic monitoring in the Dry Tortugas, FL, showed that a tropical storm significantly
increased levels of low-frequency noise over a four-day period. The nightly
fish chorus occurring at this site, likely from Black Drums (Pogonias cromis), was significantly reduced during this storm event, with fewer than
10% of grunts detected during the storm’s peak as compared to pre- and
3944
J. Acoust. Soc. Am., Vol. 141, No. 5, Pt. 2, May 2017
post-storm levels. We applied Soundscape Control Charts, a new method to
quantify the effects of natural and anthropogenic disturbances. Most commonly used in the industrial sector as alarm systems, control charts identify
when a system deviates from its normal state. In this study, Soundscape
Control Charts quantified the effects of this tropical storm on the communication of fish. This serendipitous dataset suggests that fish communication is
negatively impacted by naturally occurring noise disturbances, and is evidence for the need to preserve marine acoustic habitats in order to facilitate
animal communication.
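The control-chart logic can be illustrated with a minimal Shewhart-style sketch; the baseline values, monitored series, and three-sigma limits below are invented for illustration and are not the authors' Soundscape Control Charts implementation:

```python
import numpy as np

def control_limits(baseline):
    """Shewhart-style limits from a baseline period: mean ± 3 standard deviations."""
    mu, sigma = np.mean(baseline), np.std(baseline)
    return mu - 3 * sigma, mu + 3 * sigma

def out_of_control(series, lo, hi):
    """Indices where the monitored series leaves its normal operating band."""
    series = np.asarray(series)
    return np.where((series < lo) | (series > hi))[0]

# Hypothetical nightly chorus-activity scores: stable baseline, then a storm.
baseline = [52, 55, 53, 54, 56, 54, 53, 55]      # pre-storm nights
monitored = [54, 55, 20, 18, 25, 53, 54]         # storm suppresses chorusing
lo, hi = control_limits(baseline)
print(out_of_control(monitored, lo, hi))         # → [2 3 4]
```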
5aABb21. Bioacoustic analysis of penguin vocalization classification for
Newport Aquarium educational exhibit. Bethany Wysocki (Univ. of Cincinnati, 3250 Jefferson Ave., Cincinnati, OH 45220, wysockby@mail.uc.
edu) and Peter M. Scheifele (FETCH~LAB, Univ. of Cincinnati, Cincinnati,
OH)
Newport Aquarium in Newport, Kentucky, strives to engage its visitors
in the educational importance of aquatic life and conservation. Through a
variety of research programs and volunteerism, the aquarium promotes local
and global efforts for animal advocacy. An important component of their organization is the diversity of education offered to its daily patrons and
guests. The purpose of this project was to examine the acoustic and behavioral characteristics of penguin vocalizations in an effort to create an educational exhibit targeting school-age visitors. Vocalization samples of the 5
penguin species housed at Newport Aquarium’s Kroger Penguin Palooza exhibit were recorded at various times of the day, over a 5-month period. In
addition to the recordings of the various penguins, behavioral characteristics
were also noted in correspondence with the individual call and species. The
calls were spectrally analyzed using the SpectraPLUS software system, looking
specifically at frequency, power, and sound pressure levels. Spectrogram
plots were also documented to identify penguin vocalization variances and aid species identification. A hidden Markov model used this information to categorize and cluster vocalizations in an effort to classify them. The information extracted about the vocalizations and acoustic variances will be used
for educational and exhibit purposes for the aquarium.
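The spectral measurements feeding such a classifier (peak frequency and sound pressure level per call) can be sketched as follows; the synthetic call and the calibration assumption are illustrative, and SpectraPLUS's actual processing is not reproduced here:

```python
import numpy as np

def peak_frequency_and_level(clip, sr, p_ref=20e-6):
    """Peak spectral frequency (Hz) and overall SPL (dB re 20 uPa) of a clip,
    treating sample values as calibrated sound pressures in pascals."""
    spec = np.abs(np.fft.rfft(clip))
    freqs = np.fft.rfftfreq(len(clip), d=1.0 / sr)
    peak = freqs[np.argmax(spec)]
    spl = 20 * np.log10(np.sqrt(np.mean(clip ** 2)) / p_ref)
    return peak, spl

sr = 44100
t = np.arange(sr // 10) / sr                 # 100-ms synthetic "call"
call = 0.2 * np.sin(2 * np.pi * 1500 * t)    # 1.5-kHz tone, 0.2 Pa amplitude
peak, spl = peak_frequency_and_level(call, sr)
print(round(peak), round(spl, 1))            # → 1500 77.0
```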
5aABb22. Spectrogram contour analysis of vocalizations of the Asian
Small-Clawed Otter (Amblonyx cinerea). Haylea G. McCoy and Peter M.
Scheifele (Commun. Sci. and Disord., Univ. of Cincinnati, 3202 Eden Ave.,
Cincinnati, OH 45267, roarkhg@mail.uc.edu)
Asian Small-Clawed Otters (Amblonyx cinerea) are small creatures
found in many areas of southern and southeastern Asia, as well as Indonesia.
They mate for life and communicate with one another using a wide variety
of vocalizations, from long, drawn-out cries to small yipping noises. The vocalizations of these otters were recorded in the backup area of the Newport Aquarium and Wellington Zoo and analyzed in SpectraPLUS to determine the spectrogram contour of each vocalization. The goal of this
research was to gather and compile data on the way the Asian Small-Clawed
Otter communicates in order to better understand the way these animals
live. These data will now be used to perform vocal clustering and classification using a Hidden Markov model, spectral moments, and geometric contour classification (Lofft, 2009; Williamson, 2014).
5aABb23. North Atlantic Right Whale call detection with very deep
convolutional neural networks. Kele Xu (College of Electron. Sci. and
Eng., National Univ. of Defense Technol., 16 Rue Flatters, Paris 75005,
France, kelele.xu@gmail.com), Hengxing Cai (Guangdong Provincial Key
Lab. of Intelligent Transportation Systems, Sun Yat-Sen Univ., Guangdong,
China), Xi Liu (ESPCI Paris, Paris, France), Zhifeng Gao (School of Software & Microelectronics, Peking Univ., Peking, China), and Bingbing
Zhang (College of Electron. Sci. and Eng., National Univ. of Defense Technol., Changsha, China)
Ship collision is one of the main threats to the North Atlantic right
whale, which is in danger of extinction. One popular way to reduce collisions is to monitor for the occurrence of whales by detecting their sounds in acoustic recordings. We explore the application of very deep convolutional neural networks to this detection problem. For feature extraction, we compute Mel-frequency cepstral coefficients (MFCCs) along with their first and
second temporal derivatives, and Fourier-transform-based filter banks for all sound clips. MFCCs were calculated with a Hamming window, and the filter banks were calculated in the range of 50–650 Hz and included 72 coefficients, distributed on the mel scale, for each of the 97 time steps. For classification, we apply a very deep convolutional neural network (CNN). The CNN architecture has 22 layers, consisting of alternating convolutional and pooling layers, with fully connected layers at the end. Dropout is used in the fully connected layers with a rate of 0.4. Using the Cornell University Whale Detection data, our model achieves an area under the ROC curve (AUC) of 0.985, which is presently state-of-the-art performance.

5aABb24. Characterizing Chilean blue whale vocalizations with digital acoustic recording tags: A test of using tag accelerometers for caller identification. Mark Saddler (Biology, Woods Hole Oceanographic Inst., 5500 S University Ave. 585, Chicago, IL 60637, mark.saddler@sbcglobal.net), Alessandro Bocconcelli (Appl. Ocean Phys. and Eng., Woods Hole Oceanographic Inst., Woods Hole, MA), Leigh S. Hickmott (Sea Mammal Res. Unit, Scottish Oceans Inst., Univ. of St. Andrews, Petersfield, Hampshire, United Kingdom), Gustavo Chiang, Rafaela Landea-Briones, Paulina A. Bahamonde, Gloria Howes (Fundación MERI, Vitacura, Santiago, Chile), and Laela Sayigh (Biology, Woods Hole Oceanographic Inst., Woods Hole, MA)

Vocal behavior of blue whales (Balaenoptera musculus) in the Gulf of Corcovado, Chile, was analyzed using digital acoustic recording tags (DTAGs). We report the occurrence of Southeast Pacific type 2 (SEP2) calls, which exhibit peak frequencies, durations, and timing consistent with previous reports. We also offer the first description of tonal downswept (D) calls for this population. Since being able to accurately assign vocalizations to individual whales is fundamental for studying communication and for estimating population densities from call rates, we further examine the feasibility of using DTAG accelerometers to identify low-frequency calls produced by tagged whales. We cross-correlated acoustic signals with simultaneous tri-axial accelerometer readings in order to analyze the phase match as well as the amplitude of accelerometer signals associated with low-frequency calls, which provides a reliable method of determining if a call is associated with a detectable acceleration. Our results suggest that vocalizations from nearby individuals are also capable of registering accelerations in the tagged whale’s DTAG record. We cross-correlate acceleration vectors between calls to explore the possibility of using signature acceleration patterns associated with sounds produced within the tagged whale as a new method of identifying which accelerometer-detectable calls originate from the tagged animal.

5aABb25. Build-up effect of auditory streaming in budgerigars (Melopsittacus undulatus). Huaizhen Cai, Laurel A. Screven, and Micheal L. Dent (Psych., Univ. at Buffalo, SUNY, 206 Park Hall, Buffalo, NY 14228, huaizhen@buffalo.edu)

When listening to a rapid tone sequence of the form ABA-ABA-ABA-… (where A and B are two tones of different frequencies and “-” indicates a silence interval), listeners may either hear one coherent “gallop” of three tones grouped together or two separate auditory streams (one high frequency, one low frequency) appearing to come from two sound sources. Research on humans indicates that the tendency to perceive two streams can build up as exposure time increases. Neural recordings in European starlings show build-up effects of auditory streaming as well. A lack of behavioral data on the build-up effect in nonhumans makes it difficult to draw parallels between animals and humans. The present research aims to behaviorally validate the build-up effect of auditory streaming and the factors that may influence the effect in nonhuman animals. Four budgerigars were tested in a categorization task using operant conditioning. “Streaming” categorization increased as the frequency separation increased. Additionally, at some frequency separations, the probability of “streaming” categorizations increased with sequence duration. These results indicate that budgerigars experience a build-up effect of auditory streaming behaviorally, and this effect is influenced by, but may not be limited to, different frequency separations.

5aABb26. Acoustic characterization of sound production by the Pot-Bellied Seahorse (Hippocampus abdominalis). Brittany A. Hutton, Peter M. Scheifele (Commun. Sci. and Disord., Univ. of Cincinnati, 3202 Eden Ave., P.O. Box 670379, Cincinnati, OH 45267, huttonba@mail.uc.edu), and Laurel Johnson (Animal Husbandry, Newport Aquarium, Newport, KY)

Sound production is a critical component of predator-prey interactions. In order to understand why seahorses produce sound in various instances (i.e., courtship, feeding, and stress), we must first quantify the acoustic parameters of the signal. Seahorses produce sound with a stridulation of the supraoccipital bone and the coronet by moving their head upwards in a motion that is referred to as a “snick.” The acoustic signal that accompanies this head movement is called a “click.” We set out to analyze the sound parameters of the largest seahorse species, Hippocampus abdominalis, housed at the Newport Aquarium. Adults and juveniles were each tested individually and allowed one hour to acclimate to an isolated tank. Feeding on brine shrimp (genus Artemia) was observed with video and audio recordings that were collected for approximately 12 minutes. SpectraPLUS was used to evaluate the frequency, intensity, and number of clicks present in the audio recordings. The video footage allowed for analysis of the presence of the snick. By characterizing the sound production in this species of seahorse, we are able to begin to answer the question of the purpose of the click and snick behavior.

5aABb27. Biotic factors influencing sound production patterns by two species of snapping shrimp (Alpheus heterochaelis and A. angulosus). Jessica N. Perelman (Biology, Woods Hole Oceanographic Inst., 266 Woods Hole Rd., MRF, WHOI, Woods Hole, MA 02543, jperelman@whoi.edu), Apryle Panyi (Univ. of Southern MS, Ocean Springs, MS), Ashlee Lillis, and T. Aran Mooney (Biology, Woods Hole Oceanographic Inst., Woods Hole, MA)

Snapping shrimp are among the most pervasive sources of biological sound in the ocean, inhabiting and sonifying many shallow temperate and tropical reefs and seagrass flats. Despite the continuous crackling sounds of snapping shrimp colonies that contribute greatly to marine soundscapes, relatively few studies have explored baseline information regarding acoustic patterns and underlying behavioral ecology. Recent field data have highlighted intricate spatiotemporal dynamics in shrimp sound production, with seasonally variable diel rhythms. However, the biotic factors (e.g., sex, size, species, or behavioral mode) underlying these patterns are unclear. This study investigated the snapping behavior of two species of snapping shrimp (Alpheus heterochaelis and A. angulosus). Previously undescribed spontaneous snap behavior was observed. Snap rates for shrimp held individually, in pairs, and in a 10-shrimp colony were measured under natural day-night light conditions. Results show high variability in individual snap rates, with females generally snapping more than males, and higher snap rates for same-sex pairs compared to male-female pairings. Time of day was also found to variably affect snap rates. Establishing the nature of these patterns increases our understanding of a key sound producer and driver of marine soundscapes.

5aABb28. Comparison between the marine soundscape of recreational boat mooring areas with that of a pristine area in the Mediterranean: Evidence that such acoustic hot-spots are detrimental to ecologically sensitive habitats. Juan Francisco Ruiz, Jose Miguel Gonzalez-Correa, Just Bayle-Sempere, Jaime Ramis (Univ. Alicante, Alicante, Spain), Rodney A. Rountree (Waquoit, MA), and Francis Juanes (Univ. Victoria, 3800 Finnerty Rd., Victoria, BC V8P 5C2, Canada, juanes@uvic.ca)

We investigated the potential to use passive acoustics to assess the impact of recreational boat mooring areas on ecologically sensitive habitats in the Western Mediterranean. One important consequence of the tourist industry in the region is that it targets the most pristine and ecologically sensitive habitats. Underwater sounds were recorded in mooring areas in Ibiza, Formentera, and Tabarca harbors during high-use and low-use seasons and compared to recordings in the Tabarca Marine Protected Area. At each location, we recorded sounds for 20 min at three different sites, at three random sampling times during the day. The percent of time occupied by selected biological (drums and croaks) and anthropogenic sounds (boat and mooring-chain noises), and call rates of selected fish sounds, were measured and compared among sites and seasons. Biological sounds contributed significantly less to the soundscape in mooring areas during the tourist season, and to the reserve in both seasons. Our study demonstrates the critical need for research on the impact of acoustic noise “hot spots” such as recreational mooring areas on marine and freshwater soundscapes.
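The mel filter-bank front end described in abstract 5aABb23 (triangular filters spanning 50–650 Hz on the mel scale) can be sketched as follows; the filter count, FFT size, and sampling rate here are assumptions for illustration, not the authors' exact settings:

```python
import numpy as np

def mel(f):      # Hz → mel
    return 2595 * np.log10(1 + f / 700)

def mel_inv(m):  # mel → Hz
    return 700 * (10 ** (m / 2595) - 1)

def mel_filterbank(n_filters, n_fft, sr, f_lo=50.0, f_hi=650.0):
    """Triangular filters spaced evenly on the mel scale between f_lo and f_hi."""
    pts = mel_inv(np.linspace(mel(f_lo), mel(f_hi), n_filters + 2))
    bins = np.floor((n_fft + 1) * pts / sr).astype(int)
    fb = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(1, n_filters + 1):
        l, c, r = bins[i - 1], bins[i], bins[i + 1]
        for k in range(l, c):                 # rising edge of the triangle
            fb[i - 1, k] = (k - l) / max(c - l, 1)
        for k in range(c, r):                 # falling edge of the triangle
            fb[i - 1, k] = (r - k) / max(r - c, 1)
    return fb

# 24 filters over a 1024-point FFT at a 2-kHz sampling rate (hypothetical).
fb = mel_filterbank(n_filters=24, n_fft=1024, sr=2000)
print(fb.shape)  # → (24, 513)
```

Multiplying a frame's magnitude spectrum by `fb.T` yields the per-filter energies from which log filter-bank features and MFCCs are derived.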
5aABb29. Auditory sensitivity of various areas of the head to underwater acoustical stimulation in odontocetes. Evgeniya Sysueva (Inst. of
Ecology and Evolution, 33 Leninsky Prospect, Moscow 119071, Russian
Federation, evgeniasysueva@gmail.com), Paul E. Nachtigall (Hawaii Inst.
of Marine Biology, Univ. of Hawaii, Kailua, HI), Aude F. Pacini (Hawaii
Inst. of Marine Biology, Univ. of Hawaii, Kaneohe, HI), Jeff Pawloski,
Craig Allum (Sea Life Park, Honolulu, HI), and Alexander Supin (Inst. of
Ecology and Evolution, Moscow, Russian Federation)
Sensitivity to the local underwater acoustic stimulation of the ventro-lateral head surface was investigated in a bottlenose dolphin (Tursiops truncatus). The stimuli were tone pip trains of carrier frequencies ranging from 32
to 128 kHz with a pip rate of 1 kHz. Auditory evoked potentials (the rate
following responses) were recorded. For all the tested frequencies, a low-threshold region was revealed at the lateral side of the middle portion of the lower jaw. This result differed from that obtained in a beluga whale, Delphinapterus leucas (Popov et al., JASA 2016, 140:2018), which revealed a maximal sensitivity region next to the medial side of the middle portion of the lower jaw. The comparative analysis of these data and their extrapolation to odontocetes in general is discussed.
5aABb30. Influence of fatiguing noise on auditory evoked responses to
stimuli of various levels in a beluga whale, Delphinapterus leucas. Vladimir Popov, Evgeniya Sysueva, Dmitry Nechaev, and Alexander Supin (Inst.
of Ecology and Evolution, 33 Leninskij prosp., Moscow, 119071, Russian
Federation, popov.vl.vl@gmail.com)
The post-exposure effect of fatiguing noise (half-octave band-limited
noise centered at 32 kHz) on the evoked responses to test stimuli (rhythmic
pip trains with a 45-kHz center frequency) at various levels (from threshold
to 60 dB above threshold) was investigated in a beluga whale Delphinapterus leucas. For baseline (pre-exposure) responses, the magnitude-vs-level
function featured a segment of steep magnitude dependence on level that
was followed by a segment of little dependence (plateau). Post-exposure,
the function shifted upward along the level scale. Due to the plateau in the
magnitude-vs-level function, post-exposure suppression of responses
depended on the stimulus level such that higher levels corresponded to less
suppression. The experimental data may be modeled based on the compressive non-linearity of the cochlea. According to the model, post-exposure
responses of the cochlea to high-level stimuli are minimally suppressed
compared to the pre-exposure responses, despite a substantially increased
threshold. [The study was supported by The Russian Foundation for Basic
Research (Grant No. 15-04-01068).]
5aABb31. Psychophysical audiogram of a California sea lion (Zalophus
californianus) listening for airborne tonal sounds in an acoustic chamber. Colleen Reichmuth, Jillian Sills, and Asila Ghoul (Inst. of Marine Sci.,
Long Marine Lab., Univ. of California, Long Marine Lab., 115 McAllister
Way, Santa Cruz, CA 95060, coll@ucsc.edu)
Many species-typical audiograms for marine mammals are based on
data from only one or a few individuals that are not always tested under
ideal conditions. Here, we report auditory thresholds across the frequency
range of hearing for a healthy, five-year-old female California sea lion identified as Ronan. Ronan was trained to enter a hemi-anechoic acoustic chamber to perform a go/no-go audiometric experiment. Auditory sensitivity was
measured first by an adaptive staircase procedure and then by the method of
constant stimuli. Minimum audible field measurements were obtained for
500 ms frequency-modulated tonal upsweeps with 10% bandwidth and 5%
rise and fall times. Thresholds were measured at 13 frequencies: in one-octave frequency steps from 0.1 to 25.6 kHz, and additionally at 18.0, 22.0,
36.2, and 40.0 kHz. Sensitivity was greatest between ~0.9 and 23 kHz, with
best hearing of 0 dB re 20 μPa at 12.8 kHz. Hearing range, determined at the 60 dB re 20 μPa level, extended from ~0.2 kHz to 38 kHz. Sensitivity
was comparable to that of three sea lions tested in similarly controlled conditions, and much better than that of two sea lions tested in less controlled
conditions. [Work supported by ONR.]
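The adaptive staircase stage of such an audiometric procedure can be sketched with a simulated listener; the 1-up/1-down rule, step size, and reversal count below are illustrative choices, not the study's exact protocol:

```python
import numpy as np

def staircase_threshold(respond, start_db=60.0, step_db=4.0, n_reversals=8):
    """1-up/1-down staircase: decrease level after a detection, increase after
    a miss; estimate threshold as the mean of the later reversal levels."""
    level, direction = start_db, -1
    reversals = []
    while len(reversals) < n_reversals:
        heard = respond(level)
        new_direction = -1 if heard else 1
        if new_direction != direction:       # response flipped: a reversal
            reversals.append(level)
            direction = new_direction
        level += direction * step_db
    return float(np.mean(reversals[2:]))     # discard early reversals

# Simulated deterministic listener with a true threshold of 30 dB.
listener = lambda level: level >= 30.0
print(staircase_threshold(listener))  # → 30.0
```

A real procedure adds catch trials and a psychometric (probabilistic) listener; with a deterministic listener the staircase settles exactly on the simulated threshold.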
5aABb32. Differential temporal structures in mouse ultrasonic vocalizations relay sex and age information about the producer. Daniel Calbick
(Neurosci., Yale Univ., 333 Cedar St., New Haven, CT 06510, daniel.calbick@yale.edu), Gregg A. Castellucci (Linguist, Yale Univ., New Haven,
CT), and David A. McCormick (Neurosci., Yale Univ., New Haven,
CT)
Mice produce a variety of ultrasonic vocalizations (USVs) during social
interactions among conspecifics. However, it remains unclear which features of these calls mice utilize to distinguish one type of USV from another.
In this study, we examine male courtship USVs, neonatal isolation USVs,
and female social contact USVs, and find that the temporal structure of the
calls alone is sufficient for a high level of discriminability. Specifically, we
found that males produce temporally distinct short and long duration USVs
with resulting short and long duration call intervals, while females produce
nearly exclusively short USVs with short call durations. Young pups were
found to produce medium duration USVs with long call intervals only.
Interestingly, as the pups aged, their USV durations and call intervals
decreased and approached values observed in juvenile male courtship
USVs. Therefore, the gross USV rhythmic structure carries a high degree of
information about both sex and age of the producer, and may be utilized by
mice during call discrimination. These findings are reminiscent of some pinniped species, which face similar challenges in the transmission of their calls (i.e., a high density of calling conspecifics) and also use gross temporal features to distinguish call types.
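A toy classifier shows how gross temporal structure alone could separate the producer classes described above; the duration and interval boundaries are invented for illustration, not values measured in the study:

```python
def classify_usv(duration_ms, interval_ms):
    """Toy rule-based classifier using only gross temporal structure
    (syllable duration and call interval); boundaries are illustrative."""
    if duration_ms >= 40:                    # long syllables: courting males
        return "adult male (long)"
    if duration_ms >= 20:                    # medium syllables
        return "pup" if interval_ms >= 150 else "adult male (short)"
    return "adult female"                    # short syllables, short intervals

print(classify_usv(duration_ms=25, interval_ms=200))  # → pup
```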
5aABb33. Comparison between sub-ms fast spatio-temporal imaging
and electrical recordings from the bat inferior colliculus using a microendoscope. Hidetaka Yashiro (Life and Medical Sci., Doshisha Univ., Kyotanabe, Kyoto 6180011, Japan, hidetaka.yashiro@gmail.com), Kazuo Funabiki (Inst. of Biomedical Res. and Innovation, Kobe, Japan), Andrea
Simmons, James A. Simmons (Brown Univ., Providence, RI), and Hiroshi
Riquimaroux (Shandong Univ., Jinan, Shandong, China)
Recently, we demonstrated that our micro-endoscopic system enables the acquisition of optical images at submillisecond temporal resolution (132 μs/line) by line scan. The micro-endoscope was fabricated from an optical fiber bundle used as the endoscope tip, and the bundle was coated with gold and enamel. The tip was beveled into a cone shape for minimal invasion, and the coated edge is electrically conductive so that it can be used as an electrode. Thus, the system can record electrical activity (local field potentials, LFP, and multi-unit activity, MUA) at a sampling rate of 20 kHz and optical responses (calcium responses derived from Oregon Green BAPTA-1, AM; ΔF/F) simultaneously. We recorded these responses from the inferior colliculus (IC) of
two species of bats (Carollia perspicillata and Eptesicus fuscus) to tone
bursts (10, 20, 40, 60, and 80 kHz), noise bursts (5-100 kHz), and downward
FM sweeps (80-20 and 100-30 kHz). Along the same scanning line, several
activated areas (hot spots) were found responding to a single sound stimulus, while different areas were also found to be activated dependent on types
of sound stimulus. We analyzed spatio-temporal characteristics at a single
recording site and compared calcium response to electrical activities (LFP
and MUA). [Work supported by JSPS, MEXT Japan, ONR, Capita
Foundation.]
5aABb34. Neurobehavioral studies of Dumetella carolinensis in the
Northeast United States. Babak Badiey (Newark Charter School, 200
Mcintire Dr., Newark, DE 19711, babak.badiey@gmail.com)
Behavioral studies of Grey Catbirds (Dumetella carolinensis) have been
conducted over two seasons, including playback calls, to learn about the neurobehavior of mimicking birds. First, birds are identified by their signals, and detailed signal processing techniques are used to examine the characteristics of the bird calls, using call time-frequency signal dispersion and relating them to behavior. Catbird songs often include frequency-modulated notes sweeping through a wide range of frequencies; different sweeps, related to their syrinx, are identified in the signals. The
broadband signals are grouped into similar song patterns and are checked
versus the bird’s position, the background noise, and weather conditions.
These experiments are conducted in controlled background noise conditions
to quantify the effect of noise on the birds. It is found that background noise causes production of louder calls, while it does not affect the bird's behavioral response to playback.
5aABb35. Computational acoustics for soundscape ecology: Simulation
and normalization. Cris Graupe (Multidisciplinary Eng., Purdue Univ.,
272 Littleton St., Unit 561, West Lafayette, IN 47906, cgraupe@purdue.
edu)
One of the major research themes of soundscape ecology is to understand ecosystem dynamics by measuring and analyzing patterns in biological acoustic communication in the context of the local environment. The
emerging field operates at landscape scales to quantify compositional, spatial, and temporal variation in these patterns. This variation is reflected in specific acoustic qualities that must be taken into account to compare diverse soundscape recordings more accurately. New software is being developed
that aims to overcome this challenge. The MATLAB-based software utilizes
geometrical and parabolic methods of computational acoustics to simulate
terrestrial sound propagation, quantifying the impact of variable environmental filtration on acoustic recordings. This software provides both the
ability to quantitatively compare this impact across ecosystems, and the
ability to normalize acoustic recordings so that soundscape data from distinct environments can be compared and analyzed more accurately.
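A toy version of the propagation-loss bookkeeping such software performs is sketched below, using spherical spreading plus a flat excess-attenuation term; the source level, noise floor, and attenuation coefficient are assumptions, and real geometrical and parabolic-equation models are far more involved:

```python
import math

def received_level(source_db, distance_m, ref_m=1.0, alpha_db_per_m=0.005):
    """Received level after spherical spreading (20 log r) plus a simple
    linear excess-attenuation (absorption/scattering) term."""
    spreading = 20 * math.log10(distance_m / ref_m)
    return source_db - spreading - alpha_db_per_m * (distance_m - ref_m)

def active_space(source_db, noise_db, alpha_db_per_m=0.005):
    """Largest distance (m) at which the signal still exceeds the noise floor."""
    r = 1.0
    while received_level(source_db, r, alpha_db_per_m=alpha_db_per_m) > noise_db:
        r += 1.0
    return r - 1.0

# Hypothetical songbird: 85 dB source level at 1 m, 40 dB ambient noise.
print(active_space(85, 40))
```

Raising the ambient-noise term shrinks the computed active space, the same interaction the warbler soundscape study above (5aABb16) considers.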
5aABb36. Bioacoustic monitoring station in underwater sculpture. Heather R. Spence (GRACIASS, 801 S. 25th St., Arlington, VA 22202, info@heatherspence.net)

In the waters surrounding Cancun, pressures from development, tourism, and shipping threaten the second largest coral reef system in the world. Passive bioacoustic monitoring provides information about the presence and activity of marine life and other environmental information, including during the night and in inclement weather. The development and deployment of underwater reef-forming art structures provided a unique opportunity to listen to the stages of growth. Starting in 2012, an Ecological Acoustic Recorder was used to sample sounds at “The Listener” sculpture site in the Punta Nizuc Marine Protected Area in Quintana Roo, Mexico. Snapping shrimp and fish sounds were prevalent, as were sounds from boat motors. Diel patterns were characterized to inform management practices and assess the efficacy of nighttime noise restrictions.

5aABb37. Non-whistle sounds used in bottlenose dolphin aggressive interactions recorded on digital acoustic tags. Laela Sayigh, Austin Dziki (Biology Dept., Woods Hole Oceanographic Inst., MS #50, Woods Hole, MA 02543, lsayigh@whoi.edu), Vincent Janik (Scottish Oceans Inst., Univ. of St. Andrews, St. Andrews, United Kingdom), Edward Kim (Univ. of Pennsylvania, Philadelphia, PA), Katherine McHugh (Chicago Zoological Society, Sarasota Dolphin Res. Program, Sarasota, FL), Peter L. Tyack (Scottish Oceans Inst., Univ. of St. Andrews, St. Andrews, Fife, United Kingdom), Randall Wells (Chicago Zoological Society, Sarasota Dolphin Res. Program, Sarasota, FL), and Frants H. Jensen (Aarhus Univ., Woods Hole, MA)

Bottlenose dolphins (Tursiops truncatus) produce a wide array of sounds, including clicks for echolocation and whistles for communication, both of which have been studied intensively. However, sounds other than whistles and echolocation clicks have received less attention, probably because of their high variability. These include the class of sounds loosely described as “burst pulses,” which in several studies of dolphins under human care have been linked to aggressive interactions. Few studies have been carried out in the wild beyond those describing basic acoustic parameters of sounds. Here we use acoustic and movement recording tags (DTAGs) placed simultaneously on both members of pairs of free-ranging bottlenose dolphins in Sarasota Bay, Florida, USA, to investigate acoustic behavior during aggressive interactions between male alliances and female-calf pairs. Using unsupervised clustering and discriminant function analysis on parameters such as frequency content, duration, and rise time, we separate three different sound types recorded during aggressive interactions: broadband burst pulses, highly resonant cracks, and low-frequency narrowband quacks. We demonstrate how these different signal types appear to be used in the context of male-to-female aggression and/or male-male coordination. These characterizations may assist researchers analyzing acoustic recordings in assigning behavioral states to animals that are largely out of sight.

5aABb38. Potential to use passive acoustics to monitor the invasion of the Hudson River by freshwater drum. Rodney A. Rountree (23 Joshua Ln., Waquoit, MA 02536, rrountree@fishecology.org) and Francis Juanes (Biology, Univ. of Victoria, Victoria, BC, Canada)

We conducted a preliminary passive acoustic survey of the occurrence of freshwater drum in the New York State Canal System (NYSCS). Similar to better-studied marine members of the Sciaenidae, freshwater drum calls are composed of highly variable trains of 1 to 119 knocks/call (mean = 25 knocks/call), with a mean knock rate of 33 knocks/s, a mean peak frequency of 400 Hz, and a mean duration of 0.8 s. The occurrence of reproductively active freshwater drum, as evidenced by the presence of chorus calling throughout the canals, suggests that native drum populations from Lake Champlain, Lake Erie, and Lake Ontario likely all contribute to the Hudson River invasive population. We suggest that freshwater drum most likely also invaded the Finger Lakes through the NYSCS. The invasion of the Hudson River by freshwater drum is significant because the species had previously been geographically excluded from the entire East Coast. It is a prolific species with the widest distribution of any native species in the Americas and will likely have a strong impact on the Hudson River ecosystem. We conclude that passive acoustic surveys are a highly effective, non-invasive tool to monitor the future spread of freshwater drum in the Hudson River system.
3947
J. Acoust. Soc. Am., Vol. 141, No. 5, Pt. 2, May 2017
Acoustics ’17 Boston
3947
THURSDAY MORNING, 29 JUNE 2017
ROOM 310, 8:15 A.M. TO 12:20 P.M.
Session 5aAO
Acoustical Oceanography: Tools and Methods for Ocean Mapping I
Scott Loranger, Cochair
Earth Science, University of New Hampshire, 24 Colovos Road, Durham, NH 03824
Philippe Blondel, Cochair
Physics, University of Bath, University of Bath, Claverton Down, Bath BA2 7AY, United Kingdom
Chair’s Introduction—8:15
Invited Papers
8:20
5aAO1. Forty years of progress in multibeam echosounder technology for ocean investigation. Xavier Lurton (Underwater Acoust.
Lab., Ifremer, IMN/NSE/ASTI, CS 10070, Plouzane 29280, France, lurton@ifremer.fr)
Over four decades, multibeam echosounders (MBES) have continuously progressed in their technology and application fields, and are today the favorite active-sonar tool of several communities for ocean investigation. Measurement capabilities extended progressively from seafloor bathymetry to interface imagery and reflectometry, and then to water-column imaging and target quantification; the domains concerned spread from hydrography and seafloor mapping to geosciences, the offshore industry, biology and fisheries, coastal engineering, habitat mapping, and environmental science and monitoring. This paper offers an overview of this history, based on the experience of the author’s oceanography institute since 1977. Technologically, MBES first extended the classical single-beam echosounder to a narrow fan of elementary beams, then to a wide-coverage design similar to sidescan sonars, to multiple swaths, and to various geometries of 3-D insonification. Various aspects of MBES technology evolution are presented: frequencies and powers; array configuration and beam features; electronics; sounding detection methods; and reflectometry techniques. The progress in performance includes coverage extent; bathymetry accuracy, resolution, and sampling density; reflectometry reliability; and water-column data specificities. The importance of ancillary sensors and on-carrier platform installation is emphasized. Recent trends and possible future evolutions are finally presented, as well as some facts related to MBES environmental impact.
8:40
5aAO2. Regional seabed backscatter mapping using multiple frequencies. John E. Hughes Clarke, Anand Hiroji (Ctr. for Coastal
and Ocean Mapping, Univ. of New Hampshire, Jere A. Chase Ocean Eng. Lab, 24 Colovos Rd., Durham, NH 03824, jhc@ccom.unh.
edu), Glen Rice (HSTP, NOAA , Durham, NH), Fabio Sacchetti, and Vera Quinlan (INFOMAR, Marine Inst., Renville, Co. Galway,
Ireland)
The frequency dependence of seabed backscatter has previously been assessed at static sites. While dependencies have been identified, the restricted range of seabed types and issues of absolute calibration have limited inferences. Until recently, seabed backscatter measurements from underway mapping sonars (sidescans and multibeams) have predominantly been at a single center frequency, dictated by the range-versus-resolution compromise best suited to the sonar altitude. With improved range performance using FM pulses, the depth range over which a specific frequency is usable has expanded. Taking advantage of that, two national seafloor-mapping programs have switched to routine collection of seabed backscatter with wavelength differences of almost an order of magnitude. The NOAA ship Thomas Jefferson and the Irish government vessel Celtic Explorer now acquire data at 45 and 300 kHz and at 30 and 200 kHz, respectively. Even though absolute calibration remains a concern (particularly for multi-sector systems), the spatial variation of relative backscatter strength clearly provides extra discrimination between seafloors that would not be separable using a single frequency. The discrimination appears both in grazing-angle-normalized scattering strength and in the variable shape of the angular-response curves. For these two programs, data-manipulation procedures and preliminary results are presented.
9:00
5aAO3. Integrating dual frequency side-scan sonar data and multibeam backscatter, angular response and bathymetry, for
benthic habitat mapping in the Laganas Gulf MPA, Zakinthos Isl., Greece. Elias Fakiris, Xenophon Dimas, Nikolaos Georgiou,
Dimitrios Christodoulou (Geology, Univ. of Patras, University Campus, Rio 26504, Greece, fakiris@upatras.gr), Yuri Rzhanov (Ctr. for
Coastal and Ocean Mapping, Univ. of New Hampshire, Durham, NH), and George Papatheodorou (Geology, Univ. of Patras, Patras,
Greece)
The preferred procedure nowadays for benthic habitat mapping combines marine acoustic and ground-truthing methods, with the former tending to be swath sonars such as multibeam echo sounders (MBES) and side-scan sonars (SSS). Both acoustic systems, in conjunction with extensive underwater video footage, were employed to map in detail the benthic habitats of Laganas Gulf in the National Marine Park of Zakynthos Isl., Greece, including key protected habitats such as P. oceanica beds and coralligenous formations. Object-oriented seafloor classification was achieved by taking advantage of the multi-layer information available, including the two individual SSS frequencies and the MBES backscatter, angular response, and bathymetry data. The extracted statistical derivatives comprised: (1) textural analysis of the multi-frequency backscatter imagery, (2) angular-range analysis of the MBES backscatter, and (3) various bathymetric indices. These derivatives were classified individually or fused together, to explore the rate of improvement when choosing one system or another and to investigate their best combinations and practices for valid seafloor classification. Benthic habitat classification used the NATURA and EUNIS classification schemes for compatibility with EU regulations.
Contributed Paper
9:20

5aAO4. Target detection with multisector and multiswath echosounders. Christian de Moustier (10dBx LLC, PO Box 81777, San Diego, CA 92138, cpm@ieee.org)

Discontinuities in the ensonification pattern of certain multibeam echosounders can hamper the detection of targets in the water column and on the bottom. Such echosounders operate in single- or multiswath modes with several independent transmit sectors per swath. Each sector has a unique acoustic frequency band and is steered across and along track to compensate for the vessel’s pitch and yaw and to achieve a specified spatial density of soundings. This sonar geometry is optimized for bathymetry but is less favorable for processing and interpretation of acoustic backscatter intensities, because the sectors have different acoustic absorption profiles and are disjoint across and along track. This paper presents a method of target detection in the water column that involves frequency- and depth-dependent compensation for transmission loss in each sector, equalization of gains across all the sectors, and adaptive removal of sidelobe interferences. Echo intensities in each sector are then aggregated along track and displayed as signal-to-clutter ratio vs. altitude and distance along track. This technique allows detection of targets that are in the water column between the bottom echo trace and the ring of sidelobe interference, a region that is often deemed “blind” or too noisy for target detection.
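The compensation-and-display chain sketched in this abstract (per-sector transmission-loss correction followed by a signal-to-clutter display) can be illustrated with a toy calculation. The spreading law, absorption coefficient, grid sizes, and injected target below are illustrative assumptions, not the author's implementation:

```python
import numpy as np

def compensate_and_scr(echo, ranges, alpha_db_per_m, spreading_db=40.0):
    """Apply frequency-dependent transmission-loss compensation to one
    sector's echo levels (dB), then return a signal-to-clutter ratio.

    echo           : (n_pings, n_ranges) echo levels in dB (synthetic here)
    ranges         : (n_ranges,) slant ranges in metres
    alpha_db_per_m : absorption coefficient for this sector's band (dB/m)
    """
    # Two-way transmission loss: spherical spreading plus absorption
    tl = spreading_db * np.log10(ranges) + 2.0 * alpha_db_per_m * ranges
    compensated = echo + tl                 # undo the propagation loss
    # Clutter background: median level at each range bin across pings
    clutter = np.median(compensated, axis=0)
    return compensated - clutter            # signal-to-clutter ratio, dB

rng = np.random.default_rng(0)
ranges = np.linspace(10.0, 500.0, 200)
echo = rng.normal(-80.0, 2.0, size=(50, 200))   # synthetic noise floor
echo[25, 120] += 25.0                           # inject one strong target
scr = compensate_and_scr(echo, ranges, alpha_db_per_m=0.01)
print(scr.shape)  # (50, 200)
print(np.unravel_index(np.argmax(scr), scr.shape))  # index of the injected target
```

A median across pings stands in for the clutter background here; the paper's gain equalization across sectors and adaptive sidelobe removal are omitted for brevity.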
Invited Papers
9:40
5aAO5. Multibeam sonar water column data processing tools to support coastal ecosystem science. Ian Church (Geodesy and Geomatics Eng., Univ. of NB, 15 Dineen Dr., Fredericton, NB E3B5A3, Canada, ian.church@unb.ca)
Multibeam sonar water column data are routinely collected by hydrographic survey vessels to observe and validate minimum depth
measurements over anthropogenic submerged objects, which may pose a hazard to navigation. A large volume of additional data is collected during this process but mostly goes unused. This project investigates the development of processing tools and algorithms to automate the extraction of oceanographic and ecological features from water-column data files to enhance the usefulness of these datasets. Sample data from a Gulf of Mexico Research Initiative (GoMRI) project are explored, in which multibeam data were collected simultaneously with a towed profiling high-resolution in situ ichthyoplankton imaging system (with CTD, dissolved oxygen, PAR, and chlorophyll-a fluorescence sensors). The objective is to identify and map spatial and temporal variations in biomass throughout the water
column by correlating the acoustic data with imagery and sensor output from the towed profiling system. Processing the dataset to isolate
and identify objects and signal patterns of interest, which might normally be considered noise, is investigated. The developed tools will
aid in biological and physical feature extraction to further enhance the application of multibeam acoustic water column data.
10:00
5aAO6. Mapping methane gas seeps with multibeam and split-beam echo sounders. Thomas C. Weber (Univ. of New Hampshire,
24 Colovos Rd., Durham, NH 03824, tom.weber@unh.edu)
Modern multibeam echo sounders (MBES), which are most widely used for collecting high-resolution bathymetry and seabed imagery, often have the capability of recording acoustic backscatter from the full water column. This capability has enabled several new
applications for MBES including the study of marine organisms, the quantification of suspended sediments, imaging physical oceanographic structure, and the detection, localization, and characterization of methane gas seeps. Split-beam echo sounders (SBES) are
widely used in fisheries applications, but have a similarly diverse range of other applications. Here, we review the use of both MBES
and SBES systems for mapping different phenomena in the water column, with a focus on mapping methane gas seeps. In doing so, we
attempt to highlight the many advantages of these systems, but also discuss some of the limitations including the masking of targets by
high seafloor reverberation levels in MBES systems. We also discuss some of the challenges associated with wide-bandwidth SBES systems, including our attempts to maintain a frequency-independent field of view using constant-beamwidth transducers.
10:20–10:40 Break
10:40
5aAO7. Use of acoustic estimates of euphausiid biomass in trophic dynamics and ecosystem models of the Georges Bank/Gulf of
Maine Region from 1999 to 2012. J. Michael Jech (Northeast Fisheries Sci. Ctr., 166 Water St., Woods Hole, MA 02543, michael.
jech@noaa.gov), Gareth L. Lawson (Biology, Woods Hole Oceanographic Inst., Woods Hole, MA), Michael Lowe (Renewable Natural
Resources, Louisiana State Univ. Agricultural Ctr., Baton Rouge, LA), Sean Lucey, and Paula Fratantoni (Northeast Fisheries Sci. Ctr.,
Woods Hole, MA)
Euphausiids are a key link between primary production and higher-level predators in the Gulf of Maine, but are not well sampled
during standard fisheries surveys. Multifrequency acoustic data may provide useful estimates of euphausiid distribution and biomass, as
long as automated classification of acoustic backscatter is reliable and robust. Estimates of euphausiid biomass in the Georges Bank
region of the Gulf of Maine were derived from annual acoustic/midwater trawl surveys from 1999 through 2012. Acoustic data were collected continuously with Simrad EK500 and EK60 echo sounders operating at 18, 38, and 120 kHz. Four different methods were used to
classify euphausiids from the acoustic data: multifrequency single beam imaging, “dB-differencing” of 120- and 38-kHz volume backscatter, multifrequency z-score, and a multifrequency index. Scattering model predictions of euphausiid target strength and biological
metrics were used to scale acoustic data to biomass for each classification method. Biomass estimates were compared among classification methods and to depth-stratified quantitative net samples to evaluate whether the acoustically-derived biomass estimates were commensurate with historical estimates. Biomass estimates were also incorporated in ecosystem models and in calculations of euphausiid
consumption by fish and other predators to assess their importance in the ecosystem and their trophic significance.
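The “dB-differencing” classifier named above can be sketched in a few lines; the 8-16 dB acceptance window and the synthetic Sv grids are hypothetical, chosen only to illustrate the mechanics, not the survey's actual thresholds:

```python
import numpy as np

def db_difference_mask(sv_120, sv_38, lo=8.0, hi=16.0):
    """Flag cells as euphausiid-like where the 120-minus-38 kHz
    volume-backscattering difference falls inside a window (dB).
    Window bounds here are illustrative, not the study's values."""
    diff = sv_120 - sv_38
    return (diff >= lo) & (diff <= hi)

# Synthetic Sv grids (depth x distance), dB re 1 m^-1
sv_38 = np.full((4, 4), -80.0)
sv_120 = sv_38 + np.array([[12.0,  2.0, 12.0, 20.0],
                           [12.0, 12.0,  2.0,  2.0],
                           [ 2.0,  2.0, 12.0, 12.0],
                           [20.0, 12.0,  2.0, 12.0]])
mask = db_difference_mask(sv_120, sv_38)
print(int(mask.sum()))  # 8 cells fall inside the 8-16 dB window
```

Scaling the flagged cells to biomass then requires a target-strength model, as the abstract notes; this snippet covers only the classification step.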
Contributed Papers
11:00
5aAO8. Assessment of three sonars to evaluate the downstream migration of American Eel in the St. Lawrence River. Christopher W. Gurshin
(Normandeau Assoc. Inc., 30 Int. Dr., Ste. 6, Portsmouth, NH 03801, cgurshin@normandeau.com), David J. Coughlan (Normandeau Assoc., Inc.,
Stanley, NC), Anna-Maria Mueller, Don J. Degan (AquAcoust., Sterling,
AK), and Paul T. Jacobson (Electric Power Res. Inst., Glenelg, MD)
This study assessed the feasibility of three sonar technologies to estimate
eel abundance, determine distribution, and describe approach behavior to
advance strategies for providing safe downstream passage of out-migrating
American Eels at hydroelectric facilities on the St. Lawrence River. A Simrad EK60 split-beam echosounder (120 kHz), Sound Metrics ARIS Explorer
multibeam sonar (1100/1800 kHz), and Mesotech M3 multi-mode multibeam sonar (500 kHz) were deployed at Iroquois Dam for experimentally
testing their capabilities in detecting and identifying known numbers and
sizes of live adult eels tethered to surface floats released upstream of the sonar beams and allowed to swim through at known locations and times. In
addition, sonars collected data continuously to monitor wild, out-migrating
eels during July 15-22 and September 17-19, 2015. Results highlight several
challenges in acoustically monitoring eels in a large, fast-moving river in which other targets are a few orders of magnitude more abundant, which can lead to a high false-positive error rate. The ARIS multibeam sonar, operating with
48 beams, holds the most promise for correctly identifying eels out to 16-20
m in range, but the M3 multibeam sonar has some value for tracking previously identified targets over larger areas.
11:20

5aAO9. Acoustic observations of the fish assemblage during a multibeam hydrographic survey of a cod spawning area. Christopher W. Gurshin (Normandeau Assoc. Inc., 30 Int. Dr., Ste. 6, Portsmouth, NH 03801, cgurshin@normandeau.com)

A two-day hydrographic survey using a dual-head Kongsberg EM3002 multibeam echosounder was conducted to map the seafloor features within 9 square kilometers of the Gulf of Maine Cod Spawning Protection Area, where Atlantic Cod (Gadus morhua) were known to concentrate during their spring spawning season. Acoustic backscatter from the water column collected during the first day of this survey was used to describe the fish assemblage associated with a corridor bounded by elevated features when the area was closed to commercial fishing. During the survey, a bottom trawler under a research permit made six tows where fish were observed by the hydrographic survey vessel. The species assemblage was dominated (>80%) by Atlantic Cod. Acoustic observations from the second day, when the area was open to an active commercial fishery, were compared to those during the fishing-area closure. Echo statistics and position in the water column were used to characterize categories of fish detections and to associate them spatially with seafloor features and temporally with the fishing closure.

11:40

5aAO10. Sediment sound speed dispersion inferences from broadband reflection coefficient measurements. Charles W. Holland (Appl. Res. Lab., The Penn State Univ., P.O. Box 30, State College, PA 16804, cwh10@psu.edu), Samuel Pinson (State College, PA), and Derek R. Olson (Appl. Res. Lab., The Penn State Univ., State College, PA)

The frequency dependence, or dispersion, of sound speed in marine sediments has been a topic of considerable interest and remains an active area of research. While experiments on well-sorted sediments (having a narrow range of grain sizes) show promising concordance with theory, the more typical continental-shelf sediments exhibit a rather wide range of grain sizes. A major experimental challenge is to measure in situ sound speed over a sufficiently wide frequency range that the underlying mechanisms (e.g., viscous or frictional) controlling intrinsic dispersion can be isolated. Broadband 1.8-10 kHz seabed reflection measurements in the TREX13 experiment show a critical angle that is very nearly frequency independent. When the effects of wavefront curvature, sound speed gradients, layering, and roughness are taken into account, this observation indicates that sediment sound speed must also be nearly independent of frequency. [Research supported by the ONR Ocean Acoustics Program.]

12:00

5aAO11. Development of a new acoustic mapping method for eelgrass and macroalgae using a multi-beam echo-sounder. Ashley R. Norton and Semme J. Dijkstra (Ctr. for Coastal and Ocean Mapping, Univ. of New Hampshire, 24 Colovos Rd., Durham, NH 03824, anorton@ccom.unh.edu)

Eelgrass and various macroalgae play important roles in temperate coastal ecosystems, including as habitat for many species and as a bio-indicator of water quality. However, in turbid or deeper waters, the optical remote sensing methods commonly used for mapping eelgrass do not provide the necessary range for analysis. We are developing a methodology for detecting and characterizing eelgrass and macroalgae beds using water-column backscatter data from multi-beam echosounder systems. We are specifically developing methods to map the maximum depth limit, percent cover, functional type (i.e., macroalgae or eelgrass), and canopy height of the beds, because these are difficult to characterize using existing optical and acoustic methods. Water-column data were collected using an Odom MB1 sonar in 2014 and 2015 over a variety of vegetated sites selected to represent a range of conditions: dense/sparse eelgrass, long/short eelgrass, mixed macroalgae and eelgrass, eelgrass on muddy or hard substrates, etc. In addition to sonar data, drop-camera data were collected, and data from a regional aerial mapping program also exist for comparison. Initial data analysis shows good agreement between drop-camera and sonar detections, and patches as small as 1 m² and as short as 20 cm are detectable.
THURSDAY MORNING, 29 JUNE 2017
BALLROOM B, 7:55 A.M. TO 12:20 P.M.
Session 5aBAa
Biomedical Acoustics and Signal Processing in Acoustics: Diagnostic and Therapeutic Applications of
Ultrasound Contrast Agents I
Tyrone M. Porter, Cochair
Boston University, 110 Cummington Mall, Boston, MA 02215
Klazina Kooiman, Cochair
Thoraxcenter, Dept. of Biomedical Engineering, Erasmus MC, P.O. Box 2040, Room Ee2302, Rotterdam 3000 CA,
Netherlands
Chair’s Introduction—7:55
Invited Papers
8:00
5aBAa1. High frame rate imaging of microbubble contrast agents. Mengxing Tang (Dept. of BioEng., Imperial College London,
London, N/A SW7 2AZ, United Kingdom, mengxing.tang@imperial.ac.uk)
The advent of microbubble contrast agents has transformed the way blood flow and tissue perfusion can be imaged using ultrasound
in cardiovascular and oncological applications. The high echogenicity offered by the microbubbles greatly enhances ultrasound echoes
from within the blood, making imaging of micro-vascular flow and tissue perfusion possible. Recent advances in high frame rate (HFR)
ultrasound, which uses non-focused wave transmission, parallel data acquisition and digital beamforming, enable an imaging frame rate
two orders of magnitude higher than the existing line-by-line scanning approach. A combination of HFR ultrasound with microbubble
contrast-enhanced ultrasound (CEUS) offers exciting new opportunities for imaging with unprecedented resolution and contrast. HFR imaging allows better tracking of fast-moving targets such as arterial flow and cardiac motion. Even for slow-moving targets, processing the large amount of data afforded by HFR ultrasound can yield significant improvements in image contrast and signal-to-noise ratio. We
have developed HFR CEUS imaging methodology based on a HFR ultrasound research platform using both linear and phased array
probes, and achieved a frame rate of up to tens of thousands of frames per second over multiple centimeter depth. We have demonstrated
that HFR CEUS, together with signal processing, significantly improves macro- and micro-vascular flow imaging in vivo.
8:20
5aBAa2. Ultrafast ultrasound localization microscopy. Claudia Errico, Olivier Couture, and Mickael Tanter (CNRS, INSERM,
ESPCI Paris, PSL Res. University, Institut Langevin, 17, rue moreau, Paris, IDF 75012, France, cerrico87@gmail.com)
The resolution of ultrasound imaging is limited by classical wave-diffraction theory and corresponds roughly to the ultrasonic wavelength (from 0.2 to 1 mm for clinical applications). Current methods for in vivo microvascular imaging are limited by trade-offs between depth of penetration and resolution. Inspired by optical localization techniques (FPALM), we developed ultrafast ultrasound localization microscopy (uULM), in which the resolution is not limited by the wavelength (Couture et al., 2011; Desailly et al., 2013). The use of ultrafast ultrasound acquisitions, based on plane-wave transmissions at rates of thousands of frames per second, enabled the separation of millions of microbubbles, achieving a resolution of about 8 µm at 12 mm depth for the vascular structure of the rat brain in vivo (Errico et al., 2015). Moreover, we have lately demonstrated that, by combining acoustic vaporization of composite droplets with rapid ultrasound monitoring, ultrasound drug delivery can also be attained with subwavelength precision (Hingot et al., 2016).
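The localization step that underlies uULM-style super-resolution can be illustrated with a toy sub-pixel peak localizer; the grid size, detection threshold, and bubble position below are hypothetical:

```python
import numpy as np

def localize_peaks(frame, threshold):
    """Sub-pixel localization of isolated point echoes by centroiding the
    3x3 neighborhood around each local maximum. A toy stand-in for the
    localization step of uULM, not the published algorithm."""
    peaks = []
    rows = np.array([[-1.0], [0.0], [1.0]])   # row offsets, column vector
    cols = np.array([[-1.0, 0.0, 1.0]])       # column offsets, row vector
    for i in range(1, frame.shape[0] - 1):
        for j in range(1, frame.shape[1] - 1):
            patch = frame[i - 1:i + 2, j - 1:j + 2]
            if frame[i, j] >= threshold and frame[i, j] == patch.max():
                w = patch / patch.sum()        # intensity weights
                peaks.append((i + (w * rows).sum(), j + (w * cols).sum()))
    return peaks

# One synthetic bubble echo at a non-integer grid position
ii, jj = np.mgrid[0:16, 0:16]
true_pos = (5.3, 7.6)
frame = np.exp(-((ii - true_pos[0])**2 + (jj - true_pos[1])**2) / 2.0)
peaks = localize_peaks(frame, threshold=0.5)
print(len(peaks))  # 1
```

Plain 3x3 centroiding is biased toward the nearest grid node (here the estimate lands within roughly 0.2 pixel of the true position); practical pipelines fit the system's point-spread function instead and accumulate the fitted positions over many frames.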
Contributed Papers
8:40
5aBAa3. Effects of a mono-disperse bubble population on the cumulative phase delay between second harmonic and fundamental component.
Libertario Demi (TMC Europe, Da Vincilaan 5, Brussel 1903, Belgium, libertario.demi@tmceurope.com), Wim van Hoeve (Tide Microfluidics,
Enschede, Netherlands), Ruud J. van Sloun (Eindhoven Univ. of Technol.,
Eindhoven, Netherlands), Hessel Wijkstra (Eindhoven Univ. of Technol.,
Amsterdam, Netherlands), and Massimo Mischi (Eindhoven Univ. of Technol., Eindhoven, Netherlands)
A positive cumulative phase delay (CPD) between the second-harmonic and fundamental components of ultrasound waves is a marker specific to ultrasound contrast agents. Dynamic contrast-enhanced ultrasound images generated by using this marker have already been reported in the literature. However, only results obtained with a poly-disperse contrast agent (SonoVue®) have been presented. In this study, we compared CPD values obtained with standard SonoVue® and with a mono-disperse contrast agent (MDCA); the latter consisted of 4-µm microbubbles produced with a MicroSphereCreator®. The same as with SonoVue®, MDCA microbubbles were made of SF6 gas encapsulated in a monolayer phospholipid shell. A ULA-OP research platform equipped with an LA332 linear-array probe was employed to image a dedicated gelatin flow phantom. Different frequencies (1.5 to 3.5 MHz) with a mechanical index of 0.7 were used. Images were reconstructed in a tomographic fashion, and the contrast-to-tissue ratio (CTR) was evaluated. When imaging at 1.8 MHz (close to the resonance frequency of a 4-µm microbubble), results show significantly stronger CPD values for the mono-disperse microbubbles, resulting in an overall CTR improvement of 5 dB. In conclusion, mono-disperse UCAs can be used to improve the imaging performance of cumulative phase delay imaging.
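The CPD marker, i.e., the phase of the second harmonic relative to (twice) the fundamental, can be illustrated on a synthetic waveform. The sampling rate, tone amplitudes, and the injected 0.3 rad delay below are hypothetical, chosen so the tones fall on exact FFT bins:

```python
import numpy as np

def cumulative_phase_delay(signal, fs, f0):
    """Phase of the second harmonic relative to twice the fundamental,
    estimated from an FFT. A minimal sketch of the CPD marker, not the
    authors' processing chain."""
    n = len(signal)
    spec = np.fft.rfft(signal * np.hanning(n))
    freqs = np.fft.rfftfreq(n, 1.0 / fs)
    phi1 = np.angle(spec[np.argmin(np.abs(freqs - f0))])
    phi2 = np.angle(spec[np.argmin(np.abs(freqs - 2.0 * f0))])
    return np.angle(np.exp(1j * (phi2 - 2.0 * phi1)))  # wrapped difference

# Synthetic received waveform with f0 on an exact FFT bin
fs, n = 40.96e6, 2048
f0 = 90 * fs / n                  # 1.8 MHz, near a 4-um bubble's resonance
t = np.arange(n) / fs
delay = 0.3                       # injected second-harmonic phase lag, rad
sig = (np.cos(2 * np.pi * f0 * t)
       + 0.2 * np.cos(2 * np.pi * 2 * f0 * t + delay))
est = cumulative_phase_delay(sig, fs, f0)
```

On a real acquisition the delay accumulates along the propagation path through the bubble cloud, so the marker is mapped per depth sample rather than computed once per trace as here.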
9:00
5aBAa4. Subharmonic response of lipid-coated monodisperse microbubbles. Qian Li (Biomedical Eng., Boston Univ., 44 Cummington Mall,
Rm. B01, Boston, MA 02215, qianli@bu.edu) and Tyrone M. Porter (Mech.
Eng., Boston Univ., Boston, MA)
Subharmonic emissions from lipid-coated microbubbles, which are not radiated by tissue, can be leveraged to improve contrast-enhanced diagnostic ultrasound. In our work, we investigated subharmonic emissions from monodisperse lipid-coated microbubbles under different acoustic pressures and frequencies. First, the resonance frequency of the microbubble monodispersion was determined from the measured attenuation spectrum. Next, acoustic
emissions from the microbubbles were detected by a transducer positioned
orthogonal to the excitation transducer. Our study showed that subharmonic
emissions were maximized when bubbles were driven at nearly twice the
pressure-dependent resonance frequency rather than the linear resonance
frequency. We also observed subharmonic emission at low excitation acoustic pressure (≤30 kPa) for microbubbles coated with densely packed lipid
shells, which suggests that minimizing the initial surface tension can enable
subharmonic emissions at very low excitation pressures. Further studies
were conducted with shells composed of varying lipids to test the influence
of shell composition on the initial surface tension and subharmonic emissions. Theoretical simulations were carried out and agreed with the
experimental trends. Implications of these results on the use of monodisperse lipid-coated microbubbles for subharmonic imaging will be discussed.
9:20
5aBAa5. The dynamics of contrast agents near a wall under the excitation of an ultrasound wave. Nima Mobadersany and Kausik Sarkar (Mech. Eng.,
George Washington Univ., 800 22nd St. NW, Washington, DC 20052,
sany@gwu.edu)
In this study, the behavior of contrast agents near a wall subjected to an ultrasound wave has been studied numerically using the boundary integral method. Contrast agents are gas-filled microbubbles coated with a layer of lipid or protein that protects them against dissolution in the bloodstream. Under exposure to an ultrasound wave, the contrast agents oscillate and collapse. The oscillation or collapse of a contrast agent near a wall generates shear stress, resulting in perforation of the wall and better uptake of large molecules and drugs by the tissue. In this research, we study the dynamics of the contrast agent, the surrounding velocity and pressure fields, and the shear stress exerted on the wall due to bubble collapse, for different shell rheology parameters, standoff distances, and excitation pressures and frequencies. The study has been conducted at excitation pressures beyond the threshold of inertial cavitation, where the bubble forms a jet during the collapse phase. A strain-softening exponential elasticity model has been used for the interfacial rheology of the coating.
9:40
5aBAa6. Impact of temperature on the size distribution and shell properties of ultrasound contrast agents. Himanshu Shekhar, Nathaniel Smith
(Dept. of Internal Medicine, Univ. of Cincinnati, 3933 Cardiovascular Ctr.,
231 Albert Sabin Way, Cincinnati, OH 45267, himanshu.shekhar@uc.edu),
Jason L. Raymond (Dept. of Eng. Sci., Univ. of Oxford, Oxford, United
Kingdom), and Christy K. Holland (Dept. of Internal Medicine and Biomedical Eng. Program, Univ. of Cincinnati, Cincinnati, OH)
Physical characterization of ultrasound contrast agents (UCAs) is important for their efficacious use in theragnostic applications. The goal of this
study was to elucidate the impact of temperature on the size distribution and
shell properties of Definity®, an FDA-approved clinical UCA. A Coulter counter (Multisizer IV) was modified to enable size measurements of UCAs at elevated temperatures. The size distribution and attenuation spectrum of Definity® were measured at room temperature (25 °C) and physiological temperature (37 °C), and used to estimate the shell stiffness and viscosity of the agent at both temperatures. The attenuation coefficient of Definity® increased by as much as 5 dB at 37 °C relative to 25 °C. The highest increase in attenuation was observed at 10 MHz, the resonance frequency of Definity®. However, no significant difference was observed in the size distribution of Definity® at 25 °C and 37 °C. The estimated shell stiffness and viscosity decreased from 1.76 ± 0.18 N/m and (0.21 ± 0.07) × 10⁻⁶ kg/s at 25 °C to 1.01 ± 0.07 N/m and (0.04 ± 0.04) × 10⁻⁶ kg/s at 37 °C. These results indicate that the change in shell properties mediates the change in acoustic behavior of Definity® at physiological temperature.
10:00–10:20 Break
Invited Paper
10:20
5aBAa7. Gas vesicles: Acoustic biomolecules for ultrasound imaging. Mikhail G. Shapiro (Chemistry and Chemical Eng., California
Inst. of Technol., 1200 E. California Blvd., Mail Code 210-41, Pasadena, CA 91125, mikhail@caltech.edu)
Expanding the capabilities of ultrasound for biological and diagnostic imaging requires the development of contrast agents linked to
cellular and molecular processes in vivo. In optical imaging this is commonly accomplished using fluorescent biomolecules such as the
green fluorescent protein. Analogously, we recently introduced gas vesicles (GVs) as the first acoustic biomolecules for ultrasound. GVs
are physically stable gas-filled protein nanostructures (~250 nm) naturally expressed in aquatic photosynthetic microbes as a means to
regulate buoyancy. Purified GVs produce robust ultrasound contrast across a range of frequencies at picomolar concentrations, exhibit
nonlinear scattering to enable enhanced detection versus background in vivo, and have species-dependent thresholds for pressure-induced collapse to enable multiplexed imaging. Here, I will present our recent progress on understanding the biophysical and acoustic
properties of these biomolecular contrast agents, engineering their mechanics and targeting at the genetic level, developing ultrasound
pulse sequences to enhance their detection in vivo and expressing them heterologously as acoustic reporter genes. 1. Shapiro, M.G. et al.
Nat. Nanotechnol. 9, 311-316 (2014). 2. Cherin, M. et al. U.M.B. (In press). 3. Lakshmanan, A. et al. ACS Nano 10, 7314-7322 (2016).
4. Maresca, D. et al. In revision. 5. Bourdeau, R.W. et al. Submitted. More information at http://shapirolab.caltech.edu.
Contributed Papers
10:40
5aBAa8. In vitro acoustic characterization of echogenic polymersomes
with PLA-PEG and PLLA-PEG shells. Lang Xia, Krishna Kumar (Mech.
and Aerospace Eng., George Washington Univ., 800 22nd St. NW, SEH
3961, Washington, DC 20052, langxia.org@gmail.com), Fataneh Karandish, Sanku Mallik (Dept. of Pharmaceutical Sci., North Dakota State Univ.,
Fargo, ND), and Kausik Sarkar (Mech. and Aerospace Eng., George Washington Univ., Washington, DC)
Echogenic liposomes (ELIPs), lipid bilayer-coated vesicles, have been
widely studied as acoustically triggerable drug delivery agents and as ultrasound contrast agents. Polymersomes, amphiphilic vesicles, offer additional
stability and chemical flexibility compared to liposomes. Here, we report
the acoustic behavior of echogenic polymersomes made of the block copolymers PLA-PEG and PLLA-PEG, which are stereoisomers. Polymersomes
were excited at three different frequencies, 2.25 MHz, 5 MHz, and 10
MHz, and their scattered responses were measured. Polymersomes with both
PLA-PEG and PLLA-PEG shells produce strong acoustic responses, as high
as 50 dB in the fundamental component, demonstrating their potential
as contrast agents. Significant subharmonic as well as second-harmonic
responses were observed at excitation frequencies of 2.25 MHz and 5 MHz.
The gas dissolved in the suspension was found to be essential for the echogenicity of the polymersomes.
11:00
5aBAa9. Acoustic vaporization threshold of lipid coated perfluoropentane droplets. Mitra Aliabouzar, Krishna N. Kumar, and Kausik Sarkar
(Mech. & Aerosp. Eng., George Washington Univ., 800 22nd St. NW,
Washington, DC 20052, mitraaali@email.gwu.edu)
Phase shift droplets that can be vaporized in situ by acoustic stimulation
offer a number of advantages over microbubbles as contrast agents due to
their higher stability and smaller size distribution. The acoustic droplet vaporization (ADV)
threshold of droplets with a perfluoropentane (PFP) core has been
investigated extensively via optical and acoustical means. However, there
are noticeable discrepancies among the ADV thresholds reported by the
two methods. In this study, we thoroughly discuss the criteria and the experimental methodology for determining the ADV threshold. In addition, we
explain possible reasons for the discrepancies between the optical and
acoustical studies of droplet vaporization. The ADV threshold was
measured as a function of the excitation frequency by examining the scattered signal from PFP droplets (400-3000 nm). The threshold increases with
frequency: 2 MPa at 2.25 MHz, 2.5 MPa at 5 MHz, and 3 MPa at 10 MHz.
The scattered response from droplets was also compared with the scattered
response from a microbubble at the corresponding excitation pressure and
frequency. We found the ADV threshold to increase with frequency. The
ADV threshold values determined here are in agreement with past values
obtained using an optical technique.
11:20
5aBAa10. Study of acoustic droplet vaporization using classical nucleation theory. Krishna N. Kumar, Mitra Aliabouzar, and Kausik Sarkar
(George Washington Univ., 800 22nd St. NW, SEH 3000, Washington, DC
20052, krishnagwu@gwmail.gwu.edu)
Lipid-coated perfluorocarbon (PFC) nanodroplets can be vaporized by
an external ultrasound pulse to generate bubbles in situ for tumor imaging
and drug delivery. Here we employ classical nucleation theory (CNT) to
investigate acoustic droplet vaporization (ADV), specifically the
threshold value of the peak negative pressure required for ADV. The theoretical analysis predicts that the ADV threshold increases with increasing
surface tension of the droplet core and frequency of excitation, while it
decreases with increasing temperature and droplet size. The predictions are
in qualitative agreement with experimental observations. We also estimate
and discuss the energy required to form a critical cluster to argue that nucleation
occurs inside the droplet, as was also observed by high-speed camera.
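The CNT quantities behind these trends can be sketched as follows (a textbook homogeneous-nucleation form, not necessarily the authors' exact formulation):

```latex
% Energy barrier for a critical vapor nucleus and the resulting nucleation rate;
% \sigma: surface tension, \Delta p: vapor-liquid pressure difference at peak
% rarefaction, k_B: Boltzmann constant, T: temperature.
\Delta G^{*} = \frac{16\pi\,\sigma^{3}}{3\,(\Delta p)^{2}},
\qquad
J = J_{0}\exp\!\left(-\frac{\Delta G^{*}}{k_{B}T}\right)
```

Since the barrier grows as σ³, higher surface tension demands a larger Δp (a higher threshold); higher T shrinks the exponent (a lower threshold); and larger droplets and lower frequencies increase the volume and time over which the nucleation probability J·V·t accumulates, again lowering the threshold, in line with the predictions above.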
11:40
5aBAa11. A model for acoustic vaporization of droplets encapsulated
within a nonlinear hyperelastic shell. Thomas Lacour, Tony Valier-Brasier, and François Coulouvrat (Institut Jean Le Rond d’Alembert UMR
CNRS 7190, Université Pierre et Marie Curie, 4 Pl. Jussieu, Paris 75005,
France, thomas.lacour@upmc.fr)
Nanometric particles with a liquid core attract increasing interest for
medical applications such as contrast imaging and targeted drug delivery.
Unlike conventional ultrasound contrast agents with a gaseous core (UCAs),
their dimensions are smaller than the endothelial gaps of tumor blood vessels, allowing them to reach tumor tissues. However, because of this liquid
core, these particles are poorly compressible and therefore less efficient for
imaging than standard UCAs. Their echogenicity may be enhanced by
vaporizing the liquid core with an applied ultrasonic field. The investigated particles are made of a liquid core encapsulated within a visco-elastic
shell. One assumes the existence of a vapor nucleus at the initial time. The shell
confines the liquid core and ensures greater stability, but it usually inhibits
the spontaneous growth of the vapor core. The dynamics of the liquid-vapor
interface are described by a generalized Rayleigh-Plesset equation coupled
with the heat diffusion equation in the dense phases. The vapor growth induces a significant volume expansion (typically about 100 times); therefore, the shell is assumed to behave as a hyperelastic soft material to
account for large deformations. Complete vaporization is shown to depend on the product of the shell shear modulus and its thickness, but also
on the nonlinear elastic parameter. Numerical simulations reveal that the
bubble dynamics can be grouped into three families, enabling the
definition of a threshold and an optimum set of acoustical parameters.
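For orientation, a bare (shell-free) Rayleigh-Plesset balance for the vapor-liquid interface radius R(t) reads as below; the authors' generalized version adds a nonlinear hyperelastic shell stress and couples to the heat diffusion equation:

```latex
% \rho_\ell: liquid density, p_v: vapor pressure, p_\infty(t): driving pressure,
% \sigma: surface tension, \mu_\ell: liquid viscosity.
\rho_{\ell}\left(R\ddot{R} + \tfrac{3}{2}\dot{R}^{2}\right)
= p_{v} - p_{\infty}(t) - \frac{2\sigma}{R} - \frac{4\mu_{\ell}\dot{R}}{R}
```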
12:00
5aBAa12. Effect of diluent fluid viscosity on acoustic droplet vaporization-mediated dissolved oxygen scavenging. Karla P. Mercado (Internal
Medicine, Univ. of Cincinnati, 231 Albert Sabin Way, Cardiovascular Ctr.
3944, Cincinnati, OH 45267, karlapatricia.mercado@uc.edu), Deepak S.
Kalaikadal (Mech., Industrial, and Nuclear Eng., Univ. of Cincinnati, Cincinnati, OH), John N. Lorenz (Molecular and Cellular Physiol., Univ. of
Cincinnati, Cincinnati, OH), Raj M. Manglik (Dept. of Mech., Industrial,
and Nuclear Eng., Univ. of Cincinnati, Cincinnati, OH), Christy Holland
(Internal Medicine and Biomedical Eng. Program, Univ. of Cincinnati, Cincinnati, OH), Andrew N. Redington, and Kevin J. Haworth (Internal Medicine, Biomedical Eng. Program, and Pediatrics, Univ. of Cincinnati,
Cincinnati, OH)
Acoustic droplet vaporization (ADV) can be used to scavenge dissolved
oxygen and reduce the partial pressure of oxygen (pO2) in a fluid containing
perfluoropentane droplets. The impact of the diluent fluid's viscosity on
ADV-mediated pO2 reduction was investigated. Polyvinylpyrrolidone
(PVP) was dissolved in saline to modify the solution's viscosity. The diluent
fluid viscosity (η) and surface tension (γ) were measured. Droplets were
manufactured using amalgamation and differential centrifugation to yield
diameters between 1 and 6 μm. Droplets were diluted to 6.5 × 10⁶ droplets/mL in
saline (γ = 68 mN/m, η = 0.7 cP), 3 mg/mL PVP solution (γ = 65 mN/m,
η = 1.2 cP), or 15 mg/mL PVP solution (γ = 65 mN/m, η = 4 cP). The viscosities of the 3 mg/mL and 15 mg/mL PVP solutions mimicked those of
plasma and whole blood, respectively. Droplet solutions were exposed to
ultrasound (5 MHz, 4.25 MPa peak negative pressure in situ, 10 cycles) in a
37 °C in vitro flow system. The initial pO2 in the fluids was 113 ± 2 mmHg,
similar to human arterial pO2. After ultrasound exposure, the pO2 in saline,
3 mg/mL PVP, and 15 mg/mL PVP solutions was reduced by 39.9 ± 0.8
mmHg, 31.9 ± 0.7 mmHg, and 16.0 ± 0.4 mmHg, respectively. These studies
indicated that ADV-mediated pO2 reduction increased with decreasing
viscosity.
THURSDAY MORNING, 29 JUNE 2017
ROOM 312, 8:20 A.M. TO 12:00 NOON
Session 5aBAb
Biomedical Acoustics: Imaging III
Kevin J. Parker, Chair
Department of Electrical & Computer Engineering, University of Rochester, Hopeman Engineering Building 203,
PO Box 270126, Rochester, NY 14627-0126
Contributed Papers
8:20
5aBAb1. Characterization of modular arrays for transpinal ultrasound
application. Shan Qiao (Dept. of Eng. Sci., University of Oxford, Biomedical Ultrason., Biotherapy & Biopharmaceuticals Lab. (BUBBL), Inst. of Biomedical Eng., Rm. 205, Bldg. A, 418 Guiping Rd., Xuhui District, Shanghai
200233, China, shan.qiao03@gmail.com), Constantin Coussios, and Robin
Cleveland (Dept. of Eng. Sci., University of Oxford, Biomedical Ultrason.,
Biotherapy & Biopharmaceuticals Lab. (BUBBL), Inst. of Biomedical Eng.,
Oxford, United Kingdom)
Chronic low back pain is one of the most prevalent musculoskeletal conditions worldwide, and is normally caused by the degeneration of intervertebral discs. High-intensity focused ultrasound can be used to mechanically
fractionate degenerate disc tissue by inertial cavitation. Due to the complexity of the spine structure, delivering sufficient focused acoustic energy to the
target zone without damaging surrounding tissue is challenging, and is further
exacerbated by patient-to-patient variability. Here we designed modular
arrays, each consisting of 32 elements at 0.5 MHz, which can be configured
to optimize delivery for a specific patient by means of time reversal
using the patient geometry derived from CT scans. In this study, the performance of the modular array was measured with a hydrophone and simulated numerically. For a four-module configuration, the focus was
4 mm in diameter and 30 mm long with a focal gain of approximately 35,
and the steering range of the focus was ±30 mm in azimuth and ±5 mm in elevation, providing the required focusing flexibility and focal pressure for the
transpinal application. The numerical simulations agreed well with the
measurements, suggesting simulations can be used for treatment planning.
[Work supported by EPSRC, UK.]
8:40
5aBAb2. The H-scan analysis and results in tissues. Kevin J. Parker
(Dept. of Elec. & Comput. Eng., Univ. of Rochester, Hopeman Eng. Bldg.
203, PO Box 270126, Rochester, NY 14627-0126, kevin.parker@rochester.
edu)
The H-scan is based on a simplified framework for characterizing scattering behavior, and visualizing the results as color coding of the B-scan
image. The methodology begins with a standard convolution model of
pulse-echo formation from typical situations, and then matches those results
to the mathematics of Gaussian-weighted Hermite functions. The nth successive differentiation of the Gaussian pulse G = exp(−t²) generates the nth
order Hermite polynomial (Poularikas 2010). The function H4(t)G resembles a broadband pulse. Assuming a pulse-echo system has a round-trip
impulse response of A0H4(t)G, then we expect that a reflection from a step
function of acoustic impedance will produce a corresponding received echo
proportional to GH4(t). However, a thin layer of higher impedance, or a
small scatterer or incoherent cloud of small scatterers, would produce higher-order Hermite functions as echoes. In this framework, the identification task
is simply to classify echoes by similarity to either GH4(t), GH5(t), or
GH6(t). The resulting B-scan image is examined and echoes can be classified and colored according to their class. Results from tissue scans also demonstrate groups of echoes separated by Hermite order Hn. A theoretical
framework is introduced where reflections are characterized by their similarity to nth order Hermite polynomials.
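The Gaussian-Hermite link invoked above is the Rodrigues formula:

```latex
% nth derivative of a Gaussian yields the nth Hermite polynomial (up to sign):
\frac{d^{n}}{dt^{n}}\, e^{-t^{2}} = (-1)^{n}\, H_{n}(t)\, e^{-t^{2}}
```

so each additional differentiation of the impedance profile encountered by the pulse (a step, a thin layer, a point-like scatterer) raises the Hermite order of the echo by one, which is what the GH4(t)/GH5(t)/GH6(t) classification exploits.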
5aBAb3. Passive elastography in soft-tissues: Phase velocity measurement. Bruno Giammarinaro and Stefan Catheline (LabTau Inserm U1032,
151 cours Albert Thomas, Lyon cedex 03 69424, France, bruno.giam@hotmail.fr)
Elastography is an imaging technique used on medical ultrasound devices. It consists in measuring shear waves in soft tissues in order to provide a tomographic reconstruction of the shear elasticity. One measurement method,
usually referred to as passive elastography, is to use noise-correlation techniques on the diffuse shear wave fields present in the medium. In the human body,
these fields are naturally created by activities such as the heartbeat and
arterial pulsatility. Passive elastography allows the group velocity in the medium to be estimated locally, and this method has already been used for organs such as
the liver, the thyroid, and the brain. The present study is therefore devoted to
improving this method with the calculation of the phase velocity. This information, obtained for each frequency, could allow the attenuation to be measured
via the Kramers-Kronig relations.
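The phase-velocity idea can be sketched numerically. The function below is an illustrative sketch, not the authors' pipeline: it estimates c(f) from the cross-spectrum phase between two receiver signals a known distance apart, using a synthetic non-dispersive shear wave whose parameters are invented for the example.

```python
import numpy as np

def phase_velocity(s1, s2, d, fs):
    """Frequency-dependent phase velocity between two receivers separated
    by d (m), sampled at fs (Hz), from the cross-spectrum phase."""
    S1, S2 = np.fft.rfft(s1), np.fft.rfft(s2)
    freqs = np.fft.rfftfreq(len(s1), 1.0 / fs)
    # s2 lags s1, so angle(S1 * conj(S2)) = +omega * delay
    dphi = np.unwrap(np.angle(S1 * np.conj(S2)))
    c = np.full_like(freqs, np.nan)
    nz = freqs > 0
    c[nz] = 2 * np.pi * freqs[nz] * d / dphi[nz]
    return freqs, c

# Synthetic non-dispersive shear wave: c_true = 3 m/s, 10 mm receiver spacing
fs, c_true, d = 10_000.0, 3.0, 0.01
t = np.arange(0.0, 0.1, 1.0 / fs)
def pulse(tt):
    # Gaussian-windowed 100 Hz tone burst
    return np.exp(-((tt - 0.02) / 0.004) ** 2) * np.sin(2 * np.pi * 100.0 * tt)
s1, s2 = pulse(t), pulse(t - d / c_true)

freqs, c = phase_velocity(s1, s2, d, fs)
band = (freqs > 60) & (freqs < 160)   # evaluate where the burst has energy
print(round(float(np.median(c[band])), 2))  # close to c_true = 3.0
```

With dispersion present, c would vary across the band, which is exactly the per-frequency information the abstract proposes to exploit.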
9:20
5aBAb4. Optical imaging of propagating ultrasonic wave fronts resulting from ultrasonic pulses incident on heel bones using refracto-vibrometry. Thomas M. Huber (Phys., Gustavus Adolphus College, 800 W College
Ave., Saint Peter, MN 56082, huber@gac.edu), Matthew T. Huber, and
Brent Hoffmeister (Phys., Rhodes College, Memphis, TN)
Ultrasonic measurements of the heel bone (calcaneus) are used commonly for osteoporosis screening. Pulses emitted by an ultrasound transducer are incident on the calcaneus, and the transmitted wave fronts are
detected with a separate transducer. In the current in-vitro study, full field
videos of propagating ultrasonic wave fronts incident on a calcaneus sample, along with transmitted and backscattered waves were obtained using
refracto-vibrometry. Pulses were emitted by a 500 kHz Panametrics V303
transducer. To optically detect ultrasonic wave fronts, the measurement
beam from a Polytec PSV-400 scanning laser Doppler vibrometer laser was
directed through a water tank towards a stationary retroreflective surface.
Acoustic wave fronts (density variations) which pass through the measurement laser cause variations in the integrated optical path length between the
vibrometer and retroreflector. The time-varying signals detected by the vibrometer at numerous scan points were used to determine the time evolution
of ultrasonic wave fronts. The resulting videos enable visualization of the
propagating wave fronts incident on the calcaneus and the backscattered and
transmitted wave fronts. These videos enable direct investigation of wave
front distortion due to reflection, refraction and diffraction effects for pulses
transmitted through the calcaneus during ultrasonic heel scanning.
9:40
5aBAb5. The effect of respiratory gas composition on kidney stone
detection with the color Doppler ultrasound twinkling artifact. Julianna
C. Simon (Graduate Program in Acoust., Penn State Univ., Penn State,
201E Appl. Sci. Bldg., University Park, PA 16802, jcsimon@psu.edu), YakNam Wang, Jeffrey Thiel, Frank Starr, and Michael R. Bailey (Appl. Phys.
Lab., Ctr. for Industrial and Medical Ultrasound, Univ. of Washington, Seattle, WA)
The color Doppler ultrasound twinkling artifact, which is thought to
arise from microbubbles on and within the stone, has the potential to
improve kidney stone detection in space; however, bubbles are known to be
sensitive to the elevated levels of carbon dioxide (CO2) found on space
vehicles. Here, we investigate the effect of respiratory gas composition on
twinkling in swine implanted with kidney stones. Thirteen swine were initially exposed to either 100% oxygen (O2) or room air and then to air with
elevated CO2 at 0.8%, 0.54%, or 0.27%. Stones were imaged with a Verasonics ultrasound system and ATL P4-2 transducer. The 9 swine initially
breathing 100% O2 showed a significant reduction in twinkling when
exposed to air with elevated CO2, with the degree of decrease in twinkling
occurring in the order 0.8% > 0.54% > 0.27% CO2. An additional 4 swine
were oscillated between air with 0.04% CO2 (normal air) and 0.5% CO2. A
reduction in twinkling was observed over the course of the experiment. The
effect of respiratory gas composition should be further investigated before
using twinkling to diagnose a kidney stone in space. [Work supported by
NSBRI through NASA NCC 9-58 and NIH DK043881.]
10:00
5aBAb6. The native frequency of B-lines artifacts may provide a quantitative measure of the state of the lung. Libertario Demi (TMC Europe, Da
Vincilaan 5, Brussel 1903, Belgium, libertario.demi@tmceurope.com),
Wim van Hoeve (Tide Microfluidics, Enschede, Netherlands), Ruud J. van
Sloun (Eindhoven Univ. of Technol., Eindhoven, Netherlands), Marcello
Demi (Medical Imaging Processing, Fondazione Toscana Gabriele Monasterio, Pisa, Italy), and Gino Soldati (Emergency Medicine Unit, Valle del
Serchio General Hospital, Lucca, Italy)
B-lines are ultrasound-imaging artifacts which correlate with several
lung pathologies. However, their understanding and characterization are still
largely incomplete. To further study B-lines, ten lung phantoms were
designed. A layer of microbubbles was trapped in tissue-mimicking gel. To
simulate the alveolar size reduction typical of various pathologies, 166- and
80-micrometer bubbles were used for phantom-type 1 and phantom-type 2,
respectively. A normal alveolar diameter is around 280 micrometers. An
LA332 linear array connected to the ULA-OP platform was used for
imaging. Standard ultrasound imaging at 4.5 MHz was performed. Next, a
multi-frequency approach was used: images were sequentially generated
using orthogonal sub-bands centered at different frequencies (3, 4, 5, and
6 MHz). Results show that B-lines appear predominantly with phantom-type 2, suggesting a link between increased artifact formation and the reduction
of alveolar size. Moreover, the multi-frequency approach revealed
that the B-lines have a native frequency: B-lines appeared with significantly
stronger amplitude in one of the four images, and spectral analysis confirmed
B-lines to be centered at specific frequencies. These results could find relevant
clinical application since, if confirmed by in-vivo studies, the native frequency of B-lines could serve as a quantitative measure of the state of the
lung.
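The sub-band idea can be illustrated with a minimal sketch (not the ULA-OP processing chain; the function name, band edges, and synthetic echo are all invented for the example): split a received line into frequency bands and report the band carrying the most energy, i.e., the signal's "native frequency" band.

```python
import numpy as np

def dominant_band(signal, fs, bands):
    """Return the (lo, hi) frequency band, in Hz, with the most energy."""
    spec = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), 1.0 / fs)
    energies = [spec[(freqs >= lo) & (freqs < hi)].sum() for lo, hi in bands]
    return bands[int(np.argmax(energies))]

fs = 50e6                                  # 50 MHz sampling
t = np.arange(0.0, 20e-6, 1.0 / fs)
# synthetic narrowband echo centered at 5 MHz
line = np.exp(-((t - 10e-6) / 2e-6) ** 2) * np.sin(2 * np.pi * 5e6 * t)
bands = [(2.5e6, 3.5e6), (3.5e6, 4.5e6), (4.5e6, 5.5e6), (5.5e6, 6.5e6)]
print(dominant_band(line, fs, bands))      # the 4.5-5.5 MHz band
```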
10:20–10:40 Break
10:40
5aBAb7. Full 3D dynamic functional ultrasound imaging of neuronal
activity in mice. Claire Rabut, Mafalda Correia, Victor Finel, Thomas Deffieux, Mathieu Pernot, and Mickael Tanter (INSERM U979, Institut Langevin, CNRS UMR 7587, ESPCI Paris, PSL Res. Univ., Inserm U979, 17 rue
Moreau, PARIS 75012, France, claire.rabut@espci.fr)
Introduction: In vivo neuronal activity imaging is key to understanding the
mechanisms of complex brain behavior. 2D functional ultrasound (fUS)
imaging is a powerful tool for measuring brain activation with high spatiotemporal sampling (80 μm, 1 ms) using neurovascular coupling. Here, we
demonstrate the proof of concept of in vivo full 3-D fUS and extend the
research work toward 3D functional connectivity imaging in mice. Method:
A fully programmable 1024-channel ultrasound platform was used to drive
32 × 32 matrix phased arrays (9 MHz central frequency) at ultrafast frame
rates (500 volumes per second). Successive plane-wave emissions were
compounded to produce high-sensitivity vascular volumetric images of the
brain of anesthetized (ketamine-xylazine) and craniotomized mice.
Whiskers were alternately stimulated (6 seconds ON/OFF) during acquisition. Results: High-quality 3D images of cerebral blood volume were
obtained and showed the feasibility of task-activated 3D fUS imaging. The
activation maps depict the spatiotemporal distribution of the hemodynamic
response to whisker stimulation at high spatial (150 μm × 150 μm × 150 μm)
and temporal (400 ms) resolution. Conclusion: We demonstrated for the first
time the feasibility of full 3D dynamic functional brain imaging in mice.
This paves the way toward a full-fledged neuro-imaging modality of the
entire brain using ultrasound.
11:00
5aBAb8. In-vivo demonstration of a self-contained ultrasound-based
battery charging approach for medical implants. Inder Raj S. Makin (A.
T. Still Univ. of Health Sci., 5850 E Still Circle, Mesa, AZ 85206, imakin@
atsu.edu), Leon Radziemski, Harry Jabs (Piezo Energy Technologies, Tucson, AZ), and T. Doug Mast (Univ. of Cincinnati, Cincinnati, OH)
Ultrasound electrical recharging (USER™) has been developed to demonstrate application-specific charging of a 200 mAh Li-ion battery currently used in a clinical device for lower esophageal sphincter stimulation.
Refining earlier developments [JASA 134(5), 4121, 2013], the receiver
transducer and charging-chip circuitry were miniaturized by an order of magnitude to a volume of 1.1 cc, with the transducer attached directly to the 0.4 mm
thick titanium device casing. The transmitter was a 1 MHz, 25 mm diameter
piezo-composite disk, while the frequency-matched 1 MHz receiver was either a 15 mm diameter disk or a 15 mm square tile. During a series of acute
in vivo porcine experiments, the titanium prototype was implanted 10-15
mm deep in the subcutaneous tissue, the battery being successfully charged
at a current of up to 75 mA with a nominal transmitted RF power of
2 W. The maximum tissue temperature increase during the 4-hour charging
cycle was 2.5 °C, directly in front of the receiver face, with no histologic
thermal changes noted in the tissue post-mortem. The ultrasound approach,
with 10-15% system efficiency, is a potentially favorable option for charging
a Li-ion battery for next-generation gastric stimulation implants. [Work supported by the NIH/NIBIB R43EB019225.]
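The reported figures can be cross-checked with a two-line power budget; the 3.7 V nominal Li-ion terminal voltage below is an assumption, not a value stated in the abstract.

```python
# Sanity check of the reported charging figures.
v_batt = 3.7        # V, assumed nominal Li-ion terminal voltage (not in abstract)
i_charge = 0.075    # A, reported maximum charging current
p_tx = 2.0          # W, reported nominal transmitted RF power

p_batt = v_batt * i_charge          # electrical power delivered to the battery
efficiency = p_batt / p_tx          # end-to-end transfer efficiency
print(f"{p_batt:.2f} W into the battery, {efficiency:.0%} end-to-end")
```

The result (roughly 0.28 W, about 14%) is consistent with the 10-15% system efficiency quoted above.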
11:20
5aBAb9. Model-based ultrasound attenuation estimation. Natalia Ilyina
and Jan D’hooge (Cardiovascular Imaging and Dynam., KU Leuven, UZ
Herestraat 49 - box 7003 50, Leuven, België 3000, Belgium, natalia.ilyina@
uzleuven.be)
The ultrasound attenuation coefficient (α) has shown potential to provide
quantitative information on the pathological state of tissue. However,
the main difficulty in the estimation of α lies in the need for a diffraction
correction, which is currently done by means of a reference measurement. Previously, we proposed an alternative attenuation reconstruction technique,
wherein the attenuation coefficient was estimated by iteratively solving the
forward wave propagation problem and matching the simulated signals to
measured ones. The simulation procedure involved modeling of the
diffraction effects and avoided several assumptions made by conventional methods. The proposed method showed promising results when
applied to data recorded using a single-element transducer. In the present
study, this methodology was extended to the case of a clinically used phased-array transducer. The proposed approach was validated on data simulated in
Field II and on data recorded in a tissue-mimicking phantom with varying
focal position. For the simulated data, exact α estimates were obtained
regardless of the focal position, while the relative error of α for the phantom
data remained below 10%. Currently, the performance of the proposed
method is being validated on phantom and in-vivo liver data.
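A toy version of the match-the-forward-model idea can be sketched as follows, with a trivial exponential amplitude model standing in for the authors' Field II simulation; all names and numbers are illustrative, and diffraction is deliberately ignored here.

```python
import numpy as np

def forward(alpha_db_cm_mhz, depths_cm, f_mhz):
    """Toy forward model: depth-dependent echo amplitude under attenuation
    alpha (dB/cm/MHz); 8.686 = 20/ln(10) converts dB to the exponent."""
    return np.exp(-alpha_db_cm_mhz * f_mhz * depths_cm / 8.686)

depths = np.linspace(1.0, 6.0, 50)       # cm
f = 5.0                                  # MHz
measured = forward(0.5, depths, f)       # synthetic "data", alpha = 0.5

# Iterative matching reduced to a brute-force scan over candidate alphas:
candidates = np.linspace(0.1, 1.0, 91)
errors = [np.sum((forward(a, depths, f) - measured) ** 2) for a in candidates]
alpha_hat = candidates[int(np.argmin(errors))]
print(round(float(alpha_hat), 2))        # recovers 0.5
```

The actual method replaces this scan with an iterative solver and a full diffraction-aware wave simulation, which is what removes the need for a reference measurement.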
11:40
5aBAb10. Transcranial ultrasound detection of intracranial hemorrhages: time-frequency analysis with empirical and variational mode
decomposition. Michael P. Jordan, Amber B. Bennoui, Anjelica A. Molnar-Fenton (Chemistry and Phys., Simmons College, 300 Fenway, Boston, MA
02115, michael.jordan@simmons.edu), and Phillip J. White (Radiology,
Brigham and Women's Hospital, Harvard Med. School, Boston, MA)
Ultrasound imaging has become a standard method for evaluating
patients who have suffered blunt trauma. Specifically, focused assessment
with sonography in trauma, commonly known as the “FAST exam,” is regularly employed to detect free intraperitoneal, intrathoracic, and pericardial
fluid in the setting of trauma. Currently, evaluation of the intracranial space
remains a significant missing component of the FAST exam because the
skull bone acts as a physical barrier to ultrasound transmission for existing
approaches in pulse-echo ultrasound imaging. To address this shortcoming,
we have explored a grounds-up re-evaluation of the way in which ultrasound
can be used in neurosonography based on the hypothesis that backscattered
signal from the intracranial space can be analyzed to identify the unique signature of pooling and coagulated blood. Preliminary experiments with tissue
phantoms, ex vivo animal blood, and ex vivo human calvaria have established the parameters that optimize the SNR of pulse-echo transcranial ultrasound. The need to improve the sensitivity in isolating the desired signals
have led to the exploration of several time-frequency signal analysis techniques, including the recently introduced empirical and variational mode
decomposition methods. Our results have established the basis of a muchneeded novel intracranial assessment technique to incorporate into the
FAST exam.
THURSDAY MORNING, 29 JUNE 2017
ROOM 205, 8:00 A.M. TO 12:20 P.M.
Session 5aEA
Engineering Acoustics: Engineering Acoustics Topics III
Jordan Cheer, Cochair
Institute of Sound and Vibration Research, University of Southampton, University Road, Highfield, Southampton SO17 2LG,
United Kingdom
Andrew W. Avent, Cochair
Mechanical Engineering, University of Bath, University of Bath, Claverton Down, Bath BA2 7AY, United Kingdom
Contributed Papers
8:00
8:40
5aEA1. Feedback control of an active acoustic metamaterial. Jordan
Cheer and Stephen Daley (Inst. of Sound and Vib. Res., Univ. of Southampton, University Rd., Highfield, Southampton, Hampshire SO17 2LG, United
Kingdom, j.cheer@soton.ac.uk)
5aEA3. Model based prediction of sound pressure at the ear drum for
an open earpiece equipped with two receivers and three microphones.
Steffen Vogl, Matthias Blau (Hearing Technol. and Audiol., Jade Univ.,
Ofenerstr. 16/19, Oldenburg 26121, Germany, steffen.vogl@jade-hs.de),
and Tobias Sankowsky-Rothe (Hearing Technol. and Audiol., Jade Univ.,
Oldenburg, Niedersachsen, Germany)
8:20
5aEA2. A thermoacoustic test rig for low-onset temperature gradients
and the validation of numerical models. Andrew W. Avent and Christopher R. Bowen (Mech. Eng., Univ. of Bath, Claverton Down, Bath, Sommerset BA2 7AY, United Kingdom, a.avent@bath.ac.uk)
The design and implementation of a low-onset temperature gradient
standing wave thermoacoustic test rig is presented together with results
which validate previous numerical models. A summary of on-going development of innovative fabrication methods and the novel use of composite
materials for thermoacoustic stacks and regenerators is provided. An evaluation of the performance of these components is presented and further
research and development work proposed. A novel design for selective laser
sintered (SLS) aluminium thermoacoustic heat exchangers to deliver heat to
from the thermoacoustic core is also presented together with numerical
models and experimental validation from a purpose-built test rig.
3957
J. Acoust. Soc. Am., Vol. 141, No. 5, Pt. 2, May 2017
In future hearing systems, one or more microphones and one or more
receivers located within the earmold or ear canal are feasible. In order to
predict the sound pressure at the ear drum in such a scenario, a one-dimensional electro-acoustic model of a prototype open earpiece with two integrated receivers and three integrated microphones was developed. The
transducers were experimentally characterized by their (frequency-dependent) sensitivity (microphones) and Norton equivalents (receivers). The
remaining acoustical system was modeled by 12 frequency-independent parameters which were fitted using a training set-up with well-defined loads at
both sides of the ear piece. Put on an individual subject, the model could
then be used to determine the acoustic impedance at the medial end of the
earpiece, based on measured transfer functions between the integrated components. Subsequently, a model of the ear canal and its termination was estimated from the measured ear canal impedance, which could eventually be
used to predict the drum pressure in the individual subject. Comparison to
probe tube measurements of the drum pressure in 12 human subjects showed
good agreement (less than 63 dB up to 3 kHz, less than 65 dB up to 6…8
kHz).
9:00
Acoustic metamaterials have recently attracted significant interest due to their potential to exhibit behavior not found in naturally occurring materials. This extends to the realization of acoustic cloaks, but of perhaps greater industrial impact is their ability to achieve high levels of noise control performance. In particular, previous research has demonstrated the high levels of transmission loss that can be achieved by an array of locally resonant elements. However, these passive metamaterials are inherently limited in performance by both losses and their static nature. There has therefore been increasing interest in active acoustic metamaterials, which allow both increased performance and adaptability. Recent work has investigated the integration of active elements into a passive resonator-based metamaterial, and it has been demonstrated using a feedforward control architecture that significant increases in the level and bandwidth of transmission loss are achievable. However, in many practical noise control applications, it is not possible to obtain the time-advanced reference signal required for a feedforward implementation. This paper therefore explores the design of a feedback control architecture applicable to the active resonator-based acoustic metamaterial and demonstrates the potential performance of such a system.

5aEA4. Development of a novel sound pressure level requirement for characterizing noise disturbances from theater and opera stages. Anton Melnikov (SBS Bühnentechnik GmbH, Bosewitzerstr. 20, Dresden 01259, Germany, anton.melnikov@sbs-dresden.de), Marcus Guettler, Monika Gatt (Tech. Univ. of Munich, Munich, Germany), Michael Scheffler (Univ. of Appl. Sci., Zwickau, Germany), and Steffen Marburg (Tech. Univ. of Munich, Munich, Germany)

In theaters and opera houses, the stage elevator is one of the most important machinery elements. It is used to hold decorations in place, to move them between scenes, and to provide effects by elevating large decorations or choirs directly during a play. This scenic movement is a critical situation from a machinery acoustics point of view. To localize and understand the sources of sound from these elevators, suitable experiments can be performed, e.g., sound pressure level (SPL) measurements and experimental and operational modal analysis. Previous measurements identified the drive train as an important sound source. To derive suitable quality requirements, SPLs were measured in various theaters throughout Germany. The key new idea was to take the measurements while a drama or opera was being played and to evaluate the calm breaks during the performance. This procedure, in which the masking effect of noise produced by the audience attending a play is taken into account, leads to a new SPL requirement for the noise radiation of stage machinery in theaters and opera houses. Last but not least, the results of these measurements in different German theaters are compared and discussed.
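Evaluating calm breaks, as described in 5aEA4, amounts to computing statistical levels from a measured SPL time history. A minimal sketch using standard exceedance-level statistics; the function name and all numerical values are illustrative assumptions, not the authors' procedure:

```python
import numpy as np

def percentile_levels(spl_db, percents=(5, 50, 95)):
    """Statistical levels from an SPL time history in dB.
    L_N is the level exceeded N% of the time, i.e., the (100-N)th percentile."""
    spl_db = np.asarray(spl_db, dtype=float)
    return {f"L{p}": float(np.percentile(spl_db, 100 - p)) for p in percents}

# Hypothetical 1-s SPL samples recorded during a calm break in a performance:
rng = np.random.default_rng(0)
calm_break = 32.0 + 3.0 * rng.standard_normal(120)
levels = percentile_levels(calm_break)
# A machinery-noise requirement could then be referenced to, e.g., L95,
# the near-steady background level that masks the stage machinery.
```

Referencing the requirement to a high-exceedance level such as L95 captures the quasi-steady audience background rather than transient coughs or rustling.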
9:20
5aEA5. Analysis of the feasibility of an array of MEMS microphones for machinery condition monitoring and fault diagnosis. Lara del-Val (Industrial Eng. School, Mech. Eng. Dept., Univ. of Valladolid, Paseo del Cauce, Valladolid, Spain, lvalpue@eii.uva.es), Alberto Izquierdo, Juan J. Villacorta (Telecommun. Eng. School, Signal Theory Dept., Univ. of Valladolid, Valladolid, Spain), Luis Suarez (Superior Tech. College, Civil Eng. Dept., Univ. of Burgos, Burgos, Spain), and Marta Herráez (Industrial Eng. School, Mech. Eng. Dept., Univ. of Valladolid, Valladolid, Spain)

During the last decades, vibration analysis has been used for condition monitoring and fault diagnosis of complex mechanical systems. The drawback of these analysis methods is that the sensors must be in contact with the vibrating surfaces. To avoid this problem, the current trend is to analyze the noise, or acoustic signals, which are directly related to the vibrations, for condition monitoring and/or fault diagnosis of mechanical systems. Both the acoustic and the vibration signals obtained from a system can reveal information about its operating conditions. Arrays of digital MEMS microphones, together with FPGA-based acquisition/processing systems, allow building systems with a high number of sensors at reduced cost. This work studies the feasibility of using acoustic images, obtained by an 8x8 array of 64 MEMS microphones in a hemi-anechoic chamber, to detect, characterize, and, eventually, identify failure conditions in machinery. The spatial resolution with which the origin of a problem can be located in the machine under test is also evaluated. The acoustic images are processed to extract feature patterns that identify and classify machinery failures.
9:40
5aEA6. Numerical investigation of normal mode radiation properties of ducts with low Mach number incoming flow. José P. de Santana Neto, Danilo Braga (Dept. of Mech. Eng., Federal Univ. of Santa Catarina, Rua Monsenhor Topp, 173, Florianópolis, Santa Catarina 88020-500, Brazil), Julio A. Cordioli, and Andrey R. da Silva (Dept. of Mech. Eng., Federal Univ. of Santa Catarina, Florianópolis, Santa Catarina, Brazil, andrey.rs@ufsc.br)

Normal mode radiation properties of ducts issuing a subsonic mean flow have been thoroughly investigated in the past. The behavior of the radiation characteristics, such as the magnitude of the reflection coefficient and the length correction, is well understood. Nevertheless, the behavior of the same features in the presence of an incoming low Mach number flow has not been investigated in detail, particularly in the case of the length correction. This work presents a numerical study of the reflection coefficient and length correction of unflanged pipes and pipes terminated by circular horns of different radii in the presence of an incoming flow. The investigations are conducted with a three-dimensional lattice Boltzmann scheme. The results suggest that the detachment and reattachment of the incoming flow inside the duct play a significant role in the transfer of kinetic energy from the flow to the internal acoustic field. Moreover, the results show that the mechanism of flow detachment and reattachment is highly sensitive to the geometrical characteristics at the open end.
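The length correction studied in 5aEA6 can be extracted from a simulated or measured complex reflection coefficient. A sketch under one common sign convention, R = -|R|·exp(-2jkδ), chosen so that an ideal open end (R = -1) gives δ = 0; the convention and the numerical values are assumptions, not taken from the paper:

```python
import numpy as np

def length_correction(R, k):
    """Length correction delta from the complex pressure reflection
    coefficient R at wavenumber k, assuming R = -|R| * exp(-2j*k*delta)."""
    # angle(-R) = -2*k*delta under this convention
    return -np.angle(-np.asarray(R)) / (2.0 * k)

# Synthetic check: an end with |R| = 0.95 and delta = 0.02 m at k = 10 1/m
R = -0.95 * np.exp(-2j * 10.0 * 0.02)
delta = length_correction(R, 10.0)   # recovers 0.02
```

In a lattice Boltzmann study, R itself would typically be estimated from the simulated in-duct pressure field by a two-microphone or wave-decomposition method before applying a relation like this.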
10:00
5aEA7. Effect of taper angle on the performance of jet pumps for a
loop-structured thermoacoustic engine. Ye Feng, Ke Tang, and Tao Jin
(Inst. of Refrigeration and Cryogenics, Zhejiang Univ., Rd. 38 West Lake
District, Yuquan Campus, Hangzhou, Zhejiang 310027, China,
972390816@zju.edu.cn)
Gedeon streaming, which circulates around the loop configuration in a time-averaged manner in the oscillating flow, can considerably deteriorate the efficiency of a traveling-wave thermoacoustic engine. A jet pump is characterized by a tapered channel with different opening areas, which can produce a time-averaged pressure drop to suppress Gedeon streaming. Three jet pumps with different taper angles, i.e., 5°, 9°, and 15°, are studied. Following Iguchi's hypothesis, the turbulent oscillating flow can be treated as a steady flow. The flow structures through the jet pumps are numerically simulated. Meanwhile, an experimental apparatus has been built to measure the performance of the jet pumps operating in turbulent oscillating flow. The results show that the simulation is in good agreement with the experimental data for the jet pumps with small taper angles. For the jet pump with a 15° taper angle, the simulation results match the experimental data well when the velocity at the small opening of the jet pump is higher than 50 m/s. At lower velocities, however, the simulation results deviate considerably from the experimental data, which can be attributed to flow separation in the diverging direction behaving differently in steady flow and in oscillating flow.
10:20–10:40 Break
10:40
5aEA8. Coherence resonance and stochastic bifurcation in standing-wave thermoacoustic systems. Xinyan Li and Dan Zhao (Aerosp. Eng. Div., Nanyang Technolog. Univ., 50 Nanyang Ave., Singapore 639798, Singapore, zhaodan@ntu.edu.sg)

In this work, we develop a noisy nonlinear model and conduct experimental tests to study stochastic bifurcation and coherence resonance in standing-wave thermoacoustic systems. When white Gaussian noise is added and its intensity is varied, the stochastic behavior of the system is found to be better described by a stochastic P bifurcation. When the noise intensity is chosen as the bifurcation parameter, the bimodal region is shown to shift and the bimodal area to shrink. In addition, noise-induced coherent motions are examined and confirmed. Resonance-like behavior of the signal-to-noise ratio (SNR) is clearly observed. As the system approaches the critical bifurcation point, the SNR becomes larger and the optimal noise intensity decreases to a smaller value. This property can be used as a precursor of the Hopf bifurcation in standing-wave thermoacoustic systems. Experiments are then conducted on a Rijke-type thermoacoustic system with three loudspeakers implemented. The transition to instability is found to be subcritical. Comparison is then made between the present theoretical and experimental results. Good qualitative agreement is obtained in terms of (1) the SNR, (2) the peak height of the power spectrum, and (3) the width of the frequency peak.
11:00
5aEA9. Analysis and design of Fresnel zone plates with multiple foci. Pilar Candelas (Centro de Tecnologías Físicas, Universitat Politècnica de València, Camino de Vera s/n, Valencia 46022, Spain, pcandelas@fis.upv.es), José Miguel Fuster (Departamento de Comunicaciones, Universitat Politècnica de València, Valencia, Spain), Constanza Rubio (Centro de Tecnologías Físicas, Universitat Politècnica de València, Valencia, Spain), Sergio Castiñeira-Ibáñez (Departamento de Ingeniería Electrónica, Universitat de València, Valencia, Spain), and Daniel Tarrazó-Serrano (Centro de Tecnologías Físicas, Universitat Politècnica de València, Valencia, Spain)

Fresnel zone plates (FZPs) are an interesting alternative to traditional lenses when planar fabrication is advantageous, and they are used in a wide range of physical disciplines such as optics, microwave propagation, and ultrasound. Conventional FZPs produce a single focus, which is optimal in most applications. However, certain medical applications, such as MRI (magnetic resonance imaging)-guided ultrasound surgery, require multiple-foci ultrasound exposures. In this work, new multi-focus Fresnel lenses (MFFLs) based on conventional FZPs are presented. The advantages and drawbacks of these new MFFL structures are thoroughly analyzed. There is a tradeoff between the number of foci achieved in a single MFFL and its focusing efficiency; therefore, the most efficient MFFL is the one with two foci. A procedure for designing two-foci MFFLs, in which the focal length of each focus may be selected independently, is established. For each two-foci MFFL, several physical implementations exist. The focusing properties of all implementations are compared, and from this comparison the most efficient MFFL is selected and fully characterized.
11:20
5aEA10. Actively passive control of thermoacoustic instability. Dan Zhao, Ashique Akram Tarique (Aerosp. Eng. Div., Nanyang Technolog. Univ., 50 Nanyang Ave., Singapore 639798, Singapore, zhaodan@ntu.edu.sg), and Shen Li (School of Energy and Power Eng., Jiangsu Univ. of Sci. and Technol., Zhenjiang City, China)

In this work, experimental studies of the actively passive control of a perforated liner with a bias flow for mitigating thermoacoustic instability are performed. For this, a Rijke-type thermoacoustic combustor with a perforated liner is designed. A premixed propane-fueled flame is confined in the bottom half. A mean cooling flow (known as the bias flow), generated by a centrifugal pump, is forced through the lined section. To maximize the damping capacity of the perforated liner, the bias flow rate is optimized by a real-time tuning algorithm, which determines the optimum actuation signal to drive the centrifugal pump. With the tuning algorithm implemented, the unstable thermoacoustic combustor is successfully stabilized, with the sound pressure level reduced by over 64 dB. To evaluate the off-design performance of the developed control approach, an extension tube is added to and removed from the Rijke-type combustor, changing the dominant unstable mode frequency by approximately 17%. The actively passive control approach is found to mitigate the new limit cycle as well, with the sound pressure level reduced by about 41 dB. This confirms that the developed actively passive control scheme is sufficiently robust for use in real combustion systems.
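The paper does not specify its real-time tuning algorithm; one generic way to realize such bias-flow optimization is finite-difference extremum seeking, sketched below. The SPL map, step sizes, and optimum location are all hypothetical stand-ins, not the authors' system:

```python
import numpy as np

def tune_bias_flow(measure_spl, u0, step=5e-4, du=0.02, iters=60):
    """Extremum-seeking sketch: perturb the pump actuation u, estimate
    d(SPL)/du by central differences, and descend toward minimum SPL.
    `measure_spl` stands in for the combustor's microphone reading."""
    u = u0
    for _ in range(iters):
        grad = (measure_spl(u + du) - measure_spl(u - du)) / (2.0 * du)
        u -= step * grad
    return u

# Toy SPL map with a damping optimum at u = 0.6 (hypothetical units):
spl = lambda u: 120.0 - 64.0 * np.exp(-((u - 0.6) / 0.2) ** 2)
u_opt = tune_bias_flow(spl, u0=0.3)   # converges near u = 0.6
```

In practice the "measurement" would be a time-averaged SPL estimate, and the perturbation and step sizes would be chosen against the combustor's thermal and acoustic time scales.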
11:40
5aEA11. Frequency dependence of the Fresnel zone plate focus. José Miguel Fuster (Departamento de Comunicaciones, Universitat Politècnica de València, Camino de Vera s/n, Valencia 46022, Spain, jfuster@dcom.upv.es), Pilar Candelas, Constanza Rubio (Centro de Tecnologías Físicas, Universitat Politècnica de València, Valencia, Spain), Sergio Castiñeira-Ibáñez (Departamento de Ingeniería Electrónica, Universitat de València, Valencia, Spain), and Daniel Tarrazó-Serrano (Centro de Tecnologías Físicas, Universitat Politècnica de València, Valencia, Spain)
Fresnel zone plates (FZPs) focus waves through constructive interference of diffracted fields. They are used in many fields, such as optics, microwave propagation, and acoustics, where refractive focusing by conventional lenses is difficult to achieve. FZPs are designed to work and focus at a design frequency; at this frequency, the behavior of the FZP is optimal and focusing at a certain focal length is achieved. In most medical applications using lenses, fine and dynamic control of the lens focal length is critical. In this work, the variation of the FZP focus parameters when operating at frequencies different from the design frequency is analyzed, and a focal length control mechanism is proposed. It is shown that the FZP focal length shifts linearly with the operating frequency, making it a dynamic control parameter that can be useful in many different applications. However, other focusing parameters, such as focal depth and distortion, are also affected by the operating frequency. These parameters set a limit on the span over which the operating frequency can be shifted, and therefore restrict the range of focal lengths available with a single FZP.

12:00

5aEA12. Experimental study of the aeroacoustic damping performance of in-duct perforated orifices with different geometric shapes. Dan Zhao, C. Z. Ji, Nuomin Han, and Xinyan Li (Aerosp. Eng. Div., Nanyang Technolog. Univ., 50 Nanyang Ave., Singapore 639798, Singapore, zhaodan@ntu.edu.sg)

In this work, 11 in-duct perforated plates are experimentally tested in a cold-flow pipe. The plates have the same porosity but different numbers and geometric shapes of orifices: (1) circle, (2) triangle, (3) square, (4) pentagon, (5) hexagon, and (6) star. The damping effect of these orifices is characterized by the power absorption and reflection coefficients over the measured frequency range. It is found that the orifice shape has little influence on either coefficient at lower frequencies. However, as the frequency is increased, the star-shaped orifice is shown to have a much lower power absorption coefficient than orifices of the other shapes. For perforated plates with the same orifice shape, increasing the number of orifices does not increase the power absorption at lower frequencies; however, an orifice configuration with the same shape and porosity but a larger number of orifices is found to be associated with more power absorption at higher frequencies. The frequency and magnitude of maximum power absorption, and the optimum configuration, depend on the orifice shape. The present parametric measurements shed light on the roles of the number and geometric shape of the orifices and of the flow parameters in noise damping performance.
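The linear focal-length/frequency relation reported for FZPs in 5aEA11 follows directly from the zone-radius construction. A sketch with hypothetical water-borne ultrasound parameters (design focal length, frequency, and zone count are illustrative only):

```python
import numpy as np

def fzp_radii(focal_m, freq_hz, n_zones, c=343.0):
    """Fresnel zone radii r_n = sqrt(n*lam*F + (n*lam/2)**2) for a plate
    designed to focus at distance F at the design frequency."""
    lam = c / freq_hz
    n = np.arange(1, n_zones + 1)
    return np.sqrt(n * lam * focal_m + (n * lam / 2.0) ** 2)

def paraxial_focus(r1, freq_hz, c=343.0):
    """Paraxial focal length F ~ r1**2 * f / c for a fixed plate with
    first-zone radius r1 operated at frequency f: F scales linearly with f."""
    return r1 ** 2 * freq_hz / c

# Plate designed for F = 10 cm at 250 kHz in water (hypothetical values):
r = fzp_radii(focal_m=0.10, freq_hz=250e3, n_zones=8, c=1480.0)
# Operating the same plate at twice the frequency doubles the focal length:
F2 = paraxial_focus(r[0], 500e3, c=1480.0)
```

Because the plate geometry is fixed, only the operating frequency enters the paraxial focus, which is the linear control knob the abstract describes; focal depth and sidelobe distortion, however, do not scale as simply.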
THURSDAY MORNING, 29 JUNE 2017
ROOM 200, 8:00 A.M. TO 12:00 P.M.
Session 5aMU
Musical Acoustics: General Topics in Musical Acoustics II
Tim Ziemer, Cochair
Institute of Systematic Musicology, University of Hamburg, Neue Rabenstr. 13, Hamburg 20354, Germany
James P. Cottingham, Cochair
Physics, Coe College, 1220 First Avenue NE, Cedar Rapids, IA 52402
Contributed Papers
8:00

5aMU1. Influence of the musician's position on the radiation impedance for transverse and notch flutes. Augustin Ernoult (LAM, Institut Jean le Rond d'Alembert, Université Pierre et Marie Curie, boîte 162, 4, Pl. Jussieu, Paris 75252, France, ernoult@lam.jussieu.fr), Patricio de la Cuadra (Chair thématique Sorbonne universités, Pontificia Universidad Católica, Santiago, Chile), Cassandre Balosso-Bardin (Chair thématique Sorbonne universités, Univ. of Lincoln, Lincoln, United Kingdom), and Benoit Fabre (LAM, Institut Jean le Rond d'Alembert, Université Pierre et Marie Curie, Paris, France)

To play a transverse or notch flute, musicians place their mouth near an open end of the instrument, where a sharp edge or labium is the target of the air jet blown by the musician. The jet/labium interaction is responsible for generating the sound. Musicians can control the geometry of the air jet during performance; they can, for example, decrease the distance from their lips to the labium when increasing the pitch. The presence of the musician, and the variation of his position while playing, modifies the boundary conditions at the opening, which in turn affects the passive resonances of the instrument/player system. Flute making has adapted empirically to this effect, but until now it has never been systematically quantified. The main goal of this study is to quantify and model the influence of the musician's presence and movements as a modification of the radiation impedance of the instrument. Finite element simulations and experimental mockups are implemented and described in order to fit experimental models. Such models are useful as a complement to physical models of flutes, as well as for understanding the choices made in flute manufacture.

8:20

5aMU2. Reed chamber resonances in free reed instruments. James P. Cottingham (Phys., Coe College, 1220 First Ave. NE, Cedar Rapids, IA 52402, jcotting@coe.edu)

Western free reed instruments such as the accordion, harmonica, and harmonium do not normally employ pipe resonators to determine the pitch, but all feature some sort of reed chamber or cavity in which the reed is mounted. The reed chamber necessarily has resonances that can affect the tone quality and the pitch, but since the cavity volumes are small and the resonances have high frequencies, the effects on the reed vibration generally tend to be small. In some cases, however, a resonance of the reed chamber can be close to the vibration frequency of the reed tongue. In this case, the cavity air vibration can become large enough to influence the self-excitation mechanism, possibly interfering with tongue vibration and the resulting musical tone, and in some cases preventing the sounding of the reed altogether. For various configurations of the reed chamber, reed motion during the initial transient stage of free reed vibration has been analyzed, exploring the effects of reed chamber configuration on the rise time and final amplitude of vibration. [Work partially supported by United States National Science Foundation Grant PHY-1004860.]

8:40

5aMU3. Numerical simulations of the turbulent flow and the sound field of the Turkish ney end-blown flute. Jost L. Fischer and Rolf Bader (Inst. of Systematic Musicology, Univ. Hamburg, Neue Rabenstr. 13, Hamburg D-20354, Germany, jost.leonhardt.fischer@uni-hamburg.de)

The Turkish ney is an end-blown flute in which sound is generated through an interplay between the air jet produced by the player and the sound pressure inside the flute tube. Sound pressure measurements inside the instrument show considerable nonlinear behavior, which is crucial for the operation of the instrument and its sound character. Numerical simulations of the dynamics of both the turbulent flow field and the sound field, solving the compressible Navier-Stokes equations, are performed. Both the transient process and the quasi-steady-state operating mode of the instrument are simulated. Varying the initial conditions of the blowing velocity, as well as its attack time and shape, results in the normal and overblown tones characteristic of the instrument. Active elements, such as the turbulent air jet and the rotating vortices around the labium, and passive elements, such as the roles of the mouthpiece and the resonator, are discussed. Close agreement between the numerical simulations and measurements is achieved.
9:00
5aMU4. Modulation and instability in the sound of plastic soprano recorders. Péter Rucz (Dept. of Networked Systems and Services, Budapest Univ. of Technol. and Economics, Magyar Tudósok körútja 2, Budapest H-1117, Hungary, rucz@hit.bme.hu), Judit Angster (Dept. of Acoust., Fraunhofer Inst. for Bldg. Phys., Stuttgart, Baden-Württemberg, Germany), and András Miklós (Steinbeis Transfer Ctr. of Appl. Acoust., Stuttgart, Baden-Württemberg, Germany)

Soprano recorders are among the most popular wind instruments in music education for children. As these instruments are played by amateurs, they should be easy to play even for beginners. The investigations reported in this paper were initiated by an instrument-making company, which found that its plastic soprano recorders often had unsatisfactory sound quality, were hard to play, and produced strange, unstable steady-state sounds, especially in the low register. The experiments reported here were performed to examine the characteristic properties of these unusual sounds and to identify the causes of the observed phenomena. In this contribution, the results of various measurements carried out on different plastic soprano recorders are presented. Recorder sounds and edge tones are analyzed in the steady and attack-transient states, and their key properties are compared. The observed instabilities, strong amplitude modulations, and the appearance of subharmonic components in the steady-state sound are discussed. Finally, possible physical explanations of the experimental results are examined.
9:20

5aMU5. Analysis of the tonehole lattice of the northern xiao flute. Michael Prairie and Da Lei (Elec. and Comput. Eng., Norwich Univ., 158 Harmon Dr., Northfield, VT 05663, mprairie@norwich.edu)

The northern Chinese xiao is an end-blown flute characterized by an array of two to three pairs of holes that separate the main bore from an extended foot. The top pair are tuning holes and the lower ones are described as vent holes, but the design of the latter has been shown to influence the attainability of the third octave as well as the timbre of the notes (G. Ellis, personal communication, 3 March 2014). We analyze these holes in the context of an infinite lattice with a cutoff frequency fc determined by the geometry and spacing of the top holes, and compare the results to the pole frequency of the calculated impedance of the actual, irregular lattice below the tuning holes. The input impedance of a cylindrical pipe with a tonehole lattice is calculated, and pressure standing waves are predicted from the peaks of the resulting admittance spectra. The standing waves are compared to experimental results to confirm the high-pass filter properties of the lattice above fc. The effects of varying lattice dimensions on fc and on the alignment of upper harmonics with peaks in the spectra will be presented.

9:40

5aMU6. Non-planar vibrations of an ideal string against a smooth unilateral obstacle. Dmitri Kartofelev (Dept. of Cybernetics, Tallinn Univ. of Technol., Ehitajate Rd. 5, Akadeemia Rd. 21, Tallinn, Harju 19086, Estonia, dima@ioc.ee)

In this paper, we study the possible motions of a string free to vibrate in two mutually perpendicular planes in the presence of a finite unilateral curved obstacle. We consider an obstacle that is curved only along the direction of the string at rest and that is located at one of the ends of the string. The nonlinear problem of non-planar string vibration against an obstacle is investigated using a kinematic numerical model under a number of simplifying assumptions. The complex interaction of the string with the rigid obstacle is studied without the interfering effects of wave dissipation and dispersion. It is also assumed that no energy is lost to friction or to the collision of the string with the obstacle. We are especially interested in strings that are excited primarily parallel to the surface of the obstacle. The modeling results show that the presence of the obstacle qualitatively changes the dynamics of the string motion. The conclusions of this idealized scenario are relevant to string vibrations in Indian stringed musical instruments such as the sitar and in the Japanese shamisen. These lutes are equipped with finite curved bridges, and their strings are primarily excited parallel to those bridges.

10:00–10:20 Break

10:20

5aMU7. Acoustic characterization of the chinchín bass drum. Sergio E. Floody and Luis E. Núñez (Universidad de Chile, Compañia 1264, Santiago, Metropolitana, Chile, eddiefloody@u.uchile.cl)

The objective of this work is to present an acoustic characterization of the chinchín, a bass-drum-type instrument native to Chile. The instrument is played by the chinchinero, an urban street performer who dances acrobatically while playing. Carried like a backpack, the instrument is played with long drumsticks, and a rope tied around the performer's foot drives the hi-hat cymbals. The string that drives the hi-hat cymbals passes through two holes in the cylindrical drum shell, modifying the expected sound of this type of instrument. We present a detailed vibro-acoustic characterization in the frequency and spatial domains using the finite element method (FEM). Natural frequencies are compared with those of a conventional bass-drum layout of similar dimensions as a baseline. Our preliminary results show that the fundamental frequency and the overtones of the chinchín differ from those of the conventional drum.

10:40

5aMU8. Physical model of a drum using an edge-diffraction approach. Sara R. Martín Román, U. Peter Svensson (Acoust. Res. Ctr., Dept. of Electronics and Telecommun., Norwegian Univ. of Sci. and Technol., O.S. Bragstads plass 2a, Trondheim 7034, Norway, sara.martin@ntnu.no), Mark Rau, and Julius O. Smith (Music Dept., Ctr. for Comput. Res. in Music and Acoust. (CCRMA), Stanford Univ., Stanford, CA)

Physical modeling of musical instruments is of great interest, both for the interpretation of the parameters that make the model as close to reality as possible and for faithful sound synthesis. In this paper we focus on the modeling of percussion instruments, more specifically drums consisting of a vibrating membrane and a rigid body. One recently published physical model [S. Bilbao and C. J. Webb, J. Audio Eng. Soc. 61 (2013) 737-748] uses finite difference time domain schemes to solve both the system of equations describing the dynamics of the membrane and the influence of the surrounding air. The main drawback of that approach is the growth of the computational cost with the scale of the problem. A new approach is suggested here in which an edge-diffraction-based method [A. Asheim, U. P. Svensson, J. Acoust. Soc. Am. 133 (2013) 3681-3691] is used to compute both the air loading on the surface and the sound radiation. The finite difference time domain method is still used, but only to solve the dynamics of the membrane. The computational complexity is evaluated and results are compared to reference solutions.

11:00

5aMU9. Viscoelastic internal damping finite-difference model for musical instrument physical-model sound production. Rolf Bader (Inst. of Musicology, Univ. of Hamburg, Neue Rabenstr. 13, Hamburg 20354, Germany, R_Bader@t-online.de)

A viscoelastic model for the internal damping of musical instrument parts, such as membranes or plates, is implemented within a finite-difference time domain (FDTD) method. The internal damping of wood, leather, nylon, mylar, glue, or varnish strongly changes the timbre of musical instruments, and the precise spectrum of this damping contributes strongly to the individual instrument character. The model assumes a complex, frequency-dependent, linear stiffness in the frequency domain, which is analytically transferred into the time domain using a Laplace transform. The resulting mass-weighted restoring-force integral of the respective membrane or plate differential equation is solved using a circular-buffer accumulation method at each spatial node of the geometry, which is efficient because the model is implemented on a massively parallel graphics processing unit (GPU). The model is able to reproduce arbitrarily shaped internal-damping frequency responses with sharp bandwidth and fast response. It can also reproduce other energy distribution problems, such as energy loss or even energy supply by different parts of a musical instrument through coupling, time-dependent energy loss and supply, or nonlinear damping behavior such as amplitude-dependent loss strength. Internal damping of metamaterials can thus also be calculated with this model.

11:20

5aMU10. High-resolution directivities of played musical instruments. Timothy W. Leishman and William Strong (Phys. and Astronomy, Brigham Young Univ., N247 ESC, Provo, UT 84602, tim_leishman@byu.edu)

Recent experimental developments have enabled directivity measurements of played musical instruments with high angular resolution. The measurement system assesses directional radiation characteristics while including the diffraction and absorption effects of seated musicians. The results help to better understand and visualize the sound produced by the instruments, provide benchmarks for physical modeling, predict and auralize sound in rehearsal and performance venues, and improve microphone placement techniques. This paper explores steady-state directivities, contrasting key differences between brass, tone-hole, and string instruments. As expected, brass instruments generate relatively predictable results because of their single radiating elements. Tone-hole instruments produce notable interference patterns due to radiation from multiple instrument openings. String instruments produce even more complex directivities because of radiation from distributed vibrating structures and instrument openings. To illustrate the effects, the paper focuses on one instrument from each category, within the same pitch range.
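The input-impedance calculation described in 5aMU5 can be sketched with a lossless transmission-matrix model, treating each open tonehole as a shunt inertance. All dimensions below are hypothetical, and thermoviscous losses and radiation loads are ignored, so this is only a structural sketch of the method, not the authors' computation:

```python
import numpy as np

RHO, C = 1.2, 343.0   # air density (kg/m^3) and sound speed (m/s)

def cyl(L, a, k):
    """Lossless transmission matrix of a cylindrical bore segment:
    length L, radius a, wavenumber k, variables (p, U)."""
    Z0 = RHO * C / (np.pi * a**2)
    return np.array([[np.cos(k*L), 1j*Z0*np.sin(k*L)],
                     [1j*np.sin(k*L)/Z0, np.cos(k*L)]])

def open_hole(b, t_eff, k):
    """Open tonehole as a shunt inertance: Z_h = j*omega*rho*t_eff/S_h."""
    Zh = 1j * (k * C) * RHO * t_eff / (np.pi * b**2)
    return np.array([[1.0, 0.0], [1.0/Zh, 1.0]])

def input_impedance(freq, bore_a=0.008, spacing=0.03, holes=6,
                    b=0.004, t_eff=0.008):
    """Z_in of a cylinder with a regular open-tonehole lattice,
    terminated by an ideal open end (Z_L = 0)."""
    k = 2.0 * np.pi * freq / C
    T = np.eye(2, dtype=complex)
    for _ in range(holes):
        T = T @ cyl(spacing, bore_a, k) @ open_hole(b, t_eff, k)
    T = T @ cyl(spacing, bore_a, k)
    # Z_in = (A*Z_L + B)/(C*Z_L + D) reduces to B/D for Z_L = 0
    return T[0, 1] / T[1, 1]

# Below the lattice cutoff |Z_in| shows strong resonances; above it the
# open holes stop reflecting and the peaks flatten out.
freqs = np.linspace(100.0, 4000.0, 400)
zmag = np.array([abs(input_impedance(f)) for f in freqs])
```

For the irregular lattice below the xiao's tuning holes, each segment and hole would simply get its own dimensions, and the pole of the resulting impedance plays the role of the cutoff frequency discussed in the abstract.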
11:40

5aMU11. Improvement of a method for tone wood property examination using very-near-field sound pressure scanning for mode visualization. Filip Pantelic (Audio and Video Technologies, The School of Elec. and Comput. Eng. of Appl. Studies, Vojvode Stepe 283, Belgrade 11000, Serbia, filip_pantelic@yahoo.com), Miomir Mijic, and Dragana Sumarac Pavlovic (School of Elec. Eng., Belgrade, Serbia)

The ability to predict the behavior of a given type of wood as part of a musical instrument is of great importance. One of the important characteristics of such a material, from the standpoint of musical acoustics, is its Young's modulus of elasticity, which can be determined by observing the response of a wooden beam test sample to an excitation. In this process, visualization of the modes helps pair frequencies from the response spectra with the vibration modes of the sample, which allows Young's modulus to be calculated numerically for different frequencies. In this paper, visualization of the vibration modes is achieved using microphone very-near-field scanning of the excited samples.
THURSDAY MORNING, 29 JUNE 2017
ROOM 203, 8:35 A.M. TO 12:20 P.M.
Session 5aNSa
Noise and Signal Processing in Acoustics: Statistical Learning and Data Science Techniques in
Acoustics Research
Jonathan Rathsam, Cochair
NASA Langley Research Center, MS 463, Hampton, VA 23681
Edward T. Nykaza, Cochair
ERDC-CERL, 2902 Newmark Dr., Champaign, IL 61822
Laure-Anne Gille, Cochair
Direction territoriale Ile de France, Cerema, rue Maurice Audin, Vaulx-en-Velin 69120, France
Chair’s Introduction—8:35
Invited Papers
8:40
5aNSa1. Noise forecasting: A machine-learning and probabilistic approach. Carl R. Hart, D. Keith Wilson (U.S. Engineer Res. and
Development Ctr., 72 Lyme Rd., Hanover, NH 03755, carl.r.hart@usace.army.mil), Chris L. Pettit (Aerosp. Eng. Dept., U.S. Naval
Acad., Annapolis, MD), and Edward T. Nykaza (U.S. Engineer Res. and Development Ctr., Champaign, IL)
Forecasting the transmission of transient noise is a challenge since several sources of uncertainty exist: source and receiver positions,
meteorology, and boundary conditions. These sources of uncertainty are considered to be model parameters. Experimental observations
of noise, such as peak sound pressure level, or C-weighted sound exposure level, are data parameters with their attendant sources of
uncertainty. Forward models, relating model parameters to the data parameters, are also imprecise. We quantify all of these sources of
uncertainty by a probabilistic approach. Probability density functions quantify a priori knowledge of model parameters, measurement
errors, and forward model errors as states of information. A conjunction of these states of information is used to generate the joint probability distribution of model and data parameters. Given a forecast of model parameters, say, from a numerical weather prediction model,
the joint probability distribution is marginalized in order to forecast the noise field. In this study, we examine the feasibility of this
approach using, instead of numerical weather predictions, point measurements of meteorological observations and peak sound pressure
level collected during a long-range sound propagation experiment. Furthermore, we examine different types of forward models based on
machine learning.
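The marginalization step described in the abstract can be sketched on a toy grid. This is a hedged illustration only: the grids, the forecast distribution over wind speed, and the forward model mapping speed to peak level (with a fixed 4 dB model-error standard deviation) are all made up for the example.

```python
import math

def gaussian(x, mu, sigma):
    # Normal density, used both for the parameter prior and the model error.
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

speeds = [i * 0.5 for i in range(21)]   # wind-speed grid, 0..10 m/s (hypothetical)
levels = [60 + i for i in range(41)]    # peak-level grid, 60..100 dB (hypothetical)

# A priori knowledge of the model parameter, e.g., from a weather forecast.
prior = [gaussian(s, 4.0, 1.5) for s in speeds]
z = sum(prior)
prior = [p / z for p in prior]

# Assumed imprecise forward model: mean peak level rises with speed,
# with a 4 dB model-error standard deviation.
def forward_likelihood(level, speed):
    return gaussian(level, 70.0 + 2.0 * speed, 4.0)

# Conjunction of the states of information, marginalized over the
# model parameter to yield a forecast distribution over the data parameter.
forecast = [sum(forward_likelihood(L, s) * p for s, p in zip(speeds, prior))
            for L in levels]
z = sum(forecast)
forecast = [f / z for f in forecast]

best = levels[forecast.index(max(forecast))]
print(best)  # most probable forecast peak level, in dB
```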
3962
J. Acoust. Soc. Am., Vol. 141, No. 5, Pt. 2, May 2017
Acoustics ’17 Boston
3962
9:00
5aNSa2. In-situ estimation of transmission loss based on learned dictionaries and sparse reconstruction. Jonathan Botts, Mark A.
Ross (ARiA, 209 N. Commerce St., Ste 300, Culpeper, VA 22701, jonathan.botts@ariacoustics.com), Jason E. Summers, and Charles F.
Gaumond (ARiA, Washington, DC)
Transmission loss is an important and notoriously difficult quantity to estimate in real underwater environments. Used to evaluate
detections and threshold exceedances, estimates of transmission loss may be affected by approximate knowledge of weather conditions,
bathymetry, bottom properties, and sound-speed profile—among other sources of environmental uncertainty. For applications in multistatic antisubmarine warfare, a relatively small number of in-situ measurements are available. In this work, in-situ measurements of
transmission loss are used to reconstruct transmission loss fields using matching pursuit over a learned dictionary of transmission-loss
fields for a given operational area. The dictionary is trained on simulated transmission-loss fields with inputs derived from historical
data and plausible variations of environmental parameters. A principal challenge is constraining the problem in such a way that fields
may be meaningfully reconstructed given a small set of measured data. The feasibility of reconstruction is evaluated for both full-field
and depth-independent transmission-loss fields. Plausibility of practical application is evaluated with respect to realistic sampling conditions. [Portions of this material are based upon work supported by the Naval Air Systems Command.]
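The reconstruction idea above can be illustrated with a toy matching-pursuit sketch. Everything here is hypothetical: the "dictionary atoms" are short made-up vectors standing in for simulated transmission-loss fields, and the "measured" vector stands in for a handful of in-situ samples.

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def norm(a):
    return math.sqrt(dot(a, a))

# Hypothetical dictionary; in the abstract, atoms would be trained on
# simulated transmission-loss fields for an operational area.
atoms = [
    [1.0, 0.0, 0.0, 0.0],
    [0.0, 1.0, 0.0, 0.0],
    [0.5, 0.5, 0.5, 0.5],
]
atoms = [[x / norm(a) for x in a] for a in atoms]  # unit-norm atoms

def matching_pursuit(signal, atoms, n_iter=2):
    # Greedily explain the signal with the dictionary atoms.
    residual = list(signal)
    coeffs = [0.0] * len(atoms)
    for _ in range(n_iter):
        # Pick the atom most correlated with the current residual.
        k = max(range(len(atoms)), key=lambda i: abs(dot(residual, atoms[i])))
        c = dot(residual, atoms[k])
        coeffs[k] += c
        residual = [r - c * a for r, a in zip(residual, atoms[k])]
    return coeffs, residual

# "Measured" field that happens to match one atom exactly.
measured = [0.5, 0.5, 0.5, 0.5]
coeffs, residual = matching_pursuit(measured, atoms)
print(coeffs)  # third atom carries essentially all the energy
```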
9:20
5aNSa3. Deep convolutional neural networks for semi-supervised learning from synthetic aperture sonar (SAS) images. Johnny
L. Chen (Appl. Res. in Acoust., LLC, 209 N. Commerce St., Ste 300, Culpeper, VA 22701-2780, johnny.chen@ariacoustics.com) and
Jason E. Summers (Appl. Res. in Acoust., LLC, Washington, District of Columbia)
Advancements in deep neural networks for computer-vision tasks have the potential to improve automatic target recognition (ATR)
in synthetic aperture sonar (SAS) imagery. Many of the recent improvements in computer vision have been made possible by densely labeled datasets such as ImageNet. In contrast, SAS datasets typically contain far fewer labeled samples than unlabeled samples—often
by several orders of magnitude. Yet unlabeled SAS data contain information useful for both generative and discriminative tasks. Here
results are shown from semi-supervised ladder networks for learning to classify and localize in SAS images from very few labels. We
perform end-to-end training concurrently with unlabeled and labeled samples and find that the unsupervised-learning task improves classification accuracy. Ladder networks are employed to adapt fully convolutional networks used for pixelwise prediction based on supervised training to semi-supervised semantic segmentation and target localization by pixel-level classification of whole SAS images.
Using this approach, we find improved segmentation and better generalization in new SAS environments compared to purely supervised
learning. We hypothesize that utilizing large unsupervised data in conjunction with the supervised classification task helps the network
generalize by learning more invariant hierarchical features. [Work supported by the Office of Naval Research.]
Contributed Paper
9:40
5aNSa4. Talker age estimation using machine learning. Mark Berardi,
Eric J. Hunter (Communicative Sci. and Disord., Michigan State Univ.,
1026 Red Cedar Rd., Rm. 211D, East Lansing, MI 48824, mberardi@msu.
edu), and Sarah H. Ferguson (Dept. of Commun. Sci. and Disord., Univ. of
Utah, Salt Lake City, UT)
As a person ages, the acoustic characteristics of their voice change.
Understanding how the sound of a voice changes with age may give insight
into physiological changes related to vocal function. Previous work has
shown changes in acoustical parameters with chronological age as well as
differences between perceived age and chronological age. However, much
of this previous work was done using cross-sectional speech samples, which
will show changes with age but may average out important individual variability with regard to aging differences. The current study used a longitudinal recording sample gathered from a corpus of speeches from an individual
spanning about 50 years (48 to 97 years of age). This study investigates how
the voice changes with age using both chronological age and perceived age
as independent variables; perceived age data were obtained in a previous
direct age estimation study. Using the longitudinal recordings, a range of
voice and speech acoustic parameters were extracted. These acoustic parameters were fitted to a supervised learning model to predict chronological age
and perceived age. Differences between the chronological age and perceived
age models as well as the usefulness of the various acoustic parameters will
be discussed.
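The supervised-learning step can be illustrated with a minimal stand-in: an ordinary least-squares fit of age against a single acoustic feature. The feature values and ages below are invented for the sketch; the study's actual features and model are not specified here.

```python
def fit_line(xs, ys):
    # Ordinary least squares for a single predictor.
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return slope, my - slope * mx

# Hypothetical (acoustic feature, chronological age) training pairs.
features = [1.0, 2.0, 3.0, 4.0, 5.0]
ages = [50.0, 60.0, 70.0, 80.0, 90.0]

slope, intercept = fit_line(features, ages)
predicted = slope * 3.5 + intercept  # predict age for a new recording
print(round(predicted, 1))  # prints 75.0
```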
10:00–10:20 Break
Invited Papers
10:20
5aNSa5. Automated assessment of bird vocalization activity. Paul Kendrick, Mike Wood (Univ. of Salford, Salford, Lancashire
M5 4WT, United Kingdom, p.kendrick@salford.ac.uk), and Luciana Barçante (Univ. of Salford, Brasilia, Brazil)
This paper presents a method for the automated acoustic assessment of bird vocalization activity using a machine learning approach.
Acoustic biodiversity assessment methods use statistics from vocalizations of various species to infer information about the biodiversity.
Manual annotations are accurate but time-consuming and therefore expensive, so automated assessment is desirable. Acoustic Diversity
indices are sometimes used. These are computed directly from the audio and comparison between environments can provide insight
about the ecologies. However, the abstract nature of the indices means that solid conclusions are difficult to reach, and the methods suffer from sensitivity to confounding factors such as noise. Machine-learning-based methods are potentially more powerful because they can be trained to detect and identify species directly from audio. However, these algorithms require large quantities of accurately labeled training data, which is, as already mentioned, non-trivial to acquire. In this work, a database of soundscapes with known levels of
vocalization activity was synthesized to allow training of the algorithm. Comparisons show good agreement between manually annotated and automatic estimates of vocalization activity in simulations and data from a field survey.
10:40
5aNSa6. Improved feature extraction for environmental acoustic classification. Matthew G. Blevins (U.S. Army Engineer Res. and
Development Ctr., 2902 Newmark Dr., Champaign, IL 61822, matthew.g.blevins@usace.army.mil), Steven L. Bunkley (U.S. Army
Engineer Res. and Development Ctr., Vicksburg, MS), Edward T. Nykaza (U.S. Army Engineer Res. and Development Ctr., Champaign,
IL), Anton Netchaev (U.S. Army Engineer Res. and Development Ctr., Vicksburg, MS), and Gordon Ochi (Columbia College Chicago,
Chicago, IL)
Modern automated acoustic classifiers have been shown to perform remarkably well with human speech recognition and music genre
classification. These problems are well defined; there is a deep understanding of the source signal, and the required robustness of the
model can be decreased without significantly sacrificing accuracy. Unfortunately, this simplification creates models that are insufficient
when tasked with classifying environmental noise, which is inherently more variable and difficult to constrain. To further close the gap
between human and computer recognition, we must find feature extraction techniques that address the additional set of complexities
involved with environmental noise. In this paper, we will explore sophisticated feature extraction techniques (e.g., convolutional autoencoders and scattering networks), and discuss their effect when applied to acoustic classification.
11:00
5aNSa7. Deep learning for unsupervised separation of environmental noise sources. Bryan Wilkinson (Comput. Sci., UMBC, 1000
Hilltop Circle, Baltimore, MD 21250, bwilk7@gmail.com), Charlotte Ellison (ERDC-GRL, Alexandria, VA), Edward T. Nykaza
(ERDC-CERL, Champaign, IL), Arnold P. Boedihardjo (ERDC-GRL, Alexandria, VA), Anton Netchaev (ERDC-ITL, Vicksburg, MS),
Zhiguang Wang (Comput. Sci., UMBC, Baltimore, MD), Steven L. Bunkley (ERDC-ITL, Vicksburg, MS), Tim Oates (Comput. Sci.,
UMBC, Baltimore, MD), and Matthew G. Blevins (ERDC-CERL, Champaign, IL)
With the advent of reliable and continuously operating noise monitoring systems, we are now faced with an unprecedented amount
of noise monitor data. In the context of environmental noise monitoring, there is a need to automatically detect, separate, and classify all
environmental noise sources. This is a complex task because sources can overlap, vary by location, and are unbounded in the number of distinct sources a monitoring device may record. In this study, we synthetically generate datasets that contain Gaussian noise and overlaps
for several pre-labeled environmental noise monitoring datasets to examine how well deep learning methods (e.g., autoencoders) can
separate environmental noise sources. In addition to examining performance, we also focus on understanding which signal features and
separation metrics are useful to this problem.
Contributed Paper
11:20
5aNSa8. Machine listening in combination with microphone arrays for
noise source localization and identification. Markus Müller-Trapet (Inst.
of Sound and Vib. Res., Univ. of Southampton, Southampton SO17 1BJ,
United Kingdom, M.F.Muller-Trapet@soton.ac.uk), Jordan Cheer, Filippo
M. Fazi (Inst. of Sound and Vib. Res., Univ. of Southampton, Southampton,
Hampshire, United Kingdom), Julie Darbyshire, and J. Duncan Young (Nuffield Dept. of Clinical NeuroSci., Univ. of Oxford, Oxford, United
Kingdom)
In a recent project, a large microphone array system has been created to
localize and quantify noise sources in an Intensive Care Unit (ICU). In the
current state, the output of the system is the location and level of the most
dominant noise sources, which is also presented in real-time to the nursing
staff. However, both staff and patients have expressed the need for information about the types of noise sources. This additional source identification can also help to find means of reducing the overall noise level in the
ICU. To accomplish the source identification, the approach of machine listening with a deep neural network is chosen. A feed-forward pattern recognition network is considered in this work. However, it is not clear which
types of features are best suited for the given application. This contribution
thus examines the problem from a practical point of view, comparing different features including those related to sound perception, such as specific
loudness, Mel-frequency cepstral coefficients, as well as the output of a
gamma-tone filter bank. Additionally, the concept of time-delay networks is
tested to see whether a better classification of the signals can be achieved by
including their time history.
Invited Papers
11:40
5aNSa9. A likelihood-based method for point and interval estimation on auditory filter data. Andrew Christian (Structural Acoust.
Branch, NASA Langley Res. Ctr., 2 N. Dryden St., Rm. 117A, M/S 463, Hampton, VA 23681, andrew.christian@nasa.gov)
A novel method for determining auditory filter (AF) shapes given a set of n-alternative forced choice (nAFC) responses from a single
human subject or set of subjects is discussed. The method works by developing a function which maps individual nAFC responses into
likelihood values—either supporting or conflicting with a proposed model AF shape. The aggregate of these likelihoods is then used as
an objective function for optimization schemes for point estimation, or as the basis function for Metropolis-Hastings-like algorithms for
interval estimation, both of either parameters of the AF model or of the entire AF shape. The method is demonstrated on simulated updown staircase data. The consistency of the method is discussed in the context of canonical methods for AF data analysis, some of which
are shown to produce systematic errors. Other possible benefits of this approach are discussed including the ability of the method to:
combine data from heterogeneous nAFC tasks (e.g., notched-noise maskers with tone masking) into single AF models; combine data
from different frequency-AFs into single analyses—shedding light on effects due to off-frequency listening; produce consistent results
between different AF basis models.
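The interval-estimation step can be sketched in miniature. This is a hedged toy, not the paper's method: the per-response likelihoods are reduced to Bernoulli trials on a single probability-correct parameter (a stand-in for a parametric AF shape), and a Metropolis-Hastings random walk samples its posterior.

```python
import math
import random

random.seed(1)

responses = [1, 1, 0, 1, 1, 1, 0, 1, 1, 1]  # hypothetical nAFC outcomes

def log_likelihood(p):
    # Aggregate of per-response likelihoods (here, simple Bernoulli trials).
    if not 0.0 < p < 1.0:
        return float("-inf")
    return sum(math.log(p) if r else math.log(1.0 - p) for r in responses)

p = 0.5
ll = log_likelihood(p)
samples = []
for _ in range(20000):
    q = p + random.gauss(0.0, 0.1)           # symmetric random-walk proposal
    llq = log_likelihood(q)
    if llq >= ll or random.random() < math.exp(llq - ll):
        p, ll = q, llq                       # accept the move
    samples.append(p)

samples = sorted(samples[2000:])             # discard burn-in
lo = samples[int(0.025 * len(samples))]
hi = samples[int(0.975 * len(samples))]
print(round(lo, 2), round(hi, 2))            # approximate 95% credible interval
```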
12:00
5aNSa10. Structural equation modeling of partial and total annoyances due to urban road traffic and aircraft noises. Laure-Anne
Gille (Direction territoriale Ile-de-France, Cerema / Univ. Lyon, ENTPE, Laboratoire Génie Civil et Bâtiment, ENTPE/LGCB, rue Maurice Audin, Vaulx-en-Velin 69120, France, laureanne_gillechambo@yahoo.com), Catherine Marquis-Favre (Univ. Lyon, ENTPE, Laboratoire Génie Civil et Bâtiment, Vaulx-en-Velin, France), and Kin-Che Lam (Geography and Resource Management, The Chinese
Univ. of Hong Kong, Hong Kong, China)
Data of a French in situ socio-acoustic survey were used to model partial annoyance due to urban road traffic noise, partial annoyance due to aircraft noise and total annoyance due to these combined noises. Structural equation modeling carried out on the in situ data
showed that long-term noise annoyance depends on noise exposure but also on noise disturbance, dwelling satisfaction, visibility of a
main road from the dwelling and noise sensitivity. Both noise exposure and noise sensitivity were introduced as independent variables
in structural equation modeling of partial and total noise annoyances. Their contributions to the models highlight the necessity to consider these two variables in annoyance model prediction. Finally, in total noise annoyance models, whereas partial annoyance due to aircraft noise contributes to total noise annoyance as much as partial road traffic annoyance, aircraft noise exposure contributes to total
noise annoyance much more than road traffic noise. Several reasons may explain this difference, such as the event character of aircraft
noise or the fact that aircraft noise exposure also reflects the city's overall exposure to aircraft noise. These hypotheses need to be confirmed with wider samples.
THURSDAY MORNING, 29 JUNE 2017
ROOM 202, 9:15 A.M. TO 12:00 P.M.
Session 5aNSb
Noise, Architectural Acoustics, Speech Communication, and Psychological and Physiological Acoustics:
Effects of Noise on Human Comfort and Performance I
Z. Ellen Peng, Cochair
Waisman Center, University of Wisconsin-Madison, 1500 Highland Avenue, Madison, WI 53711
Lily M. Wang, Cochair
Durham School of Architectural Engineering and Construction, University of Nebraska - Lincoln, PKI 100C, 1110 S. 67th St.,
Omaha, NE 68182-0816
Anna Warzybok, Cochair
Department of Medical Physics and Acoustics, Medical Physics Group, University of Oldenburg, Universität Oldenburg,
Oldenburg D-26111, Germany
Chair’s Introduction—9:15
Invited Papers
9:20
5aNSb1. The quest for good, quiet spaces: Evaluating the relationship between office noise annoyance, distraction, and performance. Martin S. Lawless, Michelle C. Vigeant (Graduate Program in Acoust., The Penn State Univ., 201 Appl. Sci. Bldg., University
Park, PA 16802, msl224@psu.edu), and Andrew Dittberner (GN Hearing, Glenview, IL)
To facilitate office work performance, acousticians must design spaces that minimize annoyance from background noise, primarily
from HVAC equipment, and reduce worker distraction caused by intermittent sounds, e.g., ringing telephones. Increasing background
noise can mask intermittent sounds and mitigate distraction, but negatively affects annoyance. Additionally, some disrupting sounds,
such as alarms, contain informational content necessary for workplaces. Balancing worker annoyance and distraction can be difficult
since the definition of what constitutes a good, quiet space is yet unclear. The goal of the present work was to perform a literature review
to inform ideal office noise conditions and develop an experimental procedure to test such environments. The review included papers
about indoor environmental quality and the effects of acoustics on environmental satisfaction, job performance, and noise annoyance, as
well as cognitive, neurobehavioral, and physiological measures that can quantify work performance. The results of the literature survey
will be used to form the basis of a future subjective study. In particular, an experimental design will be discussed that aims to evaluate
the effects of various simulated environments, reproduced using higher-order Ambisonics, on work performance, annoyance, and distraction. The data from these future studies will be used to investigate ideal office acoustic environments.
9:40
5aNSb2. A simple sound metric for evaluating sound annoyance in open-plan offices. Etienne Parizet (Laboratoire Vibrations
Acoustique, Univ. Lyon, INSA-Lyon, 25 bis, av. Jean Capelle, Villeurbanne 69621, France, etienne.parizet@insa-lyon.fr), Patrick Chevret, and Krist Kostallari (INRS, Vandoeuvre les Nancy, France)
Noise in open-plan offices has become a major health issue. Intelligible speech is considered the most annoying noise source by the occupants of such offices. Speech level fluctuations prevent people from achieving some high-demand tasks, thus inducing annoyance and tiredness. Many studies have been conducted to identify a sound metric closely related to this Irrelevant Speech Effect. Hongisto et al. have shown that the Speech Transmission Index (STI) is appropriate for evaluating the annoyance due to a neighbor in the office. More recently, Schlittmeier et al. suggested that Fluctuation Strength (FS) can be used to evaluate the effect of fluctuations of the ambient noise on task performance. This paper presents a new metric, based on the measurement of short-term temporal modulation of the sound level. Results indicate that it is as efficient as STI or FS, while being more suitable for in-situ experiments and usable by practitioners.
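A level-fluctuation metric in the spirit of the abstract can be sketched as follows. The exact definition used by the authors is not given here; this toy version simply averages the absolute dB difference between the RMS levels of successive short frames, and shows that an amplitude-modulated signal scores higher than a steady one.

```python
import math

def frame_levels(samples, frame_len):
    # RMS level of each non-overlapping frame, in dB.
    levels = []
    for i in range(0, len(samples) - frame_len + 1, frame_len):
        frame = samples[i:i + frame_len]
        rms = math.sqrt(sum(x * x for x in frame) / frame_len)
        levels.append(20 * math.log10(max(rms, 1e-12)))
    return levels

def fluctuation_metric(samples, frame_len=100):
    # Mean absolute short-term level change between adjacent frames.
    lv = frame_levels(samples, frame_len)
    return sum(abs(a - b) for a, b in zip(lv, lv[1:])) / (len(lv) - 1)

# Steady tone vs. slowly amplitude-modulated tone (synthetic test signals).
steady = [math.sin(2 * math.pi * 0.05 * n) for n in range(2000)]
modulated = [(1.0 + 0.9 * math.sin(2 * math.pi * 0.0005 * n)) * s
             for n, s in enumerate(steady)]
print(fluctuation_metric(steady) < fluctuation_metric(modulated))  # True
```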
10:00
5aNSb3. Speech intelligibility under realistic classroom acoustics. Giuseppina E. Puglisi (Dept. of Energy, Politecnico di Torino,
Torino, Italy), Anna Warzybok, Birger Kollmeier (Medizinische Physik and Cluster of Excellence Hearing4All, Carl von Ossietzky Universität Oldenburg, Universität Oldenburg, Oldenburg D-26111, Germany, a.warzybok@uni-oldenburg.de), and Arianna Astolfi (Dept.
of Energy, Politecnico di Torino, Turin, Italy)
Speech recognition is fundamental in everyday communication environments, especially in school classrooms, where the teaching-learning process takes place. Extensive literature is available on speech intelligibility under artificially added reverberation and speech-shaped noise, whereas few works account for realistic acoustics. This work investigates the effect of measured classroom acoustics on speech intelligibility, accounting for the effects of informational and energetic masking, teacher-to-listener distance, and binaural unmasking. The speech reception threshold (SRT), corresponding to the signal-to-noise ratio yielding 80% speech intelligibility, was measured in two acoustically different primary school classrooms. The acquired binaural room impulse responses were convolved with anechoic speech and noise stimuli, then presented via headphones to a group of adult normal-hearing listeners. The results show that SRTs were lower (better) under optimal classroom acoustics, i.e., with a reverberation time of 0.4 s. Also, SRTs under informational masking noise were on average 6.6 dB higher (worse) than SRTs under energetic masking, showing that the former masker is more competitive than the latter and needs to be examined more deeply in future research. Binaural unmasking was observed only for a short teacher-to-listener distance in the room with the shorter reverberation time.
10:20
5aNSb4. Potential audibility and side effects of ultrasonic surveillance monitoring of PA and Life Safety Sound Systems. Peter
Mapp (PMA, 101 London Rd., Copford, Colchester CO6 1LG, United Kingdom, peter@petermapp.com)
Ultrasonic surveillance monitoring, to check the operational integrity of PA and Emergency Communication Systems, has been in
existence for well over 30 years—particularly in Europe. Since its inception, there has been debate as to the potential audibility that
these systems may have. As the vast majority of PA systems engineers and designers have not heard or experienced any effects, it has generally been assumed that the general public does not either. Recently, however, concern has been raised and claims of ill effects have been reported. There is, however, little or no data on the ultrasonic sound levels that PA systems actually emit. The paper discusses the
results of an initial survey of ultrasound radiated by PA systems and compares the results with a number of international standards—
there currently being little or no specific guidance. The paper reviews the technology involved, typical emission levels and concludes by
making a number of recommendations to assist with the control of ultrasonic emissions from PA systems that should help to mitigate
unintended side effects.
10:40–11:00 Break
11:00
5aNSb5. Relations between acoustic quality and student achievement in K-12 classrooms. Lily M. Wang, Laura C. Brill (Durham
School of Architectural Eng. and Construction, Univ. of Nebraska - Lincoln, PKI 100C, 1110 S. 67th St., Omaha, NE 68182-0816,
lwang4@unl.edu), Houston Lester, and James Bovaird (Educational Psych., Univ. of Nebraska - Lincoln, Lincoln, NE)
Prior work by Ronsse and Wang (2013) found that, in elementary schools, higher unoccupied background noise levels do correlate
to lower student achievement scores in reading comprehension, but that study did not include detailed logs of acoustic conditions taken
during the school day nor did it investigate middle or high school classrooms. More recently, measurements of the indoor environmental
conditions in 110 K-12 classrooms, logged over a period of two weekdays three times seasonally, were taken over the 2015-16 academic
year. Assorted acoustic metrics have been calculated from the raw measurements and a confirmatory factor analysis has been conducted
to statistically create a comprehensive construct of “acoustic quality” that includes three general components: room characteristics
(including reverberation times), occupied noise levels, and unoccupied noise levels. Standardized test scores of students who learned in
the measured classrooms that year have also been gathered as an indicator of student achievement. Results from a structural equation
model are presented to show how the various components of the proposed acoustic quality construct relate to student achievement.
[Work supported by the United States Environmental Protection Agency Grant Number R835633.]
11:20
5aNSb6. Development of a methodology for field studies on the effects of aircraft noise on sleep. Sarah McGuire and Mathias Basner (Div. of Sleep and Chronobiology, Dept. of Psychiatry, Univ. of Pennsylvania Perelman School of Medicine, 423 Guardian Dr.,
1013 Blockley Hall, Philadelphia, PA 19104, smcgu@upenn.edu)
An inexpensive yet sound study methodology is needed for field studies on the effects of aircraft noise on sleep. These studies are
needed for developing exposure-response relationships that are representative of noise exposed communities around multiple airports
and that can be used to inform policy. A methodology of monitoring sleep and identifying awakenings using ECG and actigraphy has
been developed. An advantage of this approach is that ECG electrodes can be applied by the subjects themselves, thereby reducing the need for staff in the field and the cost of the study. In addition, an automatic algorithm based on ECG and actigraphy data has been developed, which identifies awakenings from both body movements and changes in heart rate. The automatic scorings of the algorithm agree closely with awakenings identified using polysomnography, which is the current gold standard for measuring sleep and related events. This ECG and actigraphy approach for monitoring sleep has been implemented in a pilot study conducted around one U.S. airport to evaluate its feasibility; in this study, participants completed three nights of unattended sleep and noise measurements. Based on lessons learned, the study methodology has been further refined and implemented in a second pilot study.
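The scoring idea can be illustrated with a much-simplified sketch (the published algorithm is more elaborate, and all thresholds and data below are invented): flag an epoch as a candidate awakening when an actigraphy movement count and a relative heart-rate rise over a running baseline both exceed thresholds.

```python
def detect_awakenings(movement, heart_rate, move_thresh=10, hr_rise=0.15,
                      baseline_len=5):
    # Flag epochs where a movement burst coincides with a heart-rate rise
    # above a running baseline of the preceding epochs.
    events = []
    for i in range(baseline_len, len(movement)):
        baseline = sum(heart_rate[i - baseline_len:i]) / baseline_len
        if movement[i] >= move_thresh and heart_rate[i] >= baseline * (1 + hr_rise):
            events.append(i)
    return events

# Hypothetical 30-s epochs: quiet sleep, then a movement burst with a
# heart-rate rise at epoch 8.
movement   = [0, 1, 0, 0, 2, 0, 1, 0, 25, 3, 0, 0]
heart_rate = [55, 54, 56, 55, 54, 55, 55, 54, 68, 60, 56, 55]

print(detect_awakenings(movement, heart_rate))  # → [8]
```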
Contributed Paper
11:40
5aNSb7. Study of the impact of aircraft noise on annoyance and cognitive task performance regarding the distance from the airport. Anne-Laure Verneil, Catherine Lavandier (ETIS, Université de Cergy-Pontoise, 5
mail Gay Lussac, CS 20601 Neuville, Cergy-Pontoise cedex 95031, France,
anne-laure.verneil@env-isa.com), and Emilia Suomalainen (ENVISA,
Paris, France)
Aviation traffic is expected to increase by 30% by 2025. Aircraft noise
annoyance of people living near airports is one of the parameters which
could limit this industry. Indeed, the occurrence of flyovers contributes to noise annoyance. In addition to non-acoustic factors, acoustic parameters such as noise level or temporal and spectral aspects are involved in annoyance. In this study, two assumptions are examined: (1) close to the airport, the emergence of the flyovers constitutes the most important parameter; (2) far from the airport, it is the flyover duration. This paper presents a perceptual experiment that aims to verify these assumptions by assessing short-term annoyance in the laboratory. Three sequences of aircraft noise have
been recorded at three different distances from the airport. Short term
annoyance after each sound sequence has been assessed by about 50 participants. During the experiment, cognitive tasks were performed: (1) the reading of a text presented on a computer screen equipped with an eye tracker in
order to measure the velocity of reading and the number of retro-saccades,
(2) a memorization task where the performance is assessed with the number
of errors and the reaction time. The acoustic characteristics, perceived
annoyance and performance measurements are then crossed. The results are
presented and discussed in this paper.
THURSDAY MORNING, 29 JUNE 2017
ROOM 210, 9:00 A.M. TO 12:00 NOON
Session 5aPA
Physical Acoustics: General Topics in Physical Acoustics III
Bart Lipkens, Chair
Mechanical Engineering, Western New England University, 1215 Wilbraham Road, Box S-5024, Springfield, MA 01119
9:00
5aPA1. Array-based inhomogeneous soundwave generation to enhance
sound transmission into solids. Trevor Kyle, Daniel C. Woods, Rahul
Tiwari, Jeffrey F. Rhoads, and J. S. Bolton (School of Mech. Eng., Purdue
Univ., 102 N Chauncey Ave., Apt. 203, West Lafayette, IN 47906, kylet@
purdue.edu)
The acoustic excitation of energetic materials has been demonstrated to
be useful in detection and defeat applications, but its efficacy is hindered by
the inability to transmit a high percentage of incident acoustic energy across
the air/energetic material interface. While large acoustical impedance differences usually prevent energy transmission from air into a solid,
inhomogeneous incident waves have been found to transmit a significant
percentage of their energy into the target material. Thus, inhomogeneous
waves, whose amplitudes decay spatially in a direction different from the
propagation direction, are an optimal choice for this application; however, it
is difficult to create such a waveform by using a simple source. The objective of the present work is to demonstrate that by tuning the strengths and
phases of sound sources in a linear array, an interference pattern can be generated such that an inhomogeneous wave forms on a surface of interest. Furthermore, it is demonstrated that by adjusting the level of inhomogeneity of
the wave and its incidence angle, one can target the parameters associated
with optimal sound transmission, and that these waves can be generated
even in the presence of small errors in the powers and phases of the sources.
Contributed Papers
9:20
10:40
5aPA2. A spheroid model for the sound radiation of a loudspeaker on a
sound bar. Vincent Roggerone (LMS, Ecole polytechnique, Laboratoire de
Mécanique des Solides, École polytechnique, Palaiseau 91128, France, rogger@lms.polytechnique.fr), Etienne Corteel (Sonic Emotion Labs, PARIS,
France), and Xavier Boutillon (LMS, Ecole Polytechnique, Palaiseau,
France)
5aPA5. The effects of diffraction on the frequency difference and frequency sum autoproducts. Brian M. Worthmann (Appl. Phys., Univ. of
Michigan, 1231 Beal Ave., Ann Arbor, MI 48109, bworthma@umich.edu)
and David R. Dowling (Mech. Eng., Univ. of Michigan, Ann Arbor, MI)
The sound radiation of a loudspeaker on a sound bar with a slender
shape is analyzed. Measurements and boundary element method (BEM)
simulations of a rectangular rigid enclosure with a flat piston turn out to be
in close agreement up to the frequency limit imposed by the discretization
chosen for the BEM. Looking up for a shorter computation time, we consider an analytic model based on a geometrical approximation of the sound
bar by a prolate spheroid. The corresponding spheroidal coordinate system
allows for an analytical solution of the sound-radiation problem. The following parameters are adjusted: geometry of the ellipse-based spheroid, size
and location of the circular piston, minimum order of the spheroidal wave
functions that ensures convergence. In the light of the BEM results, we also
predict the frequency validity of the analytic model. In order to improve the
control of the acoustical field radiated by a sound bar, we discuss the influence of the enclosure edges on the regularity of the sound field pattern.
[Work supported by the ANR-13-CORD-0008 EDISON 3D grant from the
French National Agency of Research.]
9:40
5aPA3. Computer simulation of synthetic apertures radar for classroom
demonstration. Kathryn P. Kirkwood and Murray S. Korman (Phys. Dept.,
U.S. Naval Acad., 572 C Holloway Rd., Annapolis, MD 21402, m193342@
usna.edu)
Synthetic Aperture Radar (SAR) has many civilian and military applications as a high resolution imaging system. A MathematicaV simulation of
Synthetic Aperture Acoustic Radar will demonstrate how two-dimensional
point targets on a ground plane can be imaged from a collection of acoustic
echoes. A transmitter and receiver will be modeled as a single point element
that stops along a linear track at collection points and hops to the next location (stop and hop approximation). The transmitter on the hypothetical apparatus will transmit acoustic signals that reflect off the targets as echoes to be
collected by the receiver at each location of the track. A matched filter correlation process will complete pulse compression of the LFM (linear frequency modulated) chirp. The image reflectance of the point targets will be
constructed using a time correlation/backprojection algorithm developed by
Yegulalp [“Fast Backprojection Algorithm for Synthetic Aperture Radar,” in Proceedings of the 1999 IEEE Radar Conference, Waltham, MA, April 20-22, 1999, pp. 60-65]. Image resolution may be improved by increasing chirp
bandwidth.
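The pulse-compression step described above can be illustrated with a short NumPy sketch. This is not the authors' Mathematica simulation; all parameters (sample rate, chirp duration, bandwidth, echo delay) are arbitrary stand-ins chosen only to show the matched-filter idea.

```python
import numpy as np

# Illustrative pulse compression of an LFM chirp (not the authors' code;
# all parameters are arbitrary stand-ins).
fs = 44100.0                      # sample rate, Hz
T, B, f0 = 0.02, 4000.0, 2000.0   # chirp duration (s), bandwidth (Hz), start freq (Hz)

t = np.arange(int(T * fs)) / fs
chirp = np.cos(2 * np.pi * (f0 * t + 0.5 * (B / T) * t**2))

# A single point-target echo: the chirp delayed by 5 ms in a noisy record.
rng = np.random.default_rng(0)
true_delay = int(0.005 * fs)
echo = np.zeros(int(0.05 * fs))
echo[true_delay:true_delay + len(chirp)] += chirp
echo += 0.1 * rng.standard_normal(len(echo))

# Matched filter = cross-correlation with the transmitted chirp; the long
# pulse compresses to a peak of width ~1/B at the echo delay.
mf = np.correlate(echo, chirp, mode="full")
est_delay = int(np.argmax(np.abs(mf))) - (len(chirp) - 1)
```

Repeating this compression for the echo collected at each stop along the track, and summing the compressed returns along candidate range histories, is the backprojection step; as the abstract notes, finer range resolution follows from a larger chirp bandwidth B.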
10:00
5aPA4. Low-order modeling of fan broadband interaction noise. Dorien
O. Villafranco and Sheryl Grace (Mech. Eng., Boston Univ., 110 Cummington Mall, Boston, MA 02215, dvillafr@bu.edu)
For commercial aircraft equipped with a modern high-bypass turbofan
engine, a primary noise source on take-off and at approach has been attributed to the fan stage of the engine. The largest contribution to fan noise is
rotor wake impingement on the fan exit guide vanes (FEGVs). The interaction creates both tonal and broadband noise. Engine designers have reliable
tools for the prediction of tonal noise while prediction methods for broadband noise are still being developed. In the current work, a low-order
method for simulating the broadband noise downstream of the fan stage of
the engine is presented. Comparisons between computational results and experimental data are shown. The method produces good predictions of the
spectral shape when compared to the experimental measurements. The basic
low-order method models the FEGVs as flat plates. While reasonable predictions are attained with this simplification, increased fidelity is sought
through inclusion of the real vane geometry in the low order model.
5aPA5. The effects of diffraction on the frequency difference and frequency sum autoproducts. Brian M. Worthmann (Appl. Phys., Univ. of Michigan, 1231 Beal Ave., Ann Arbor, MI 48109, bworthma@umich.edu) and David R. Dowling (Mech. Eng., Univ. of Michigan, Ann Arbor, MI)
Previously, a remote sensing technique termed frequency difference
matched field processing was developed for source localization in the shallow ocean [Worthmann et al. (2015), J. Acoust. Soc. Am. 138, 3549-3562]. In this technique,
field measurements at the in-band frequency are shifted down (or up) in frequency through the use of the bandwidth-averaged frequency-difference (or
frequency-sum) autoproduct, a nonlinear construction made from field
amplitudes and averaged over the available signal bandwidth. These bandwidth-averaged autoproducts may have phase structure similar to genuine
acoustic fields at out-of-band frequencies when the original acoustic field is
well-described by a sum of ray-path contributions. While ray theory may be
a useful field description in many situations, it does not include diffraction.
In this presentation, the effects of acoustic diffraction on the autoproduct are
analyzed in an environment where diffraction varies in importance depending on the spatial location. Specifically, the behavior of the autoproducts is
investigated in Sommerfeld’s half-plane problem, where a plane wave is
incident on a thin, semi-infinite rigid barrier. The bandwidth-averaged frequency difference and sum autoproduct fields are calculated in this environment, and their correlation with exact out-of-band acoustic fields are
provided as a function of distance from the barrier and scattering angle.
[Sponsored by NSF and ONR.]
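As a concrete illustration of the autoproduct construction (a sketch under simplifying assumptions, not the authors' analysis): for a single free-space ray, the bandwidth-averaged frequency-difference autoproduct carries exactly the spatial phase of a genuine field at the out-of-band difference frequency. With multipath, bandwidth averaging is what suppresses the cross-path terms, and diffraction is precisely what this ray picture omits.

```python
import numpy as np

# Frequency-difference autoproduct for a single free-space ray
# (illustrative sketch; all parameter values are arbitrary).
c = 1500.0                                  # sound speed, m/s
ranges = np.linspace(200.0, 400.0, 64)      # receiver ranges, m
f_band = np.linspace(2000.0, 3000.0, 101)   # in-band frequencies, Hz
df = 100.0                                  # out-of-band difference frequency, Hz

def field(f, r):
    """Free-space ray field exp(ikr)/r."""
    return np.exp(2j * np.pi * f * r / c) / r

# Bandwidth-averaged frequency-difference autoproduct at each receiver.
ap = np.mean(field(f_band[None, :] + df / 2, ranges[:, None])
             * np.conj(field(f_band[None, :] - df / 2, ranges[:, None])), axis=1)

# Genuine out-of-band field at the difference frequency.
p_df = field(df, ranges)

# For a single ray the two spatial phase patterns coincide exactly.
phase_match = abs(np.vdot(np.exp(1j * np.angle(ap)),
                          np.exp(1j * np.angle(p_df)))) / len(ranges)
```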
11:00
5aPA6. Enhancing the convergence of fast multipole expansion at intermediate frequency. Hui Zhou (Univ. of Massachusetts Lowell, 1 University Ave., Lowell, MA 01854, hui_zhou@student.uml.edu)
In this work, we examined acoustic wave scattering from media having
a spatial variation in its compressibility contrast. Typically, the pressure in
the scattered field can be expressed as a Neumann series when the compressibility contrast is relatively small. However, divergence can occur due to
resonant scattering. It has been shown that the Padé approximant method can be used to extend the range of validity of the solution. The fast multipole expansion method is applied to evaluate the terms of the Neumann series. Particular
interest is paid to the numerical convergence of the translation operator used
in the fast multipole method.
11:20
5aPA7. Computer design, 3D printing, testing, and commercialization
of a revolutionary machine gun suppressor (silencer) design*. William
Moss (WCI, Lawrence Livermore National Lab., 7000 East Ave., Livermore, CA 94551, moss4@llnl.gov) and Andrew Anderson (ENG, Lawrence
Livermore National Lab., Livermore, CA)
Since their invention over 100 years ago, firearm suppressors have
achieved acoustic suppression using baffles and chambers to trap and delay
propellant gases from exiting the muzzle of a weapon. A modern suppressor
is functionally identical to the original 1908 design, with most of the
improvements made by lawyers trying to circumvent extant patents. We
have produced a flow-through suppressor that functions completely differently from all previous suppressors. We used a few rapid design cycles of
high performance computing, 3D printing of titanium prototypes, testing,
and analysis to create our suppressor, which has been patented and licensed
for commercialization. Ours is the only design to simultaneously limit blowback, flash, noise, and temperature. It will last the lifetime of the barrel on
single shot and fully automatic weapons, requires minimal maintenance,
and therefore, is the first practical suppressor for battlefield use. If adopted
for general use, the main benefit would be the reduction of debilitating long-term hearing loss, one of the most prevalent injuries in the military. [This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.]
10:20–10:40 Break
3968
J. Acoust. Soc. Am., Vol. 141, No. 5, Pt. 2, May 2017
Acoustics ’17 Boston
3968
11:40
5aPA8. Utilizing a discontinuous Galerkin method for solving Galbrun’s equation in the frame of aeroacoustics. Marcus Guettler (Faculty of Mech. Eng., Tech. Univ. of Munich, Boltzmannstr. 15, Munich 85748, Germany, marcus.guettler@tum.de) and Steffen Marburg (Faculty of Mech. Eng., Tech. Univ. of Munich, Muenchen, Germany)
In the research field of aeroacoustics, scientists and engineers have developed a broad variety of mathematical formulations to investigate flow-induced noise numerically in an early stage of product design and development. Besides the already established theories such as the linearized Euler equations (LEE), the linearized Navier-Stokes equations (LNSE), and the acoustic perturbation equations (APE), which are all written in a pure Eulerian frame, Galbrun utilized a mixed Eulerian-Lagrangian frame to reduce the number of unknowns. Despite the advantages of fewer degrees of freedom and the reduced effort to solve the system equations, the usual finite element method suffers from instabilities called spurious modes that pollute the solution, leaving useless results. In this work, the authors apply a discontinuous Galerkin method to overcome the difficulties related to spurious modes when solving Galbrun’s equation in a mixed and pure displacement-based formulation. The results achieved with the novel approach are compared with results from former attempts to solve Galbrun’s equation.
THURSDAY MORNING, 29 JUNE 2017
ROOM 311, 7:55 A.M. TO 12:20 P.M.
Session 5aPPa
Psychological and Physiological Acoustics, Speech Communication, ASA Committee on Standards,
Architectural Acoustics, and Signal Processing in Acoustics: Speech Intelligibility in Adverse
Environments: Behavior and Modeling I
Virginia Best, Cochair
Dept. Speech, Language and Hearing Sciences, Boston University, 635 Commonwealth Ave., Boston, MA 02215
Mathieu Lavandier, Cochair
ENTPE/LGCB, Univ. Lyon, Rue M. Audin, Vaulx-en-Velin 69518, France
Chair’s Introduction—7:55
Invited Papers
8:00
5aPPa1. Speech intelligibility in complex environments: Modeling and possible applications. H. Steven Colburn and Jing Mi (Biomedical Eng., Boston Univ., 44 Cummington Mall, Boston, MA 02215, colburn@bu.edu)
A summary of recent modeling work from our lab on the topic of speech intelligibility in complex environments will be presented,
including some results from experiments designed to evaluate the models. The primary focus will be models of binaural processing,
with the dual goals of (1) understanding the processing within the brain and (2) suggesting possible strategies for external processing
that could provide more useful acoustic inputs for listeners with impaired hearing. Several processing algorithms based on allocation of
individual time-frequency slices will be considered, including one based on EC processing and one based on local (in time-frequency)
estimates of interaural time and intensity differences and interaural coherence. Performance of these algorithms will be evaluated using
statistics of source-separation accuracy (with the ideal binary mask as the “gold standard”) and also using human listening experiments. In the listening experiments, the waveforms generated by combining the time-frequency slices selected for the target location are presented to the subject in tests of speech intelligibility. Performance with these waveforms is compared to performance with the original binaural waveforms and to performance in collocated conditions. [Work supported by NIH/NIDCD Grant 2R01DC000100.]
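The ideal-binary-mask reference mentioned above can be sketched in a few lines (a minimal illustration, not the lab's processing: a sinusoid stands in for speech, and the algorithms under study estimate masks from interaural cues rather than from the separated signals, which is what makes this mask "ideal" rather than realizable).

```python
import numpy as np

# Ideal binary mask applied to STFT time-frequency slices (illustrative).
rng = np.random.default_rng(1)
fs, n = 8000, 4096
target = np.sin(2 * np.pi * 440 * np.arange(n) / fs)  # stand-in for speech
masker = rng.standard_normal(n)
mix = target + masker

win, hop = 256, 128
w = np.hanning(win)

def stft(x):
    return np.fft.rfft(np.array([w * x[i:i + win]
                                 for i in range(0, n - win, hop)]), axis=1)

def ola(Z):  # overlap-add resynthesis
    y = np.zeros(n)
    for k, i in enumerate(range(0, n - win, hop)):
        y[i:i + win] += np.fft.irfft(Z[k], win)
    return y

S, N, X = stft(target), stft(masker), stft(mix)
mask = np.abs(S) > np.abs(N)        # keep cells where the target dominates
out, ref, mix_r = ola(X * mask), ola(S), ola(X)

def snr_db(sig, ref):
    return 10 * np.log10(np.sum(ref**2) / np.sum((sig - ref)**2))

improvement = snr_db(out, ref) - snr_db(mix_r, ref)  # benefit of the mask, dB
```

Zeroing each cell in which the masker is locally stronger removes more masker energy than target energy by construction, which is why this oracle mask upper-bounds realizable time-frequency selection schemes.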
8:20
5aPPa2. Blind modeling of binaural unmasking of speech in stationary maskers. Christopher F. Hauth, Stephan D. Ewert, and
Thomas Brand (Medizinische Physik and Cluster of Excellence Hearing4All, Universität Oldenburg, Ammerländer Heerstr. 114-118,
Oldenburg D-26129, Germany, thomas.brand@uni-oldenburg.de)
The equalization cancellation (EC) model predicts the binaural masking level difference by equalizing interaural differences in level
and time and increasing the signal-to-noise ratio (SNR) using destructive and constructive interference. The EC model has been successfully combined with the speech intelligibility index (SII) to predict binaural speech intelligibility. Here a blind EC model is introduced that relies solely on the mixture of speech and noise, replacing the unrealistic requirement of the separated clean speech and noise
signals in previous versions. The model uses two parallel EC paths, which either maximize or minimize the EC output level in each
3969
J. Acoust. Soc. Am., Vol. 141, No. 5, Pt. 2, May 2017
Acoustics ’17 Boston
3969
frequency band. If SNR is negative, minimization improves the SNR by removing the interferer component from the mixed signal. If
SNR is positive, maximization improves the SNR by enhancing the target component. Either the minimizing or maximizing path in each
frequency band is selected blindly based on envelope frequency-selective amplitude modulation (AM) analysis. The model is evaluated
for speech in stationary speech shaped noise in different spatial configurations. The suggested AM-steered selection in the EC stage
demonstrates that a simple signal driven process can be used to explain binaural unmasking of speech in humans, disregarding localization and higher-level processes.
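The cancellation path of the EC stage can be illustrated with a toy two-ear signal. This is a sketch, not the blind multi-band model of the abstract: here the masker ITD is simply known, white noise stands in for speech and masker, and a small independent "internal" noise at each ear keeps the benefit finite, as in standard EC formulations.

```python
import numpy as np

# Toy EC cancellation path: frontal target, lateral masker with known ITD.
rng = np.random.default_rng(2)
n, d = 8000, 8                      # signal length; masker ITD in samples
t = rng.standard_normal(n)          # frontal target: identical at both ears
m = rng.standard_normal(n + d)      # lateral masker: reaches the ears d apart
e1 = 0.05 * rng.standard_normal(n)  # small internal noise, left ear
e2 = 0.05 * rng.standard_normal(n)  # small internal noise, right ear

# Ear signals would be: left = t + m[d:] + e1, right = t + m[:n] + e2.
snr_in = 10 * np.log10(np.sum(t**2) / np.sum((m[d:] + e1)**2))

# Equalize (delay one ear by the masker ITD) and cancel (subtract): the
# masker components align and vanish; the target survives as a difference.
t_out = t[:n - d] - t[d:]           # target after EC (high-pass filtered)
i_out = e1[:n - d] - e2[d:]         # masker cancels exactly; only the
                                    # internal noise remains
snr_out = 10 * np.log10(np.sum(t_out**2) / np.sum(i_out**2))
bmld = snr_out - snr_in             # binaural unmasking benefit, dB
```

The blind model's contribution is deciding, per frequency band and without access to the clean signals, whether this minimization path or the complementary maximization path should be selected; the amplitude-modulation analysis in the abstract makes that choice.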
8:40
5aPPa3. Speech intelligibility and spatial release from masking in maskers with different spectro-temporal modulations. Wiebke
Schubotz, Thomas Brand, and Stephan D. Ewert (Medizinische Physik and Cluster of Excellence Hearing4all, Universität Oldenburg,
Carl-von-Ossietzky Str. 9-11, Oldenburg 26129, Germany, Stephan.ewert@uni-oldenburg.de)
Speech-reception thresholds (SRTs) decrease as target and maskers are spatially separated (spatial release from masking, SRM) even
if two maskers are symmetrically placed around the listener’s head. In this case, speech intelligibility (SI) cannot be explained by an
improved long-term signal-to-noise ratio (SNR) caused by the head shadow at one better ear alone, but could be facilitated by short-term spectro-temporal segments (“glimpses”) in each ear that provide favorable SNRs. Here it was systematically assessed how SRT
and SRM depend on the spectro-temporal masker properties and on the availability of specific binaural cues for a frontal target in a symmetric masker setup. Maskers ranged from stationary noise to single, interfering talkers. Maskers were modified by head-related transfer
functions providing different binaural cues (interaural level and time differences; ILD, ITD, both), by presenting only glimpses derived
with a fast-switching better-ear mechanism, and an “infinite ILD,” removing crosstalk of the maskers between the ears. Results were
compared to model predictions showing that spectral cues contribute to SRM for all maskers, while ITD and ILD cues were more important for modulated maskers. The “infinite ILD” condition suggests binaural processing limitations, resulting in a maximal SRM of 12 dB
for low or absent informational masking.
9:00
5aPPa4. An account for the spatial advantage in multitalker situations based on glimpses. Esther Schoenmaker and Steven van de
Par (Acoust. Group, Cluster of Excellence “Hearing4All,” Univ. of Oldenburg, Carl von Ossietzkystrasse 9-11, Oldenburg D-26129,
Germany, esther.schoenmaker@uni-oldenburg.de)
Spectro-temporal regions with a high local signal-to-noise ratio (SNR), so-called glimpses, play a vital role in the intelligibility of
target speech against fluctuating interferers (e.g., concurrent speech signals). These glimpses provide access to reliable information on
local signal properties. In a situation with spatially separated speech sources, a spatial advantage relative to a situation with collocated
sources can be observed. This advantage is generally conceived to be composed of a monaural contribution due to better-ear listening,
and a binaural contribution due to either binaural unmasking or segregation supported by spatial cues. A previous study [Schoenmaker
and van de Par (2016), Adv. Exp. Med. Biol. 894, 73-81] provided evidence against the use of binaural unmasking and in favor of spatial
segregation based on spatial cues extracted from glimpses. New data suggest that the better-ear contribution relies on the amount of target speech in glimpses, rather than the global SNR of the masked target speech. Together this suggests that all cues used for speech intelligibility in spatial multitalker situations are obtained from well-audible glimpses. Specifically, better-ear listening provides monaural
cues to the target speech, while binaural listening provides spatial cues that improve allocation of extracted information to the correct
talkers.
9:20
5aPPa5. The speech-based envelope power spectrum model (sEPSM) family: Development, achievements, and current challenges. Helia Relaño-Iborra (Dept. of Elec. Eng., Tech. Univ. of Denmark, Ørsteds Plads, Bldg. 352, Kgs. Lyngby 2800, Denmark, heliaib@elektro.dtu.dk), Alexandre Chabot-Leclerc (Dept. of Elec. Eng., Tech. Univ. of Denmark, Kongens Lyngby, Denmark), Christoph
Scheidiger, Johannes Zaar, and Torsten Dau (Dept. of Elec. Eng., Tech. Univ. of Denmark, Kgs. Lyngby, Denmark)
Intelligibility models provide insights regarding the effects of target speech characteristics, transmission channels and/or auditory
processing on the speech perception performance of listeners. In 2011, Jørgensen and Dau proposed the speech-based envelope power
spectrum model [sEPSM, Jørgensen and Dau (2011). J. Acoust. Soc. Am. 130(3), 1475-1487]. It uses the signal-to-noise ratio in the
modulation domain (SNRenv) as a decision metric and was shown to accurately predict the intelligibility of processed noisy speech. The
sEPSM concept has since been applied in various subsequent models, which have extended the predictive power of the original model to
a broad range of conditions. This contribution presents the most recent developments within the sEPSM “family:” (i) A binaural extension, the B-sEPSM [Chabot-Leclerc et al. (2016). J. Acoust. Soc. Am. 140(1), 192-205] which combines better-ear and binaural unmasking processes and accounts for a large variety of spatial phenomena in speech perception; (ii) a correlation-based version [Relaño-Iborra
et al. (2016). J. Acoust. Soc. Am. 140(4), 2670-2679] which extends the predictions of the early model to non-linear distortions, such as
phase jitter and binary mask-processing; and (iii) a recent physiologically inspired extension, which makes it possible to account for effects of individual hearing impairment on speech perception.
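The SNRenv decision metric at the core of this model family can be sketched for a single modulation band. This is illustrative only: the published sEPSM uses gammatone filtering, a bank of modulation filters, and an ideal-observer back end, none of which appear here, and the signals are synthetic stand-ins.

```python
import numpy as np

# One-band SNRenv sketch: envelope power near a modulation frequency,
# for the mixture vs. the noise alone (not the published model).
rng = np.random.default_rng(4)
fs, dur, fm = 8000, 4, 4.0          # sample rate, duration (s), mod. freq (Hz)
n = fs * dur
time = np.arange(n) / fs

def envelope(x):
    """Magnitude of the analytic signal (FFT-based Hilbert transform)."""
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = h[n // 2] = 1.0
    h[1:n // 2] = 2.0
    return np.abs(np.fft.ifft(X * h))

def mod_power(x, half_bw=1.0):
    """Envelope power near fm, normalized by the DC envelope power."""
    E = np.fft.rfft(envelope(x))
    f = np.fft.rfftfreq(n, 1 / fs)
    band = np.abs(f - fm) < half_bw
    return np.sum(np.abs(E[band])**2) / np.abs(E[0])**2

def snr_env(speech_like, noise, snr_db):
    g = 10 ** (-snr_db / 20)        # scale the noise to the requested SNR
    p_mix = mod_power(speech_like + g * noise)
    p_n = mod_power(g * noise)
    return max(p_mix - p_n, 1e-12) / p_n

speech_like = (1 + 0.8 * np.sin(2 * np.pi * fm * time)) * rng.standard_normal(n)
noise = rng.standard_normal(n)

# Higher input SNR -> more target modulation survives -> larger SNRenv.
hi, lo = snr_env(speech_like, noise, 3.0), snr_env(speech_like, noise, -9.0)
```

The design choice this illustrates is the one named in the abstract: the decision metric lives in the modulation domain, so processing that preserves waveform SNR but smears envelope modulations (e.g., reverberation) still degrades the prediction.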
9:40
5aPPa6. A model predicting the effect of audibility on speech reception thresholds and spatial release from masking. Mathieu
Lavandier (ENTPE/LGCB, Univ. Lyon, Rue M. Audin, Vaulx-en-Velin 69518, France, mathieu.lavandier@entpe.fr), Jorg M. Buchholz
(Linguist, Macquarie Univ., Chatswood, NSW, Australia), and Baljeet Rana (Linguist, Macquarie Univ., Sydney, NSW, Australia)
A binaural model is proposed to predict the effect of audibility on speech reception thresholds (SRTs) measured in the presence of
two (unintelligible) vocoded-speech maskers which were either (artificially) spatially separated or co-located with the frontal speech target. Comparing these two configurations made it possible to evaluate a spatial release from masking (SRM), which was based here primarily on
better-ear glimpsing. Audibility was varied by testing four sound levels for the combined maskers (while the target level was varied relative to these reference levels to measure the SRTs). The proposed model is based on a short-term binaural speech intelligibility model
described by Collin & Lavandier [J. Acoust. Soc. Am. 134, 1146-1159 (2013)] and takes the calibrated target and masker signals (independently) at each ear as inputs along with the listener hearing thresholds in order to calculate a binaural “effective” signal-to-noise ratio. Differences in ratio across conditions can be directly compared to differences in SRT. This model allows a good prediction of the
decrease of SRT as well as the increase of SRM with increasing audibility/levels. The average absolute error between data and prediction was about 1 dB across the tested conditions.
10:00–10:20 Break
10:20
5aPPa7. Understanding effects of hearing loss on multitalker speech intelligibility in terms of glimpsing. Virginia Best, Christine
Mason, Elin Roverud (Dept. Speech, Lang. and Hearing Sci., Boston Univ., 635 Commonwealth Ave., Boston, MA 02215, ginbest@bu.edu), Jayaganesh Swaminathan (Starkey Hearing Res. Ctr., Berkeley, CA), and Gerald Kidd (Dept. Speech, Lang. and Hearing Sci., Boston Univ., Boston, MA)
In multitalker mixtures, listeners with hearing loss demonstrate reduced spatial release from masking compared to listeners with normal hearing. However, it is not clear whether this problem reflects an inability to use spatial cues to segregate sounds, or a degraded representation of the target speech itself. In this work, a simple monaural glimpsing model was used to isolate the target information that is
potentially available at each ear in spatialized speech mixtures, and intelligibility of these glimpsed stimuli was then measured directly.
Performance in the glimpsed condition was strongly correlated with performance in the natural spatial condition, suggesting a common
limit in both cases. Similar results were found for different kinds of speech mixtures in which the target and maskers were distinguished
by cues other than spatial location (talker sex, or time-reversal). The results suggest that the primary detrimental effect of hearing loss
might be on the representation of target glimpses, rather than on the segregation of competing talkers, in multitalker mixtures.
10:40
5aPPa8. The effect of listener head orientation on speech intelligibility in noise. Jacques A. Grange and John F. Culling (School of
Psych., Cardiff Univ., Park Pl., Cardiff, Wales CF10 3AT, United Kingdom, CullingJ@cf.ac.uk)
The signal-to-noise ratio at one ear is progressively improved by orienting the head away from the target sound source by up to 65
degrees, so facing the speaker in a sidelong way may be an effective listening tactic in noisy listening situations. In a listening configuration that optimized this effect in a sound-treated room, speech reception thresholds improved by up to 10 dB with head orientation, but
without instruction few listeners adopted the tactic. Audio-visual presentation of the target speech further suppressed its use [Grange and
Culling, J. Acoust. Soc. Am. 139, 703-712 (2016)]. Audio-visual presentation produced an additive lip-reading benefit that was unaffected by a head orientation 30 degrees away from the speaker. A head-orientation benefit was also observed in realistic listening conditions, simulated over headphones. Binaural room impulse responses from a real restaurant were used to simulate listening at six different
tables with nine concurrent interferers. Head orientation of 30 degrees produced a mean benefit of 1 dB for speech interferers and 1.3 dB
for noise interferers [Grange and Culling, J. Acoust. Soc. Am. 140, 4061-4072 (2016)]. These results suggest that listeners would benefit
from advice to orient away from the speaker while maintaining eye contact.
11:00
5aPPa9. Objective evaluation of binaural noise-reduction algorithms for the hearing-impaired in complex acoustic scenes. Marc René Schädler, Anna Warzybok, and Birger Kollmeier (Medizinische Physik and Cluster of Excellence Hearing4all, Universität Oldenburg, Oldenburg D-26111, Germany, a.warzybok@uni-oldenburg.de)
The simulation framework for auditory discrimination experiments (FADE) was used to predict the benefit in speech reception thresholds (SRT) with the German matrix sentence test when using a range of single- and multi-channel noise-reduction algorithms in complex acoustic conditions. FADE uses a simple robust automatic speech recognizer to predict SRTs from simulated speech recognition experiments in an objective way, independent from any empirical reference data. Here, it was extended with a simple binaural stage and individualized by taking into account the audiogram. Empirical data from the literature were used to evaluate the model in terms of predicted SRTs and benefits in SRT when using eight different noise-reduction algorithms. In a realistic binaural cafeteria condition, FADE explained about 90% of the variance of the empirical SRTs for normal-hearing listeners and predicted the corresponding benefits in SRT with a root-mean-square prediction error of 0.6 dB. In contrast to the surprisingly high performance of this simple approach for normal-hearing listeners, much less of the inter-individual variance can be explained for hearing-impaired listeners, where individual audiograms only sufficed for aided group performance predictions. In a single competing talker condition, the prediction even failed for the normal-hearing listeners, clearly demonstrating the limits of the current versatile approach and demanding further extensions.
11:20
5aPPa10. The interaction between reverberation and digital noise reduction in hearing aids: Acoustic and behavioral effects. Paul Reinhart (Commun. Sci. and Disord., Northwestern Univ., 2240 Campus Dr., Evanston, IL 60208, preinhart@u.northwestern.edu), Pavel Zahorik (Univ. of Louisville, Louisville, KY), and Pamela Souza (Commun. Sci. and Disord., Northwestern Univ., Evanston, IL)
Digital noise reduction (DNR) is widely implemented in hearing aids to improve the signal-to-noise ratio (SNR) of speech in noise. To accomplish this, the DNR processor must be able to accurately discriminate the speech and noise signals. Acoustically, reverberation causes blending of the speech and noise signals. The purpose of the present experiment was to examine whether reverberation impacts the benefits of DNR processing. Speech stimuli were combined with white noise at multiple SNRs. Speech-in-noise signals were processed using virtual auditory space techniques to simulate reverberation times and with a DNR simulation that mimicked hearing aid processing based on spectral subtraction. Signals were acoustically analyzed to quantify changes in SNR as a result of DNR processing. As reverberant degradation increased, the improvement in SNR decreased. Behaviorally, hearing-impaired individuals listened to low-context sentences in noise with varying reverberation, either with or without DNR processing. Without reverberation, DNR had no or minimal impact on speech intelligibility, consistent with previous work. However, as reverberant degradation increased, the effects of DNR on speech intelligibility were variable. These results suggest that the benefit of DNR processing in hearing aids in noisy environments may depend on the amount of reverberation in the environment. [Work supported by NIH.]
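The core of a spectral-subtraction scheme of the kind simulated in this study can be sketched as follows. This is a minimal illustration, not the study's simulator: a sinusoid stands in for speech, the noise estimate comes from an oracle noise-only segment, and real hearing-aid DNR adds noise tracking, temporal smoothing, and gain limits.

```python
import numpy as np

# Minimal spectral subtraction (illustrative; parameters are arbitrary).
rng = np.random.default_rng(3)
fs, win, hop = 8000, 256, 128
w = np.hanning(win)

tgt = np.sin(2 * np.pi * 500 * np.arange(fs) / fs)  # stand-in "speech", 1 s
noise = 0.5 * rng.standard_normal(fs)
mix = tgt + noise
lead_in = 0.5 * rng.standard_normal(fs)             # noise-only segment

def stft(x):
    return np.fft.rfft(np.array([w * x[i:i + win]
                                 for i in range(0, len(x) - win, hop)]), axis=1)

def ola(Z, length):
    y = np.zeros(length)
    for k, i in enumerate(range(0, length - win, hop)):
        y[i:i + win] += np.fft.irfft(Z[k], win)
    return y

# Average noise magnitude spectrum, estimated from the noise-only segment.
N_hat = np.mean(np.abs(stft(lead_in)), axis=0)

# Subtract it from each frame's magnitude (floored at zero), keep the phase.
X = stft(mix)
Y = np.maximum(np.abs(X) - N_hat, 0.0) * np.exp(1j * np.angle(X))

out, ref, mix_r = ola(Y, fs), ola(stft(tgt), fs), ola(X, fs)

def snr_db(sig, ref):
    return 10 * np.log10(np.sum(ref**2) / np.sum((sig - ref)**2))

gain = snr_db(out, ref) - snr_db(mix_r, ref)        # SNR benefit of DNR, dB
```

Reverberation blurs exactly the speech/noise distinction this subtraction relies on, which is consistent with the shrinking acoustic SNR benefit reported above.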
Contributed Papers
11:40
5aPPa11. The effect of room acoustics on speech intelligibility and spatial release from masking. Thomas Biberger and Stephan D. Ewert (Medizinische Physik and Cluster of Excellence Hearing4All, Universität
Oldenburg, Carl-von-Ossietzky-Straße 9-11, Oldenburg, Lower Saxony
26135, Germany, thomas.biberger@uni-oldenburg.de)
In daily life, verbal communication often takes place in indoor situations
with interfering sounds, where speech intelligibility (SI) is affected by (i)
masking and (ii) reverberation. Both introduce spectral and temporal
changes to the signal. A critical spatial configuration to assess (binaural) SI
is a frontal target speaker and two interfering sources symmetrically placed
to either side (±60°). Here a spatial release from masking (SRM) is
observed in comparison to co-located frontal target and interferers, showing
that the auditory system can make use of temporally fluctuating interaural
differences. Room reverberation affects the temporal representation of the
target and maskers and, moreover, the interaural differences depending on
the spatial configuration and room acoustical properties. Here the effect of
room acoustical properties (room size, T60, frequency dependency of T60),
temporal structure of the interferers, and direct to reverberation ratio (DRR)
on speech reception thresholds (SRT) and SRM were systematically
assessed in a simulated room using headphone-based virtual acoustics. For
constant T60 and DRR a different room size resulted in, e.g., significantly
different SRTs but similar SRMs, implying the temporal structure of
reverberation is less relevant for exploiting binaural cues. Data are discussed
and compared to predictions of a binaural SI model.
12:00
5aPPa12. On the relationship between a short-term objective metric
and listening efficiency data for different noise types. Nicola Prodi and
Chiara Visentin (Dipartimento di Ingegneria, Università di Ferrara, via Saragat 1, Ferrara 44122, Italy, nicola.prodi@unife.it)
This study aims to compare the distinct effects of a steady-state (SSN)
and a fluctuating (ICRA) masker on speech reception performance. SNR,
reverberation, and masker type were combined so as to create several acoustic scenarios; matrixed-word listening tests in the Italian language were administered to a panel of young adults with normal hearing, collecting data on intelligibility scores (IS) and response times (RT). The listening conditions were objectively characterized with the short-term metric STIr, defined as the average of the STI values calculated over short time windows whose duration reflects the typical phoneme length. The results showed that, for a given STIr, both maskers yield the same IS, the fluctuation benefit being already accounted for by the objective metric. The slope of the STIr-IS function depends only on the speech material. However, the fluctuating masker calls for an increased amount of cognitive resources to be deployed in the speech reception process, as traced by a statistically significantly higher response time. These results shed new light on the fluctuating masker release (FMR) phenomenon.
THURSDAY MORNING, 29 JUNE 2017
ROOM 300, 8:00 A.M. TO 12:20 P.M.
Session 5aPPb
Psychological and Physiological Acoustics: Sound Localization and Binaural Hearing
Griffin D. Romigh, Chair
Air Force Research Labs, 2610 Seventh Street, Area B, Building 441, Wright Patterson AFB, OH 45433
Contributed Papers
8:00
5aPPb1. Influence of interaural time differences on the loudness of low-frequency pure tones at varying signal and noise levels. Gauthier Berthomieu, Vincent Koehl, and Mathieu Paquier (Lab-STICC UMR 6285, Univ. of Brest, 6 Ave. Le Gorgeu, Brest 29200, France, gauthier.berthomieu@univ-brest.fr)
Directional loudness sensitivity, which is generally accounted for by at-ear pressure modifications caused by the perturbation of the sound field by the head, has been reported to occur at 400 Hz, where shadowing effects are usually considered small. An effect of the interaural time difference (ITD) on loudness has since been observed for pure tones below 500 Hz. The latter was rather small but still significant, contributing to directional loudness sensitivity. In addition, it has been shown that the effect of ITD on loudness was caused by the ITD itself and not by its related localization. As this effect appeared significant at low level only (40 phon), it was hypothesized that ITD could help separate the signal from the internal noise and enhance its loudness. The aim of the present study is to confirm this hypothesis by observing the effect of ITD on the loudness of low-frequency pure tones (100 and 200 Hz) for various signal-to-noise ratios. The signal level was varied from 30 to 90 phon and the noise could be internal only or external as well. The effect of ITD appeared significant up to 40 or 50 phon, depending on the frequency.
8:20
5aPPb2. Computational study of head geometry effects on sound pressure gradient with applications to head-related transfer function. Mahdi Farahikia and Quang T. Su (Mech. Eng., SUNY Binghamton, 13 Andrea Dr. Apt. A, Vestal, NY 13850, mfarahi1@binghamton.edu)
Effects of object geometry on the scattering of far-field incident sound waves for two different head models (spherical and ellipsoidal) have been studied using an optimized Finite Element Method (FEM) based on frequency-dependent adaptive dimensions. This optimized FEM technique proves both efficient and accurate when compared with analytical results for the spherical model, with a maximum deviation of 0.6 dB. Comparisons between models have been made on the equivalent Head-Related Transfer Functions (HRTFs) for acoustic pressure, and for the first- and second-order pressure gradients on the surface. It is shown that while directionality cannot be achieved at lower frequencies using only pressure, pressure gradients provide sound cancellation for certain source orientations. Hence, it is possible to cancel incoming sound from the front (or behind) or the sides depending on the direction (radial, azimuthal) and order of the pressure gradients. While the pattern of pressure gradient directionality remains similar between the spherical and ellipsoidal models, the difference in dimensions affects the amplitude of the equivalent HRTFs for these parameters. This study provides insight into the placement of directional microphones on the head for hearing aids.
5aPPb3. My personal head related transfer function: High quality individualized computer models. Sebastian Fingerhuth, Danny Angles, and Juan Barraza (School of Elec. Eng., Pontificia Universidad Católica de Valparaíso, Av. Brasil 2147, Valparaíso 2362804, Chile, sebastian.fingerhuth@pucv.cl)
In this paper, we present a method to obtain individual 3D CAD-models
of the head and pinna. The method is a hybrid method: A photogrammetric
technique for the head is combined with a molding process for the pinna.
For the 3D reconstruction of the head, a set of pictures of the person is taken from different perspectives: the person is rotated stepwise while seated, and pictures are taken using semi-professional photographic
equipment. Four different orbits are used: top level, upper level, eye level,
and low level. These pictures, about 120 in total, are then combined using commercial photogrammetric software. Due to the complex and concave geometry of the pinna, an additional step has to be taken. An alginate mold is
made for each pinna which then is molded again to obtain a positive plaster
replica of the pinna. This replica is then converted to a CAD model. For this
last step, two methods were compared: the same photogrammetric process
as before and using a 3D scanner. Both CAD models, head and pinna, are
then carefully combined into one CAD mesh. The CAD-models can then be
used to compute HRTFs by means of the Boundary Element Method
(BEM).
9:00
5aPPb4. Predicting sound-localization performance with hearing-protection devices using computational auditory models. Paul Calamia,
Christopher Smalt, Shakti K. Davis, and Austin Hess (BioEng. Systems and
Technologies Group, MIT Lincoln Lab., 244 Wood St., Lexington, MA
02420, pcalamia@ll.mit.edu)
Evaluation of the effect of hearing-protection devices (HPDs) on auditory tasks such as detection, localization, and speech intelligibility typically
is done with human-subject testing. However, such data collections can be
impractical due to the time-consuming processes of subject recruitment and
the testing itself, particularly when multiple tasks and HPDs are included.
An alternative, objective testing protocol involves the use of a binaural mannequin (a.k.a. an acoustic test fixture) and computational models of the auditory system. For example, data collected at the eardrums of such a
mannequin outfitted with an HPD can be fed into a binaural localization
model. If the performance of the model with such input can be shown to be
similar to that of human subjects, the model-based assessment may be sufficient to characterize the hearing protector and inform further design decisions. In this presentation we will describe the preliminary results of an
effort to replicate human-subject localization performance for 5 HPDs and
the open ear using an acoustic test fixture and three auditory localization
models. The task involved localizing the direction of a gun-cocking sound
from the center of a 24-loudspeaker ring. Variations among the models, as
well as a comparison to the human-subject data will be discussed. [Work
sponsored by US Army NSRDEC.]
9:20
5aPPb5. Localization of speech in noise with and without self-directed
room exploration. Samuel W. Clapp and Bernhard U. Seeber (Audio Information Processing, Technische Universität München, Arcisstr. 21, München
80333, Germany, samuel.clapp@tum.de)
The auditory system adapts to reflections by suppressing them in favor
of the direct sound, aiding greatly in sound localization and speech understanding in reverberation. Previous studies have shown an increase in the
echo threshold over repeated exposure to a reflection pattern, and the breakdown of this effect when the pattern changes suddenly. These studies examined this effect as a low-level cognitive process. The current study gives
listeners a more active role in adapting to reflections, to see if this can
engage higher-level processes more durable over time. In a psychoacoustic
test, listeners localized a short speech signal in the presence of either point-like or diffuse noise in a simulated room, using the Simulated Open Field
Environment for simulation and playback over a loudspeaker array. In the
“Block” condition, subjects heard stimuli from the same room in all trials.
In the “Interrupt” condition, trials in the target room were interspersed with
dummy trials from different spaces. In the “Learning” condition, listeners
used a GUI to position a virtual source in a room and hear the resulting auralizations before each block of trials, conducted as in the “Interrupt” condition. Localization was most accurate in the “Learning” condition,
particularly in the presence of the point-like noise source, and for target
locations the furthest angular distance from the noise. These results suggest
it is possible for actively engaged listeners to learn the acoustics of a room
in a temporally durable manner.
9:40
5aPPb6. Binaural detection-based estimates of precision of coding of
interaural temporal disparities across center frequency. Leslie R. Bernstein and Constantine Trahiotis (UConn Health Ctr., MC3401, 263 Farmington Ave., Farmington, CT 06030, Les@neuron.uchc.edu)
This presentation reports binaural detection data obtained using three
masking configurations previously shown, in toto, to reveal: (1) whether listeners can compensate internally for external interaural temporal disparities
(ITDs) and (2) the precision with which such compensation of ITD changes
as a function of reference ITD [L. R. Bernstein and C. Trahiotis, J. Acoust.
Soc. Am. 138, EL474-EL479 (2015)]. The current experiment extended that
paradigm in order to obtain binaural detection data at center frequencies that
ranged in octave steps from 250 Hz to 8 kHz. Results indicate that, at all
center frequencies, listeners are able to compensate internally for externally
imposed ITDs. Such compensation allows them to outperform what would
be expected were they only utilizing changes in interaural correlation produced by the addition of the tonal signal to the noise masker. The data will
be discussed in terms of quantitative analyses that show how detection performance is constrained by two distinct sources of interaurally and mutually
uncorrelated internal noise: one having a power that increases with the
power of the external masker and the other having a power that grows with
the magnitude of the ITD.
10:00–10:20 Break
10:20
5aPPb7. A model of the ongoing precedence effect. Patrick Zurek (Sensimetrics Corp., 14 Summer St., Malden, MA 02148, pat@sens.com) and
Richard L. Freyman (Commun. Disord., Univ. of Massachusetts, Amherst,
MA)
The precedence effect is the observation that, when a sound is repeated
after a delay of a few milliseconds, the interaural cues of the leading sound
have greater influence on localization and lateralization judgments of the
composite sound than do the cues of the lagging sound. Classic studies of
the precedence effect with transient stimuli have suggested auditory mechanisms that emphasize earlier-arriving cues. More recent studies have shown,
however, that non-transient sounds—steady, long-duration noises with slow
onset—also give rise to the precedence effect. This ongoing precedence
effect is difficult to explain with the abrupt-onset mechanisms invoked to
explain the effect with transients. The present model of the ongoing effect
assumes that listeners use interaural cues within auditory bands to form
band-specific lateral positions, which are then integrated across bands.
Within-band positions are estimated from the interaural level difference and
the interaural phase during the rising side of bandpass envelope modulations. Cross-band integration includes a mechanism that favors cues that are
consistent across wider frequency regions. Predictions from the model will
be compared to published data. [Work supported by NIDCD R01 01625.]
10:40
5aPPb8. Improving interaural time difference sensitivity using short
interpulse intervals with vowel-like stimuli in bilateral cochlear
implants. Sridhar Srinivasan (Acoust. Res. Inst., Austrian Acad. of Sci.,
Wohllebengasse 12-14/1, Wien, Wien 1040, Austria, ssri.oeaw@gmail.
com), Bernhard Laback (Acoust. Res. Inst., Austrian Acad. of Sci., Vienna,
Austria), and Piotr Majdak (Acoust. Res. Inst., Austrian Acad. of Sci.,
Wien, Austria)
Interaural time differences (ITDs) in the signal are important for sound
localization in the lateral dimension. However, even under laboratory stimulus control, ITD sensitivity of cochlear-implant (CI) listeners is poor at pulse
rates commonly used for encoding speech. Recently, improvements in ITD
sensitivity were shown for unmodulated high-rate pulse trains with extra
pulses at short interpulse intervals (SIPIs). In this study, we extended this
approach to more realistic stimuli, i.e., high-rate (1000 pulses-per-second)
pulse trains with vowel-like temporal envelopes. Using fixed SIPI parameters derived from the preceding study, we independently varied the timing
of the extra pulses across the fundamental frequency (F0) period, the modulation depth (0.1, 0.3, 0.5, 0.7, and 0.9), and the F0 frequency (125 and 250
Hz). Our results show largest improvements in ITD sensitivity for SIPIs at
the rising and peak portions of the F0 period and for larger modulation
depths. These findings may be useful for enhancing sound localization cues
with bilateral CI strategies.
11:00
5aPPb9. Interactive simulation and free-field auralization of acoustic
space with the rtSOFE. Bernhard U. Seeber and Samuel W. Clapp (Audio
Information Processing, Technische Universität München, Arcisstrasse 21,
Munich 80333, Germany, seeber@tum.de)
The Simulated Open Field Environment (SOFE), a loudspeaker setup in
an anechoic chamber to render sound sources along with their simulated,
spatialized reflections, has been used for more than two decades in free-field
hearing research. In 2004, the concept was revised to incorporate room-acoustic simulation software that computes sound reflections in arbitrarily shaped rooms and auralizes them via many loudspeakers—the principle of
various systems used today (Hafter and Seeber, ICA 2004). For a complete
redesign of the system, an anechoic chamber has been purpose-built at
TUM and I will talk about its specifications. The anechoic chamber hosts
the real-time SOFE (rtSOFE), a setup with 61 loudspeakers to create a spatial sound field in a 5 m × 5 m area along with 360° of visual 3D projection.
New room-acoustic simulation software for interactive computation of
reflections computes room impulse responses in sub-millisecond intervals
and updates a convolution system capable of convolving seconds-long
impulse responses for many independent loudspeaker channels with very
short latency. I will present the general concept and capabilities of the new
rtSOFE, give details about its implementation and first experimental results.
The rtSOFE in the new anechoic chamber at TUM forms a cutting edge
research facility for interactive psychoacoustic and audio-visual research in
virtual acoustic space.
11:20
5aPPb10. Update on sound quality assessment with TWO!EARS.
Alexander Raake, Janto Skowronek, Hagen Wierstorf (Inst. of Media Technol., Audiovisual Technol. Group, Tech. Univ. Ilmenau, Helmholtzplatz 2,
Ilmenau 98693, Germany, alexander.raake@tu-ilmenau.de), and Christoph
Hold (Assessment of IP-based Applications, Tech. Univ. Berlin, Berlin,
Germany)
The paper summarizes the different test and modeling campaigns carried
out in the EC-funded FET-Open project TWO!EARS (www.twoears.eu) for
sound quality and Quality of Experience (QoE) evaluation of spatial audio
reproduction technology like stereophony or Wave-field Synthesis (WFS).
This work represents one of the two proof-of-concept application domains
of the interactive listening model developed in TWO!EARS. One stream of
our sound-quality-related work focused on listening tests and model development for the individual sound quality features localization and coloration.
After briefly reviewing the modeling approaches for these individual features presented in more depth elsewhere, the paper presents data and modeling considerations for a set of pairwise preference listening tests, following
a dedicated audio mixing and reproduction paradigm. For subsequent model
development, the results are analyzed in different ways, for example in
terms of the pairwise preference data directly, using the Bradley-Terry-Luce
model and using multidimensional analysis techniques. Based on these analyses, different modeling approaches based on the TWO!EARS framework
are presented. To conclude, considerations are provided on how multimodal
interaction can affect preference selections, based on an additional test on
the selection of the “sweet spot” in a spatial audio listening context.
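The Bradley-Terry-Luce analysis mentioned above can be sketched in a few lines. This is not the TWO!EARS code; the win-count matrix and function name are invented for illustration, and only a generic maximum-likelihood (Zermelo-type iterative) fit is shown:

```python
import numpy as np

# Toy pairwise-preference counts: wins[i, j] = number of trials in which
# reproduction condition i was preferred over condition j (invented data).
wins = np.array([[0, 8, 9],
                 [2, 0, 7],
                 [1, 3, 0]], dtype=float)

def btl_strengths(wins, n_iter=200):
    """Maximum-likelihood Bradley-Terry-Luce strengths via the classic
    iterative (Zermelo) update; returns scores normalized to sum to 1."""
    n = wins.shape[0]
    p = np.ones(n)                 # initial strengths
    total = wins + wins.T          # comparisons per pair (diagonal is zero)
    w = wins.sum(axis=1)           # total wins per condition
    for _ in range(n_iter):
        denom = (total / (p[:, None] + p[None, :])).sum(axis=1)
        p = w / denom
        p /= p.sum()
    return p

scores = btl_strengths(wins)
print(scores)  # larger score = condition preferred more often
```

Fitting strengths like these to the preference data gives a one-dimensional quality scale that can then be compared with model predictions.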
11:40
5aPPb11. Specificity of adaptation to non-individualized head-related
transfer functions. Griffin D. Romigh (Air Force Res. Labs, 2610 Seventh
St., Area B, Bldg. 441, Wright Patterson AFB, OH 45433, griffin.romigh@
us.af.mil), Brian Simpson (Air Force Res. Labs, Wright-Patterson AFB,
OH), and Michelle Wang (Air Force Res. Labs, Dayton, OH)
Initial accuracy is poor when listeners are asked to localize virtual sound
sources that have been low-pass filtered or rendered with non-individualized
head-related transfer functions (HRTFs). However, Majdak et al. (2013)
showed that, with training, localization performance improved when virtual
sounds were rendered using HRTFs that were low-pass filtered at 8.5 kHz.
This result suggests that previous outcomes showing training-induced
improvement in localization performance with non-individualized HRTFs
may merely be the result of listeners attending to low-frequency spectral information that is consistent with their own HRTFs, and not attending to the
information from high frequencies, where HRTFs differ widely across individuals. That being the case, one would expect training to generalize to all
non-individualized HRTFs, not just the non-individualized HRTF used during training. The current study investigated this hypothesis by performing
tests of localization accuracy before and after three weeks of auditory localization training with a single non-individualized HRTF. For all subjects,
localization performance with non-individualized HRTFs improved to a
level at or near their performance with individualized HRTFs, and no generalization to the other non-trained HRTFs was found, suggesting subjects do
learn to utilize an alternative set of high-frequency spectral information.
12:00
5aPPb12. Age-related cortical changes in spatial auditory attention.
Erol J. Ozmeral, Madeleine Berg, David A. Eddins, and Ann C. Eddins
(Commun. Sci. and Disord., Univ. of South Florida, 3802 Spectrum Blvd.,
Ste. 210, Tampa, FL 33612, eozmeral@usf.edu)
Along with established effects of age on hearing sensitivity, there is a
growing body of evidence that the aging auditory system suffers from
reduced temporal resolution as well. This, in combination with changes in
attentional-resource allocation, could have profound effects on the ability
for older listeners to selectively attend to spatial locations—a key component to successful listening and communication in challenging auditory
environments. Because behavioral tasks rarely have an unattended comparison and electrophysiological tasks rarely have an attended comparison, it is
difficult to ascertain the extent to which selective attention mediates or
sharpens spatial tuning. To address this shortcoming, we measured cortical
responses using electroencephalography for moving stimuli in the free field
during both passive and active conditions. Active conditions required listeners to respond to the onset of a stimulus when it occurred at a specific location (either 30 to the left or right of center). Both younger and older
normal-hearing listeners participated in the study. The event-related potentials as well as the source-localized activity in regions of interest associated
with sensory processing (i.e., left and right auditory cortices) and top-down
control (i.e., dorsal fronto-parietal areas) revealed considerable morphological differences between the age groups.
THURSDAY MORNING, 29 JUNE 2017
ROOM 201, 8:00 A.M. TO 12:20 P.M.
Session 5aSAa
Structural Acoustics and Vibration and Physical Acoustics: Numerical Methods and Benchmarking
in Computational Acoustics I
Robert M. Koch, Cochair
Chief Technology Office, Naval Undersea Warfare Center, Code 1176 Howell Street, Bldg. 1346/4, Code 01CTO,
Newport, RI 02841-1708
Micah R. Shepherd, Cochair
Applied Research Lab, Penn State University, PO Box 30, mailstop 3220B, State College, PA 16801
Manfred Kaltenbacher, Cochair
Mechanics and Mechatronics, TU Wien, Getreidemarkt 9, Wien 1060, Austria
Steffen Marburg, Cochair
Faculty of Mechanical Engineering, Technical University of Munich, Boltzmannstr. 15, Muenchen 85748, Germany
Invited Papers
8:00
5aSAa1. Benchmark problem identifying a pollution effect in boundary element method. Steffen Marburg (Faculty of Mech. Eng.,
Tech. Univ. of Munich, Boltzmannstr. 15, Muenchen 85748, Germany, steffen.marburg@tum.de)
In his former contributions on the boundary element method (BEM) in acoustics, the author had not found any indication of a dispersion error similar to what is known as the pollution effect for the finite element method (FEM). However, in a recent paper, the author
has demonstrated the effect of numerical damping in BEM. A consequence of the pollution effect of FEM and numerical damping in
BEM is that the common rule of choosing a fixed number of elements per wavelength is not valid. By using one of the benchmark cases
of the EAA Technical Committee for Computational Acoustics, this can be easily shown. Traveling waves in a long duct are decaying.
In this presentation it will be shown that the numerical error depends on the length of the duct. For models with many waves over their surfaces, numerical damping adds an additional numerical error which can be understood as a pollution error, because a local refinement in certain regions of the model will not significantly decrease the error. It will be discussed in which cases this problem may become relevant
for practical use of BEM.
8:20
5aSAa2. Investigation of the flow- and sound-field of a low-pressure axial fan benchmark case using experimental and numerical
methods. Florian Zenger (Inst. of Process Machinery and Systems Eng., Friedrich-Alexander Univ. Erlangen-Nürnberg, Cauerstr. 4,
Erlangen 91058, Germany, ze@ipat.uni-erlangen.de), Clemens Junger (Inst. of Mech. and Mechatronics, TU Vienna, Vienna, Austria),
Manfred Kaltenbacher (Inst. of Mech. and Mechatronics, TU Vienna, Wien, Austria), and Stefan Becker (Inst. of Process Machinery
and Systems Eng., Friedrich-Alexander Univ. Erlangen-Nürnberg, Erlangen, Bavaria, Germany)
An extension of a benchmark case for a low-pressure axial fan is presented. The generic fan is a typical fan to be used in commercial
applications. The fan design procedure, as well as the experimental setups are described in detail. The numerical approach is based on a
forward coupling between a flow simulation with ANSYS Fluent and an aeroacoustic source term and wave propagation computation
with the multiphysics research software CFS++. Experimental and numerical data for aerodynamics and aeroacoustics are compared. This
includes aerodynamic performance (volume flow rate, pressure rise and efficiency), fluid mechanical quantities on the fan suction and
pressure side (velocity distribution and turbulent kinetic energy), wall pressure fluctuations in the gap region and acoustic spectra at various microphone positions. Finally, a comprehensive data base of an axial fan was generated. Flow field properties at the fan suction and
pressure side from the CFD simulation are in good agreement and spectra from the wall pressure fluctuations are in excellent agreement
with the experimental data. Spectra from the computed acoustic pressure tend to slightly overestimate the experimental results. Based on
the good agreement of both aerodynamic and aeroacoustic data, a thorough study on the dominant sound generation mechanisms is
made.
8:40
5aSAa3. Non-conforming finite element method for efficiently computing multilayered microperforated plate absorbers. Manfred
Kaltenbacher, Sebastian Floss, and Jochen Metzger (Mech. and Mechatronics, TU Wien, Getreidemarkt 9, Wien 1060, Austria,
manfred.kaltenbacher@tuwien.ac.at)
The concept of acoustic impedance is a very useful model approach to efficiently compute configurations for sound absorption.
Thereby, the measurements are performed by an impedance tube and the obtained data is used for the first order Robin (impedance)
boundary condition within the numerical simulation. However, this approach is only valid for sound incidence perpendicular to the
boundary. A second approach is to resolve the volume of the absorber and use a rigid or even elastic frame model. Especially for multilayered silencers based on microperforated plates (MPPs) the volume resolving approach is beneficial. Here, a main challenge is to cope
with the quite different mesh sizes needed for accurately resolving the waves in the MPPs and the surrounding air regions. To efficiently
simulate such designs, we apply a Nitsche-type mortaring within the Finite Element Method to allow for non-conforming meshes and
thereby directly connect the different mesh sizes in the MPPs and surrounding air regions. We will discuss in detail our absorber design,
the performed measurements and numerical simulations and plan to publish the complete setup and results as a benchmark case for computational acoustics.
Contributed Papers
9:00
9:40
5aSAa4. Quantification of numerical damping in the acoustic boundary
element method for the example of a traveling wave in a duct. Suhaib K.
Baydoun and Steffen Marburg (Chair of VibroAcoust. of Vehicles and
Machines, Tech. Univ. of Munich, Boltzmannstraße 15, Garching bei München 85748, Germany, suhaib.baydoun@tum.de)
5aSAa6. Comparison of three-dimensional acoustical Green’s functions
for a half-space BEM formulation. Martin A. Ochmann (FB II, Beuth
Hochschule fuer Technik Berlin, Luxemburger Strasse 10, Berlin D-13353,
Germany, ochmann@beuth-hochschule.de)
The boundary element method (BEM) is a popular numerical method
for solving linear time-harmonic acoustic problems. Using the BEM, modeling is limited to the boundary of the fluid domain, which is particularly advantageous for exterior problems with unbounded domains. A widely
unknown drawback of the acoustic BEM is numerical damping. This work
is concerned with numerical damping encountered in the benchmark problem of an air-filled duct with rigid walls. A traveling wave, induced by a
particle velocity at the inlet, is fully absorbed at the outlet of the duct by
imposing an impedance boundary condition. The exact solution gives a constant pressure amplitude over the entire frequency range. However, the numerical solution exhibits decay of the pressure amplitude, which is clearly
an indication for numerical damping. This phenomenon is studied for different frequencies and elements-per-wavelength ratios. The extent of numerical damping is quantified by relating the decay to the pressure distribution
obtained for a fluid model considering damping. The gained knowledge enables more accurate estimations of real damping phenomena in the future.
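The quantification step described above (relating the observed amplitude decay to a damped-fluid reference) can be mimicked with a toy fit: given sampled amplitudes along the duct, an equivalent exponential decay coefficient follows from a least-squares fit of log|p| against distance. The data below are synthetic stand-ins, not the benchmark results:

```python
import numpy as np

# Synthetic stand-in for a BEM duct solution: the exact traveling wave has
# constant amplitude p0, while the numerical result decays along the duct.
# (alpha_true and the sampling are invented for illustration.)
x = np.linspace(0.0, 10.0, 50)          # field points along the duct [m]
p0, alpha_true = 1.0, 0.02              # amplitude, artificial decay [1/m]
p_num = p0 * np.exp(-alpha_true * x)    # |p| of the "numerical" solution

# Equivalent damping coefficient from a least-squares fit of log|p| vs. x,
# mimicking the comparison against a damped-fluid reference model.
slope, intercept = np.polyfit(x, np.log(p_num), 1)
alpha_est = -slope
print(alpha_est)  # recovers alpha_true (0.02)
```

Repeating such a fit across frequencies and elements-per-wavelength ratios yields the damping-versus-discretization curves the abstract refers to.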
9:20
5aSAa5. A brief review, benchmark, and future development of structural
acoustic tools. Kuangcheng Wu (Ship Signatures, Naval Surface Warfare
Ctr. - Carderock, 9500 MacArthur Blvd., West Bethesda, MD 20817,
kcwu@msn.com)
Numerical tools have been widely applied to real world applications for
noise and vibration control. Each tool has its own advantages and limitations. To properly use the tool in the correct environment, it is imperative to
understand its underlying physics. One of the main challenges in modeling a
marine structure is to correctly simulate its unbounded surrounding domain.
Analytic solutions are limited to certain geometries (Junger & Feit, Sound, Structure, and Their Interaction, ASA, 1993); semi-analytic solutions (e.g., the Surface Variational Principle) and different numerical techniques (e.g., boundary element, infinite element, and perfectly matched layer) have been used to model complex structures with unbounded fluid implicitly or explicitly while satisfying the Sommerfeld radiation BC (Pierce, Acoustics: An
Introduction to its Physical Principles and Applications, ASA, 1989). In this
paper, those numerical techniques will be briefly reviewed and several
benchmark cases for simple structures will be presented. With the understanding of underlying physics, ideas in speeding up the numerical analysis
by proper simplification or model reduction will be addressed.
We consider three-dimensional acoustical scattering or radiation problems in frequency domain above an infinite flat plane equipped with a local
impedance condition. For such half-space problems, a BEM formulation is
of advantage, where the Green’s function used satisfies not only the Helmholtz equation, but also the boundary condition at the impedance plane.
When using such a tailored Green’s function, only the surface of the sound
radiating or scattering object has to be discretized, since the influence of the
impedance ground is automatically taken into account. Simple formulas for
such half-space Green’s function exist only for rigid or soft infinite planes.
For a ground with arbitrary surface impedances, many different expressions
of the Green’s function are given in the literature. For incorporating these
functions into a BEM formulation, the first and second normal derivatives
must be calculated in such a way that they do not possess strong singularities. In the present work, four representations of half-space Green’s functions are investigated and compared for three different kinds of surface
impedances with respect to accuracy and computing time: (1) for a pure
absorbing ground, i.e., with real impedance, (2) for a masslike, and (3) for a
springlike ground corresponding to pure imaginary impedances.
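For the two limiting cases noted above where simple formulas exist (rigid and pressure-release planes), the tailored Green's function is just the free-space kernel plus or minus an image source. A minimal sketch (the sign convention exp(ikr)/(4*pi*r) and function name are my assumptions; this is not the paper's implementation):

```python
import numpy as np

def halfspace_green(k, x, y, rigid=True):
    """Half-space Green's function above the plane z = 0 by the image-source
    method: free-space kernel exp(ikr)/(4*pi*r) plus (rigid plane) or minus
    (soft / pressure-release plane) the contribution of the mirrored source."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    y_img = y * np.array([1.0, 1.0, -1.0])      # mirror source across z = 0
    r = np.linalg.norm(x - y)
    ri = np.linalg.norm(x - y_img)
    g = np.exp(1j * k * r) / (4 * np.pi * r)
    gi = np.exp(1j * k * ri) / (4 * np.pi * ri)
    return g + gi if rigid else g - gi

# On the soft plane the pressure itself vanishes; on the rigid plane the
# on-plane value doubles (source and image coincide in distance).
x_on_plane = [0.3, 0.0, 0.0]
src = [0.0, 0.0, 1.0]
print(abs(halfspace_green(2 * np.pi, x_on_plane, src, rigid=False)))  # 0.0
```

For finite, complex surface impedances no such closed form exists, which is exactly why the four Green's-function representations in the abstract need to be compared.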
10:00–10:20 Break
10:20
5aSAa7. Atmospheric acoustic modes between a complex ground impedance and an artificial absorber. Richard B. Evans (Retired, 99F Hugo
Rd., N. Stonington, CT 06359, richard.evans.01@snet.net), Xiao Di, and
Kenneth E. Gilbert (National Ctr. for Physical Acoust., University, MS)
Atmospheric acoustic normal mode computer codes are faced with finding the complex modal eigenvalues. Searching in the complex plane is difficult and requires special numerical techniques and custom software. A
Legendre-Galerkin technique reduces the problem to a complex matrix eigenvalue problem that can be solved by commercially available software. This
proposed Legendre-Galerkin method is described as the projection of the
acoustic normal mode problem onto a recombined basis of Legendre polynomials. The modal approach is best suited for providing benchmark quality
results in cases when guided modes dominate the problem. Such results are
useful in establishing the validity and interpreting the characteristics of
atmospheric acoustic fields computed with the parabolic equation method,
for the same problem. The Legendre-Galerkin method is applied to cases
with a ground based duct and an elevated duct. Measured wind speeds, from
a coastal experiment, provide the effective downwind and upwind sound
speed profiles with these ducted characteristics.
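The point of the reduction above is that a complex matrix eigenvalue problem needs no custom complex-plane root search. A minimal sketch with an invented 2x2 toy system (not the paper's Legendre-Galerkin matrices), using SciPy:

```python
import numpy as np
from scipy.linalg import eig

# Toy stand-in for the projected normal-mode problem A v = lambda B v with
# complex entries (matrices invented for illustration). Standard dense
# eigensolvers return the complex modal eigenvalues directly.
A = np.array([[2.0 + 0.1j, -1.0],
              [-1.0,        2.0 + 0.3j]])
B = np.eye(2)

eigvals, eigvecs = eig(A, B)     # generalized complex eigenproblem
print(np.sort_complex(eigvals))  # complex "modal eigenvalues" of the toy system
```

In the actual method, A and B would come from projecting the atmospheric normal-mode operator onto the recombined Legendre basis.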
Invited Paper
10:40
5aSAa8. Comparison of finite element and analytical modeling of scattering of an acoustic wave by particles in a fluid. Valerie J.
Pinfield, Derek M. Forrester (Chemical Eng. Dept., Loughborough Univ.,Loughborough LE11 3TU, United Kingdom, v.pinfield@lboro.
ac.uk), Artur L. Gower, William J. Parnell, and Ian D. Abrahams (School of Mathematics, Univ. of Manchester, Manchester, United
Kingdom)
Ultrasonic wave propagation through dispersions of particles in liquids is of interest for particle characterization and process monitoring applications. Interpretation of the measurements relies on a theoretical model; we typically use a multiple scattering model which
builds on models of scattering by independent particles. We report finite element modeling of an acoustic wave propagating through a
liquid and interacting with a particle, using the linearized thermo-acoustic equations for propagation in a viscous liquid. We demonstrate
that the interaction of the acoustic field with the particle leads to decaying thermal and shear wave fields in the region very close to the
particle. Since the length scale of the thermal and shear decay is orders of magnitude smaller than the propagational mode acoustic
wavelength, fine meshing is necessary in the region of the particle/fluid boundary. The simulation results are compared with analytical
solutions for scattering of a plane wave by a single spherical particle, provided by Epstein and Carhart [JASA, 25, 533 (1953)] and Allegra and Hawley [JASA, 51, 1546 (1972)].
Contributed Paper
11:00
5aSAa9. A coupled isogeometric finite element and boundary element
method with subdivision surfaces for structural-acoustic analysis of
shell structures. Zhaowei Liu, Robert Simpson (School of Eng., Univ. of
Glasgow, Glasgow G12 8QQ, United Kingdom, z.liu.2@research.gla.ac.
uk), Fehmi Cirak, and Musabbir Majeed (Dept. of Eng., Univ. of Cambridge, Cambridge, United Kingdom)
We demonstrate a method for simulating medium-wave acoustic scattering over elastic thin shell structures. We propose a coupled approach
whereby the finite element formulation is used to describe the dynamic
structural response of the shell and the boundary element method models
the acoustic pressure within the infinite acoustic domain. The two methods
are coupled through the relationship between acoustic velocities on the
structural-fluid interface. In our approach, a conforming subdivision discretization is generated in Computer Aided Design (CAD) software which can
be used directly for analysis in keeping with the idea of isogeometric analysis whereby a common geometry and analysis model is adopted. The subdivision discretization provides C1 surface continuity which satisfies the
challenging continuity requirements of Kirchhoff-Love shell theory. The
new method can significantly reduce the number of elements required per
wavelength to gain same accuracy as an equivalent Lagrangian discretization, but the main benefit of our approach is the ability to handle arbitrarily
complex geometries with smooth limit surfaces directly from CAD software. Our implementation makes use of H-matrices to accelerate dense-matrix computations, and through this approach we demonstrate the ability of
our method to handle high-fidelity models with smooth surfaces for structural-acoustic analysis.
Invited Papers
11:20
5aSAa10. Computing head related impulse responses and transfer functions using time domain equivalent sources. John B. Fahnline (ARL / Penn State, P.O. Box 30, State College, PA 16804-0030, jbf103@arl.psu.edu)
In the past, head-related impulse responses (HRIR) and head-related transfer functions (HRTF) have primarily been computed using
frequency domain boundary element methods or finite-difference time domain methods. The possibility of computing HRIRs and
HRTFs using transient equivalent sources is examined using a lumped parameter technique for enforcing the specified boundary condition. It is demonstrated that performing the computations in the time domain is advantageous because only a few thousand time steps are
needed to fully define the HRIRs, and nonuniform meshes can be used to reduce the number of acoustic variables drastically without significantly degrading the solution accuracy. It is also shown that the computations adapt well to parallel processing environments and the
times associated with the equivalent source calculations are proportional to the number of processors.
11:40
5aSAa11. A new infinite element paradigm in computational structural acoustics? David S. Burnett (Naval Surface Warfare Ctr.,
110 Vernon Ave., Panama City, FL 32407, david.s.burnett@navy.mil) and Les H. Wigdor (Syslink Consulting LLC, Beacon, NY)
In the 1990s, the lead author developed a radical new formulation for infinite elements for modeling scattering and radiation from
structures in unbounded domains. It was shown to be faster than the popular boundary element method (BEM), for the same physics to
the same accuracy, by several orders of magnitude; the speedup is unbounded as problem size increases. Academia and industry called it
a “revolution” in computational acoustics that would probably bring an end to the BEM. But then Bell Labs patented and licensed the
elements, effectively ending the “revolution” and removing the technology from the public domain for the next 20 years. Now, in 2017,
some patents have expired and the rest will expire soon, thus restoring the technology to the public domain. The talk will review the
original technology and then describe new R&D since 2015: (i) speeding up a commercial acoustic scattering code by 1400x and (ii)
extending the technology by developing a new hybrid version that computes the external field over 12,000x faster than the traditional,
expensive Helmholtz integral. Now that this “revolutionary” technology is back in the public domain, the marketplace can finally decide
whether it constitutes a new paradigm in computational structural acoustics.
3977
J. Acoust. Soc. Am., Vol. 141, No. 5, Pt. 2, May 2017
Acoustics ’17 Boston
3977
Contributed Paper
12:00
5aSAa12. Acoustic radiation modes and normal modes in exterior
acoustic problems. Lennart Moheit and Steffen Marburg (Chair of Vibroacoust. of Vehicles and Machines, Tech. Univ. of Munich, Boltzmannstr. 15, Garching b. München, Bavaria 85748, Germany, lennart.moheit@tum.de)
Acoustic radiation modes are eigenvectors of the real and symmetric acoustic impedance matrix Z, which is usually computed from the boundary element method (BEM) matrices G and H at the surfaces of inner obstacles in an unbounded fluid-filled domain. Application of the finite element method (FEM) and the infinite element method (IFEM) also allows the computation of the acoustic radiation modes; in addition, normal modes can be computed as right eigenvectors of a state-space eigenvalue problem. Modal
superposition of both radiation modes and normal modes leads to accurate
results of the radiated sound power. However, normal modes additionally
provide modal sound pressure distributions in the whole computational domain and can therefore be used to calculate frequency response functions. In
this work, modal superposition and reduction in exterior acoustics are presented and discussed.
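The modal superposition described above can be illustrated in a few lines: for a real, symmetric impedance matrix, power computed directly from the surface velocity agrees with the sum of modal contributions. The matrix below is a random symmetric positive-definite stand-in, not a BEM-derived Z, and physical constants are dropped:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for the real, symmetric acoustic impedance matrix Z of
# n surface elements (here random SPD, not computed from G and H).
n = 6
B = rng.standard_normal((n, n))
Z = B @ B.T + n * np.eye(n)

# Radiation modes: eigenvectors of the symmetric matrix Z.
lam, Q = np.linalg.eigh(Z)        # Z = Q diag(lam) Q^T, Q orthonormal

# Surface velocity vector; radiated power W ~ v^T Z v (up to constants).
v = rng.standard_normal(n)
W_direct = v @ Z @ v

# Modal superposition: expand v in radiation modes and sum modal powers.
c = Q.T @ v                       # modal amplitudes
W_modal = np.sum(lam * c**2)

assert np.isclose(W_direct, W_modal)
```

Modal reduction then amounts to keeping only the modes with the largest eigenvalues (radiation efficiencies) in the sum.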
THURSDAY MORNING, 29 JUNE 2017
ROOM 204, 10:40 A.M. TO 12:00 NOON
Session 5aSAb
Structural Acoustics and Vibration, Noise, Physical Acoustics, and Architectural Acoustics:
Acoustics and Vibration of Sports and Sports Equipment
Daniel A. Russell, Chair
Graduate Program in Acoustics, Pennsylvania State University, 201 Applied Science Bldg., University Park, PA 16802
Contributed Paper
10:40
5aSAb1. How do runners deal with the shock-induced vibrations propagating through their body? Delphine Chadefaux, Eric Berton, and Guillaume Rao (Aix Marseille Univ, CNRS, ISM, Inst Movement Sci., 163 Ave.
de Luminy, Marseille 13009, France, delphine.chadefaux@univ-amu.fr)
Runners experience numerous shocks leading to vibrations propagating
from the foot toward the entire body. These repetitive shocks are related to
musculoskeletal injuries. Consequently, runners tend to adapt their running
patterns according to the ground surface to cushion the impact. However,
the way runners precisely manage the three-dimensional components of the
vibrations, especially in the frequency domain, is not well understood. The
present study investigated which biomechanical parameters runners adapt to
tune the shock induced vibrations according to different running conditions.
A specific experimental procedure was designed, based on simultaneously
collecting kinematic, dynamic, vibration, and electromyographic data during running barefoot or shod and at various velocities. For 10 non-specialist runners, energetic and spectral analyses of the three-dimensional foot
impact induced vibrations occurring at the third metatarsal bone, the tibial
plateau, the knee joint, the hip joint, and the 7th cervical were performed.
Results outlined the transfer function of each investigated segment. A significant outcome is the strategy set up by the neuro-musculoskeletal system
to protect upper areas of the human body. This contribution opens up new
perspectives in running analyses by underlining the significance of the
three-dimensional and the spectral contents in the shock induced vibrations.
Invited Paper
11:00
5aSAb2. Hitting the ball on the meat—Finding the sweet spot of a hurley. Eoin A. King and Robert Celmer (Acoust. Program and
Lab, Univ. of Hartford, 200 Bloomfield Ave., West Hartford, CT 06117, eoking@hartford.edu)
Hurling is one of Ireland’s most popular indigenous sports. It is a Gaelic stick-and-ball sport that combines elements of field hockey,
lacrosse, and handball. Key to the game is a player’s mastery of the hurling stick (hurley), which is used to strike the ball (sliotar).
The hurley is similar to a hockey stick but with a shorter, wider and more circular head (bás), which is the area of the hurley that strikes
the sliotar. The sweet spot is a general term used amongst players to indicate the correct position on the hurley to strike the ball (“hitting
the ball on the meat”). By measuring the moment of inertia and center of percussion of a hurley, combined with experimental modal
analysis, this paper attempts to define the location of a sweet spot on a 34-inch ash hurley. Measurements are based on the ASTM standard test method for measuring the moment of inertia and center of percussion of a baseball bat; however, we also propose an alternative
method to this standard for determining the center of percussion of a bat.
Contributed Papers
11:20
5aSAb3. Dynamical analysis of stroke induced vibrations in tennis
racket. Delphine Chadefaux, Guillaume Rao (Aix Marseille Univ, CNRS,
ISM, Inst. Movement Sci., 163 Ave. de Luminy, Marseille 13009, France,
delphine.chadefaux@univ-amu.fr), Jean-Loïc Le Carrou (Sorbonne Universités, UPMC Univ Paris 06, CNRS, UMR 7190, LAM - Institut Jean le
Rond d’Alembert, Paris, France), Eric Berton, and Laurent Vigouroux (Aix
Marseille Univ, CNRS, ISM, Inst Movement Sci, Marseille, France)
Tennis rackets are mostly designed without regard to the boundary condition imposed by the player’s hand on the handle. As a result, the mechanical parameters that manufacturers specify for their rackets, intended to make them reliable and comfortable for the player, lack accuracy. Our
work aimed at providing a better understanding of the effect of the tennis
player’s hand on the racket’s dynamical behavior. For this purpose, a dedicated experimental procedure involving 14 tennis players and 5 tennis rackets has been carried out. Vibrations propagated from the racket toward the
upper-limb have been collected synchronously with kinematic and electromyographic data during forehands of various intensities. Additionally, an
analytical model of the hand/racket interaction has been designed based on
operational modal analyses. This model provides a straightforward tool to
predict changes in the dynamical behavior of a tennis racket under playing
conditions. Results indicated that tennis players adjust their grip force to tune the vibrational content entering their upper limb. Moreover, a noteworthy outcome is that grip force induces modifications in the racket’s dynamical behavior that are at least as important as the differences observed
under free boundary conditions due to the rackets’ own mechanical
parameters.
11:40
5aSAb4. Vibroacoustic analysis of table tennis rackets and balls: The
acoustics of ping pong. Daniel A. Russell (Graduate Program in Acoust.,
Penn State Univ., 201 Appl. Sci. Bldg., University Park, PA 16802, drussell@engr.psu.edu)
Table Tennis rackets (ping-pong paddles) exhibit a large number of
structural vibration modes which follow patterns first observed by Ernst
Chladni and Mary Waller for elliptical plates. Vibrational mode shapes and
frequencies obtained through experimental modal analysis will be shown.
Acoustic analysis reveals that one structural mode of the paddle, in particular, dominates the sound produced by the ball-paddle impact. The rubber
padding provides some damping and significant mass loading to the paddle vibrations. The hollow cellulose nitrate balls exhibit a number of vibrational mode shapes typical of a hollow spherical shell, starting at
frequencies around 5900 Hz; these will be demonstrated from experimental
and computational results. However, the contact time between ball and paddle is such that the lowest acoustic modes of the ball do not contribute to the
radiated sound. Instead, the ball appears to radiate sound at a much higher frequency (10-12 kHz), most likely due to snap-through buckling
common to spherical shells undergoing deformation while impacting a rigid
surface at high speeds.
THURSDAY MORNING, 29 JUNE 2017
BALLROOM A, 8:00 A.M. TO 12:20 P.M.
Session 5aSC
Speech Communication: Variation: Age, Gender, Dialect, and Style (Poster Session)
Elizabeth D. Casserly, Chair
Dept. of Psychology, Trinity College, 300 Summit St., Hartford, CT 06106
All posters will be on display from 8:00 a.m. to 12:20 p.m. To allow contributors in this session to see the other posters, authors of
odd-numbered papers will be at their posters from 8:00 a.m. to 10:10 a.m. and authors of even-numbered papers will be at their posters
from 10:10 a.m. to 12:20 p.m.
5aSC1. Acoustic cues and linguistic experience as factors in regional
dialect classification. Steven Alcorn, Kirsten Meemann (The Univ. of
Texas at Austin, 150 W. 21st St., Stop B3700, Austin, TX 78712, steven.alcorn@utexas.edu), Erin Walpole, Cynthia G. Clopper (The Ohio State
Univ., Columbus, OH), and Rajka Smiljanic (The Univ. of Texas at Austin,
Austin, TX)
Listeners rely on a variety of acoustic cues when identifying regional
dialects, including segmental, prosodic, and temporal features of speech.
The purpose of this study was to examine how native speakers of American
English (AE) classify AE talkers by regional dialect when segmental and
prosodic features are manipulated in the stimuli they hear; it also considered
experience with different regional dialects as an additional factor affecting
classification. Native AE listeners residing in Ohio and Texas completed a
free classification task in which they heard the same sentence read by 60
monolingual AE talkers and grouped talkers together based on perceived regional similarities. Three versions of the stimuli were presented in a
between-subjects design: unaltered, monotonized (f0 flattened to remove
intonation cues), or low-pass filtered (to remove segmental cues). Preliminary analyses indicate that performance in the unaltered and monotone conditions was more accurate overall than in the low-pass filtered condition,
suggesting that listeners rely on segmental information more than prosodic
information for classification. Overall performance was similar across the
Ohio and Texas listener groups for all three conditions, but Ohioans
outperformed Texans at grouping talkers from the local Midland dialect together, providing preliminary evidence for an effect of experience on
classification.
Contributed Papers
5aSC2. Applying pattern recognition to formant trajectories: A useful
tool for understanding African American English (AAE) dialect variation. Meisam K. Arjmandi, Laura Dilley, and Zachary Ireland (Dept. of
Communicative Sci. and Disord., Michigan State Univ., 1026 Red Cedar
Rd., East Lansing, MI, khalilar@msu.edu)
Few studies have focused on the acoustic-phonetic characteristics of
African American English (AAE) which distinguish this dialect from Standard American English (SAE), particularly for vowels and sonorant consonants. This study investigated whether formant dynamics from short,
sonorant portions of speech are sufficient to distinguish AAE and SAE dialects. Seven female speakers, four SAE and three AAE, from the Lansing,
Michigan area, were selected from a corpus of 30-45 minute sociolinguistic
interviews. Target portions of speech consisting of a V or VC sequence (C =
/n/, /m/, /l/, /r/) were identified from contexts selected to control for coarticulation. First (F1) and second (F2) formant values were extracted from randomly selected tokens at points 19%, 56%, and 81% of the duration through
the demarcated speech portions. Pattern recognition techniques were examined to differentiate tokens of the two dialects based on formant trajectories
as feature vectors. The results revealed that formant dynamics of the
selected contexts are acoustically informative enough to differentiate groups
of SAE from AAE speakers. A near-perfect classification of some contexts
was also achieved by applying support vector machines to the formant trajectories. These findings highlight the usefulness of incorporating pattern
recognition techniques for understanding acoustic variation due to dialect.
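The trajectory-as-feature-vector idea above can be sketched with synthetic data: F1/F2 values at three time points form a six-dimensional vector per token. A nearest-centroid rule stands in for the support vector machines of the study, and all formant values below are invented, not from the corpus:

```python
import numpy as np

rng = np.random.default_rng(2)

# Each token: F1/F2 (Hz) sampled at 19%, 56%, and 81% of the duration,
# concatenated into a 6-dim feature vector. Values are synthetic.
def synth_tokens(f1_base, f2_base, n):
    f1 = f1_base + rng.normal(0, 20, (n, 3))
    f2 = f2_base + rng.normal(0, 40, (n, 3))
    return np.hstack([f1, f2])

dialect_a = synth_tokens(600.0, 1700.0, 40)   # stand-in "SAE" tokens
dialect_b = synth_tokens(650.0, 1500.0, 40)   # stand-in "AAE" tokens

X = np.vstack([dialect_a, dialect_b])
y = np.array([0] * 40 + [1] * 40)

# Nearest-centroid classifier (a simple stand-in for an SVM): assign
# each token to the closer class mean. Resubstitution accuracy only;
# a real evaluation would hold out test data.
centroids = np.array([X[y == k].mean(axis=0) for k in (0, 1)])
dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
pred = dists.argmin(axis=1)
accuracy = (pred == y).mean()
```

With well-separated synthetic classes the rule classifies nearly all tokens correctly; the study's point is that real SAE/AAE trajectories are separable in the same feature space.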
5aSC3. Articulatory kinematics during connected speech across dialects
and dysarthria. Jeffrey J. Berry (Speech Pathol. & Audiol., Marquette
Univ., P.O. Box 1881, Milwaukee, WI 53201-1881, jeffrey.berry@marquette.edu), Yunjung Kim (Commun. Sci. and Disord., Louisiana State
Univ., Baton Rouge, LA), and James Schroeder (Elec. and Comput. Eng.,
Marquette Univ., Milwaukee, WI)
The current work presents an analysis of articulatory kinematics during
connected speech in typical talkers and talkers with dysarthria from two different dialects of American English. Instrumental methods for obtaining
articulatory kinematic data during speech (particularly electromagnetic
articulography) are becoming increasingly viable within the clinical setting.
Yet almost no existing clinical standards for collecting and interpreting
articulatory kinematic data have been established. Moreover, there is little
basis for differentiating the impact of dialect from dysarthria on articulatory
kinematics. We examine articulatory kinematics obtained via electromagnetic articulography during a standard connected speech passage read by
typical talkers (n = 30) and talkers with dysarthria (n = 15). Participants are
divided among upper Midwestern and Southern American English dialects.
Analyses focus on kinematic measures of articulatory movement (range-of-motion, speed, acceleration, and jerk) within and across dialect groups and
between typical talkers and individuals with dysarthria. The goal of the current work is to provide a preliminary evaluation of whether different kinematic measures of articulatory movement during connected speech may be
differentially sensitive to the impact of dialect and dysarthria. The results of
this work are germane to establishing clinically relevant measures of articulatory kinematics to improve the clinical assessment of dysarthria.
5aSC4. Dialect classification reveals mismatch between speech processing and dialect perception. Megan Dailey and Cynthia G. Clopper (Ohio
State Univ., 1961 Tuttle Park Pl., 108A Ohio Stadium East, Columbus, OH
43210, clopper.1@osu.edu)
Familiar dialects can facilitate speech processing. However, recent
investigations of speech processing of the Northern and Midland dialects of
American English reveal a different pattern: in noise-masked speech, listeners from both dialect regions identify Midland words and phrases with
higher accuracy than Northern words and phrases. This preference may be
explained by inconsistencies between Northern talkers’ production and perception of their own dialect. The goal of the current study was to determine
whether cross-dialect processing differences between the Northern and Midland dialects reflect listeners’ explicit dialect identification ability. Participants completed a speech intelligibility in noise task followed by a forced-choice dialect categorization task. Speech stimuli in both tasks were short
phrases taken from passages read by eight Northern and eight Midland
talkers. Responses in both tasks were scored for accuracy. Results revealed
higher accuracy in intelligibility for Midland phrases than Northern phrases,
as in previous work, but poor dialect categorization performance across all
listeners. The inability of listeners to explicitly categorize talkers by dialect
while showing an intelligibility benefit for Midland forms indicates that the
observed cross-dialect processing differences emerge even in the absence of
explicit dialect categorization, revealing a perceptual mismatch between
speech processing and dialect perception.
5aSC5. Sociophonetic variation in Mississippi: Gender, ethnicity, and
prevoiced plosives. Wendy Herd (MS State Univ., 2004 Lee Hall, Drawer
E, MS State, MS 39762, wherd@english.msstate.edu)
While native English speakers are traditionally reported to produce
word-initial voiced plosives with short positive VOTs, recent studies suggest sociophonetic variation exists in the production of these sounds. In separate studies, more prevoicing has been reported for men than women, for
African American speakers than Caucasian American speakers, and for
southern American English speakers than speakers from other regions. The
current study investigates the effects of gender, ethnicity, and context on
voicing variation in Mississippi by analyzing word-initial /b, d, g/ as read in
sentences by forty native speakers of English grouped according to self-reported gender and ethnicity. A significant effect of ethnicity and an interaction between gender and ethnicity were found. African American speakers
produced voiced stops with a larger proportion of closure voicing and produced more fully voiced closures than Caucasian American speakers. While
African American men and women produced similarly voiced closures,
Caucasian American men voiced closures more than women. Similarly,
Caucasian American speakers’ closure voicing was affected by context (e.
g., following an approximant vs. following a plosive), but African American
speakers consistently produced voiced closures regardless of context. These
findings strongly suggest that dialectal differences play a role in the voicing
variation of word-initial voiced stops.
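The closure-voicing measures reported above reduce to simple frame counting over a voicing track; a minimal sketch with an invented frame-by-frame voicing decision sequence (e.g., from a pitch tracker at 5 ms steps), not real data:

```python
import numpy as np

# Hypothetical voicing decisions during one plosive's closure.
closure_voiced = np.array([1, 1, 1, 1, 0, 0, 0, 0, 0, 0], dtype=bool)

# Proportion of the closure that is voiced, as analyzed in the study.
voicing_proportion = closure_voiced.mean()

# A closure counts as "fully voiced" if every frame is voiced.
fully_voiced = bool(closure_voiced.all())
```

Group comparisons then reduce to averaging these proportions (and counting fully voiced closures) over tokens per speaker group.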
5aSC6. Assessing vowel space area metrics in the context of dialect variation. Ewa Jacewicz and Robert A. Fox (Dept. of Speech and Hearing
Sci., The Ohio State Univ., 1070 Carmack Rd., 110 Pressey Hall, Columbus,
OH 43210, jacewicz.1@osu.edu)
There has been a growing interest in the development of a sensitive
methodology to define a working vowel space area (VSA) as a metric in basic and clinical research applications. In this study, three approaches to the
assessment of VSA were tested to evaluate their efficacy in characterizing
cross-dialectal and cross-generational variation: The traditional vowel quadrilateral, the traditional convex hull, and a more liberal convex hull. In the
two traditional approaches, VSA was computed as a planar convex polygon
shaped by phonologically distinct vowel categories as its corners. The mean
F1/F2 values at vowel midpoint were used to define the respective areas.
The liberal convex hull utilized vowel dynamics and variable F1/F2 temporal locations to refine the outer boundaries and maximize the VSA. This
approach used an unrestricted number of vowels and measurement locations
to define the optimal VSA. All computations were based on a common
speech material produced by 135 female speakers representing three American English dialects and four generations ranging in age from 8 to 91 years.
The three metrics yielded inconsistent and contradictory estimates of VSA.
Discussion will focus on the limited utility of polygon geometry in characterizing a working VSA in American English dialects.
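The two traditional metrics above reduce to the area of a planar polygon whose corners are mean midpoint F1/F2 values; a sketch using the shoelace formula, with illustrative (not measured) corner vowels:

```python
import numpy as np

# Corner vowels of a hypothetical quadrilateral VSA: mean midpoint
# [F1, F2] in Hz for /i/, /ae/, /a/, /u/. Values are invented.
corners = np.array([
    [300.0, 2300.0],   # /i/
    [700.0, 1900.0],   # /ae/
    [750.0, 1200.0],   # /a/
    [350.0,  900.0],   # /u/
])

def polygon_area(pts):
    """Shoelace formula for a simple polygon with vertices in order (Hz^2)."""
    x, y = pts[:, 0], pts[:, 1]
    return 0.5 * abs(np.sum(x * np.roll(y, -1) - np.roll(x, -1) * y))

vsa = polygon_area(corners)
```

The convex-hull variants differ only in which points define the polygon: all vowel categories (traditional hull) or all vowels at multiple temporal measurement locations (liberal hull), which can only enlarge the computed area.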
5aSC7. “He maybe did” or “He may be dead”? The use of acoustic and
social cues in applying perceptual learning of a new dialect. Rachael Tatman (Linguist, Univ. of Washington, Guggenheim Hall, 3940-2425 Benton
Ln, Seattle, WA 98195, rctatman@uw.edu)
When learning to recognize words in a new dialect, listeners rely on both
acoustics (Norris, McQueen, & Cutler 2003) and extra-linguistic social cues
(Kraljic, Brennan, & Samuel 2008). This study investigates how listeners
use both acoustic and social information after exposure to a new dialect.
American English (AmE) speaking listeners were trained to correctly identify the front vowels of New Zealand English (NZE). To an AmE speaker,
these are highly confusable: “head” is often heard as “hid”. Listeners were
5aSC8. Transcription and forced alignment of the digital archive of
southern speech. Margaret E. Renwick, Michael Olsen, Rachel M. Olsen,
and Joseph A. Stanley (Linguist Program, Univ. of Georgia, 240 Gilbert
Hall, Athens, GA 30602, mrenwick@uga.edu)
We describe transcription and forced alignment of the Digital Archive
of Southern Speech (DASS), a project that will provide a large corpus of
historical, semi-spontaneous Southern speech for acoustic analysis. 372
hours of recordings (64 interviews) comprise a subset of the Linguistic Atlas
of the Gulf States, an extensive dialect study of 1121 speakers conducted
across eight southern U.S. states from 1968 to 1983. Manual orthographic
transcription of full DASS interviews is carried out according to in-house
guidelines that ensure consistency across files and transcribers. Separate
codes are used for the interviewee, interviewer, non-speech, overlapping,
and unintelligible speech. Transcriber output is converted to Praat TextGrids
using LaBB-CAT, a tool for maintaining large speech corpora. TextGrids
containing only the interviewee’s speech are generated, and subjected to
forced alignment by DARLA, which accommodates the levels of variation
and noise in the DASS files with a high degree of success. Toward acoustic
analysis, we evaluate three methods for vowel formant extraction: the native
output of DARLA, a local implementation of FAVE-Extract, and a Praat-based extractor that incorporates separate formant tracks for different
regions of the vowel space. We present this workflow of transcription and
analysis to benefit other projects of similar size and scope.
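The interviewee-only filtering step ahead of forced alignment amounts to selecting transcription intervals by tier code; a sketch with invented intervals (the real pipeline operates on Praat TextGrids via LaBB-CAT, and the tier names here are hypothetical):

```python
# Hypothetical transcription intervals as (start_s, end_s, tier, text);
# tier codes mimic the abstract's scheme (interviewee, interviewer,
# non-speech, ...). All data here is invented for illustration.
intervals = [
    (0.00, 1.20, "interviewer", "where were you born"),
    (1.20, 1.45, "non-speech",  "<cough>"),
    (1.45, 4.10, "interviewee", "right here in Atlanta"),
    (4.10, 5.00, "interviewee", "lived here all my life"),
]

# Keep only the interviewee's speech, as done before forced alignment.
interviewee_only = [iv for iv in intervals if iv[2] == "interviewee"]
total_speech = sum(end - start for start, end, _, _ in interviewee_only)
```

The filtered intervals are what the aligner sees, so interviewer speech and noise codes never contaminate the phone-level alignment.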
5aSC9. An acoustic perspective on legacy data: Vowels in the digital
archive of Southern speech. Margaret E. Renwick and Joseph A. Stanley
(Linguist Program, Univ. of Georgia, University of Georgia, 240 Gilbert
Hall, Athens, GA 30602, mrenwick@uga.edu)
Speech varies widely in the American South, but the region is argued to
share the Southern Vowel Shift, whose various characteristics include
monophthongization of upgliding diphthongs, convergence of certain front
vowels via raising and lowering, and back-vowel fronting. We investigate
the influence of social factors on shift participation using vowel formant and
duration data from the Digital Archive of Southern Speech (recorded
1968-1983), which is newly transcribed and segmented by forced alignment. With this corpus of 64 linguistic interviews (372 hours), we study
how shifting varies in geographic space, across states from Texas to Florida.
The interviews offer large amounts of data from individual speakers, and
their semi-spontaneous nature reveals a more realistic portrait of phonetic
variability than is typically available. Interviews of European- and African
American speakers permit comparison of the Southern Vowel Shift with the
African American Vowel Shift. The impacts of other factors on the vowel
space are evaluated including generation, gender, socioeconomic status, and
education level. Acoustic analysis of historical speech corpora offers perspective for modern sociophonetic studies, by providing a point of comparison to illuminate the development of modern regional variation, which will
inform and enhance models of language change over time.
5aSC10. Implications of covert articulatory variation for several phonetic variables in Raleigh, North Carolina English. Jeff Mielke, Bridget
Smith (English, North Carolina State Univ., 221 Tompkins Hall, Campus
Box 8105, Raleigh, NC 27695-8105, jimielke@ncsu.edu), and Michael J.
Fox (Sociology and Anthropology, North Carolina State Univ., Altoona,
WI)
We examine several phonological variables in a spontaneous speech corpus and a lab-collected acoustic/articulatory dataset, in order to pursue the
hypothesis that covert inter-speaker differences in speech production are
instrumental to solving the actuation problem in language change. We
examine two known cases of covert articulatory variation and their impact
on a range of low-level and high-level sound patterns active in Raleigh, NC.
The covert articulatory variables are /ɹ/ tongue shape and /l/ posterior constriction location. The overt variables are the retraction of /s/ and /z/ and the affrication of /t/ and /d/ in various contexts near /ɹ/ (within words and across
word boundaries), flapping and deletion of // after /h/, intrusive [à] next to
/h/ and nasals, and the quality of vowels before and after /l/. We also examine affrication of /t/ and /d/ before /w/. The spontaneous speech comes from
132 hour-long interviews from the Raleigh Corpus (Dodsworth and Kohn
2012), and the lab speech is a set of 29 wordlist recordings exhibiting the
overt variables under investigation. We report the observed relationships
between the production of the covert and overt variables in the lab speech,
and relate this to the distribution of variants in spontaneous speech.
5aSC11. Splitting of Arabic communal dialects at childhood: The case
of consonant acquisition in Kfar Kanna Israel. Judith K. Rosenhouse
(Linguist and Humanities and Arts, SWANTECH Ltd. and Technion I.I.T.,
9 Kidron St., Haifa 3446310, Israel, judith@swantech.co.il) and Jomana
Abu Dahoud (Commun. Disord., Tel-Aviv Univ., Tel-Aviv, Israel)
Phonetic studies of the acquisition of Arabic dialects are few, both in
Israel and in Arab countries where native speakers use Arabic dialects as
their daily communication means. The need to know when and how dialects
split in the numerous Arabic-speaking communities is important both for
the linguistic aspect and for some practical clinical goals. The current study
focuses on the acquisition of consonants in Kfar Kanna (near Nazareth), a
village in the north of Israel, which has a mixed population of Christian and
Moslem inhabitants. Participants were altogether 127 girls and boys, in six
age groups (3;00 to 7;00 years old) of those faith communities. The children
had normal development and no hearing or speech problems. Our findings
show that the pronunciation of the colloquial Arabic consonantal inventory
develops with age, as expected. It also assimilates gradually to the adult
faith groups’ phonetic systems. This is evident, especially in a few distinctive consonants, e.g., /q, D3/. Due to schooling, some effects of the Modern
Standard Arabic phonetic system are also evident in the older children’s
data. This is a first study of communal Arabic dialects development in
Israel, as far as we know.
5aSC12. Detection of creaky voice as a function of speaker pitch and
gender. Lisa Davidson (Linguist, New York Univ., 10 Washington Pl., New
York, NY 10003, lisa.davidson@nyu.edu)
Creaky voice in American English speakers (especially women) has
been flagged as a negative characteristic, such as in business and radio
(Anderson et al. 2014, Glass 2015). However, it is unclear how accurately
naïve listeners can identify creaky voice, and what factors facilitate or
hinder its identification. American listeners (N = 55) are presented with
stimuli from four podcast hosts: a high- and low-pitched male speaker, and
a high- and low-pitched female speaker. Other manipulated factors include
whether or not the utterance is a full sentence, and whether the utterance is
completely modal, completely creaky, or partially creaky (begins modal and
ends creaky). After a short familiarization, listeners identify whether 1.5-s
utterances contain creaky voice. Results show that listeners are significantly
less accurate in identifying creak in both male speakers than in both female
speakers, and less accurate when the utterance is partially creaky. For male
speakers, listeners are more accurate on fragments than on full sentences.
Lower accuracy on male speech may be a combination of smaller, less noticeable differences between average modal and creaky pitch, if listeners
heavily rely on low F0 as a cue to creaky voice (Khan et al 2016), and bias
toward attributing creak to female voices.
then played 500 ms vowels produced by both AmE and NZE speakers. Half
of the listeners were given correct information on the speakers’ dialect, and
half incorrect. Listeners’ classifications of vowels were affected by what
they were told about the speakers’ dialect. Vowels labeled as a given dialect,
correctly or not, were more likely to be classified as if they were from that
dialect. There was also an effect of speakers’ actual dialect. Overall, the
AmE-speaking listeners were more accurate when identifying vowels from
AmE than those from NZE; even with the very limited acoustic information
available listeners are still sensitive to inter-dialectal differences. Any model
of cross-dialect perception, then, must account for listener’s use of both
social and acoustic cues.
5aSC13. The role of voice onset time in the perception of English voicing contrasts in children’s speech. Elaine R. Hitchcock (Commun. Sci.
and Disord., Montclair State Univ., 116 West End Ave., Pompton Plains, NJ
07444, hitchcocke@mail.montclair.edu) and Laura L. Koenig (Haskins
Labs., New Haven, CT)
The perception of phonemic voicing distinctions is typically attributed
mainly to voice onset time (VOT). Most previous research focusing on voicing discrimination used synthetic speech stimuli varying in VOT. Results of
this work suggest that adult listeners show stable crossover boundaries in
the 20-35 ms range. However, no research has evaluated how VOT values
correspond to adult labeling regardless of whether the intended target is
voiced or voiceless. The present study obtained adult labeling data for natural productions of bilabial and alveolar pairs produced by 2-3-year-old
monolingual, English-speaking children. Randomized stimuli were presented twice to 20 listeners resulting in 5,760 rated stimuli. Stimuli were
categorized as short VOT (<20 ms), ambiguous VOT (20-35 ms) and long
VOT (>35 ms). The findings show that listeners demonstrated the greatest
accuracy for bilabials (>99%) and alveolars (>92%) when the target
matched the expected VOT duration (i.e., doe→short lag and toe→long
lag). As expected, ambiguous tokens showed generally lower levels of accuracy across all stimuli although listeners were able to identify the target phoneme with greater than chance accuracy. These findings suggest that other
variables such as burst intensity, fundamental frequency, and first formant
transition duration contribute to adults’ perception of children’s stops.
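The VOT categorization described above is a simple threshold rule; a sketch using the stated cutoffs, with invented token values:

```python
def vot_category(vot_ms):
    """Bin a VOT value using the cutoffs from the abstract:
    short (<20 ms), ambiguous (20-35 ms), long (>35 ms)."""
    if vot_ms < 20:
        return "short"
    if vot_ms <= 35:
        return "ambiguous"
    return "long"

# Hypothetical child productions (ms), one per expected category.
tokens = [8.0, 27.5, 52.0]
cats = [vot_category(v) for v in tokens]
```

Listener responses can then be scored against the bin of each stimulus, e.g., a "doe" response to a short-VOT token counts as a match.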
5aSC14. Cross-register speaker identification: The case of infant and
adult directed speech. Thayabaran Kathiresan, Volker Dellwo (Phonet.
Lab., Univ. of Zurich, Plattenstrasse 54, Zurich, Zurich 8032, Switzerland,
thayabaran.kathiresan@uzh.ch), Moritz Daum (Development Psych., Univ.
of Zurich, Zurich, Zurich, Switzerland), and Rushen Shi (Université du Québec, Montreal, QC, Canada)
The performance of automatic speaker recognition (ASR) systems
decreases when training and test data are produced in different social situations (speech registers). The present research tested ASR performance
across adult- and infant-directed speech registers (ADS and IDS respectively). IDS compared to ADS is generally characterized by higher and
more variable F0, hyper-articulated vowels, and greater segment duration and
variability. Our dataset consisted of 12 sentences read by 10 Swiss-German
mothers to their infants (IDS register) and to an adult experimenter (ADS
register). ASR was performed when training and test registers were the
same (within register) and when they varied (between register) in 3 experiments. Experiment I used segmental features such as MFCCs and their deltas. Results revealed a considerable recognition rate within register (87%) that dropped to about half between registers (44%). This suggests that the variability between IDS and ADS poses challenges for ASR. Experiment II (in
progress) uses prosodic features such as F0 statistics, local and long term
variations of F0, intensity variations and energy of the frame for the identification. In experiment III, segmental and prosodic features are combined to
model the classifier for the identification done in the previous experiments.
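As an illustration of the within- vs. between-register comparison, the following sketch runs a toy nearest-centroid speaker classifier on synthetic MFCC-like features; the speaker count and feature dimension echo the study's setup, but the features, the register-offset model, and the classifier are invented stand-ins, not the authors' system:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the setup: 10 "speakers", 13-dim MFCC-like vectors.
n_speakers, dim = 10, 13
speaker_means = rng.normal(size=(n_speakers, dim))
# Register mismatch modeled as a speaker-specific offset (e.g., IDS vs. ADS).
register_offset = rng.normal(size=(n_speakers, dim))

def utterances(offset_scale, n_per_spk=12):
    """Simulate per-utterance feature vectors for every speaker."""
    X, y = [], []
    for spk in range(n_speakers):
        mu = speaker_means[spk] + offset_scale * register_offset[spk]
        X.append(mu + 0.3 * rng.normal(size=(n_per_spk, dim)))
        y += [spk] * n_per_spk
    return np.vstack(X), np.array(y)

def nearest_centroid_id(train_X, train_y, test_X):
    """Classify each test vector as the speaker with the closest centroid."""
    centroids = np.stack([train_X[train_y == s].mean(axis=0)
                          for s in range(n_speakers)])
    d = np.linalg.norm(test_X[:, None, :] - centroids[None, :, :], axis=-1)
    return d.argmin(axis=1)

train_X, train_y = utterances(0.0)      # training register (e.g., ADS)
within_X, within_y = utterances(0.0)    # same register
between_X, between_y = utterances(1.0)  # other register (e.g., IDS)

within_acc = (nearest_centroid_id(train_X, train_y, within_X) == within_y).mean()
between_acc = (nearest_centroid_id(train_X, train_y, between_X) == between_y).mean()
```

The register mismatch shifts each speaker's feature distribution away from the trained centroids, reproducing the qualitative within- vs. between-register gap.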
5aSC15. Acoustic features and gender differences in clear and conversational speech produced in simulated environments. Shae D. Morgan,
Sarah H. Ferguson (Commun. Sci. and Disord., Univ. of Utah, 390 South
1530 East, Ste. 1201, Salt Lake City, UT 84112, shae.morgan@utah.edu),
and Eric J. Hunter (Communicative Sci. and Disord., Michigan State Univ.,
East Lansing, MI)
In adverse listening environments or when barriers to communication
are present (such as hearing loss), talkers often modify their speech to facilitate communication. Such environments and demands for effective communication are often present in professions that require extensive use of the
voice (e.g., teachers, call center workers, etc.). Women are known to suffer
a higher incidence of voice disorders among those in these professions, possibly due to the accommodation strategies they employ when in adverse
environments. The present study assessed gender differences in speech
acoustic changes made in simulated environments (quiet, low-level noise,
high-level noise, and reverberation) for two different speaking-style instructions (clear and conversational). Ten talkers (five male, five female) performed three speech production tasks (a passage, a list of sentences, and a picture description) in each simulated environment. The two speaking styles were recorded in separate test sessions. Several acoustic features relevant to clear speech will be compared between simulated environments, speaking styles, and talker genders to identify differences that may begin to address the higher incidence of voice disorders among women in professions with high vocal use.
J. Acoust. Soc. Am., Vol. 141, No. 5, Pt. 2, May 2017
5aSC16. Perceptual learning and gender normalization in fricative perception. Benjamin Munson and Mara Logerquist (Univ. of Minnesota, 164
Pillsbury Dr., SE, Minneapolis, MN 55455, munso005@umn.edu)
Listeners identify fricatives ambiguous between /s/ and /S/ differently
depending on whether they believe that the talker is a woman or a man
(Strand & Johnson, 1996). The current experiment examined how robust
this effect is across experimental manipulations in a relatively large group
(n = 99) of listeners. Listeners identified a sack-shack continuum created by
pairing a fricative continuum with a VC token with gender neutral acoustic
characteristics. Listeners who participated in an experiment in which the
stimuli were paired with a male face did not label fricatives differently from
listeners for whom the stimuli were paired with a female face. Listeners
who participated in an experiment in which the stimuli were paired with
both male and female faces identified fricatives differently in the two conditions, but only for participants who were flexible in their assignment of
talker gender from speech stimuli alone in a second experiment. Most strikingly, fricative identification for listeners in all three experiments changed
significantly over the course of the experiment, as compared to listeners in a
baseline experiment in which the stimuli were not paired with a face. This
suggests that the presence of a face delays listeners’ perceptual learning of
ambiguous sounds.
5aSC17. Acoustic analysis of whispery voice disguise in Chinese. Cuiling
Zhang (Southwest Univ. of Political Sci. & Law, Tawan St., NO.83,
Huanggu District, Shenyang, Liaoning 110854, China, cuiling-zhang@forensic-voice-comparison.net) and Bin Lin (The City Univ. of Hong Kong,
Kowloon, Tong, Hong Kong)
This paper investigates the auditory and acoustical features of disguised
whispery voices, the acoustic difference between normal (non-disguised)
voices and whispery voices, and the effect of whispery disguise on forensic
speaker recognition. Recordings of eleven male college students’ normal
voices and whispery disguised voices were collected. All their normal and
whispery speech was acoustically analyzed and compared. The parameters average syllable duration, intensity, vowel formant frequencies, and long-term average spectrum (LTAS) were measured and statistically analyzed. The effect of whispery disguise on forensic speaker recognition by auditory and phonetic-acoustic approaches was evaluated. Correlation and regression
analyses were made on the parameters of whispery voice and normal voice.
To some extent, these simple regression models can be used for parameter
compensation in forensic casework.
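The parameter-compensation idea, regressing a normal-voice parameter on its whispered counterpart, can be sketched as follows; the F1 values are invented, and the single-predictor linear model is only our reading of "simple regression models":

```python
import numpy as np

# Invented paired measurements of one parameter (here F1, in Hz) for the same
# speakers in normal and whispered speech; whisper typically raises formants.
normal_f1 = np.array([520., 540., 560., 500., 530., 555., 515., 545.])
whisper_f1 = normal_f1 + 80. + np.array([5., -3., 8., -6., 2., -4., 7., -1.])

# Fit normal = a * whisper + b, the kind of simple regression the abstract
# proposes for parameter compensation in forensic casework.
a, b = np.polyfit(whisper_f1, normal_f1, deg=1)

def compensate(whisper_value):
    """Map a parameter measured on a whispered voice back toward the value
    expected for the same speaker's normal voice."""
    return a * whisper_value + b
```

In casework, a value measured from a disputed whispered sample could then be compared against normal-voice reference material after compensation.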
5aSC18. Phonetic properties of the non-modal phonation in Shanghainese register contrast. Jia Tian and Jianjing Kuang (Linguist, Univ. of
Pennsylvania, Apt B3, 4105 Spruce St., Philadelphia, PA 19104, jiatian@
sas.upenn.edu)
Earlier literature documented that the tonal registers of Shanghainese
were distinguished by both pitch and breathy voice, although recent studies
suggested that young speakers might have lost the non-modal phonation.
The goal of our present study is two-fold: (1) with the development of analytic tools, we extensively investigated the acoustic and articulatory properties of the breathy voice in Shanghainese; (2) with the ongoing sound
change in mind, we recruited a large pool of speakers, born from 1930s to
2000s, so that we are able to discuss the different strategies in producing the
register contrast among speakers. Simultaneous audio and EGG signals
were collected. Statistical models suggested that speakers born after 1980
use different strategies in producing the register contrast. For older speakers,
overall, the lower register has a breathy phonation type: periodicity and
noise measures (HNR and CPP) are the most important acoustic correlates,
followed by spectral tilts (H1-A1, H1-A2, and H1-A3); articulatorily, Contact Quotient and Speed Quotient reliably distinguish the two registers. For younger speakers, spectral measures are generally no longer a reliable cue, but noise measures still differ significantly between the two registers, since some younger speakers produce the lower register tones with creaky voice.
Acoustics ’17 Boston
5aSC19. Power priming and speech perception. Ian C. Calloway (Linguist, Univ. of Michigan, 2947 Roundtree Blvd., Ypsilanti, MI 48197, iccallow@umich.edu)
English listeners use information about speaker gender in categorizing a
sibilant as /s/ or /S/. This study investigates whether sibilant categorization
is influenced by whether cues to speaker gender are congruous and whether
self-perceived power in turn influences one’s ability to process incongruous
gender cues. One’s self-perceived power, the perceived capacity to control
another individual’s resources, can inhibit how one attends to information
associated with a social category. Participants were primed experimentally
for a high or low degree of self-perceived power. They then performed a
forced-choice identification task—for each trial, participants were presented
with an auditory stimulus—a word ranging from “sigh” to “shy”—and a visual stimulus—a male or female face—and they indicated whether they
heard “sigh” or “shy.” Participant likelihood to respond “sigh” was significantly influenced by speaker gender, whether the gender of the face matched
that of the speaker, and the power prime the participant received. For the female voice, participants in both power-prime conditions responded “sigh” more often when presented
with a male face. For the male voice, however, low-power individuals were
less likely to respond “sigh” when presented with a female face, while highpower individuals responded similarly regardless of the face presented.
5aSC22. Effects of age, sex, context, and lexicality on hyperarticulation
of Korean fricatives. Charles B. Chang (Linguist Program, Boston Univ.,
621 Commonwealth Ave., Boston, MA 02215, cc@bu.edu) and Hae-Sung
Jeon (School of Lang. and Global Studies, Univ. of Central Lancashire,
Preston, Lancashire, United Kingdom)
Seoul Korean is known for a rare three-way laryngeal contrast among
lenis, fortis, and aspirated voiceless stops, which has recently undergone a
change in phonetic implementation: whereas older speakers rely more on
voice onset time (VOT) to distinguish lenis and aspirated stops, younger
speakers rely more on onset fundamental frequency (f0) in the following
vowel. This production difference is reflected in disparate strategies for
enhancing the contrast in clear speech, supporting the view that younger and
older speakers represent the three laryngeal categories differently in terms
of VOT and f0 targets (Kang & Guion, 2008). In the current study, we used
the clear speech paradigm to test for change in the representation of the
two-way contrast between fortis (/s*/) and non-fortis (/s/) fricatives. Native
Seoul Korean speakers (n = 32), representing two generations and both
sexes, were recorded producing the coronal stops and fricatives in different
vowel contexts, item types (real vs. nonce words), and speech registers
(plain citation vs. clear). We report acoustic data on how the above factors
influence production of the fricative contrast and discuss implications for
the phonological categorization of non-fortis /s/ as lenis, aspirated, or a
hybrid lenis-aspirated category.
5aSC20. Apparent-time study of interdental-stopping among English-monolingual Finnish- and Italian-heritage Michiganders. Paige Cornillie, Julianne Fosgard, Samantha Gibbs, Delani Griffin, Olivia Lawson, and Wil A. Rankinen (Commun. Sci. and Disord., Grand Valley State Univ., 515 Michigan St. NE, Ste. 300, Office 309, Grand Rapids, MI 49503, wil.rankinen@gvsu.edu)
The production of coronal oral stops in place of interdental fricatives, referred to as interdental-stopping, has been documented in Michigan’s Upper Peninsula (UP) [3, 2], as well as in other ethnic-heritage-influenced English varieties. However, there is a lack of quantitative inquiry into the
English varieties. However, there is a lack of quantitative inquiry into the
degree to which this salient feature is present among Michigan UP’s now
predominantly monolingual English-speaking communities; recent studies
have focused primarily on the last remaining older-aged bilinguals [1].
Michigan’s UP is in an ideal position to examine to what extent this feature
is present among a rural and predominantly monolingual English-speaking
community. The present study examines 40 Finnish-Americans and 44 Italian-Americans, who are all monolingual speakers from Michigan’s Marquette County. Both samples are stratified by age, sex, and socioeconomic status. All data are obtained from a passage task. To what degree, if any, does stopping occur among the Finnish- and Italian-heritage monolingual English-speaking communities? This study reveals interdental-stopping
occurring most often among working-class males but least among Italian
middle- and younger-aged groups. The study’s apparent-time construct
highlights a potential change in the covert prestige that has been typically
associated with this feature among the older generation [1]. The decrease of interdental-stopping among the younger generation indicates a shift in the feature’s prestige within the community.
5aSC21. Generalization of cross-category phonetic imitation of Mandarin regional variants. Qingyang Yan (Linguist, The Ohio State Univ., 591 Harley Dr. Apt. 10, Columbus, OH 43212, yan@ling.ohio-state.edu)
The current study investigated cross-category phonetic imitation of Jianshi Mandarin regional variants by Laifeng participants. An auditory shadowing task and a post-exposure reading task were used to examine how
participants imitated vowel variant [i] and coda variant [n] during shadowing and how imitation generalized to various types of novel words. During
the post-exposure reading task, participants were instructed to say the words
like the talker they heard, and to simply read the words, in a between-subjects design. Laifeng participants consistently imitated Jianshi [i] and [n]
variants during shadowing, and imitation generalization was observed, but
only when participants were instructed to imitate the previously heard
talker. Imitation of both variants generalized to novel words with the same
syllables (onset + rime + tone) but different orthography, novel words with
the same onset and rime but different tones, and novel words with the same
rime and tone but different onsets from the shadowed words, with similar
degrees of imitation across these three types of novel words. These results
suggest that generalization of cross-category imitation is a controlled, as
opposed to automatic, process, and that cross-category imitation can operate
and generalize at syllable, onset + rime, and phoneme levels in Mandarin.
THURSDAY MORNING, 29 JUNE 2017
ROOM 302, 8:20 A.M. TO 12:20 P.M.
Session 5aSPa
Signal Processing in Acoustics: Audio and Array Signal Processing I
Gary W. Elko, Cochair
mh Acoustics, 25A Summit Ave., Summit, NJ 07901
Janina Fels, Cochair
Institute of Technical Acoustics, RWTH Aachen University, Neustr. 50, Aachen 52074, Germany
David C. Swanson, Cochair
Penn State ARL, 222E ARL Bldg., PO Box 30, State College, PA 16804
Contributed Papers
8:20
5aSPa1. Comparison of beamforming methods to reconstruct extended,
partially-correlated sources. Blaine M. Harker, Kent L. Gee, Tracianne B.
Neilsen (Dept. Phys. & Astronomy, Brigham Young Univ., N283 ESC,
Provo, UT 84602, blaineharker@gmail.com), and Alan T. Wall (Battlespace
Acoust. Branch, Air Force Res. Lab, Wright-Patterson AFB, OH)
Advanced cross-beamforming methods improve upon traditional beamforming to reconstruct complex source information and to estimate their respective acoustic radiation. Regularization of the cross-beamforming matrix
as part of the calculation procedure helps improve method robustness, but
differences in implementation impact volume velocity source results and
subsequent field predictions. This paper compares four regularization-based cross-beamforming methods: the hybrid method, functional beamforming, generalized inverse beamforming, and mapping of acoustic sources (MACS), along with ordinary cross-beamforming, in their ability to reproduce source and field characteristics for an extended, partially correlated numerical source that mimics key characteristics of supersonic jet noise radiation. The four methods that rely on regularization significantly outperform ordinary cross-beamforming, yet the effectiveness of each of the four methods depends on the individual regularization scheme, which in turn depends on frequency as well as signal-to-noise ratio. Among these methods, the generalized inverse method shows the greatest resistance to regularization variability when comparing results to benchmark cases, although in
many cases the hybrid method can give slight improvements. The successful
application of the methods demonstrates the utility of cross-beamforming in
formulating equivalent source models for accurate field prediction of complex sources, including jet noise. [Work supported by ONR and USAFRL
through ORISE.]
8:40
5aSPa2. Single-layer array method to reconstruct extended sound sources facing a parallel reflector. Elias Zea (KTH Royal Inst. of Technol.,
Teknikringen 8, Stockholm 100 44, Sweden, zea@kth.se) and Ines Lopez-Arteaga (KTH Royal Inst. of Technol., Eindhoven, Netherlands)
The accuracy of sound field reconstruction methods with single-layer
microphone arrays is subject to the room or enclosure in which the measurements take place. Thus, the authors recently introduced a single-layer
method that can be employed to reconstruct compact sources in the presence
of a reflecting surface that is parallel to the array. Now the authors propose
a method conceived for extended planar sources such as baffled plates facing a parallel reflector. The method is based on a wavenumber-domain function describing the propagation paths between the source, the reflector and
the array. The free-space sound field radiated by the source is then recovered by means of a regularized inversion of the propagation function.
Numerical simulations are performed in order to assess the method’s performance and potential for source reconstructions. The results are promising
and point towards future experimental validation.
9:00
5aSPa3. Measurement extension limits of patch nearfield acoustic holography. Kanthasamy Chelliah and Ralph T. Muehleisen (Argonne
National Lab., 9700 S Cass Ave., Bldg. 362, Lemont, IL 60439, kchelliah@
anl.gov)
Patch nearfield acoustic holography is widely used when the measurement aperture is smaller than the area of the sound source. This paper discusses the factors affecting the limit of aperture extension. One-step patch nearfield acoustic holography based on an equivalent source model was considered in this study. The available literature provides constant estimates of the ratio of measurement area to source area. However, the present study shows that this extension limit is a variable depending on a few
parameters. Wave number, hologram distance and choice of regularization
technique were found to affect the extended area of reconstruction significantly. By choosing the right combination of these parameters, accurate
reconstructions were achieved even when the measurement area was only
ten percent of the source surface in size. This paper provides a systematic
comparison of reconstruction errors for cases with various parameter combinations. A model relating the parameters and the extension limit is under
development.
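The equivalent-source inversion at the core of one-step patch NAH can be illustrated with a Tikhonov-regularized least-squares step; the transfer matrix below is a random stand-in for the Green's-function matrix, and the regularization parameter is an arbitrary choice, so this is only a sketch of the inversion, not the authors' method:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy equivalent-source model: hologram pressures p arise from source
# strengths q through a transfer matrix G. In real patch NAH, G contains
# Green's functions between equivalent sources and microphones; here it is
# a random, deliberately ill-conditioned stand-in.
n_mics, n_src = 24, 40
G = rng.normal(size=(n_mics, n_src)) * (1.0 / (1.0 + np.arange(n_src)))
q_true = rng.normal(size=n_src)
p = G @ q_true + 1e-3 * rng.normal(size=n_mics)

def tikhonov_solve(G, p, alpha):
    """Regularized least squares: minimize ||G q - p||^2 + alpha ||q||^2.
    The choice of alpha stands in for the regularization technique the
    abstract identifies as critical to the extension limit."""
    A = G.T @ G + alpha * np.eye(G.shape[1])
    return np.linalg.solve(A, G.T @ p)

q_est = tikhonov_solve(G, p, alpha=1e-4)
```

Larger alpha trades fidelity to the measured pressures for stability, which is why the achievable aperture extension depends on the regularization choice.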
9:20
5aSPa4. The nearfield acoustic holography using a concentric rigid
microphone array to identify cylindrical sources disturbed by environment noise. Weikang Jiang and Shang Xiang (Shanghai Jiao Tong Univ.,
800 Dong Chuan Rd., Shanghai 200240, China, wkjiang@sjtu.edu.cn)
Many nearfield acoustic holography (NAH) techniques have been proposed to identify acoustic noise sources, but they are usually applicable only in a free sound field. To extend NAH to more applications in noisy environments
such as factories, an NAH based on a rigid rectangle microphone array was
recently proposed to reconstruct the normal velocities of vibrating plates in an environment with disturbing noise sources. In this presentation, an NAH using a rigid microphone array whose profile matches the vibrating surface is proposed to identify cylindrical surface sources. The microphone
array is flush-mounted at the bottom plane, and two concentric side surfaces
and two rectangular side surfaces are designed to satisfy the Neumann
boundary condition. The transfer matrix between the normal velocities of
sources and measured pressure is expressed as the summation of acoustic
modes of the volume within the rigid array. The disturbing waves from the environment can be decomposed by the inverse patch transfer function method. The normal velocities of a cylindrical surface source are correctly reconstructed in a numerical simulation, as well as for a motor operating in a workshop. The results indicate that the proposed procedure is valid for reconstructing velocities of a cylindrical sound source in a noisy environment.
9:40
5aSPa5. Model-matching for impulse sound localization in urban areas.
Sylvain Cheinet and Loic Ehrhardt (ISL, 5 Rue General Cassagnou, SaintLouis 68300, France, sylvain.cheinet@isl.eu)
The presentation addresses the localization of a point impulse sound source with distributed sensors in an urban area of typical size 150 m × 150 m.
In the considered approach, the source is localized by matching some
observed characteristics of the signals (here, the first times-of-arrival) to
those obtained from a pre-defined database of simulations with known
source positions. The localization performance is analyzed on the basis of
small-scale model and real-scale measurements, with various building
heights, source positions, and sensor combinations from 15 down to 5 microphones.
The time-matching localization attains an accuracy of 10 m in the vast majority of configurations. A confidence level for each localization is satisfactorily tested. The robustness to building height and to the urban map
is evaluated. Adaptations of the approach to real-time constraints and to
shot and shooter localizations are demonstrated.
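The model-matching step, comparing observed first times-of-arrival against a pre-computed database of simulated arrivals, can be sketched as follows; the sensor count, candidate positions, and times are invented, and the squared-error cost is our assumption:

```python
import numpy as np

def locate_by_toa_matching(observed_toa, database):
    """Match observed first times-of-arrival against a database of simulated
    TOA vectors for candidate source positions. Absolute emission time is
    unknown, so only differences relative to the earliest arrival are used."""
    obs = np.asarray(observed_toa, float)
    obs = obs - obs.min()
    best_pos, best_cost = None, np.inf
    for pos, sim in database.items():
        sim = np.asarray(sim, float)
        cost = np.sum((obs - (sim - sim.min())) ** 2)
        if cost < best_cost:
            best_pos, best_cost = pos, cost
    return best_pos

# Invented database: candidate (x, y) positions -> simulated TOAs at 4 sensors.
db = {
    (30, 40): [0.00, 0.12, 0.25, 0.31],
    (80, 10): [0.20, 0.05, 0.00, 0.14],
    (60, 90): [0.09, 0.00, 0.18, 0.02],
}
print(locate_by_toa_matching([1.20, 1.05, 1.00, 1.14], db))  # → (80, 10)
```

In practice, the database entries would come from wave simulations over the urban map with known source positions, which is what makes the scheme robust to multipath around buildings.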
10:00–10:20 Break
10:20
5aSPa6. Localization and source assignment of blast noises from a military training installation. Michael J. White, Edward T. Nykaza (US Army
ERDC/CERL, PO Box 9005, Champaign, IL 61826, michael.j.white@
usace.army.mil), and Andrew Hulva (US Army ERDC/CERL, Blacksburg,
VA)
Time differences of arrival (TDOA) are often sufficient data for localization of a sound source with a sensor array. We consider the problem of
localizing multiple impulsive sound sources that occur on a military installation having live-fire training exercises, using a dozen or more noise monitors as a large array. In this setup, though, sounds from multiple events can
arrive in different order at different monitors. When multiple sources are
operating, ambiguous source-detection assignments degrade the localization
estimates. By minimizing a global cost-function on the entire detection catalog, we resolve source-detection assignments and improve the localizations.
We outline an estimator and show results with simulated data and field
measurements.
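The global-cost idea, resolving ambiguous source-detection assignments by choosing the jointly most consistent pairing across all monitors, can be sketched with a brute-force search over per-monitor orderings; the geometry, emission times, and variance-based cost below are all illustrative assumptions, not the authors' estimator:

```python
import itertools
import numpy as np

C = 343.0  # nominal speed of sound, m/s

def assign_detections(monitors, sources, detections):
    """Resolve which detection at each monitor belongs to which of two
    sources by minimizing a global cost: the disagreement (variance across
    monitors) of the implied emission times."""
    monitors = np.asarray(monitors, float)
    sources = np.asarray(sources, float)
    dist = np.linalg.norm(monitors[:, None, :] - sources[None, :, :], axis=-1)
    M = len(monitors)
    best_cost, best_orders = np.inf, None
    for orders in itertools.product([(0, 1), (1, 0)], repeat=M):
        # emission[m, s] = detection assigned to source s minus travel time
        emission = np.array([[detections[m][orders[m][s]] - dist[m, s] / C
                              for s in range(2)] for m in range(M)])
        cost = emission.var(axis=0).sum()
        if cost < best_cost:
            best_cost, best_orders = cost, orders
    return best_orders, best_cost

# Synthetic scenario: two sources, four monitors; the arrival order is
# scrambled at monitors 1 and 3 to mimic out-of-order arrivals.
monitors = [(0.0, 0.0), (100.0, 0.0), (0.0, 100.0), (100.0, 100.0)]
sources = [(30.0, 30.0), (70.0, 60.0)]
t_emit = [0.0, 0.3]
d = np.linalg.norm(np.asarray(monitors)[:, None, :]
                   - np.asarray(sources)[None, :, :], axis=-1)
arrivals = [(t_emit[0] + d[m, 0] / C, t_emit[1] + d[m, 1] / C) for m in range(4)]
detections = [arrivals[0], arrivals[1][::-1], arrivals[2], arrivals[3][::-1]]
orders, cost = assign_detections(monitors, sources, detections)
```

The exhaustive search is only feasible for small problems; for a dozen monitors and many events, a combinatorial optimizer over the detection catalog would replace the `itertools.product` loop.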
10:40
5aSPa7. DOA (direction of arrival) estimation of incident sound waves
on a spherical microphone array: Comparison of some correction methods proposed to solve the DOA bias. Jean-Jacques Embrechts (Elec. Eng.
and Comput. Sci., Univ. of Liege, Campus du Sart-Tilman B28, Quartier
Polytech 1, 10 Allée de la découverte, Liege 4000, Belgium, jjembrechts@
ulg.ac.be)
A spherical 16-microphone array has been designed and built for room
acoustics applications (measurement of directional room impulse responses or
DRIR). The identification of strong specular reflections in a DRIR requires an
accurate determination of their direction of incidence on the sphere. Beamforming methods are therefore applied for their DOA (direction of arrival)
estimation. However, crude application of these methods revealed significant deviations between the measured and expected DOAs. In-depth experiments in anechoic conditions have been carried out to analyze this problem
and they revealed that the origin of these deviations could be related to the
non-rigidity of the spherical surface of the array. In these experiments, incident waves were generated by a pseudo-point source at several frequencies
in the anechoic chamber. The microphone array is rotated and oriented at several azimuthal positions. The sound pressures measured on the sphere by the
16 microphones are then compared with their theoretical values obtained with
a rigid sphere assumption, which revealed some differences. Some methods
are then proposed to correct this problem. These methods are finally presented
and their results in DOA estimation tasks are compared.
11:00
5aSPa8. Deconvolving the conventional beamformed outputs. Tsih C.
Yang (College of Information Sci. and Electron. Eng., Zhejiang Univ., Rm.
412, Bldg. of Information Sci. and Electron. Eng., 38 Zhe Da Rd., Xihu District, Hangzhou, Zhejiang, China, Hangzhou 310058, China, tsihyang@
gmail.com)
Horizontal line arrays are often used to detect/separate a weak signal
and estimate its direction of arrival among many loud interfering sources
and ambient noise. Conventional beamforming (CBF) is robust but suffers
from fat beams and high level sidelobes. High resolution beamforming such
as minimum-variance distortionless-response (MVDR) yields narrow beam
widths and low sidelobe levels but is sensitive to signal mismatch and
requires many snapshots of data. This paper applies a deconvolution algorithm used in image de-blurring to the conventional beam power of a uniform line array (spaced at half-wavelength) to avoid the instability problems
of common deconvolution methods. The deconvolved beam power yields
narrow beams and low sidelobe levels similar to, or better than, MVDR and
at the same time retains the robustness of CBF. It yields a higher output signal-to-noise ratio than MVDR for isotropic noise. Performance is evaluated
with simulated and real data. Deconvolution is also applied to a circular
array to compare with that obtained using superdirective beamforming
(SDB) at small ka, where a is the radius and k is the wavenumber. The
deconvolved beam output achieves similar performance to the SDB, and
offers the same robustness as CBF at small ka.
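The image-deblurring style of deconvolution applied to CBF beam power can be illustrated with a standard Richardson-Lucy iteration on a 1-D toy pattern; the beampattern, source layout, and iteration count below are invented, and the paper's actual algorithm may differ:

```python
import numpy as np

def richardson_lucy_1d(observed, psf, n_iter=200):
    """Richardson-Lucy deconvolution applied to a 1-D beam-power pattern.
    The multiplicative update keeps the estimate nonnegative, which is what
    gives this family of methods its stability compared with naive
    inversion."""
    psf = psf / psf.sum()
    psf_flip = psf[::-1]
    est = np.full_like(observed, observed.mean())
    for _ in range(n_iter):
        conv = np.convolve(est, psf, mode="same")
        ratio = observed / np.maximum(conv, 1e-12)
        est = est * np.convolve(ratio, psf_flip, mode="same")
    return est

# Toy scene: two point "sources" blurred by a broad sinc^2 beampattern
# (a stand-in for CBF's fat beams), then deconvolved.
n = 101
truth = np.zeros(n)
truth[35], truth[60] = 1.0, 0.5
psf = np.sinc(np.arange(-10, 11) / 4.0) ** 2
blurred = np.convolve(truth, psf / psf.sum(), mode="same")
sharp = richardson_lucy_1d(blurred, psf)
```

The deconvolved pattern concentrates the smeared beam power back toward the two source directions while remaining nonnegative, mirroring the narrow-beam, low-sidelobe behavior described in the abstract.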
11:20
5aSPa9. Separating silent sources. Richard Goldhor (Speech Technol. &
Appl. Res., 54 Middlesex Turnpike, Entrance D, Bedford, MA 01730, rgoldhor@sprynet.com), Keith Gilbert (Speech Technol. & Appl. Res., Berlin,
MA), and Joel MacAuslan (Speech Technol. & Appl. Res., Bedford,
MA)
In the presence of multiple continuously active acoustic sources, microphone response signals are additive mixtures of acoustic images: that is, filtered, delayed, and scaled versions of the source signals. By adaptively
minimizing a contrast function such as the mutual information between outputs, Blind Source Separation (BSS) algorithms can “demix” sets of such
microphone responses into output signals, each one of which is coherent
with a single hidden source signal. The time required to converge on such a
“separation solution” is typically at least several tens of seconds. However,
important real-world acoustic signals of interest (such as speech) are intermittent, not continuous. As a result, the response mixtures resulting from
such signals include “mute episodes” during which one or more of the sources are silent. These episodes can be employed to advantageously partition
the adaptive separation process, and even to generate permanently valid separation solutions for the momentarily silent sources. We present a method of
rapid optimal isolation (ROI) of intermittent sources, and explore the utility
of enhancing an adaptive BSS algorithm using this new method.
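Detecting the mute episodes that partition the adaptive separation can be sketched with a simple short-time-energy gate; the frame length and threshold below are arbitrary choices of ours, not the authors' method:

```python
import numpy as np

def mute_episodes(signal, frame_len=256, threshold_ratio=0.05):
    """Flag frames whose short-time energy falls below a fraction of the
    median frame energy. Such mute episodes can partition the adaptive
    separation, as the abstract suggests; the gating scheme here is ours."""
    n_frames = len(signal) // frame_len
    frames = np.reshape(signal[:n_frames * frame_len], (n_frames, frame_len))
    energy = (frames ** 2).mean(axis=1)
    return energy < threshold_ratio * np.median(energy)

# Intermittent source: active, silent for a stretch, then active again.
rng = np.random.default_rng(2)
sig = np.concatenate([rng.normal(size=4096), np.zeros(2048),
                      rng.normal(size=4096)])
mask = mute_episodes(sig)  # True exactly on the silent frames
```

In a multi-microphone setting the gate would run per separated output, so that a demixing solution can be frozen for a source the moment it falls silent.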
11:40
5aSPa10. Single channel speech enhancement based on harmonic estimation combined with statistical based method to improve speech intelligibility for cochlear implant recipients. Dongmei Wang and John H. L.
Hansen (Elec. Eng., Univ. of Texas at Dallas, 800 West Campbell Rd.,
ECSN 4.414, Richardson, TX 75080, dongmei.wang@utdallas.edu)
In this study, we propose a single microphone speech enhancement algorithm by combining harmonic structure estimation and traditional MMSE
speech enhancement for a leveraged overall solution. Traditional single
channel speech enhancement methods are usually based on the statistical characteristics of noise signals and are effective only for stationary noise,
but not for non-stationary noise. In our study, we attempt to estimate noise
by exploring the harmonic structure of the target speech combined with temporal noise tracking. In voiced segments, since speech energy is sparsely
carried by harmonic partials, the spectrum located between adjacent harmonic partials is considered noise. We assume that the speech spectrum
distributes continuously along the frequency-dimension. Thus, the noise
overlapped with speech harmonics can be estimated with an interpolation
technique. Next, the estimated noise is incorporated into a traditional
MMSE framework for speech enhancement. A listening test is carried out with 6 cochlear implant recipients to evaluate the proposed speech enhancement algorithm. The experimental results show that the proposed algorithm is able to improve speech intelligibility in terms of word recognition rate for CI listeners.
12:00
5aSPa11. On the influence of continuous subject rotation during HRTF
measurements. Jan-Gerrit Richter and Janina Fels (Inst. of Tech. Acoust.,
RWTH Aachen Univ., Kopernikusstr. 5, Aachen 52074, Germany, jri@
akustik.rwth-aachen.de)
In recent years, the measurement time of individual Head-Related
Transfer Function (HRTF) measurements has been reduced by the use of
loudspeaker arrays. The time reduction is achieved by some kind of parallelization of measurement signals. One such fast system was developed at the
Institute of Technical Acoustics, RWTH Aachen University, and is evaluated in this paper. When measuring HRTFs, the subject is usually rotated by some angle, and stops and waits for the measurement signal to complete before moving to the next measurement angle. It was shown that with this static approach results comparable to a traditional measurement using a single speaker could be achieved. To further reduce the measurement time, a slow continuous subject rotation can be used instead. While this rotation will violate the LTI (linear, time-invariant) requirements of the commonly used signal processing, the influence is assumed to be negligible. As the subject is rotating during the measurement sweep, different azimuth angles are measured per frequency. This frequency-dependent offset in the measurement positions has to be corrected during post-processing. To this end, a spherical harmonic decomposition and reconstruction is applied as an interpolation method. To quantify the influence of the rotation and the subsequent post-processing, a subjective and objective comparison between statically and continuously measured objects is shown.
THURSDAY MORNING, 29 JUNE 2017
ROOM 304, 8:20 A.M. TO 12:20 P.M.
Session 5aSPb
Signal Processing in Acoustics and Underwater Acoustics: Underwater Acoustic Communications
Milica Stojanovic, Chair
ECE, Northeastern Univ., 360 Huntington Ave., 413 Dana Bldg., Boston, MA 02115
Contributed Paper
8:20
5aSPb1. Power delay profiles and temporal vs. spatial degrees of freedom: Predicting and optimizing performance in underwater acoustic communications systems. James Preisig (JPAnalytics LLC, 638 Brick Kiln Rd., Falmouth, MA 02540, jpreisig@jpanalytics.com)
The performance of underwater acoustic communications systems is often characterized as a function of source-to-receiver range or the received in-band SNR. However, these measures are incomplete with respect to predicting performance. There is ample field data for which performance improved with increasing range, or for which data sets with similar SNRs exhibited markedly different performance. Using field and simulation data, the Primary to Intersymbol Interference (ISI) and Noise Ratio (PINR) is shown to be a more complete predictor of performance than SNR. The results also demonstrate the significant adverse effect of ISI on system performance. With this in mind, the optimal configuration in terms of mitigating the impact of ISI is considered for the receive array and processor structure of equalizers. The roles of total degrees of freedom (DOFs), equalizer filter adaptation averaging interval, and the relative stability of the channel’s spatial and temporal structures are evaluated. The performance gains attainable with array spatial aperture vs. the gains attainable with filter temporal depth are analyzed using closed-form expressions as well as simulation and field data.
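One plausible reading of PINR, primary-tap energy over ISI-plus-noise energy, can be computed as below; the tap values and noise power are invented, and the exact definition in the paper may differ:

```python
import numpy as np

def pinr_db(channel_taps, primary_idx, noise_power):
    """Primary-to-ISI-and-noise ratio (PINR) for a sampled channel impulse
    response: energy in the primary (equalizer-aligned) tap over energy in
    all remaining taps (the ISI) plus noise power, in dB. This is one
    plausible reading of the quantity named in the abstract, not the
    author's exact definition."""
    h = np.asarray(channel_taps, dtype=float)
    primary = h[primary_idx] ** 2
    isi = np.sum(h ** 2) - primary
    return 10.0 * np.log10(primary / (isi + noise_power))

# Two channels with nearly identical total power (hence similar received
# SNR) but very different delay spreads produce very different PINR values.
compact = [0.0, 1.0, 0.1, 0.0]   # energy concentrated in one tap
spread = [0.5, 0.71, 0.4, 0.3]   # energy smeared across taps
pinr_compact = pinr_db(compact, 1, 0.01)
pinr_spread = pinr_db(spread, 1, 0.01)
```

This is why two links with the same in-band SNR can perform very differently: the smeared channel converts most of its received power into ISI rather than usable signal.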
Invited Papers
8:40
5aSPb2. Acoustic Communications: Through soils, sands, water, and tissue. Andrew Singer (Elec. and Comput. Eng., Univ. of Illinois at Urbana-Champaign, 110 CSL, 1308 W. Main St., Champaign, IL 61801, acsinger@illinois.edu), Sijung Yang, and Michael Oelze (Elec. and Comput. Eng., Univ. of Illinois at Urbana-Champaign, Urbana, IL)

This talk will cover several experimental results and the challenges that arise in applying the basic concept of acoustic communications to vastly different media. Specifically, results obtained from state-of-the-art ultrasonic communications over distances of 100 m at data rates in excess of 1 Mbps will be described, along with the unique challenges that such high data rates impose on any subsea applications, for which traditional Doppler compensation techniques are wholly inadequate. A sample-by-sample time-scale projection method was developed for streaming video applications for subsea operations. Over much shorter distance scales, even more acoustic bandwidth is available, and data rates in excess of 300 Mb/s have been achieved. Difficulties in such applications arise due to the nonlinearities excited in both the medium and potentially the transducers, owing to the extremely wide bandwidths of operation. For potential biomedical applications, acoustic communication methods have been successfully used to transmit data in excess of 30 Mb/s through animal tissue. Challenges in such applications include not only nonlinearity and Doppler compensation, but also reverberation and scattering off bone and other materials. A final application that will be discussed is the use of acoustic communications methods for through-soils and through-sands experiments with application to various geotechnical projects. Attenuation, coupling, and excitation pose substantive challenges for such applications.

J. Acoust. Soc. Am., Vol. 141, No. 5, Pt. 2, May 2017
Acoustics ’17 Boston
9:00
5aSPb3. Reliable data delivery using packet coding over acoustic links. Rameez Ahmed (Cambridge, MA) and Milica Stojanovic
(ECE, Northeastern Univ., 360 Huntington Ave., 413 Dana Bldg., Boston, MA 02115, millitsa@ece.neu.edu)
Acoustic links are challenged by high bit error rates, which cause data packets to be declared erroneous. To prevent packet loss and achieve reliable data transmission over such links, some form of feedback must be implemented to deliver acknowledgments from the receiver to the transmitter, thus initiating re-transmission of erroneous packets. Conventionally, data packets are grouped and a selective acknowledgment procedure of the stop-and-wait type is employed as a suitable solution for half-duplex acoustic links. We revisit the issue of reliable data transmission and investigate the use of random packet coding in conjunction with a selective acknowledgment procedure. Specifically, we introduce random packet coding and regard a block of coded packets as the equivalent of a packet, now termed the super-packet. We then form groups of super-packets and apply a selective acknowledgment procedure on the so-obtained units. Analytical results, obtained with experimentally verified channel models, demonstrate the power of grouped packet coding, which offers much improved throughput-delay performance on randomly time-varying acoustic channels with long delays.
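The super-packet construction lends itself to a compact illustration. The sketch below is a hypothetical minimal version in Python (the abstract does not specify the code construction; a systematic code with random XOR combinations over GF(2) is assumed, and all names are illustrative): a block of k data packets is encoded into k + r coded packets, and the receiver decodes by Gaussian elimination once it holds any k linearly independent combinations.

```python
import random

PKT_LEN = 8  # bytes per packet (illustrative)

def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def encode(packets, n_repair, rng):
    """Systematic random packet coding: emit the k data packets themselves,
    then n_repair random XOR combinations with coefficients over GF(2)."""
    k = len(packets)
    coded = [(tuple(int(i == j) for j in range(k)), p)
             for i, p in enumerate(packets)]
    while len(coded) < k + n_repair:
        coeffs = tuple(rng.randint(0, 1) for _ in range(k))
        if not any(coeffs):
            continue  # skip the useless all-zero combination
        payload = bytes(PKT_LEN)
        for c, p in zip(coeffs, packets):
            if c:
                payload = xor_bytes(payload, p)
        coded.append((coeffs, payload))
    return coded

def decode(coded, k):
    """Gaussian elimination over GF(2); returns the k data packets,
    or None if the received combinations do not span the block."""
    pivots = {}  # leading column -> (coefficient row, payload)
    for coeffs, payload in coded:
        coeffs = list(coeffs)
        while True:
            lead = next((i for i, c in enumerate(coeffs) if c), None)
            if lead is None:
                break  # linearly dependent row; discard
            if lead not in pivots:
                pivots[lead] = (coeffs, payload)
                break
            pc, pp = pivots[lead]
            coeffs = [a ^ b for a, b in zip(coeffs, pc)]
            payload = xor_bytes(payload, pp)
    if len(pivots) < k:
        return None
    for col in sorted(pivots, reverse=True):  # back-substitution
        ccol, pcol = pivots[col]
        for j in [j for j in pivots if j < col and pivots[j][0][col]]:
            cj, pj = pivots[j]
            pivots[j] = ([a ^ b for a, b in zip(cj, ccol)], xor_bytes(pj, pcol))
    return [pivots[i][1] for i in range(k)]

rng = random.Random(7)
data = [bytes(rng.randrange(256) for _ in range(PKT_LEN)) for _ in range(4)]
block = encode(data, 2, rng)        # a 6-packet "super-packet" for k = 4
lossy = block[:4] + block[5:]       # one coded packet lost in transit
assert decode(lossy, 4) == data     # the block still decodes as a unit
```

Because any k independent combinations suffice, acknowledgments can refer to whole blocks rather than individual packets, which is the throughput-delay trade the abstract analyzes.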
Contributed Papers
9:20

5aSPb4. Estimating the acoustic channel to improve underwater communication. Ballard J. Blair (Electron. Systems and Technol. Div., MITRE Corp., 202 Burlington Rd., Bedford, MA 01730, bblair@mitre.org)

With the increasing number of unmanned vehicles and sensors being deployed to support scientific research, oil and gas exploration, and military operations, research and development in underwater acoustic communication is vital. The costs of testing new ideas for underwater acoustic communication are often prohibitively high due to expensive ship time, personnel, and equipment. It is also challenging to accurately model the physics and dynamics of the underwater communication channel because it is spread in both Doppler and delay. The goal of the presented research is to create a library of channels extracted from available experimental data sets. The extracted channels are compared with physical models to see which characteristics, if any, are predictable from the observed data sets. This library of channels would be used both for comparison of existing underwater communication systems and to aid in the development of communication techniques. [Work supported by the MITRE Innovation Program.]

9:40

5aSPb5. Combining sparse recovery approaches with underwater acoustic channel models for robust communications in the shallow water paradigm. Ananya Sen Gupta (Elec. and Comput. Eng., Univ. of Iowa, 4016 Seamans Ctr. for the Eng. Arts and Sci., Iowa City, IA 52242, ananya-sengupta@uiowa.edu)

Shallow water acoustic communication techniques are fundamentally challenged by rapidly fluctuating multipath effects due to oceanic phenomena such as surface wave focusing, specular reflections from the moving sea surface, and Doppler effects due to fluid motion, as well as sediment-dependent absorption at the sea bottom. Several signal processing techniques have recently been proposed that specialize in recovering the shallow water acoustic channel components using compressive sampling and mixed-norm optimization theory. However, these novel techniques are typically agnostic of the underlying underwater acoustic propagation phenomena. Furthermore, the state of the art in shallow water channel estimation typically does not account for the three-way uncertainty principles governing the localization of sparsity, time, and frequency for the time-varying shallow water acoustic channel. This talk will focus on recent techniques proposed in this domain and their relative benefits and shortcomings, as well as offer new insights into how knowledge of basic underwater acoustic propagation physics can be combined with the state of the art in sparse sensing and related techniques to achieve robust shallow water channel estimation. The talk will also provide an overview of equalization techniques that can be harnessed with the proposed channel estimation techniques.

10:00

5aSPb6. A multiple-input multiple-output orthogonal frequency division multiplexing underwater communication system using vector transducers. Yuewen Wang, Erjian Zhang, and Ali Abdi (Elec. Comput. Eng., New Jersey Inst. of Technol., University Heights, Newark, NJ 07102, ez7@njit.edu)

Vector sensors and transducers are compact multichannel devices that can be used for underwater communication via acoustic particle velocity channels (A. Abdi and H. Guo, "A new compact multichannel receiver for underwater wireless communication networks," IEEE Transactions on Wireless Communications, vol. 8, pp. 3326-3329, 2009). In this paper, a multiple-input multiple-output (MIMO) underwater acoustic communication system using orthogonal frequency division multiplexing (OFDM) modulation is presented. By transmitting multiple independent data streams simultaneously over several channels, this MIMO system can increase the transmission rate, whereas the OFDM modulation mitigates the highly frequency-selective underwater channels. Various components of the system, including vector transducers and algorithms for synchronization, channel estimation, MIMO detection, channel coding, etc., are designed and implemented. Using this system, experiments are conducted to measure and study acoustic particle velocity channels in the MIMO setup. Additionally, system performance parameters such as bit error rate and spectral efficiency are measured and discussed for various conditions and configurations, to understand the performance of the developed vector MIMO-OFDM system. [The work was supported in part by the National Science Foundation (NSF), Grant IIP-1500123.]

10:20–10:40 Break

10:40

5aSPb7. Very low signal to noise ratio coherent communications with an M-ary orthogonal spread spectrum signaling scheme. Jacob L. Silva and Paul J. Gendron (ECE Dept., Univ. of Massachusetts Dartmouth, 285 Old Westport Rd., Dartmouth, MA 02747, jsilva13@umassd.edu)

Considered here is the use of an M-ary orthogonal spread spectrum signal set for coherent symbol detection at very low signal to noise ratios (SNR). We consider symbols that are convolutionally orthogonal over the entire multipath delay spread of the channel and allow for minimum mean square error (MMSE) estimation of an observed acoustic response function with minimal computational effort at the receiver. We employ these signals with an assumed sparsity prior to implement a joint symbol and broadband channel estimation scheme for coherent detection without the use of intra-packet training symbols of any kind. The approach allows us to compensate for the shared time-varying Doppler process of the various coherent arrivals. The approach is demonstrated with at-sea recordings at extremely low received SNR.
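The channel-extraction step behind such a library (5aSPb4) can be illustrated with a least-squares fit of a finite impulse response to a known probe signal; the following is a toy sketch in which the tap count, probe sequence, and noise level are assumptions for illustration, not the authors' processing chain:

```python
import numpy as np

rng = np.random.default_rng(0)

L = 6                                     # assumed channel length in taps
x = rng.choice([-1.0, 1.0], size=200)     # known BPSK-like probe sequence
h_true = np.zeros(L)
h_true[[0, 2, 5]] = [1.0, 0.5, -0.25]     # direct path plus two echoes

y = np.convolve(x, h_true)                # received signal ...
y = y + 0.01 * rng.standard_normal(y.size)  # ... plus a little noise

# Build the convolution matrix so that y ~ X @ h, then solve least squares.
N = x.size
X = np.zeros((N + L - 1, L))
for k in range(L):
    X[k:k + N, k] = x
h_est, *_ = np.linalg.lstsq(X, y, rcond=None)

assert np.allclose(h_est, h_true, atol=0.05)  # taps recovered to within noise
```

Repeating such a fit over successive data segments yields the time-varying delay spread that a channel library would catalog.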
11:00
5aSPb8. Combating impulsive noise for multicarrier underwater acoustic communications. Zhiqiang Liu (US Naval Res. Lab., 4555 Overlook
Ave., Washington, DC 20375, zhiqiang@ieee.org)
This paper presents a novel multicarrier communication scheme that is specially designed for underwater acoustic channels with dominant impulsive noise. A novel multicarrier signaling structure is introduced at the transmitter. Thanks to this signaling structure, the received signal is shown to satisfy some special properties even after various channel distortions. By exploiting these properties, we propose two receiver processing algorithms, one for signaling parameter estimation and the other for symbol recovery. The two algorithms fully take into account the presence of strong impulsive noise, and both are shown to be capable of offering inherent robustness against it. The proposed design is first evaluated via extensive simulations and then tested in a recent sea-going experiment. Both simulated and experimental results validate the proposed design and confirm its merits. [This work was supported by the US Office of Naval Research.]
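For orientation, a classical baseline against impulsive noise is a blanking nonlinearity ahead of the demodulator. The sketch below shows only that generic baseline (the threshold and robust-scale heuristic are illustrative); it is not the signaling-structure-based receiver the abstract proposes.

```python
import numpy as np

def blank_impulses(r, thresh_sigma=4.0):
    """Zero out samples whose magnitude exceeds a robust threshold,
    estimated from the median absolute value of the record."""
    scale = np.median(np.abs(r)) / 0.6745   # robust scale estimate
    out = r.copy()
    out[np.abs(r) > thresh_sigma * scale] = 0.0
    return out

# A clean narrowband signal hit by three strong impulses
clean = 0.5 * np.sin(2 * np.pi * 0.01 * np.arange(1000))
r = clean.copy()
hits = np.array([100, 400, 777])
r[hits] += 10.0

out = blank_impulses(r)
mask = np.ones(r.size, bool)
mask[hits] = False
assert np.all(out[hits] == 0.0)            # impulses blanked
assert np.array_equal(out[mask], r[mask])  # everything else untouched
```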
11:20
5aSPb9. Broadband underwater acoustic communications subsystem design. Corey Bachand, Tyler Turcotte, and David A. Brown (BTech Acoust. LLC, 151 Martine St., Fall River, MA 02723, corey.bachand@cox.net)
Increased bandwidth for acoustic communication places demanding requirements on the electroacoustic transducer, the tuning and matching circuits, and the amplifier design. This presentation investigates various aspects of the transmit channel with a focus on increasing bandwidth to the fullest extent possible. This includes using alternative transducer designs, polarization orientations, transduction materials, broadband tuning networks, and gain/phase control in high-efficiency Class-D amplifiers. Comparisons between modeled and measured performance for systems that achieve 10, 20, and 30 kHz bandwidths are considered.
11:40
5aSPb10. Blind adaptive correlation-based decision feedback equalizer
(DFE) for underwater acoustic communications. Xiaoxia Yang, Jun
Wang, and Haibin Wang (Inst. of Acoust., Chinese Acad. of Sci., No. 21 North 4th Ring Rd., Haidian District, Beijing 100190, China, yangxiaoxia@mail.ioa.ac.cn)
Passive time reversal uses a set of matched filters corresponding to the individual channel impulse responses to reduce intersymbol interference (ISI) in underwater acoustic communications. Because of residual ISI after time reversal, a coupled single decision feedback equalizer (DFE) is necessary. The coupled DFE uses a fixed tap number applicable to most shallow oceans with less user supervision. However, this correlation-based DFE needs a training sequence to estimate the individual channel responses and initialize the DFE, which decreases the effective bit rate and increases user supervision. We therefore propose a new blind structure of time reversal coupled with a DFE, which can complete multichannel identification and equalization at the same time without a training sequence. This new receiver has a reversible structure. In the first mode, it is linear and adapted by blind algorithms, e.g., the constant modulus algorithm (CMA), to estimate the multichannel impulse responses and initialize the equalizer. In the second mode, the receiver becomes a correlation-based DFE. From the viewpoint of user supervision and spectral efficiency, the new structure with no training sequence is more attractive. The blind correlation-based DFE exhibits the same steady-state mean square error (MSE) as the trained structure, and has been validated on real underwater communication data.
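The constant modulus algorithm (CMA) named above can be sketched as a stochastic-gradient tap update that drives the equalizer output toward a constant modulus without any training symbols. Below is a single-channel toy version; the tap count, step size, and test channel are assumptions for illustration, not the authors' configuration:

```python
import numpy as np

def cma_equalize(r, num_taps=11, mu=1e-3, R2=1.0):
    """Blind linear equalization by the constant modulus algorithm:
    minimize E[(|y|^2 - R2)^2] by stochastic gradient descent."""
    w = np.zeros(num_taps, dtype=complex)
    w[num_taps // 2] = 1.0                 # center-spike initialization
    out = np.empty(r.size - num_taps, dtype=complex)
    for n in range(out.size):
        x = r[n:n + num_taps][::-1]        # regressor, most recent first
        y = w @ x
        out[n] = y
        e = y * (np.abs(y) ** 2 - R2)      # CMA error term
        w -= mu * e * np.conj(x)           # gradient step
    return out, w

# QPSK through a mild two-tap channel; no training sequence is used.
rng = np.random.default_rng(6)
s = (rng.choice([-1, 1], 6000) + 1j * rng.choice([-1, 1], 6000)) / np.sqrt(2)
r = np.convolve(s, [1.0, 0.35])[:6000]
out, w = cma_equalize(r)
disp = np.abs(np.abs(out) ** 2 - 1.0)          # modulus dispersion
assert disp[-500:].mean() < disp[:500].mean()  # dispersion shrinks as taps adapt
```

In the blind receiver described above, such a converged CMA stage would hand its channel and equalizer estimates to the correlation-based DFE mode.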
12:00
5aSPb11. Cyclic-feature based Doppler scale estimation for orthogonal
frequency-division multiplexing (OFDM) signals over doubly selective
underwater acoustic channels. Bingbing Zhang (College of Electron. Sci.
and Eng., National Univ. of Defence Technol., Kaifu district, Changsha,
Hunan 410073, China, zbbzb@nudt.edu.cn), Yiyin Wang (Dept. of Automation, Shanghai Jiao Tong Univ., Shanghai, China), Hongyi Wang, Liming
Zheng, Zhaowen Zhuang (College of Electron. Sci. and Eng., National
Univ. of Defence Technol., Changsha, China), and Kele Xu (College of
Electron. Sci. and Eng., National Univ. of Defence Technol., Paris,
France)
Underwater acoustic (UWA) communications enable underwater wireless networks to be applied in various applications, such as oceanographic research, pollution early-warning, disaster prevention, and military systems. However, a major challenge in UWA communications is to combat the Doppler distortion caused by doubly selective (time and frequency selective) channels. Most Doppler scale estimators rely on training data or specially designed packet structures. These methods have fundamental limitations in transmission rate and spectral efficiency. Different from these methods, this paper presents a Doppler scale estimation approach exploiting the redundant information contained within the cyclic prefix (CP) or cyclic suffix (CS) of orthogonal frequency-division multiplexing (OFDM) signals. We analyze the cyclic features of OFDM signals over doubly selective underwater channels in order to demonstrate the relationship between the cyclic features and the Doppler scale. Based on the theoretical analyses, we find that the Doppler scale can be estimated from the extrema of the cyclic autocorrelation function (CAF) of the received signal. Simulations validate our theoretical analyses and the performance of the proposed Doppler scale estimator. Apart from the high estimation performance, we also highlight the utility of our method when only a few OFDM blocks are available.
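The core observation, that a time-scale change moves the CP-induced correlation lag of an OFDM signal from N samples to N/(1 + Δ), can be illustrated with a simple integer-lag search. This is a simplified surrogate for the CAF-extrema estimator in the abstract, with illustrative block sizes and Doppler value:

```python
import numpy as np

rng = np.random.default_rng(1)
N, CP, BLOCKS = 256, 64, 8

# Baseband OFDM-like signal: IFFT blocks, each preceded by its cyclic prefix
blocks = []
for _ in range(BLOCKS):
    syms = (rng.choice([-1, 1], N) + 1j * rng.choice([-1, 1], N)) / np.sqrt(2)
    t = np.fft.ifft(syms) * np.sqrt(N)
    blocks.append(np.concatenate([t[-CP:], t]))
tx = np.concatenate(blocks)

# Model Doppler as a pure time-scale change: resample by (1 + delta)
delta_true = 0.02
m = np.arange(int(tx.size / (1 + delta_true)))
pts = m * (1 + delta_true)
rx = np.interp(pts, np.arange(tx.size), tx.real) \
     + 1j * np.interp(pts, np.arange(tx.size), tx.imag)

# The CP makes tx self-similar at lag N; scaling moves the peak to N/(1+delta)
lags = np.arange(N - 20, N + 21)
metric = np.array([np.abs(np.sum(rx[:-L].conj() * rx[L:])) for L in lags])
L_hat = lags[np.argmax(metric)]
delta_est = N / L_hat - 1

assert abs(delta_est - delta_true) < 0.005
```

A real CAF-based estimator refines this idea with fractional lags and cyclic frequencies, which is where the extrema analyzed in the paper come in.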
THURSDAY MORNING, 29 JUNE 2017
ROOM 306, 8:00 A.M. TO 12:20 P.M.
Session 5aUWa
Underwater Acoustics: Acoustical Localization, Navigation, Inversion, and Communication
Aijun Song, Chair
School of Marine Science and Policy, University of Delaware, 114 Robinson Hall, Newark, DE 19716
Contributed Papers
8:00

5aUWa1. Over-sampling improvement for acoustic triangulation using Barker code audio signals. Romuald Boucheron (DGA HydroDynam., Chaussée du Vexin, Val-de-Reuil 27105, France, romuald.boucheron@intradef.gouv.fr)

Acoustic triangulation is widely used in the submarine domain, for example to estimate the 3-D coordinates of a free model or the location of a buoy. Positioning accuracy is dominated by "perfect" knowledge of the sound speed in the domain and by temporal synchronization. This short communication deals with the use of Barker audio code signals to improve temporal detection. Properties of such signals allow correlation with over-sampled detection, analogous to spatial sub-pixel algorithms in the image processing domain. The first part of the communication presents this kind of signal and the theoretically expected performance at very low signal to noise ratio. The second part is dedicated to results obtained with an academic experiment devoted to measuring the directivity pattern of a submarine acoustic source. To decrease the uncertainty of this measurement, the over-sampling algorithm was used successfully to estimate the position of the source during its displacement. In return, this experimental setup provides good angular resolution and an accurate estimate of the directivity pattern.

8:20

5aUWa2. Underwater source localization using unsynchronized passive acoustic arrays. Daniel P. Zitterbart and Ying-Tsong Lin (Appl. Ocean Phys. & Eng., Woods Hole Oceanographic Inst., 213 Bigelow Lab., MS#11, Woods Hole, MA 02543, dpz@whoi.edu)

Localizing sources of underwater sound is a well-studied field that is utilized by several scientific and naval communities. The scope of localization might differ dramatically, from the necessity to localize targets with sub-meter accuracy to estimating the position of an object on a kilometer scale. Advances in data storage capabilities during the last decade now allow multi-year deployments of autonomous passive acoustic monitoring arrays for which post-recovery time synchronization cannot be guaranteed. For localization of transient signals, like marine mammal vocalizations, arrival-time-based localization schemes are currently the prevalent method. Applying arrival-time-based methods to non-synchronized multi-station arrays eventually leads to large localization uncertainties. Here, we present a backpropagation-based localization scheme that overcomes the necessity to synchronize between array stations for localization purposes. It utilizes waveguide dispersion measured within distributed arrays for simultaneous source localization and time synchronization. Numerical examples are presented to demonstrate that localization uncertainty improves significantly compared to arrival-time-based methods.

8:40

5aUWa3. Geoacoustic inversion using distributed sensors. Jingcheng Zhang (College of Information Sci. and Electron. Eng., Zhejiang Univ., Bldg. of Information Sci. and Electron. Eng., 38 Zhe Da Rd., Hangzhou, Zhejiang 310058, China, 597572827@qq.com) and Tsih C. Yang (College of Information Sci. and Electron. Eng., Zhejiang Univ., Kaohsiung, Taiwan)

Geoacoustic inversion is a method that uses sound to invert for the sediment/bottom properties. The method has been researched extensively using sound received on a vertical or horizontal line array (VLA/HLA). It usually requires a dedicated ship to deploy the array and source to collect the data. Conducting such experiments is costly, and their use is limited in practical applications. It is anticipated that in the future many low-cost acoustic transceivers will be deployed in the ocean in the form of a distributed sensor network. It seems logical to use these distributed sensors as the receivers to conduct geoacoustic inversion using a ship-mounted source, thus covering a wide area wherever the distributed nodes are deployed. The question is how well one can estimate the bottom parameters using distributed sensors, compared with the traditional arrays. Note that VLA/HLA data offer spatial coherence, so one usually carries out geoacoustic inversion using so-called matched-field inversion. The signals received on the distributed sensors may not be coherent, so one will conduct geoacoustic inversion using frequency-coherent inversion methods. The advantage of distributed receivers is spatial diversity, particularly when the source moves. Simulation results will be presented to compare their performance.

9:00

5aUWa4. Source depth discrimination using the array invariant. Rémi Emmetiere, Julien Bonnel (ENSTA Bretagne, 2 rue Francois Verny, Brest cedex 9 29806, France, remi.emmetiere@ensta-bretagne.org), Marie Gehant Pernot (THALES, Sophia Antipolis, France), and Thierry Chonavel (Télécom Bretagne, Plouzané, France)

In this study, low frequency (0-500 Hz) source depth discrimination in deep water is investigated using a horizontal line array (HLA). In this context, propagation is dispersive and can be described by modal theory. Array invariant theory is known to allow source ranging via the modal beam-time migration pattern. This pattern is defined by the evolution of a conventional beamformer output over time. Recently, it has been shown that the array invariant and the waveguide invariant are intrinsically related. In other words, the beam-time migration pattern can be derived using waveguide invariant theory. Utilizing this dependence, we pursue the link between the two invariants to perform source depth discrimination using an HLA. Since the waveguide invariant distribution is strongly related to source depth, the beam-time migration pattern also depends on source depth and would allow source depth discrimination to be achieved. The proposed method is successfully applied on simulated data. Like the classical array invariant method, it is restricted to short signals away from the array broadside, but it could be used with minimal environmental knowledge in a multisource context.
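The over-sampled correlation detection of 5aUWa1, the acoustic analogue of sub-pixel peak interpolation in image processing, can be sketched with a Barker-13 code and a parabolic fit around the correlation maximum. The samples per chip, delay, and noise level below are illustrative, and the authors' algorithm may differ:

```python
import numpy as np

BARKER13 = np.array([1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1], float)

def subsample_peak(c):
    """Parabolic interpolation around the correlation maximum,
    returning a fractional-sample peak location."""
    k = int(np.argmax(c))
    y0, y1, y2 = c[k - 1], c[k], c[k + 1]
    return k + 0.5 * (y0 - y2) / (y0 - 2 * y1 + y2)

sps = 8                                   # samples per chip (assumed)
chip = np.repeat(BARKER13, sps)
t = np.arange(400)
true_delay = 100.37                       # deliberately fractional

# Place the code at a fractional delay via linear interpolation, add noise
sig = np.interp(t - true_delay, np.arange(chip.size), chip, left=0, right=0)
sig = sig + 0.05 * np.random.default_rng(3).standard_normal(sig.size)

c = np.correlate(sig, chip, mode="full")
delay_est = subsample_peak(c) - (chip.size - 1)

assert abs(delay_est - true_delay) < 0.2  # well below one-sample resolution
```

The Barker code's low autocorrelation sidelobes are what keep the parabolic fit anchored to the true main lobe even at low SNR.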
9:20

5aUWa5. Estimation of clock drift in underwater acoustic instruments using propagation channel analysis. Ilya A. Udovydchenkov, Ballard J. Blair (The MITRE Corporation, 202 Burlington Rd., Bedford, MA 01730, ilya@whoi.edu), Ralph A. Stephen (Geology and Geophys., Woods Hole Oceanographic Inst., Woods Hole, MA), Peter F. Worcester, and Matthew Dzieciuch (Scripps Inst. of Oceanogr., La Jolla, CA)

All underwater acoustic sensors require accurate on-board clocks for subsequent data analysis and interpretation. Unfortunately, most clocks suffer from a phenomenon called "clock drift" (loss of accuracy), which occurs due to environmental changes, aging, and other factors. Typically, clock drift is accounted for by calibrating the clock in the instrument before and after the deployment, and applying a correction during data post-processing. This method, however, does not allow accurate estimation of clock errors during a particular experimental event. In this presentation, a small subset of data collected on the last day of the Ocean Bottom Seismometer Augmentation in the North Pacific (OBSANP) experiment in June-July 2013 is analyzed. It is shown that advanced signal processing techniques can be used to accurately reconstruct the motion of the ship-suspended acoustic source, which, in turn, can improve the accuracy of the acoustic receivers deployed on the seafloor in the deep ocean. [Work supported by the MITRE Innovation Program and ONR.]

9:40

5aUWa6. Covert underwater acoustic communication based on spread spectrum orthogonal frequency division multiplexing (OFDM). Shaofan Yang, Zhongyuan Guo, Shengming Guo, Ning Jia, Dong Xiao, and Jianchun Huang (Key Lab. of Underwater Acoust. Environment, Inst. of Acoust., Chinese Acad. of Sci., No. 21 North Fourth Ring Rd., Beijing 100190, China, 1172966054@qq.com)

A covert underwater acoustic communication (UAC) method is realized with a dolphin whistle as the information carrier. The proposed method jointly utilizes spread spectrum and orthogonal frequency division multiplexing (OFDM) techniques. The original dolphin whistle is represented as the complex baseband OFDM transmitted signal, and the phases of its discrete Fourier transform (DFT) coefficients are modulated by a spread spectrum code to carry information. Audio quality and waveform similarity are used as two covertness criteria. In simulations, the influences of the code length and modulation parameters are investigated. It is verified that communication performance and the camouflage effect are conflicting objectives; therefore, the best compromise should be chosen according to the actual demands.

10:00–10:20 Break

10:20

5aUWa7. High frequency underwater acoustic communication channel characteristics in the Gulf of Mexico. Aijun Song (Elec. and Comput. Eng., Univ. of Alabama, 114 Robinson Hall, Newark, DE 19716, ajsong@udel.edu)

In the applications of underwater acoustic communications, higher carrier frequencies support wider bandwidths, which is preferable for achieving higher data rates. At the same time, due to stronger attenuation, higher frequencies lead to shorter communication ranges. During past decades, a large number of efforts were devoted to investigating acoustic communications in the 8-50 kHz band, over medium ranges of several kilometers and beyond. Several efforts focused on very high frequencies, greater than 200 kHz, for short communication ranges of tens or hundreds of meters. Here we consider a frequency band that falls between the well-studied high frequencies of 8-50 kHz, for example from the Kauai Island experiment series, and the "unknown" very high frequencies (200 kHz or higher). A high frequency acoustic experiment was conducted in the northern Gulf of Mexico in August 2016 to examine the operating ranges, data rates, and performance of acoustic communication systems at carrier frequencies of 85 and 160 kHz. The received signal-to-noise ratios, channel coherence, and impulse responses between multiple transducers and a five-element hydrophone array will be reported. Communication performance will be reported as well.

10:40

5aUWa8. Directional received medium access control protocol for underwater sensor networks. Maochun Zheng (Harbin Eng. Univ., Harbin 150001, China, zmc2015@hrbeu.edu.cn)

To address the hidden-terminal and exposed-terminal problems that affect handshake protocols in single-channel underwater sensor networks, this paper presents a directional-reception slotted floor acquisition multiple access protocol based on single vector sensor orientation. The protocol exploits the directional reception capability of a single vector sensor, solving the hidden- and exposed-terminal problems under the single-channel condition through a directional handshake mechanism. To mitigate the near-far effect, each node maintains a neighbor information table for its one-hop range and uses it to set the transmit power of RTS and CTS packets so that the signal-to-noise ratio requirement at the destination node is satisfied. Compared with a single-channel protocol based on an omnidirectional antenna, this protocol makes fuller use of idle channel resources and increases the degree of network spatial reuse. Simulation experiments under two different traffic scenarios show that, compared with the classical Slotted FAMA and MACAW protocols, network throughput improves by 40-60% and 30-40%, respectively, which demonstrates that the directional protocol can improve network throughput effectively.
11:00
5aUWa9. Vertical multiuser communication using adaptive time reversal. Takuya Shimura, Yukihiro Kida, and Mitsuyasu Deguchi (Marine Tech. Development Dept., JAMSTEC, 2-15 Natsushima-cho, Yokosuka, Kanagawa 237-0061, Japan, shimurat@jamstec.go.jp)

Time reversal is an attractive solution for channel equalization in a rich multipath environment such as the underwater acoustic communication (UAC) channel because of its spatial and temporal focusing effects. Recently, demands for multiuser communication have increased in the field of UAC. At the Japan Agency for Marine-Earth Science and Technology (JAMSTEC), operation of multiple autonomous underwater vehicles (AUVs) is planned to widen the observation area. Adaptive time reversal is a method for space division multiplexing (SDM) based on its spatial focusing and nulling effects, and a promising solution for multiuser communication. In previous studies, it has been demonstrated that adaptive time reversal is very effective for "horizontal" multiuser communication, in which signals from multiple sources are received on a vertical receiver array. In this study, adaptive time reversal is applied to "vertical" multiuser communication, between a support vessel and multiple vehicles below the vessel. At-sea experiments were carried out in which signals from two sources near the seabed were measured at receivers on the bottom of the research vessel. The results show that adaptive time reversal is also effective for such vertical multiuser communication and has better performance than orthogonal frequency-division multiplexing (OFDM) with SDM combiners.
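The focusing that makes time reversal attractive is easy to see in a toy passive (non-adaptive) sketch: each receiver is matched-filtered with its own channel response and the outputs are summed, so the combined response approximates the sum of the channel autocorrelations, a sharpened pulse with residual sidelobes. The channels and real-valued symbols below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(5)
s = rng.choice([-1.0, 1.0], 400)            # BPSK symbol stream

# Distinct multipath channels to three array elements (assumed values)
channels = [np.array([1.0, 0.0, 0.6, -0.3]),
            np.array([0.8, 0.5, 0.0, 0.2]),
            np.array([1.0, -0.4, 0.3, 0.0])]

received = [np.convolve(s, h) for h in channels]

# Passive time reversal: matched-filter each element with its own channel
# (here the true one) and sum across the array.
focused = sum(np.convolve(y, h[::-1]) for y, h in zip(received, channels))

# The combined response sums the channel autocorrelations, so the main peak
# (at lag len(h) - 1) dominates the residual intersymbol interference.
d = len(channels[0]) - 1
est = np.sign(focused[d:d + s.size])
assert np.array_equal(est, s)               # every symbol recovered
```

The sidelobes of the summed autocorrelation are exactly the residual ISI that motivates coupling time reversal with an equalizer, and adaptive time reversal additionally steers nulls toward the other users.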
11:20
5aUWa10. Source localization in shallow waters with horizontal and vertical arrays. Yuqing Jia (Key Lab. of Underwater Acoust. Environment, Inst. of Acoust., Chinese Acad. of Sci., No. 21 North Fourth Ring Rd., Beijing 100190, China, 469120473@qq.com), Shengming Guo, and Lin Su (Key Lab. of Underwater Acoust. Environment, Inst. of Acoust., Chinese Acad. of Sci., Beijing, China)

To improve localization of underwater targets in shallow seas, a multi-array localization method is proposed in this paper. The method employs matched field processing (MFP) combined with vertical line arrays (VLA) and a horizontal line array (HLA) to sample the sound field at large scale and obtain three-dimensional information about the target. Compared with a single array, the multi-array design takes full account of the shallow-sea waveguide; it can overcome the port/starboard ambiguity of twin-line arrays and achieve effective estimation of range and azimuth. Localization is realized by MFP, and results from simulations and data processing are more reliable in complex shallow sea environments than with single-array MFP.
11:40
5aUWa11. Specifics of DEMON acoustic signatures for large and small boats. Alexander S. Pollara (Maritime Security Ctr., Stevens Inst. of Technol., 940 Bloomfield St., Apt. 3, Hoboken, NJ 07030, apollara@stevens.edu), Gregoire Lignan (École Navale, Paris, France), Louis Boulange (École Navale, Lanvéoc, France), Alexander Sutin, and Hady Salloum (Maritime Security Ctr., Stevens Inst. of Technol., Hoboken, NJ)
Marine vessel propellers produce noise by the formation and shedding of cavitation bubbles. This process creates both narrow-band tones and broad-band amplitude-modulated noise. Detection of Envelope Modulation on Noise (DEMON) is an algorithm for determining the frequencies that modulate this noise. Results of DEMON processing depend on the selection of a ship-noise frequency band to analyze, and it is well known that the best passband may vary dramatically between vessels. Despite this, there has been no systematic investigation of how the DEMON spectra depend on the carrier noise frequencies, and the modulation indices of vessel noise have not been investigated. We use a modification of the Cyclic Modulation Spectrum (CMS) to determine the modulation index of cavitation noise across the entire spectrum of carrier frequencies. We investigated how speed and vessel size affect the modulation index and carrier frequency of vessel noise. Several phenomena in the distribution of modulation indices for large and small boats were observed; these can be used for vessel classification. For small boats, the DEMON spectra have a different set of frequency peaks at various carrier frequencies. This is explained by the engine exhaust, which produces amplitude-modulated noise much like a propeller.
12:00
5aUWa12. Iterative source-range estimation in a sloping-bottom shallow-water waveguide using the generalized array invariant. Chomgun Cho, Hee-Chun Song (Scripps Inst. of Oceanogr., Univ. of California, San Diego, 9500 Gilman Dr., La Jolla, CA 92093-0238, hcsong@mpl.ucsd.edu), Paul Hursky (HLS Res. Inc., San Diego, CA), and Sergio Jesus (Univ. of Algarve, Faro, Portugal)
The array invariant, proposed for robust source-range estimation in shallow water, is based on the dispersion characteristics of ideal waveguides, utilizing multiple broadband arrivals separated in beam angle and travel time. Recently, the array invariant was extended to general waveguides by incorporating the waveguide invariant β, referred to as a generalized array invariant. In range-dependent environments with a sloping bottom, the waveguide invariant β is approximately proportional to the source range via the water depth. Assuming knowledge of the bottom slope, the array invariant can be applied iteratively starting with β = 1 in shallow water, which converges toward the correct source range in a zigzag fashion. The iterative array invariant approach is demonstrated in a sloping-bottom shallow-water waveguide using a short-aperture (2.8 m) vertical array from the Random Array of Drifting Acoustic Receivers 2007 (RADAR07) experiment, in which a high-frequency source (2–3.5 kHz) close to the surface (6 m) was towed between 0.5 and 5 km in range, with the corresponding water depth being 80 and 50 m, respectively.
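The DEMON processing chain described in abstract 5aUWa11 (band-pass the cavitation noise, detect its envelope, then take the spectrum of that envelope) can be sketched in a few lines. This is a minimal illustration, not the authors' implementation; the band edges, sample rate, square-law detector, and function name are all assumptions made for the example.

```python
import numpy as np

def demon_spectrum(x, fs, band=(2000.0, 8000.0)):
    """Illustrative DEMON sketch: band-pass the broadband cavitation
    noise, square it to detect the envelope, and take the spectrum of
    that envelope; peaks appear at the modulating (e.g., shaft- or
    blade-rate) frequencies. Parameters are illustrative assumptions."""
    # Band-pass by zeroing FFT bins outside the analysis band (a real
    # implementation would use a proper FIR/IIR filter).
    X = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), 1.0 / fs)
    X[(freqs < band[0]) | (freqs > band[1])] = 0.0
    carrier = np.fft.irfft(X, n=len(x))

    # Square-law envelope detector; remove the mean so the DC term does
    # not swamp the modulation lines.
    env = carrier ** 2
    env -= env.mean()

    # Spectrum of the envelope: the DEMON spectrum.
    spec = np.abs(np.fft.rfft(env * np.hanning(len(env))))
    mod_freqs = np.fft.rfftfreq(len(env), 1.0 / fs)
    return mod_freqs, spec

# Synthetic check: white noise amplitude-modulated at 12 Hz (a stand-in
# for blade-rate modulation) yields a DEMON peak near 12 Hz.
fs = 48000
t = np.arange(fs) / fs                       # 1 s of data -> 1 Hz resolution
noise = np.random.default_rng(0).standard_normal(fs)
x = (1.0 + 0.8 * np.cos(2 * np.pi * 12.0 * t)) * noise
f, spec = demon_spectrum(x, fs)
sel = (f >= 2.0) & (f <= 100.0)
peak = f[sel][np.argmax(spec[sel])]
```

The synthetic check at the bottom mimics the abstract's point that the modulation lines, not the carrier band itself, carry the classification information.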
THURSDAY MORNING, 29 JUNE 2017
ROOM 309, 8:40 A.M. TO 12:20 P.M.
Session 5aUWb
Underwater Acoustics, Acoustical Oceanography, and ASA Committee on Standards: Underwater Noise
From Marine Construction and Energy Production III
James H Miller, Cochair
Ocean Engineering, University of Rhode Island, 215 South Ferry Road, Narragansett Bay Campus URI, Narragansett,
RI 02882
Paul A. Lepper, Cochair
EESE, Loughborough University, Loughborough LE11 3TU, United Kingdom
Invited Papers
8:40
5aUWb1. The underwater sound field from impact pile driving: Observations and modeling of key features in the time domain.
Peter H. Dahl and David R. Dall’Osto (Appl. Phys. Lab. and Mech. Eng., Univ. of Washington, 1013 NE 40th St., Seattle, WA 98105,
dahl@apl.washington.edu)
Observations of the pressure time series made from a vertical line array at a range of order 10 waveguide depths from the pile source are presented along with interpretive modeling. Modeling is based on conceptualizing underwater sound from impact pile driving as originating from a vertical distribution of harmonic point sources uniformly distributed along the wetted length of the pile, each source phased prior to summation in the frequency domain. This leads to credible representations of pressure time series as a function of range and depth that are linearly related to field measurements. In particular, precursor arrivals, which arrive prior to any waterborne arrival as a result of having propagated through a higher-speed sediment, and deterministic manifestations of the Mach cone effect within the main pressure time series, are shown in comparison with modeling results. Both phenomena are influenced by the increasing sediment excursion of the pile, and the importance of the sound speed gradient and sediment attenuation to the precursor will be shown. Finally, the synthetic results are used to generate acoustic streamlines that trace the pathway of (short-time-averaged) active acoustic intensity associated with the precursor. Results from this work suggest the precursor amplitude sets the performance bound of any sound mitigation strategy.
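The phased point-source picture in the abstract above can be illustrated with a toy frequency-domain sum. This is a sketch under simplifying assumptions (free field, uniform source amplitudes, a single frequency, invented parameter values), not the authors' model; it only shows how phasing the sources by the impact's travel time down the pile produces a Mach cone of coherent addition.

```python
import numpy as np

def pile_field(r, z, f=1000.0, cw=1500.0, cp=5000.0, L=20.0, n=200):
    """Toy frequency-domain field of a vertical line of phased point
    sources: the impact travels down the pile at speed cp, so the source
    at depth zn radiates with phase delay zn/cp (free-field sketch)."""
    zn = np.linspace(0.0, L, n)            # source depths along the wetted length
    Rn = np.sqrt(r**2 + (z - zn)**2)       # slant range from each source to receiver
    w = 2.0 * np.pi * f
    # Spherical spreading with pile (zn/cp) and water (Rn/cw) delays
    return np.sum(np.exp(1j * w * (zn / cp + Rn / cw)) / Rn)

# Sources add in phase along the Mach cone (cos(theta) = cw/cp from the
# downward vertical), so the field there dwarfs the horizontal direction.
cw, cp = 1500.0, 5000.0
cos_t = cw / cp
z_mach = 10.0 + 1000.0 * cos_t / np.sqrt(1.0 - cos_t**2)  # receiver on the cone
p_mach = abs(pile_field(1000.0, z_mach))
p_horiz = abs(pile_field(1000.0, 10.0))                   # broadside receiver
```

Because cp exceeds cw, the phased sum is stationary only along the cone direction, which is the Mach cone effect the abstract refers to.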
J. Acoust. Soc. Am., Vol. 141, No. 5, Pt. 2, May 2017
Acoustics ’17 Boston
9:00
5aUWb2. Experimental validation of models for prediction of marine pile driving sound. Roel A. Müller, Marten Nijhof, Bas Binnerts (TNO, Oude Waalsdorperweg 63, Den Haag 2597 AK, Netherlands, roel.muller@tno.nl), Christ A. de Jong, Michael A. Ainslie
(TNO, The Hague, Netherlands), and Erwin Jansen (TNO, Den Haag, Netherlands)
Various models for the underwater noise radiation due to marine pile driving are being developed worldwide, to predict the sound
exposure of marine life during pile driving activities. However, experimental validation of these models is scarce, especially for larger
distances. Recently, TNO has been provided with data from underwater noise measurements up to 65 km from the piling location, gathered during the construction of two wind farms in the Dutch North Sea. These measurement data have been compared with different
modeling approaches, in which the sound source is formulated either as an equivalent point source or as an axially symmetric finite element model of the pile, including the surrounding water and sediment. Propagation over larger distances, with varying bathymetry, is
modeled efficiently by either an incoherent adiabatic normal mode sum or a flux integral approach. Differences between simulation and
measurement data are discussed in terms of sound exposure level and spectral content, which leads to more insight into the mechanisms
of sound radiation and propagation that are relevant during marine piling activities. An overview is given of the merits, shortcomings,
and possibilities for improvement of the models.
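The "incoherent adiabatic normal mode sum" mentioned in the abstract above can be illustrated, in its simplest range-independent form, for an ideal isovelocity waveguide (pressure-release surface, rigid bottom). All parameter values are invented for the example, and the constant in front of the sum is arbitrary; a real piling-noise model would use a measured environment and adiabatic mode coupling over varying bathymetry.

```python
import numpy as np

def incoherent_mode_tl(r, f=100.0, c=1500.0, depth=30.0, zs=15.0, zr=15.0):
    """Incoherent normal-mode transmission loss (dB, arbitrary reference)
    for an ideal isovelocity waveguide: mode powers add, interference
    cross terms are dropped, leaving a smooth 10*log10(r) range decay."""
    k = 2.0 * np.pi * f / c
    m = np.arange(1, 200)
    kz = (m - 0.5) * np.pi / depth                    # pressure-release top, rigid bottom
    krm2 = k**2 - kz**2
    kz, krm = kz[krm2 > 0], np.sqrt(krm2[krm2 > 0])   # keep propagating modes only
    psi_s = np.sqrt(2.0 / depth) * np.sin(kz * zs)    # mode shapes at source depth
    psi_r = np.sqrt(2.0 / depth) * np.sin(kz * zr)    # mode shapes at receiver depth
    p2 = (1.0 / r) * np.sum((psi_s * psi_r) ** 2 / krm)
    return -10.0 * np.log10(p2)

# Cylindrical spreading: the incoherent sum loses exactly 10 dB per decade
tl_1km = incoherent_mode_tl(1.0e3)
tl_10km = incoherent_mode_tl(1.0e4)
```

Dropping the cross terms is what makes this estimate smooth and cheap at the 65-km ranges discussed in the abstract, at the cost of losing the modal interference structure.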
Contributed Papers
9:20
5aUWb3. Hydroacoustic measurements and modeling of pile driving operations in Ketchikan, Alaska. Graham A. Warner (JASCO Appl. Sci., 2305-4464 Markham St., Victoria, BC V8Z 7X8, Canada, graham.warner@jasco.com), Melanie Austin (JASCO Appl. Sci., Anchorage, AK), and Alexander MacGillivray (JASCO Appl. Sci., Victoria, BC, Canada)
Underwater acoustic measurements of pile driving operations were made at the Ketchikan ferry terminal in July 2016. At the time of the measurements, marine mammal injury and disturbance criteria developed by the National Marine Fisheries Service (NMFS) were based on sound pressure level (SPL) thresholds. Shortly after the measurements, NMFS changed the injury thresholds to dual criteria involving peak pressure and sound exposure levels specific to marine mammal functional hearing groups. This paper presents distances to both injury criteria and the (unchanged) SPL-based disturbance criteria for vibratory driving and impact hammering of 30-inch-diameter cylindrical piles. Threshold distances were obtained using empirical regressions of sound levels measured by seabed-mounted recorders at 10 and 1000 m nominal range. A finite-difference pile driving source model was used with a parabolic equation propagation model to compare measurements with simulations and to estimate received levels at all ensonified locations in the complex bathymetric environment of the Tongass Narrows. Measured and modeled results show the importance of hydrophone placement with respect to the Mach cone and near-pile bathymetry.
9:40
5aUWb4. On the airborne contribution to the underwater sound field from marine pile installation. David R. Dall’Osto (Appl. Phys. Lab., Univ. of Washington, 1013 N 40th St., Seattle, WA 98105, dallosto@apl.washington.edu) and Peter H. Dahl (Appl. Phys. Lab. and Dept. of Mech. Eng., Univ. of Washington, Seattle, WA)
Airborne sound generated by the impact hammer used for marine pile installation contributes to the total underwater sound field. This contribution is distinct from the sound generated underwater, e.g., by the up- and down-going Mach waves generated by the pile underwater. Airborne sound transmission into the water occurs within a 26 deg. cone directly below the pile hammer. As the pile is driven deeper, the distance between the hammer and the water surface decreases, producing a time-dependent signature in the observed underwater sound pressure that correlates directly to the pile depth monitored during installation. Additionally, the area subtended by the cone of transmission decreases as the pile is driven deeper, which reduces the duration of the airborne contribution; it is longest at the beginning of the pile installation. In this presentation, a model for the airborne contribution is examined alongside data measured at ranges of approximately 2 and 10 times the water depth. In addition to basic interpretation of the time signature from impact pile driving, the observed transmission of high-intensity airborne sound into the water during pile installation has implications for the contributions of airborne noise associated with wind turbines.
Invited Papers
10:00
5aUWb5. Depth dependence of pile driving noise measured at the research platform FINO3. Frank Gerdes (WTD 71, Berliner
Straße 115, Eckernförde 24340, Germany, frankgerdes@bundeswehr.org)
Impact pile driving is a source of high-amplitude underwater sound that can propagate large distances and may have a negative
impact on marine fauna. In Germany, wind farm operators are required to monitor underwater sound levels and to ensure that the levels
do not exceed certain values. Mainly for practical reasons, sound measurements are usually performed with hydrophones about 2
to 3 m above the sea-floor. It is of some interest to know whether these sound values are representative for the entire water column. We
investigated this with an underwater measurement system that was designed to provide simultaneous sound measurements at up to eight
different heights above the sea-floor. It consisted of a bottom-mounted tripod to which a vertical chain of five to six hydrophones was
attached with the top-most hydrophone being located about four meters below the sea-surface. The system was cable-connected to the
research platform FINO3. This paper analyses the depth dependence of pile driving noise that was emitted from piling operations in the
nearby offshore wind farm DanTysk, with propagation distances between pile and FINO3 ranging from 3 km to 15 km. Observed
sound exposure levels (SEL) were usually somewhat larger at 2 m above the sea-floor than at hydrophones higher in the water column.
Details about the variability and frequency dependence of the observed differences are presented.
10:20–10:40 Break
10:40
5aUWb6. Overview of underwater acoustic and seismic measurements of the construction and operation of the Block Island
Wind Farm. James H Miller, Gopu R. Potty (Ocean Eng., Univ. of Rhode Island, 215 South Ferry Rd., Narragansett Bay Campus URI,
Narragansett, RI 02882, miller@uri.edu), Ying-Tsong Lin, Arthur Newhall (Appl. Ocean Phys. and Eng., Woods Hole Oceanographic
Inst., Woods Hole, MA), Kathleen J. Vigness-Raposa (Marine Acoust., Inc., Middletown, RI), Jennifer Giard (Marine Acoust., Inc., Narragansett, RI), and Tim Mason (Subacoustech, Ltd., Southampton, United Kingdom)
The Block Island Wind Farm (BIWF), the first offshore wind farm in the United States, consists of five 6-MW turbines 3 miles
southeast of Block Island, Rhode Island in water depths of approximately 30 m. Construction began in the summer of 2015 and power
production began in late 2016. Underwater acoustic and geophysical measurement systems were deployed to acquire real-time observations of the construction and initial operation of a wind facility to aid the evaluation of environmental effects of future facilities. The
substructure for these BIWF turbines consists of jacket type construction with piles driven to the bottom to pin the structure to the
seabed. The equipment used to monitor construction and initial operation consisted of a towed array of eight hydrophones,
two fixed moorings with four hydrophones each and a fixed sensor package for measuring particle velocity. This sensor package consists
of a three-axis geophone on the seabed and a tetrahedral array of four low-sensitivity hydrophones at 1 m from the bottom. Additionally,
an acoustic vector sensor was deployed in mid-water. Data collected on these sensor systems during construction and initial operation
will be summarized. [Work supported by Bureau of Ocean Energy Management (BOEM).]
11:00
5aUWb7. Measurements of particle motion near the seafloor during construction and operation of the Block Island Wind Farm.
Gopu R. Potty, Makio Tazawa, Jennifer Giard, James H Miller (Dept. of Ocean Eng., Univ. of Rhode Island, 115 Middleton Bldg., Narragansett, RI 02882, potty@egr.uri.edu), Ying-Tsong Lin, Arthur Newhall (Appl. Ocean Phys. and Eng., Woods Hole Oceanographic
Inst., Woods Hole, MA), and Kathleen J. Vigness-Raposa (2 Corporate Pl., Ste. 105, Marine Acoust., Inc., Middletown, RI)
Noise radiation and particle motion from pile driving activities were monitored using multiple sensors during the construction of the
first offshore wind farm off Block Island, RI, USA in 2016. The Block Island Wind Farm (BIWF) consists of five turbines in water
depths of approximately 30 m. The substructure for these turbines consists of jacket type construction with piles driven to the bottom to
pin the structure to the seabed. Pile driving operations generate intense sound, impulsive in nature, which radiates into the surrounding
air, water, and sediment, producing particle motion that may affect marine animals. The particle velocity sensor package consists of a three-axis geophone on the seabed and a tetrahedral array of four low-sensitivity hydrophones at 1 m from the bottom. The acoustic pressure acquired by the hydrophones will be processed to calculate particle motion. Data from the BIWF site will be compared with model
predictions and published data from other locations. Recent measurements from the same wind farm location during the operational
phase also will be presented. [Work supported by Bureau of Ocean Energy Management (BOEM).]
11:20
5aUWb8. A preliminary numerical model of three-dimensional underwater sound propagation in the Block Island Wind Farm
area. Ying-Tsong Lin, Arthur Newhall (Appl. Ocean Phys. and Eng., Woods Hole Oceanographic Inst., Bigelow 213, MS#11, WHOI,
Woods Hole, MA 02543, ytlin@whoi.edu), Gopu R. Potty, and James H Miller (Dept. of Ocean Eng., Univ. of Rhode Island, Narragansett, RI)
The Block Island Wind Farm, consisting of five 6-MW turbines, is the first U.S. commercial offshore wind farm harvesting wind
energy to generate electricity, located 3.8 miles southeast of Block Island, Rhode Island. In-situ underwater and airborne noise measurements were made during the construction and the first two months of the operational period for the purpose of environmental impact
assessment. To better interpret the noise measurements and extend the noise propagation prediction beyond the coverage of listening stations, a three-dimensional underwater sound propagation model is created with a high-resolution bathymetric map and a data-assimilated
ocean dynamic model. The bathymetric map is made using the 3 arc-second U.S. Coastal Relief Model (CRM) with a 100-m horizontal
resolution provided by the National Centers for Environmental Information (NCEI). The ocean model is extracted from the Regional
Ocean Modeling System (ROMS) ESPreSSO (Experimental System for Predicting Shelf and Slope Optics) model covering the Mid-Atlantic Bight with a 5 km horizontal resolution and 36 terrain-following vertical levels. Temporal and spatial variability of noise propagation conditions is identified in the integrated acoustic and oceanographic model. Future model development incorporating surface wind
waves and sub-bottom sediment layer structure will be discussed. [Work supported by BOEM.]
11:40
5aUWb9. Variations in the acoustic field recorded during pile-driving construction of the Block Island Wind Farm. Kathleen J.
Vigness-Raposa (Marine Acoust., Inc., 2 Corporate Pl., Ste. 105, Middletown, RI 02842, kathleen.vigness@marineacoustics.com), Jennifer Giard (Marine Acoust., Inc., Narragansett, RI), Adam S. Frankel (Marine Acoust., Inc., Arlington, VA), James H Miller, Gopu R.
Potty (Dept. of Ocean Eng., Univ. of Rhode Island, Narragansett, RI), Ying-Tsong Lin, Arthur Newhall (Appl. Ocean Phys. and Eng.,
Woods Hole Oceanographic Inst., Woods Hole, MA), and Tim Mason (Subacoustech, Ltd., Southampton, United Kingdom)
The Block Island Wind Farm, the first offshore wind farm in the United States, consists of five 6-MW turbines three miles southeast
of Block Island, Rhode Island in water depths of approximately 30 m. The turbines include a jacket-type substructure with four piles
driven at an angle of approximately 13 deg to the vertical to pin the structure to the seabed. The acoustic field was measured during pile
driving of two turbines in September 2015 with an 8-element towed horizontal line array. Measurements began at a range of 1 km from
the turbine on which piling was occurring and extended to a range of 8 km from the construction. The peak-to-peak received level, sound
exposure level, and kurtosis from each pile strike were determined as a function of range from the pile. The ambient noise just prior to
each signal was also measured to calculate signal-to-noise ratio values. Results provide insight into the transition from fast-rise-time impulsive signals at close range to slow-rise-time non-impulsive signals at longer ranges. In addition, the variability among signals at the
same range is being characterized as a function of pile and hammer strike characteristics. [Work supported by Bureau of Ocean Energy
Management (BOEM).]
Contributed Paper
12:00
5aUWb10. Hydroacoustic measurements during construction of the first US offshore windfarm—Methodologies to address regulatory requirements. Erik J. Kalapinski and Kristjan Varnik (Energy Programs, Tetra Tech, Inc., 160 Federal St., Fl. 3, Boston, MA 02210, erik.kalapinski@tetratech.com)
The regulations governing underwater noise from offshore wind farm development in the United States have not been as explicit as in other countries. The Block Island Wind Farm represents the first case study. In this context, it is important to disseminate information about the relevant noise sources, address evolving guideline criteria, and develop noise measurement and analysis procedures to address regulatory reporting requirements. Tetra Tech led the hydroacoustic monitoring program, which occurred in two distinct stages. The first involved short-term monitoring of the installation of the initial wind turbine generator foundation using both mobile real-time and static monitoring techniques used for daily reporting. Long-term monitoring of the remaining four foundations with static recorders documented the inherent variability in the data set. Received sound levels measured at pre-determined distances were used to assess site-specific propagation characteristics and to verify ranges to the relevant sound exposure thresholds. This involved the evaluation of multiple metrics, including the apparent sound source level of pile-driving activities and the confirmation of the Exclusion and Monitoring Zone established to ensure the protection of marine life. All of the monitoring objectives were met, including the field verification of modeling results established during the environmental permitting process.
THURSDAY AFTERNOON, 29 JUNE 2017
ROOM 207, 1:15 P.M. TO 4:40 P.M.
Session 5pAAa
Architectural Acoustics: Architectural Acoustics and Audio: Even Better Than the Real Thing III
K. Anthony Hoover, Cochair
McKay Conant Hoover, 5655 Lindero Canyon Road, Suite 325, Westlake Village, CA 91362
Alexander U. Case, Cochair
Sound Recording Technology, University of Massachusetts Lowell, 35 Wilder St., Suite 3, Lowell, MA 01854
Wolfgang Ahnert, Cochair
Ahnert Feistel Media Group, Arkonastr. 45-49, Berlin D-13189, Germany
Chair’s Introduction—1:15
Contributed Paper
1:20
5pAAa1. Multidimensional scaling approach in room acoustic evaluation. Pawel Malecki and Jerzy Wiciak (Dept. of Mech. and VibroAcoust., AGH - Univ. of Sci. and Technol., al. Mickiewicza 30, Kraków 30-059, Poland, pawel.malecki@agh.edu.pl)
In natural-acoustics recordings, it is essential to decide on the room and, consequently, on the reverb. The decision is very important, as its consequences cannot be changed in the mixing process. Therefore, proper selection of this parameter is often essential to achieve the desired effect. The aim of the study is to determine the reverberation time and other room acoustic parameters that are optimal, or preferred by an average listener, for choral music. As part of the study, sound samples of choral music that differed only in reverb were made. The samples were tested using multidimensional scaling procedures. Ambisonics, together with VBAP, was used in the listening test.
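The multidimensional scaling procedure mentioned in abstract 5pAAa1 can be illustrated with classical (Torgerson) MDS, which recovers coordinates from a matrix of pairwise dissimilarities such as listener ratings. The function and toy data are invented for the example and are unrelated to the authors' listening-test data.

```python
import numpy as np

def classical_mds(D, k=2):
    """Classical (Torgerson) MDS: embed n items in k dimensions from an
    n-by-n matrix of pairwise dissimilarities D."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n       # centering matrix
    B = -0.5 * J @ (D ** 2) @ J               # double-centered Gram matrix
    w, V = np.linalg.eigh(B)                  # eigenvalues in ascending order
    idx = np.argsort(w)[::-1][:k]             # take the k largest
    L = np.sqrt(np.clip(w[idx], 0.0, None))
    return V[:, idx] * L                      # coordinates, one row per item

# Toy check: pairwise distances of 4 points on a line are recovered
# exactly (up to sign/rotation) by a 1-D embedding.
pts = np.array([0.0, 1.0, 3.0, 6.0])
D = np.abs(pts[:, None] - pts[None, :])
X = classical_mds(D, k=1)
recovered = np.abs(X[:, 0][:, None] - X[:, 0][None, :])
```

In a listening test like the one described, D would hold perceived dissimilarities between reverb conditions, and the low-dimensional embedding reveals which acoustic parameters dominate the listeners' judgments.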
Invited Papers
1:40
5pAAa2. Enhancing home recording spaces through architectural acoustics design. Sebastian Otero (Architectural Acoust., Acustic-O, Laurel 14, San Pedro Martir, Tlalpan, Mexico, D.F. 14650, Mexico, sebastian@acustic-o.com)
The demand for home recording spaces has increased in the past years. They vary in terms of privacy, comfort, size, audio quality,
budget, type of materials, acoustic treatments, types of usage and equipment. Although it is hard to unify the concept of “home studio,”
there are certain architectural acoustics criteria that should be considered in order to guarantee the use of the space for creative and
technical purposes. This paper analyzes different cases to demonstrate how home recording is enhanced by applying these architectural acoustics principles.
2:00
5pAAa3. When the difference between “good” and “great” is 0.5 dB—Ponderings of a “Tuning Conductor.” Christopher Blair
(Akustiks, LLC, 93 North Main St., Norwalk, CT 06854, cblair@akustiks.com)
For many years the author has been privileged to work closely with music directors, their orchestras, and soloists in optimizing the
acoustic environment in numerous rooms employing acoustic enhancement. During the process he has learned that even very small
changes in energy levels and timing of virtual reflections can make a profound difference in perceived acoustic quality, largely due to
masking effects, both in the audience and on the podium. The quick adjustments that can be made utilizing such systems not only make A/B comparisons useful in assessing relative quality, but also serve as an educational tool informing the designer as to which specific attributes in an impulse response are helpful. This presentation contains a number of illustrative vignettes from his experience as both an acoustician and conductor, including the notion that sometimes the most effective changes to acoustical perception in a concert hall can come
from changing how the orchestra plays.
Contributed Paper
2:20
5pAAa4. An integrated passive and active room acoustics + sound reinforcement system design solution for a large worship space. David Kahn (Acoust. Distinctions, 400 Main St., Ste. 600, Stamford, CT 06901, dkahn@ad-ny.com)
The United Methodist Church of the Resurrection, with more than 16,000 adult members and an average weekly worship attendance of more than 8,600, recently completed a new 3,500-seat worship space. The new sanctuary building has an ellipsoidal plan shape and a very tall ceiling to address an important programmatic goal to achieve “A Sense of Majesty”. Acoustical goals included warm speech throughout all seating areas, with acoustics that inspire a high level of energy and thereby encourage a high level of participation. Furthermore, the church wants to support both traditional (orchestra and choir) and contemporary (praise band with vocalists) styles of music in this new worship space. These programming goals, to some extent, are in conflict with one another. In particular, the architectural goal of having a soaring space is in conflict with the goals for a space that provides acoustical support of corporate worship. Since the walls and ceiling are too far away from most of the seating area to provide beneficial sound reflections to support corporate worship and traditional music, an electronic enhancement system was provided to electronically create the early sound reflections that a lower ceiling would provide.
Invited Papers
2:40
5pAAa5. So you wanna be a rock ’n’ roll star (there’s an app for that). Sam Ortallono (Visual Performing Arts, Lee College, 711
W. Texas Ave., Baytown, TX 77522, sortallono@lee.edu)
With the rise of mobile recording technology, more recordings are made in unconventional, acoustically uncontrolled environments. We designed an experiment to compare some of these spaces. At Lee College, we recorded the same song three times. The same musicians were recorded in three different spaces using different technology: in a large, controlled studio using ProTools, in a home studio setting using a laptop, and in a living space using a mobile telephone application. Once the songs were mixed and mastered, college students were asked to rate the quality of the mixes.
3:00–3:20 Break
3:20
5pAAa6. Nuance is dead (or What You Will). Scott D. Pfeiffer (Threshold Acoust. LLC, 53 West Jackson Blvd., Ste. 815, Chicago,
IL 60604, spfeiffer@thresholdacoustics.com)
Live performance is about connection. Traditional amplification at its least effective introduces microphones and loudspeakers between the artist with a message and the audience intending to show appreciation for the effort. The authenticity of performance in the right setting, where microphones and loudspeakers are unnecessary, can be deeply moving, and provides for an unambiguous exchange between performer and audience when the quality of the environment is up to the challenge. The “Unplugged” movement in popular music of the early 90s attempted to bridge this gap, though clearly none of the acts was truly “Unplugged.” For all of acoustic design history, architectural or electronic technology has been used to enhance the natural acoustic connection between performer and audience. Thanks to these efforts, we can replicate aspects of natural acoustics of a human scale—and the connection created by the shared environment—to make an under-performing acoustic environment better, a great acoustic environment more flexible, or simply create an interior acoustic where none exists. Applications and limitations of the available technologies for fictionalizing acoustic traits are explored to help recognize when the technology, architectural or electronic, becomes a distraction from the purpose.
3:40
5pAAa7. Sound systems in reverberant spaces: Approaches in practice. David S. Woolworth (Roland, Woolworth, & Assoc., LLC,
356 CR 102, Oxford, MS 38655, dave@oxfordacoustics.com)
This paper presents three separate case studies of the pairing of sound systems and reverberant spaces, including existing and retrofitted spaces. The design or troubleshooting utilizes various approaches to meet the challenges of reverberant spaces in regard to acoustics,
programming of the space, and the use of loudspeakers and processing.
4:00
5pAAa8. Using the extended techniques on the trombone to demonstrate many of the basic principles of music acoustics. Thomas
J. Plsek (Brass/Liberal Arts, Berklee College of Music, MS 1140 Brass, 1140 Boylston St., Boston, MA 02215, tplsek@berklee.edu)
The author has realized that the trombone can be used as a low-tech device to demonstrate many of the principles of acoustics. This is especially true when one considers all the extended techniques that have been developed for the instrument. A sampling could include (but is not limited to): multiphonics creating beats; playing lip buzzes through the various parts of the instrument, giving audible information about the signal-processing path; mouthpiece slaps to illustrate resonance; inhaling while playing, to put an interesting take on brass pedagogy; showing how the water key can indicate the presence of nodes and antinodes; and how the instrument provides very little feedback in the extreme high register. Live demonstrations will be presented.
4:20
5pAAa9. Even weirder than the real thing—Gated reverb history and aesthetics. Alexander U. Case (Sound Recording Technol.,
Univ. of Massachusetts Lowell, 35 Wilder St., Ste. 3, Lowell, MA 01854, alex@fermata.biz)
Sound presented via loudspeaker may take advantage of signal processing to create sounds not possible in an all-acoustic production
chain. Breaking free of the acoustic constraints for music-making in the concert hall to take advantage of an analog, digital and electroacoustic production chain has been a major attraction for many popular recording artists. New aesthetics evolved. Among the most
absurd sonic concoctions to come from this, gated reverb is part discovery and part invention, motivated by misunderstandings and
driven by plain old rock and roll rebellion. This paper tours the development of gated reverb, with audio illustrations, and makes the
case for its continued use today.
THURSDAY AFTERNOON, 29 JUNE 2017
ROOM 208, 1:20 P.M. TO 4:20 P.M.
Session 5pAAb
Architectural Acoustics: Simulation and Evaluation of Acoustic Environments IV
Michael Vorländer, Cochair
ITA, RWTH Aachen University, Kopernikusstr. 5, Aachen 52056, Germany
Stefan Weinzierl, Cochair
Audio Communication Group, TU Berlin, Strelitzer Str. 19, Berlin 10115, Germany
Ning Xiang, Cochair
School of Architecture, Rensselaer Polytechnic Institute, Greene Building, 110 8th Street, Troy, NY 12180
Invited Paper
1:20
5pAAb1. First international round robin on auralization: Results of the perceptual evaluation. Fabian Brinkmann, David Ackermann (Audio Commun. Group, Tech. Univ. Berlin, Einsteinufer 17c, Berlin 10787, Germany, fabian.brinkmann@tu-berlin.de), Lukas
Aspöck (Inst. of Tech. Acoust., RWTH Aachen Univ., Aachen, Germany), and Stefan Weinzierl (Audio Commun. Group, Tech. Univ.
Berlin, Berlin, Germany)
The round robin on auralization aimed at a systematic evaluation of room acoustic modeling software by comparing simulated and measured impulse responses. While a physical evaluation by means of room acoustical parameters and spectro-temporal comparisons is addressed in an accompanying talk, here we focus on an evaluation of perceptual differences arising in complex room acoustical scenarios. In these cases, a mere physical evaluation might not be able to predict the perceptual impact of the manifold interaction of different sound propagation phenomena in enclosed spaces, such as reflection, scattering, diffraction, or modal behavior. To this end, dynamic auralizations of binaural room impulse responses that were simulated with different room acoustical modeling software packages were evaluated against their measured counterparts. Listening tests were conducted using “plausibility” and “authenticity” as overall quality criteria, and the Spatial Audio Quality Inventory (SAQI) for a differential diagnosis of the remaining differences.
Contributed Paper
1:40
5pAAb2. Evaluation of a loudspeaker-based virtual acoustic environment for investigating sound-field auditory steady-state responses. Valentina Zapata-Rodriguez (InterAcoust. Res. Unit, InterAcoust. A/S, c/o: DTU, Ørsteds Plads B352, R027, Kgs. Lyngby 2800, Denmark, valr@iru.interacoustics.com), Gerd Marbjerg (Acoust. Technol., Tech. Univ. of Denmark, Kongens Lyngby, Denmark), Jonas Brunskog, Cheol Ho Jeong (Acoust. Technol., Tech. Univ. of Denmark, Kgs. Lyngby, Denmark), Søren Laugesen, and James M. Harte (InterAcoust. Res. Unit, InterAcoust. A/S, Kgs. Lyngby, Denmark)
Measuring sound-field auditory steady-state responses (ASSR) is a promising new objective clinical procedure for hearing aid fitting validation, particularly for infants who cannot respond to behavioral tests. In practice, the room acoustics of non-anechoic test rooms can heavily influence the auditory stimulus used for eliciting the ASSR. To systematically investigate the effect of room acoustic conditions on sound-field ASSR, a loudspeaker-based auralization system was implemented using a mixed-order Ambisonics approach. The present study investigates the performance of the auralization system in terms of objective room acoustic measurements and sound-field ASSR measurements, both in the actual room and in the simulated and auralized room. The evaluation is conducted for a small room with well-defined acoustic properties. The room is carefully modeled using the novel room acoustic simulation tool PARISM (Phased Acoustical Radiosity and Image Source Method) and validated through measurements. This study discusses the limitations of the system and the potential improvements needed for a more realistic sound-field ASSR simulation.
Invited Papers
2:00
5pAAb3. Auditory Illusion over Headphones Revisited. Karlheinz Brandenburg (Fraunhofer IDMT, Ehrenbergstr. 31, Ilmenau
98693, Germany, bdg@idmt.fraunhofer.de), Florian Klein, Annika Neidhardt, and Stephan Werner (Technische Universität Ilmenau,
Ilmenau, Germany)
Plausibility and immersion are two of the keywords that describe quality features of virtual and augmented reality systems. There is a plethora of research results in this area, but current headphone-based systems still do not enable an auditory illusion for everybody and all types of signals. To address the open questions, a series of studies has been conducted to examine the quality of spatial audio reproduction using binaural synthesis. This contribution gives a revisited insight into the creation of a perfect auditory illusion via headphones. First, a summary of the technical parameters needed to realize correct auditory cues is given, including requirements such as headphone equalization and interpolation methods. Second, we point out that beyond reproducing the physically correct sound pressure at the ear drums, further effects play a significant role in the quality of the auditory illusion (and can be dominant in some cases, overcoming physical deviations). Perceptual effects such as the room divergence effect, additional visual influences, personalization, pose and position tracking, as well as adaptation processes are discussed. The individual effects are described and the interconnections between them are highlighted.
2:20
5pAAb4. Interactive reproduction of virtual acoustic environments for the evaluation of hearing devices—Methods and validation. Giso Grimm and Volker Hohmann (Medizinische Physik and Cluster of Excellence Hearing4all, Universität Oldenburg, Germany,
Carl-von-Ossietzky Universitaet, Oldenburg 26111, Germany, volker.hohmann@uni-oldenburg.de)
Virtual acoustic environments are increasingly used for evaluating hearing devices in complex acoustic conditions. In this talk we
propose an interactive simulation method via multi-channel loudspeaker systems or headphones. The method focuses on the time-domain simulation of the direct path and a geometric image source model, which simulates air absorption and, in the case of motion, the Doppler effect of all primary and image sources. To establish the feasibility of this approach, the interaction between reproduction method
and technical and perceptual hearing aid performance measures was investigated using computer simulations. Three spatial audio reproduction methods were compared in regular circular loudspeaker arrays with 4 to 72 channels. The influence of reproduction method and
array size on performance measures of multi-microphone hearing aid algorithms was analyzed. In addition to the analysis of reproduction methods, algorithm performance was tested in a number of different virtual acoustic environments in order to assess the underlying
factors of decreased hearing aid performance in complex environments. The results confirm previous findings that spatial complexity
has a major impact on hearing aid benefit, and demonstrate the potential of virtual acoustic environments for hearing aid evaluation.
[Funded by DFG FOR1732 “Individualized hearing acoustics.”]
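The geometric image source model referred to above mirrors the source across each room boundary and sums delayed, attenuated path contributions. A minimal first-order sketch for a shoebox room (our illustration only; air absorption and the Doppler effect of moving sources, which the method above includes, are omitted here):

```python
import numpy as np

C = 343.0  # speed of sound in air, m/s

def first_order_images(src, room):
    """First-order image sources of `src` in a shoebox room [Lx, Ly, Lz].

    Each of the six walls (planes at 0 and L on each axis) mirrors the source.
    """
    images = []
    for axis in range(3):
        for wall in (0.0, room[axis]):
            img = np.array(src, dtype=float)
            img[axis] = 2.0 * wall - img[axis]   # reflect across the wall plane
            images.append(img)
    return images

def delays_and_gains(src, rcv, room):
    """Propagation delay (s) and 1/r spherical-spreading gain for each path."""
    paths = [np.array(src, dtype=float)] + first_order_images(src, room)
    out = []
    for p in paths:
        r = np.linalg.norm(p - np.array(rcv, dtype=float))
        out.append((r / C, 1.0 / max(r, 1e-6)))
    return out

room = [6.0, 4.0, 3.0]
paths = delays_and_gains(src=[2.0, 1.0, 1.5], rcv=[4.0, 3.0, 1.5], room=room)
print(len(paths))   # direct path + 6 first-order reflections = 7
```

Higher reflection orders are obtained by mirroring the images recursively; time-varying source or receiver positions turn the per-path delays into the Doppler shifts the abstract mentions.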
2:40
5pAAb5. Perceptually motivated sound field synthesis for music presentation. Tim Ziemer (Inst. of Systematic Musicology, Univ. of Hamburg, Neue Rabenstr. 13, Hamburg 20354, Germany, tim.ziemer@uni-hamburg.de)
The conceptualization and implementation of a psychoacoustic sound field synthesis system for music is presented. Critical bands, the precedence effect, and integration times of the auditory system, as well as the radiation characteristics of musical instruments, are implemented in the signal processing. Interaural coherence, masking, and auditory scene analysis principles are considered as well. The sound field synthesis system creates a natural, spatial sound impression and precise source localization for listeners in an extended listening area, even with a low number of loudspeakers. Simulations and a listening test provide a proof of concept. The method is particularly robust for signals with impulsive attacks and quasi-stationary phases, as is the case for many instrumental sounds. It is compatible with many loudspeaker setups, such as 5.1, Ambisonics systems, and loudspeaker arrays for wave front synthesis. The psychoacoustic sound field synthesis approach is an alternative to physically centered wave field synthesis concepts and conventional stereophonic sound and can benefit from both paradigms. Additional psychoacoustic quantities that have the potential to be implemented in the presented and other audio systems are discussed.
3:00–3:20 Break
Contributed Paper
3:20
5pAAb6. On the numerical simulation of natural acoustic sound sources. David Ackermann, Christoph Böhm, and Stefan Weinzierl (Audio Commun. Group, TU Berlin, EN-8, Einsteinufer 17c, Berlin 10587, Germany, david.ackermann@tu-berlin.de)
A convincing auralization of acoustical scenes requires not only proper modeling of the sound propagation between source and receiver, but also an appropriate representation of the acoustical source itself. While the properties of electro-acoustic sources can be well represented by advanced loudspeaker formats with high resolution, the complex, time-variant behavior of natural acoustic sources such as speakers, singers, or musical instruments is not in any way considered by current techniques for acoustical simulation and auralization. In the talk, we will present measurement results on the sound power and directivity of natural acoustic sound sources and their dependence on pitch and dynamic level, as well as measurements of typical movements of the source during musical performances. We will demonstrate the physical and perceptual relevance of these effects both in the direct field and in room acoustical environments, based on a technical and perceptual evaluation, and discuss new approaches to include these effects in numerical simulations.
Invited Papers
3:40
5pAAb7. A loudspeaker orchestra for opera house studies. Dario D'Orazio, Luca Barbaresi, and Massimo Garai (DIN, Univ. of Bologna, Viale Risorgimento, 2, Bologna 40128, Italy, dario.dorazio@unibo.it)
A "Loudspeaker Orchestra" is an array of loudspeakers with a well-defined setup and layout. Initially proposed by contemporary composers for innovative performances (e.g., the "Acousmonium" used by the Groupe de Recherches Musicales in the 1970s), the Loudspeaker Orchestra has in recent years been used for MIMO acoustic measurements in concert halls. In the present work, a Loudspeaker Orchestra for measurements in opera houses is proposed, taking into account the acoustic differences between a concert hall and an opera house. In a concert hall, the orchestra plays on the stage, while in an opera house, the orchestra plays in the pit, with a different layout, a smaller instrumental ensemble, etc. In a concert hall, the soloists are placed near the conductor, while in an opera house they move all over the stage for dramatic reasons. Based on previous studies of room criteria and on experience with real orchestras, the authors propose a Loudspeaker Orchestra layout for opera houses. In this work, tests on various configurations are presented, comparing measurements and numerical simulations.
4:00
5pAAb8. Acoustical evaluation of the Teatro Colón of Buenos Aires. Gustavo J. Basso (Facultad de Bellas Artes, Universidad Nacional de La Plata, Argentina, CALLE 5 N8 84, LA PLATA, Buenos Aires 1900, Argentina, gustavobasso2004@yahoo.com.ar)
It has been said that the Teatro Colón of Buenos Aires is one of the best halls in the world for opera and symphonic music, a claim made mainly on account of its wonderful acoustics. During the restoration works carried out between 2006 and 2010, we conducted numerous studies in order to preserve the acoustical quality of the theater. Among them, we can mention acoustical measurements, architectural examination, statistical analysis, aural descriptions, and the development of a digital model of the space. None of the tools used to evaluate the acoustical quality of the hall, mainly based on the parameters of the ISO 3382 standard or on the acoustical evaluation systems proposed by Leo Beranek and Yoichi Ando, was enough to explain the true quality of the room, which was deduced from opinion polls about the sound perceived by the audience. In order to find the causes of its acoustical behavior, we developed an architectural and acoustical study of the hall with the aid of a digital model that can explain, if not all, some of the main characteristics of its particular acoustical field. A synthesis of the results of this evaluation is described in this paper.
THURSDAY AFTERNOON, 29 JUNE 2017
ROOM 206, 1:35 P.M. TO 5:00 P.M.
Session 5pAAc
Architectural Acoustics: Recent Developments and Advances in Archeo-Acoustics and Historical
Soundscapes IV
David Lubman, Cochair
DL Acoustics, 14301 Middletown Ln., Westminster, CA 92683-4514
Miriam A. Kolar, Cochair
Architectural Studies; Music, Amherst College, School for Advanced Research, 660 Garcia St., Santa Fe, NM 87505
Elena Bo, Cochair
DAD, Polytechnic Univ. of Turin, Bologna 40128, Italy
Chair’s Introduction—1:35
Invited Papers
1:40
5pAAc1. Rock art and prehistoric soundscapes: Some results from Italy, France, and Spain. Margarita Dı́az-Andreu (Història i
Arqueologia, ICREA, Universitat de Barcelona, Història i Arqueologia, Facultat de G. i Història, Carrer de Montalegre 6, Barcelona
08001, Spain, m.diaz-andreu@ub.edu) and Tommaso Mattioli (Història i Arqueologia, Universitat de Barcelona, Barcelona, Spain)
Within the framework of the SONART project (Sounds of Rock Art: Archaeoacoustics and post-palaeolithic Schematic art in the Western Mediterranean), a series of acoustic tests has been undertaken in seven rock art areas of Italy, France, and Spain. The early chronology of this art (Neolithic and Chalcolithic) means that no information survives about the reasons prehistoric peoples had to produce the art or the beliefs surrounding its creation. Formal methods therefore need to be used to check whether sound is related to this cultural manifestation. This also affects the analysis of sound because, in contrast to other areas of the world such as the circumpolar area, no legends or myths can be found to explain the link between art and special reverberation or echoes. Related to the latter, the results of
our experiments to assess the direction of arrival (DOA) of echoes will be explained for the rock art landscapes of Valle d’Idivoro (Italy)
and Baume Brune (France). The location of rock art in sites where the audibility of the landscape is exceptionally optimal will also be
analyzed for the case of the Arroyo de San Serván landscape area in Extremadura (Spain).
2:00
5pAAc2. Were palaeolithic cave paintings placed because of acoustic resonances? Bruno M. Fazenda (Acoust. Res. Ctr., Univ. of
Salford, The Crescent, Salford, Manchester M5 4WT, United Kingdom, b.m.fazenda@salford.ac.uk)
Previous archaeoacoustics work published from the 1980s to the 2000s has suggested that the location of palaeolithic paintings in French caves, such as Le Portel, Niaux, Isturitz, and Arcy-sur-Cure, is associated with the acoustic response of those locations, particularly with strong low-frequency resonances. Recent work done in caves in the Asturian and Cantabrian regions of Northern Spain has shown some evidence of a statistical association between paintings dated to the Aurignacian/Gravettian period (ca. 42,000-25,000 BP) and the existence of acoustic responses which exhibit resonant artifacts. The work presented in this paper reports on a further analysis of the data that explores the association in more detail. A number of metrics focused specifically on the low-frequency response are used as factors to form statistical models that explain the position of paintings within the caves studied. The results of this study further our understanding of how perception of acoustic response might have played a part in modulating the expressive behavior of our ancestors.
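Low-frequency resonance metrics of the kind described can be extracted from a measured response by peak-picking its magnitude spectrum. A minimal sketch (synthetic two-mode response; the function name, frequency limit, and prominence threshold are illustrative, not the study's metrics):

```python
import numpy as np

def resonance_peaks(ir, fs, fmax=300.0, prominence_db=6.0):
    """Candidate low-frequency resonances: local maxima of the magnitude
    spectrum below fmax that stand at least prominence_db above the
    median spectrum level in that range."""
    spec = 20.0 * np.log10(np.abs(np.fft.rfft(ir)) + 1e-12)
    f = np.fft.rfftfreq(ir.size, 1.0 / fs)
    mask = f <= fmax
    spec, f = spec[mask], f[mask]
    floor = np.median(spec)
    return [float(f[i]) for i in range(1, spec.size - 1)
            if spec[i] > spec[i - 1] and spec[i] > spec[i + 1]
            and spec[i] - floor > prominence_db]

# Synthetic "cave" response: two decaying modes at 48 Hz and 110 Hz
fs = 2000
t = np.arange(2 * fs) / fs
ir = np.exp(-2 * t) * (np.sin(2 * np.pi * 48 * t) + np.sin(2 * np.pi * 110 * t))
print(resonance_peaks(ir, fs))   # the two modal frequencies appear as peaks
```

Metrics such as peak count, peak frequency, or peak-to-floor ratio below a frequency limit could then serve as factors in statistical models like those the abstract describes.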
2:20
5pAAc3. Did Paleolithic cave artists intentionally paint at resonant cave locations? David Lubman (DL Acoust., 14301 Middletown Ln., Westminster, CA 92683-4514, dlubman@dlacoustics.com)
In 1988, two investigators (Iegor Reznikoff and Michel Dauvois) reported a connection between the local density of cave paintings and local sonic "resonance" in three French Paleolithic painted caves. Archaeologist Chris Scarre summarized their findings in a brief article that drew much attention (Painting by Resonance, Nature 338 [1989]: 382). Scarre wrote that the "Reznikoff-Dauvois theory is consistent with the likely importance of music and singing in the rituals of our early ancestors." Reznikoff and Dauvois believed cave artists intentionally chose painting locations for their sonic resonance. They further conjectured that it was the artists' admiration for resonant sound that inspired their choice. This writer suggests the associations found were merely correlative, and not necessarily causal. (Crowing roosters do not cause the sun to rise.) How then can the correlation be explained? This writer hypothesizes that initially, "painterly" needs rather than sonic preferences may have influenced choice of painting location (large expanses of non-porous rock). Since non-porous rock is highly sound reflective, the best cave locations for long-lasting cave paintings are "resonant." Paintings on porous (non-resonant) rock would not persist. This is a testable hypothesis. Moreover, combining impressive art and sound could inspire singing and dance. Such sites might plausibly become ritual spaces.
2:40
5pAAc4. The acoustics of the Cumaean Sibyl. Gino Iannace (Dept. of Architecture and Industrial Design, Università della Campania, Borgo San Lorenzo, Aversa 83016, Italy, gino.iannace@unina2.it) and Umberto Berardi (Architectural Sci. Dept., Ryerson Univ., Toronto, ON, Canada)
The Cumaean Sibyl cave is a site north of Naples, Italy. The Sibyl was a priestess presiding over the Apollonian oracle; she received travelers, to whom she predicted their future. The cave is about 140 m long, with a trapezoidal section excavated in tuff rock; it is about 4.5 m high and about 2.4 m wide. There is a little room in the final part of the cave where, according to the legend, the Sibyl received the travelers. Acoustic measurements were made with an omnidirectional sound source in the little room and microphones along the cave. Architectural acoustics software was used to better understand the sound propagation in the cave. From the acoustic measurements and numerical results, it emerges that the voice of an orator positioned in the room inside the Sibyl cave is intelligible at every point of the tunnel. This paper shows that the legend that this site was an oracle has a true acoustical basis.
3:00–3:20 Break
3:20
5pAAc5. Acoustic measurements at the sacred sites in Finland. Riitta Rainio, Kai Lassfolk (Musicology, Univ. of Helsinki, Unioninkatu 38 C 213, FI-00014, Finland, riitta.rainio@helsinki.fi), Antti Lahelma (Archaeology, Univ. of Helsinki, Helsinki, Finland), and
Tiina Äikäs (Archaeology, Univ. of Oulu, Oulu, Finland)
In Finland, near the canyon lakes of Julma-Ölkky, Somerjärvi, and Taatsijärvi, steep rock cliffs produce distinctive acoustic spaces. On these cliffs, prehistoric rock paintings (5200-500 BC) as well as an ancient Sami offering site (ca. AD 1100) can be found. Ethnographic sources describe how the Sami used to sing and listen to echoes while making offerings there. This paper presents the results of an archaeoacoustic project that seeks to explore the role of sound in the development and use of these archaeological sites. The applied methods include multichannel impulse response recording, angle-of-arrival estimation of early reflections, spectrum analysis, digital image processing, and 3D laser scanning. On the basis of the analyses, we have concluded that the cliffs that have been painted or regarded as sacred are efficient sound reflectors. They create discrete echoes and, accordingly, phantom sound sources. Especially at the Värikallio cliff, the sound appears to emanate directly from the painted figures. These results, together with previously unnoticed drumming figures in the Värikallio painting, provide a clue to the significance of sound rituals at these sacred sites.
3:40
5pAAc6. A theoretical framework for archaeoacoustics and case studies. Steven J. Waller (Rock Art Acoust., 5415 Lake Murray
Blvd. #8, La Mesa, CA 91942, wallersj@yahoo.com)
Application of black box theory is proposed as a theoretical framework for archaeoacoustic studies. First, input/output analysis can
be applied to archaeological sites, in which the physical characteristics of a site serve as a black box to physically transform initial sonic
inputs directly into various acoustic output phenomena that can be quantitatively measured, such as reflected repeats, resonance, etc. In
turn, acoustic output from the first (archaeological) black box serves as input for a second black box: the human mind. Examples are presented to support the argument that the psychoacoustics of sound perception is/was a crucial aspect of archaeoacoustics. Cognitive
response is heavily influenced by culture, expectations, and subjective values attributed to various sounds. Outputs of the second (cognitive) black box that can be analyzed include tangible archaeological manifestations such as rock art, megaliths, etc., as well as intangible
responses such as orally recorded myths, traditions, rituals, and beliefs. Analysis of each of the input/output components from these two
coupled black boxes can reveal important interrelationships between initial sonic inputs, sound transforming characteristics of archaeological sites, and the cognitive responses of ancient artists and architects to those transformed sounds. Case studies are presented to illustrate application of this theoretical approach.
4:00
5pAAc7. Archaeoacoustic guidelines. Preparation, execution, and documentation. David N. Thomas (Archaeology, Univ. of Highlands and Islands, 1/l,1 Sibbald St., Dundee, Tayside dd37ja, United Kingdom, sibbald1@blueyonder.co.uk)
Preparation: a discussion of archaeological theory and archaeoacoustics in a post-processual era. The author considers the goals and objectives of archaeoacoustic research in the light of current trends; the nature of the human perception of sound is discussed in the light of the philosophical discussions of "intentionality." The importance of a preliminary model displaying accurate reproduction of the resonant space, and of the use of light as an indicator of sound wave propagation, is emphasized (using examples from the previously published Mousa broch and Minehowe papers). Execution: notes on practical considerations in the field, including equipment, sound makers, recording audio and image photos, tools, and markers; microphone and source recording locations; and wind noise and soundscapes. Notes on theoretical considerations in the field: sound wavelength and the word "frequency," the myth of infrasound, and discrete echoes. Documentation: notes on types of documentation, including the photo/video record, model record, audio record, and sound analysis; conclusions, explanations, and possibilities; the implications of applying additional audio recording to illustrate convolution reverbs; listing and sharing failed approaches; and sharing and popularization of the discipline of archaeoacoustics.
4:20–5:00 Panel Discussion
THURSDAY AFTERNOON, 29 JUNE 2017
ROOM 313, 1:20 P.M. TO 5:40 P.M.
Session 5pAB
Animal Bioacoustics: Ecosystem Acoustics II
Susan Parks, Cochair
Biology, Syracuse University, Biology, 107 College Place, RM 114, Syracuse, NY 13244
Jennifer L. Miksis-Olds, Cochair
Center for Coastal and Ocean Mapping, Univ. of New Hampshire, 24 Colovos Rd., Durham, NH 03824
Denise Risch, Cochair
Ecology, Scottish Association for Marine Science (SAMS), SAMS, Oban PA371QA, United Kingdom
Contributed Papers
5pAB1. Long-term monitoring of cetacean bioacoustics using cabled
observatories in deep-sea off East Sicily. Francesco Caruso (IAMC,
National Res. Council, Via del Mare 3, Torretta Granitola, Trapani 91021,
Italy, fcaruso@unime.it), Virginia Sciacca (Univ. of Messina, Catania,
Italy), Giuseppe Alonge (Observations and Analyses of Earth and Climate,
ENEA, Palermo, Italy), Giorgio Bellia (Univ. of Catania, Catania, Italy),
Giuseppa Buscaino (IAMC, National Res. Council, Capo Granitola (TP),
Italy), Emilio De Domenico (Univ. of Messina, Messina, Italy), Rosario
Grammauta (IAMC, National Res. Council, Capo Granitola (TP), Italy),
Giuseppina Larosa (INFN, Catania, Italy), Salvatore Mazzola (IAMC,
National Res. Council, Capo Granitola (TP), Italy), Gianni Pavan (CIBRA,
Univ. of Pavia, Pavia, Italy), Elena Papale (IAMC, National Res. Council,
Capo Granitola (TP), Italy), Carmelo Pellegrino (INFN, Bologna, Italy),
Sara Pulvirenti (INFN, Catania, Italy), Francesco Simeone (INFN, Roma,
Italy), Fabrizio Speziale (INFN, Catania, Italy), Salvatore Viola (INFN, Catania, Italy), and Giorgio Riccobene (INFN, Catania, Italy)
The EMSO Research Infrastructure operates multidisciplinary seafloor-cabled observatories in a deep-sea area offshore Eastern Sicily (2100 m depth). In a data-poor zone, Passive Acoustic Monitoring activities revealed new information on cetacean bioacoustics over multiple ecological scales. Expert operators investigated the presence of cetacean vocalizations within the large acoustic datasets acquired. Then, algorithms were developed to provide information on the behavior and ecology of the recorded species. In 2005-2006, the acoustic activity of toothed whales was investigated through the OvDE antenna (100 Hz to 48 kHz). The size distribution of sperm whales could be assessed acoustically, and tracking of the animals showed the direction of movement and the diving profile. The biosonar activity of dolphins was mostly confined to nighttime, linked to the seasonal variation in daylight time and the prey-field variability known for these deep-pelagic waters. Furthermore, in 2012-2013, we monitored the annual acoustic presence of fin whales thanks to the NEMO-SN1 station (1 Hz to 1 kHz). The results showed that the species was present
throughout all seasons, with peaks in call detection rate during spring and
summer months, and that the fin whale calls were mostly detected in low
background noise conditions.
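Acoustic size estimation of sperm whales commonly rests on the inter-pulse interval (IPI) of their multipulse clicks. A sketch of the idea (synthetic click; the function names are ours, and the regression constants follow Gordon's 1991 published equation, which is not necessarily the one used by these authors):

```python
import numpy as np

def ipi_ms(click, fs):
    """Inter-pulse interval of a sperm whale click via autocorrelation.

    Reflections inside the whale's head give the click a multipulse structure,
    which appears as a secondary peak in the click's autocorrelation.
    """
    ac = np.correlate(click, click, mode="full")[click.size - 1:]
    ac[:int(0.002 * fs)] = 0.0                 # ignore lags < 2 ms (main lobe)
    return 1000.0 * np.argmax(ac) / fs

def gordon_length_m(ipi):
    """Total length (m) from IPI in ms, per the Gordon (1991) regression."""
    return 4.833 + 1.453 * ipi - 0.001 * ipi ** 2

# Synthetic click: a pulse plus a weaker internal reflection 4 ms later
fs = 48000
t = np.arange(int(0.02 * fs))
pulse = np.exp(-0.5 * ((t - 50) / 10.0) ** 2)
click = pulse + 0.4 * np.roll(pulse, int(0.004 * fs))
print(round(ipi_ms(click, fs), 1))             # recovers the 4 ms interval
```

On real recordings the IPI would be averaged over many clicks per animal, which is one way the size distribution mentioned above can be assessed.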
1:40
5pAB2. Polar coastal soundscapes: Tridimensional mapping of benthic
biophony and ice geophony with a compact sensor array. Julie Lossent
(Res. Inst. Chorus, 46, Ave. Felix Viallet, Grenoble cedex 1 38031, France,
julie.lossent@chorusacoustics.com), Cedric Gervaise (Chair CHORUS,
Saint Egreve, France), Laurent Chauvaud (Université de Bretagne Occidentale, Institut Universitaire Européen de la Mer, LIA BeBEST, Laboratoire
des Sci. de l’Environnement Marin, Plouzané, France), Aurélie Jolivet
(TBM Environnement, Auray, France), Delphine Mathias (Société d’Observation Multi-Modale de l’Environnement, Plouzané, France), and Jérôme
Mars (Univ. Grenoble Alpes, CNRS, GIPSA-Lab, Grenoble, France)
Polar areas show fast changes linked to global warming. The reduction of the ice pack and the melting of the ice sheet modify the living conditions of marine fauna. We propose the simultaneous monitoring of ice and benthic fauna using passive acoustics. Using a compact array of 4 hydrophones (2 m × 2 m × 2 m), we detected, localized, and mapped in three dimensions ({azimuth, elevation} or {x, y, z}) the biophonic and geophonic contributions made up of short, wideband pulses. Tridimensional maps of benthic biophony and ice geophony from 7-day-long recording sessions in the Antarctic and Arctic (2015, 2016) are built and analyzed over a surface on the order of 1 km2. Benthic invertebrates emit highly energetic pulses with peak frequencies ranging from 2 to 55 kHz, most of them below 15 kHz. Geophony is structured into two parts. The ice sheet, located several kilometers or tens of kilometers away, creates a stable spatial distribution of low-energy pulses (representing the majority of pulses in the soundscape) modulated by temporal variability. The movements of isolated icebergs or pack ice produce localized acoustic events identifiable by the high sound levels and the stable peak frequencies of the emitted pulses.
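Localization with a compact array of this kind is commonly based on time differences of arrival (TDOA) under a far-field plane-wave assumption. A minimal sketch (the array geometry and sound speed below are assumed for illustration, not the CHORUS processing chain):

```python
import numpy as np

C = 1500.0  # nominal sound speed in seawater, m/s

def doa_from_tdoa(positions, tdoa):
    """Plane-wave direction of arrival from time differences of arrival.

    positions: (N, 3) hydrophone coordinates in meters; tdoa: delays (s)
    relative to hydrophone 0. Returns (azimuth, elevation) in degrees.
    """
    A = positions[1:] - positions[0]            # inter-hydrophone baselines
    b = -C * np.asarray(tdoa[1:])               # far-field plane-wave model
    s, *_ = np.linalg.lstsq(A, b, rcond=None)   # direction toward the source
    s /= np.linalg.norm(s)
    az = np.degrees(np.arctan2(s[1], s[0]))
    el = np.degrees(np.arcsin(np.clip(s[2], -1.0, 1.0)))
    return az, el

# Assumed 2 m array geometry and a known test direction
pos = np.array([[0, 0, 0], [2, 0, 0], [0, 2, 0], [0, 0, 2]], dtype=float)
true_s = np.array([1.0, 1.0, 0.0]) / np.sqrt(2)
tdoa = [-(p - pos[0]) @ true_s / C for p in pos]
az, el = doa_from_tdoa(pos, tdoa)
print(az, el)   # azimuth ≈ 45°, elevation ≈ 0°
```

In practice the TDOAs come from cross-correlating the pulse arrivals across hydrophone pairs; ranging beyond azimuth and elevation requires near-field curvature or additional arrays.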
2:00
5pAB3. Acoustic habitat utilized by ice-living seals: Hearing and masking in natural noise environments. Jillian Sills, Colleen Reichmuth (Inst.
of Marine Sci., Long Marine Lab., Univ. of California, Long Marine Lab.,
115 McAllister Way, Santa Cruz, CA 95060, jmsills@ucsc.edu), and Alex
Whiting (Kotzebue IRA, Native Village of Kotzebue, Kotzebue, AK)
Acoustic habitat is a fundamental but poorly understood resource for
marine mammals, including seals. To evaluate the soundscapes experienced
by seals in dynamic Arctic environments, two DSG-Ocean Acoustic Dataloggers were deployed in Kotzebue Sound, Alaska from September 2014
through September 2015, providing a full year of acoustic coverage for this
region of the Chukchi Sea. The recorders were placed in an area of seasonal
fast ice where spotted, ringed, and bearded seals are all found at various
times of year. The data describe the acoustic conditions typically experienced by these ecologically and culturally important seal species, including
variations in noise up to 48 kHz within and across scales of hours, days,
months, and seasons. The noise profiles provide an ecological framework
for laboratory studies of hearing with trained seals, allowing for improved
understanding of their sensory biology in the context of their acoustic habitat. The integration of these noise measurements with hearing and auditory
masking data enables a quantitative assessment of the effects of varying ambient noise conditions on the communication ranges of seals living in Arctic
waters. [Work supported by the Northwest Arctic Borough Science
Committee.]
2:20
5pAB4. Acoustic and biological trends on coral reefs off Maui, Hawaii.
Maxwell B. Kaplan (Biology, Woods Hole Oceanographic Inst., 266 Woods
Hole Rd., MS50, Woods Hole, MA 02543, mkaplan@whoi.edu), Marc
Lammers (Hawaii Inst. of Marine Biology, Kaneohe, HI), T Aran Mooney
(Biology, Woods Hole Oceanographic Inst., Woods Hole, MA), and Eden
Zang (Oceanwide Sci. Inst., Makawao, HI)
Coral reef soundscapes comprise a range of biological sounds. To investigate how the sounds produced on a given reef relate to the species present,
7 Hawaiian reefs that varied in their species assemblages were equipped
with acoustic recorders operating on a 10% duty cycle for 16 months, starting in September 2014. Benthic and fish visual surveys were conducted 4
times over the course of the study. Acoustic analyses were carried out in 2
frequency bands (50-1200 Hz and 1.8-20.5 kHz) that corresponded with the
spectral features of the major sound-producing taxa on these reefs, fish and
snapping shrimp, respectively. In the low-frequency band, the presence of
humpback whales (December-May) was the major driver of sound level,
whereas in the high-frequency band sound level closely tracked water temperature. On shorter timescales, the magnitude of the diel trend varied in
strength among reefs, which may reflect differences in the species assemblages present. Regression trees indicated that, at low frequencies, the relationship between species assemblages and acoustic parameters varied by
season; however, at high frequencies, a given reef was generally most like
itself over time. Thus, long-term acoustic recordings can capture and distill
the substantial acoustic variability present in coral reef ecosystems.
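The two-band level analysis described above (a low fish band and a high snapping-shrimp band) can be sketched as follows (synthetic signal and FFT-mask filtering are our illustration, not the authors' pipeline):

```python
import numpy as np

def band_level_db(x, fs, f_lo, f_hi):
    """RMS level (dB re 1) of x restricted to [f_lo, f_hi] via FFT masking."""
    X = np.fft.rfft(x)
    f = np.fft.rfftfreq(x.size, 1.0 / fs)
    X[(f < f_lo) | (f > f_hi)] = 0.0           # zero bins outside the band
    y = np.fft.irfft(X, n=x.size)
    rms = np.sqrt(np.mean(y ** 2))
    return 20.0 * np.log10(max(rms, 1e-12))

fs = 48000
t = np.arange(fs) / fs
# Synthetic "reef": a 300 Hz fish-like tone plus weak wideband noise
x = np.sin(2 * np.pi * 300 * t) + 0.05 * np.random.default_rng(1).standard_normal(fs)

low = band_level_db(x, fs, 50, 1200)       # fish band from the abstract
high = band_level_db(x, fs, 1800, 20500)   # snapping-shrimp band
print(low > high)                          # True: the tone dominates the low band
```

Computed per recording over a long duty-cycled deployment, such band levels are the kind of time series that can be related to surveys, whale presence, or water temperature as described above.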
2:40
5pAB5. Temporal soundscape dynamics in a Magellanic penguin colony
in Tierra del Fuego. Dante Francomano, Ben Gottesman, Taylor Broadhead, and Bryan C. Pijanowski (Dept. of Forestry and Natural Resources,
Purdue Univ., Ctr. for Global Soundscapes, B066 Mann Hall, 203 South
Martin Jischke Dr., West Lafayette, IN 47907, dfrancomano@gmail.com)
On Isla Martillo in Tierra del Fuego, we recorded continuously in a colony of Magellanic penguins (Spheniscus magellanicus) at the beginning of the 2016 molting season. Here we describe the daily soundscape dynamics within this colony using existing soundscape metrics, which were originally developed to facilitate acoustic-based ecological inferences from multi-source soundscapes. While these indices have proven successful, little research has explored the utility of soundscape metrics for characterizing ecological patterns and processes when a single soniferous species dominates the soundscape. Bioacoustics offers tools for such applications, but soundscape metrics may be favorable in situations where sounds of
chorusing animals temporally overlap or when sounds are non-stereotypical.
Some of the diel behavior patterns of this species have been previously
documented by studies focusing on foraging behavior, but these studies
relied on human observation and dive trackers mounted on individual birds.
Instead, we consider the potential utility of terrestrial acoustic recording to
monitor populations and behavior of this near-threatened species. We interpret our acoustic data in the context of known Magellanic penguin behavior
and the few non-penguin sounds in this habitat, and through this interpretation we evaluate how soundscape metrics can be used to assess nearly
monospecific assemblages.
3:00
5pAB6. Noise affects black-tufted marmoset (Callithrix penicillata;
GEOFFROY, 1812) phee call frequency. Sara Santos, Marina H. Duarte,
Isabela F. Cardoso (Pontifı́cia Universidade Católica de Minas Gerais, Belo
Horizonte, MG, Brazil), Renata S. Sousa-Lima (Physiol. and Behavior,
UFRN, Lab. of BioAcoust., Centro de Biociencias, Campus Universitario,
Caixa Postal 1511, Natal, Rio Grande do Norte 59078-970, Brazil, sousalima.renata@gmail.com), and Robert J. Young (Univ. of Salford Manchester, Salford, United Kingdom)
Anthropogenic noise is very different from natural sounds, and could
cause organisms living in noisy areas to modify their vocal communication.
We assessed the influence of noise on black tufted-ear marmoset acoustic
communication. Spontaneously produced phee vocalizations were recorded
in two areas: a noisy urban park, located in Belo Horizonte, Minas Gerais
state, Brazil and a quiet natural forest, located at Cauaia in Matozinhos, in
the same state. We also recorded bus brakes sounds (BBS) in the noisy park
because we noticed that the sounds produced by these vehicles were similar
to the phee vocalizations. Frequencies and duration of phees and the BBS
were measured and used to compare: (1) phees from the urban and natural areas, and (2) urban phee vocalizations and BBS. The duration of the phee
calls was longer in the urban area. The low, high and dominant frequencies
were significantly higher in the natural area. The low frequency extracted
from BBS was similar to those of the phee calls of marmosets in the urban
area. We suggest that the difference between the marmoset calls from urban and natural areas is influenced by noise, and that BBS compete with marmoset calls and may disturb the communication of these primates.
3:20–3:40 Break
3:40
5pAB7. Vocal behavior and ontogeny of Northern right whales in the
southeast critical habitat. Edmund R. Gerstein (Charles E. Schmidt College of Sci., Florida Atlantic Univ., 777 Glades Rd., Boca Raton, FL 33486,
gerstein2@aol.com), Vasilis Trygonis (Univ. of the Aegean, Lesvos,
Greece), and James B. Moir (Marine Resources Council, Stuart, FL)
North Atlantic right whales are one of the most endangered of the great
whales. A remnant population of ~500 inhabits the eastern seaboard of
North America. A small fraction (2%) travels south to their critical calving
habitat along the Florida and Georgia coast. By late November and through
the winter, right whales give birth and nurse their calves in these shallow
waters before departing in early spring to their northern habitats. In the
southeast critical habitat mother-calf pairs remain generally isolated from
other whales, presenting a unique platform to study vocal development and
learning in large whales. From small boats, GPS-instrumented, free-drifting autonomous acoustic buoys were deployed in close proximity to 44 photo-identified mother-calf pairs over 7 calving seasons. Surface video and
synchronized underwater recordings documented their social and vocal
behavior. With the exception of some low-energy gunshot sounds, mothers and their calves remained predominantly silent during the first 4 weeks. This might be due to calf maturation and/or a strategy to avoid harassment
by other whales or potential predators. Over 100 calls have been analyzed
from 15 different calves. Some of these calves were resampled at different
stages from <1 week up to 12 weeks of age. Evidence of individual and age-related variance and changes in call structure, complexity, power, rates, as
well as vocal mimicry are presented. [Funding: HBOI Florida PFW License
Plate Fund, The Harry Richter Foundation and IBM, NOAA Permit
#14233.]
Acoustics ’17 Boston
4:00
5pAB8. Fish sound production in freshwater habitats of New England:
Widespread occurrence of air movement sounds. Rodney A. Rountree
(23 Joshua Ln., Waquoit, MA 02536, rrountree@fishecology.org), Francis
Juanes (Biology, Univ. of Victoria, Victoria, BC, Canada), and Marta Bolgan (Univ. of Liège, Liége, Belgium)
We conducted a roving survey of five major river systems and adjacent creek, lake, and pond habitats located within the northeastern United States.
Fish sounds were recorded in 49% of 175 locations. Air movement sounds,
including fast repetitive tick (FRT), occurred at 41% of the locations. Sluggish creeks had the highest occurrence of fish sounds (71%). Although
highly variable, creeks and brooks had the lowest noise levels and rivers the
highest. Fish sounds were more frequent in low noise habitats than high
noise habitats, but the effect of masking on detection is not clear. Within
main-stem river habitats, fish sound diversity tended to increase along a gradient from high elevation to the sea. Follow-up studies validated air movement sounds produced by alewife, white sucker, and brook, brown, and rainbow trout through direct observation or through observations where only a single species was present. Sounds produced by all five species are of the “air movement” type, which is poorly understood but occurs widely in freshwater habitats. Although air movement sounds are likely incidental to physiological processes, they appear to be uniquely identifiable to species and, hence,
hold promise for passive acoustic studies of freshwater soundscapes and fish
behavior.
4:20
5pAB9. Characterizing soundscapes and larval fish settlement in tropical seagrass and mangrove habitats. Ian T. Jones, Justin Suca, Joel Llopiz,
and T. Aran Mooney (Biology, Woods Hole Oceanographic Inst., 266 Woods Hole Rd., Marine Res. Facility 225 (MS #50), Woods Hole, MA 02543, ijones@whoi.edu)
Recent evidence suggests soundscapes of coral reefs may provide acoustic cues that larval reef fish utilize during settlement. Seagrass and mangrove habitats are also important refuges for larvae and juveniles of many fishes; however, compared to reefs, less is known about the characteristics
of tropical seagrass and mangrove soundscapes and their potential as settlement cues. We deployed light traps to assess fish larvae settlement and passive acoustic recorders to study the “ecoacoustics” at mangrove, seagrass,
and coral reef sites around two bays in St. John, U.S. Virgin Islands. Light
traps were deployed nightly around the third quarter and new moon, and 24
h periods of acoustic recordings were taken during the same time. Fish larvae were counted and identified to the lowest possible taxonomic level. Focusing on biotic soundscape components, diel trends in metrics such as
sound pressure level, power spectral density, and snap counts of snapping
shrimp were assessed. Although what role mangrove and seagrass soundscapes may play in fish settlement remains unclear, these soundscape and
larval settlement data provide foundations for deeper investigations of the
relationship between acoustic and larval ecology in these essential nursery
habitats.
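The diel metrics named above (broadband sound pressure level, Welch power spectral density, and snap counts) can be sketched in a few lines. The implementation below is an illustrative outline, not the authors' analysis code, and all parameter values are hypothetical.

```python
import numpy as np
from scipy.signal import welch

def band_spl(pressure_upa, p_ref=1.0):
    """Broadband RMS sound pressure level in dB re 1 uPa, given a
    calibrated pressure time series in micropascals."""
    rms = np.sqrt(np.mean(np.asarray(pressure_upa) ** 2))
    return 20.0 * np.log10(rms / p_ref)

def power_spectral_density(pressure_upa, fs, nperseg=256):
    """Welch estimate of the power spectral density (uPa^2 per Hz)."""
    return welch(pressure_upa, fs=fs, nperseg=nperseg)

def count_snaps(pressure_upa, threshold, min_gap):
    """Count impulsive events (e.g., snapping shrimp snaps) as
    amplitude-threshold crossings separated by at least `min_gap` samples."""
    idx = np.flatnonzero(np.abs(pressure_upa) > threshold)
    if idx.size == 0:
        return 0
    return 1 + int(np.sum(np.diff(idx) > min_gap))
```

Binning such metrics by hour of day would then expose the diel trends the abstract describes.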
4:40
5pAB10. Toward an unsupervised and automated analysis framework
for large acoustic datasets to assess animal biodiversity and ecosystem
health. Yu Shiu, Ashakur Rahaman, Christopher W. Clark, and Holger
Klinck (BioAcoust. Res. Program, Cornell Univ., 159 Sapsucker Woods
Rd., Ithaca, NY 14850, atoultaro@gmail.com)
Passive acoustic monitoring is a promising and non-invasive method to assess the biodiversity and, potentially, the health of terrestrial and marine ecosystems. Over the last decade, various methods have been proposed to
extract information on the animal biodiversity primarily based on acoustics
indices. Several recent studies have shown that the ecological relevance and effectiveness of these indices remain uncertain. We propose a new, multi-step method to estimate animal biodiversity from acoustic datasets by applying an unsupervised detection and classification technique. Our semi-automated framework extracts every acoustic event above a pre-defined signal-to-noise ratio. In a second step, the detected events are grouped into classes based on the similarity of acoustic features. The number of resultant classes is linked to animal biodiversity in an area by applying a transfer function, which is established using manually/expert-reviewed class labels.
Our framework provides diel and seasonal changes in the overall number of
sound classes as well as number of acoustic events in each class. We will
demonstrate its performance by application to three datasets collected in the
Chukchi Sea, Alaska, Sapsucker Woods Sanctuary, Ithaca, NY, and Abel
Tasman National Park, New Zealand.
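The two-stage framework described above (SNR-gated event extraction followed by unsupervised grouping of acoustic features) can be outlined as follows. This is a schematic sketch, not the authors' implementation; a deterministic toy k-means stands in for whatever clustering technique the framework actually uses, and all thresholds are hypothetical.

```python
import numpy as np

def detect_events(x, snr_db=10.0, win=256):
    """Flag fixed-length windows whose energy exceeds the median
    (noise-floor) energy by `snr_db`; merge consecutive hot windows
    into (start, stop) sample indices."""
    n = len(x) // win
    energy = np.array([np.mean(x[i * win:(i + 1) * win] ** 2) for i in range(n)])
    hot = energy > np.median(energy) * 10.0 ** (snr_db / 10.0)
    events, start = [], None
    for i, h in enumerate(hot):
        if h and start is None:
            start = i * win
        elif not h and start is not None:
            events.append((start, i * win))
            start = None
    if start is not None:
        events.append((start, n * win))
    return events

def kmeans(features, k, iters=50):
    """Minimal k-means (deterministic spread initialization) to group
    per-event feature vectors into candidate sound classes."""
    centers = features[np.linspace(0, len(features) - 1, k).astype(int)]
    for _ in range(iters):
        dists = np.linalg.norm(features[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = features[labels == j].mean(axis=0)
    return labels, centers
```

Counting the occupied classes per day or season would then yield the diel and seasonal class tallies the abstract refers to.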
5:00
5pAB11. Using automated acoustic monitoring to detect elevational
migration in the avian community of the Gondwana Rainforests of Australia. Elliot Leach (Environ. Futures Res. Inst., Griffith Univ., 170 Kessels
Rd., Brisbane, QLD 4111, Australia, elliot.leach@griffithuni.edu.au), Chris
Burwell (Biodiversity Program, Queensland Museum, Brisbane, QLD, Australia), Darryl Jones, and Roger Kitching (Environ. Futures Res. Inst., Griffith Univ., Brisbane, QLD, Australia)
Climate change presents the most significant threat to Australia’s rainforest avifauna. In order to determine the future impacts of climate change
and make informed conservation decisions, baseline information on species
distributions, elevational preferences, and seasonal movements is necessary.
Traditionally, generating data such as these over large spatio-temporal
scales has been difficult and costly. However, the recent development of
cheap, reliable bioacoustic recorders has facilitated such data collection. By
using automated acoustic recorders in subtropical rainforest along two elevational gradients in northern New South Wales, Australia, we were able to
continuously monitor the avian community for a 14-month period. The data generated during this project allowed us to detect seasonal elevational migration amongst resident species, the arrival and departure times of migratory species, and the breeding behavior of cryptic species. This research also represented the first comprehensive avian biodiversity survey conducted in the region. Here, we present our results from the automated acoustic monitoring, and discuss the implications for future research and monitoring of the
avian community in the Gondwana Rainforests of Australia.
5:20
5pAB12. Large scale passive acoustic monitoring using compact arrays
of synchronized hydrophones. Ildar R. Urazghildiiev (JASCO Appl. Sci.
(Alaska) Inc., 19 Muriel St., Ithaca, NY 14850, ildar.urazghildiiev@jasco.com) and David E. Hannay (JASCO Appl. Sci., Victoria, BC, Canada)
Modern and future ecosystem-level data processing techniques need to
solve the problem of detecting, classifying, localizing, tracking, and estimating density (DCLTDE) concurrently from all sounds an acoustic recorder
detects. To solve this problem, we propose a technique based on three major
components: the compact array of synchronized hydrophones; the automatic
DCLTDE technique; and the software to visualize and to rapidly analyze
the results of long-term data processing. The compact array provides additional information about the azimuth and elevation angles of the
detected sounds, and the data processing technique uses this information to
solve the required DCLTDE problems automatically. Processed acoustic
recordings collected over one year in the Strait of Georgia, BC, Canada, are
presented. These results demonstrate that the proposed technique dramatically increases the efficiency-to-cost ratio by decreasing the time an analyst needs to review large volumes of data and by increasing the amount and accuracy of the information extracted from acoustic recordings.
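The directional information a synchronized pair of hydrophones contributes can be illustrated with a time-difference-of-arrival bearing estimate. The sketch below assumes a plane wave, a two-element pair, and a nominal sound speed; it is an editorial illustration, not JASCO's DCLTDE implementation.

```python
import numpy as np

SOUND_SPEED = 1485.0  # m/s, nominal seawater value (assumption)

def tdoa_samples(a, b):
    """Lag (in samples) of signal `b` relative to `a` via full
    cross-correlation; positive means `b` arrives later."""
    xc = np.correlate(b, a, mode="full")
    return int(np.argmax(xc)) - (len(a) - 1)

def bearing_deg(a, b, fs, spacing_m, c=SOUND_SPEED):
    """Broadside bearing (degrees) of a plane wave, from the time delay
    across a two-hydrophone pair with separation `spacing_m`."""
    tau = tdoa_samples(a, b) / fs
    s = np.clip(c * tau / spacing_m, -1.0, 1.0)
    return float(np.degrees(np.arcsin(s)))
```

With three or more non-collinear elements, the same delay measurements extend to a joint azimuth/elevation solve.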
THURSDAY AFTERNOON, 29 JUNE 2017
ROOM 310, 1:20 P.M. TO 4:40 P.M.
Session 5pAO
Acoustical Oceanography: Tools and Methods for Ocean Mapping II
Scott Loranger, Cochair
Earth Science, University of New Hampshire, 24 Colovos Road, Durham, NH 03824
Philippe Blondel, Cochair
Physics, University of Bath, University of Bath, Claverton Down, Bath BA2 7AY, United Kingdom
Invited Papers
1:20
5pAO1. Acoustic detection of macroalgae in a dynamic Arctic environment: Isfjorden (West Spitsbergen) case study. Aleksandra
Kruss (Coastal Systems and Human Impacts, CNR ISMAR, Tesa 104, Castello 2737/F, Venice, Veneto 30122, Italy, aleksandra.kruss@ve.
ismar.cnr.it), Jozef Wiktor, Agnieszka Tatarek, and Jozef Wiktor, Jr. (Marine Ecology, Inst. of Oceanology PAS, Sopot, pomorskie, Poland)
Acoustic imaging of seabed morphology and benthic habitats is a fast-developing tool for investigating large areas of underwater
environment. Even though single- and multi-beam echosounders have been widely used for this purpose for many years, there is still
much to discover, especially in terms of processing water column echoes to detect macroalgae and other scatterers (e.g., fishes, or suspended sediments) that can provide us with important information about the underwater environment and its evolution. In difficult Arctic
conditions, acoustic monitoring plays an important role in the investigation of bottom morphology and in imaging habitats. In July 2016,
we carried out a multidisciplinary expedition to investigate macroalgae spatial distribution in Isfjorden and to measure significant environmental features (currents, salinity, turbidity) influencing their occurrence. An area of 4.3 km2 was mapped using single- and multibeam sonars along with underwater video recordings, CTD and ADCP measurements. We obtained a unique data set showing variability
of acoustic properties among different macroalgae species, supported by very well correlated ground-truth data and environmental measurements. Modern processing techniques were used to analyze water column data signals for kelp detection. This study presents efficient
tools for monitoring benthic communities and their environmental context, focusing on macroalgae acoustic characteristics.
1:40
5pAO2. Synthetic aperture sonar interferometry for detailed seabed mapping: Performance considerations. Roy E. Hansen, Torstein O. Sæbø, Stig A. Synnes, and Ole E. Lorentzen (Norwegian Defence Res. Establishment (FFI), P O Box 25, Kjeller NO-2027, Norway, Roy-Edgar.Hansen@ffi.no)
Synthetic Aperture Sonar (SAS) interferometry is a technique for detailed mapping of the seabed, with the potential of very high resolution and wide swaths simultaneously. There are several specific challenges to overcome for the technique to reach its full potential.
These differ from other mapping sensor technologies, e.g., multibeam echosounders (MBES), and interferometric sidescan sonars
(ISSS). In this talk, we describe the principle of SAS interferometry with emphasis on the estimation part, strongly inspired by the similar principle in synthetic aperture radar (SAR). We describe the limiting factors in using SAS interferometry for seabed depth estimation.
These are related to the host platform, the measurement geometry, the sonar array design, and the signal processing. We construct an
error budget where we categorize the different components that affect the overall performance. We also describe the choices and tradeoffs available in the signal processing for a given set of measurements. We show example images and depth estimates from the Kongsberg HISAS interferometric SAS collected by a HUGIN autonomous underwater vehicle.
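The estimation step of SAS interferometry rests on mapping the interferometric phase difference between two vertically separated receivers to an arrival angle, and hence to a depth. Below is a deliberately simplified single-look sketch (plane-wave assumption, vertical baseline, unwrapped phase); it is an editorial illustration, not the HISAS processing chain, and all parameter values are hypothetical.

```python
import numpy as np

def depth_from_interferometric_phase(phase_rad, wavelength, baseline,
                                     slant_range, rx_depth):
    """Single-look seabed depth estimate from the (unwrapped) phase
    difference between two vertically separated receivers.

    Under the plane-wave assumption, the phase maps to the arrival
    angle theta (measured from the vertical) via
        phase = (2*pi / wavelength) * baseline * cos(theta),
    and the seabed depth is then rx_depth + slant_range * cos(theta).
    """
    cos_theta = phase_rad * wavelength / (2.0 * np.pi * baseline)
    cos_theta = np.clip(cos_theta, -1.0, 1.0)
    return rx_depth + slant_range * cos_theta
```

The error budget the authors describe enters through each input: platform motion perturbs `baseline`, sound-speed error perturbs `wavelength` and `slant_range`, and phase noise enters `phase_rad` directly.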
2:00
5pAO3. Internal wave effects on seafloor imagery and bathymetry estimates. Anthony P. Lyons (Ctr. for Coastal and Ocean Mapping, Univ. of New Hampshire, Durham, NH 03824, anthony.lyons@ccom.unh.edu), Roy E. Hansen (Norwegian Defence Res. Establishment (FFI), Kjeller, Norway), James Prater (Naval Surface Warfare Ctr. Panama City Div., Panama City, FL), Warren A. Connors
(NATO STO Ctr. for Maritime Res. and Experimentation, La Spezia, Italy), Glen Rice (Hydrographic Systems and Technol. Programs, NOAA, Durham, NH), and Yan Pailhas (Ocean Systems Lab., Heriot-Watt Univ., Edinburgh, United Kingdom)
Large linear structures (tens of meters by several meters) have been observed recently in seafloor imagery and bathymetry collected
with both synthetic aperture sonar (SAS) and multibeam echosounder (MBES) systems. It has been suggested [Hansen, et al., IEEE J.
Oceanic Eng., 40, 621-631 (2015)] that this phenomenon is not due to the true morphology of the seafloor, but is caused by water column
features related to breaking internal waves. Changes observed in acoustic intensity and bathymetry estimates are caused by a focusing of
the acoustic field which results in structures that appear to be true seabed topography. In terms of seafloor mapping, these topography-mimicking features will impact the interpretation of imagery, may complicate the production of mosaics, and have the potential to cause bathymetric uncertainties exceeding International Hydrographic Organization standards. In this talk, using examples of data collected with several different SAS and MBES systems in a variety of experimental locations, we will show that these water-column-caused features may not be uncommon.
2:20
5pAO4. Circular synthetic aperture sonar image resolution theory. Yan Pailhas and Yvan Petillot (Heriot-Watt Univ., Riccarton Campus, School of EPS, Edinburgh EH14 4AS, United Kingdom, Y.Pailhas@hw.ac.uk)
The introduction of SAS (Synthetic Aperture Sonar) systems has been a game changer for underwater surveys. The gain in resolution, compared to traditional sidescan systems, created a paradigm shift as the information contained in a SAS image switches from
shadows to highlights. SAS systems traditionally perform lawnmower type surveys, but the need for multiple views in MCM (MineCounter Measure) tasks, for example, opened the interesting problem of target re-acquisition patterns. In particular, circular patterns
maximize the aperture and thus the overall image resolution of such a system. The capability of CSAS (Circular SAS) has been demonstrated in the field, but the derivation of CSAS processing has not been fully developed. The non-uniform sampling of the circular pattern in particular introduces aberrations within the field of view and a non-uniform PSF (Point Spread Function). In this talk, we propose a new spatial sampling scheme which makes the CSAS PSF perfectly uniform. The theoretical closed-form solution of the PSF is then derived in both the time and Fourier domains. The PSF derivation naturally leads to redefining the image resolution as an energy leakage problem.
Thanks to the new sampling scheme and the uniform PSF, we also propose a deconvolution method based on atom waves which
increases the CSAS resolution.
2:40–3:00 Break
Contributed Papers
3:00
5pAO5. Seafloor mapping with a cylindrical array. Glen Rice (Ctr. for Coastal and Ocean Mapping, Univ. of New Hampshire, 24 Colovos Rd., Durham, NH 03824, grice@ccom.unh.edu), Ole Bernt Gammelseter (Simrad, Kongsberg, Horten, Norway), and Thomas C. Weber (Ctr. for Coastal and Ocean Mapping, Univ. of New Hampshire, Durham, NH)
Seafloor mapping is conducted with different types of arrays. Single-beam mapping systems are often constructed with piston arrays. Linear arrays are used both for side scan sonar and for multibeam echo sounders. These conventional approaches to seafloor mapping are typically constrained to a single observation of any one point on the seafloor for any one pass of a moving vessel. A less conventional cylindrical array offers the opportunity to observe the majority of the seafloor from different perspectives, but at the same angle, within a single pass. This has the potential to improve the resulting depth estimates and seafloor backscatter products. In 2016, a Simrad Omnisonar was used to demonstrate seafloor mapping with a cylindrical array. While this array is designed for observing fish schools, a small area was successfully mapped and the results compared with those of a conventional bathymetric mapping system. Observations on the benefits and challenges of this approach to seafloor mapping will be discussed.
3:20
5pAO6. Improving seep detection by swath sonars with adaptive beamforming. Tor Inge B. Lønmo (Informatics, Univ. of Oslo, P.O. Box 111, Horten 3191, Norway, toribi@ifi.uio.no) and Thomas C. Weber (Ctr. for Coastal and Ocean Mapping, Univ. of New Hampshire, Durham, NH)
Detection of gas seeps is currently of interest for a wide variety of applications. The oil and gas industry can use it to monitor installations such as oil wells and pipelines. It may also contribute to the understanding of geological and biological activity in the seabed. In a climate perspective, it is also important for estimating the amount of methane that seeps into the atmosphere and for monitoring CO2 stored in geological structures. Seeps are commonly detected by bathymetric swath sonars. Bubbles are strong acoustical targets and may form clear flares in the water column display, depending on the size and density of the bubbles. Detection is often easy before the first bottom return arrives but is gradually masked by seafloor reverberation at longer ranges. We propose to use the sidelobe-suppressing properties of adaptive beamforming to suppress seafloor reverberation and extend the seep detection range. To investigate this in practice, we placed an artificial seep at approximately 45 m depth and mapped it with a swath sonar. We ran lines over the seep at between 0 and 80 m horizontal range. Our processing chain allows us to process each ping with both standard and adaptive beamforming, providing easily comparable results.
3:40
5pAO7. Spatio-temporal variability of bottom reverberation striation pattern in shallow water. Andrey Lunkov (A.M. Prokhorov General Phys. Inst. RAS, Vavilov St. 38, Moscow 119991, Russian Federation, landr2004@mail.ru)
The interference pattern of bottom reverberation can be a source of additional information about the underwater environment. Over the last decade, striations in bottom-backscattered broadband signals in the time-frequency domain have been observed in several short-term experiments (Goldhahn et al., 2008; Li et al., 2010). In our research, numerical simulations are carried out to analyze the long-term variability of the bottom reverberation striation pattern in the presence of internal waves (IW), both background and solitary-like. A coherent normal mode reverberation model is applied. Backscattered chirp signals are selected from different directions by a bottom-mounted horizontal array. Simulated striation patterns reveal frequency shifts that depend on bearing and IW type. This phenomenon can be used to monitor the spatio-temporal variability of IW. To increase the sensitivity of this approach, time-reversal sound focusing on the seafloor is proposed. [Work supported by RFBR, 16-32-60194.]
4:00
5pAO8. Paradigm shift of underwater echo sounding technology III—Evaluation of paradigm shift echo-sounder. Ikuo Matsuo (Dept. of Information Sci., Tohoku Gakuin Univ., Tenjinzawa 2-1-1, Izumi-ku, Sendai 9813193, Japan, matsuo@mail.tohoku-gakuin.ac.jp) and Toyoki Sasakura (FUSION Inc., Tokyo, Japan)
Underwater echo sounding technology was developed 65 years ago and has been applied to various systems such as fishfinders, bathymetry, sonar, side scan sonar, multibeam echo sounding systems, and synthetic aperture sonar. We have proposed a new concept that may change the history of underwater echo sounding technology, which we call a “Paradigm Shift.” In conventional systems, the transmission interval is always longer than the round-trip travel time to the target, i.e., twice the target range divided by the underwater sound velocity. With our new “Paradigm Shift Echo-sounder,” the transmission interval can be chosen independently of the target range, making it possible to conduct a bathymetry survey that transmits 100 times per second. The system utilizes 7th-order Gold code sequence signals: the transmitted signal is phase modulated, with four cycles of the carrier representing one bit of the Gold code sequence. A sea trial was conducted with a prototype of the “Paradigm Shift Echo-sounder,” which transmitted 100 times per second over sea bottoms 10 to 80 m deep. We also confirmed that this system could reconstruct a high-resolution echogram of an artificial reef.
4:20
5pAO9. Coastal ocean current tomography using a spatio-temporal Kalman filter. Tongchen Wang, Ying Zhang, Tsih C. Yang, Wen Xu, and Huifang Chen (College of Information Sci. and Electronic Eng., Zhejiang Univ., Hangzhou, Zhejiang 310027, China, talentwtc@163.com)
The method of ocean acoustic tomography (OAT) can be used to invert/map the ocean current in a coastal area based on measurements of two-way travel time differences between nodes deployed on the perimeter of the surveying area. Previous work has attempted to relate the different measurements in time using the Kalman filter. Now, if the ocean dynamics or model is known, one can also determine the current field given an initial distribution or the travel-time difference data on the boundary, and can even forecast the current changes. Based on the ocean dynamics, the current field is shown to be spatially and temporally correlated. We derive their relation and use that as the state model for the Kalman filter; the coefficients are estimated from data using an auto-regressive analysis. Armed with this model, it is shown from simulated data that the current field can be tracked as a function of time using the Kalman filter (with an arbitrary initial condition) with higher accuracy than that estimated by OAT. The reason for the improvement, the use of a spatial-temporal state model (versus using only the temporal evolution), is studied. The method has also been applied to real data.
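In the simplest scalar case, a state model with auto-regressive coefficients of the kind described in 5pAO9 reduces to a Kalman filter with an AR(1) state equation. The sketch below illustrates only that scalar case with hypothetical noise parameters; it is an editorial illustration, not the authors' spatio-temporal field tracker.

```python
import numpy as np

def kalman_track(measurements, a=0.95, q=0.01, r=1.0, x0=0.0, p0=1.0):
    """Scalar Kalman filter with an AR(1) state model.

    State:       x_k = a * x_{k-1} + w_k,  w_k ~ N(0, q)
    Measurement: y_k = x_k + v_k,          v_k ~ N(0, r)
    Returns the sequence of filtered state estimates.  In the
    tomography setting, `a` would come from the auto-regressive
    analysis and the state would be a vector of current values.
    """
    x, p = x0, p0
    estimates = []
    for y in measurements:
        # predict with the AR(1) model
        x = a * x
        p = a * a * p + q
        # update with the new measurement
        k = p / (p + r)
        x = x + k * (y - x)
        p = (1.0 - k) * p
        estimates.append(x)
    return np.array(estimates)
```

Because the filter exploits the temporal correlation of the state, its error falls well below the per-snapshot measurement error, which mirrors the improvement over snapshot-by-snapshot OAT inversion reported in the abstract.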
THURSDAY AFTERNOON, 29 JUNE 2017
ROOM 312, 1:15 P.M. TO 5:40 P.M.
Session 5pBAa
Biomedical Acoustics and ASA Committee on Standards: Standardization of Ultrasound Medical Devices
Volker Wilkens, Cochair
Ultrasonics Working Group, Physikalisch-Technische Bundesanstalt, Bundesallee 100, Braunschweig 38116, Germany
Subha Maruvada, Cochair
U.S. Food and Drug Administration, 10903 New Hampshire Ave., Bldg. WO 62-2222, Silver Spring, MD 20993
Chair’s Introduction—1:15
Invited Papers
1:20
5pBAa1. The International Electrotechnical Commission (IEC) and ultrasonics. Peter J. Lanctot (Int. ElectroTech. Commission,
446 Main St., Ste. 16, Worcester, MA 01608, pjl@iec.ch)
Peter Lanctot from the International Electrotechnical Commission (IEC) will provide an overview of the international standardization activities related to IEC Technical Committee 87, Ultrasonics. Since 1955, the IEC has published international standards for a wide
range of applications across virtually all business sectors including medicine, electronics, consumer products, food, manufacturing
industries, and defence for ultrasonic technologies. As TC 87 is in charge of Ultrasonic Standardization activities within the IEC, Peter
will provide a high level overview on their standards work oriented towards ultrasonic aspects of medical equipment and the safety of
non-medical applications of ultrasonic fields. He will also touch upon the important platform that IEC TC 87 and other IEC technical
committees offer to companies, industries, and governments for meeting, discussing, and developing the International Standards they
require. Based in Geneva, Switzerland, the International Electrotechnical Commission is the world’s leading organization that prepares
and publishes International Standards for all electrical, electronic and related technologies.
1:40
5pBAa2. Overview of International Electrotechnical Commission Technical Committee 87 ultrasound organization and standards. Subha Maruvada (U.S. Food and Drug Administration, 10903 New Hampshire Ave., Bldg. WO 62-2222, Silver Spring, MD
20993, subha.maruvada@fda.hhs.gov)
The International Electrotechnical Commission (IEC) is the world’s leading organization that prepares and publishes International
Standards for all electrical, electronic and related technologies. It comprises Technical Committees (TCs) and Subcommittees (SCs) that
oversee standards development in all areas of technology. Particular to ultrasound medical devices, TC 87, Ultrasonics, is responsible
for preparing standards related to measurement methods, field characterization, and safety concerns, but excluding safety and essential
performance standards for equipment and systems. Responsibility for the latter falls under TC 62, Electrical Equipment in Medical Practice. SC 62B in TC 62 is in charge of all diagnostic imaging equipment and related devices, including ultrasound, and SC 62D deals
with all electromedical equipment used in medical practice other than diagnostic imaging, such as ultrasound therapeutic and surgical
devices. This presentation will discuss the safety standards, measurement standards, maintenance/performance documents and structure
and distribution of work in TC 87 and SC62B and SC62D as an introduction to the more specific talks following.
2:00
5pBAa3. Safety and performance testing according to international standards and regulatory environment for medical ultrasound devices. Royth P. von Hahn (Medical and Health Services, TUV SUD America, 10040 Mesa Rim Rd., San Diego, CA 92126,
rvonhahn@tuvam.com) and Mathias Kuhn (Medical and Health Services, TUV SUD ProDC Service, Munich, Germany)
Besides being scientifically interesting, acoustics, and ultrasonics in particular, has a broad and diverse range of applications in industry and the medical field. Acceptance of a technology and its applications depends on the safety and performance of the devices utilizing it. Standardization is essential to ensure safety and performance, and it also helps to harmonize market access of devices around the world. To keep up with the state of the art, it is necessary to continuously update existing standards and develop new ones. Especially in medical ultrasound, a unique combination of expertise from different scientific areas is needed to develop useful standards: in addition to acoustics, physics, and HF engineering, aspects of bioeffects and measurement technology need to be represented in standards development. The presentation will give insights into how such a combination of expertise contributed to the current set of medical ultrasound standards and what technical and scientific challenges need to be solved when developing safety standards for the latest innovations. Using actual examples, it will show the relevance of scientific research for standardization and device testing.
2:20
5pBAa4. Basic ultrasonic field measurement: Overview of standardized methods and expected developments. Volker Wilkens
(Ultrason. Working Group, Physikalisch-Technische Bundesanstalt, Bundesallee 100, Braunschweig 38116, Germany, volker.wilkens@
ptb.de)
Basic ultrasonic field measurement methods to support the acoustic output characterization of medical ultrasound equipment are
standardized in IEC 61161 for ultrasonic power by means of radiation force balance measurements and in IEC 62127-1,2,3 for ultrasonic
pressure using hydrophones. Descriptions and requirements for ultrasonic power determination seem to be thoroughly elaborated, even, for instance, for the extended frequency ranges of high-frequency diagnostic systems or for the very high output powers of high intensity therapeutic ultrasound devices. In contrast, hydrophone measurements are often accompanied by several challenges still not fully addressed by current standards. Important deficiencies to overcome in future editions are the frequency range of calibrations, currently limited to 40 MHz and insufficient for broadband waveform deconvolution applications, and the descriptions of deconvolution and of the corresponding measurement uncertainty determination themselves. Improvements of these technical items according to recently developed
calibration and data evaluation procedures are expected to result in better quality and reliability of acoustic output parameter determination as required, for instance, for diagnostic ultrasound machines within the output declaration and output display according to IEC
60601-2-37 and IEC 62359. In addition, new projects on standards in the high intensity therapeutic area may also take advantage of such
improvements.
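The broadband waveform deconvolution mentioned above amounts to dividing the measured voltage spectrum by the complex hydrophone sensitivity, with regularization where the sensitivity is small. The sketch below is an editorial illustration of that idea (Tikhonov-style regularization, hypothetical sensitivity model), not the procedure prescribed by the IEC standards under discussion.

```python
import numpy as np

def deconvolve_hydrophone(voltage, sensitivity_fft, eps=1e-3):
    """Estimate the pressure waveform from a hydrophone voltage record.

    `sensitivity_fft` is the complex hydrophone sensitivity M(f) sampled
    on the rfft frequency grid of `voltage`.  A regularized inverse
    filter conj(M) / (|M|^2 + eps * max|M|^2) tames the bands where
    |M(f)| is small and would otherwise amplify noise.
    """
    V = np.fft.rfft(voltage)
    M = np.asarray(sensitivity_fft)
    denom = np.abs(M) ** 2 + eps * np.max(np.abs(M)) ** 2
    P = V * np.conj(M) / denom
    return np.fft.irfft(P, n=len(voltage))
```

The choice of `eps` is exactly the kind of detail whose uncertainty contribution the abstract says future editions of the standards need to describe.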
2:40
5pBAa5. International Electrotechnical Commission (IEC) and Food and Drug Administration (FDA) efforts to develop standards for ultrasound physiotherapy. Bruce Herman (U.S. FDA, 10903 New Hampshire Ave., Bldg. 66, Silver Spring, MD 20993,
bruce.herman@comcast.net)
Ultrasound physiotherapy devices produce acoustic energy, typically in the low MHz range, to generate deep heat for relief of pain
and spasms, and have been used since the 1940s. The IEC and the FDA have an interrelated history of developing standards for these devices. The IEC produced the first such standard, “Testing and calibration of ultrasound therapeutic equipment” (1963). The FDA, with the IEC document as a guide, developed its own “Ultrasonic Therapy and Surgery Products Performance Standard” (1978, 2012). Then the IEC, with FDA involvement and using many of the FDA’s concepts, produced “Medical electrical equipment—Part 2-5: Particular requirements for the basic safety and essential performance of ultrasonic physiotherapy equipment” (1984, 2000, 2009) and “Ultrasonics—Physiotherapy systems—Field specifications and methods of measurement in the frequency range 0.5 MHz to 5 MHz”
(1996, 2007, 2013). FDA personnel are also currently leading an IEC effort to write a standard dealing with new, lower frequency physiotherapy. The FDA is also committed to harmonizing FDA regulations with IEC standards whenever possible. This presentation
recounts the history of these efforts and examines some similarities and differences among the documents.
3:00
5pBAa6. Standards for the characterization of high intensity nonlinear ultrasound fields and power generated by therapeutic
systems. Thomas L. Szabo (Biomedical Dept., Boston Univ., 44 Cummington Mall, Boston, MA 02215, tlszabo@bu.edu)
J. Acoust. Soc. Am., Vol. 141, No. 5, Pt. 2, May 2017
Acoustics ’17 Boston
The emerging field of high intensity therapeutic ultrasound (HITU) presents multiple challenges. Early attempts to characterize high pressure fields fried both hydrophones and force balances originally designed for diagnostic ultrasound. Working Group 6 of the International Electrotechnical Commission (IEC) Technical Committee 87 initially sought out methods employing existing validated measurement technology. Technical Specification 62256 focused on two field characterization approaches for water. The first used conventional calibrated hydrophones in the linear range and scaled the field pressure values up by the ratio of applied voltages. The other method involved a finely sampled 2D scan near the transducer face as input data to linear projection algorithms that recreate the entire pressure field. This computed field could then be scaled up by the previously described voltage ratio. An additional benefit of the second
approach was that the same scan data could be used to predict pressure and thermal levels in simulated tissue media representing clinical
scenarios. Technical standard 62255 described HITU power measurements with scaling and a novel buoyancy method. Current efforts
are directed at the direct measurement of high pressure fields with new technologies and the data-based nonlinear simulation of fields
and bioeffect end points in water and tissues.
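The linear scaling step common to both characterization approaches can be sketched as follows. This is only an illustrative outline of the voltage-ratio idea, not code from the Technical Specification; all field values, voltages, and function names are invented.

```python
import numpy as np

def scale_field(p_low, v_low, v_high):
    """Scale a pressure field measured at a low drive voltage up to a
    higher drive voltage, assuming the transducer responds linearly."""
    return np.asarray(p_low) * (v_high / v_low)

# Hypothetical example: a focal pressure of 0.2 MPa measured at 10 V drive,
# extrapolated to 150 V drive under the linearity assumption.
p_low = np.array([0.05e6, 0.2e6, 0.05e6])   # Pa, three field points
p_high = scale_field(p_low, v_low=10.0, v_high=150.0)
```

The same multiplication applies whether the low-amplitude field was measured directly with a hydrophone or reconstructed by linear projection from a near-face 2D scan.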
3:20
5pBAa7. Standards for pressure pulses used in lithotripsy, pain therapy, and other medical applications. Friedrich Ueberle (Life
Sci. / Biomedical Eng., Hamburg Univ. of Appl. Sci. (HAW), Ulmenliet 20, Hamburg 21033, Germany, friedrich.ueberle@haw-hamburg.de)
Shockwave lithotripsy was first used in 1980 for the non-invasive, safe treatment of stones in the urinary tract. The first commercial
lithotripters were introduced in 1983, using an underwater spark discharge as sound source, which was focused by an ellipsoidal mirror.
Single steep (shock-wave) pressure pulses of a few microseconds duration, with amplitudes from 20 to more than 100 MPa, are released at a rate of up to 2 per second. Each treatment requires about 1000 to 3000 pulses. Competitors developed lithotripters with spherical piezoelectric and focused electromagnetic sources. To this day, these three source types are applied in commercial lithotripters. With the emergence of different source types from different manufacturers, it became important to standardize the description of pulses and acoustic wave fields for clinical approval and safety, for understanding the interaction of pressure pulses with biological tissue and stones, and for quality control. A lithotripter safety standard was created in 1997 (IEC 60601-2-36). In 1998, the international standard IEC 61846 was released, which describes measurement methods and parameters for focused pressure pulse sources. Both standards are regularly reviewed and enhanced in maintenance projects. A new project (IEC 63045) defines parameters for non-focusing and weakly focusing pressure pulse sources, which have been widely used in pain therapy and other tissue applications since 1998.
3:40
5pBAa8. Fast accurate optical measurement of medical ultrasonic field in combination with numerical simulation of nonlinear
propagation. Shin-ichiro Umemura and Shin Yoshizawa (Graduate School of Biomedical Eng., Tohoku Univ., Aoba 6-6-05, Aramaki,
Aoba-ku, Sendai 980-8579, Japan, sumemura@ecei.tohoku.ac.jp)
Fast and accurate measurement of the ultrasonic field is necessary to ensure the safety and efficacy of therapeutic as well as diagnostic applications of medical ultrasound. The most common method for this purpose is hydrophone scanning. However, it requires a long scanning time and potentially disturbs the field, which limits the efficiency of developing such applications. This study proposes an optical phase contrast method in combination with a CT algorithm. The ultrasonic pressure field modulates the phase of the light passing through the field. A phase plate was employed to shift the phase of the non-diffracted component of the light, typically by 90 degrees. The phase modulation of the diffracted component was converted to amplitude modulation through interference with the phase-shifted non-diffracted component and then measured by a camera. From the measured projected 2D data, the 3D pressure field was reconstructed by a
CT algorithm. An upstream field, in which the optical phase does not wrap and the effect of nonlinear propagation can be ignored, was
thereby quantified. Nonlinear ultrasonic propagation was simulated based on a pseudo-spectral method using the upstream pressure field
as the input. Both pressure waveform and absolute pressure from the proposed method agreed well with those directly from hydrophone
scanning.
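The phase-to-amplitude conversion described above can be checked numerically: for small phase modulation, advancing the non-diffracted light by 90 degrees makes the camera intensity approximately linear in the acoustic phase. The following is a minimal sketch of that principle, not the authors' implementation; all values are illustrative.

```python
import numpy as np

# Optical phase modulation imparted by the acoustic pressure (radians).
# Small values keep the response linear and avoid phase wrapping.
phi = np.linspace(0.0, 0.1, 6)

# Unit-amplitude probe light after passing through the field: exp(i*phi).
field = np.exp(1j * phi)

# Split into non-diffracted (mean) and diffracted (residual) components,
# then advance the non-diffracted component by 90 degrees (the phase plate).
nondiff = field.mean()
diff = field - nondiff
camera_field = nondiff * np.exp(1j * np.pi / 2) + diff

# Camera records intensity; to first order it varies as 1 + 2*(phi - mean(phi)).
intensity = np.abs(camera_field) ** 2
```

Without the phase plate the intensity would be constant to first order in phi, which is why the plate is essential for recovering the acoustic projections.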
Contributed Papers
4:00
5pBAa9. Acoustic holography combined with radiation force measurements to characterize broadband ultrasound transducers and calibrate
hydrophone sensitivity. Sergey Tsysar, Maxim A. Kryzhanovsky, Vera
Khokhlova (Phys. Dept., Lomonosov Moscow State Univ., GSP-1, 1-2 Leninskie Gory, Moscow 119991, Russian Federation, sergey@acs366.phys.msu.
ru), Wayne Kreider (CIMU, Appl. Phys. Lab., Univ. of Washington, Seattle,
WA), and Oleg Sapozhnikov (Phys. dept, Lomonosov Moscow State Univ.,
Moscow, Russian Federation)
4:20
5pBAa10. A quick and reliable acoustic calibration method for a clinical magnetic resonance guided high-intensity focused ultrasound system. Satya V.V. N. Kothapalli (Biomedical Eng., Washington Univ. in
Saint Louis, 4511 Forest Park Ave., Saint Louis, MO 63108, vkothapalli@
wustl.edu), Ari Partanen (Clinical Sci. MR Therapy, Philips, Andover,
MA), Michael Altman (Dept. of Radiation Oncology, Washington Univ. in
St. Louis, Saint Louis, MO), Zhaorui Wang (Biomedical Eng., Washington
Univ. in Saint Louis, Saint Louis, MO), H. Michael Gach, William
Straube, Dennis Hallahan (Dept. of Radiation Oncology, Washington Univ.
in St. Louis, Saint Louis, MO), and Hong Chen (Biomedical Eng., Washington Univ. in Saint Louis, Saint Louis, MO)
Accurate characterization of broadband ultrasound (US) transducers
operating either in pulsed or CW modes is necessary for their use in biomedical applications. The acoustic holography method allows reconstruction of
the pattern of vibrations at the transducer surface for use as a model boundary condition to simulate ultrasound fields in water or tissue. A calibrated
hydrophone is often not available to perform such measurements. Here, it is
proposed to combine transient acoustic holography with radiation force balance (RFB) measurements at multiple frequencies to quantitatively determine 3D distributions of the acoustic field at different source frequencies,
the corresponding acoustic power of the source, and the hydrophone sensitivity within the frequency bandwidth of the transducer. First, a transient
acoustic hologram is measured using a short-pulse excitation of the source
by raster scanning the hydrophone along a surface in front of the source and
recording the waveform at a large number of points (typically 10–40 thousand). Then, RFB measurements are conducted for different frequencies within the pulse bandwidth to determine the axial component of the acoustic
radiation force. These data are related to the corresponding values calculated
from the holograms at each frequency component to determine the corresponding hydrophone sensitivities. [Work supported by RSF-14-12-00974.]
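The sensitivity-recovery idea admits a simple self-consistency check: a hologram recorded in volts carries a "power" that differs from the RFB-derived acoustic power by the squared sensitivity, so M = sqrt(W_volts / (F·c)) for a quasi-plane beam. The sketch below uses an invented Gaussian beam and an assumed 50 nV/Pa sensitivity; it illustrates the relation only and is not the authors' procedure or data.

```python
import numpy as np

RHO, C = 998.0, 1483.0          # water density (kg/m^3) and sound speed (m/s)
M_TRUE = 50e-9                  # assumed hydrophone sensitivity, 50 nV/Pa

# Quasi-plane Gaussian pressure amplitude over the scan plane (Pa).
dx = 0.5e-3                     # 0.5 mm scan step
x = np.arange(-20e-3, 20e-3, dx)
X, Y = np.meshgrid(x, x)
p = 0.5e6 * np.exp(-(X**2 + Y**2) / (8e-3) ** 2)

# Time-averaged acoustic power carried by the beam (plane-wave estimate).
W_true = np.sum(p**2 / (2 * RHO * C)) * dx * dx

# Radiation force an absorbing RFB target would register: F = W / c.
F = W_true / C

# The hydrophone records voltages, not pressures, so the hologram "power"
# computed from voltages is M^2 times the true power.
u = M_TRUE * p
W_volts = np.sum(u**2 / (2 * RHO * C)) * dx * dx

# Sensitivity follows from comparing the two power estimates.
M_est = np.sqrt(W_volts / (F * C))
```

In the measurement described above this comparison is repeated at each frequency within the pulse bandwidth, yielding the sensitivity curve rather than a single value.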
With the expanding use and applications of MR-HIFU in both thermal-based and pressure-based therapies, there is an urgent need to develop acoustic field characterization and quality assurance (QA) tools for MR-HIFU systems. We developed a method for quick and reliable acoustic field assessment inside the magnet bore of a clinical MRI system. A fiber-optic hydrophone with a 2-m long fiber was fixed inside a water tank that was placed on the HIFU table above the acoustic window. The long fiber allowed the MRI-incompatible hydrophone control unit to be located outside the MRI suite. MR images of the fiber were used to position the HIFU focus approximately at the tip of the fiber. The HIFU focus was electronically steered within a 5 × 5 × 5 mm³ volume in synchronization with hydrophone measurements. The HIFU focus location was then identified based on
the 3D field scans. Peak positive and negative pressures were measured at
the focus at various nominal acoustic powers. Furthermore, focus dimensions and spatial peak pulse average intensities were assessed. The results
were compared to and were consistent with standard hydrophone measurements outside the MRI suite. This method provides a useful tool for field
characterization and QA of MR-HIFU systems.
4:40
5pBAa11. Use of acoustic holography to quantify and correct geometrical errors in ultrasound field characterization. Wayne Kreider (CIMU,
Appl. Phys. Lab., Univ. of Washington, 1013 NE 40th St., Seattle, WA
98105, wkreider@uw.edu), Vera Khokhlova, Sergey Tsysar, and Oleg Sapozhnikov (Phys. Faculty, Moscow State Univ., Moscow, Russian
Federation)
The development of medical ultrasound devices requires characterization of the ultrasound fields they radiate. Recently, acoustic holography
methods have gained increased acceptance for field characterization.
Because such an approach allows reconstruction of the full 3D field, the
beam axis, its orientation relative to the positioner, and any field quantities
of interest can be readily determined. Inherently, the accuracy of such projection methods can be sensitive to uncertainties in the positions at which
field measurements are recorded. Although commonly used industrial positioning systems have linear axes with repeatability specifications much less
than an ultrasound wavelength, the assembled orthogonality of three linear
axes is not guaranteed. Here we analyze a typical raster scanning holography experiment by considering two distinct coordinate systems: ideal rectilinear coordinates aligned with the transducer (1.2 MHz, aperture 124 mm,
F-number 1) and non-orthogonal coordinates aligned with the positioner
(Velmex Unislides). By locating a distinct field feature in both projection
calculations and independent hydrophone measurements, a misalignment of positioner axes of about 0.5° was identified. The impact of positioner non-orthogonality on field characterization metrics is discussed along with the
potential use of this approach for a priori positioner calibration. [Work supported by NIH EB007643, NIH EB016118, and RSF-14-15-00665.]
5:00
5pBAa12. Two reduced-order approaches for characterizing the acoustic output of high-power medical ultrasound transducers. Vera Khokhlova, Petr Yuldashev (Phys. Faculty, Moscow State Univ., Leninskie Gory,
Moscow 119991, Russian Federation, vera@acs366.phys.msu.ru), Adam D.
Maxwell (Dept. of Urology, Univ. of Washington, Seattle, WA), Pavel Rosnitskiy, Ilya Mezdrokhin, Oleg Sapozhnikov (Phys. Faculty, Moscow State
Univ., Moscow, Russian Federation), Michael Bailey, and Wayne Kreider
(Ctr. for Industrial and Medical Ultrasound, Univ. of Washington, Seattle,
WA)
An approach that combines numerical modeling with measurements is
gaining acceptance for field characterization in medical ultrasound. The
general method is suitable for accurate simulation of the fields radiated by
therapeutic transducers in both water and tissue. Here, three characterization
methods are compared. Simulations based on the 3D Westervelt equation
with a boundary condition determined from acoustic holography measurements are used as the most accurate benchmark method. Two simplified
methods are based on an axially symmetric nonlinear parabolic formulation,
either the KZK equation or its wide-angle extension. Various approaches for
setting a boundary condition to the parabolic models are presented and discussed. Simulation results obtained with the proposed methods are compared for a typical therapeutic array and a strongly focused single-element
transducer, with validation measurements recorded by a fiber-optic hydrophone at the focus at increasing acoustic outputs. It is shown that the wide-angle parabolic model is more accurate than the KZK model in describing diffraction effects in the near fields of the focused beams. However, both
methods give accurate results in the focal zone, even at very high outputs
when shocks are present. [Work supported by RSF 14-12-00974, P01
DK043881, NIH EB7643, and NSBRI through NASA 9-58.]
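As background to why shock capture matters at high outputs, the textbook plane-wave shock formation distance x_s = ρc³/(βωp₀) shows how quickly nonlinear steepening sets in as the source pressure grows. The back-of-envelope sketch below uses invented drive levels; this estimate is standard nonlinear acoustics and is not part of the models compared in the abstract.

```python
import math

# Plane-wave shock formation distance x_s = rho * c^3 / (beta * omega * p0).
rho, c, beta = 998.0, 1483.0, 3.5      # water at roughly 20 C
f = 1.2e6                              # drive frequency, Hz (illustrative)
for p0 in (0.1e6, 1e6, 5e6):           # source pressure amplitudes, Pa
    x_s = rho * c**3 / (beta * 2 * math.pi * f * p0)
    print(f"p0 = {p0/1e6:4.1f} MPa -> shock formation distance = {x_s*100:.1f} cm")
```

At diagnostic-level amplitudes the shock distance exceeds typical focal lengths, while at therapeutic amplitudes it shrinks to a few centimeters, which is why high-output validation must handle shocked waveforms.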
5:20
5pBAa13. Use of wide-element one-dimensional receiving arrays to
measure two-dimensional lateral pressure distribution of ultrasound
beams. Oleg Sapozhnikov (Phys. Faculty, Moscow State Univ., and CIMU,
Appl. Phys. Lab., Univ. of Washington, Leninskie Gory, Moscow 119991,
Russian Federation, oleg@acs366.phys.msu.ru), Wayne Kreider (CIMU,
Appl. Phys. Lab., Univ. of Washington, Seattle, WA), Adam D. Maxwell
(Dept. of Urology, Univ. of Washington School of Medicine, Seattle, WA),
and Vera Khokhlova (Phys. Faculty, Moscow State Univ., and CIMU, Appl.
Phys. Lab., Univ. of Washington, Moscow, Russian Federation)
Measurement of acoustic fields is an important aspect of ultrasound
research and development. The corresponding data is usually collected by a
raster scan using a single point receiver moved by a computer-controlled
positioning system. A faster approach could be based on the use of a 1D linear multi-element array with several tens or hundreds of small elements,
which is moved in a direction perpendicular to the beam axis. The drawback
of such an approach is difficulty in making small, closely-positioned receiving elements that provide sufficient signal-to-noise ratio. The current paper
presents an alternative design for a 1D linear array: instead of using small
elements, here it is suggested to use narrow but long elements, with a width
on the order of a half wavelength or less and a length larger than the extent
of the ultrasound beam being studied. The drawback of the proposed long-element arrays is the absence of resolution in the direction along the element.
This problem is solved by incrementally rotating the array either around the
axis parallel to the array surface and perpendicular to the elements or around
the axis perpendicular to the array surface. [Work supported by RSF 14-15-00665, NIH R01 EB007643, and NIH R21 EB016118.]
THURSDAY AFTERNOON, 29 JUNE 2017
BALLROOM B, 1:20 P.M. TO 5:40 P.M.
Session 5pBAb
Biomedical Acoustics and Signal Processing in Acoustics: Diagnostic and Therapeutic Applications of
Ultrasound Contrast Agents II
Tyrone M. Porter, Cochair
Boston University, 110 Cummington Mall, Boston, MA 02215
Klazina Kooiman, Cochair
Thoraxcenter, Dept. of Biomedical Engineering, Erasmus MC, P.O. Box 2040, Room Ee2302, Rotterdam 3000 CA,
Netherlands
Invited Paper
1:20
5pBAb1. Ultrasound molecular imaging. Jonathan Lindner (Oregon Health & Sci. Univ., OHSU Cardiology UHN61, 3181 SW Sam
Jackson Park Rd., Portland, OR 97239, lindnerj@ohsu.edu)
Non-invasive in vivo molecular imaging technologies are uniquely poised to temporally evaluate vascular adaptations to
disease and the impact of new therapies. These technologies have been used to study vascular changes in various forms of cardiovascular
and inflammatory disease and cancer. The approaches for studying the macro- or microvasculature with non-invasive imaging go beyond
anatomic or perfusion characterization and instead yield information on vascular phenotype by either analyzing novel signal features or
using targeted contrast agents that reveal the molecular underpinnings for disease. This talk will review new developments in ultrasound
molecular imaging techniques that are likely to have a substantial impact on the understanding of pathophysiology, the development of
new therapies, or diagnosis of disease in patients. Topics of focus will include recent advances in molecular imaging for the evaluation of (1) atherosclerotic disease; (2) tissue inflammation or ischemic injury; (3) thrombus or prothrombotic environment; and (4) angiogenesis and stem cells. The talk will also discuss how these different approaches may help solve current clinical deficiencies.
Contributed Papers
1:40
5pBAb2. Toward detection of early apoptosis using labeled microbubbles. Tom Matula, Masaoki Kawasumi, Daiki Rokunohe, Brian MacConaghy, and Andrew Brayman (Univ. of Washington, 1013 NE 40th St.,
Seattle, WA 98105, matula@uw.edu)
2:00
5pBAb3. Role of mannitol on the echogenicity of echogenic liposomes
(ELIPs). Krishna N. Kumar (Dept. of Mech. Eng., The George Washington
Univ., 800 22nd St. NW, Washington, DC 20052,
krishnagwu@gwu.edu), Sanku Mallik (Dept. of Pharmaceutical Sci., North
Dakota State Univ., Fargo, ND), and Kausik Sarkar (Dept. of Mech. Eng.,
The George Washington Univ., Washington, DC)
Chemotherapy is useful for treating metastases, but on average only
50% of tumors respond to the initial choice of drug(s). Many patients thus
suffer from unnecessary side effects while their tumors continue to grow,
until the proper drugs are found. Because most drugs kill cancer cells by
inducing apoptosis, detection of apoptosis can be used to assess drug efficacy prior to treatment. Current technologies require large cell numbers and
several time points for quantification. We propose a simple drug screening
tool to monitor early apoptosis using annexin-V labeled microbubbles. We
investigated the parameter space for such a tool with a transparent flow
chamber such that the acoustic radiation force would be relatively parallel
with the focal plane of the microscope’s objective. An uncalibrated PZT
was activated with a small voltage from a function generator (1-10 V, 2
MHz), generating sufficient pressure to displace microbubble-labeled cells.
Initial studies focused on detection of labeled leukemia cells (not apoptosis).
In one study, labeled and unlabeled cells were mixed together. In another
study, labeled cells were mixed with red blood cells. Displacements were
measured with or without flow. The results suggest that a microbubble-based drug screening tool can be developed for real-time monitoring of
apoptosis.
Although echogenic liposomes (ELIP), specially prepared lipid-bilayer
coated vesicles, have proved quite effective as scatterers, the exact mechanism of their echogenicity is not well understood. However, freeze-drying
in the presence of mannitol has proved to be a critical component of the elaborate preparation protocol of these ELIPs. Here, we investigate the role
played by mannitol in ensuring echogenicity. We investigated mannitol
along with other similar sugars, such as sucrose, trehalose, and xylitol. Mannitol when freeze-dried assumes a crystalline state, while other sugars adopt
glassy states. Aqueous solutions of each sugar were prepared and freeze-dried. The freeze-dried samples were re-dissolved in water and the scattered
response from the solution was measured. While the solution of mannitol
was found echogenic, indicating production of bubbles, others were not. If
the sample was freeze-thawed before dissolution, it was not echogenic. The
crystalline state of the excipient, mannitol, was necessary for echogenicity.
Furthermore, other excipients such as glycine and meso-erythritol, which
attain a crystalline state upon freeze-drying, were also found to give rise to
echogenicity in solution. The production of bubbles by crystalline mannitol
upon dissolution indicates a possible mechanism of echogenicity of ELIPs.
Invited Paper
2:20
5pBAb4. Ultrasound triggered mRNA delivery to dendritic cells—Towards an in vivo cancer vaccination strategy. Heleen Dewitte (Pharmaceutics, UGent, Ottergemse Steenweg 460, Gent 9000, Belgium), Emmelie Stock, Katrien Vanderperren (Veterinary Sci.,
UGent, Gent, Belgium), Stefaan De Smedt (Pharmaceutics, UGent, Gent, Belgium), Karine Breckpot (Health Sci., VUB, Brussels, Belgium), and Ine Lentacker (Pharmaceutics, UGent, Gent 9000, Belgium, Ine.Lentacker@UGent.be)
Increasing knowledge on the crucial role of dendritic cells (DCs) in the initiation of immunity has launched a new field in cancer
immunotherapy: DC vaccination. By loading a patient’s DCs with tumor antigens and injecting them as a vaccine, antitumor immune responses can be induced. This project aims to use theranostic mRNA-loaded microbubbles (MBs) for ultrasound-guided, ultrasound-triggered antigen loading of DCs within the lymph nodes in vivo. mRNA-loaded MBs were prepared by attaching mRNA-lipid complexes to lipid MBs
via avidin-biotin linkages. MBs loaded with mRNA encoding a tumor antigen (ovalbumin, OVA) were used to sonoporate murine DCs
in vitro. These mRNA-sonoporated DCs were then used as therapeutic vaccines in E.G7-OVA-bearing mice. In vitro mRNA-sonoporation of murine DCs revealed transfection efficiencies up to 27%. The potential of this technique was further assessed in vivo by vaccinating E.G7-OVA-bearing mice with sonoporated DCs. When mRNA-sonoporated DCs were used, tumor growth was significantly reduced
and even regressed in 30% of the animals. Moreover, rechallenge with tumor cells did not lead to tumor growth, indicating long-lasting
immunological protection. A contrast-enhanced ultrasound (CEUS) study in dogs showed that the MBs rapidly drained to the lymph nodes after subcutaneous injection. Moreover, this
revealed detailed information on the lymphatic anatomy.
Contributed Papers
5pBAb5. Effect of molecular weight on sonoporation-mediated uptake
in human cardiac cells. Danyal F. Bhutto, Emily M. Murphy (BioEng.,
Univ. of Louisville, 2301 S. Third St., Paul C. Lutz Hall, Rm. 419, Louisville, KY 40292-0001), John Zhao, Joseph B. Moore (Medicine, Univ. of
Louisville, Louisville, KY), Roberto Bolli (Medicine, Univ. of Louisville,
La Grange, KY), and Jonathan A. Kopechek (BioEng., Univ. of Louisville,
Louisville, KY, jonathan.kopechek@louisville.edu)
Sonoporation of cells induced by ultrasound-driven microbubble cavitation has been utilized for intracellular delivery of molecular therapeutics.
The molecular weight of therapeutic agents can vary significantly, with
DNA plasmids often larger than 5 MDa, siRNAs and miRNAs ~10 kDa,
and other drugs often less than 1 kDa. Some studies have suggested that
sonoporation-mediated uptake may decrease at higher molecular weights
due to slower diffusion rates, but experiments with equal molar concentrations have not been reported. Therefore, the objective of this study was to
explore the effect of molecular weight on sonoporation-mediated uptake of
fluorescein (0.33 kDa) or fluorescent dextrans (10 kDa, 2 MDa) using equal
molar concentrations (100 nM). Sonoporation was induced in cultured
human cardiac mesenchymal cells using lipid microbubbles and 2.5 MHz
B-mode ultrasound (P4-1 transducer, Verasonics Vantage ultrasound system) and uptake by viable cells was measured with flow cytometry. No significant differences in sonoporation-mediated uptake were measured
between molecules of different sizes at equal molar concentrations
(ANOVA p = 0.92). However, significant differences in uptake were
observed at different molar concentrations and acoustic pressures (ANOVA
p = 0.02). In summary, these results suggest that the effect of molecular
weight on sonoporation-mediated uptake is minimal compared to drug concentration or acoustic pressure.
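The equal-molar comparison above can be made concrete: at a fixed 100 nM molar concentration, the mass concentration delivered scales directly with molecular weight. The small calculation below is illustrative only, with molecular weights rounded from the abstract.

```python
# Mass concentration needed for an equal 100 nM molar dose of each agent,
# showing how strongly molecular weight changes the mass delivered.
molar = 100e-9                                        # mol/L
species = {
    "fluorescein": 0.33e3,                            # g/mol (0.33 kDa)
    "dextran-10k": 10e3,                              # g/mol (10 kDa)
    "dextran-2M": 2e6,                                # g/mol (2 MDa)
}
for name, mw in species.items():
    mass_g_per_L = molar * mw
    print(f"{name:12s}: {mass_g_per_L * 1e6:10.1f} ug/L")
```

A fixed mass concentration would instead put several thousand times fewer molecules of the 2 MDa dextran in solution than of fluorescein, confounding any molecular-weight comparison.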
3:00
5pBAb6. Enhanced delivery of a density-modified therapeutic using
ultrasound: Comparing the influence of micro- and nano-scale cavitation nuclei. Harriet Lea-Banks, Christophoros Mannaris, Megan Grundy,
Eleanor P. Stride, and Constantin Coussios (Eng. Sci., Univ. of Oxford, Inst.
of Biomedical Eng., Old Rd. Campus Res. Bldg., Oxford OX3 7DQ, United
Kingdom, harriet.lea-banks@eng.ox.ac.uk)
New strategies are required to enhance the penetration and tumor-wide
distribution of cancer therapeutics, including viruses, antibodies, and oligonucleotides. We have shown previously that increasing the density of a
nanoparticle can enhance its ultrasound-mediated transport, particularly
when exposed to microstreaming. The current study investigates how the
physical characteristics and dynamics of different cavitation nucleation
agents affect the cavitation-mediated transport of therapeutics not directly
bound to the gas nuclei. SonoVue (Bracco, Milan, Italy) microbubble contrast agent (with 3 μm mean initial bubble diameter) and polymeric gas-entrapping nanoparticles (with 260 nm mean initial bubble diameter) were
first compared in terms of inertial cavitation threshold and cavitation persistence. Under the same ultrasound exposure conditions (centre frequencies of
either 0.5 MHz or 1.6 MHz, and peak negative pressures from 0.2 MPa to
3.5 MPa) their respective cavitation emissions were found to differ in magnitude, frequency composition and duration. The resultant penetration
depths of co-injected gold-coated nanoparticles in a tissue-mimicking phantom were also found to vary, with the polymeric gas-entrapping nanoparticles giving rise to greater penetration distances. This study concludes that
both the characteristics of the therapeutic and the cavitation nuclei can significantly influence ultrasound-mediated drug delivery.
3:20–3:40 Break
3:40
5pBAb7. Optimization of liposome delivery using low intensity ultrasound and microbubbles. Jia-Ling Ruan (Oncology, Univ. of Oxford, Old
Rd. Campus Res. Bldg., Roosevelt Dr., Oxford OX3 7DQ, United Kingdom,
jia-ling.ruan@oncology.ox.ac.uk), Richard J. Browning (Inst. of Biomedical
Eng., Univ. of Oxford, Oxford, United Kingdom), Yildiz Yesna, Borivoj
Vojnovic (