Signal Processing (SP) Society
of the
IEEE Long Island Section

The Signal Processing Society (SP-1) caters to hardware, firmware and software engineers involved in signal processing techniques, implementation and apparatus.

For upcoming SP lectures and meetings, please visit the calendar page.
> Calendar

Jessica Donaldson

Vice Chair
If interested, send email to:



> IEEE Global Sig Proc Society


> Analog Dialogue

> Engineer/Scientist DSP Guide

> Xilinx DSP Central

Past Lectures

Analyzing Feedback Systems with Signal-Flow Graphs
Alan Lipsky - Consultant
2016 April 26

Signal-flow graphs facilitate finding transfer functions for linear systems, both mechanical and electrical, and provide an intuitive understanding. Integro-differential equations are solved in the Laplace domain; using Mason's gain formula, the transfer function is found easily from the signal-flow graph. The resulting transfer function is a ratio of polynomials in powers of 's', the complex frequency variable. Signal-flow graphs are less general than state-variable formulations, since they apply only to linear equations and do not consider initial conditions. The state-variable formulation is ideal for computer solution of multiple-input, multiple-output systems; flow graphs, however, yield a better intuitive grasp of the system. Unlike block diagrams, which ignore interactions between the output of one block and the input of the following one, flow graphs are an accurate representation. In the lecture, the rules for signal-flow graphs are introduced and Mason's gain formula is stated. A number of op-amp circuits and simple mechanical systems are solved. After stating Bode's criterion for stability, the graphs are used to illustrate why op-amps oscillate with capacitive loads.
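As a concrete illustration, Mason's gain formula for a single-loop system reduces to the familiar closed-loop expression T = G/(1 + GH). The sketch below evaluates it numerically; the one-pole op-amp model and component values are illustrative, not taken from the lecture.

```python
import math

# Mason's gain formula for a single-loop feedback amplifier: one forward
# path P1 = G(s), one loop with gain L1 = -G(s)*H, so the determinant is
# Delta = 1 - L1 = 1 + G*H and T = P1 * Delta1 / Delta = G / (1 + G*H).
def open_loop_gain(s, a0=1e5, pole_hz=10.0):
    """G(s) = a0 / (1 + s / (2*pi*pole_hz)) -- single dominant pole."""
    return a0 / (1 + s / (2 * math.pi * pole_hz))

def closed_loop(s, feedback=0.01):
    g = open_loop_gain(s)
    return g / (1 + g * feedback)   # Mason: P1 = g, Delta = 1 + g*H, Delta1 = 1

# At low frequency the closed-loop gain approaches 1/H = 100
print(abs(closed_loop(1j * 2 * math.pi * 1.0)))
```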

> Viewgraphs (0.5 MB)

Protection From Lightning
Alan Lipsky - Consultant
2016 February 9

Lightning strokes range from a few hundred amps to more than 500 kA; their energy spectrum extends from DC to above 1 MHz. Damage is caused by the large current flow or the voltage it induces. The effects of lightning can be mitigated with appropriate grounding techniques and surge protection at various points. There are a variety of surge protectors: gas discharge tubes, crowbars (thyristors), metal-oxide varistors (MOVs), and transient voltage suppressors (TVSs). The performance of each is discussed. The first two handle the largest power and are used on the power input to a facility. The latter two are more appropriate for equipment protection: MOVs at the power input and TVSs on boards with especially sensitive components. Because of their capacitance and leakage, neither is appropriate for signal or communication inputs; a diode circuit for protecting these inputs is shown and discussed. To ensure equipment survival, standards organizations have issued standard test wave shapes to simulate lightning-induced surges. Examples of these wave shapes are presented. Equipment should be both designed and tested to survive these tests.
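Standardized surge test waveforms of the kind mentioned above are often approximated by a double-exponential pulse. A minimal sketch, with illustrative time constants loosely in the spirit of an 8/20 µs current wave (not values mandated by any standard):

```python
import math

# Double-exponential model of a surge test waveform; time constants
# below are illustrative, not taken from a standard.
def surge_current(t, i_peak=1.0, tau_rise=2e-6, tau_fall=20e-6):
    """Current at time t, normalized so the waveform peaks at i_peak."""
    raw = math.exp(-t / tau_fall) - math.exp(-t / tau_rise)
    # analytic time of the double exponential's peak
    t_pk = (tau_rise * tau_fall / (tau_fall - tau_rise)) * math.log(tau_fall / tau_rise)
    peak = math.exp(-t_pk / tau_fall) - math.exp(-t_pk / tau_rise)
    return i_peak * raw / peak

# Charge delivered by a 10 kA stroke, via a simple Riemann sum
dt = 1e-8
samples = [surge_current(n * dt, i_peak=10e3) for n in range(20000)]
print(f"delivered charge ~ {sum(samples) * dt:.2f} C")
```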

> Viewgraphs (0.4 MB)

Enhanced Feedback Robustness Via Scaled Dither
Lijian Xu - SUNY Farmingdale
2015 April 23

A new method is introduced to enhance feedback robustness against communication gain uncertainties. The method employs a fundamental property of stochastic differential equations: adding a scaled stochastic dither enlarges the tolerable gain uncertainty well beyond the traditional deterministic optimal gain margin. Algorithms, stability, convergence, and robustness are presented for first-order systems, and extension to higher-dimensional systems is discussed.
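The SDE property at work can be illustrated on a scalar example (this is a sketch of the general idea, not the speaker's algorithm): the SDE dx = a·x dt + σ·x dW is almost surely stable whenever a − σ²/2 < 0, so strong enough multiplicative dither stabilizes a system that is unstable in the deterministic sense.

```python
import math
import random

# Scalar SDE  dx = a*x dt + sigma*x dW, integrated by Euler-Maruyama.
# Almost-sure stability requires a - sigma**2 / 2 < 0, so multiplicative
# noise can stabilize a deterministically unstable system (a > 0).
def simulate(a, sigma, x0=1.0, dt=1e-3, steps=20000, seed=0):
    rng = random.Random(seed)
    x = x0
    for _ in range(steps):
        dW = rng.gauss(0.0, math.sqrt(dt))      # Brownian increment
        x += a * x * dt + sigma * x * dW        # Euler-Maruyama step
    return x

print(abs(simulate(a=0.5, sigma=0.0)))  # no dither: grows like exp(0.5 * 20)
print(abs(simulate(a=0.5, sigma=1.5)))  # a - sigma^2/2 = -0.625: typically decays
```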

> Viewgraphs (3.6 MB)

Evolution of Digital Verification
Walter Gude - Mentor Graphics
2013 October 8

Verification of digital systems used to be a fairly straightforward process: create a set of test vectors for each feature, apply the vectors, and track down any bugs. The relentless march of Moore's law has caused these traditional methods to break down, first in the ASIC world and now increasingly in the FPGA world, and system test budgets have risen dramatically as a result. Up-front, tool-based verification is widely considered the most viable way to balance the budget, as statistics show that most functional bugs can be caught by front-end verification before physical unit test and system test.

> Viewgraphs (7.0 MB)

Reversing Time: A Way to Unravel Distorted Communications?
James V. Candy - Lawrence Livermore National Laboratory
2012 October 10

Communicating in a complex environment is a daunting problem. Such an environment can be a hostile urban setting populated with a multitude of buildings and vehicles, the sheer number of competing sound sources common on a stock-exchange floor, or military operations in terrain with topographic features such as hills, valleys, and mountains. These inherent obstructions cause transmitted sounds or signals to bounce (reflect), bend (refract), and spread (disperse) in a multitude of directions, distorting both their shape and their arrival times at the targeted receiver locations.

Time-reversal is a simple notion that we have all observed (in a sense) when viewing a movie of the demolition of a building: merely running the movie in reverse, or equivalently backwards in time, allows us to reconstruct the building at least visually, even though it cannot be reconstructed physically. Using this same idea, time-reversal can be applied to "reconstruct" communication signals by retracing all of the multiple paths that distorted the transmitted signals in the first place. To separate or decompose the individual components of the message, the receiver must use its knowledge of the medium not only to separate each path but also to add the paths together coherently, extracting the message with little or no distortion while increasing its signal level.
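The path-retracing idea can be sketched in one dimension (hypothetical channel taps, not from the talk): convolving the received signal with the time-reversed channel response makes the effective channel equal to the channel's autocorrelation, which refocuses the smeared energy into a sharp peak.

```python
import numpy as np

# One-dimensional time-reversal sketch with hypothetical multipath taps.
channel = np.zeros(50)
channel[[0, 12, 31, 47]] = [1.0, 0.6, -0.4, 0.3]  # sparse multipath taps

pulse = np.ones(4)                                # transmitted pulse
received = np.convolve(pulse, channel)            # smeared across all paths
refocused = np.convolve(received, channel[::-1])  # time-reversal step

# The refocused peak (sum of squared tap gains = 1.61) exceeds any
# single smeared copy (at most 1.0) in the raw received waveform.
print(np.max(np.abs(received)), np.max(np.abs(refocused)))
```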

> Viewgraphs (3.0 MB)

Hardware Verification for Avionics & Safety Critical Design
Modesto Casas - Aldec
2012 May 23

The common problems encountered during hardware testing of complex FPGAs in safety-critical designs are explored, along with the time savings attainable by reusing the simulation test bench as test vectors to perform in-hardware verification at speed. A set of tasks, and the time required, for in-hardware FPGA testing under DO-254, Design Assurance Guidance for Airborne Electronic Hardware, is presented, and two methods of verification are contrasted.

Traditionally, hardware verification is performed at the board level, where the FPGA under test is the board's primary component. The FPGA is interconnected with other components on the board, and in the absence of test headers, visibility and controllability at the FPGA pin level are limited. At times the board may contain multiple FPGAs, further complicating the verification problem. Verification at the board level without first stabilizing each FPGA individually can lead to many problems and longer project delays. The methodology discussed is based on a bit-accurate in-hardware verification platform that can verify and trace the same FPGA-level requirements from RTL to the target device at full speed, while saving time and resources.

> Viewgraphs (4.0 MB)
> Essay (0.2 MB)

Target Detection Using Optical Joint Transform Correlation
Professor M. Nazrul Islam - SUNY Farmingdale
2011 November 30

Automatic identification of a specific object or pattern in an arbitrary input scene is an important part of any authorization, monitoring, and security system. Pattern recognition is always a challenging problem because the targets are often non-cooperative, and the scene may contain noise and distortions due to variable environmental conditions while the image is recorded. Additional requirements for an efficient pattern recognition system are that the architecture be simple, so that it can be easily implemented and user friendly, and that it perform fast enough to make an instantaneous decision on the presence of a target in the input scene.

The optical joint transform correlation (JTC) technique has been found to be a versatile tool for real-time pattern recognition applications; it employs optical devices, such as lenses and spatial light modulators, for parallel processing of the given images. The JTC scheme provides a number of advantages over other correlation techniques, such as the VanderLugt filter: it allows real-time updating of the reference image, permits parallel Fourier transformation of the reference image and input scene, operates at video frame rates, and eliminates the precise positioning requirement of a complex matched filter in the Fourier plane. Several modifications have been proposed to improve the correlation performance of the JTC technique, namely binary JTC, phase-only JTC, fringe-adjusted JTC, and shifted phase-encoded fringe-adjusted JTC. This presentation reviews the features, problems, and prospects of optical pattern recognition techniques.
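A one-dimensional digital sketch of the basic (unmodified) JTC conveys the idea; the patterns and positions below are illustrative. Reference and scene share one input plane, the joint power spectrum is formed, and its inverse transform contains cross-correlation peaks when the target is present.

```python
import numpy as np

# Basic joint transform correlator in 1-D (illustrative, not the
# fringe-adjusted variants named above).
N = 256
ref = np.zeros(N)
ref[60:70] = 1.0        # reference pattern
scene = np.zeros(N)
scene[180:190] = 1.0    # input scene: same pattern, shifted (target present)

joint = ref + scene                       # joint input plane
jps = np.abs(np.fft.fft(joint)) ** 2      # joint power spectrum (intensity only)
corr = np.real(np.fft.ifft(jps))          # correlation plane

# Skipping the DC/autocorrelation term around lag 0, the cross-correlation
# peak sits at the reference-to-scene separation of 120 samples.
print(int(np.argmax(corr[30:N // 2]) + 30))
```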

> Viewgraphs (3.8 MB)

Tapping the TeraFLOP Potential of GP-GPU
Brooks Moses - Mentor Graphics
Gil Ettinger - Sensor Exploitation
Eran Strod - Curtiss-Wright
2011 June 15

High performance image and signal processing applications are significantly benefiting from GP-GPU technology to extract meaningful information from large volumes of rich data sources. This seminar provides an overview of GP-GPU technology and how it can be expected to perform in image and signal processing applications such as target tracking. In addition, we discuss how this technology, which was developed for desktop computing, can be adapted to rugged environmental conditions that are typical of military and aerospace applications.

> Viewgraphs (1.9 MB)

Digital Signal Processing For Radar Applications
Michael Parker - Altera
Benjamin Esposito - Altera
2011 March 15

This seminar features a space-time adaptive processing (STAP) pulsed-Doppler radar simulation with a back-end FPGA implementation, including a model of the radar system environment, an optimized implementation of the STAP back-end processing, and the FPGA implementation itself. Solutions are presented to challenges often faced by radar system and implementation engineers. The methodology and tools presented model and simulate systems and algorithms at a high level of abstraction and allow rapid exploration of design options ("what-if" scenarios), while efficiently and optimally implementing designs in FPGAs and ASICs.

> Viewgraphs (0.4 MB) Part I
> Viewgraphs (3.7 MB) Part II

Mapping DSP Algorithms Into FPGAs
Sean Gallagher - Xilinx
2010 November 2

FPGAs have been used to craft massively parallel custom computing machines since the early 1990s, and since 2002 they have included embedded multipliers and adders. The next generation of the largest FPGAs from Xilinx will have an equivalent gate count in the millions and close to 4,000 embedded multipliers and adders. The sheer quantity of multipliers and adders allows the designer to build many high-throughput DSP functions such as digital down-conversion circuits, FFTs, and channelizers. However, for low-throughput requirements it is also possible to use a smaller FPGA device and over-clock (time-share) the FPGA resources so as to require fewer of them. This presentation explores implementation options for efficiently building DSP algorithms such as parallel FFTs, channelizers, and filters.
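The over-clocking trade-off reduces to simple arithmetic: when the fabric clock runs far faster than the sample rate, one physical multiplier can serve many filter taps. A back-of-envelope sketch with illustrative numbers (not Xilinx data):

```python
import math

# A fully parallel N-tap FIR needs one multiplier per tap; when the
# sample rate is far below the fabric clock, one multiplier can be
# time-shared across many taps.
def multipliers_needed(taps, sample_rate_hz, clock_rate_hz):
    """Each physical multiplier supplies clock/sample MACs per input sample."""
    macs_per_multiplier = int(clock_rate_hz // sample_rate_hz)
    return math.ceil(taps / max(1, macs_per_multiplier))

print(multipliers_needed(128, 400e6, 400e6))  # full rate: 128 multipliers
print(multipliers_needed(128, 10e6, 400e6))   # 40x over-clock: 4 multipliers
```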

> Viewgraphs (1.0 MB)

Extending Laplace & Fourier Transforms: A Personal Perspective
Dr. Shervin Erfani - University of Windsor
2007 May 15

The classical theory of time-varying systems is based on the solutions of linear ordinary differential equations with varying coefficients. The varying coefficients are usually functions of an independent variable, the so-called time variable. The time variable is assumed to be real for physical systems. This assumption facilitates the analysis and synthesis of fixed (so-called time-invariant) systems by allowing Laplace transform techniques to be used. However, the assumption of "real time" is shown to be inadequate for the realization of time-varying systems in the transformed domain.

The discussion in this presentation is based on a different point of view. Specifically, the approach consists essentially in investigating the possibility of system realization through an examination of the behavior of systems that are functions of a complex time variable. This approach allows, in effect, a two-dimensional Laplace transform technique to be used for time-varying systems in the same manner that the conventional frequency-domain technique is used with fixed systems. The challenge is the physical interpretation of a "complex time variable" versus "real time," and its implications for the transformed variable, the so-called "frequency variable."
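For reference (this definition is standard, not taken from the talk), the two-dimensional Laplace transform alluded to has the form:

```latex
% Double (two-dimensional) Laplace transform of f(t_1, t_2):
F(s_1, s_2) = \int_{0}^{\infty}\!\!\int_{0}^{\infty}
              f(t_1, t_2)\, e^{-s_1 t_1 - s_2 t_2}\, dt_1\, dt_2
```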

> Viewgraphs (0.3 MB)

Array Processing Technique for Anti-Jam GPS
Moeness Amin – Villanova University
2005 March 10

Despite its ever-increasing civilian applications, the main drawback of GPS remains its high sensitivity to multipath and interference. The effect of interference on the GPS receiver is to reduce the signal-to-noise ratio (SNR) of the GPS signal to the point where the receiver is unable to obtain measurements from the GPS satellite. The spread-spectrum (SS) scheme that underlies the GPS signal structure provides a certain degree of protection against interference. However, when the interferer power becomes much stronger than the signal power, the spreading gain alone is insufficient to yield any meaningful information.

This talk discusses a new technique for anti-jam Global Positioning System (GPS) receivers. A novel multi-antenna anti-jam GPS receiver is introduced that relies on the replication of the coarse/acquisition (C/A) code within a GPS symbol. The proposed receiver exploits this inherent GPS self-coherence property to excise narrowband and broadband interferers whose temporal structures differ from that of the GPS signals.
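The spreading-gain baseline that the multi-antenna method improves upon can be sketched with a toy despreader (the PN code and jammer parameters are illustrative, and this is not the speaker's receiver): correlating with the known C/A-like code over one data bit raises the signal coherently while a far stronger narrowband jammer averages toward zero.

```python
import numpy as np

# Toy GPS despreading sketch: code length 1023 chips, 20 code
# repetitions per data bit, a CW jammer ~30x the signal amplitude.
rng = np.random.default_rng(0)
N = 1023                                  # C/A code length in chips
reps = 20                                 # code repetitions per data bit
code = rng.choice([-1.0, 1.0], size=N)    # stand-in for a Gold code

data_bit = 1.0
chips = np.tile(code, reps)
t = np.arange(N * reps)
jammer = 30.0 * np.cos(2 * np.pi * 0.07 * t)
received = data_bit * chips + jammer

despread = np.mean(received * chips)      # chip-by-chip correlation
print(despread)                           # close to data_bit despite the jammer
```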

> Viewgraphs (1.0 MB)

Evolution of 3G Wireless Systems
Ariela Zeira - InterDigital
2003 May 14

Third-generation wireless communication systems were introduced to extend the data capabilities of second-generation systems by providing quality-of-service (QoS) management and enabling the high data rates required for high-speed web access and the transmission and reception of high-quality images and video. To satisfy predicted future demand for even higher-rate data services, additional enhancements are being incorporated into the different 3G air-interface standards. The first step in evolving the 3G standards is enabling high-speed packet access in the downlink, or forward link, i.e. when the terminal is receiving information from the network. The higher data rates are achieved via new features such as adaptive modulation and coding, hybrid ARQ, and fast scheduling. Other enhancements being considered are extending high-speed packet access to the uplink, or reverse link (when the terminal is transmitting information to the network), and smart antenna techniques. This talk reviews the new features recently introduced or under consideration for 3G air interfaces and discusses their impact on the performance of the evolving standards.
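Adaptive modulation and coding amounts to a lookup keyed on reported channel quality. A sketch with hypothetical thresholds and rates (not from any 3GPP table):

```python
# The scheduler picks the densest modulation/code-rate pair the reported
# channel quality can sustain, deferring transmission on a very poor channel.
def select_mcs(snr_db):
    table = [              # (min SNR in dB, modulation, code rate) -- hypothetical
        (17.0, "16QAM", 3 / 4),
        (12.0, "16QAM", 1 / 2),
        (7.0, "QPSK", 3 / 4),
        (0.0, "QPSK", 1 / 4),
    ]
    for min_snr, modulation, rate in table:
        if snr_db >= min_snr:
            return modulation, rate
    return None, 0.0       # channel too poor; defer scheduling

print(select_mcs(18.5))    # → ('16QAM', 0.75)
print(select_mcs(5.0))     # → ('QPSK', 0.25)
```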

> Viewgraphs (1.0 MB)



