End-to-End Network Slicing in 5G Networks with Controlled Slice Redistributions
Samaresh Bera and Neelesh B. Mehta
Department of Electrical Communication Engineering
Network slicing creates multiple logical networks and enables 5G to meet the diverse performance
requirements of emerging services and applications. We model the RAN, edge, and core networks
and study end-to-end network slicing in 5G. We formulate the admission of new slice requests
as a constrained optimization problem that seeks to maximize the total reward to the network operator
while accounting for the impact of slice redistributions in the network. We propose a multi-phase
polynomial-time greedy approach to solve this NP-hard problem. It employs two comprehensive
weighted cost functions for request selection and resource allocation that take into account the
slice-specific requirements and multi-dimensional resources at the RAN, edge, and core networks.
We use a Bayesian optimization technique to automatically tune the cost functions as a function of
the network topology, networking resources, request arrival rate, and slice-specific requirements.
Our extensive numerical results show that the proposed approach achieves a total reward that is
competitive with the optimal solution and higher than that of the benchmark schemes at realistic,
higher request arrival rates, while requiring fewer slice redistributions.
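To make the flavor of such a greedy admission phase concrete, here is a purely illustrative sketch; the cost weights, resource dimensions, and data layout below are hypothetical and not taken from the paper:

```python
# Illustrative sketch only: a greedy admission loop that ranks slice requests
# by a weighted cost combining reward and multi-dimensional resource usage.
# The weights (w_cpu, w_bw) and the request fields are hypothetical.

def admission_cost(req, w_cpu=0.5, w_bw=0.5):
    """Lower cost = more attractive request: resource footprint per unit reward."""
    footprint = w_cpu * req["cpu"] + w_bw * req["bandwidth"]
    return footprint / req["reward"]

def greedy_admit(requests, capacity):
    """Admit requests in increasing cost order while capacity lasts."""
    admitted, cpu_left, bw_left = [], capacity["cpu"], capacity["bandwidth"]
    for req in sorted(requests, key=admission_cost):
        if req["cpu"] <= cpu_left and req["bandwidth"] <= bw_left:
            admitted.append(req["id"])
            cpu_left -= req["cpu"]
            bw_left -= req["bandwidth"]
    return admitted

requests = [
    {"id": "eMBB-1", "cpu": 4, "bandwidth": 20, "reward": 10},
    {"id": "URLLC-1", "cpu": 2, "bandwidth": 5, "reward": 8},
    {"id": "mMTC-1", "cpu": 1, "bandwidth": 2, "reward": 3},
]
print(greedy_admit(requests, {"cpu": 5, "bandwidth": 25}))
```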
Random Access Schemes for Massive Machine-Type Communications
Chirag Ramesh and Chandra R. Murthy
Robert Bosch Centre for Cyber-Physical Systems
Massive machine-type communications (mMTC) is a 5G and beyond application expected to serve
millions of internet-of-things devices within a small region. These devices transmit short packets
and are sporadically active. We need to use grant-free random access (RA) protocols to serve such
devices. In this work, we study the coded slotted ALOHA (CSA) family of RA protocols in which
users transmit several encoded replicas of their packets across different resource elements in a
frame. We first leverage sparse signal recovery techniques to propose a user activity detection (UAD)
algorithm to detect the subset of active users in a frame in CSA. We then perform data decoding and
analyse the impact of UAD errors, i.e., false alarms and missed detections, on the performance of the
system. This analysis accounts for practical non-idealities such as UAD errors, channel estimation
errors, and pilot contamination. Finally, we provide several insights into the performance of RA for
mMTC: Which UAD error is more harmful? What limits the performance of the system? How do
we overcome these limits?
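As a toy illustration of sparse-recovery-based activity detection (the signature model and greedy recovery below are assumptions for the sketch, not the paper's algorithm):

```python
# Toy sketch: user activity detection as a sparse recovery problem. Each user
# has a known signature (pilot) column; the received signal is a noisy
# superposition of the active users' signatures.
import numpy as np

rng = np.random.default_rng(0)
n_users, sig_len, n_active = 100, 40, 5

A = rng.standard_normal((sig_len, n_users)) / np.sqrt(sig_len)  # signatures
x = np.zeros(n_users)
active = rng.choice(n_users, n_active, replace=False)
x[active] = 1.0                                                 # sparse activity
y = A @ x + 0.05 * rng.standard_normal(sig_len)                 # received signal

# Orthogonal matching pursuit: greedily pick the signature most correlated
# with the residual; stop after n_active picks (assumed known here).
residual, support = y.copy(), []
for _ in range(n_active):
    support.append(int(np.argmax(np.abs(A.T @ residual))))
    coeffs, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
    residual = y - A[:, support] @ coeffs

print("true:", sorted(active.tolist()), "detected:", sorted(support))
```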
Metasurfaces for Microwave Applications
Aritra Roy
Department of Electrical Communication Engineering
A metasurface offers exciting surface properties engineered through its geometric profile and finds
several applications in microwaves and optics. Its surface impedance can be controlled to tailor the
amplitude and phase of the impinging waves. Often realized as a periodic structure, its operation
depends on the periodicity and resonant characteristics of the underlying unit-cell geometry. These
surfaces are used as microwave reflectors, absorbers, or phase shifters and are employed in antennas
to miniaturize the geometry, reconfigure the radiation properties, or generate multiple main lobes.
In the first part of this work, a digitally reconfigurable metasurface is designed, where the transmission
through the unit cell is switched ON or OFF, thereby modifying the overall radiation pattern of an
antenna placed behind this planar array. The metasurface is designed with a unit cell consisting
of a meandered line and a PIN diode. Our experimental studies illustrate, for the first time, that
the presence of scatterers enhances the performance of media-based modulation (MBM) using this
scheme, and they are expected to provide impetus to building practical communication systems using
this approach.
In the second part of this work, a wideband metasurface consisting of multiple metallic patches
with surface mounted lumped resistors to absorb the EM waves over 1-6 GHz is designed. This
metasurface is used with a compact spiral antenna to improve its low-frequency response. A
conventional spiral antenna is a wideband circularly polarized bidirectional radiator used for
wideband communication, radar jamming and electronic warfare. When placed inside a compact
metallic cavity for mounting on a ship or aircraft, its low-frequency radiation performance degrades.
It has been shown that the use of the metasurface improves the antenna response, thereby making
it suitable for use over a frequency band of 1-18 GHz. Numerical optimization is carried out to
design the metasurface geometry as well as to optimize the spiral antenna performance. A prototype
antenna with the metasurface is fabricated and characterized inside an anechoic chamber to validate
its performance.
Graph Neural Networks with Parallel Neighborhood Aggregations
Siddhant Rahul Doshi
Department of Electrical Communication Engineering
Graph neural networks (GNNs) have become very popular for processing and analyzing graph-structured
data in the last few years. GNN architectures learn low-dimensional graph-level or node-level
embeddings useful for several downstream machine learning tasks by using message passing as their
basic building block that aggregates information from neighborhoods. Existing GNN architectures
can be categorized based on how they perform this aggregation task: 1) GNNs that learn the node
embeddings by iteratively combining information from each node's neighborhood through a cascade
of several GNN blocks; we refer to such GNN architectures with sequential aggregation as SA-GNNs;
and 2) GNN architectures that simultaneously precompute the node features from different
neighborhood depths using a bank of neighborhood aggregation graph operators; we refer to such
GNN architectures with parallel aggregation as PA-GNNs. Due to the precomputations, PA-GNNs have a natural advantage
of reduced training and inference time.
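A minimal sketch of the parallel-aggregation idea, assuming a simple symmetric normalization and plain matrix powers (the specific operator bank in the paper may differ):

```python
# Minimal sketch (assumed details): PA-GNN-style parallel aggregation that
# precomputes features from several neighborhood depths before training.
import numpy as np

def normalized_adjacency(A):
    """Symmetric degree normalization D^{-1/2} A D^{-1/2}."""
    d = A.sum(axis=1)
    d_inv_sqrt = np.where(d > 0, 1.0 / np.sqrt(d), 0.0)
    return A * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def precompute_bank(A, X, depth=3):
    """Return [X, SX, S^2 X, ...]: one aggregation per neighborhood depth.
    These are computed once, so training only sees fixed feature blocks."""
    S, feats, cur = normalized_adjacency(A), [X], X
    for _ in range(depth):
        cur = S @ cur
        feats.append(cur)
    return np.concatenate(feats, axis=1)  # fed to any downstream classifier

A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)  # toy graph
X = np.eye(3)                                                  # toy node features
print(precompute_bank(A, X, depth=2).shape)  # (3, 9)
```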
We provide theoretical conditions under which a generic PA-GNN model is provably as powerful
as the popular Weisfeiler-Lehman (WL) graph isomorphism test in discriminating non-isomorphic
graphs. Although PA-GNNs do not have an apparent relationship with the WL test, we show that
the graph embeddings obtained from these two methods are injectively related. We then propose a
specialized PA-GNN model, which obeys the developed conditions. We demonstrate via numerical
experiments on several graph classification benchmark datasets that the developed model achieves
state-of-the-art performance on many diverse real-world datasets while maintaining the discriminative
power of the WL test and the computational advantage of preprocessing graphs before the training
process.
Fast Algorithms for Max Cut on Geometric Intersection Graphs
Utkarsh Joshi
Department of Computer Science and Automation
In the max cut problem, given a
graph, the goal is to partition the vertex set into two disjoint sets such that the number of edges
having their endpoints in different sets is maximized. Max cut is an NP-hard problem. The seminal
work by Goemans and Williamson gave an approximation algorithm for the max cut problem having
an approximation ratio of 0.878.
In this work, we design fast algorithms for max cut on geometric intersection graphs. In a
geometric intersection graph, given a collection of n geometric objects as the input, each object
corresponds to a vertex and there is an edge between two vertices if and only if the corresponding
objects intersect. Since we are dealing with geometric intersection graphs, which have more
structure than general graphs, the following questions are of interest: Are there special cases of
geometric intersection graphs for which max cut can be solved exactly in polynomial time? It can be
shown that a uniformly random cut gives a 0.5-approximation for max cut in expectation. Is it possible to design linear
or near-linear time algorithms (in terms of n) and beat the 0.5 approximation barrier? The edge-set
of the graph is not explicitly given as input; therefore, designing linear time algorithms is of interest.
Can an approximation factor better than 0.878 be obtained for the geometric intersection graphs?
We obtain the following results. First, an exact and fast algorithm for laminar geometric intersection
graphs: our algorithm uses a greedy strategy, and a fast implementation is obtained by combining
the properties of laminar objects with range searching data structures. Second, an O(n log n) time
algorithm with an approximation factor of 2/3 for unit interval intersection graphs: we decompose
the unit intervals into several cliques and, based on the number of edges between "adjacent" cliques,
choose an appropriate partitioning strategy. Third, an O(n log n) time algorithm with an approximation
factor of 7/13 for unit square intersection graphs: we use the "largest clique" in the graph to beat the
0.5 approximation barrier.
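To make the 0.5-approximation baseline concrete, here is a toy sketch (not from the paper) of a uniformly random cut on a small interval intersection graph:

```python
# Toy illustration: a uniformly random cut keeps each edge with probability
# 1/2, so in expectation it cuts half the edges, which already gives a
# 0.5-approximation since no cut can exceed the total number of edges.
import random

def random_cut_value(edges, n, seed=0):
    random.seed(seed)
    side = [random.randint(0, 1) for _ in range(n)]
    return sum(side[u] != side[v] for u, v in edges)

# Unit intervals on a line; vertices are intervals, edges are overlaps.
intervals = [(0.5 * i, 0.5 * i + 1.2) for i in range(8)]
edges = [(i, j) for i in range(8) for j in range(i + 1, 8)
         if intervals[i][0] <= intervals[j][1] and intervals[j][0] <= intervals[i][1]]
print("edges:", len(edges), "random cut value:", random_cut_value(edges, 8))
```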
Equivalence Test for Read-Once Arithmetic Formulas
Nikhil Gupta
Department of Computer Science and Automation
A read-once arithmetic formula (ROF) C over a field F is a tree, where a leaf node is labelled by
either a distinct variable or a constant from F and a non-leaf node is labelled by either + or ×. Every
node of C computes a polynomial naturally - a leaf node computes its label and a + node (or a ×
node) computes the sum (respectively, the product) of the polynomials computed by its children.
The equivalence testing problem for ROFs is as follows: given black-box access to a polynomial
f ∈ F[x1,..., xn] of degree at most n, decide if there exists an ROF C, an invertible matrix
A ∈ F^{n×n}, and a vector b ∈ F^n such that f = C(Ax + b), where x = (x1, x2, ..., xn)^T. Further, if the answer is
yes, then output an ROF C, an invertible matrix A, and a vector b such that f = C(Ax + b). In this
work, we give a randomized polynomial-time algorithm (with oracle access to the quadratic form
equivalence test over F) for the equivalence testing problem for ROFs.
At the heart of this algorithm lies a detailed analysis of the essential variables of the Hessian
determinant of an ROF. This analysis becomes technically challenging due to the arbitrary structure
of the underlying tree of an ROF. We overcome this challenge and use the knowledge of the essential
variables to design an efficient randomized equivalence test for ROFs.
This is a joint work with Chandan Saha and Bhargav Thankey.
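As a minimal illustration of the objects involved (with an assumed tree encoding; sympy is used only for convenience), the polynomial computed by an ROF can be evaluated recursively:

```python
# Minimal sketch (assumed node representation): evaluating the polynomial
# computed by a read-once arithmetic formula given as a tree whose leaves are
# distinct variables or constants and whose internal nodes are + or *.
import sympy as sp

def rof_poly(node, symbols):
    """node is ('var', i), ('const', c), ('+', children) or ('*', children)."""
    kind, payload = node
    if kind == "var":
        return symbols[payload]
    if kind == "const":
        return sp.Integer(payload)
    children = [rof_poly(c, symbols) for c in payload]
    return sp.Add(*children) if kind == "+" else sp.Mul(*children)

x = sp.symbols("x0:4")
# (x0 + 2*x1) * (x2 + x3): each variable appears exactly once (read-once).
tree = ("*", [("+", [("var", 0), ("*", [("const", 2), ("var", 1)])]),
              ("+", [("var", 2), ("var", 3)])])
print(sp.expand(rof_poly(tree, x)))
```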
On Slowly-varying Non-stationary Bandits
Ramakrishnan Krishnamurthy, Aditya Gopalan
Department of Computer Science and Automation
We consider minimisation of dynamic regret in non-stationary bandits with a slowly varying property.
Namely, we assume that arms’ rewards are stochastic and independent over time, but that the absolute
difference between the expected rewards of any arm at any two consecutive time-steps is at most
a drift limit δ > 0. For this setting, which has received little attention in the past, we give a
new algorithm that naturally extends the well-known Successive Elimination algorithm to the
non-stationary bandit setting. We establish the first instance-dependent regret upper bound for slowly
varying non-stationary bandits. The analysis in turn relies on a novel characterization of the instance
as a detectable gap profile that depends on the expected arm reward differences. We also provide
the first minimax regret lower bound for this problem, enabling us to show that our algorithm is
essentially minimax optimal. Moreover, the lower bound we obtain matches that of the more general
total variation-budgeted bandits problem, establishing that the seemingly easier former problem
is at least as hard as the latter in the minimax sense. We complement our
theoretical results with experimental illustrations.
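For intuition, a much-simplified sketch of Successive Elimination adapted to slow drift (the window size, confidence radius, and drift bound below are assumptions; this is not the paper's algorithm or analysis):

```python
# Toy sketch: Successive Elimination using only a recent window of samples per
# arm, so the drift limit bounds how stale the windowed means can be.
import numpy as np
from collections import deque

def drifting_means(t):
    return np.array([0.5 + 0.3 * np.sin(1e-3 * t), 0.6, 0.4 + 5e-5 * t])

rng = np.random.default_rng(1)
K, horizon, window, drift = 3, 5000, 500, 5e-5
samples = [deque(maxlen=window) for _ in range(K)]
active = set(range(K))

for t in range(horizon):
    arm = sorted(active)[t % len(active)]        # round-robin over active arms
    p = float(np.clip(drifting_means(t)[arm], 0.0, 1.0))
    samples[arm].append(rng.binomial(1, p))
    if all(len(samples[k]) >= 100 for k in active):
        means = {k: np.mean(samples[k]) for k in active}
        radius = np.sqrt(np.log(horizon) / min(len(samples[k]) for k in active))
        best = max(means.values())
        # Eliminate an arm only if its gap exceeds what noise plus the
        # worst-case drift over one window could explain.
        active = {k for k in active if best - means[k] <= 2 * radius + window * drift}

print("arms surviving at the horizon:", sorted(active))
```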
Tight Approximation Algorithms for Two-dimensional Guillotine Strip Packing
Aditya Lonkar
Department of Computer Science and Automation
In the STRIP PACKING problem (SP), we are given a vertical half-strip [0,W]×[0,∞) and a set of
n axis-aligned rectangles of width at most W. The goal is to find a non-overlapping packing of
all rectangles into the strip such that the height of the packing is minimized. A well-studied and
frequently used practical constraint is to allow only those packings that are guillotine separable, i.e.,
every rectangle in the packing can be obtained by recursively applying a sequence of edge-to-edge
axis-parallel cuts (guillotine cuts) that do not intersect any item of the solution. In this paper, we
study approximation algorithms for the GUILLOTINE STRIP PACKING problem (GSP), i.e., the
STRIP PACKING problem where we require additionally that the packing needs to be guillotine
separable. This problem generalizes the classical BIN PACKING problem and also makespan
minimization on identical machines, and thus it is already strongly NP-hard. Moreover, due
to a reduction from the PARTITION problem, it is NP-hard to obtain a polynomial-time (3/2 −
ε)-approximation algorithm for GSP for any ε > 0 (exactly as STRIP PACKING). We provide a
matching polynomial-time (3/2+ε)-approximation algorithm for GSP. Furthermore, we present a
pseudo-polynomial time (1+ε)-approximation algorithm for GSP. This is surprising as it is NP-hard
to obtain a (5/4−ε)-approximation algorithm for (general) STRIP PACKING in pseudo-polynomial
time. Thus, our results essentially settle the approximability of GSP for both the polynomial and the
pseudo-polynomial settings.
A PTAS for the Horizontal Rectangle Stabbing Problem
Arindam Khan, Aditya Subramanian and Andreas Wiese
Department of Computer Science and Automation
We study rectangle stabbing problems in which we are given n axis-aligned rectangles in the plane
that we want to stab, i.e., we want to select line segments such that for each given rectangle there is
a line segment that intersects two opposite edges of it. In the horizontal rectangle stabbing problem
(STABBING), the goal is to find a set of horizontal line segments of minimum total length such that
all rectangles are stabbed. In general rectangle stabbing problem, also known as horizontal-vertical
stabbing problem (HV-STABBING), the goal is to find a set of rectilinear (i.e., either vertical or
horizontal) line segments of minimum total length such that all rectangles are stabbed. Both variants
are NP-hard. Chan, van Dijk, Fleszar, Spoerhase, and Wolff initiated the study of these problems by
providing constant approximation algorithms. Recently, Eisenbrand, Gallato, Svensson, and Venzin
have presented a QPTAS and a polynomial-time 8-approximation algorithm for STABBING but it
was open whether the problem admits a PTAS.
In this work, we obtain a PTAS for STABBING, settling this question. For HV-STABBING, we
obtain a (2+ε)-approximation. We also obtain PTASes for special cases of HV-STABBING: (i)
when all rectangles are squares, (ii) when each rectangle’s width is at most its height, and (iii) when
all rectangles are δ-large, i.e., have at least one edge whose length is at least δ, while all edge
lengths are at most 1. Our result also implies improved approximations for other problems such as
generalized minimum Manhattan network.
Near-optimal Algorithm for Stochastic Online Bin Packing
K. V. N. Sreenivas
Department of Computer Science and Automation
We study the online bin packing problem under the i.i.d. model. In the bin packing problem, we are
given n items with sizes in (0,1] and the goal is to pack them into the minimum number of unit-sized
bins. In the i.i.d. model, the item sizes are sampled independently and identically from a distribution
in (0,1]. Both the distribution and the total number of items are unknown. The items arrive one
by one; their sizes are revealed upon arrival, and each item must be packed immediately and
irrevocably into a bin of size 1. We provide a simple meta-algorithm that takes an offline α-asymptotic
approximation algorithm and provides a polynomial-time (α +ε)-competitive algorithm for online
bin packing under the i.i.d. model, where ε > 0 is a small constant. Using the AFPTAS for offline
bin packing, we thus obtain a linear-time (1 + ε)-competitive algorithm for online bin packing
under the i.i.d. model, settling the problem.
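For context, the classic First Fit heuristic below is a minimal baseline online packer (not the paper's meta-algorithm); the meta-algorithm improves on such distribution-agnostic heuristics by exploiting the i.i.d. structure through an offline approximation subroutine:

```python
# Classic First Fit, shown only as a baseline online packer for context; the
# paper's meta-algorithm instead exploits the i.i.d. item-size structure via an
# offline approximation subroutine to reach (alpha + eps)-competitiveness.
def first_fit(stream):
    bins = []  # each bin stores the total size packed so far
    for size in stream:
        for i, load in enumerate(bins):
            if load + size <= 1.0:
                bins[i] += size
                break
        else:
            bins.append(size)  # open a new unit-sized bin
    return bins

import random
random.seed(0)
sizes = [random.uniform(0.05, 0.6) for _ in range(1000)]  # i.i.d. item sizes
print("bins used:", len(first_fit(sizes)))
```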
Algorithmic Problems on Vertex Deletion and Graph Coloring
Raji R. Pillai and Sunil Chandran L
Department of Computer Science and Automation
Vertex deletion problems form a core topic in algorithmic graph theory with many applications.
Typically, the objective of a vertex deletion problem is to delete the minimum number of vertices
so that the remaining graph satisfies some property. Many classic optimization problems, such as
MAXIMUM CLIQUE, MAXIMUM INDEPENDENT SET, and VERTEX COVER, are examples of vertex
deletion problems. We study the popular vertex deletion problem CLUSTER VERTEX DELETION
and its generalisation s-CLUB CLUSTER VERTEX DELETION, both being important in the context
of graph-based data clustering. A cluster is often viewed as a dense subgraph (often a clique) and
partitioning a graph into such clusters is one of the main objectives of graph-based data clustering.
However, to account for errors introduced during the construction of the network, the clusters of
certain networks may only be retrievable after a small number of modifications, such as deleting
some vertices.
Given a graph G, the objective of CLUSTER VERTEX DELETION (CVD) is to delete a
minimum number of vertices so that the remaining graph is a set of disjoint cliques. We focus
on the polynomial-time solvability of CVD on special classes of graphs. Chordal graphs (graphs
with no induced cycle of length greater than 3) are a well-studied class of graphs with many
applications in algorithmic graph theory. Though polynomial-time algorithms are known for certain
subclasses of chordal graphs, such as interval graphs, block graphs, and split graphs, the computational
complexity of CVD on chordal graphs remains unknown. We study CVD on well-partitioned chordal
graphs, another subclass of chordal graphs that generalizes split graphs, which was introduced as a
tool for narrowing down complexity gaps for problems that are hard on chordal graphs and easy on
split graphs.
In many applications, equating clusters with cliques is too restrictive. For example, in
protein networks, where proteins are the vertices and edges indicate interactions between
proteins, a more appropriate notion of a cluster may have diameter greater than 1. Therefore,
researchers have defined the notion of s-clubs. An s-club is a graph with diameter at most s. The
objective of s-CLUB CLUSTER VERTEX DELETION (s-CVD) is to delete the minimum number of
vertices from the input graph so that every connected component of the resultant graph is an s-club.
We propose a polynomial-time algorithm for s-CVD on trapezoid graphs, a class of intersection
graphs. To the best of our knowledge, our result provides the first polynomial-time algorithm for
CLUSTER VERTEX DELETION on trapezoid graphs. We also provide a faster algorithm for s-CVD
on interval graphs. For each s ≥ 1, we give an O(n(n+m))-time algorithm for s-CVD on interval
graphs with n vertices and m edges. We also prove some hardness results for s-CVD on planar
bipartite graphs, split graphs and well-partitioned chordal graphs for each s ≥ 2.
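As a minimal illustration of the target condition (with an assumed adjacency-list input; this is a verifier, not the paper's algorithm), one can check whether a deletion set turns every remaining component into an s-club:

```python
# Minimal sketch: verify that after deleting a vertex set, every connected
# component of the remaining graph is an s-club, i.e., has diameter at most s.
from collections import deque

def component_of(adj, src, alive):
    comp, q = {src}, deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v in alive and v not in comp:
                comp.add(v)
                q.append(v)
    return comp

def eccentricity(adj, src, comp):
    dist, q = {src: 0}, deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v in comp and v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return max(dist.values())

def is_s_club_partition(adj, deleted, s):
    alive, seen = set(adj) - set(deleted), set()
    for v in alive:
        if v not in seen:
            comp = component_of(adj, v, alive)
            if max(eccentricity(adj, u, comp) for u in comp) > s:
                return False
            seen |= comp
    return True

path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}  # the path P5
print(is_s_club_partition(path, deleted={2}, s=1))  # two edges remain: True
```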
Graph coloring has diverse applications and remains a prominent research area: many
practical problems can be modeled as coloring the vertices or edges of a graph subject to
some constraints. Efficient and scalable implementation of parallel algorithms on multiprocessor
architectures with multiple memory banks requires simultaneous access to the data items. Such
“conflict-free” access to parallel memory systems, among other applied problems, motivates the study
of rainbow coloring of a graph, in which there is a fixed template T (or a family of templates),
and one seeks to color the vertices of an input graph G with as few colors as possible, so that each
copy of T in G is rainbow colored, i.e., no two of its vertices share a color. We call such a coloring a
template-driven rainbow coloring and study the rainbow coloring of proper interval graphs (as hosts)
for cycle templates.
Optimal Path Planning of Autonomous Marine Vehicles in Stochastic
Dynamic Ocean Flows using a GPU-Accelerated Algorithm
Rohit Chowdhury
Department of Computational and Data Sciences
Autonomous marine vehicles play an essential role in many ocean science and engineering applications.
Planning time and energy optimal paths for these vehicles to navigate in stochastic dynamic ocean
environments is essential to reduce operational costs. In some missions, they must also harvest
solar, wind, or wave energy (modeled as a stochastic scalar field) and move in optimal paths that
minimize net energy consumption. Markov Decision Processes (MDPs) provide a natural framework
for sequential decision making for robotic agents in such environments. However, building a realistic
model and solving the modeled MDP become computationally expensive in large-scale real-time
applications, warranting the need for parallel algorithms and efficient implementations. In the present
work, we introduce an efficient end-to-end GPU-accelerated algorithm that (i) builds the MDP
model (computing transition probabilities and expected one-step rewards); and (ii) solves the MDP
to compute an optimal policy. We develop methodical and algorithmic solutions to overcome the
limited global memory of GPUs by (i) using a dynamic reduced-order representation of the ocean
flows, (ii) leveraging the sparse nature of the state transition probability matrix, (iii) introducing a
neighbouring sub-grid concept and (iv) proving that it is sufficient to use only the stochastic scalar
field’s mean to compute the expected one-step rewards for missions involving energy harvesting from
the environment; thereby saving memory and reducing the computational effort. We demonstrate the
algorithm on a simulated stochastic dynamic environment and highlight that it builds the MDP model
and computes the optimal policy 600-1000x faster than conventional CPU implementations, making
it suitable for real-time use. We also demonstrate applications of our planner for multi-objective
optimization problems, where trade-offs between multiple conflicting objectives are achieved (such
as expected mission time, energy consumption, and environmental energy harvesting).
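A minimal sketch of the value-iteration kernel that such a pipeline accelerates (toy sizes and data layout assumed; scipy's sparse matrices stand in for the GPU structures):

```python
# Minimal sketch: value iteration over an MDP with sparse transition matrices,
# the core computation the GPU pipeline accelerates at much larger scales.
import numpy as np
from scipy.sparse import diags, identity, random as sparse_random

n_states, n_actions, gamma = 200, 4, 0.95
rng = np.random.default_rng(0)
P, R = [], []
for a in range(n_actions):
    T = sparse_random(n_states, n_states, density=0.02, random_state=a, format="csr")
    T = T + 0.01 * identity(n_states, format="csr")   # guarantee no empty rows
    row_sums = np.asarray(T.sum(axis=1)).ravel()
    P.append((diags(1.0 / row_sums) @ T).tocsr())      # row-stochastic transitions
    R.append(rng.standard_normal(n_states))            # expected one-step rewards

V = np.zeros(n_states)
for _ in range(1000):                                  # value iteration
    Q = np.stack([R[a] + gamma * P[a].dot(V) for a in range(n_actions)])
    V_new = Q.max(axis=0)
    if np.max(np.abs(V_new - V)) < 1e-8:
        break
    V = V_new
print("greedy actions for the first 10 states:", Q.argmax(axis=0)[:10])
```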
Control of nonlinear systems with state constraints
Pankaj Mishra
Robert Bosch Centre for Cyber-Physical Systems
Designing controllers for practical systems invites complications in the form of constraints. These
constraints can appear in different forms, such as performance, saturation, physical stoppages,
and safety specifications. The presentation will cover the importance of considering constraints
in controller design, various approaches to dealing with state-constrained nonlinear systems, and a
brief discussion on the use of design tools such as backstepping and the barrier Lyapunov function
for the design of controllers for state-constrained systems in an adaptive framework.
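For context, a commonly used log-type barrier Lyapunov function (a standard textbook form; the talk may use a different variant) for keeping a scalar state x within the constraint |x| < k_b is:

```latex
% A standard log-type barrier Lyapunov function (illustrative form). V grows
% without bound as |x| approaches the limit k_b, so keeping V bounded along
% closed-loop trajectories keeps the state inside the constraint set.
V(x) = \frac{1}{2} \log \frac{k_b^2}{k_b^2 - x^2}, \qquad |x| < k_b
```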
CORNET: A Co-Simulation Middleware for Robot Networks
Srikrishna Acharya and Bharadwaj Amrutur
Robert Bosch Centre for Cyber-Physical Systems
We present a networked co-simulation framework for multi-robot systems applications. This is
necessary to co-design the multi-robot system's autonomy logic and the communication protocols. The
proposed framework extends existing tools to simulate the robot’s autonomy and network-related
aspects. We have used Gazebo with ROS/ROS2 to develop the autonomy logic for robots and
mininet-WiFi as the network simulator to capture the cyber-physical systems properties of the
multi-robot system. This framework addresses the need to seamlessly integrate the two simulation
environments by synchronizing mobility and time, allowing for easy migration of the algorithms to
real platforms.
Vision-based Tele-Operation for Robot Arm Manipulation
Himanshu Sharma and Bharadwaj Amrutur
Robert Bosch Centre for Cyber-Physical Systems
It is worth taking a moment to acknowledge just how amazingly well we can perform tasks with our
hands, from picking up a coin to buttoning up our shirts. For robots, these tasks remain at the very
forefront of robotics research and require significant interaction between vision, perception, planning,
and control; becoming an expert in all of them is quite a challenge. This is where tele-operation
comes in: it lends robots the reasoning skills, intuition, and creativity of a human operator for
performing such tasks in unstructured environments and with unfamiliar objects. Here, we present a
low-cost, vision-based tele-operation of a KUKA IIWA industrial robot arm, in which the robot
imitates in real time the natural motion of a human operator captured by a depth camera on the
operator's side, while the operator watches the robot's activities on a screen through cameras on the
robot side. This tele-operated, semi-autonomous control has potential applications in unstructured,
dynamic environments where human presence is not desirable or possible, e.g., handling nuclear
waste, deep underwater operations, and space exploration.
Evaluating the Benefits of Collaboration between Rideshare and
Transit Service Providers
Vishal Kushwaha
Robert Bosch Centre for Cyber-Physical Systems
Rideshare service providers (RSPs), e.g., Ola, Uber, and Lyft, are gaining popularity among
travelers because of their special service structure. The features of their services include online
booking, flexible ride personalization, and end-to-end connectivity for travelers. However,
due to this increasing popularity, the city transportation planners are concerned that the congestion
levels on the roads may increase leading to an increase in travel times. On the other hand, the public
transit (e.g., bus, metro etc.) agencies are observing a decline in ridership. The transit stops may be
located far away from travelers’ homes or activity locations which discourages public transit use.
Due to these issues, efforts are being made to make the RSPs and public transit agencies collaborate.
In such collaboration frameworks, the RSPs will provide connectivity from transit stops to travelers’
home and activity locations. The transit agencies will provide connectivity on the long-haul part of
the journey. In this regard, we proposed a tri-level model based on game theory and discrete choice
theory to determine optimal travel prices for such a combined travel mode. The model was applied to
a travel corridor of a major city in India and shows increased profits and market shares, and decreased
travel times, for the RSPs and the bus agency. Benefits for travelers were also observed.
PentaGOD: Stepping beyond traditional GOD with five parties
Nishat Koti
Department of Computer Science and Automation
Secure multiparty computation (MPC) is increasingly being used to address privacy issues in
various applications. The recent work of Alon et al. (CRYPTO’20) identified the shortcomings
of traditional MPC and defined a Friends-and-Foes (FaF) security notion to address them. We
showcase the need for FaF security in real-world applications such as dark pools. This subsequently
necessitates designing concretely efficient FaF-secure protocols. Towards this, keeping efficiency at
the center stage, we design ring-based FaF-secure MPC protocols in the small-party honest-majority
setting. Specifically, we provide (1,1)-FaF secure 5-party computation protocols (5PC) that consider
one malicious and one semi-honest corruption, which constitutes the optimal setting for attaining
an honest majority. At the heart of it lies the multiplication protocol that requires a single round of
communication with 8 ring elements (amortized). To facilitate having FaF-secure variants for several
applications, we design a variety of building blocks optimized for our FaF setting. The practicality
of the designed (1,1)-FaF secure 5PC framework is showcased by benchmarking dark pools. In the
process, we also improve the efficiency and security of the dark pool protocols over the existing
traditionally secure ones. This improvement is witnessed as a gain of up to 62× in throughput
compared to the existing ones. Finally, to demonstrate the versatility of our framework, we also
benchmark popular deep neural networks.
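As a generic illustration of the ring-based sharing such protocols build on (this is plain additive secret sharing, not the paper's (1,1)-FaF 5PC protocol):

```python
# Generic illustration: additive secret sharing over the ring Z_{2^64}. Each
# party holds one share; no proper subset of parties learns the secret, and
# addition of shared values is a purely local operation.
import secrets

MOD = 2**64

def share(secret, n_parties=5):
    shares = [secrets.randbelow(MOD) for _ in range(n_parties - 1)]
    shares.append((secret - sum(shares)) % MOD)
    return shares

def reconstruct(shares):
    return sum(shares) % MOD

def add_shares(a_shares, b_shares):
    """Addition is local: each party adds its own shares, no communication."""
    return [(a + b) % MOD for a, b in zip(a_shares, b_shares)]

x, y = 42, 1000
sx, sy = share(x), share(y)
assert reconstruct(sx) == x
print("x + y reconstructed:", reconstruct(add_shares(sx, sy)))
```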
You Share Because We Care: Fully Secure Allegation Escrow System
Nishat Koti, Varsha Bhat Kukkala, Arpita Patra
Department of Computer Science and Automation
The rising issues of harassment, exploitation, corruption and other forms of abuse have led victims
to seek comfort by acting in unison against common perpetrators. This is corroborated by the
widespread #MeToo movement, which was explicitly against sexual harassment. One way to curb
these issues is to install allegation escrow systems that allow victims to report such incidents. The
escrows are responsible for identifying victims of a common perpetrator and taking the necessary
action to bring justice to them. However, users hesitate to participate in these systems due to the
fear of such sensitive reports being leaked to perpetrators, who may further misuse them. Thus, to
increase trust in the system, cryptographic solutions are being designed. Several such web-based
platforms have been proposed to realize secure allegation escrow (SAE) systems, each improving
over its predecessors.
In the work of Arun et al. (NDSS’20), which presents the state-of-the-art solution, we identify
attacks that can leak sensitive information and compromise victim privacy. We also report issues
present in prior works that were left unidentified. To arrest all these breaches, we put forth an SAE
system that prevents the identified attacks and retains the salient features from all prior works. The
cryptographic technique of secure multi-party computation (MPC) serves as the primary underlying
tool in designing our system. At the heart of our system lies a new duplicity check protocol and an
improved matching protocol. We also provide additional features such as allegation modification
and deletion, which were absent in the state of the art. To demonstrate feasibility, we benchmark the
proposed system with state-of-the-art MPC protocols and report the cost of processing an allegation.
Different settings that affect system performance are analyzed, and the reported values showcase the
practicality of our solution.
Secret Key Agreement and Secure Omniscience of Tree-PIN Source
with Linear Wiretapper
Praneeth Kumar V
Department of Electrical Communication Engineering
In the setting of the multiterminal source model for secure computation, users who privately observe
correlated random variables from a source try to compute functions of these private observations
through interactive public discussion. The goal of the users is to keep these computed functions
secure from a wiretapper who has some side information (a random variable possibly correlated with
the source) and has noiseless access to the public discussion. In this work, we focus on a pairwise
independent network (PIN) source model defined on a tree with a linear wiretapper that can observe
arbitrary linear combinations of the source. For this model, we explore the connection between
secret key agreement and secure omniscience. While the secret key agreement problem on this
model considers the generation of a maximum-rate secret key through public discussion, the secure
omniscience problem is concerned with communication protocols for omniscience that minimize the
rate of information leakage to the wiretapper. Our main result is that a maximum-rate secret key can
be generated through an omniscience scheme that minimizes the information leakage rate. Moreover,
we obtain single-letter characterizations of the wiretap secret key capacity and the minimum leakage
rate for omniscience.
Fundamental Connections between Opacity and Attack Detection
in Linear Systems
Varkey M. John, Vaibhav Katewa
Department of Electrical Communication Engineering
Opacity and attack detectability are important properties for any system as they allow the states
to remain private and malicious attacks to be detected, respectively. In this paper, we show that a
fundamental trade-off exists between these properties for a linear dynamical system, in the sense that
if an opaque system is subjected to attacks, all attacks cannot be detected. We first characterize the
opacity conditions for the system in terms of its weakly unobservable subspace (WUS) and show that
the number of opaque states is proportional to the size of the WUS. Further, we establish conditions
under which increasing the opaque sets also increases the set of undetectable attacks. This highlights
a fundamental trade-off between security and privacy. We demonstrate the application of our results
on a real-world system model.
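For reference, the standard definition of the weakly unobservable subspace for a discrete-time LTI system (textbook form; the notation here is assumed, not copied from the paper):

```latex
% Standard setting: discrete-time LTI system
%   x_{k+1} = A x_k + B u_k, \qquad y_k = C x_k + D u_k.
% The weakly unobservable subspace collects the initial states whose output
% can be held at zero by a suitable input, i.e., states that can be kept
% "opaque" to an observer of y:
\mathcal{V}(\Sigma) = \{ x_0 \in \mathbb{R}^n : \exists\, u \ \text{such that} \ y_k = 0 \ \forall k \ge 0 \}
```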
An Evaluation of Basic Protection Mechanisms in Financial Apps on
Mobile Devices
Nikhil Agrawal, Kanchi Gopinath and Vinod Ganapathy
Department of Computer Science and Automation
This work concerns the robustness of security checks in financial mobile applications. The best
practices recommended by the Open Web Application Security Project (OWASP) for developing
such apps, demand that developers include several checks in these apps, such as detection of running
on a rooted device, certificate checks, and so on. Ideally, these checks must be introduced in a
sophisticated way and must not be locatable through trivial static analysis, so that attackers cannot
bypass them trivially. In this work, we conduct a large-scale study focused on financial apps on the
Android platform and determine the robustness of these checks.
Our study shows that a significant fraction of the financial apps do not have the various
self-defense checks recommended by the OWASP. We then show that among the apps with at
least one security check, in more than 50% of them at least one check could be trivially bypassed.
Some of these financial apps have installation counts exceeding 100 million on Google Play. The
entire process of detecting the self-defense checks and bypassing them is automated. We believe that the results
of our study can guide developers of these financial apps in inserting security checks in a more robust
fashion.
Inter and Intra-Annual Spatio-Temporal Variability of Habitat Suitability
for Asian Elephants in India: A Random Forest Model-based Analysis
Anjali P
Department of Computational and Data Sciences
We develop a Random Forest model to estimate the species distribution of Asian elephants in
India and study the inter and intra-annual spatiotemporal variability of habitats suitable for them.
Climatic, topographic variables and satellite-derived Land Use/Land Cover (LULC), Net Primary
Productivity (NPP), Leaf Area Index (LAI), and Normalized Difference Vegetation Index (NDVI)
are used as predictors, and species sighting data of Asian elephants from the Global Biodiversity
Information Facility is used to develop the Random Forest model. A careful hyper-parameter
tuning and training-validation-testing cycle is completed to identify the significant predictors and
develop a final model that gives a precision of 0.78 and a recall of 0.77. The model is applied to
estimate the spatial and temporal variability of suitable habitats. We observe that seasonal reduction
in the suitable habitat may explain the migration patterns of Asian elephants and the increasing
human-elephant conflict. Further, the total available suitable habitat area is observed to have reduced,
which exacerbates the problem. This machine learning model is intended to serve as an input to the
Agent-Based Model that we are building as part of our Artificial Intelligence-driven decision support
tool to reduce human-wildlife conflict.
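A minimal sketch of such a habitat-suitability classifier (on synthetic stand-in data; the predictors and labels below are placeholders, not the study's dataset):

```python
# Minimal sketch: a Random Forest presence/absence classifier trained on
# environmental predictors, evaluated with precision and recall.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import precision_score, recall_score

rng = np.random.default_rng(0)
n = 5000
X = rng.standard_normal((n, 6))        # stand-ins for temperature, NDVI, NPP, ...
logits = 1.5 * X[:, 0] - 1.0 * X[:, 2] + 0.5 * X[:, 4]
y = (logits + rng.standard_normal(n) > 0).astype(int)   # 1 = elephant sighting

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = RandomForestClassifier(n_estimators=200, max_depth=8, random_state=0)
model.fit(X_tr, y_tr)
pred = model.predict(X_te)
print("precision:", round(precision_score(y_te, pred), 2),
      "recall:", round(recall_score(y_te, pred), 2))
print("feature importances:", np.round(model.feature_importances_, 2))
```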
Template Vector Machines: A Classification Framework for Energy
Efficient Edge Devices
Abhishek Ramdas Nair
Department of Electronic Systems Engineering
Energy-efficient devices are essential in edge computing and the tiny Machine Learning (tinyML)
paradigm. Edge devices are often constrained by the available computational power and hardware
resource. To this end, we present a novel classification framework, Template Vector Machines,
for time-series data. Unlike a conventional pattern recognizer, where the feature extraction and
classification are designed independently, this architecture integrates the convolution and nonlinear
filtering operations directly into the kernels of a Support Vector Machine (SVM). The result of
this integration is a framework whose memory and computational footprint (training and
inference) are light enough to be implemented on a constrained IoT platform like microcontrollers or
Field Programmable Gate Array (FPGA)-based systems. Template Vector Machines do not impose
restrictions on the kernel to be positive-definite and allow the user to define memory constraints
in fixed template vectors. This makes the framework scalable and enables its implementation for
low-power, high-density, and memory-constrained embedded applications. We demonstrate the
capabilities of this system on microcontrollers using audio data to identify bird species and classify
gestures using IMU data.
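An illustrative sketch of the template-vector idea (the kernel and decision-function form below are assumptions for illustration, not the exact TVM formulation):

```python
# Illustrative sketch: an SVM-like decision function evaluated against a small,
# fixed set of template vectors, so the memory footprint is bounded by the
# template budget rather than by the number of support vectors. The kernel
# need not be positive-definite.
import numpy as np

def kernel(t, x):
    """A cheap, not-necessarily-positive-definite similarity (negative L1)."""
    return -np.abs(t - x).sum(axis=-1)

def tvm_decision(x, templates, alphas, bias):
    """f(x) = sum_i alpha_i * K(t_i, x) + b, with a fixed template budget."""
    return float(np.dot(alphas, kernel(templates, x)) + bias)

rng = np.random.default_rng(0)
templates = rng.standard_normal((8, 16))   # 8 fixed template vectors
alphas = rng.standard_normal(8)            # learned weights (random stand-ins)
x = rng.standard_normal(16)                # one feature vector from a time series
print("class:", 1 if tvm_decision(x, templates, alphas, bias=0.1) > 0 else 0)
```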
tinyRadar: mmWave Radar based Human Activity Classification for
Edge Computing
Radha Agarwal
Department of Electronic Systems Engineering
The current state-of-the-art systems for patient monitoring, elderly, and child care are mainly
camera-based and often require cloud computing. Camera-based systems pose a privacy risk, and
cloud computing can lead to higher latency, data theft, and connectivity issues. To address this, we
have developed a novel tinyML-based single-chip radar solution for on-edge sensing and detection of
human activity. Edge computing within a small form factor makes it a more portable, fast, and secure
solution. On top of that, radar provides an advantage by preserving privacy and operating in fog,
dust, and low-light environments. We have used the Texas Instruments IWR6843 millimeter-wave
radar board to implement the signal processing chain and classification model. A dataset for four
different human activities generalized over six subjects was collected to train the 8-bit quantized
Convolutional Neural Network. The real-time inference engine implemented on Cortex-R4F using
CMSIS-NN framework has a model size of 1.44 KB, gives a classification result every 120 ms,
and has an overall subject-independent accuracy of 96.43%.
Self-supervised metric learning for Speaker Diarization
Prachi Singh and Sriram Ganapathy
Department of Electrical Engineering
Speaker diarization is the task of automatic segmentation of a given audio recording into regions corresponding to different speakers. It is an important step in information extraction from conversational speech, with applications ranging from rich speech transcription to analysing turn-taking behavior in clinical diagnosis. Self-supervised learning, which involves learning models from the data itself, has gained a lot of interest in the deep learning world. In this talk, I will highlight our work on self-supervised learning for the task of speaker diarization. It involves segmenting the audio into small chunks and extracting the speaker embeddings. These embeddings are further pruned using a representation learning and metric learning network. The target labels for training are obtained by first performing unsupervised clustering on the initial embeddings. The approach is found to separate same-speaker pairs from different-speaker pairs. Next, I will discuss our current work on Graph Neural Networks (GNNs). GNNs have been widely explored for text classification and image clustering tasks, but their use in speech research is nascent. The diarization problem can be formulated as graph clustering, in which nodes represent the speaker embeddings and the edges represent the similarities between the nodes. GNNs exploit the structure of the graph to improve the representations and thereby improve the clustering for diarization. I will discuss the model architecture and its advantages over the conventional approach.
On the use of Cross-Attention for Speaker Verification
Shreyas Ramoji and Sriram Ganapathy
Department of Electrical Engineering
Automatic Speaker verification is the task of determining whether a test segment of speech contains
a particular speaker of interest, given an enrollment recording of the speaker. Current approaches to
Speaker Verification involve using neural networks such as residual networks (ResNets), time-delay
neural networks (TDNNs), and their variants such as the Factorized TDNN and ECAPA-TDNN, to
name a few. These models involve extracting embeddings of fixed dimensions from speech segments
with variable durations, followed by a backend scoring approach such as cosine scoring or the
PLDA to compute a log-likelihood ratio score. A recent innovation in the architecture front for
speaker verification involves employing emphasized channel attention, propagation, and aggregation
into the popular time-delay neural network architectures (ECAPA-TDNN). In this presentation, I
will discuss my ongoing work involving modifications to the ECAPA-TDNN model. Here, we
use cross-attention to selectively propagate the relevant channels and temporal frames of a test
utterance using attention weights obtained from the enrollment recording. We can interpret these
as enrollment-aware representations of the test segments that can potentially favor the task of
speaker verification, particularly in challenging conditions such as shorter test duration or noisy test
conditions. While these models are more complex and slower to train than the regular embedding
extractors, the time taken to verify a test recording is similar. Hence, research along these lines can
potentially give rise to more reliable speaker verification models for real-life applications.
On Achieving Leximin Fairness and Stability in Many-to-One
Matchings
Shivika Narang
Department of Computer Science and Automation
The past few years have seen a surge of work on fairness in allocation problems where items must be
fairly divided among agents having individual preferences. In comparison, fairness in settings with
preferences on both sides, that is, where agents have to be matched to other agents, has received much
less attention. Moreover, two-sided matching literature has largely focused on ordinal preferences.
This paper initiates the study of fairness in stable many-to-one matchings under cardinal valuations.
We study leximin optimality over stable many-to-one matchings. We first investigate matching
problems with ranked valuations where all agents on each side have the same preference orders
or rankings over the agents on the other side (but not necessarily the same valuations). Here, we
provide a complete characterisation of the space of stable matchings. This leads to FaSt, a novel and
efficient algorithm to compute a leximin optimal stable matching under ranked isometric valuations
(where, for each pair of agents, the valuation of one agent for the other is the same). Building upon
FaSt, we present an efficient algorithm, FaSt-Gen, that finds the leximin optimal stable matching for
a more general ranked setting. We next establish that, in the absence of rankings and under strict
preferences, finding a leximin optimal stable matching is NP-Hard. Further, with weak rankings,
the problem is strongly NP-Hard, even under isometric valuations. In fact, when additivity and
non-negativity are the only assumptions, we show that, unless P=NP, no efficient polynomial factor
approximation is possible.
DAD: Data-free Adversarial Defense at Test Time
Gaurav Kumar Nayak
Department of Computational and Data Sciences
Deep models are highly susceptible to adversarial attacks: carefully crafted imperceptible
perturbations that can fool the network and cause severe consequences in deployment. To
counter them, the model requires training data for adversarial training or explicit regularization-based
techniques. However, privacy has become an important concern, restricting access to only trained
models but not the training data (e.g. biometric data). Also, data curation is expensive and companies
may have proprietary rights over it. To handle such situations, we propose a completely novel
problem of ‘test-time adversarial defense in absence of training data and even their statistics’.
We solve it in two stages: a) detection and b) correction of adversarial samples. Our adversarial
sample detection framework is initially trained on arbitrary data and is subsequently adapted to the
unlabelled test data through unsupervised domain adaptation. We further correct the predictions
on detected adversarial samples by transforming them into the Fourier domain and retaining only
their low-frequency component, at our proposed suitable radius, for model prediction. We demonstrate the
efficacy of our proposed technique via extensive experiments against several adversarial attacks and
for different model architectures and datasets. For a non-robust Resnet-18 model pre-trained on
CIFAR-10, our detection method correctly identifies 91.42% adversaries. Also, we significantly
improve the adversarial accuracy from 0% to 37.37% with a minimal drop of 0.02% in clean accuracy
on state-of-the-art ‘Auto Attack’ without having to retrain the model.
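A minimal sketch of the Fourier-domain correction step (the radius and image shapes are assumed; applied here to a random stand-in image):

```python
# Minimal sketch: keep only the low-frequency component of an image below a
# radius in the 2D Fourier domain, the correction step described above.
import numpy as np

def low_frequency_component(img, radius):
    """Zero out all 2D DFT coefficients farther than `radius` from the center."""
    f = np.fft.fftshift(np.fft.fft2(img))
    h, w = img.shape
    yy, xx = np.ogrid[:h, :w]
    mask = (yy - h // 2) ** 2 + (xx - w // 2) ** 2 <= radius ** 2
    return np.real(np.fft.ifft2(np.fft.ifftshift(f * mask)))

img = np.random.default_rng(0).random((32, 32))  # stand-in for a CIFAR channel
smooth = low_frequency_component(img, radius=8)
print("energy kept:", round(float((smooth**2).sum() / (img**2).sum()), 3))
```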
Improving Domain Adaptation through Class Aware Frequency
Transformation
Vikash Kumar
Department of Computational and Data Sciences
In this work, we explore the use of frequency transformation for reducing the domain
shift between the Source and Target domains (e.g., synthetic and real images, respectively)
towards solving the Domain Adaptation task. Most of the Unsupervised Domain Adaptation (UDA)
algorithms focus on reducing the global domain shift between labelled Source and unlabelled
Target domain by matching the marginal distributions under a small domain gap assumption. UDA
performance degrades for the cases where the domain gap between Source and Target distribution
is large. In order to bring the Source and the Target domains closer, we propose Class Aware
Frequency Transformation (CAFT), a novel approach based on a traditional image processing
technique, which utilizes pseudo-label-based, class-consistent low-frequency swapping to improve
the overall performance of existing UDA algorithms. The proposed approach, when compared with the
state-of-the-art deep learning based methods, is computationally more efficient and can easily be
plugged into any existing UDA algorithm to improve its performance. Additionally, we introduce
a novel approach based on absolute difference of top-2 class prediction probability (ADT2P) for
filtering target pseudo labels into clean and noisy sets. Samples with clean pseudo labels can be used
to improve the performance of unsupervised learning algorithms. We name the overall framework as
CAFT++.
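A toy sketch of low-frequency swapping (class-agnostic for brevity; the swap radius and shapes are assumed, and CAFT's pseudo-label-based class consistency is omitted):

```python
# Toy sketch: graft the low-frequency amplitude of a source image onto a
# target image, the kind of low-frequency swapping CAFT builds on.
import numpy as np

def swap_low_freq(source, target, radius=4):
    fs = np.fft.fftshift(np.fft.fft2(source))
    ft = np.fft.fftshift(np.fft.fft2(target))
    h, w = source.shape
    yy, xx = np.ogrid[:h, :w]
    mask = (yy - h // 2) ** 2 + (xx - w // 2) ** 2 <= radius ** 2
    # Keep the target's phase, but replace its low-frequency amplitude.
    amp, phase = np.abs(ft), np.angle(ft)
    amp[mask] = np.abs(fs)[mask]
    return np.real(np.fft.ifft2(np.fft.ifftshift(amp * np.exp(1j * phase))))

rng = np.random.default_rng(0)
src, tgt = rng.random((32, 32)), rng.random((32, 32))
print(swap_low_freq(src, tgt).shape)
```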
Multi-modal query guided object localization in natural images
Aditay Tripathi
Department of Computational and Data Sciences
Localizing objects in a scene has been a long-sought pursuit in computer vision literature. More
recent works focus on localizing objects in the image using text and image queries. However, there
are many different kinds of unexplored modalities in the literature. In this work, we rigorously
study the problem of localizing objects in the image using queries such as sketches, gloss, and scene
graphs.
Sketch query: We introduce the novel problem of localizing all the instances of an object (seen
or unseen during training) in a natural image via sketch query. The sketch-guided object localization
proves to be more challenging when we consider the following: (i) the sketches used as queries are
abstract representations with little information on the shape and salient attributes of the object, (ii)
the sketches have significant variability as they are hand-drawn by a diverse set of untrained human
subjects, and (iii) there exists a domain gap between sketch queries and target natural images as
these are sampled from very different data distributions. To address the problem of sketch-guided
object localization, we propose a novel cross-modal attention scheme that guides the region proposal
network (RPN) to generate object proposals relevant to the sketch query. These object proposals are
later scored against the query to obtain final localization. Our method is effective with as little as a
single sketch query. Moreover, it also generalizes well to object categories not seen during training
(one-shot localization) and is effective in localizing multiple object instances present in the image.
Sketch and gloss queries: Hand-drawn sketches are suitable as a query when neither an image
nor the object class is available. However, crude hand-drawn sketches alone might be ambiguous
for object localization when used as queries. On the other hand, a linguistic definition of the object
category and the sketch query give better visual and semantic cues for object localization. This work
presents a multimodal query-guided object localization approach under the challenging open-set
setting. In particular, we use queries from two modalities, namely, hand-drawn sketch and description
of the object (also known as gloss), to perform object localization. Multimodal query-guided object
localization is a challenging task, especially given the large domain gap between the queries
and the natural images and the difficulty of optimally combining the complementary and minimal
information present across the queries. To address the aforementioned challenges, we present a novel
cross-modal attention scheme that guides the region proposal network to generate object proposals
relevant to the input queries and a novel orthogonal projection-based proposal scoring technique that
scores each proposal with respect to the queries, thereby yielding the final localization results.
Scene graph query: We present a framework for jointly grounding objects that follow certain
semantic relationship constraints given in a scene graph. A typical natural scene contains several
objects, often exhibiting visual relationships of varied complexities between them. These inter-object
relationships provide strong contextual cues to improve grounding performance compared to a
traditional object query-based localization task. A scene graph is an efficient and structured way to
represent all the objects in the image and their semantic relationships. In an attempt to bridge these
two modalities representing scenes and utilize contextual information to improve object localization,
we rigorously study the problem of grounding scene graphs in natural images. To this end, we
propose a graph neural network-based approach which we refer to as Visio-Lingual Message Passing
Graph Neural Network (VL-MPAG Net). The model first constructs a directed graph with object
proposals as nodes and an edge between a pair of nodes representing a plausible relation between
them. Then, a three-step inter-graph and intra-graph message passing is performed to learn the
context-dependent representations of the proposals and query objects. These representations
are then used to score the proposals and generate the object localizations.
MMD-ReID: A Simple but Effective Solution for Visible-Thermal Person
ReID
Chaitra S. Jambigi
Department of Computational and Data Sciences
Learning modality invariant features is central to the problem of Visible-Thermal cross-modal
Person Reidentification (VT-ReID), where query and gallery images come from different modalities.
Existing works implicitly align the modalities in pixel and feature spaces by either using adversarial
learning or carefully designing feature extraction modules that heavily rely on domain knowledge. We
propose a simple but effective framework, MMD-ReID, that reduces the modality gap by an explicit
discrepancy reduction constraint. MMD-ReID takes inspiration from Maximum Mean Discrepancy
(MMD), a widely used statistical tool for hypothesis testing that determines the distance between two
distributions. MMD-ReID uses a novel margin-based formulation to match class-conditional feature
distributions of visible and thermal samples to minimize intra-class distances while maintaining
feature discriminability. MMD-ReID is a simple framework in terms of architecture and loss
formulation. We conduct extensive experiments to demonstrate both qualitatively and quantitatively
the effectiveness of MMD-ReID in aligning the marginal and class conditional distributions, thus
learning both modality-independent and identity-consistent features. The proposed framework
significantly outperforms the state-of-the-art methods on the SYSU-MM01 and RegDB datasets.
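For reference, a minimal sketch of the empirical MMD statistic that the framework's alignment loss builds on (Gaussian kernel and bandwidth assumed; the margin-based, class-conditional formulation is omitted):

```python
# Minimal sketch: the biased empirical estimate of the squared Maximum Mean
# Discrepancy between two feature samples (here, visible vs. thermal).
import numpy as np

def gaussian_kernel(X, Y, bandwidth=1.0):
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * bandwidth**2))

def mmd2(X, Y, bandwidth=1.0):
    """MMD^2 = E[k(x,x')] + E[k(y,y')] - 2 E[k(x,y)] (biased estimate)."""
    return (gaussian_kernel(X, X, bandwidth).mean()
            + gaussian_kernel(Y, Y, bandwidth).mean()
            - 2 * gaussian_kernel(X, Y, bandwidth).mean())

rng = np.random.default_rng(0)
visible = rng.standard_normal((64, 128))          # visible-modality features
thermal = rng.standard_normal((64, 128)) + 0.5    # shifted thermal features
print("MMD^2:", round(float(mmd2(visible, thermal)), 4))
```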
LEAD: Self-Supervised Landmark Estimation by Aligning Distributions
of Feature Similarity
Tejan Naresh Naik Karmali
Department of Computational and Data Sciences
In this work, we introduce LEAD, an approach to discover landmarks from an unannotated collection
of category-specific images. Existing works in self-supervised landmark detection are based on
learning dense (pixel-level) feature representations from an image, which are further used to learn
landmarks in a semi-supervised manner. While there have been advances in self-supervised learning
of image features for instance-level tasks like classification, these methods do not ensure dense
equivariant representations. The property of equivariance is of interest for dense prediction tasks
like landmark estimation. In this work, we introduce an approach to enhance the learning of
dense equivariant representations in a self-supervised fashion. We follow a two-stage training
approach: first, we train a network using the BYOL objective, which operates at an instance level.
The correspondences obtained through this network are further used to train a dense and compact
representation of the image using a lightweight network. We show that having such a prior in the
feature extractor helps in landmark detection even with a drastically limited number of annotations,
while also improving generalization across scale variations.
Non-Local Latent Relation Distillation for Self-Adaptive 3D Human
Pose Estimation
Jogendra Nath Kundu
Department of Computational and Data Sciences
Available 3D human pose estimation approaches leverage different forms of strong (2D/3D pose) or
weak (multi-view or depth) paired supervision. Barring synthetic or in-studio domains, acquiring
such supervision for each new target environment is highly inconvenient. To this end, we cast
3D pose learning as a self-supervised adaptation problem that aims to transfer the task knowledge
from a labeled source domain to a completely unpaired target. We propose to infer image-to-pose
via two explicit mappings viz. image-to-latent and latent-to-pose where the latter is a pre-learned
decoder obtained from a prior-enforcing generative adversarial auto-encoder. Next, we introduce
relation distillation as a means to align the unpaired cross-modal samples i.e. the unpaired target
videos and unpaired 3D pose sequences. To this end, we propose a new set of non-local relations in
order to characterize long-range latent pose interactions unlike general contrastive relations where
positive couplings are limited to a local neighborhood structure. Further, we provide an objective
way to quantify non-localness in order to select the most effective relation set. We evaluate different
self-adaptation settings and demonstrate state-of-the-art 3D human pose estimation performance on
standard benchmarks.
Quality Assessment of Low-light Restored Images: A Subjective
Study and an Unsupervised Model
Vignesh Kannan and Rajiv Soundararajan
Department of Electrical Communication Engineering
The quality assessment (QA) of restored low-light images is an important tool for benchmarking
and improving low-light restoration (LLR) algorithms. While several LLR algorithms exist, the
subjective perception of the restored images has been much less studied. Challenges in capturing
aligned low-light and well-lit image pairs and collecting a large number of human opinion scores
of quality for training, warrant the design of unsupervised (or opinion unaware) no-reference (NR)
QA methods. This work studies the subjective perception of low-light restored images and their
unsupervised NR QA. Our contributions are two-fold. We first create a dataset of restored low-light
images using various LLR methods, conduct a subjective QA study, and benchmark the performance
of existing QA methods. We then present a self-supervised contrastive learning technique to extract
distortion-aware features from the restored low-light images. We show that these features can be
effectively used to build an opinion unaware image quality analyzer. Detailed experiments reveal
that our unsupervised NR QA model achieves state-of-the-art performance among all such quality
measures for low-light restored images.
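As an illustrative sketch of the kind of self-supervised contrastive objective referred to above, the snippet below implements a standard InfoNCE loss over paired views of images (in PyTorch); the embedding network, temperature, and the choice of positive pairs are placeholder assumptions, not the authors' exact recipe.

```python
import torch
import torch.nn.functional as F

def info_nce(z1, z2, tau=0.1):
    """z1, z2: (N, D) embeddings of two views of the same N images."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / tau            # (N, N) cosine-similarity matrix
    labels = torch.arange(z1.size(0))     # positives lie on the diagonal
    return F.cross_entropy(logits, labels)

z1, z2 = torch.randn(8, 128), torch.randn(8, 128)
print(info_nce(z1, z2).item())
```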
Teaching a GAN What Not to Learn
Siddarth Asokan
Department of Robert Bosch Centre for Cyber-Physical Systems
Generative adversarial networks (GANs) are an unsupervised deep learning framework consisting of
two neural networks tasked with modelling the underlying distributions of a target dataset, usually
images. The supervised and semi-supervised counterparts learn target classes in the dataset by
providing labelled data and using multi-class discriminators. In this presentation, we will explore
a novel perspective to the supervised GAN problem, one that is motivated by the philosophy of
the famous Persian poet Rumi who said, “The art of knowing is knowing what to ignore.” In the
RumiGAN framework, we not only provide the GAN positive data that it must learn to model, but
also present it with so-called negative samples that it must learn to avoid. In this talk, we will explore
some of the basic mathematical aspects of formulating various standard GAN frameworks within the
Rumi approach, and demonstrate applications to data balancing, where RumiGANs can generate
realistic samples from desired positive classes that have as low as 5% representation in the entire
dataset.
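The following is a hedged sketch of a discriminator objective in this spirit: the discriminator is rewarded for accepting positive reals and rejecting both negative reals and fakes, so the generator is pulled only towards the positive class. This is an illustrative binary-cross-entropy variant written for this summary, not the exact RumiGAN formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy discriminator over 64-dimensional data (an assumption for illustration).
D = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 1))

def d_loss(x_pos, x_neg, x_fake):
    # Real positives -> 1; real negatives and fakes -> 0.
    return (F.binary_cross_entropy_with_logits(D(x_pos), torch.ones(len(x_pos), 1))
            + F.binary_cross_entropy_with_logits(D(x_neg), torch.zeros(len(x_neg), 1))
            + F.binary_cross_entropy_with_logits(D(x_fake), torch.zeros(len(x_fake), 1)))

def g_loss(x_fake):
    # The generator is rewarded only for resembling the positive class.
    return F.binary_cross_entropy_with_logits(D(x_fake), torch.ones(len(x_fake), 1))

x_pos, x_neg, x_fake = (torch.randn(16, 64) for _ in range(3))
print(d_loss(x_pos, x_neg, x_fake).item(), g_loss(x_fake).item())
```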
Interpolation of 3D Digital Elevation Models
Mani Madhoolika Bulusu
Department of Electrical Engineering
A Digital Elevation Model (DEM) is a two-dimensional discrete function that defines the topographic
surface of any terrain as a set of values measured or computed at the grid nodes. Applications of
DEMs include hydrologic and geologic analyses, hazard monitoring, natural resources exploration,
and traditional cartographic applications, such as the production of contour, hill-shaded, slope, and
aspect maps. DEMs capture the elevations of the surface at irregularly spaced locations specified by
(latitude, longitude). For most practical purposes, however, one needs DEMs on regular grids; hence
the need to interpolate from the known measurements to estimate the elevations at all terrain
locations.
This talk covers Inverse Distance Weighting (IDW) and polyharmonic spline interpolation for
irregularly spaced data. Deep learning has proven to work exceptionally well for natural image
denoising and inpainting. We present how the problem of DEM interpolation is cast as an inpainting
problem and solved using the concepts of cycle consistency and generative adversarial networks
(GANs). We discuss relevant experiments to demonstrate the method's effectiveness, and finally
discuss the major advantages of, and the issues one faces with, the data-driven approach.
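As a concrete illustration of the first interpolation method above, here is a minimal IDW sketch in Python: each query point's elevation is a weighted average of the scattered samples, with weights 1/d^p. The power parameter, the synthetic terrain, and the grid resolution are arbitrary choices for demonstration.

```python
import numpy as np

def idw(points, values, query, power=2.0, eps=1e-12):
    """points: (N, 2), values: (N,), query: (M, 2) -> (M,) interpolated elevations."""
    d = np.linalg.norm(query[:, None, :] - points[None, :, :], axis=-1)  # (M, N)
    w = 1.0 / (d**power + eps)            # closer samples weigh more
    return (w * values).sum(axis=1) / w.sum(axis=1)

rng = np.random.default_rng(0)
pts = rng.uniform(0, 1, (100, 2))                     # irregular sample locations
z = np.sin(3 * pts[:, 0]) + np.cos(2 * pts[:, 1])     # synthetic terrain elevations
gx, gy = np.meshgrid(np.linspace(0, 1, 8), np.linspace(0, 1, 8))
grid = np.column_stack([gx.ravel(), gy.ravel()])      # regular query grid
print(idw(pts, z, grid).reshape(8, 8))
```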
Event-LSTM: An Unsupervised and Asynchronous Learning-based Representation for Event-based Data
Lakshmi Annamalai
Department of Electronic Systems Engineering
Event-based cameras, also known as silicon retinas, are a novel type of biologically inspired sensors that encode per-pixel scene dynamics asynchronously with microsecond resolution in the form of a stream of events. Key advantages of an event camera are: high temporal resolution, sparse data, high dynamic range, and low power requirements, which makes it a suitable choice for resource-constrained environments. However, one of the most challenging aspects of working with event cameras is the continuous and asynchronous nature of the data. This has prompted a paradigm shift that allows efficient extraction of meaningful information from the space-time event data.
Inspired by the benchmark set by traditional vision and deep learning approaches, one of the predominant areas of research in event data focuses on aggregating the information conveyed by individual events onto a spatial grid representation. This ensures compatibility with the tools available from the conventional vision domain. While interest in converting events into spatial representations by hand-crafted data transformations is growing, only very few approaches have looked into the more complex solutions that data-driven deep learning methods can provide. However, not every application has enough labelled data to satisfy the data hunger of supervised deep learning algorithms, which limits the design of deep supervised networks to approximate complex functions. Hence, we formulate the problem at hand as an unsupervised transformation to mitigate the challenges that supervised approaches face due to the limited availability of labelled data in the event domain.
The proposed Event-LSTM is a generic, deep learning-based task-independent architecture for transforming raw events into spatial grid representation. We achieve task independence by operating the popular architecture, LSTM, in an unsupervised setting to learn a mapping from raw events into a task-unaware spatial representation, which we call LSTM Time Surface (LSTM-TS). The Event-LSTM puts forth unsupervised event data representation generation as an alternative to data-hungry supervised learning approaches. It eliminates the need for large quantities of labelled data for each task at hand.
To take advantage of the asynchronous sensing principle of event cameras, Event-LSTM adopts asynchronous sampling of the 2D spatial grid. This asynchronous sampling approach enables speed-invariant feature extraction to cope with intra-class motion variations. It also initiates processing only when a specified number of events has accumulated, resulting in non-redundant, energy-efficient feature extraction.
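A minimal sketch of the core idea, under simplifying assumptions (a single shared LSTM, a raw (timestamp, polarity) encoding per event, and toy data): the events falling in each pixel cell are fed as a sequence to the LSTM, whose final hidden state becomes that cell's entry in the 2D grid representation.

```python
import torch
import torch.nn as nn

H, W, F_DIM = 16, 16, 8                     # toy sensor size and feature width
lstm = nn.LSTM(input_size=2, hidden_size=F_DIM, batch_first=True)

def lstm_time_surface(events):
    """events: list of (x, y, t, polarity) tuples within one sampling window."""
    grid = torch.zeros(H, W, F_DIM)
    per_pixel = {}
    for x, y, t, p in events:
        per_pixel.setdefault((x, y), []).append([t, p])
    for (x, y), seq in per_pixel.items():
        inp = torch.tensor(seq, dtype=torch.float32).unsqueeze(0)  # (1, T, 2)
        _, (h, _) = lstm(inp)               # final hidden state summarizes the cell
        grid[y, x] = h.squeeze()
    return grid                             # (H, W, F_DIM) spatial representation

events = [(3, 4, 0.1, 1), (3, 4, 0.3, -1), (7, 2, 0.2, 1)]
print(lstm_time_surface(events).shape)
```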
Advances in Large-Scale 3D Reconstruction
Lalit Manan
Department of Electrical Engineering
The problem of large-scale 3D reconstruction from images has been of great interest in the computer vision community. In recent years, there have been significant advances in
multiple aspects of the reconstruction pipeline. In this talk, I will describe the challenges involved and the two principal approaches of incremental and global 3D reconstruction. I will also briefly analyse the nature of learning-based solutions for 3D reconstruction.
Regularization using denoising: Exact and robust signal recovery
Ruturaj Gavaskar
Department of Electrical Engineering
Plug-and-play (PnP) is a relatively recent regularization technique for image reconstruction problems. As opposed to traditional methods that involve choosing a suitable regularizer function, PnP uses a high-quality denoiser such as nonlocal means (NLM) or BM3D within a proximal algorithm (e.g. ISTA or ADMM) to implicitly perform regularization. PnP has become popular in the imaging community; however, its regularization capacity is not fully understood yet. For example, it is not known if PnP can in theory recover a signal from few measurements, as in classical compressed sensing, and if the recovery is robust to noise. In this talk, we explore these questions and present some novel theoretical and experimental results.
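As an illustrative sketch of PnP regularization, the snippet below runs PnP-ISTA on a toy compressed-sensing problem: the proximal/shrinkage step of ISTA is replaced by an off-the-shelf denoiser. A Gaussian-smoothing denoiser stands in for NLM or BM3D here, and the step size and iteration count are ad hoc choices.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def pnp_ista(y, A, shape, step, denoise, iters=100):
    """PnP-ISTA for y = A x + noise: gradient step on the data term, then denoise."""
    x = np.zeros(shape)
    for _ in range(iters):
        grad = A.T @ (A @ x.ravel() - y)            # data-fidelity gradient
        x = denoise((x.ravel() - step * grad).reshape(shape))
    return x

rng = np.random.default_rng(1)
n = 32
x_true = gaussian_filter(rng.standard_normal((n, n)), 2)       # smooth ground truth
A = rng.standard_normal((n * n // 2, n * n)) / np.sqrt(n * n)  # compressive sensing matrix
y = A @ x_true.ravel() + 0.01 * rng.standard_normal(n * n // 2)
x_hat = pnp_ista(y, A, (n, n), step=0.5, denoise=lambda u: gaussian_filter(u, 1))
print(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))  # relative error
```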
Structure preserving regularization for imaging inverse problems
Manu Ghulyani
Department of Electrical Engineering
Image restoration is an important inverse problem of great research interest, and it is often solved via regularization. Conventional regularization approaches address the ill-posedness of reconstruction from distorted measurements, but the restored images
tend to suffer from loss of details, such as blurring of edges.
Some works based on approximations of the l0 norm have shown superior performance, leading to significant improvements in reconstructed image quality. These methods also possess theoretically sound guarantees on the reconstructed image, based on assumptions on the forward model and noise. In this work, we propose to extend the popular Hessian-Schatten (HS) norm regularization by imposing a non-convex penalty on the singular values of the image
Hessian. We demonstrate that the quality of reconstruction increases significantly by applying the proposed non-convex functional.
Pipelined Preconditioned s-step Conjugate Gradient Methods for
Distributed Memory Systems
Manasi Tiwari
Department of Computational and Data Sciences
The Preconditioned Conjugate Gradient (PCG) method is a widely used iterative method for solving
large linear systems of equations. Pipelined variants of PCG expose independent computations in
the PCG method and overlap these computations with non-blocking allreduces. We have developed
a novel pipelined PCG algorithm called PIPE-sCG (Pipelined s-step Conjugate Gradient) that
provides a large overlap of global communication and computation at higher core counts in
distributed-memory CPU systems. Our method achieves this overlap by introducing new recurrence
computations. We have also developed a preconditioned version of PIPE-sCG. The advantages of
our methods are that they do not introduce any extra preconditioner or sparse matrix vector product
kernels in order to provide the overlap and can work with preconditioned, unpreconditioned and
natural norms of the residual, as opposed to the state-of-the-art methods. We compare our method
with other pipelined CG methods for Poisson problems and demonstrate that our method gives the
lowest runtimes: up to 2.9x speedup over the PCG method, 2.15x over the PIPECG method, and
1.2x over the PIPECG-OATI method at large core counts.
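For reference, the following serial sketch of standard PCG marks, in comments, the dot products that become global allreduces on distributed-memory systems; pipelined variants such as the one described above restructure the recurrences so that these reductions can be overlapped with the matrix-vector product and the preconditioner application. The Poisson-like tridiagonal test system is a toy stand-in.

```python
import numpy as np

def pcg(A, b, M_inv, tol=1e-8, maxit=200):
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv @ r
    p = z.copy()
    rz = r @ z                       # dot product: a global allreduce in parallel PCG
    for _ in range(maxit):
        Ap = A @ p                   # halo exchange + local sparse mat-vec in parallel
        alpha = rz / (p @ Ap)        # another global allreduce
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:  # norm check: yet another reduction
            break
        z = M_inv @ r                # preconditioner application
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

n = 100   # 1D Poisson-like tridiagonal system
A = np.diag(2 * np.ones(n)) - np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1)
M_inv = np.diag(1.0 / np.diag(A))   # Jacobi preconditioner
b = np.ones(n)
print(np.linalg.norm(A @ pcg(A, b, M_inv) - b))
```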
The functional connectivity landscape of the human brain associated
with breathing and breath-hold
Anusha A. S.
Department of Electrical Engineering
Breathing is one of the most basic functions of the human body and is central to life. It allows the
body to obtain the energy it needs to sustain itself and its activities. Breathing happens naturally at rest
and involves automatic but active inspiration and passive expiration. Each breath is known to follow a
rhythm that is instigated and synchronized by coupled oscillators periodically driving the respiratory
cycle, most prominently the pre-Bötzinger complex located in the medulla. This brainstem neural
microcircuit typically controls respiration autonomously, making the act of breathing seem effortless
and continuous even during sleep or when a person is unconscious. However, it is also possible
for humans to voluntarily control their breathing, e.g., during speech, singing, crying, or during
voluntary breath-holding. Even though this adaptive characteristic of respiration can be an indication
of the top-down architecture of the functional neuroanatomy of voluntary respiratory control, the
mechanisms underlying breath control, and the extent to which rhythmic brain activity is modulated
by the rhythmic act of breathing, are not yet fully understood. Our research focuses on
investigating the differences in the electroencephalogram (EEG) based functional connectivity (FC)
of the human brain during normal breathing and voluntary breath-hold, to locate the cortical regions
where the modulations are localized, and to distinguish the effects during different phases of the
respiratory cycle.
A study of the fourth order joint statistical moment for dimensionality
reduction of combustion datasets
Anirudh Jonnalagadda
Department of Computational and Data Sciences
Principal Component Analysis (PCA) is a popular dimensionality reduction technique widely used
to reduce the computational cost associated with numerical simulations of combustion phenomena.
However, PCA, which transforms the thermo-chemical state space based on the eigenvectors of the
covariance matrix of the data, could fail to capture information regarding important localized chemical dynamics, such
as the formation of ignition kernels, appearing as outlier samples in a dataset. In this paper, we
propose an alternate dimensionality reduction procedure, wherein the required principal vectors are
computed from a high-order joint statistical moment, namely the co-kurtosis tensor, which may better
identify directions in the state space that represent stiff dynamics. We first demonstrate the potential
of the proposed method using a synthetically generated dataset that is representative of typical
combustion simulations. Thereafter, we characterize and contrast against PCA, the performance of
the proposed method for datasets representing spontaneous ignition of premixed ethylene in a simple
homogeneous reactor and ethanol-fueled homogeneous charged compression ignition (HCCI) engine.
Specifically, we compare the low-dimensional manifolds in terms of reconstruction errors of the
original thermo-chemical state, species production rates, and heat release rate to assess the suitability
of the proposed co-kurtosis based dimensionality reduction technique. We find that the co-kurtosis
based reduced manifold represents the stiff chemical dynamics, as captured by the species production
rates and heat release, in the reacting zones of the system much better than PCA.
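A hedged sketch of the basic computation, on synthetic data with a few outlier samples: form the fourth-order joint moment (co-kurtosis) tensor of the centred data, unfold it into a matrix, and take its leading singular vectors as the principal directions. Scalings and outlier handling are simplified relative to the paper.

```python
import numpy as np

def cokurtosis_vectors(X, k):
    """X: (n_samples, n_vars) state data -> (n_vars, k) principal vectors."""
    Xc = X - X.mean(axis=0)
    n, d = Xc.shape
    K = np.einsum('ni,nj,nk,nl->ijkl', Xc, Xc, Xc, Xc) / n  # co-kurtosis tensor
    K_mat = K.reshape(d, d**3)                              # mode-1 unfolding
    U, _, _ = np.linalg.svd(K_mat, full_matrices=False)
    return U[:, :k]

rng = np.random.default_rng(2)
X = rng.standard_normal((500, 6))
X[:5] += 8 * rng.standard_normal((5, 6))   # outlier samples, e.g. "ignition kernels"
print(cokurtosis_vectors(X, 2).shape)      # (6, 2) reduced basis
```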
ERP Evidences of Rapid Semantic Learning in Foreign Language
Word Comprehension
Akshara Soman and Sriram Ganapathy
Department of Electrical Engineering
The event-related potential (ERP) of electroencephalography (EEG) signals has been well studied in
the case of native language speech comprehension using semantically matched and mis-matched
end-words. The presence of semantic incongruity in the audio stimulus elicits a N400 component
in the ERP waveform. However, it is unclear whether the semantic dissimilarity effects in ERP
also appear for foreign language words that were learned in a rapid language learning task. In this
study, we introduced the semantics of Japanese words to subjects who had no prior exposure to the
Japanese language. Following this language learning task, we performed ERP analysis using English
sentences of semantically matched and mis-matched nature where the end-words were replaced with
their Japanese counterparts. The ERP analysis revealed that, even with a short learning cycle, the
semantically matched and mis-matched end-words elicited different EEG patterns (similar to the
native language case). However, the patterns seen for the newly learnt word stimuli showed the
presence of a P600 component (delayed and opposite in polarity to those seen in the known language).
A topographical analysis revealed that P600 responses were predominantly observed in the parietal
region and in the left hemisphere. The absence of an N400 component in this rapid learning task
can be considered as evidence for its association with long-term memory processing. Further, the
ERP waveform for the Japanese end-words, prior to semantic learning, showed a P3a component
owing to the subject’s reaction to a novel stimulus. These differences were more pronounced in the
centro-parietal scalp electrodes.
Design and Development of Implantable Electrode Arrays for
Recording Signals from Rat’s Brain
Suman Chatterjee, Vikas V and Hardik J. Pandya
Department of Electronic Systems Engineering, IISc; Department of Neurosurgery, National Institute of Mental Health and Neurosciences
Electroencephalography (EEG) is a widely utilized electrophysiological monitoring technique to
record the electrical activities of the brain for both research and clinical applications. Recently, the
popularity of electrocorticography (ECoG), compared to EEG, has increased due to relatively higher
spatial resolution and improved signal-to-noise ratio (SNR). ECoG signals, the intracranial recording
of electrical signatures of the brain, are recorded by minimally invasive planar electrode arrays placed
on the cortical surface. Flexible arrays minimize the tissue damage and induce minimal inflammation
upon implantation. However, the commercially available implantable electrode arrays offer a poor
spatial resolution. Therefore, there is a need for an electrode array with a higher density of electrodes
to provide better spatial resolution for mapping brain surfaces. We have developed a biocompatible,
flexible, and high-density micro-electrode array (MEA) for a simultaneous 32-channel recording of
ECoG signals. Two OpenBCI Cyton Daisy Biosensing Boards were used for signal acquisition. In
acute experiments, we have demonstrated that the fabricated MEA can record the baseline ECoG
signals, the induced epileptic activities, and the recovered baseline activities after administering
an antiepileptic drug from the cortical surface of an anesthetized rat. We observed a significant increase
in amplitude (approximately ten times the baseline) of the brain signals as epilepsy was induced
after topical application of a convulsant. After intraperitoneal application of an antiepileptic drug, we
observed recovered baseline signals with a lower amplitude than the normal baseline signals. Though
ECoG signals can achieve better spatial resolution than EEG, they offer a limited understanding of
the activities at a brain depth where the signal originates. Recently, the implanted depth electrodes
have been used for acquiring signals (Local field potentials, LFPs) from deeper regions of the brain
to study the cortex, hippocampus, thalamus, and other deep brain structures. Our other work reports
the design and fabrication of a silicon-based 13-channel single-shank microneedle electrode array
to acquire and understand LFPs from a rat's brain. In acute in vivo experiments, LFPs from the
somatosensory cortex of anesthetized rats were recorded using an OpenBCI Cyton Daisy Biosensing
Board under normal, epileptic (chemically induced), and recovered (after application of an
antiepileptic drug) conditions. The recorded signals help us understand the response of the
different layers of cortical columns after applying a convulsant and an antiepileptic drug.
SPDE-NetII: Optimal stabilization parameter prediction with neural
networks
Sangeeta Yadav and Sashikumaar Ganesan
Department of Computational and Data Sciences
A one-size-fits-all numerical solution strategy for singularly perturbed partial differential equations
(SPPDEs) does not exist and remains an open challenge in computational science. A number of
stabilization techniques have been proposed over the years to obtain a stable solution for such
problems that is also free of spurious oscillations. However, most of the stabilization techniques
rely on an optimal value of the stabilization parameter, which unfortunately remains difficult to
evaluate. Although an analytical formula for the optimal value of the stabilization parameter exists
for a select few scenarios, such an expression for a general case does not exist. In this work, we
propose a deep neural network based approach for approximating the stabilization parameter for
an accurate and stable solution of the 2-dimensional convection dominated convection-diffusion
equation. In this technique, the stabilization parameter is approximated by a neural network by
minimizing the residual along with the crosswind term. We show that this approach outperforms
state-of-the-art PINN and VarNet neural network based PDE solvers.
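As a toy illustration of mapping problem parameters to a stabilization parameter with a neural network, the sketch below regresses to the classical 1D SUPG formula tau = (h / (2|b|)) (coth(Pe) - 1/Pe) with Pe = |b| h / (2 eps), one of the select scenarios where an analytical expression exists. The actual method minimizes a residual-plus-crosswind loss on the 2D problem rather than regressing to a formula; the network size and sampling ranges here are arbitrary.

```python
import torch
import torch.nn as nn

def tau_opt(h, b, eps):
    # Analytical optimal SUPG parameter for 1D convection-diffusion.
    pe = b * h / (2 * eps)
    return h / (2 * b) * (1 / torch.tanh(pe) - 1 / pe)

net = nn.Sequential(nn.Linear(3, 32), nn.Tanh(), nn.Linear(32, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-2)
for step in range(500):
    h = torch.rand(256, 1) * 0.09 + 0.01            # mesh size
    b = torch.rand(256, 1) * 0.9 + 0.1              # convection coefficient
    eps = 10 ** (-4 * torch.rand(256, 1) - 1)       # diffusion, 1e-5 .. 1e-1
    pred = net(torch.cat([h, b, torch.log10(eps)], dim=1))
    loss = ((pred - tau_opt(h, b, eps)) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
print(loss.item())
```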
Structural connectivity based markers for brain-aging and cognitive
decline
Bharat Richhariya
Department of Computer Science and Automation
Cognitive decline is common in the aging population. However, chronological age may not
necessarily be an accurate marker of brain health. Recently, several studies have employed
neuroimaging-based techniques to accurately determine brain health, also known as “brain age”.
Brain Age Gap Estimation (BrainAGE) seeks to accurately estimate the difference between chronological
age and brain age, with the aim of establishing trajectories of healthy aging. Accurate estimation
of the brain-age gap can aid in timely identification of markers of brain-related disorders. Here,
using structural (T1-weighted) magnetic resonance imaging (sMRI) and diffusion MRI (dMRI), we
seek to identify anatomical and connectivity-based markers of brain age, in a large cohort of healthy
participants from the TATA Longitudinal Study of Ageing (TLSA). We analyzed 23 standardized
cognitive test scores using factor analysis and observed that the variation across all scores could be
explained by two latent factors alone. Next, we used the T1-weighted images of each participant to
extract structural features using a pre-trained simple fully convolutional neural network (SFCN). We
then used these features to predict the brain age for each participant using a leave-k-participant-out
approach. Predicted brain age correlated significantly with chronological age (r = 0.76, p < 0.001)
with a mean absolute error (MAE) of 3.98 years. In parallel, we asked if anatomical connectivity
could also predict brain age. For this, we estimated the structural brain connectome for each
participant, and quantified brain-wide anatomical connectivity. We then used these connectivity
features in a multiple linear regression model with recursive feature elimination. Our regression
model robustly predicted brain age (r = 0.64, p < 0.001; MAE=5.00 years). We further pruned the
structural connectomes using state-of-the-art pruning algorithms, ReAl-LiFE and SIFT2, to obtain
more robust connectivity estimates. Here again, we observed similar results (ReAl-LiFE: r = 0.5,
p < 0.001, MAE=5.6 years; SIFT2: r = 0.57, p < 0.001, MAE=5.26 years). After pruning, the brain
regions critical for these age predictions involved the frontal cortex (posterior cingulate gyrus) and
the occipital cortex (lingual gyrus). We then combined the structural and the connectivity features to
predict age. Predicted brain age strongly correlated with chronological age (r = 0.76, p < 0.001;
MAE=3.94 years), perhaps largely driven by the structural features themselves. Finally, we asked
if the brain-age gap (δ) was indicative of participants’ cognitive performance. Indeed, brain-age
gap correlated significantly with both latent factors 1 and 2 (Factor 1: r = 0.19, p < 0.05; Factor 2:
r = 0.220, p < 0.005, controlling for age). dMRI-based connectivity and structural brain features
may thus serve as reliable markers of age-related cognitive decline in healthy individuals as well as
in cognitive decline due to neurological disorders such as Alzheimer’s disease.
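A minimal sketch of the regression step described above (multiple linear regression with recursive feature elimination, via scikit-learn), using a synthetic stand-in for the connectivity feature matrix and participant ages:

```python
import numpy as np
from sklearn.feature_selection import RFE
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(3)
X = rng.standard_normal((120, 400))            # participants x connectivity features
age = 55 + X[:, :10] @ rng.uniform(1, 2, 10) + rng.standard_normal(120)

# Recursive feature elimination around an ordinary least-squares model;
# step=0.2 drops 20% of the remaining features per iteration.
model = RFE(LinearRegression(), n_features_to_select=20, step=0.2)
pred = cross_val_predict(model, X, age, cv=5)  # held-out predictions per participant
mae = np.abs(pred - age).mean()
r = np.corrcoef(pred, age)[0, 1]
print(f"r = {r:.2f}, MAE = {mae:.2f} years")
```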
Sparsification of reaction-diffusion complex networks
Abhishek Ajayakumar and Soumyendu Raha
Department of Computational and Data Sciences
Complex networks are graphs with underlying dynamics cast upon them. Considering a reaction-diffusion
equation on the network, we try to sparsify, i.e., reduce the number of edges in, the network with
minimal effect on its dynamics. The resulting sparsified graph would then
produce a response that is an ε-approximation to the response produced by the original
graph. In the first part of our work, we provide a framework to sparsify a reaction-diffusion complex
network via the adjoint method for data assimilation, using dimensionality reduction techniques
like proper orthogonal decomposition (POD), or the Karhunen-Loève decomposition. The second part of
our work focuses on preserving the diffusion equation, based on the Laplacian matrix of the graph,
using a second-order conic programming (SOCP) formulation.
Graph sparsification is an area of interest in mathematics and computer science. At first, we start
by casting the problem of sparsification of the complex network as a data assimilation problem by
considering the snapshot reaction-diffusion observations in a reduced subspace with a reduced order
model dynamics modelled on the graph using the principles of POD. We incorporate connectivity
constraints in the traditional adjoint method cost function using the barrier function approach in
optimization to preserve the new network’s stability. We also use regularization terms in the cost function
to avoid overfitting. The weight vector found is used to construct the new Laplacian matrix.
In the later part of our work, we use estimates based on sampling edges by effective resistances
to find upper bounds on the edge weights, which form constraints of the SOCP. We also impose
non-negativity of the edge weights, along with certain cut constraints.
We use concepts from the theory of compressed sensing to formulate the objective
function of the SOCP, with several conic constraints coming from the snapshot observations. We are
investigating ways to make this approach computationally feasible using techniques like random
projections to reduce the number of constraints in the SOCP.
We evaluated our procedures on several random graphs and obtained graphs with a reduced number
of edges in each case.
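For context, here is a hedged sketch of classical effective-resistance edge sampling (in the style of Spielman and Srivastava), on which the effective-resistance-based bounds above build: edges are kept with probability proportional to w_e * R_eff(e) and reweighted to stay unbiased. The graph, weights, and sample count are synthetic.

```python
import numpy as np

def effective_resistances(L, edges):
    Lp = np.linalg.pinv(L)                      # Laplacian pseudoinverse
    return np.array([Lp[u, u] + Lp[v, v] - 2 * Lp[u, v] for u, v in edges])

def sparsify(n, edges, w, n_samples):
    L = np.zeros((n, n))
    for (u, v), we in zip(edges, w):            # assemble weighted Laplacian
        L[u, u] += we; L[v, v] += we; L[u, v] -= we; L[v, u] -= we
    p = w * effective_resistances(L, edges)
    p /= p.sum()
    rng = np.random.default_rng(4)
    idx = rng.choice(len(edges), size=n_samples, p=p)
    new_w = np.zeros(len(edges))
    for i in idx:
        new_w[i] += w[i] / (n_samples * p[i])   # reweight to keep L unbiased
    return new_w                                # many entries are exactly zero

n = 20
rng = np.random.default_rng(5)
edges = [(i, j) for i in range(n) for j in range(i + 1, n) if rng.random() < 0.4]
w = rng.uniform(0.5, 2.0, len(edges))
print((sparsify(n, edges, w, 60) > 0).sum(), "of", len(edges), "edges kept")
```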
High-Throughput Computational Techniques for Discovery of Application-Specific
Two-Dimensional Materials
Arnab Kabiraj and Santanu Mahapatra
Department of Electronic Systems Engineering
Two-dimensional (2D) materials have revolutionized the field of materials science since the successful
exfoliation of graphene in 2004. Consequently, the advances in computational science have resulted in
massive generic databases for 2D materials, where the structure and the basic properties are predicted
using density functional theory (DFT). However, discovering material for a given application
from these vast databases is a challenging feat. As part of my PhD, we have developed various
automated high-throughput computational pipelines combining DFT and machine learning (ML) to
assess the suitability of 2D materials for specific applications. Methods have also been developed
to draw valuable insights into what makes these materials suitable for these applications. The
assessed properties include suitability for energy storage in the form of Li-ion battery (LIB) and
supercapacitor electrodes, along with high-temperature ferromagnetism and the presence of exotic
charge density waves (CDW). The ultra-large surface-to-mass ratio of 2D materials has made them
an ideal choice for electrodes of compact LIBs and supercapacitors. We combine explicit-ion and
implicit-solvent formalisms to develop the high-throughput pipeline and define four descriptors
to map “computationally soft” single-Li-ion adsorption to “computationally hard”
multiple-Li-ion-adsorbed configurations located at global minima for insight finding and rapid
screening. Leveraging this large dataset, we also develop crystal-graph-based ML models for the
accelerated discovery of potential candidates. A reactivity test with commercial electrolytes is
further performed for wet experiments. Our unique approach, which predicts both Li-ion storage
and supercapacitive properties and hence identifies various important electrode materials common to
both devices, may pave the way for next-generation energy storage systems. The discovery of 2D
ferromagnets with a high Curie temperature is challenging since calculating the Curie point involves
a complex, manually intensive process. We develop a Metropolis Monte-Carlo based pipeline and conduct a
high-throughput scan of 786 materials from a database to discover 26 materials with a Curie point
beyond 400 K. For rapid data mining, we further use these results to develop an end-to-end ML
model with generalized chemical features through an exhaustive search of the model space as well
as the hyperparameters. We discover a few more high Curie point materials from different sources
using this data-driven model. CDW materials are an important subclass of two-dimensional materials
exhibiting significant resistivity switching with the application of external energy. We combine a
first-principles-based structure-searching technique and unsupervised machine learning to develop
a high-throughput pipeline, which identifies CDW phases from a unit cell with an inherited Kohn
anomaly. The proposed methodology not only rediscovers the known CDW phases but also predicts
a host of easily exfoliable CDW materials (30 materials and 114 phases) along with associated
electronic structures.
A scalable asynchronous computing approach for discontinuous-Galerkin
method based PDE solvers
Shubham K. Goswami, Konduri Aditya
Department of Computational and Data Sciences
Due to the ability to provide high-order accurate solutions in complex geometries, the discontinuous-Galerkin
(DG) method has received broad interest in developing partial differential equation (PDE) solvers,
particularly for equations with hyperbolic nature. In addition, the method also provides high
arithmetic intensity and, in an explicit formulation, avoids global linear solves, making it suitable
for high-performance computing platforms. However, massively parallel simulations based on the
DG method show poor scalability of solvers. This is mainly attributed to data communication and
synchronization between different processing elements (PEs). Recently, an asynchronous computing
approach based on finite differences was proposed, which relaxes communication/synchronization
at the mathematical level. In this approach, computations at PEs can proceed regardless of the
communication status between the PEs, thus improving the scalability of PDE solvers. In this work,
we extend the asynchronous computing approach to the DG method for improving its scalability
at extreme scales. We investigate the numerical properties of standard DG schemes under relaxed
communication synchronization and show that their accuracy drops to first order. Subsequently, we
develop new asynchrony-tolerant fluxes that result in solutions of any arbitrary order of accuracy.
Results from simulations of one-dimensional linear and nonlinear equations will be presented to
demonstrate the accuracy of the asynchronous DG method.
Micro-Watts Analog Processor for Machine Learning at the Edge
Pratik Kumar
Department of Electronic Systems Engineering
Machine learning has become a part of our everyday lives: from social media that learn our
customized preferences over time to self-driving cars that demand reliable accuracy. However,
implementing such learning techniques on resource-constrained edge devices poses a significant challenge.
These smart algorithms feed on huge data sets and require complex networks that demand extensive
hardware and power. State-of-the-art digital implementations offer a boost in performance while
trading it off against area and power. However, the potential of analog circuits to provide energy and
performance gains stands unparalleled despite their low immunity to non-idealities. In this regard,
we present the first in-house fully analog AI processor based on a novel shape-based approximate
computing framework that accounts for the non-ideal effects. This AI processor is fabricated on
180 nm CMOS technology, can be operated across different regimes of MOS operation, and is also
scalable across temperatures. The processor's operating current can be tuned over six orders of magnitude
(1 µA down to 1 pA), thus providing a wide choice of power budgets. We utilized the novel computational blocks
to demonstrate standard classification and regression tasks.
Nonlinear nanophotonics in a two-dimensional material
Rabindra Biswas
Department of Electrical Communication Engineering
Two-dimensional (2D) materials have emerged as an excellent platform for building ultra-thin
nonlinear photonics devices due to their high refractive index and strong nonlinear response. These
materials are known to have layer-dependent, electrically tunable optical properties with relaxed
lattice and thermal mismatch requirements. 2D materials can also be used in various applications,
such as wavelength converters, saturable absorbers, optical modulators, and parametric down-converters.
Firstly, we characterized the nonlinear properties of multi-layered Tin Diselenide (SnSe2). We
investigated up-conversion of 1550 nm incident light using third-harmonic generation (THG) in
multi-layered SnSe2, with the help of a multiphoton nonlinear microscopy setup. We have also
studied its thickness dependence by simultaneously acquiring spatially resolved images in the forward
and backward propagation direction. Next, we demonstrated strong second-harmonic generation
(SHG) from a 2H polytype of multilayer Tin Diselenide. In the absence of excitonic resonance, the
strong SHG from SnSe2 is attributed to the dominant band to band transition close to the indirect
band edge. The SHG intensity was compared with that of monolayer molybdenum disulphide (MoS2)
and found to be ≈ 34 and 3.4 times higher for excitation wavelengths of 1040 nm and 1550
nm, respectively. This work highlights the applicability of multi-layered 2D materials for building
photonic devices despite having no excitonic resonance.
Next, to make use of the strong nonlinear response, we numerically and experimentally demonstrated
an optimized multilayer Fabry-Perot based dual-resonance structure to simultaneously enhance the
fundamental and second-harmonic fields. This, in turn, results in a strong SHG signal generated from
a multilayer Gallium Selenide (GaSe). The optimal vertical superlattice structure, obtained using
a hybrid evolutionary optimization numerical approach, results in ≈ 400 times enhancement of the
SHG signal in the backward direction, compared to a single layer of GaSe on a 300 nm Si-SiO2 substrate.
The planar geometry of the optimized structure makes it perfectly compatible with CMOS back-end
integration.
Optical System Design for Indoor Visible Light Communication
system
Faheem Ahmad
Department of Electrical Communication Engineering
Indoor visible light communication (VLC) is seen as a promising high bandwidth access technology
for emerging heterogeneous wireless networks for meeting the increasing data bandwidth requirements
from mobile personal devices. Stand-alone VLC links making use of white or multicolor light-emitting
diodes (LEDs), blue laser down-converted white light, and multicolor lasers as transmitters have
been used to demonstrate multigigabit communication performance. In our lab, we work on optical
system design for indoor gigabit-class VLC systems: VLC transmitters that serve both illumination
and communication, path-loss optimization for variable link lengths, mechanical and non-mechanical
beam steering, and mobile receiver tracking.
Path-Loss Optimization: We discuss an optical ray-tracing approach for minimizing path-loss in a
variable link length indoor blue laser down-converted white light visible light communication
(VLC) system. For a given link length, minimum path-loss is achieved by finding optimum
positions of transmitter and receiver lenses relative to phosphor and detector respectively such that
collection efficiency is maximized. The designed VLC system is experimentally implemented for two
different optimized link lengths of 25 and 300 cm. The illumination beam profile and propagation
characteristics are found to be in good agreement with optical simulations. Communication
experiments with on-off modulation at 1.5 Gbps achieved a BER of 3 × 10^-3 for the optimized
link, which is below the forward-error-correction threshold.
Closed-Loop Non-Mechanical Beam Steering System: In this experiment, we demonstrate a
hybrid Laser-LED transmitter module for indoor optical wireless communication with closed-loop,
non-mechanical beam steering capability. The hybrid transmitter module consists of a near infrared
laser diode for data communication and white LED array for illumination, combined on a diffuser
surface. Dual-axis non-mechanical beam steering of the laser beam is implemented using two
off-centered liquid lenses. The diffused laser beam directed towards the receiver is steered over an
angular range of -7.6° to 7.6° (-1.7° to 2.6°) along the horizontal (vertical) axes, spanning -200 to
200 mm (-44 to 67 mm) at the receiver placed 1.5 m from the transmitter. M-QAM/OFDM in
combination with adaptive bit- and power-loading is utilized to achieve a total data throughput of
5.15 Gbps for the diffused laser beam with steering. Laser intensity levels as measured at the receiver
plane are kept below the maximum permissible exposure limit for indoor usage across the entire
beam steering range. Closed-loop beam steering is also demonstrated by scanning the transmitted
laser beam horizontally, measuring the signal strength using a low bandwidth photodetector and
locking the laser beam to the receiver position for data communication. Such hybrid transmitters
offer the benefit of decoupling the data communication and illumination requirements of the indoor
optical link, thereby tailoring each individual light emitter's performance to its specific use-case.
Trion-trion annihilation in monolayer WS2
Suman Chatterjee and Kausik Majumdar
Department of Electrical Communication Engineering
Strong Coulomb interaction in monolayer transition metal dichalcogenides can facilitate nontrivial
many-body effects among excitonic complexes. Many-body effects like exciton-exciton annihilation
(EEA) have been widely explored in this material system. However, a similar effect for charged
excitons (or trions), that is, trion-trion annihilation (TTA), is expected to be relatively suppressed
due to repulsive like-charges, and has not been hitherto observed in such layered semiconductors. By
a gate-dependent tuning of the spectral overlap between the trion and the charged biexciton through
an “anti-crossing”-like behaviour in monolayer WS2, here we present an experimental observation
of an anomalous suppression of the trion emission intensity with an increase in gate voltage. The
results strongly correlate with time-resolved measurements, and are inferred as direct evidence
of a nontrivial TTA resulting from non-radiative Auger recombination of a bright trion, and the
corresponding energy resonantly promoting a dark trion to a charged biexciton state. The extracted
Auger coefficient for the process is found to be tunable ten-fold through a gate-dependent tuning of
the spectral overlap.
Astability versus Bistability in van der Waals Tunnel Diode for Voltage
Controlled Oscillator and Memory Applications
Nithin Abraham and Kausik Majumdar
Department of Electrical Communication Engineering
Van der Waals (vdW) tunnel junctions are attractive due to their atomically sharp interfaces, gate
tunability, and robustness against lattice mismatch between the successive layers. However, the
negative differential resistance (NDR) demonstrated in this class of tunnel diodes often exhibits
noisy behaviour with low peak current density, and lacks robustness and repeatability, limiting their
practical circuit applications. Here we propose a strategy of using monolayer WS2 (1L-WS2) as an optimum tunnel
barrier sandwiched in a broken gap tunnel junction of highly doped black phosphorus (BP) and
SnSe. We achieve high yield tunnel diodes exhibiting highly repeatable, ultra-clean, and gate tunable
NDR characteristics with a signature of intrinsic oscillation, and a large peak-to-valley current ratio
(PVCR) of 3.6 at 300 K (4.6 at 7 K), making them suitable for practical applications. We show
that the thermodynamic stability of the vdW tunnel diode circuit can be tuned from astability to
bistability by altering the constraint through choosing a voltage or a current bias, respectively. In the
astable mode under voltage bias, we demonstrate a compact, voltage controlled oscillator without the
need for an external tank circuit. In the bistable mode under current bias, we demonstrate a highly
scalable, single-element one-bit memory cell that is promising for dense random access memory
applications in memory-intensive computation architectures.
Stochastic Algorithms for Radial Point Interpolation Method Based
Computational Electromagnetic Solvers
Kiran R
Department of Electrical Communication Engineering
A time-domain stochastic radial point interpolation method (SRPIM) is developed for uncertainty
quantification of electromagnetic systems. Fabrication processes cause uncertainty in the dielectric
constant of engineered systems; similar variations in properties are evident in biological tissues.
Derivatives of field quantities in Maxwell's equations are obtained using radial basis functions, and
stochasticity in the dielectric constant is incorporated through polynomial chaos expansion (PCE).
SRPIM is further made faster by utilizing the linearization of products of Hermite polynomials,
which reduces the PCE coefficient matrix and thereby eliminates a large number of multi-dimensional
integrations. This avoids considerable computation in the stochastic implementation, and the
computational gain increases with the dimensionality of the problem. This is validated on the example
of an implanted cardioverter defibrillator, where the effect of electromagnetic interference
from a mobile phone placed in close proximity is modeled and the uncertainty is quantified. Such
uncertainty quantification may help regulatory agencies issue appropriate guidelines for users.
The accuracy of these simulations is validated using the Kolmogorov-Smirnov test, with Monte Carlo
(MC) simulation as the reference. The computation times of the proposed methods are found to be
significantly better than those of MC. The proposed methods perform well even for large stochastic variations.
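A minimal sketch of this validation step, assuming two synthetic sample sets standing in for the surrogate (PCE-based) and Monte Carlo outputs, using SciPy's two-sample Kolmogorov-Smirnov test:

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(6)
mc_samples = rng.normal(1.0, 0.1, 10000)    # reference Monte Carlo outputs
pce_samples = rng.normal(1.0, 0.1, 2000)    # surrogate-model outputs
stat, p_value = ks_2samp(mc_samples, pce_samples)
print(f"KS statistic = {stat:.3f}, p = {p_value:.3f}")  # large p: distributions agree
```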
Suppression of Higher Order Modes in a Four Element CSRR Loaded
Multi-Antenna System and An Overview of Full-Duplex Antenna Design
Jogesh Chandra Dash
Department of Electrical Communication Engineering
A compact four-port dual-band microstrip-patch sub-array antenna with suppressed higher order
modes (HOMs) for Massive-MIMO application is proposed. First, complementary split ring resonator
(CSRR) loading is used on a square microstrip antenna to achieve simultaneous miniaturization
and dual-band response. Next, the HOMs in the proposed CSRR loaded MIMO configuration are
analysed using an equivalent circuit model as well as surface current distribution plots. By placing a
single shorting post close to the antenna center line, these HOMs of the four-port dual-band MIMO
antenna are then suppressed, while maintaining satisfactory mutual coupling (< −11 dB) and
impedance matching (< −15 dB) performance in the operating band. Further, considering the effect
of mutual coupling in a multi-antenna system for Full-Duplex (FD) communication, we propose a
closely spaced two-port microstrip patch antenna system with significant isolation enhancement
(> 90 dB), which can be deployed for MIMO as well as FD transceiver systems. We deploy a
resonant combination of rectangular defected ground structure (DGS) and a near-field decoupling
structure (NFDS) in the vicinity of a closely spaced (inter-element spacing = 0.01λ0) two-port
microstrip patch antenna system at 5.85 GHz. This drastically reduces the port-to-port mutual
coupling (< −90 dB), which can help in self-interference cancellation from an FD point of view without
any additional circuitry, while still preserving desired impedance matching performance (< −15
dB). The proposed concepts are validated by full-wave simulation in CST Microwave Studio, as well
as experimental results on a fabricated prototype. Moreover, MIMO performance metrics such as total
active reflection coefficient (TARC), envelope correlation coefficient (ECC), and channel capacity loss
(CCL) are analysed using simulation and measurement.
A point-of-care lab-on-PCB for detection of protein-protein interactions
using bioimpedance measurements
Anil Vishnu G K, Anju Joshi, Hari R. S., Aniket Das Gupta, Siddhartha Sinha Roy, and
Hardik J. Pandya
Department of Electronic Systems Engineering
Accurate detection of sub-nanogram levels of proteins from body fluids and tissues is a cornerstone
of clinical diagnostics and guiding treatment strategies. Detection of pathological levels of specific
proteins finds applications in infectious diseases, cancer diagnostics, and cardiovascular diseases,
to name a few. The existing gold standard techniques for highly sensitive detection are the
enzyme-linked immunosorbent assay (ELISA) and reverse transcriptase-polymerase chain reaction
(RT-PCR) tests. In contrast, colorimetry-based lateral flow assays are used for point-of-care rapid
testing. ELISA and RT-PCR, though highly sensitive and specific, are time-consuming, expensive,
and require trained personnel to perform the tests. However, colorimetry-based rapid tests have
high false-negative rates and can only detect highly expressed levels of proteins. We report the
development and validation of a point-of-care system and a novel methodology for high-throughput
and sensitive detection of protein-protein interactions (antigen-antibody binding) by electrical
impedance sensing. Microchips fabricated on industry-compatible ENIG and soft gold finish printed
circuit board (PCB) are chemically modified for enhanced antibody immobilization and antigen
capture by the antibodies. The microchips are interfaced with a field-programmable gate array
(FPGA)-based bioimpedance measurement module for detecting antigen-antibody binding events
through changes in measured impedance and phase. A statistically significant reduction in impedance
with respect to the control (only antibody) at 10 kHz was observed for analyte concentrations from
40 pg to 200 pg (30.1 ± 3.56 Ω (40 pg), 44.73 ± 5.63 Ω (120 pg), and 66.5 ± 6.1 Ω (200 pg)). The
assay has a limit of detection of 40 pg and can detect antigens with microlitre (20-40 µL) volumes
of the analyte.
Sensorized Catheter for Quantitative Assessment of the Airway
Caliber
Alekya B and Hardik J.Pandya
Department of Electronic Systems Engineering
This work reports the design and development of a sensorized intubation catheter for chronic airway
management. Central airway obstruction remains a diagnostic and therapeutic challenge in clinical
practice. Severely constricted airways often warrant continuous monitoring as resistance to flow
increases to the fourth power with every degree of reduction in tracheal patency. The complexities and
impediments with conventional diagnostic tools such as misclassification on the degree of narrowing
and long radiation exposure make them sub-optimal for diagnosis. Therefore, it is of utmost clinical
interest to develop tools and methods that can provide diagnostic solutions with a fast turnaround
time. The catheter is integrated with an array of flow and tactile sensors along with a smart helical
spring actuator for manoeuvring the catheter. Flow distribution is measured in excised sheep tracheal
tissues at 15, 30, 50, 65, and 80 l/min for multisegmental and varying grades of tracheal stenosis.
Even mild reduction in lumen area generated unique peaks corresponding to the obstruction site. For
a 50% tracheal obliteration, the sensor closest to stenosis showed a 2.4-fold increase in velocity when
tested for reciprocating flows. From axial compression load tests, the stiffnesses of tracheal segments
such as the cartilage and smooth muscle tissue, measured using the tactile sensor, are 23 ± 1.39 N/m and
14.02 ± 0.76 N/m, respectively, at a 30% strain rate. Also, the tissue relaxation behavior and its regional dependence
recorded using the sensor reveal smooth muscle tissues’ highly compliant behaviour. While the
flow patterns allow for locating stenosis, the tactile sensors can determine the target tissue stiffness.
Quantitative evaluation of alteration in the airway column biomechanics facilitates targeted diagnosis
and expedites on-site decision making.
Design and Development of an Intraoperative Probe to Delineate
Cancer from Adjacent Normal Breast Biopsy Tissue
Arif Mohd.Kamal and Hardik J.Pandya
Department of Electronic Systems Engineering
This work reports the design and development of a diffuse reflectance spectroscopy (DRS) based
intraoperative handheld probe (Multispectral-Pen) to characterize cancerous tissues from adjacent
normal tissues and accurately determine the tumor margin. The assessment of tumor margin is a
crucial challenge during breast-conserving surgery. The clinician extracts the malignant core region
and a margin (up to a few millimeters) from the adjacent normal regions to ensure complete tumor
resection. The frozen section-based histopathological analysis guides the clinician to confirm a
clear margin. Even though highly accurate, this technique is time-consuming, requires additional
sample preparation, is prone to sampling errors, and is expensive. We have developed a novel
handheld probe that can study the changes in cancerous tissue compared to adjacent normal tissue
based on the detected voltage. Higher detected voltages were observed for cancerous tissue compared
to the adjacent normal tissue at operating wavelengths of 850 nm (3.58 ± 0.07 vs. 2.82 ± 0.12),
940 nm (3.89 ± 0.06 vs. 3.19 ± 0.10), and 1050 nm (3.78 ± 0.04 vs. 3.32 ± 0.07), respectively.
The detected voltage values can be further used to
quantify the absorption and reduced scattering coefficients of the malignant and adjacent normal
tissues, a basis for on-site tumor delineation.
DC Bus Second Harmonic LC Filter with Solid-State Tuning Restorer
Anwesha Mukhopadhyay
Department of Electrical Engineering
Single-phase voltage source converters (VSCs) find wide application as inverters that integrate
renewable sources, e.g., solar PV, fuel cells, or battery storage systems, into the grid. Also, different
variable frequency drives, e.g., traction drives in electric locomotives, use single-phase VSC as the
front end stage. However, there is always a mismatch between dc side power and instantaneous ac
side power in single-phase VSCs. The difference power is oscillatory, with a frequency twice the ac
side frequency. This oscillatory power affects the health and lifespan of dc sources adversely and
causes torque oscillations in drives applications. To prevent this, various passive, active and hybrid
filtering techniques are adopted to handle the difference power. A passive filter, consisting only of
a capacitor, necessitates a large capacitor bank to keep the double-frequency voltage ripple across the dc source
within limits. As traditionally used electrolytic capacitors have reliability concerns, the use of
more reliable plastic film capacitors appears reasonable in applications demanding greater
availability. However, the large capacitance requirement often makes the filter size impracticable to
realize with film capacitors. A passive tuned LC filter reduces the capacitance requirement,
but can become ineffective if it gets detuned due to variation in filter parameters or grid frequency.
As a result, the voltage across the dc source can exhibit a significant double-frequency ripple. Active
filters offer consistent and superior performance at the cost of additional switches, usually of ratings
comparable to those of the main VSC. Also, the main VSC functions satisfactorily only as long as
the added switches are functional. The above concerns are addressed by the proposed hybrid filter
configuration, which employs an auxiliary converter to enhance the performance of LC-tuned filters
while using switches of much lower ratings. Moreover, the failure or non-availability of the auxiliary
converter does not completely disrupt the operation of the main converter. The performance of
the proposed filter is verified in an experimental prototype which shows effective second harmonic
filtering.
Maximum Current Cell Voltage Equalization with Phase-shift Based
Control for Multi-active Half-bridge Equalizer
Manish Tathode
Department of Electrical Engineering
Lithium-ion battery stacks maintain the continuity of power supply in solar-powered satellites.
The series connected battery stacks are often operated at high charging and discharging current
levels to minimize the weight. The initial imbalance in the individual cell voltages of the stack,
which can be due to manufacturing tolerances, different operating temperatures, etc., grows faster
as the number of the high-current-charge-discharge cycles increases. The increased imbalance
results either in the early failure of the undercharged cells or in the under-utilization of overcharged
cells. Voltage equalization of the stack is performed to bring all the cell voltages in a narrow band
by charging the undercharged cells from the overcharged cells. Out of many equalization methods,
multicell-to-multicell equalization offers a higher rate by simultaneously charging/discharging all
the un-equalized cells. The Phase-Shifted Multi-Active Half-Bridge (PS-MAHB) equalizer is one of
the multicell-to-multicell, open-loop equalizers. It maintains a high equalization current
throughout the equalization, offering fast equalization unlike the commonly known switched-capacitor
and multi-winding-transformer-based equalizers. A dynamic phase-shift-based control is proposed to
maintain the equalization current through the cells at its maximum throughout the cell voltage variation
during the charge-discharge cycle. The proposed control increases the rate of equalization still
further than the existing static phase-shift-based control. Simulations verify the higher rate of
equalization offered by the PS-MAHB equalizer as compared to the commonly known switched-capacitor
and multi-winding-transformer-based equalizers with the existing control, and the further increased
rate with the proposed control.
Experimental Study of Sensitivity of IGBT Turn-on and Turn-off Delay
Times and their Sub-intervals
Subhas Chandra Das
Department of Electrical Engineering
This paper examines the junction temperature sensitivity of the turn-on and turn-off delay times
during IGBT switching transitions. The study is carried out with experimental measurements
of switching transitions on different IGBTs of comparable ratings. For each device test, the
junction temperature is varied in the range from -35°C to 125°C. The study, through a large
body of experimental data, confirms that the turn-off delay time, td,off, increases with junction
temperature, Tj. However, unlike td,off, the turn-on delay time, td,on, shows divergent trends
for different IGBT devices. Further, td,on is split into two intervals, namely td,on,1 and td,on,2.
During the first interval, td,on,1, the gate voltage rises from the IGBT off-state gate voltage,
VGE(off), to 10% of the on-state gate voltage, i.e., 0.1VGE(on). During the second interval,
td,on,2, the gate voltage rises from 0.1VGE(on) to the threshold voltage, vth. The experimental
study shows that the delay time td,on,1 marginally increases with increase in Tj, while td,on,2
reduces significantly with increase in Tj. The experimental study suggests that td,on,1 could be
used as a temperature-sensitive parameter for indirect measurement of IGBT junction temperatures.
Stored Energy-Limited High-Voltage Power Supply for Travelling
Wave Tube Application
P Sidharthan
Department of Electrical Engineering
Travelling Wave Tubes are amplifiers capable of operating over multiple octave bandwidths, finding
applications in civilian communication, weather radars, air traffic control, etc., and for military
requirements like search radars, electronic warfare, missile guidance and tracking, etc. On account
of metal-to-ceramic joints with high voltage present across them inside the tube's vacuum envelope,
there exists a partial or severe arcing possibility during the operation of the TWT. Therefore, the
high voltage power supply powering the TWT is designed to withstand and limit the energy that
may be discharged through the tube under expected operating conditions to prevent temporary or
permanent damage arising out of high-voltage arcing. This presentation describes the development
of a compact power supply for a TWT demanding high-voltage DC power of the order of 500 W at
4.3 kV for its operation. The development of a compact high-voltage planar transformer, techniques to
contain the EMI through the physical layout of the power converter switches, soft-switching, power
line decoupling, selection of rectifiers for low loss and ripple, etc., are briefly touched upon in the
presentation. The presentation also touches upon the challenges in using the latest GaN MOSFETs
in high frequency-switched power converters from the output voltage ripple and EMI generation
perspectives.
A Unified Modeling Approach for a Triple Active Bridge Converter
Vishwabandhu Uttam
Department of Electrical Engineering
This talk introduces a systematic methodology to develop a unified model for a multi-port Triple
Active Bridge (TAB) converter. The proposed model accurately predicts the AC port currents in
a TAB converter. The model can be used to compute performance metrics of the TAB converter
such as the peak and RMS currents at the AC ports, and the average currents at the DC ports. One
of the features of the proposed model is that it can predict the impact of transformer magnetizing
inductance on the AC and DC port currents. The proposed model is valid for all operating modes
and modulation strategies of the TAB converter. The accuracy of the model has been verified against
extensive switching circuit simulations for a variety of operating conditions. Experimental results
from a TAB converter laboratory prototype are also presented to showcase the impact of magnetizing
inductance variation on TAB converter performance.
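As an illustration of the kind of metrics the model provides, the short sketch below computes the
peak and RMS values of a sampled AC-port current over one switching period; the waveform and
switching frequency used here are hypothetical placeholders, since the proposed model predicts
these currents analytically.

import numpy as np

# Hypothetical sampled AC-port current over one switching period (assumed)
f_sw = 100e3  # switching frequency, Hz
t = np.linspace(0.0, 1.0 / f_sw, 1001)
i_ac = 8.0 * np.sin(2 * np.pi * f_sw * t) + 2.0 * np.sin(6 * np.pi * f_sw * t)

i_peak = np.max(np.abs(i_ac))       # peak AC-port current, amperes
i_rms = np.sqrt(np.mean(i_ac**2))   # RMS AC-port current, amperes
print(f"peak = {i_peak:.2f} A, RMS = {i_rms:.2f} A")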
Minimisation of Switched-Capacitor Voltage Ripple in a 12-Sided
Polygonal Space Vector Structure for Induction Motor Drives
Mohammed Imthias and Umanand L
Department of Electronic Systems Engineering
A multilevel 12-sided polygonal voltage space vector generation scheme for variable-speed drive
applications operating from a single DC link requires an enormous capacitance value for the cascaded
H-bridge (CHB) filters when operated at lower speeds. The multilevel 12-sided polygonal structure
is obtained in existing schemes by cascading a flying capacitor inverter with a CHB. This paper
proposes a new scheme to minimise the capacitance requirement for full-speed operation by
creating vector redundancies using modular and equal voltage CHBs. Also, an algorithm has
been developed to optimise the selection of vector redundancies among the CHBs to minimise the
floating capacitors’ voltage ripple. The algorithm computes the optimal vector redundancies by
considering the instantaneous capacitor voltages and the phase currents. The effectiveness of the
proposed algorithm is verified in both simulation and experiment.
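A minimal sketch of the kind of redundancy selection described above: at each switching instant,
choose the redundant state whose predicted capacitor voltages deviate least from nominal, given the
instantaneous capacitor voltages and phase currents. The data structures and the worst-deviation cost
used here are illustrative assumptions, not the paper’s algorithm.

def select_redundancy(redundancies, v_cap, v_nom, i_ph, dt, C):
    # redundancies: list of states; each maps capacitor index -> +1/-1/0,
    # the sign with which the phase current charges that capacitor
    # v_cap, v_nom: present and nominal capacitor voltages (volts)
    # i_ph: phase current seen by each capacitor (amperes)
    best_state, best_cost = None, float("inf")
    for state in redundancies:
        # predict each capacitor voltage after applying this state for dt
        cost = max(
            abs(v_cap[k] + state.get(k, 0) * i_ph[k] * dt / C - v_nom[k])
            for k in range(len(v_cap))
        )
        if cost < best_cost:
            best_state, best_cost = state, cost
    return best_state

# Example: two floating capacitors, three redundant states for one vector
states = [{0: +1, 1: -1}, {0: -1, 1: +1}, {0: 0, 1: 0}]
print(select_redundancy(states, v_cap=[101.0, 98.5], v_nom=[100.0, 100.0],
                        i_ph=[5.0, -4.0], dt=50e-6, C=2.2e-3))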
An Investigation on Increasing the Modulation Range Linearly in
Hybrid Multilevel Inverter Fed Induction Machine Drives Regardless of Load Power Factor
Souradeep Pal and Umanand L
Department of Electronic Systems Engineering
In the last decade, multilevel inverters (MLIs) have become very popular in high-power applications
such as variable-speed drives, high-voltage DC transmission, renewable energy, and electric vehicles.
They offer many advantages, such as low harmonic distortion in voltage and current, low dv/dt across
the motor phase terminals, smaller filters, and operation at low switching frequency. Three popular
MLI topologies - the neutral-point-clamped (NPC), flying-capacitor (FC), and cascaded H-bridge
(CHB) inverters - are widely discussed in the literature. MLIs can also be realised by a dual-inverter
structure feeding an open-end winding induction motor (OEWIM), where the two ends of the stator
windings are connected to two separate inverters. Among several dual-inverter topologies,
recently, the dual inverter with a single DC-link has become popular. Here the primary inverter is
supplied by a DC link, and the secondary inverter is fed from a floating capacitor. This configuration
aids in increasing the phase voltage levels with a reduced number of switches besides the benefit
of reliability and fault-tolerant capability. These two inverters together can generate a combined
hexagonal multilevel space vector structure (SVS) of radius Vdc similar to a 2-level inverter single
hexagonal structure feeding the IM from one end using a DC-link voltage of Vdc. For any hexagonal
SVS, the maximum peak phase fundamental voltage that can be attained from a DC link of Vdc is
0.637Vdc (corresponding to full base-speed operation of the IM drive), when the inverter operates
in the six-step mode. In contrast, generic SVPWM operation can achieve a peak phase fundamental
of only 0.577Vdc at the edge of the linear modulation range (LMR). Here, the maximum radius of
the rotating voltage space vector (SV) that can be inscribed within the hexagonal SVS is 0.866Vdc.
Increasing the modulation range above 0.577Vdc results in lower-order harmonics (predominantly
the 5th, 7th, 11th, and 13th) appearing in the motor phase voltage. These harmonic contents cause
low-frequency torque pulsations that may even break the motor shaft. Hence, these lower-order
harmonics need to be eliminated to operate the motor seamlessly up to full base speed.
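For reference, the two limits quoted above follow from standard space-vector arithmetic. With the
hexagon radius taken as Vdc, as in this abstract, and the usual (2/3)-scaled space-vector convention,
a sketch of the relations is:

\[
  \hat{V}_{ph,\mathrm{six\text{-}step}} = \frac{2}{\pi} V_{dc} \approx 0.637\,V_{dc},
\]
\[
  R_{\max} = \frac{\sqrt{3}}{2} V_{dc} \approx 0.866\,V_{dc}, \qquad
  \hat{V}_{ph,\mathrm{SVPWM}} = \frac{2}{3} R_{\max} = \frac{V_{dc}}{\sqrt{3}} \approx 0.577\,V_{dc},
\]

where R_max is the radius of the largest circular space-vector trajectory inscribed in the hexagonal SVS.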
In this work, a 10-level dual-inverter scheme is investigated to eliminate all the lower-order
harmonics (5th, 7th, 11th, 13th, etc.) while extending the LMR from 0.577Vdc to 0.637Vdc
peak phase fundamental regardless of load power factor. The proposed inverter topology supplies
an OEWIM where the primary side is a cascade of a 2-level inverter and HB while the secondary
side is connected to a floating capacitor fed 2-level inverter cascaded with an HB. The proposed
inverter structure synthesizes a hexagonal SVS of more than 9 levels. The extra levels are switched
in a unique way to extend the modulation range without the maximum voltage SV amplitude
exceeding Vdc along the A, B, and C phases at any time. All the capacitors in this topology can be
balanced simultaneously and independently using the concept of opposing vector redundancy of a
space vector point (SVP). The proposed scheme can balance the capacitors even over the extended
modulation range (from 0.577Vdc to 0.637Vdc peak phase fundamental) for a unity-power-factor
(u.p.f.) load, which is the worst-case condition for charge-balancing all the floating capacitors at the
extreme modulation range.
A Galvanically Isolated Single-Phase Inverter Topology With Flux-Rate
Control Based Harmonic Filtering Scheme
Ruman Kalyan Mahapatra and Umanand L
Department of Electronic Systems Engineering
This work presents a galvanically isolated single-phase inverter topology with a flux-rate control-based
harmonic filtering scheme. The proposed topology consists of a high-power primary inverter that
operates at low frequency and establishes the primary flux. A low-power secondary inverter that
operates at high frequency is associated with another limb of the magnetic core, which controls the
flux rate. The undesired harmonic components present in the primary flux are filtered by controlling
the flux rate to provide a sinusoidal output voltage at the load. These two inverters and the load
side of the proposed topology are associated with the three-limbed magnetic core. The load side of
the proposed inverter topology is galvanically isolated from the rest of the circuit. As a result, the
load side of the proposed inverter is free of power electronic components and passive filters. Hence,
the inverter is suitable for medium- to high-voltage applications without modifying the power
semiconductor device ratings. The proposed inverter is modeled using the popular bond-graph
modeling technique, and the dynamic equations are obtained from the model. The derived dynamic
model is simulated, and a laboratory prototype is used to verify the working of the proposed inverter
topology.
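The operating principle rests on Faraday’s law: the voltage induced in the load limb is proportional
to the rate of change of its flux,

\[
  v_{load}(t) = N_{load}\,\frac{d\phi_{load}}{dt},
\]

so controlling the flux rate in the load limb directly shapes the output voltage, which is how the
harmonic filtering described above is achieved without load-side power electronics.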
Optimal Pulse-width Modulation Techniques of Asymmetrical
Six-phase Machine in Linear and Overmodulation Regions
Sayan Paul
Department of Electrical Engineering
This work presents two pulse-width modulation (PWM) techniques for a two-level-inverter-fed
asymmetrical six-phase machine (ASPM) to reduce the drive system’s losses and improve efficiency.
The first PWM technique is applicable in the overmodulation region, and the second is relevant in
the linear region.
Overmodulation (OVM) techniques of the ASPM achieve higher DC-bus utilization at the cost of
applying voltage in the non-energy-transfer plane, which results in unwanted currents and the
associated copper loss. The existing OVM technique minimizes this voltage from the space-vector
perspective with a
pre-defined set of four active vectors. To find the best technique, one needs to perform the above
minimization problem with all possible sets of active vectors with which higher voltage gain can be
attained. So, this requires evaluation of a large number of cases. This work formulates the above
minimization problem in terms of average voltage vectors of two three-phase inverters, where active
vectors need not be specified beforehand. Thus, the analysis is more general. Following this analysis,
sixteen possible techniques with different active vectors are derived, each of which attains the
minimum voltage injection in the non-energy-transfer plane.
Linear modulation techniques (LMTs) of an ASPM with two isolated neutral points synthesize
the desired voltage vectors by applying at least five switching states. Different choices of the applied
voltage vectors, the sequences in which they are used, and the distribution of dwell times among the
redundant switching states give rise to a large number of possible LMTs. These LMTs should avoid
more than two transitions of a particular inverter leg within a carrier period, and only a subset of the
existing LMTs of the ASPM follows this rule. Through an innovative approach, this work accounts
for the infinitely many possible LMTs that follow the rule of at most two transitions per leg. Another
essential criterion for the selection of an LMT is its current-ripple performance. Therefore, through
numerical optimization, the work finds the optimal LMTs among the infinitely many possible LMTs
for all reference voltage vectors in the linear range and over the whole feasible range of a machine
parameter. This parameter is related to the leakage inductance of the machine and impacts the
current-ripple performance of the ASPM. An optimal hybrid strategy is proposed with these optimal
techniques, which outperforms all existing methods in terms of the current ripple.
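The requirement of at least five switching states stated above can be seen from the volt-second
balance over a carrier period: with applied vectors V_k and dwell fractions d_k,

\[
  \sum_k d_k\,\vec{V}_k^{(1)} = \vec{V}_{ref}, \qquad
  \sum_k d_k\,\vec{V}_k^{(2)} = \vec{0}, \qquad
  \sum_k d_k = 1,
\]

where the superscripts (1) and (2) denote projections onto the energy-transfer and non-energy-transfer
planes, respectively. These are five scalar constraints (two per plane plus one on the dwell fractions),
so at least five states are needed in general.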
The theoretical analysis of the above two PWM techniques is validated through simulation in
MATLAB and through experiments performed up to 3.5 kW on a hardware prototype.
The Phenomenon of Standing Waves in Uniform Single-Layer Coils
Ashiq Muhammed P E
Department of High Voltage Engineering
Accurate knowledge of the natural frequencies and the shapes of the corresponding standing waves
is essential for gaining deeper insight into the response of coils to impulse excitations. Most previous
analytical studies on coils assumed the standing waves to be sinusoidal in shape, but numerical
circuit analyses and measurements suggest otherwise. Hence, this paper revisits the classical
standing-wave phenomenon in coils to ascertain the reasons for this discrepancy and thereafter
extends it by analytically deriving the exact mode shapes of the standing waves for both neutral-open
and neutral-short conditions.
For this, the coil is modeled as a distributed network of elemental inductances and capacitances
while spatial variation of mutual inductance between turns is described by an exponential function.
Initially, an elegant derivation of the governing partial differential equation for the surge distribution
is presented, which is then analytically solved, perhaps for the first time, by the variable-separable
method to find the complete solution (a sum of temporal and spatial terms). The hyperbolic terms in
the spatial part of the solution have always been neglected but are included here, thus yielding the
exact mode shapes. The voltage standing waves obtained from the analytical solution are plotted and
compared with simulation results on a 100-section ladder network, and the same are also measured
on a large single-layer coil. It emerges that, even in single-layer coils, the shape of the standing
waves deviates considerably from sinusoidal, and this deviation depends on the spatial variation of
mutual inductance, the capacitive coupling, and the order of the standing wave.
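For context, a hedged sketch of the classical uniform-winding equation that such derivations start
from, with per-unit-length series inductance l, shunt capacitance to ground c, and series (inter-turn)
capacitance k, and with mutual inductance neglected in this simplified form (unlike in the paper):

\[
  \frac{\partial^2 v}{\partial x^2}
  + l\,k\,\frac{\partial^4 v}{\partial x^2\,\partial t^2}
  - l\,c\,\frac{\partial^2 v}{\partial t^2} = 0,
\]

whose purely capacitive limit gives the well-known initial distribution for a step applied to a coil
with the far end grounded,

\[
  v(x,0^+) = \frac{\sinh\!\big(\alpha\,(1-x)\big)}{\sinh\alpha}, \qquad \alpha = \sqrt{c/k},
\]

with x the per-unit distance along the winding. The exponential spatial variation of mutual inductance
considered in this paper modifies the inductive terms and hence the mode shapes.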
Modelling of bi-directional leader inception and propagation from
aircraft
Sayantan Das and Udaya Kumar
Department of Electrical Engineering
A commercial aircraft can expect, on average, one lightning strike per year, i.e., one lightning strike
in approximately 3000 hours of flight. The severity of lightning damage can range from minor burn
marks and holes in the skin up to the complete destruction of the aircraft. Nowadays, the use of less
conductive composite materials for constructing the structural elements of aircraft increases the
possibility of physical damage. The increasing use of sensitive electronic components in on-board
equipment further makes aircraft more vulnerable to the indirect effects of lightning strikes.
Therefore, the protection of aircraft against lightning is one of the major aspects of modern aircraft
design.
An aircraft can be struck by lightning in two possible ways: aircraft-initiated lightning, where the
aircraft itself incepts bi-directional leaders, and aircraft-intercepted lightning, where a cloud-to-ground
lightning strike is intercepted by the aircraft. Recorded in-flight measurement data suggest that almost
90% of lightning-strike events occurred due to aircraft-initiated leaders. Hence, this study is limited
to aircraft-initiated lightning phenomena.
The first step in designing lightning protection for aircraft is zoning, where the aircraft surface is
divided into several distinct zones depending on the probability of a lightning strike. Several methods
have been suggested in the standard (ARP5414), such as the rolling sphere method (RSM), the
similarity principle, and field-based approaches. All these methods are either empirical or qualitative
and lack the physical basis of leader discharge from aircraft. For a more accurate assessment of
zoning, the discharge phenomena need to be modelled. Therefore, the purpose of this work is to
develop a model for the inception and propagation of bi-directional leaders from a cruising aircraft.
This presentation highlights the salient features of leader inception from a cruising aircraft, followed
by a brief description of the model developed and a demonstration of the propagation of connecting
leaders from the aircraft.
Low latency replication coded storage over memory-constrained
servers
Rooji Jinan
Department of Electrical Communication Engineering
We consider a distributed storage system storing a single file, where the file is divided into
equal-sized fragments. The fragments are replicated with a common replication factor and stored across
servers with identical storage capacity. An incoming download request for this file is sent to all the
servers, and it is considered serviced when all the unique fragments are downloaded. The download
times of the fragments across all servers are modeled as independent and identically distributed
(i.i.d.) random variables. The mean download time can be bounded in terms of the expected number
of useful servers available after gathering each fragment. We find the mean number of useful servers
after collecting each fragment for a random storage scheme for replication codes. We show that, for
any storage capacity, the random storage scheme for replication codes asymptotically (in the number
of servers) achieves the upper bound on the expected number of useful servers at every stage of the
download. Further, we show that the performance of this storage scheme is comparable to that of
Maximum Distance Separable (MDS) coded storage.
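A minimal Monte Carlo sketch of the central quantity above: for replicated fragments placed
uniformly at random on capacity-limited servers, estimate the mean number of useful servers after
each unique fragment is collected. All parameter values and the placement rule are illustrative
assumptions.

import random

def mean_useful_servers(n_servers=20, n_frags=10, repl=4, capacity=2, trials=2000):
    # n_frags * repl must equal n_servers * capacity so that storage is full
    totals = [0.0] * n_frags
    for _ in range(trials):
        # random placement: shuffle all fragment copies over all server slots
        copies = [f for f in range(n_frags) for _ in range(repl)]
        random.shuffle(copies)
        servers = [set(copies[s * capacity:(s + 1) * capacity])
                   for s in range(n_servers)]
        order = list(range(n_frags))
        random.shuffle(order)  # order in which unique fragments are collected
        collected = set()
        for j, f in enumerate(order):
            collected.add(f)
            # a server is useful if it still stores a fragment not yet collected
            totals[j] += sum(1 for sv in servers if sv - collected)
    return [t / trials for t in totals]

for j, m in enumerate(mean_useful_servers(), start=1):
    print(f"after {j} fragments: {m:.2f} useful servers on average")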
Measurement Aided Design of a Heterogeneous Network Testbed
For Condition Monitoring Applications
Rathinamala Vijay
Department of Electronic Systems Engineering
We propose a composite diagnostics solution for railway infrastructure monitoring. In
particular, we address the issue of soft-fault detection in underground railway cables. We first
demonstrate the feasibility of a fault detection and location method based on orthogonal multitone
time-domain reflectometry for railway cabling infrastructure by implementing it using
software-defined radios. Our practical implementation, comprehensive measurement campaign, and measurement
results guide the design of our overall composite solution. Given the several diagnostic solutions
available in the literature, our method consolidates the results from multiple diagnostic methods to
provide an accurate assessment of underground cable health.
We present a Bayesian framework based cable health index computation technique that indicates
the extent of degradation that a cable is subject to at any stage during its lifespan. We present the
performance results of our proposed solution using real-world measurements to demonstrate its
effectiveness.
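A minimal sketch of how a Bayesian consolidation of several diagnostics could yield a health index;
the degradation states, likelihood tables, and index definition below are illustrative assumptions, not
the paper’s framework.

import numpy as np

# Assumed discrete degradation states: 0 = healthy, ..., 3 = severely degraded
states = np.arange(4)
prior = np.array([0.55, 0.25, 0.15, 0.05])

# Hypothetical likelihoods P(outcome | state); columns: 0 = no fault, 1 = fault
lik_reflectometry = np.array([[0.85, 0.15],
                              [0.60, 0.40],
                              [0.30, 0.70],
                              [0.10, 0.90]])
lik_insulation = np.array([[0.90, 0.10],
                           [0.70, 0.30],
                           [0.40, 0.60],
                           [0.15, 0.85]])

def update(posterior, lik_table, outcome):
    # one Bayes update: posterior proportional to prior times likelihood
    unnorm = posterior * lik_table[:, outcome]
    return unnorm / unnorm.sum()

post = update(prior, lik_reflectometry, outcome=1)  # reflectometry flags a fault
post = update(post, lik_insulation, outcome=1)      # insulation test agrees

# health index: 1 = pristine, 0 = fully degraded (normalised expected state)
health_index = 1.0 - (post @ states) / states[-1]
print(f"posterior = {np.round(post, 3)}, health index = {health_index:.2f}")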
Word-level beam search decoding and correction algorithm (WLBS)
for end-to-end ASR
Zitha Sasindran
Department of Electronic Systems Engineering
A key challenge in resource-constrained speech recognition applications is the unavailability
of a large, domain-specific audio corpus to train the models. In such scenarios, models may not be
exposed to a wide range of domain-specific words and phrases. In this work, we propose an approach
to improve the in-domain automatic speech recognition results using our word-level beam search
decoding and correction algorithm (WLBS). We use a token-based language model to mitigate the
data sparsity and out-of-vocabulary issues in the corpus. We evaluate the proposed approach on an
airplane-cabin-announcement use case. The experimental results show that the WLBS
algorithm, with its handling of misspellings and missing words, achieves better performance than
state-of-the-art beam search decoding and n-gram LMs. We report a WER of 11.48% on our
airplane-cabin announcement test corpus.
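A hedged sketch of the word-level correction idea: snap each hypothesised word to the closest
in-vocabulary word by edit distance, breaking ties with a unigram score. This illustrates the general
technique only; it is not the WLBS algorithm itself, and the vocabulary and scores are made up.

def edit_distance(a, b):
    # classic Levenshtein distance via a single-row dynamic program
    dp = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1,
                                     prev + (ca != cb))
    return dp[-1]

# hypothetical in-domain vocabulary with unigram scores (higher = more likely)
vocab = {"please": 2.6, "fasten": 2.0, "your": 2.8, "seatbelt": 2.5,
         "for": 3.0, "landing": 2.2, "cabin": 1.5, "crew": 1.8}

def correct_word(word, max_dist=2):
    # keep in-vocabulary words; otherwise pick the nearest vocabulary word,
    # preferring smaller edit distance, then higher unigram score
    if word in vocab:
        return word
    dist, _, best = min((edit_distance(word, v), -s, v) for v, s in vocab.items())
    return best if dist <= max_dist else word

hyp = "plese fasten your seatbel for landin"
print(" ".join(correct_word(w) for w in hyp.split()))
# -> please fasten your seatbelt for landing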