

Distinguished Lectures

Upcoming Lectures

 

Title: TOPPool and Matrioska
Speaker: Dr. Elena Pagnin
Location: E-building, E:3139, LTH, Lund University
Date and time: December 10, 2018, 15:15

 

Title:  Distributed and sparse signal processing: Architectures, algorithms and nonlinear estimators
Speaker: Steffen Limmer, TU Berlin, Germany
Location: Systemet, B-building, Campus Valla
Date and time: December 13, 2018, 10:15

 


Previous Lectures

 

2018-09-21 Tommy Svensson, Department of Electrical Engineering, Chalmers University of Technology, Sweden
Challenges and Opportunities with mm-wave Communications in 5G and Beyond

2018-09-14 Prof. Ralf Müller, FAU Erlangen-Nürnberg
Generalized Least Square Error Precoders for Massive MIMO
2018-06-12 Prof. Jan Peters, TU Darmstadt
Robot Skill Learning


2018-06-12 Dr. Caitlin Sadowski (Google)
Lessons from Building Static Analysis Tools at Google

 

2018-06-11 Prof. Jiri Matas, Center for Machine Perception, Czech Technical University in Prague
Multi-Class Model Fitting by Energy Minimization and Mode-Seeking

2018-06-04 Prof. Jeffrey Carver, University of Alabama, USA http://se4science.org
What We Have Learned About Using Software Engineering Practices in Research Software
 

2018-05-24 Prof. Michel Verhaegen, Delft University of Technology
Large-Scale Subspace Identification with Kronecker modeling

2018-05-03 Prof. Kannan Moudgalya, IIT Bombay, India
Self-learning Tutorials and Open Source Software for 4 million Students
 

2018-04-12 Prof. Laura Cottatellucci, Friedrich Alexander University of Erlangen-Nürnberg
Machine Learning on Graphs

2018-04-11 Prof. Marius Pesavento, TU Darmstadt
The Partial Relaxation Approach: An Eigenvalue-Based DOA Estimator Framework
 

2018-03-29 Prof. Christoph Kirsch from U. of Salzburg
Self-Referential Compilation, Emulation, Virtualization, and Symbolic Execution with Selfie 

2018-03-19  Prof. Geoffrey Li, Georgia Institute of Technology, Atlanta, Georgia, USA.
Resource Allocation in Vehicular Communications


2018-03-14 Prof. Geoffrey Li, Georgia Institute of Technology, Atlanta, Georgia, USA.
Device-to-Device Communications in Cellular Networks
 

2018-03-06 Professor Pierluigi Salvo Rossi, Kongsberg Digital, Norway
Machine Learning & Industry 4.0
 

2018-01-2 Professor A. Lee Swindlehurst, University of California Irvine 
Analysis of the Mixed-ADC Massive MIMO Uplink

2018-01-26 Docent Antti Tölli, University of Oulu, Finland
Multi-antenna Interference Management for Coded Caching

2018-01-11 Adjunct Professor Gabor Fodor, KTH and Ericsson Research
Tuning the Pilot-to-Data Power Ratio in Multiuser MIMO Systems

2017-11-30 Prof. Pieter Harpe, TU Eindhoven, The Netherlands
Advanced SAR ADCs – efficiency, accuracy, calibration and references

2017-10-19  Prof. Francesco Casella, Politecnico di Milano, Milan, Italy
New trends in Object-Oriented modelling tools: Large-Scale Systems and Optimal Control of Future Power Generation Systems

2017-09-27 Dr. Marco Mattavelli, Ecole Polytechnique Federale de Lausanne (EPFL), Lausanne, Switzerland
Design space exploration of dynamic dataflow programs

2017-09-06 Prof. Osvaldo Simeone, King's College, London, U.K.
Fog-Aided Wireless Networks: An Information-Theoretic View

2017-08-23 Prof. Marco Di Renzo, Paris-Saclay University/ CNRS, France.
On System-Level Analysis & Design of Cellular Networks: The Magic of Stochastic Geometry

2017-06-22 Prof. Antonio Liscidini, University of Toronto
Emerging analog filtering techniques 

2017-03-23 Professor Joseph R. Cavallaro, Rice University, USA
Algorithms, Architectures, and Testbeds for 5G Wireless Communication Systems

2017-01-09  Dr. Mario Costa, Huawei Technologies, Finland
Prospects for Positioning in 5G

2016-12-21 Professor Carlo Fischione,  KTH Royal Institute of Technology
Millimeter Wave Networking: A Medium Access Control Perspective

2016-12-12 Professor Kimmo Kansanen, Norwegian University of Science and Technology (NTNU)
Estimation over MIMO Fading Channels: Outage and Diversity Analysis

2016-12-07 Peter D. Mosses, Professor Emeritus, Computer Science, Swansea University
A component-based approach to semantics

2016-11-22 Derek Eager, Professor, University of Saskatchewan, Canada
Scalable and Efficient On-Demand Video Streaming: Retrospective and Recent Work

2016-10-20 Kwan-Liu Ma, University of California at Davis
Emerging Topics for Visualization Research

2016-10-20 Dr. Hilding Elmqvist, CEO Mogram AB
Exciting and disruptive innovations: Julia for programming and Modia for modeling

2016-10-17 Prof. Kevin Ryan (Lero & Univ. of Limerick)
Experience in setting a software engineering research agenda

2016-09-16 Prof. Henry D. Pfister, Duke University, USA
Graphical Models and Inference: Insights from Spatial Coupling

2016-09-02 Professor Josef A. Nossek, Technical University of Munich
Multiport Communication Systems

2016-06-02 Gerhard Kramer, Technical University of Munich
Bounds on the Capacity of Optical Fiber Channels

2016-05-04 Bei Wang, Assistant professor at the Scientific Computing and Imaging (SCI) Institute of the University of Utah
Understanding the Shape of Data with Topological Data Analysis and Visualization, from Vector Fields to Brain Network

2016-05-03  Florian Kaltenberger, Eurecom, France
Practical issues of reciprocity based massive MIMO systems

2016-01-07 Prof. Jos Baeten, University of Amsterdam, The Netherlands
Computability Revisited

2015-12-03 Gregory Ward, Dolby Laboratories
Reconstructing Anisotropic BSDFs from Sparse Measurements

2015-11-24 Dr. Eric Klumperink, University of Twente, The Netherlands
Cognitive Radio Transceiver Chips

2015-11-18 Dr. Michalis Matthaiou, Senior Lecturer, Queen's University Belfast, UK
Some baby steps into new massive MIMO pastures

2015-10-08 Matt Ettus, President and founder of Ettus Research
Technologies for Rapid Prototyping and Low Cost Deployment of Novel Radio Systems

2015-09-18: Dr. Emil Björnson, Linköping University
Optimizing 5G Networks for Energy-Efficiency: Small Cells, Massive MIMO or Both?

2015-09-08: Prof. Urbashi Mitra, University of Southern California, USA
Wireless channel Estimation: Opportunities for Exploiting Structure and Sparsity

2015-09-02: Dr. Elisabeth de Carvalho, Aalborg University, Denmark
Channel Measurements and Random Access for Massive MIMO systems

2015-06-03: Prof. Gerhard Fohler, TU Kaiserslautern, Germany
Adaptive real-time resource management

2015-05-28: Prof. Barbara Kitchenham, Keele University, United Kingdom
Using an Evidence-based Approach to Inform Practice - Fact or Fiction?

2015-05-28: Liesbet Van Der Perre, KU Leuven, Belgium
Precious bits, sharing the spectrum, on elegant energy

2015-05-22: Thomas L. Marzetta, Bell Laboratories, USA
Honorary doctorate lecture: Massive MIMO -- a New Philosophy for Wireless Technology

2015-04-21: Prof. William J Dally, Stanford University and Senior Vice President of Research at NVIDIA Corporation
A Lightspeed Data Center Network

2015-03-06: Prof. Giuseppe Caire, TU Berlin and USC, USA
Network Optimization with Massive MIMO

2015-02-26: Prof. Uwe Aßmann, Technische Universität Dresden
A Generalized Form of Autotuning

2015-02-26: Prof. Neil Jones, University of Copenhagen, Denmark
Programs = Data = First-Class Citizens in a Computational World

2014-10-24: Peter Marwedel (TU Dortmund, Germany)
Cyber-physical Systems: Opportunities, Problems and (some) Solutions

2014-10-23: Claudio Silva (New York University, USA)
Exploring Big Urban Data

2014-10-13: Prof. Sergiy Vorobyov (Aalto University, Finland and University of Alberta, Canada)
Advances in active MIMO sensing

2014-08-27: Bernhard Preim (University of Magdeburg)
Visual Analytics in Cohort Study Data

2014-05-14: Dr. Eric Klumperink (Twente University in Enschede, The Netherlands)
Cognitive Radio Transceiver Chips

2013-10-23: Karl Iagnemma (MIT, Massachusetts, US)
Safe Semi-Autonomous Control of Unmanned Ground Vehicles

2013-10-22: Björn Ekelund (Ericsson)
Breaking the curve

2012-05-21: Dr. Paul K. Hopt (Linköping)
Control Challenges to Renewables Integration in Smart Grid

2012-05-16: Dr. Paul K. Hopt (Lund)
Control Challenges to Renewables Integration in Smart Grid

2012-05-03: Johann Borenstein (University of Michigan, US)
Dead reckoning for Vehicles and Pedestrians

2012-05-03: Richard Voyles (University of Denver, US)
Structured Computational Polymers: From Smart Clothing to Squishy Bots

2012-03-22: K. Rustan M. Leino
Program verification using Dafny

2011-10-27: Prof. Michael Gastpar
Algebraic Structure in Network Information

2011-10-17: Tarek Abdelzaher (UIUC)
Social Sensing Challenges for a Smarter Planet

2011-10-17: Thomas Marzetta (Bell-Laboratories, Alcatel-Lucent)
Future Directions for Wireless Communication Research

2010-11-12: Rudolf Mester (Goethe Universität, Frankfurt am Main, Germany)
Towards intelligent vision systems: the role of signal theory, statistical models, and learning

2010-11-11: Sven Mattisson (Ericsson Research, Lund)
Wireless Communications ICs: Trends for 3G and LTE

 

Title: TOPPool and Matrioska
Speaker: Dr. Elena Pagnin

Abstract: In this seminar I will present two recent works of mine:

1- TOPPool: a privacy-preserving model for efficient ridesharing. This work builds on top of PrivatePool and provides a clever optimization and a generalization that takes into consideration the dimension of time for sharing rides.

2- Matrioska: a compiler for multi-key homomorphic signatures. This work presents an original construction to obtain multi-key properties from a single-key homomorphic signature scheme. The core idea is simple and powerful, and could be applied to other cryptographic verification techniques.

About the speaker: Elena Pagnin received a master's degree in applied mathematics from the University of Trento in 2013 with a thesis on "homomorphic authentication codes". The thesis contains her research work carried out as a project officer at Nanyang Technological University (Singapore). Elena defended her doctoral thesis on "enhancing data and user authentication in collaborative settings" in September 2018 at the Computer Science and Engineering Department of Chalmers University of Technology. Elena's research interests lie within the wide area of cryptography, with a particular sparkle for primitives with homomorphic properties and authentication protocols.

 

Title:  Distributed and sparse signal processing: Architectures, algorithms and nonlinear estimators
Speaker: Steffen Limmer, TU Berlin, Germany

 

Abstract: The 21st century will be remembered for the ubiquity of data. Data analysis has become an indispensable tool for finding patterns in high-dimensional datasets, and the steep increase in computational power allows us to execute ever more sophisticated algorithms. This talk presents two novel approaches to exploiting simple structures for problems in distributed and sparse signal processing. The first part deals with the concepts of effective dimension and analysis of variance (ANOVA) that allow us to understand interactions and importance among subsets of variables. In the second part, we consider the problem of designing nonlinear estimators for Bayesian signal recovery. In this regard, our aim is to provide a better understanding of the trade-off between expected performance on the one hand, and the computational complexity of architectures for nonlinear estimation on the other.

 

Biography: Steffen Limmer received the Dipl.-Ing. degree in electrical engineering in 2011 from the Technical University of Munich (TUM), Munich, Germany. From 2013 to 2015, he was with the Fraunhofer Heinrich Hertz Institute, Berlin, Germany. Since 2015, he has been with the Network-Information Theory Group at the Technical University of Berlin (TUB), Berlin, Germany, where he is currently working toward the Ph.D. degree. He received the Student Travel Grant from the German Academic Exchange Service in 2014 as well as an AWS in Education Research Grant award in 2015. His research interests include distributed and sparse signal processing, inverse problems, machine learning and nonlinear estimation theory.

 

 

Title:  Challenges and Opportunities with mm-wave Communications in 5G and Beyond
Speaker: Tommy Svensson, Department of Electrical Engineering, Chalmers University of Technology, Sweden

Abstract: The race is ongoing globally towards developing key technical components and concepts for a new 5G mobile radio access technology operating in frequency bands in the range 6 to 100 GHz. The use of such high frequencies for mobile communications is challenging but necessary for supporting 5G’s extreme mobile broadband service, which will require very high data rates (up to 10 Gbps) and, in some scenarios, also very low end-to-end latencies (down to 1 ms). In fact, a multi-RAT concept supporting lower mm-wave bands is now an inherent part of the first release of the 3rd Generation Partnership Project (3GPP) 5G New Radio (NR) interface, ready in Dec 2017.


This talk will introduce some of the challenges and opportunities with mm-wave frequencies for access, backhaul and fronthaul and key technical enablers to support targeted 5G use cases, as seen and studied in the European Horizon2020 5G Infrastructure Public Private Partnership (5GPPP) project “Millimetre-Wave Based Mobile Radio Access Network for Fifth Generation Integrated Communications” (mmMAGIC), and in the Mantua project within ChaseOn Antenna Systems center at Chalmers.

Biography: TOMMY SVENSSON [IEEE S’98, M’03, SM’10] is Full Professor in Communication Systems at Chalmers University of Technology in Gothenburg, Sweden, where he is leading the Wireless Systems research on air interface and wireless backhaul networking technologies for future wireless systems. He received a Ph.D. in Information theory from Chalmers in 2003, and he has worked at Ericsson AB with core networks, radio access networks, and microwave transmission products. He was involved in the European WINNER and ARTIST4G projects that made important contributions to the 3GPP LTE standards, the EU FP7 METIS and the EU H2020 5GPPP mmMAGIC 5G projects, and currently in the EU H2020 5GPPP 5GCar project, as well as in the ChaseOn antenna systems excellence center at Chalmers targeting mm-wave solutions for 5G access, backhaul and V2X scenarios. His research interests include design and analysis of physical layer algorithms, multiple access, resource allocation, cooperative systems, moving networks, and satellite networks. He has co-authored 4 books, 70 journal papers, 119 conference papers and 51 public EU projects deliverables. He is Chairman of the IEEE Sweden joint Vehicular Technology/ Communications/ Information Theory Societies chapter and editor of IEEE Transactions on Wireless Communications, and has been editor of IEEE Wireless Communications Letters, Guest Editor of several top journals, organized several tutorials and workshops at top IEEE conferences, and served as coordinator of the Communication Engineering Master's Program at Chalmers, www.chalmers.se/en/staff/Pages/tommy-svensson.aspx.

 

 

Title:  Generalized Least Square Error Precoders for Massive MIMO
Speaker: Prof. Ralf Müller, FAU Erlangen-Nürnberg
Location: Algoritmen, B-building, Campus Valla
Date and time: September 14, 2018, 10:15

Abstract: For a generic transmit constellation, generalized least square error (GLSE) precoders minimize the interference at user terminals while assuring that given constraints on the transmit signals are satisfied. The general form of these precoders enables us to impose multiple restrictions on the transmit signal, such as limited peak power and a restricted number of active transmit antennas. Invoking the replica method from statistical mechanics, the performance of GLSE precoders can be analyzed in the large-system limit. The output symbols of these precoders are identically distributed and their statistics are described by an equivalent scalar GLSE precoder. The asymptotic results are utilized to further address some applications of the GLSE precoders, namely forming transmit signals over a restricted alphabet and transmit antenna selection. Recent investigations demonstrate that a computationally feasible GLSE precoder requires 21% fewer active transmit antennas than conventional selection protocols in order to achieve a desired level of input-output distortion.
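To make the optimization flavor concrete, here is a minimal sketch of least-squares precoding under a hard per-antenna peak constraint, solved by projected gradient descent. This is an illustrative toy only: the channel model, dimensions, peak value and solver are assumptions for the sketch, not the replica-based analysis or the algorithm from the talk.

```python
import numpy as np

# Toy GLSE-style precoding: minimize ||H x - s||^2 subject to a
# per-antenna amplitude cap |x_i| <= peak, via projected gradient.
# All sizes and values are illustrative assumptions.
rng = np.random.default_rng(0)
K, N = 4, 16                                    # users, transmit antennas
H = rng.standard_normal((K, N)) / np.sqrt(N)    # assumed Gaussian channel
s = rng.standard_normal(K)                      # intended user symbols
peak = 0.5                                      # per-antenna amplitude cap

x = np.zeros(N)
step = 1.0 / np.linalg.norm(H, 2) ** 2          # 1/L, L = Lipschitz constant
for _ in range(500):
    x -= step * H.T @ (H @ x - s)               # gradient step
    x = np.clip(x, -peak, peak)                 # projection onto constraint set

interference = np.linalg.norm(H @ x - s)
print(f"residual interference: {interference:.4f}")
print("peak constraint satisfied:", np.max(np.abs(x)) <= peak + 1e-9)
```

The projection step is what distinguishes a constrained GLSE-type formulation from plain zero-forcing: each iterate stays feasible, and the residual interference measures what the constraint costs.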

Biography: Ralf R. Müller was born in Schwabach, Germany, in 1970. He received the Dipl.-Ing. and Dr.-Ing. degrees (summa cum laude) from Friedrich-Alexander Universität Erlangen-Nürnberg, in 1996 and 1999, respectively. From 2000 to 2004, he directed a research group at the Telecommunications Research Center Vienna, Austria, and taught as an Adjunct Professor with TU Wien. In 2005, he was appointed as a Full Professor with the Department of Electronics and Telecommunications, Norwegian University of Science and Technology, Trondheim, Norway. In 2013, he joined the Institute for Digital Communications, Friedrich-Alexander Universität Erlangen-Nürnberg, Erlangen, Germany. He held visiting appointments at Princeton University, USA, the Institute Eurécom, France, the University of Melbourne, Australia, the University of Oulu, Finland, the National University of Singapore, Babes-Bolyai University, Cluj-Napoca, Romania, Kyoto University, Japan, Friedrich-Alexander Universität Erlangen-Nürnberg, Germany, and Technische Universität München, Germany. Dr. Müller was a recipient of the Leonard G. Abraham Prize (jointly with S. Verdú) from the IEEE Communications Society. He received awards from both the Vodafone Foundation for Mobile Communications and the German Information Technology Society (ITG). Moreover, he was also a recipient of the Philipp-Reis Award (jointly with R. Fischer). He served as an Associate Editor for the IEEE TRANSACTIONS ON INFORMATION THEORY from 2003 to 2006 and on the Executive Editorial Board of the IEEE TRANSACTIONS ON WIRELESS COMMUNICATIONS from 2014 to 2016.

 

Title: Robot Skill Learning
Speaker: Prof. Jan Peters, TU Darmstadt
Location: M:2112B (Seminar room of Dept of Automatic Control), M-building, LTH Ole Römers väg 1, Lund
Date and time: June 12, 2018, 15.30-16.30

Abstract:
Autonomous robots that can assist humans in situations of daily life have been a long-standing vision of robotics, artificial intelligence, and cognitive sciences. A first step towards this goal is to create robots that can learn tasks triggered by environmental context or higher-level instruction. However, learning techniques have yet to live up to this promise, as only few methods manage to scale to high-dimensional manipulators or humanoid robots. In this talk, we investigate a general framework suitable for learning motor skills in robotics which is based on the principles behind many analytical robotics approaches. It involves generating a representation of motor skills by parameterized motor primitive policies acting as building blocks of movement generation, and a learned task execution module that transforms these movements into motor commands. We discuss learning on three different levels of abstraction, i.e., learning for accurate control is needed to execute movements, learning of motor primitives is needed to acquire simple movements, and learning of the task-dependent "hyperparameters" of these motor primitives allows learning complex tasks. We discuss task-appropriate learning approaches for imitation learning, model learning and reinforcement learning for robots with many degrees of freedom. Empirical evaluations on several robot systems illustrate the effectiveness and applicability to learning control on an anthropomorphic robot arm. These robot motor skills range from toy examples (e.g., paddling a ball, ball-in-a-cup) to playing robot table tennis against a human being and manipulation of various objects.

Bio:
Jan Peters is a full professor (W3) for Intelligent Autonomous Systems at the Computer Science Department of the Technische Universitaet Darmstadt and at the same time a senior research scientist and group leader at the Max-Planck Institute for Intelligent Systems, where he heads the interdepartmental Robot Learning Group. Jan Peters has received the Dick Volz Best 2007 US PhD Thesis Runner-Up Award, the Robotics: Science & Systems - Early Career Spotlight, the INNS Young Investigator Award, and the IEEE Robotics & Automation Society's Early Career Award. Recently, he received an ERC Starting Grant.

Jan Peters has studied Computer Science, Electrical, Mechanical and Control Engineering at TU Munich and FernUni Hagen in Germany, at the National University of Singapore (NUS) and the University of Southern California (USC). He has received four Master's degrees in these disciplines as well as a Computer Science PhD from USC. Jan Peters has performed research in Germany at DLR, TU Munich and the Max Planck Institute for Biological Cybernetics (in addition to the institutions above), in Japan at the Advanced Telecommunication Research Center (ATR), at USC and at both NUS and Siemens Advanced Engineering in Singapore.

 

Title: Multi-Class Model Fitting by Energy Minimization and Mode-Seeking
Speaker: Prof. Jiri Matas, Center for Machine Perception, Czech Technical University in Prague
Location: Ada Lovelace, Entrance 27, Building B, Campus Valla, Linköping University
Date and time: June 11, 2018, 10.15-11.15

Abstract:

In multi-class fitting, the input data is interpreted as a mixture of noisy observations originating from multiple instances of multiple model types, e.g. as k lines and l circles in 2D edge maps, k planes and l cylinders in a 3D laser scan, or multiple homographies or fundamental matrices in correspondences from a non-rigid scene. I will present a novel method, called Multi-X, for general multi-class multi-instance model fitting. The proposed approach combines global energy minimization using alpha-expansion with mode-seeking in the parameter domain. Multi-X significantly outperforms the state-of-the-art on the standard dataset and runs in time approximately linear in the number of data points, at around 0.1 second per 100 points, an order of magnitude faster than available implementations of commonly used methods. I will also show how to plug the energy term efficiently and effectively into RANSAC. The resulting GC-RANSAC employs graph cut as a local optimization to achieve state-of-the-art results.
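For context, the classical single-instance RANSAC that Multi-X and GC-RANSAC generalize can be sketched in a few lines. The data, residual threshold and iteration count below are illustrative assumptions; this is the textbook baseline, not the method of the talk.

```python
import numpy as np

# Plain single-instance RANSAC line fitting: the classical baseline that
# multi-class methods like Multi-X generalize. Data and thresholds are
# illustrative assumptions.
rng = np.random.default_rng(1)
x_in = np.linspace(0.0, 1.0, 30)
y_in = 2.0 * x_in + 1.0 + 0.01 * rng.standard_normal(30)   # inliers on y = 2x + 1
x_out = rng.uniform(0, 1, 10)
y_out = rng.uniform(-2, 5, 10)                             # gross outliers
xs, ys = np.concatenate([x_in, x_out]), np.concatenate([y_in, y_out])

best_inliers = np.zeros(xs.size, dtype=bool)
for _ in range(200):
    i, j = rng.choice(xs.size, size=2, replace=False)      # minimal sample
    if xs[i] == xs[j]:
        continue
    a = (ys[j] - ys[i]) / (xs[j] - xs[i])                  # candidate slope
    b = ys[i] - a * xs[i]                                  # candidate intercept
    inliers = np.abs(ys - (a * xs + b)) < 0.05             # consensus test
    if inliers.sum() > best_inliers.sum():
        best_inliers = inliers

# Final least-squares refit on the best consensus set
a_hat, b_hat = np.polyfit(xs[best_inliers], ys[best_inliers], 1)
print(f"estimated line: y = {a_hat:.3f} x + {b_hat:.3f}")
```

The limitation the talk addresses is visible here: this loop fits one model instance of one type, whereas Multi-X must assign points among several instances of several model classes simultaneously.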

 

Bio:

Jiri Matas is a full professor at the Center for Machine Perception, Czech Technical University in Prague. He holds a PhD degree from the University of Surrey, UK (1995). He has published more than 200 papers in refereed journals and conferences. His publications have approximately 11000 citations in the ISI Thomson-Reuters Science Citation Index and about 29000 in Google Scholar. His h-index is 38 (Thomson-Reuters Web of Science) and 61 (Google Scholar), respectively. He received best paper prizes at, e.g., the British Machine Vision Conference in 2002 and 2005, the Asian Conference on Computer Vision in 2007, and the Int. Conf. on Document Analysis and Recognition in 2015. J. Matas has served in various roles at major international computer vision conferences (e.g. ICCV, CVPR, ICPR, NIPS, ECCV), co-chairing ECCV 2004, 2016 and CVPR 2007. He is on the editorial board of IJCV and was the Associate Editor-in-Chief of IEEE T. PAMI. His research interests include object recognition, image retrieval, tracking, sequential pattern recognition, invariant feature detection, and Hough Transform and RANSAC-type optimization.

 

Title: Lessons from Building Static Analysis Tools at Google
Speaker: Dr. Caitlin Sadowski (Google)
Location: E:1406, E-huset, LTH, Lund University
Date and time: June 12, 2018, 10:15-11:00

Abstract: I will describe how we have applied the lessons from Google's previous experience with FindBugs Java analysis, as well as from the academic literature, to build a successful static analysis infrastructure used daily by most software engineers at Google. Google's tooling detects thousands of problems per day that are fixed by engineers, by their own choice, before the problematic code is checked into Google's companywide codebase.

Bio: Caitlin's mission is to improve the developer workflow at Google. She has worked on a variety of teams improving the developer workflow. She has a computer science PhD from UC Santa Cruz, where she worked with her advisors on a variety of research topics related to Programming Languages, Software Engineering and Human Computer Interaction.

 

Title: What We Have Learned About Using Software Engineering Practices in Research Software
Speaker: Prof. Jeffrey Carver, University of Alabama, USA http://se4science.org
Location: E:2116, E-huset, LTH, Lund University
Date and time: June 4, 2018, 13:15-14:00

Abstract: The increase in the importance of Research Software (i.e. software developed to support research) motivates the need to identify and understand which software engineering (SE) practices are appropriate. Because of the uniqueness of the research software domain, existing SE tools and techniques developed for the business/IT community are often not efficient or effective. Appropriate SE solutions must account for the salient characteristics of the research software development environment. To identify these solutions, members of the SE community must interact with members of the research software community. This presentation will discuss the findings from a series of case studies of research software projects, an ongoing workshop series, and the results of interactions between my research group and research software projects. 

Bio: Jeffrey C. Carver is a professor in the Department of Computer Science, University of Alabama. He received the PhD degree in computer science from the University of Maryland in 2003, under the supervision of Prof. Victor Basili. His main research interests include empirical software engineering, peer code review, human factors in software engineering, software quality, software engineering for science, and software process improvement. Prof. Carver is the editor of the Practitioner's Corner column in IEEE Software, and an editorial board member of Computing in Science & Engineering, Empirical Software Engineering (Springer), Information and Software Technology (Elsevier) and Software Quality Journal (Springer). Contact him at carver@cs.ua.edu.


 

Title: Self-learning Tutorials and Open Source Software for 4 million Students
Speaker: Prof. Kannan Moudgalya, IIT Bombay, India
Venue:  Ada Lovelace, B-building, Campus Valla
Date and time: 15:15 - 16:15, 2018-05-03

Abstract:

The speaker will present a summary of three successful projects carried out from IIT Bombay in the general area of ICT training.

1) Spoken Tutorial is a method to provide ICT training through 10-minute-long videos created through screencasts.  These tutorials are created for self-learning; the spoken part is dubbed into 22 Indian languages, and they can be used offline, which makes them accessible to students who don't have access to the Internet.  We work only with open source software.  Using this method, 4 million students have been trained in the past four years, in about 5,000 colleges in India, affiliated to about 100 universities, through 36,000 lab courses.  It has received several awards, including a Google MOOCs Focused Research Award.  URL for this work: http://spoken-tutorial.org.

2) FOSSEE stands for Free and Open Source Software for Education.  While Spoken Tutorial looks at training the bottom of the pyramid, FOSSEE looks at training at a higher level, and also at content generation through crowdsourcing.  This project works on niche software tools in various engineering domains, some examples being Scilab, Python, eSim (electronic circuit simulation), DWSIM (chemical process flowsheeting), OpenFOAM (computational fluid dynamics), OpenModelica (general purpose modelling), and Osdag (steel structure design).  Through Textbook Companions, students from across the country have generated one of the largest sources of solved examples using Scilab and Python.  URL for this work: https://fossee.in.

3) The Talk will also introduce an affordable laptop that the speaker has been promoting.  At Rs. 10,000, it is an excellent laptop for most engineering students.  A demo of this laptop will be given during the talk.  More details of the laptop can be seen at this link: https://www.linkedin.com/pulse/affordable-laptop-kannan-moudgalya/.

 

Title:  K4SID: Large-Scale Subspace Identification with Kronecker modeling
Speaker: Prof. Michel Verhaegen, Delft University of Technology
Venue:  Ada Lovelace, B-building, Campus Valla
Date and time: 13:15, 2018-05-24

Abstract: 

In this talk we consider the identification of matrix state-space models (MSSM) of the following form:

X(k + 1) = A2 X(k) A1' + B2 U(k) B1'

Y(k) = C2 X(k) C1' + E(k)

for all time-dependent quantities and matrices of appropriate dimensions. Due to the large size of these matrices, vectorization does not allow the use of standard multivariable subspace methods such as N4SID or MOESP. In this talk the Kronecker structure that appears in the system matrices due to vectorization is exploited for developing a scalable subspace-like identification approach. This approach consists of first estimating the Markov parameters associated with the MSSM via the solution of a regularized bilinear least-squares problem that is solved in a globally convergent manner. Second, a bilinear low-rank minimization problem is tackled, which allows the state sequence to be written as a three-dimensional low-rank tensor and, consequently, the lower-dimensional matrices A1, A2, B1, B2, C1, C2 to be estimated. A numerical example on a large-scale adaptive optics system demonstrates the ability of the algorithm to handle the identification of state-space models within the class of Kronecker structured matrices in a scalable manner, which results in more compact models.
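The Kronecker structure referred to above comes from the classical vectorization identity vec(A2 X A1') = (A1 ⊗ A2) vec(X): applying vec() to the matrix state equation turns it into an ordinary state-space model whose system matrices are Kronecker products. A quick numerical sanity check (the matrix sizes are arbitrary choices for illustration):

```python
import numpy as np

# Check the identity vec(A2 @ X @ A1.T) == kron(A1, A2) @ vec(X),
# where vec() stacks columns. This is the Kronecker structure the
# vectorized matrix state-space model exhibits; sizes are arbitrary.
rng = np.random.default_rng(0)
A1 = rng.standard_normal((3, 4))
A2 = rng.standard_normal((5, 6))
X = rng.standard_normal((6, 4))          # so that A2 @ X @ A1.T is defined

def vec(M):
    return M.reshape(-1, order="F")      # column-major (column-stacking) vec

lhs = vec(A2 @ X @ A1.T)
rhs = np.kron(A1, A2) @ vec(X)
print("identity holds:", np.allclose(lhs, rhs))
```

The scalability problem is also visible here: the Kronecker factor is 15 x 24 even for these tiny factors, which is why the approach works with A1, A2, etc. directly instead of forming the vectorized system matrices.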

Biography

Michel Verhaegen received an engineering degree in aeronautics from the Delft University of Technology, Delft, The Netherlands, in 1982 and the doctoral degree in applied sciences from the Catholic University Leuven, Belgium, in 1985. During his graduate study, he held a research assistantship sponsored by the Flemish Institute for scientific research (IWT).

From 1985 to 1994, he was a Research Fellow of the U.S. National Research Council (NRC), affiliated with the NASA Ames Research Center in California, and of the Dutch Academy of Arts and Sciences, affiliated with the Network Theory Group of the Delft University of Technology. From 1994 to 1999, he was an Associate Professor of the Control Laboratory, Delft University of Technology and was appointed as Full Professor with the Faculty of Applied Physics, University of Twente, The Netherlands, in 1999.

In 2001, he moved back to the University of Delft and is now a member of the Delft Center for Systems and Control. He has held short sabbatical leaves at the University of Uppsala, McGill, Lund, and the German Aerospace Research Center (DLR) in Munich and is participating in several European Research Networks. His main research interest is the interdisciplinary domain of numerical algebra and system theory. In this field he has published over 100 papers. Current activities focus on the transfer of knowledge about new identification, fault tolerant control and data-driven controller design methodologies to industry. Application areas include smart structures, adaptive optics, and wind energy.

 

Title: Machine Learning on Graphs
Speaker:  Professor Laura Cottatellucci, Friedrich Alexander University of Erlangen-Nürnberg 
Venue:  Systemet, B-building, Campus Valla
Date and time: 10:15, 2018-04-12

Abstract: Recently, in many areas of wireless communications such as wireless sensor networks (WSNs) and complex ad hoc networks, distributed graph algorithms and machine learning on graphs have been gaining relevance as fundamental tools for network analysis and information processing. A fundamental classical learning problem on graphs is the detection of clusters embedded in the graph. In this tutorial we present the fundamental concepts and techniques of clustering on graphs and unveil their relationship with graph spectral properties. We introduce the audience to local clustering methods, such as belief propagation and PageRank, and global methods, such as spectral clustering techniques. The trade-off between performance and complexity is explored by analyzing the characteristics of a detectable community, its size, and whether detectability is attainable in polynomial time. We also analyze the impact of a priori information on cluster detectability by comparing unsupervised and semi-supervised methods. The objective is to understand whether and by how much semi-supervised clustering outperforms unsupervised clustering, what the cost of performing better is in terms of the amount of prior knowledge to be acquired, and what level of uncertainty can be tolerated in the prior information.
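As a toy illustration of the local clustering methods the abstract mentions, the sketch below runs personalized PageRank from a seed node on a small two-community graph: the rank mass stays concentrated in the seed's community, so the top-ranked nodes recover its cluster. The graph, damping factor, and iteration count are illustrative choices, not from the talk.

```python
# Local clustering via personalized PageRank (a sketch, not the talk's
# algorithms): teleport only to the seed node, iterate the random walk,
# and read off the nodes where the rank mass concentrates.

def personalized_pagerank(adj, seed, alpha=0.85, iters=100):
    n = len(adj)
    r = [1.0 / n] * n
    for _ in range(iters):
        new = [0.0] * n
        for u, nbrs in enumerate(adj):
            share = alpha * r[u] / len(nbrs)  # spread rank to neighbors
            for v in nbrs:
                new[v] += share
        new[seed] += 1.0 - alpha              # teleport mass to the seed only
        r = new
    return r

# Two triangles {0,1,2} and {3,4,5} joined by a single bridge edge 2-3.
adj = [[1, 2], [0, 2], [0, 1, 3], [2, 4, 5], [3, 5], [3, 4]]
scores = personalized_pagerank(adj, seed=0)
community = sorted(range(len(adj)), key=lambda v: -scores[v])[:3]
assert sorted(community) == [0, 1, 2]  # the seed's triangle is recovered
```

Global spectral methods would instead examine eigenvectors of the graph Laplacian; the local method above only ever touches the neighborhood the walk explores, which is what makes it attractive for large networks.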

Biography: Laura Cottatellucci obtained the Habilitation from the University of Nice-Sophia Antipolis, France (2015), the Ph.D. from the Technical University of Vienna, Austria (2006), and the Master degree from La Sapienza University, Rome, Italy (1995). Specialized in networking at the Guglielmo Reiss Romoli School (1996, Italy), she worked at Telecom Italia (1995–2000) as manager of industrial projects. From April 2000 to September 2005 she worked as a senior researcher at ftw, Austria, on CDMA and MIMO systems. She was a research fellow at INRIA (Sophia Antipolis, France) from October to December 2005 and at the University of South Australia in 2006. From December 2006 until November 2017, she was an assistant professor at EURECOM (Sophia Antipolis, France). Since December 2017 she has been a professor at Friedrich Alexander Universität Erlangen-Nürnberg (Germany). Cottatellucci is currently an associate editor for the IEEE Transactions on Communications and the IEEE Transactions on Signal Processing, and served as guest editor for the EURASIP Journal on Wireless Communications and Networking (special issue on cooperative communications). She is an elected member of the IEEE Technical Committee on Signal Processing for Communications and Networking. Her research interests lie in the field of communications theory and signal processing for wireless communications, satellite and complex networks. Her contributions in these fields are based on the application of mathematical tools such as random matrix theory and game theory.

 

Self-Referential Compilation, Emulation, Virtualization, and Symbolic Execution with Selfie 
Speaker: Prof. Christoph Kirsch from U. of Salzburg
Venue:  E:4130, 4th floor in the E-house, Lund University
Date and time: 09:00, March 29, 2018
 

Abstract: 

Selfie is a self-contained 64-bit, 10-KLOC implementation of (1) a self-compiling compiler written in a tiny subset of C called C* targeting a tiny subset of 64-bit RISC-V called RISC-U, (2) a self-executing RISC-U emulator, (3) a self-hosting hypervisor that virtualizes the emulated RISC-U machine, and (4) a prototypical symbolic execution engine that executes RISC-U code symbolically. Selfie can compile, execute, and virtualize itself any number of times in a single invocation of the system, given adequate resources. There is also a simple linker, disassembler, debugger, and profiler. C* supports only two data types, uint64_t and uint64_t*, and RISC-U features just 14 instructions, in particular for unsigned arithmetic only, which significantly simplifies reasoning about correctness. Selfie was originally developed just for educational purposes but has recently become a research platform as well. In this talk, we demonstrate the capabilities of the system and discuss our ongoing effort in designing a minimal symbolic execution engine.

This is joint work with A. Abyaneh, M. Aigner, S. Arming, C. Barthel, S. Bauer, T. Hütter, A. Kollert, M. Lippautz, C. Mayer, P. Mayer, C. Moesl, S. Oblasser, C. Poncelet, S. Seidl, A. Sokolova, and M. Widmoser.

Web: 

http://selfie.cs.uni-salzburg.at 

Biography: 

Christoph Kirsch is Professor at the Department of Computer Sciences of the University of Salzburg, Austria. He received his Dr.Ing. degree from Saarland University in 1999 while at the Max Planck Institute for Computer Science in Saarbrücken, Germany. From 1999 to 2004 he worked as a Postdoctoral Researcher at the Department of Electrical Engineering and Computer Sciences of the University of California, Berkeley. He later returned to Berkeley as Visiting Scholar (2008-2013) and Visiting Professor (2014) at the Department of Civil and Environmental Engineering. His research interests are in concurrent programming, memory management, virtualization, and formal verification. Dr. Kirsch co-invented embedded programming languages and systems such as Giotto, HTL, and the Embedded Machine, and more recently co-designed high-performance, multicore-scalable concurrent data structures and memory management systems. He co-founded the International Conference on Embedded Software (EMSOFT) and served as ACM SIGBED chair from 2011 until 2013 and as ACM TODAES associate editor from 2011 until 2014. He is currently an associate editor of IEEE TCAD and has been an ACM Distinguished Speaker since 2017.

 

The Partial Relaxation Approach: An Eigenvalue-Based DOA Estimator Framework
Speaker: Prof. Marius Pesavento, TU Darmstadt
Venue:  Algoritmen, B-building, Campus Valla
Date and time: 10:15, 2018-04-11
 

Abstract: 

In this talk, we introduce the partial relaxation approach for direction-of-arrival estimation. Unlike existing spectral search methods such as Capon or MUSIC, which can be considered single-source approximations of multi-source estimation criteria, the proposed approach accounts for the existence of multiple sources. At each considered direction, the manifold structure of the remaining interfering signals impinging on the sensor array is relaxed, which results in closed-form estimates for the interference parameters. Thanks to this relaxation, the conventional multidimensional optimization problem reduces to a simple spectral search. Following this principle, we propose estimators based on the deterministic maximum likelihood, weighted subspace fitting, and covariance fitting methods. To calculate the pseudo-spectra efficiently, an iterative rooting scheme based on rational function approximation is applied to the partial relaxation methods. Simulation results show that the performance of the proposed estimators is superior to that of conventional methods, especially at low signal-to-noise ratio and low numbers of snapshots, irrespective of any specific structure of the sensor array, while maintaining a computational cost comparable to that of MUSIC.
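For intuition about the one-dimensional spectral search that both MUSIC and the partial relaxation estimators reduce to, here is a deliberately minimal, noise-free MUSIC pseudo-spectrum for a two-sensor, half-wavelength ULA with one source. The array size, source angle, and search grid are illustrative assumptions; a real implementation obtains the noise subspace from an eigendecomposition of the sample covariance matrix rather than analytically, as done here.

```python
# Noise-free MUSIC sketch: with one source at theta0 and two sensors, the
# noise subspace is the orthogonal complement of the steering vector, and
# the pseudo-spectrum 1/|u^H a(theta)|^2 peaks at theta0.
import cmath, math

def steering(theta):  # 2-element ULA, half-wavelength spacing
    return [1.0, cmath.exp(1j * math.pi * math.sin(theta))]

theta0 = math.radians(20.0)
a0 = steering(theta0)
u = [1.0, -a0[1]]                        # satisfies u^H a(theta0) = 0
norm = math.sqrt(sum(abs(x) ** 2 for x in u))
u = [x / norm for x in u]                # unit-norm noise eigenvector

def music_spectrum(theta):
    a = steering(theta)
    proj = sum(ui.conjugate() * ai for ui, ai in zip(u, a))
    return 1.0 / (abs(proj) ** 2 + 1e-12)   # small floor avoids division by 0

grid = [math.radians(t) for t in range(-90, 91)]  # 1-degree spectral search
est = max(grid, key=music_spectrum)
assert abs(math.degrees(est) - 20.0) < 1.0
```

The partial relaxation estimators described in the abstract keep this cheap one-dimensional search but replace the single-source criterion behind it with a relaxed multi-source one.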

Biography

Marius Pesavento received the Dipl.-Ing. and M.Eng. degrees from Ruhr-University Bochum, Bochum, Germany, and McMaster University, Hamilton, ON, Canada, in 1999 and 2000, respectively, and the Dr.-Ing. degree in electrical engineering from Ruhr-University Bochum in 2005. Between 2005 and 2009, he held research positions in two ICT start-up companies. In 2010, he became a Professor for Communication Systems in the Department of Electrical Engineering and Information Technology at the Technical University of Darmstadt, Germany. His research interests include robust signal processing and adaptive beamforming; high-resolution sensor array processing; multiantenna and multiuser communication systems; distributed, sparse, and mixed-integer optimization techniques for signal processing, communications, and machine learning; statistical signal processing; spectral analysis; and parameter estimation. He received the 2003 ITG/VDE Best Paper Award and the 2005 Young Author Best Paper Award of the IEEE Transactions on Signal Processing. He is a member of the Editorial Board of the EURASIP Signal Processing journal and of the EURASIP Special Area Teams “Signal Processing for Communications and Networking” and “Signal Processing for Multisensor Systems”, served as an Associate Editor for the IEEE in the term 2012–2016, and was a member of the Sensor Array and Multichannel Technical Committee of the IEEE Signal Processing Society in the term 2012–2017.

 

Device-to-Device Communications in Cellular Networks
Speaker and date: 2018-03-14 Prof. Geoffrey Li, Georgia Institute of Technology, Atlanta, Georgia, USA.
 

Abstract:

To satisfy the increasing demand for high data-rate services, provide a better user experience, and reduce operators' huge infrastructure investments, device-to-device (D2D) communications are being considered one of the key techniques in 5G wireless networks. With D2D communications, proximate users in a cellular network can communicate directly with each other without going through the base station (BS). This can potentially increase the spectral efficiency (SE) and device energy efficiency (EE) of communications. However, D2D communications may generate interference to the existing cellular network if not designed properly. Therefore, interference management is one of the most challenging and important issues in D2D communications. This talk will focus on interference management in D2D communications, including quality-of-service (QoS) aware admission control and SE/EE-based mode selection. Cross-layer optimization and concave-convex procedures (CCCP) are exploited to solve the related optimization problems.

 
Biography:

Dr. Geoffrey Li is a Professor with the School of Electrical and Computer Engineering at the Georgia Institute of Technology. He has also held a Cheung Kong Scholar title at the University of Electronic Science and Technology of China since 2006. He was with AT&T Labs – Research for five years before joining Georgia Tech in 2000. His general research interests include statistical signal processing for wireless communications. Recently, he has focused on intelligent processing for communication networks. In these areas, he has published over 200 papers in refereed journals, in addition to over 40 granted patents and many conference papers, with over 30,000 citations. He has been listed as a World's Most Influential Scientific Mind, also known as a Highly Cited Researcher, by Thomson Reuters almost every year. He has been an IEEE Fellow since 2006. He received the 2010 IEEE ComSoc Stephen O. Rice Prize Paper Award, the 2013 IEEE VTS James Evans Avant Garde Award, the 2014 IEEE VTS Jack Neubauer Memorial Award, the 2017 IEEE ComSoc Award for Advances in Communication, and the 2017 IEEE SPS Donald G. Fink Overview Paper Award. He also won the 2015 Distinguished Faculty Achievement Award from the School of Electrical and Computer Engineering, Georgia Tech.

 

Resource Allocation in Vehicular Communications
Speaker and date: 2018-03-19  Prof. Geoffrey Li, Georgia Institute of Technology, Atlanta, Georgia, USA.
 

Abstract: This talk will address resource allocation in vehicular communications. Unlike in traditional resource allocation, the strong dynamics caused by high mobility in vehicular environments pose a serious obstacle to the acquisition of high-quality channel state information (CSI). To deal with this issue, we investigate the delay impacts of periodic CSI feedback and develop efficient graph-based centralized resource management schemes to meet the diverse quality-of-service (QoS) requirements in vehicular networks. To further reduce signaling overhead, we take advantage of recent advances in reinforcement learning (RL) and develop an effective distributed resource allocation scheme. We will show that the demanding latency and reliability requirements of vehicular communications, which are hard to model and analyze using traditional methods, can be explicitly accounted for in the proposed deep RL framework.

Biography: See the biography under the 2018-03-14 lecture above.

Local host: Danyo Danev, danyo.danev@liu.se

Please see the link https://events.vtools.ieee.org/m/168723

 

Machine Learning & Industry 4.0
Speaker and date:  2018-03-06 Professor Pierluigi Salvo Rossi, Kongsberg Digital, Norway
 

Abstract: Artificial intelligence, big data, the Internet of things, machine learning, and sensor networks are popular terms currently pervading almost every human activity. A crucial field contributing significantly to their success is the industrial setting, where the marriage between information technology and process technology is usually denoted Industry 4.0. This talk will start by introducing the driving elements behind the popularity of machine learning in the industrial setting and then focus on the significant role that such knowledge plays within the Kongsberg Group. Activities in which Kongsberg Digital is developing machine-learning tools are presented, with special emphasis on distributed detection and localization.

Biography: Pierluigi Salvo Rossi was born in Naples, Italy, on April 26, 1977. He received the Dr.Eng. degree in telecommunications engineering (summa cum laude) and the Ph.D. degree in computer engineering, in 2002 and 2005, respectively, both from the University of Naples “Federico II”, Naples, Italy. From 2005 to 2008, he worked as a postdoc at the Dept. of Computer Science & Systems, University of Naples “Federico II”, Italy; at the Dept. of Information Engineering, Second University of Naples, Aversa (CE), Italy; and at the Dept. of Electronics & Telecommunications, Norwegian University of Science and Technology (NTNU), Trondheim, Norway. From 2008 to 2014, he was an Assistant Professor (tenured in 2011) in telecommunications at the Dept. of Industrial & Information Engineering, Second University of Naples, Aversa (CE), Italy. From 2014 to 2016, he was an Associate Professor in signal processing with the Dept. of Electronics & Telecommunications, NTNU, Trondheim, Norway. From 2016 to 2017 he was a Full Professor in signal processing with the Dept. of Electronic Systems, NTNU, Trondheim, Norway. Since 2017 he has been a Principal Engineer with the Advanced Analytics & Machine Learning Team, Kongsberg Digital AS, Norway. He has held visiting appointments at the Dept. of Electrical & Computer Engineering, Drexel University, Philadelphia, PA, USA; the Dept. of Electrical & Information Technology, Lund University, Lund, Sweden; the Dept. of Electronics & Telecommunications, NTNU, Trondheim, Norway; and the Excellence Center for Wireless Sensor Networks (WISENET), Uppsala University, Uppsala, Sweden. He is an IEEE Senior Member and serves as Senior Editor for the IEEE Communications Letters (since 2016) and Associate Editor for the IEEE Transactions on Wireless Communications (since 2015). He was an Associate Editor for the IEEE Communications Letters from 2012 to 2016. His research interests fall within the areas of communications and signal processing.

 

Analysis of the Mixed-ADC Massive MIMO Uplink
Speaker: Professor A. Lee Swindlehurst, University of California Irvine 
 

Abstract: We study the spectral efficiency (SE) of a mixed-ADC massive MIMO system in which K single-antenna users communicate with a base station (BS) equipped with M antennas connected to N high-resolution ADCs and M − N one-bit ADCs. First, we investigate the effectiveness of mixed-ADC architectures in overcoming the channel estimation error caused by coarse quantization. We analyze a previously proposed channel estimation scheme in which the channel is estimated using a round-robin switching approach that allows all receive antennas to receive a high-resolution view of the training data, and we improve the method by incorporating one-bit observations as well. Then, we analyze the impact of the mixed-ADC architecture in the data detection phase. We consider random high-resolution ADC assignment and also analyze a simple antenna selection scheme to increase the SE. Analytical expressions are derived for the SE under maximum ratio combining (MRC), and numerical results are presented for zero-forcing (ZF) detection. Performance comparisons are made against systems with all one-bit and all full-resolution ADCs for various numbers of antennas to illustrate when mixed-ADC processing is of greatest benefit. We also study the energy efficiency of the mixed-ADC architecture, assuming a fixed uplink power budget at the BS, and derive the distribution of one-bit and high-resolution ADCs that maximizes the sum SE. The resulting derivation shows that either an all-one-bit or an all-high-resolution architecture is optimal and that, in most realistic scenarios, the all-one-bit architecture provides the best performance.

Biography: A. Lee Swindlehurst received the B.S. and M.S. degrees in Electrical Engineering from Brigham Young University, Provo, Utah, in 1985 and 1986, respectively, and the Ph.D. degree in Electrical Engineering from Stanford University in 1991. From 1986 to 1990, he was employed at ESL, Inc., of Sunnyvale, CA, where he was involved in the design of algorithms and architectures for several radar and sonar signal processing systems. He was on the faculty of the Department of Electrical and Computer Engineering at Brigham Young University from 1990 to 2007, where he was a Full Professor and served as Department Chair from 2003 to 2006. During 1996-1997, he held a joint appointment as a visiting scholar at both Uppsala University, Uppsala, Sweden, and the Royal Institute of Technology, Stockholm, Sweden. From 2006 to 2007, he was on leave working as Vice President of Research for ArrayComm LLC in San Jose, California. He is currently a Professor in the Electrical Engineering and Computer Science Department at the University of California, Irvine (UCI), a former Associate Dean for Research and Graduate Studies in the Henry Samueli School of Engineering at UCI, and a former Hans Fischer Senior Fellow in the Institute for Advanced Studies at the Technical University of Munich. His research interests include detection and estimation theory for radar, wireless communications, and biomedical signal processing, and he has over 300 publications in these areas. Dr. Swindlehurst is a Fellow of the IEEE, past Editor-in-Chief of the IEEE Journal of Selected Topics in Signal Processing, and a past member of the Editorial Boards of the EURASIP Journal on Wireless Communications and Networking, the IEEE Signal Processing Magazine, and the IEEE Transactions on Signal Processing. He is a recipient of several paper awards: the 2000 IEEE W. R. G. Baker Prize Paper Award, the 2006 and 2010 IEEE Signal Processing Society Best Paper Awards, the 2006 IEEE Communications Society Stephen O. Rice Prize in the Field of Communication Theory, and the 2017 IEEE Signal Processing Society Donald G. Fink Overview Paper Award.

 

Tuning the Pilot-to-Data Power Ratio in Multiuser MIMO Systems
Speaker and date: 2018-01-11 Adjunct Professor Gabor Fodor, KTH and Ericsson Research
 

Abstract: In systems employing pilot-sequence aided channel estimation, the pilot-to-data power ratio (PDPR) has a large impact on the performance. Therefore, previous works proposed centralized schemes that set the PDPR such that the mean squared error of the estimated data symbols is minimized or the overall spectral efficiency is maximized. In this talk, we argue that decentralized PDPR-setting schemes are advantageous over centralized approaches, and propose a non-cooperative game theoretic algorithm to tune the PDPR. Numerical examples in a multiuser multiple input multiple output setting illustrate the viability of the proposed algorithm.   

Biography: Gabor Fodor received his M.Sc. and Ph.D. degrees in electrical engineering (major in communications engineering) in 1988 and 1998, respectively. In 1998 he joined Ericsson Research in Stockholm, where he is currently a master researcher specializing in performance analysis and design of wireless networks. Since 2013 he has also been a visiting researcher at the Royal Institute of Technology. His current research interests include device-to-device communications, multiple antenna systems, and full-duplex communications. He is a senior member of the IEEE.

 

Multi-antenna Interference Management for Coded Caching
Speaker and date: 2018-01-26 Docent Antti Tölli, University of Oulu, Finland
 

Abstract: In Coded Caching (CC), instead of simply replicating high-popularity contents at or near end-users, the network spreads different contents across different caches such that common coded messages, broadcast during network peak hours to users with different demands, benefit all the users simultaneously. In this talk, a single-cell downlink scenario is considered in which a multiple-antenna base station delivers contents to multiple cache-enabled user terminals. Most of the existing work on wireless CC focuses on degrees-of-freedom (DoF) analysis in the high signal-to-noise-ratio (SNR) regime. Using ideas from the multi-server coded caching scheme developed for wired networks, a joint design of CC and general multicast beamforming is proposed to benefit simultaneously from the spatial multiplexing gain, improved interference management, and the global CC gain. Utilizing the multiantenna multicasting opportunities provided by the CC technique, the proposed method is shown to perform well over the entire SNR region, including the low-SNR regime, unlike the existing DoF-optimal schemes based on zero-forcing (ZF). Instead of nulling the interference at users not requiring a specific coded message, general multicast beamforming strategies are employed, optimally balancing the detrimental impact of both noise and inter-stream interference from coded messages transmitted in parallel. The proposed scheme is shown to provide the same degrees of freedom at high SNR as the state-of-the-art methods and, in general, to perform significantly better than several baseline schemes, including joint ZF and CC, max-min fair multicasting with CC, and basic unicasting with multiuser beamforming.
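The basic coded caching gain can be seen in the textbook two-user, two-file example (in the style of Maddah-Ali and Niesen's original scheme, not the multi-antenna design of the talk): after a placement phase in which each user caches complementary halves of every file, a single XOR broadcast serves both users' different demands at once. The file contents below are illustrative.

```python
# 2-user, 2-file coded caching sketch: placement fills each cache with
# different halves of every file; delivery sends one XOR that each user
# decodes with its own cached half.

def xor(x, y):
    return bytes(a ^ b for a, b in zip(x, y))

A, B = b"AAAAAAAA", b"BBBBBBBB"          # two equal-size files
A1, A2 = A[:4], A[4:]                    # each file split into two halves
B1, B2 = B[:4], B[4:]

cache1 = {"A1": A1, "B1": B1}            # placement: user 1 caches first halves
cache2 = {"A2": A2, "B2": B2}            # user 2 caches second halves

# Delivery phase: user 1 requests file A, user 2 requests file B.
broadcast = xor(A2, B1)                  # one coded message useful to both

user1_A = cache1["A1"] + xor(broadcast, cache1["B1"])  # B1 cancels, A2 remains
user2_B = xor(broadcast, cache2["A2"]) + cache2["B2"]  # A2 cancels, B1 remains
assert user1_A == A and user2_B == B
```

Uncoded delivery would need two half-file unicasts; the coded broadcast does it in one, and the talk's contribution is delivering such messages with multicast beamforming instead of an error-free broadcast link.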

Biography: Antti Tölli (M'08, SM'14) received the Dr.Sc. (Tech.) degree in electrical engineering from the University of Oulu, Oulu, Finland, in 2008. Before joining the Centre for Wireless Communications (CWC) at the University of Oulu, he worked for five years with Nokia Networks as a Research Engineer and Project Manager, both in Finland and Spain. In May 2014, he was granted a five-year (2014-2019) Academy Research Fellow post by the Academy of Finland. He also holds an Adjunct Professor position with the University of Oulu. During the academic year 2015-2016, he was a visitor at EURECOM, Sophia Antipolis, France. He has authored more than 140 papers in peer-reviewed international journals and conferences, as well as several patents, all in the area of signal processing and wireless communications. His research interests include radio resource management and transceiver design for broadband wireless communications, with a special emphasis on distributed interference management in heterogeneous wireless networks. He is currently serving as an associate editor for the IEEE Transactions on Signal Processing.

 

Advanced SAR ADCs – efficiency, accuracy, calibration and references

This lecture will discuss advanced techniques that have enabled substantial performance improvements of SAR ADCs in recent years. After a brief introduction to SAR ADCs, a short overview of recent trends will be given. The presentation will then show four design examples with different targets. The first deals with minimizing power consumption. The second and third designs aim to increase accuracy by means of linearization, noise reduction techniques, and calibration. Finally, the last part describes an efficient method to co-integrate the reference buffer with the SAR ADC.
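As background for the lecture, a SAR ADC resolves one bit per clock cycle by binary search against a capacitor DAC. A behavioral sketch of that conversion loop, with an ideal comparator and DAC (the resolution and reference voltage are illustrative, and none of the lecture's advanced techniques are modeled):

```python
# Behavioral SAR ADC model: test bits from MSB to LSB, keeping each bit
# whose ideal DAC level does not exceed the input voltage.

def sar_convert(vin, vref=1.0, bits=8):
    code = 0
    for i in reversed(range(bits)):        # one comparison per bit, MSB first
        trial = code | (1 << i)            # tentatively set the current bit
        vdac = vref * trial / (1 << bits)  # ideal capacitor-DAC output
        if vin >= vdac:                    # ideal comparator decision
            code = trial                   # keep the bit, else clear it
    return code

assert sar_convert(0.5) == 128             # mid-scale input sets only the MSB
assert sar_convert(0.0) == 0
```

Every non-ideality the lecture addresses (comparator noise, DAC mismatch, reference-buffer settling) perturbs one of the three lines inside this loop, which is why the loop itself is a useful mental model.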

 

New trends in Object-Oriented modelling tools: Large-Scale Systems and Optimal Control of Future Power Generation Systems
Speaker: Prof. Francesco Casella, Politecnico di Milano, Milan, Italy
 

Abstract:

Object-Oriented Modelling Languages and Tools (EOOLT) are firmly established for system-level simulation of engineering systems in many fields, such as automotive, mechatronics, and energy. Traditionally, the main use of such models and tools is the simulation of moderately complex system models, often including control systems. There is now a lot of potential for new applications beyond this boundary. Based on the presenter's own experience, the talk will illustrate two of them: the modelling of large-scale systems (a million equations or more) and the use of O-O models and optimal control to help design future highly flexible thermal power generation systems.

 

About the speaker:

Francesco Casella is an assistant professor at Politecnico di Milano, Milan, Italy, with main research interests in object-oriented modelling, simulation, and control.

He is an active contributor to the Modelica effort, has developed the ThermoPower library, and is vice director of the Open Source Modelica Consortium and a member of the Modelica Association Board. Recently, he has been focusing on the problem of modelling and simulating very large systems.

 

Design space exploration of dynamic dataflow programs

Speaker: Dr. Marco Mattavelli, Ecole Polytechnique Federale de Lausanne (EPFL), Lausanne, Switzerland
Time: 10:15, September 27, 2017
Venue: Floor E2, Halmstad University

Dr. Marco Mattavelli is a Senior Scientist at Ecole Polytechnique Federale de Lausanne (EPFL), Lausanne, Switzerland, where he leads the Multimedia group at the School of Engineering. His research interests include programming models and methods for heterogeneous parallel embedded systems. He has been working with design space exploration tools to perform profiling and analysis of dataflow programs for efficient partitioning on manycore platforms. Dr. Mattavelli has supervised at least seven PhD students through graduation and at least five post-doctoral fellows. He has published more than 50 articles in scientific journals (several of which have hundreds of citations).

Web: https://people.epfl.ch/cgi-bin/people?id=102553&lang=en&cvlang=en

 

Fog-Aided Wireless Networks: An Information-Theoretic View

Speaker: Prof. Osvaldo Simeone, King's College London, U.K.
Time: 10:15 - 11:45, September 6, 2017
Venue: Room Systemet, floor 2, B-building, Campus Valla, Linköping University, Sweden. 

Abstract:

Fog-aided wireless networks are an emerging class of wireless systems that leverage the synergy and complementarity of cloudification and edge processing, two key technologies in the evolution towards 5G systems and beyond. The operation of fog-aided wireless networks poses novel fundamental research problems pertaining to the optimal management of the communication, caching and computing resources at the cloud and at the edge, as well as to the transmission on the fronthaul network connecting cloud and edge. In this talk, it will be argued via specific examples concerning the problem of content delivery that network information theory provides a principled framework to develop fundamental theoretical insights and algorithmic guidelines on the optimal design of fog-aided networks. This work has received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No. 725731).

Biography:

Osvaldo Simeone is a Professor of Information Engineering with the Centre for Telecommunications Research at the Department of Informatics of King's College London. He received an M.Sc. degree (with honors) and a Ph.D. degree in information engineering from Politecnico di Milano, Milan, Italy, in 2001 and 2005, respectively. He was previously a Professor with the Center for Wireless Information Processing (CWiP) at New Jersey Institute of Technology (NJIT). His research interests include wireless communications, information theory, optimization and machine learning. Dr Simeone is a co-recipient of the 2017 JCN Best Paper Award, the 2015 IEEE Communication Society Best Tutorial Paper Award and of the Best Paper Awards of IEEE SPAWC 2007 and IEEE WRECOM 2007. He was awarded a Consolidator grant by the European Research Council (ERC) in 2016. His research has been supported by the U.S. NSF, the ERC, the Vienna Science and Technology Fund, as well by a number of industrial collaborations. He is currently a Distinguished Lecturer of the IEEE Information Theory Society. Dr Simeone is a co-author of a monograph, an edited book published by Cambridge University Press and more than one hundred research journal papers. He is a Fellow of the IEEE.

 

On System-Level Analysis & Design of Cellular Networks: The Magic of Stochastic Geometry

Speaker: Prof. Marco Di Renzo, Paris-Saclay University / CNRS, France
Time: 13:15 - 14:45, August 23, 2017
Venue: Room Systemet, floor 2, B-building, Campus Valla, Linköping University, Sweden 
 

Abstract:

This talk aims to provide a comprehensive crash course on the critical importance of spatial models for accurate system-level analysis and optimization of emerging 5G ultra-dense and heterogeneous cellular networks. Due to the increased heterogeneity and deployment density, new flexible and scalable approaches for modeling, simulating, analyzing, and optimizing cellular networks are needed. Recently, a new approach has been proposed: it is based on the theory of point processes and leverages tools from stochastic geometry for tractable system-level modeling, performance evaluation, and optimization. The potential of stochastic geometry for modeling and analyzing cellular networks will be investigated for application to several emerging case studies, including massive MIMO, mmWave communication, and wireless power transfer. In addition, the accuracy of this emerging approach will be assessed against real base station locations and building footprints from two publicly available databases in the United Kingdom (OFCOM and Ordnance Survey). This topic is highly relevant to graduate students and researchers from academia and industry who are interested in understanding the potential of a variety of candidate communication technologies for 5G networks.
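The point-process approach can be sketched with a small Monte Carlo experiment: base stations drawn from a homogeneous Poisson point process on a disk, Rayleigh fading, nearest-BS association, and an empirical SIR coverage probability. All parameters below are illustrative; the point of the stochastic-geometry machinery discussed in the talk is to replace such simulation with tractable closed-form expressions.

```python
# Empirical downlink SIR coverage under a Poisson point process of base
# stations, Rayleigh fading, and nearest-BS association (illustrative
# densities and threshold; interference-limited, noise ignored).
import math, random

random.seed(1)

def poisson(lam):                        # Knuth's method, fine for small lam
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

def covered(density=0.2, radius=5.0, alpha=4.0, threshold=1.0):
    n = poisson(density * math.pi * radius ** 2)   # number of BSs on the disk
    if n == 0:
        return False
    dists = []
    for _ in range(n):
        r = radius * math.sqrt(random.random())    # uniform position in disk
        dists.append(max(r, 1e-9))                 # distance to user at origin
    dists.sort()
    # Received powers: independent unit-mean exponential (Rayleigh) fading.
    rx = [random.expovariate(1.0) * d ** -alpha for d in dists]
    if len(rx) == 1:
        return True                                # no interferer at all
    return rx[0] / sum(rx[1:]) >= threshold        # serve from the nearest BS

trials = 2000
p_cov = sum(covered() for _ in range(trials)) / trials
```

For these parameters the empirical coverage probability lands in the vicinity of the well-known analytical value for this setup (roughly 0.56 for a 0 dB threshold and path-loss exponent 4 in an infinite network), up to edge effects of the finite disk.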

Biography:

Marco Di Renzo received the "Laurea" and Ph.D. degrees in Electrical and Information Engineering from the University of L'Aquila, Italy, in 2003 and 2007, respectively. In October 2013, he received the Doctor of Science degree from the University Paris-Sud, France. Since 2010, he has been a "Chargé de Recherche Titulaire" with the CNRS (CNRS Associate Professor) in the Laboratory of Signals and Systems of Paris-Saclay University - CNRS, CentraleSupélec, Univ Paris-Sud, France. He is an Adjunct Professor at the University of Technology Sydney, Australia, a Visiting Professor at the University of L'Aquila, Italy, and a co-founder of the university spin-off company WEST Aquila s.r.l., Italy. He serves as the Associate Editor-in-Chief of the IEEE Communications Letters, and as an Editor of the IEEE Transactions on Communications and the IEEE Transactions on Wireless Communications. He is a Distinguished Lecturer of the IEEE Vehicular Technology Society and the IEEE Communications Society. He is a recipient of several awards, and a frequent tutorial and invited speaker at IEEE conferences.

 

Title: Emerging analog filtering techniques

Speaker: Prof. Antonio Liscidini, University of Toronto
Time: June 22, 2017
Venue: Room E:2311, Faculty of Engineering, Lund University
 

Abstract: This talk will introduce three different techniques tailored to the implementation of channel-selection filters in wireless transceivers: adaptive analog filters, filtering ADCs and passive switched-capacitor circuits.
The adaptive filters presented shape the filtering profile as a function of the operating scenario without the need for any control loop. This makes it possible to optimize the filter design for minimum average power consumption, rather than for the peak dissipation of the worst-case scenario, which occurs with very low probability.

In the second part of the talk, a filtering ADC is presented. Although the interferers are suppressed before the analog-to-digital conversion,
the filter profile is entirely defined in the digital domain through a reconfigurable filter able to track and suppress unwanted interferers.
The talk ends with a discussion of passive switched-capacitor filters. A new, intuitive continuous-time model will be introduced that makes it
easy to design high-order topologies, even with complex-conjugate poles, without the need for any active device.

Measurement results on three different prototypes will be provided.

 

Biography: Antonio Liscidini (S’99 M’06 SM ‘13) was born in Tirano, Italy, in 1977. He received the Laurea degree (summa cum laude) and Ph.D. in electrical engineering from the University of Pavia, Pavia, Italy, in 2002 and 2006 respectively. He was a summer intern at National Semiconductor (Santa Clara, CA) in 2003, studying polyphase filters and CMOS LNAs. From 2008 to 2012 he was an assistant professor at the University of Pavia and a consultant for Marvell Semiconductor in the area of integrated circuit design. In December 2012 he joined the Edward S. Rogers Sr. Department of Electrical & Computer Engineering of the University of Toronto. His research interests are in the implementation of transceivers and frequency synthesizers for cellular and ultra-low-power applications.

 

Title: Algorithms, Architectures, and Testbeds for 5G Wireless Communication Systems

Speaker: Professor Joseph R. Cavallaro, Rice University, USA
Time: 13:15, March 23, 2017
Venue: Faculty of Engineering, Lund University

Abstract: Wireless communication system concepts for 5G include a variety of advanced physical layer algorithms to provide high data rates and increased efficiency. Each of these algorithms poses different challenges for real-time performance based on the tradeoffs between computation, communication, and I/O bottlenecks and area, time, and power complexity. In particular, Massive MIMO systems can provide many benefits for both uplink detection and downlink beamforming as the number of base station antennas increases. Similarly, channel coding, such as LDPC, can support high data rates in many channel conditions. At the RF level, limited available spectrum is leading to noncontiguous channel allocations where digital pre-distortion (DPD) can be used to improve power amplifier efficiency. Each of these schemes imposes complex system organization challenges in the interconnection of multiple RF transceivers with multiple memory and computation units with multiple data rates within the system. Parallel numerical methods can be applied to trade off computational complexity with minimal effect on error rate performance. Simulation acceleration environments can be used to provide thorough system performance analysis. In this talk, we will focus on design tools for high level synthesis (HLS) to capture and express parallelism in wireless algorithms. This also includes the mapping to GPU and multicore systems for high speed simulation. HLS can also be applied to FPGA and ASIC synthesis; however, there exist tradeoffs between area, flexibility, and reuse of designs. Heterogeneous system architectures as expressed by Systems on Chip (SoC) attempt to address these system issues. The talk will conclude with a discussion of computation testbeds from supercomputers through desktop GPUs to single board systems. The integration with radio testbeds from WARP and USRP to NI and Argos prototype massive MIMO systems will be explored.

 

Biography: Joseph R. Cavallaro received the B.S. degree from the University of Pennsylvania, Philadelphia, Pa, in 1981, the M.S. degree from Princeton University, Princeton, NJ, in 1982, and the Ph.D. degree from Cornell University, Ithaca, NY, in 1988, all in electrical engineering. From 1981 to 1983, he was with AT&T Bell Laboratories, Holmdel, NJ. In 1988, he joined the faculty of Rice University, Houston, TX, where he is currently a Professor of electrical and computer engineering. His research interests include computer arithmetic, and DSP, GPU, FPGA, and VLSI architectures for applications in wireless communications. During the 1996–1997 academic year, he served at the National Science Foundation as Director of the Prototyping Tools and Methodology Program. He was a Nokia Foundation Fellow and a Visiting Professor at the University of Oulu, Finland in 2005 and continues his affiliation there as an Adjunct Professor. He is currently the Director of the Center for Multimedia Communication at Rice University. He is a Fellow of the IEEE and a Member of the IEEE SPS TC on Design and Implementation of Signal Processing Systems and the Chair-Elect of the IEEE CAS TC on Circuits and Systems for Communications. He is currently an Associate Editor of the IEEE Transactions on Signal Processing, the IEEE Signal Processing Letters, and the Journal of Signal Processing Systems. He was Co-chair of the 2004 Signal Processing for Communications Symposium at the IEEE Global Communications Conference and General/Program Co-chair of the 2003, 2004, and 2011 IEEE International Conference on Application-Specific Systems, Architectures and Processors (ASAP), General/Program Co-chair for the 2012, 2014 ACM/IEEE GLSVLSI, Finance Chair for the 2013 IEEE GlobalSIP conference, and TPC Co-Chair of the 2016 IEEE SiPS workshop. He was a member of the IEEE CAS Society Board of Governors during 2014.

 

Prospects for Positioning in 5G

Speaker: Dr. Mario Costa, Huawei Technologies, Finland
Time:  10:15, January 9, 2017
Venue: Algoritmen, Linköping University
 

Abstract: This talk addresses the prospects and enabling technologies for positioning in upcoming 5G networks. 5G is expected to support xMBB, mMTC and IoT services, and the main technologies for the radio access include massive MIMO, cooperative dense networks, and mmW systems. We will focus on MBB and urban environments, and consider an architecture where a mMIMO system provides backhaul to an ultra dense network, both operating at sub-6GHz. In this context, network based positioning of UEs can be achieved with a sub-meter accuracy by reusing the UL reference signals employed for CSIT. Sequential estimation techniques are used for tracking the location of UEs as well as the clock-offsets among the TRPs composing the ultra dense network. In addition to facilitating network synchronization, network based positioning of UEs allows for location aware communications. Among other advantages, location based beamforming provides savings in terms of CSI overhead and increased throughput. This is joint work with M. Koivisto, A. Hakkarainen, and M. Valkama from Tampere University of Technology, as well as P. Kela and K. Leppänen from Huawei Technologies Finland.

Biography: Mário Costa received the D.Sc.(Tech.) degree in electrical engineering from Aalto University (former Helsinki University of Technology), Finland, in 2013. From 2007 to 2014 he was with the Department of Signal Processing and Acoustics at Aalto University. In 2011, he was an External Researcher at Connectivity Solutions Team, Nokia Research Center, and in 2014 he was a visiting postdoctoral research associate at Princeton University. Since 2014, he has been with Huawei Technologies Oy (Finland) Co., Ltd., as a Senior Researcher. His research interests include statistical signal processing and wireless communications.

 

Millimeter Wave Networking: A Medium Access Control Perspective

Speaker: Professor Carlo Fischione,  KTH Royal Institute of Technology
Time:  14:00, December 20, 2016
Venue: Algoritmen, Linköping University


Abstract: Due to spectrum scarcity and the restrictions of legacy communication technologies, millimeter wave (mmWave) bands are considered a promising enabler of multi-gigabit wireless access. However, mmWave communications exhibit high attenuation, vulnerability to obstacles, and sparse-scattering environments, which are not taken into account in existing cellular wireless design approaches. Moreover, the small wavelengths of mmWave signals make it possible to incorporate a large number of antenna elements both at the base stations and at the user equipment, which in turn leads to high directivity gains and fully directional communications. This level of directionality can result in a network that is noise-limited as opposed to interference-limited. The significant differences between mmWave networks and traditional ones challenge the classical design constraints, objectives, and available degrees of freedom. This demands a reconsideration of several traditional design aspects in mmWave systems, where, despite a recent wave of investigations at the physical layer, little has been done for the medium access control (MAC) layer. In this seminar, we give an overview of MAC research activities on mmWaves.


Biography: Dr. Carlo Fischione is currently a tenured Associate Professor at KTH Royal Institute of Technology, Electrical Engineering and ACCESS Linnaeus Center, Stockholm, Sweden. He received the Ph.D. degree in Electrical and Information Engineering (3/3 years) in May 2005 from University of L’Aquila, Italy, and the Laurea degree in Electronic Engineering (Laurea, Summa cum Laude, 5/5 years) in April 2001 from the same University. He has held research positions at Massachusetts Institute of Technology, Cambridge, MA (2015, Visiting Professor); Harvard University, Cambridge, MA (2015, Associate); University of California at Berkeley, CA (2004-2005, Visiting Scholar, and 2007-2008, Research Associate); and Royal Institute of Technology, Stockholm, Sweden (2005-2007, Research Associate). His research interests include optimization with applications to wireless sensor networks, networked control systems, wireless networks, security and privacy. He has received or co-received a number of awards, including the best paper award from the IEEE Transactions on Industrial Informatics (2007), the best paper awards at the IEEE International Conference on Mobile Ad-hoc and Sensor Systems 05 and 09 (IEEE MASS 2005 and IEEE MASS 2009), and the Best Paper Award of the IEEE Sweden VT-COM-IT Chapter (2014). He is Associate Editor of Elsevier Automatica, has chaired or served as a technical member of program committees of several international conferences, and serves as a referee for technical journals. He is co-founder and CTO of the sensor networks start-up company MIND Music Labs. He is a Member of IEEE (the Institute of Electrical and Electronics Engineers) and an Ordinary Member of DASP (the Italian history academy Deputazione Abruzzese di Storia Patria).

 

Estimation over MIMO Fading Channels: Outage and Diversity Analysis

Speaker: Professor Kimmo Kansanen, Norwegian University of Science and Technology (NTNU)
Time: 10:15, December 12, 2016
Venue: Visionen, Linköping University

Abstract: Uncoded (analog) transmission of first-order Gauss-Markov processes over fading channels is considered. The optimal MMSE estimator at the receiver is the Kalman filter with random instantaneous estimation error variance, assuming perfect channel estimation. We analyze the estimation error outage probability as a means of characterizing the estimation quality. We consider the cases of scalar processes over fading, parallel, and MIMO channels, and find the diversity order of the outage probability. Some extended results on Wiener filtering of oversampled vectors and related characterization of the estimation outage at high SNR are shown.
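As a minimal illustration of the setting (a toy sketch written for this page, not the speaker's analysis; all parameters are invented), the scalar case can be simulated by running the Kalman Riccati recursion with a fresh Rayleigh channel gain at every step and counting how often the resulting error variance exceeds a target:

```python
import numpy as np

rng = np.random.default_rng(1)

def estimation_outage(snr_db, p_max, a=0.95, q=1.0, steps=200, trials=500):
    """Empirical estimation-error outage probability for uncoded (analog)
    transmission of a scalar Gauss-Markov process x[k+1] = a x[k] + w[k]
    over a Rayleigh fading channel y[k] = h[k] x[k] + v[k], with perfect
    channel knowledge at the receiver. The Kalman error variance is random
    through h[k]; an outage occurs when it exceeds p_max."""
    snr = 10 ** (snr_db / 10)
    sig_var = q / (1 - a ** 2)        # stationary variance of the process
    noise_var = sig_var / snr         # E[h^2] = 1, so average receive SNR = snr
    outages = 0
    for _ in range(trials):
        p = sig_var                   # prior error variance before any observation
        for _ in range(steps):
            h = rng.rayleigh(scale=np.sqrt(0.5))   # Rayleigh gain with E[h^2] = 1
            p_pred = a ** 2 * p + q                # Riccati prediction step
            gain = p_pred * h / (h ** 2 * p_pred + noise_var)
            p = (1 - gain * h) * p_pred            # measurement update
        if p > p_max:
            outages += 1
    return outages / trials
```

Plotting this outage probability against SNR on a log-log scale exposes the diversity order discussed in the talk: the decay slope is tied to the behavior of the fading distribution near zero.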

Biography: Kimmo Kansanen received his M.Sc. (EE) and Dr.Tech. degrees from the University of Oulu, Finland, in 1998 and 2005, respectively, where he was a research scientist and project manager at the Centre for Wireless Communications. He has been with the Norwegian University of Science and Technology in Trondheim, Norway, since 2006, and a professor there since 2016. He is on the editorial board of Elsevier Physical Communications. His research interests are within communications and signal processing.

 

A component-based approach to semantics

Speaker: Peter D. Mosses, Professor Emeritus, Computer Science, Swansea University
Time: 13:15-14:30, December 7, 2016
Venue: R1107, Halmstad University 

Abstract: A component-based semantics of a programming language involves a collection of so-called fundamental
programming constructs, or ‘funcons’. The definition of each funcon is an independent module,
intended to be used as an off-the-shelf component. The semantics of a language is defined by specifying
a translation from programs to funcons, which is generally much simpler than specifying the semantics
of the language constructs directly. New funcons can be defined when needed, although it is expected
that many funcons will be widely reused in definitions of different languages. After completing further
case studies, the PLanCompS project intends to publish an initial collection of validated funcon definitions
in an open access digital library.

ELLIIT Poster Mosses.pdf

 

Scalable and Efficient On-Demand Video Streaming: Retrospective and Recent Work

Speaker: Derek Eager, Professor, University of Saskatchewan, Canada
Time: 13:15-14:15, November 22, 2016
Venue: Alan Turing

Abstract: Streaming video is responsible for more than half of all Internet traffic. This talk will begin with a retrospective on the speaker’s past work, with a variety of collaborators and over many years, on techniques for achieving scalable and efficient delivery of streaming video through aggregation of requests.  Two recent projects will then be described that concern improving scalability and efficiency by other means.  The first project is joint work with Niklas Carlsson (Linkoping U.) and concerns edge-network caching in the context of applications with high rates of addition of new content, most of which will be accessed only a small number of times from any particular edge network.  The second project concerns improving server efficiency.  Most streaming video is now delivered by web servers over HTTP rather than using dedicated video servers, and these web servers are not necessarily tuned for efficient video delivery.  This portion of the talk will focus in particular on joint work with Jim Summers and Tim Brecht (U. of Waterloo) and Alex Gutarin (Netflix) on characterizing the workload of a Netflix server and implications for improving server efficiency.
 

Biography: Derek Eager is a Professor in the Department of Computer Science at the University of Saskatchewan, Canada.  He received the Ph.D. degree in Computer Science from the University of Toronto, Canada.  His research interests are in the areas of performance evaluation, content distribution, distributed systems, and networks.  Past professional activities include service as Chair of SIGMETRICS, the ACM Special Interest Group on performance evaluation, and service as program chair and as a general co-chair for the annual SIGMETRICS conference.  In 2010 he was a co-recipient of one of the three inaugural ACM SIGMETRICS Test of Time Awards.

 

Emerging Topics for Visualization Research

Speaker: Kwan-Liu Ma
Time:  11:00 - 12:00, October 20, 2016
Venue: VR arena at the Visualization Center, Norrköping
 

Abstract: Visualization is a powerful exploration and storytelling tool for large complex, multidimensional data. The design of a visualization solution must take into account the data characteristics, the media used, and the purpose of the visualization, each of which presents some unique challenges. These challenges suggest new topics for visualization research. I will discuss some of these topics and present related research results produced by my group.
 

Biography: Kwan-Liu Ma is a professor of computer science and the chair of the Graduate Group in Computer Science (GGCS) at the University of California-Davis, where he directs VIDI Labs and UC Davis Center of Excellence for Visualization. His research spans the fields of visualization, computer graphics, high-performance computing, and user interface design. Professor Ma received his PhD in computer science from the University of Utah in 1993. During 1993-1999, he was with ICASE/NASA Langley Research Center as a research scientist. He joined UC Davis in 1999.  Professor Ma received numerous recognitions for his research contributions such as the NSF Presidential Early-Career Research Award (PECASE) in 2000, the UC Davis College of Engineering's Outstanding Mid-Career Research Faculty Award in 2007,  and the 2013 IEEE VGTC Visualization Technical Achievement Award. He was elected an IEEE Fellow in 2012. 

 

Exciting and disruptive innovations: Julia for programming and Modia for modeling

Speaker: Dr. Hilding Elmqvist
Time:  10.15-11:00, October 19, 2016
Venue: Visionen, Linköping University

Abstract: There are some exciting and disruptive innovations, which I will tell you about. The Julia language and compiler (http://julialang.org/) is the first such disruptive innovation. Julia is a scripting and strongly typed programming language with type inference suited for technical computations. Julia has the convenience of Matlab and the power of modern programming languages. Julia offers modern data structures, parallel processing, parameterized types and powerful libraries of functions for various applications. The most important feature is Julia's meta-programming: it is possible to define structural macros for which the AST (abstract syntax tree) becomes available for further processing by Julia algorithms and for just-in-time generation of machine code. The second disruptive innovation is a new experimental modeling language with similarities to Modelica, which we call Modia. Modia is based on the Julia macro concept for models, extends, equations, connections, etc., and uses native Julia functions for algorithms. The intention of this effort is to provide an open-source implementation of the semantics of a Modelica-like modeling language. This will enable experimenting with various advanced capabilities such as dynamic model changes and recompilation of models when objects enter or leave the model scope or components change mode. Other such advanced features are collision handling and flexible models.

Biography: Hilding Elmqvist attained his Ph.D. at the Department of Automatic Control, Lund Institute of Technology in 1978. His Ph.D. thesis contains the design of a novel object-oriented model language called Dymola and algorithms for symbolic model manipulation. It introduced a new modeling methodology based on connecting submodels according to the corresponding physical connections instead of signal flows. Submodels were described declaratively by equations instead of assignment statements.

Elmqvist spent one year in 1978-1979 at the Computer Science Department at Stanford University, California. His research continued in 1979-1984 on languages for implementation of control systems (LICS) and for visual modeling (HIBLIZ). Elmqvist was in 1984-1990 the principal designer and project manager at a subsidiary to Alfa-Laval called SattControl in Malmö for developing SattGraph, a user interface system for process control and SattLine, a graphical, object-oriented and distributed control system. In 1990-1992, he worked for Alfa-Laval in Toronto.

In 1992, Elmqvist founded Dynasim AB in Lund, Sweden. The company developed software tools for modeling, simulation and design of large dynamical systems. The primary product is Dymola for object-oriented modeling allowing graphical composition of models and 3D visualization of model dynamics.

Elmqvist took the initiative in 1996 to organize an international effort to design the next generation object-oriented language for physical modeling, Modelica. The Dymola software has support for the Modelica language since 1998. In April 2006, Dynasim AB was acquired by Dassault Systemes. The Dymola/Modelica technology is central for their new strategy on CATIA Systems.

Elmqvist was the worldwide Chief Technology Officer for CATIA Systems within Dassault Systèmes until December 2015.

In January 2016, Elmqvist founded Mogram AB. Current activities include designing and implementing an experimental modeling language called Modia.

Experience in setting a software engineering research agenda

Speaker: Professor Kevin Ryan (Lero & Univ. of Limerick)
Time: 13:00 - 14:30, October 17, 2016
Venue: J1650, Blekinge Tekniska Högskola, Karlskrona
 

Abstract: Lero – the Irish Software Research Centre – is a collaboration of all the Irish universities, led from the University of Limerick. It was set up to maximise the depth, focus and impact of software engineering research in Ireland by bringing together not just academic researchers but also major industrial partners, both large and small. Given the relatively small scale of all activities in Ireland this shared agenda was essential but how could it be set and how well has it worked? This talk traces the background to the establishment of Lero; the false starts that preceded it and the initial difficulties it encountered.

Lero’s research agenda has always reflected a tension between academic ambition and industrial practicality. Evaluation panels, funding agencies and political priorities were all influential – sometimes in contradictory ways. In the past 5 years Lero has competed for and won two further rounds of funding and is now well established as part of the international SE research landscape. Reflection on how this happened will include such factors as national pride, strong social links, key recruits, Lero’s values and, undoubtedly, sheer good fortune. It is expected that attendees will find lessons that can help them set their own research agendas within their own unique contexts.
 

Biography: Professor Kevin Ryan is Emeritus Professor of Information Technology at the University of Limerick and was the founding Director of Lero – the Irish Software Research Centre (www.lero.ie). Lero is a partnership of academic and industrial organisations whose aim is to advance the quality and quantity of software engineering research being conducted in Ireland. Since 2004 Lero has attracted funding of over €120m, mainly from Science Foundation Ireland, and now involves researchers at all the Irish universities with its headquarters at UL. From 1990 to 2004 Kevin Ryan was successively Head of Computer Science, Dean of Informatics and Vice President Academic and Registrar at the University of Limerick.

During this period he brought together and led the team that established the Lero centre.  Kevin Ryan holds degrees of BA (Maths & Economics), BAI (Engineering) and PhD (Computer Science) from Trinity College Dublin and is a fellow of both the Irish Computer Society and the Institute of Engineers of Ireland. Over the past 40 years he has lectured and researched on software topics at universities and industry in Ireland, the UK, the USA, Canada, Japan, Africa and Sweden. He has been an adviser to the Irish government on the development of the Irish software industry and has acted as consultant to industry and to international funding bodies. He has published papers on software engineering methods and tools, software requirements engineering and on the role of technology in development. He served on the editorial board of 3 journals. He has been a director of a number of start-up software companies. He currently works as a freelance consultant.

Graphical Models and Inference: Insights from Spatial Coupling

Speaker: Prof. Henry D. Pfister, Duke University, USA
Time:  10:00, September 16, 2016
Location:   Room E:B, ground floor, E-building, Ole Römers väg 3, Lund University, Lund

Abstract: This talk focuses on recent theoretical and practical advances in coding, compressed sensing, and multiple-access communication based on spatially-coupled graphical models.  The goal is to introduce the key ideas and insights using concrete examples.  First, we introduce factor graphs and belief propagation (BP) as tools for understanding large systems of dependent random variables.  Then, we describe how these techniques are applied to problems in signal processing and communications.  Next, we use the example of low-density parity-check (LDPC) codes on the binary erasure channel to introduce the idea of density-evolution analysis.  A key result is that BP decoding algorithms have a noise threshold below which recovery succeeds with high probability.  Finally, we discuss how extrinsic-information transfer (EXIT) functions can be used to compare the performance between BP and optimal decoding.
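To make the density-evolution idea concrete, here is a short sketch (written for this page, regular ensembles only; the function names are invented) that iterates the BEC recursion x_{l+1} = eps * (1 - (1 - x_l)^(dc-1))^(dv-1) and bisects on the channel erasure rate eps to locate the BP noise threshold:

```python
def bec_density_evolution(eps, dv, dc, iters=20000, tol=1e-12):
    """Iterate the density-evolution recursion for a regular (dv, dc) LDPC
    ensemble on the binary erasure channel with erasure probability eps.
    Returns the fixed-point erasure fraction (0 means BP decoding succeeds)."""
    x = eps
    for _ in range(iters):
        x_new = eps * (1 - (1 - x) ** (dc - 1)) ** (dv - 1)
        if abs(x_new - x) < tol:
            break
        x = x_new
    return x_new

def bp_threshold(dv, dc, precision=1e-3):
    """Bisect on eps for the largest erasure rate whose fixed point is
    (numerically) zero: the BP noise threshold of the ensemble."""
    lo, hi = 0.0, 1.0
    while hi - lo > precision:
        mid = 0.5 * (lo + hi)
        if bec_density_evolution(mid, dv, dc) < 1e-6:
            lo = mid          # decoding succeeds: threshold is above mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For the regular (3,6) ensemble this gives a threshold of roughly 0.4294, to be compared with the Shannon limit eps = 0.5 for rate-1/2 codes on the BEC; spatial coupling is precisely what closes this gap.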
 
Biography: Henry D. Pfister received his Ph.D. in electrical engineering in 2003 from the University of California, San Diego and he is currently an associate professor in the electrical and computer engineering department of Duke University.  Prior to that, he was a professor at Texas A&M University (2006-2014), a post-doctoral fellow at the École Polytechnique Fédérale de Lausanne (2005-2006), and a senior engineer at Qualcomm Corporate R&D in San Diego (2003-2004).
He received the NSF Career Award in 2008, the Texas A&M ECE Department Outstanding Professor Award in 2010, the IEEE COMSOC best paper in Signal Processing and Coding for Data Storage in 2007, and a 2016 STOC Best Paper Award.  He is currently an associate editor in coding theory for the IEEE Transactions on Information Theory (2013-2016) and a Distinguished Lecturer of the IEEE Information Theory Society (2015-2016).
His current research interests include information theory, communications, probabilistic graphical models, and machine learning.

Multiport Communication Systems

Speaker:  Professor Josef A Nossek, Technical University of Munich
Time: September 2, 2016, 14:00
Location: Visionen, Linköping University

Abstract: A major difficulty in joining mathematical theories of communication (like information theory) with physical theories of communication (like electrodynamics), lies in the fact that the concept of conjugated pairs is usually not included in information theory. This constitutes a major hurdle when one tries to apply information theory to real-world problems, such as the analysis and optimization of radio communication systems. Dealing with pairs of conjugated variables is a prerequisite for considerations of energy and power flow and of the interaction of systems with their environment, especially with sources of noise. It is therefore desirable to establish an interface between the physical and the mathematical theories of communication which makes sure that all relevant physical concepts are correctly represented in information theory. A circuit theoretic multiport approach is proposed as the implementation of such an interface. In the talk, the concepts of multiport communications are introduced and illustrated with selected examples of their application to wireless multi-antenna communication systems.

Biography: Josef A. Nossek (S’72–M’74–SM’81–F’93–LF’13) received the Dipl.-Ing. and Dr. techn. degrees in electrical engineering from Vienna University of Technology, Austria, in 1974 and 1980, respectively. He joined Siemens AG, Germany, in 1974, where he was engaged in filter design for communication systems. From 1987 to 1989, he was the Head of the Radio Systems Design Department, where he was instrumental in introducing high-speed VLSI signal processing into digital microwave radio. From 1989 to 2016, he has been Head of the Institute of Circuit Theory and Signal Processing at the Technische Universität München (TUM), Germany. In 2016 he joined the Universidade Federal do Ceara, Fortaleza, Brasil as full professor. He was the President Elect, President, and Past President of the IEEE Circuits and Systems Society in 2001, 2002, and 2003, respectively. He was President of Verband der Elektrotechnik, Elektronik und Informationstechnik (VDE) in 2007 and President of the Convention of National Associations of Electrical Engineers of Europe (EUREL) in 2013. He was the recipient of the ITG Best Paper Award in 1988, the Mannesmann Mobilfunk (currently Vodafone) Innovations Award in 1998, and the Award for Excellence in Teaching from the Bavarian Ministry for Science, Research and Art in 1998. From the IEEE Circuits and Systems Society, he received the Golden Jubilee Medal for Outstanding Contributions to the Society in 1999 and the Education Award in 2008. He was the recipient of the Order of Merit of the Federal Republic of Germany (Bundesverdienstkreuz am Bande) in 2008. In 2011 he received the IEEE Guillemin-Cauer Best Paper Award and in 2013 the honorary doctorate (Dr. h.c.) from the Peter Pazmany Catholic University, Hungary. He was awarded the VDE Ring of Honor in 2014 and the TUM Emeritus of Excellence in 2016. 

 

Bounds on the Capacity of Optical Fiber Channels

Speaker: Professor Gerhard Kramer, Technical University of Munich
Venue: Algoritmen, entrance 29, House B, Linköping University
Date and time:  June 2, 2016, 10:15


Abstract:  The capacity of optical fiber channels seems difficult to compute or even bound. Existing capacity lower bounds are based on numerical simulations using the split-step Fourier method. Representative lower bounds are reviewed. A recent capacity upper bound applies two basic tools: maximum entropy under a correlation constraint and Shannon’s entropy power inequality (EPI). The main insight is that the non-linearity that is commonly used to model waveform propagation does not change the differential entropy of a signal. As a result, the spectral efficiency of a basic fiber model is at most log(1+SNR) per mode, where SNR is the receiver signal-to-noise ratio. The results extend to other channels, including basic models of multi-mode fiber.
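The single-span version of the argument can be sketched as follows (a paraphrase of the stated tools, not the talk's full derivation; in the multi-span case the EPI is what controls how the noise added in successive fiber segments accumulates). With per-mode signal power P, Gaussian noise power N, and the nonlinearity f modeled as power- and entropy-preserving:

```latex
% Model: Y = f(X) + Z, with h(f(X)) = h(X), E[|f(X)|^2] \le P, and Z
% circularly symmetric complex Gaussian of power N, independent of X.
% Maximum entropy under the power constraint gives
h(Y) \le \log\bigl(\pi e\,(P+N)\bigr),
\qquad
h(Y \mid X) = h(Z) = \log(\pi e N),
% so that, in nats per complex symbol (per mode),
C = \max_{p_X}\bigl[\,h(Y) - h(Y \mid X)\,\bigr]
  \le \log\!\Bigl(1 + \frac{P}{N}\Bigr) = \log(1 + \mathrm{SNR}).
```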

Biography: Gerhard Kramer works at the Technical University of Munich. He received his doctoral degree from the ETH Zürich in 1998. Since then, he was with Endora Tech AG for 2 years, the Math Center at Bell Labs for over 8 years, and the University of Southern California (USC) for almost 2 years. He joined TUM in 2010. His research interests are primarily in information theory and communications theory, with applications to wireless, copper, and optical fiber networks. 
 

Understanding the Shape of Data with Topological Data Analysis and Visualization, from Vector Fields to Brain Networks

Speaker: Bei Wang, Assistant Professor at the Scientific Computing and Imaging (SCI) Institute of the University of Utah
Time: 15:00-16:00, May 4, 2016
Location: VR arena at Norrköping Visualization Center.
 

Abstract: Large and complex data arise in many application domains, such as nuclear engineering, combustion simulation, weather prediction and brain imaging. However, their explosive growth in size and complexity is more than enough to exhaust our ability to apprehend them directly. Topological techniques which capture the "shape of data" have the potential to extract salient features and to provide robust descriptions of large and complex data.

My research develops pertinent theoretical and algorithmic advancements in topological data analysis, and establishes their applications in simplifying and accelerating the visualization and analysis of large, complex data sets. In particular, in this talk I will describe a novel visualization framework for the simplification and visualization of vector fields, based on the topological notion of robustness that quantifies their structural stability.

I will also discuss several other representative areas of my research that focus on developing novel, scalable, and mathematically rigorous ways to rethink complex forms of data, from brain networks to high-dimensional point clouds and multivariate ensembles.


Biography: Bei Wang is an assistant professor at the School of Computing, and the Scientific Computing and Imaging (SCI) Institute of the University of Utah. Before that, she was a research scientist from 2011 to 2016 and a Postdoctoral Fellow from 2010 to 2011, both at the SCI Institute. She received her Ph.D. in Computer Science from Duke University in 2010. There, she also earned a certificate in Computational Biology and Bioinformatics. Her research interests include data analysis and data visualization, computational topology, computational geometry, computational biology and bioinformatics, machine learning and data mining.

 

Practical issues of reciprocity-based massive MIMO systems

Speaker: Florian Kaltenberger, Eurecom, France
Time: 13:15 - 14:00, 3 May, 2016
Venue: E:2517, E-huset, LTH, Ole Römers väg 3, Lund
 

Abstract: In this talk we discuss some open practical issues of reciprocity-based massive MIMO TDD systems. First we examine the calibration step, which is necessary to establish reciprocity between the UL and DL. Many different calibration methods exist, resulting in different levels of complexity and accuracy. But how accurately do we really need to calibrate? We will try to answer this question with some theoretical results. Secondly, we look into some standardization issues for massive MIMO. In theory, even LTE-Advanced already supports massive MIMO, but in order to exploit its full potential, some changes to the standard are needed. Last but not least, we give an overview of the massive MIMO testbed that is currently being built at Eurecom and show some first results.
 

Biography: Florian Kaltenberger was born in Vienna, Austria in 1978. He received his Diploma degree (Dipl.-Ing.) and his PhD, both in Technical Mathematics (with distinction), from the Vienna University of Technology in 2002 and 2007, respectively. Between 2003 and 2007 he was a junior researcher in the wireless communications group at the Austrian Research Centers GmbH, where he worked on the development of low-complexity smart antenna and MIMO algorithms as well as on the ARC SmartSim real-time hardware channel simulator. He joined Eurecom as a post-doctoral research engineer in 2007 and became an assistant professor in 2011. He currently teaches a course on radio engineering and is part of the team managing the Eurecom real-time open-source testbed OpenAirInterface.org. His research interests include signal processing for wireless communications, MIMO communication systems, receiver design and implementation, MIMO channel modeling and simulation, and hardware implementation issues.

 

Computability Revisited

Speaker: Prof. Jos Baeten, University of Amsterdam, The Netherlands
Time: 15.30, 7 January 2016
Venue: Wigforssalen, Visionen, Halmstad University
 

Abstract: Classical computability theory disregards runtime interaction completely. The strong Church-Turing thesis states that anything a computer can do can also be done by a Turing machine, given enough time and memory. This thesis is invalidated by e.g. a self-driving car, as all possibly crossing pedestrians cannot be put on the Turing tape before starting, and resulting actions cannot be postponed until after finishing. Concurrency theory gives us a proper treatment of interaction. The talk surveys what happens when computability theory is integrated with concurrency theory, which theorems remain valid and which theorems should be adapted. The Reactive Turing Machine is introduced as a model of computability with interaction.

 

Biography: Jos Baeten holds a Ph.D. in mathematics from the University of Minnesota (1985). From 1991 to 2015, he was professor of computer science at the Technische Universiteit Eindhoven (TU/e). In addition, from 2010 to 2012 he was professor of Systems Engineering at TU/e. Since October 1, 2011, he has been general director of CWI in Amsterdam, the Netherlands' research institute for mathematics and computer science. Since January 1, 2015, he has also been part-time professor in Theory of Computing at the Institute of Logic, Language and Computation of the University of Amsterdam. He is well known as a researcher in model-based engineering, in particular in process algebra. To date, he has supervised 29 Ph.D. students. He chairs the steering committee of the CONCUR conferences and is a member of the Koninklijke Hollandsche Maatschappij der Wetenschappen (Royal Holland Society of Sciences and Humanities).

 

Reconstructing Anisotropic BSDFs from Sparse Measurements

Speaker: Gregory Ward, Dolby Laboratories
Time: 15.00-16.00, 3 December 2015
Venue: VR arena at Norrköping Visualization Center
 

Abstract: A mathematical model will fit BSDF data to a well-behaved function, but this limits the classes of materials one can represent.  We have developed a general method that accepts sparse input data and arrives at a complete, 4-D BSDF for rendering.  Employing radial basis functions and displacement interpolation to track peaks in the distribution, we are able to resample real-world BSDFs into a multi-scale representation for efficient Monte Carlo importance sampling.  We will explain our methods and describe our freely available software.

 

Cognitive Radio Transceiver Chips

Speaker: Dr. Eric Klumperink, University of Twente, The Netherlands
Time: 14.00, 24 November 2015
Venue: Lund, E-building, room E:2311
 

Abstract: A Cognitive Radio transceiver senses its radio environment and adaptively utilizes free parts of the radio spectrum. CMOS IC technology is the mainstream technology for implementing smart signal processing, and for reasons of cost and size it is attractive to also integrate the radio-frequency (RF) hardware in CMOS. This lecture discusses design challenges and ideas for radio transceiver ICs designed for cognitive radio applications, with a focus on analog RF. Cognitive radio calls for new functionality, e.g. spectrum sensing, more agility in the radio transmitter, and flexibility in the receiver. Moreover, the technical requirements on the building blocks are more challenging than for traditional single-standard applications, e.g. in bandwidth, programmability, sensing sensitivity, blocker tolerance, linearity, and spurious emissions. Circuit ideas that address these challenges will be discussed, and examples of chips and their achieved performance will be given.
 

Biography: Eric A. M. Klumperink (IEEE Member '98, Senior Member '06) was born on April 4th, 1960, in Lichtenvoorde, The Netherlands. He received the B.Sc. degree from HTS, Enschede, The Netherlands, in 1982. After a short period in industry, he joined the University of Twente in 1984, participating in analog CMOS circuit research resulting in several publications and his Ph.D. thesis "Transconductance Based CMOS Circuits" (1997). In 1998, Eric started as Assistant Professor at the IC-Design Laboratory in Twente, and his research focus changed to RF CMOS circuits. From April to August 2001, he extended his RF expertise during a sabbatical at the Ruhr Universitaet in Bochum, Germany. Since 2006, he has been an Associate Professor, teaching Analog & RF IC Electronics. Eric participates in the CTIT Research Institute, guiding PhD and MSc projects related to RF CMOS circuit design with a focus on Software Defined Radio, Cognitive Radio, and Beamforming. He served as an Associate Editor for the IEEE TCAS-II (2006-2007), IEEE TCAS-I (2008-2009), and the IEEE JSSC (2010-2014), and is a member of the technical program committees of the ISSCC and the IEEE RFIC Symposium. Eric served as IEEE SSC Distinguished Lecturer in 2014/2015, holds 10+ patents, authored and co-authored 150+ internationally refereed journal and conference papers, and was recognized as a contributor of 20+ ISSCC papers over 1954-2013. He is a co-recipient of the ISSCC 2002 and the ISSCC 2009 "Van Vessem Outstanding Paper Award".

 

Some baby steps into new massive MIMO pastures

Speaker: Dr. Michalis Matthaiou, Senior Lecturer, Queen's University Belfast, UK
Time: 18 November, 13.15
Venue: Algoritmen, entrance 29, Building B, Linköping University
 

Abstract: In this talk, we will overview some recent results in the general area of massive MIMO. In the first part, we will elaborate on the concept of space-constrained massive MIMO systems, where an increase in the number of antennas induces a decrease in the inter-antenna spacing. In the second part, we will propose the concept of hybrid processing for massive MIMO relaying and the realisable potential of such a solution.
 

Biography: Michalis Matthaiou is currently a Senior Lecturer at Queen's University Belfast, U.K., after holding an Assistant Professor position at Chalmers University of Technology, Sweden. He obtained his Ph.D. from the University of Edinburgh in 2008. He has held research appointments at Munich University of Technology (TUM), Germany, and the University of Wisconsin-Madison, U.S.A. He is the recipient of the 2011 IEEE ComSoc Young Researcher Award for the Europe, Middle East and Africa Region and of the Best Paper Award at IEEE ICC 2014. He currently serves as an Associate Editor for the IEEE Transactions on Communications and Senior Editor for IEEE Communications Letters, and was the Lead Guest Editor of the special issue on "Large-scale multiple antenna wireless systems" of the IEEE Journal on Selected Areas in Communications. He has published, to date, some 120 IEEE papers on signal processing for wireless communications, massive MIMO systems, hardware-constrained communications, and performance analysis of fading channels.

 

Technologies for Rapid Prototyping and Low Cost Deployment of Novel Radio Systems

Speaker: Matt Ettus, President and founder of Ettus Research
Time: 8 October, 15.15-16.00
Venue: Lund University, Föreläsningssal E:A
 

Abstract: Software defined radio (SDR) promises rapid prototyping and development cycles for new radio communication systems.  Some of that promise has been realized, but there is still room for improvement in the process, largely due to the deep gulfs between the design tools used in various aspects of the design.  In particular, the various processing architectures used (FPGAs, CPUs, GPUs, etc.) all require the designer to operate in separate walled gardens.  We will discuss some exciting new technologies which allow the bridging of these gaps, allowing the use of a consistent design methodology across all computational devices in the system as well as all phases of the system life cycle.  These technologies ease the design, evaluation, and deployment while at the same time reducing costs and time to market for radio systems.  Extensive code reuse and open source lead to low cost and reproducible experimentation and deployment.
 

Biography: Matt Ettus is a core contributor to the GNU Radio project, a free framework for Software Radio, and is the creator of the Universal Software Radio Peripheral (USRP).  In 2004, Matt founded Ettus Research to develop, support and commercialize the USRP family of products.  Ettus Research was acquired by National Instruments in 2010, and Matt continues as its president.  USRPs are in use in over 110 countries for everything from cellular and satellite communications to radio astronomy, medical imaging, and wildlife tracking.  In 2010, the USRP family won the Technology of the Year award from the Wireless Innovation Forum. In the past Matt has designed Bluetooth chips, GPS systems, and high performance microprocessors.  Before that, he received BSEE and BSCS degrees from Washington University and an MSECE degree from Carnegie-Mellon University.  In 2011, Matt was named an eminent member of Eta Kappa Nu, and was awarded the Wireless Innovation Forum International Achievement Award in 2015. He is based in Silicon Valley.

 

Wireless Channel Estimation: Opportunities for Exploiting Structure and Sparsity

Speaker: Prof. Urbashi Mitra, University of Southern California, USA
Time:  8 September 2015 at 10.15
Venue: Algoritmen, B-building, Linköping University
 

Abstract: Wireless communication systems typically require some form of channel state information in order to provide high performance. Traditionally, wireless channels are modeled as linear systems, which may be time-varying depending on the communication scenario. With the advent of very wideband communication and high-speed applications, channel estimation becomes more challenging. Herein we show how exploiting structure inherent in many wireless communication channels can overcome challenges introduced by modern wireless applications. In many wideband signaling scenarios, channels can be modeled as sparse. To be truly practical, one must consider the effects of practical channels (not purely sparse) and transceiver characteristics such as bandlimited pulse shapes in order to design highly accurate channel estimation. We propose hybrid channel models suitable for ultrawideband radio and underwater acoustic systems, based on both sparse and diffuse components, and provide asymptotic performance analyses. These hybrid models can be extended to time-varying channels. We show that vehicle-to-vehicle channels, in particular, show further structure in the form of both sparse specular components and groups of diffuse components. A new channel estimator based on a nested thresholding algorithm is shown to be optimal for this channel structure and offers strong performance improvement over previous methods. Finally, we examine truly wideband channels where mobility induces Doppler scaling (versus Doppler shifts as approximated in narrowband systems). We show that OFDM signaling, after passing through such a multi-scale, multi-path channel, has a low-rank representation. This feature can be employed to improve robustness; we eventually pose the channel estimation problem as a structured spectral estimation where sparsity can be exploited with classical spectral techniques. Strong performance gains are achieved over previously proposed methods.
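As a rough illustration of the thresholding idea behind sparse channel estimation, here is a generic hard-thresholding sketch (not the nested algorithm from the talk; the tap positions, noise level, and threshold are all hypothetical values chosen for illustration):

```python
import random

def hard_threshold(y, tau):
    """Keep taps whose magnitude reaches tau; zero the rest (sparse estimate)."""
    return [v if abs(v) >= tau else 0.0 for v in y]

random.seed(0)
h = [0.0] * 64
h[3], h[17], h[40] = 1.0, -0.8, 0.5              # sparse "specular" taps
y = [v + random.gauss(0, 0.05) for v in h]       # noisy channel observation
h_hat = hard_threshold(y, tau=0.25)              # threshold well above noise floor
support = [i for i, v in enumerate(h_hat) if v != 0.0]
print(support)                                   # recovered tap positions
```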

 

Biography of the speaker: Urbashi Mitra received the B.S. and the M.S. degrees from the University of California at Berkeley and her Ph.D. from Princeton University. After a six-year stint at the Ohio State University, she joined the Department of Electrical Engineering at the University of Southern California, Los Angeles, where she is currently a Professor. Dr. Mitra is a member of the IEEE Information Theory Society's Board of Governors (2002-2007, 2012-2017) and the IEEE Signal Processing Society's Technical Committee on Signal Processing for Communications and Networks (2012-2016). She is the inaugural Editor-in-Chief of the IEEE Transactions on Molecular, Biological and Multi-scale Communications. Dr. Mitra is a Distinguished Lecturer for the IEEE Communications Society (2015-2016) and a Fellow of the IEEE. She is the recipient of: the 2012 Globecom Signal Processing for Communications Symposium Best Paper Award, the 2012 NAE Lillian Gilbreth Lectureship, a USC Center for Excellence in Research Fellowship (2010-2013), the 2009 DCOSS Applications & Systems Best Paper Award, a Texas Instruments Visiting Professorship (Fall 2002, Rice University), the 2001 Okawa Foundation Award, the 2000 OSU College of Engineering Lumley Award for Research, the 1997 OSU College of Engineering MacQuigg Award for Teaching, and a 1996 National Science Foundation (NSF) CAREER Award. Dr. Mitra currently serves on the IEEE Fourier Award for Signal Processing committee, the IEEE Paper Prize committee, and the IEEE James H. Mulligan, Jr. Education Medal committee. She has been an Associate Editor for the following IEEE publications: Transactions on Signal Processing (2012-2015), Transactions on Information Theory (2007-2011), Journal of Oceanic Engineering (2006-2011), and Transactions on Communications (1996-2001).
She has co-chaired the technical programs of the 2014 IEEE International Symposium on Information Theory in Honolulu, HI, the 2014 IEEE Information Theory Workshop in Hobart, Tasmania, the IEEE 2012 International Conference on Signal Processing and Communications, Bangalore, India, and the IEEE Communication Theory Symposium at ICC 2003 in Anchorage, AK, and was general co-chair of the first ACM Workshop on Underwater Networks at Mobicom 2006, Los Angeles, CA. Dr. Mitra was the Tutorials Chair for IEEE ISIT 2007 in Nice, France, and the Finance Chair for IEEE ICASSP 2008 in Las Vegas, NV. Dr. Mitra has held visiting appointments at the Delft University of Technology, Stanford University, Rice University, and the Eurecom Institute. She served as co-Director of the Communication Sciences Institute at the University of Southern California from 2004 to 2007. Her research interests are in wireless communications, communication and sensor networks, biological communication networks, underwater acoustic communications, detection and estimation, and the interface of communication, sensing, and control.

 

Channel Measurements and Random Access for Massive MIMO systems

Speaker: Dr. Elisabeth de Carvalho, Aalborg University, Denmark
Time: September 2, 2015, 10.15 am

 

Adaptive real-time resource management

Speaker: Prof. Gerhard Fohler, TU Kaiserslautern, Germany
Time: June 3rd, 15.30
Venue: Lunds tekniska högskola, M-huset, Reglertekniks seminarierum, Ole Römers väg 1
 

Abstract: Early real-time systems dealt with simple tasks of well-known characteristics. The effort and cost to obtain information about their worst-case behavior was acceptable in contexts with strict requirements. As applications have become more complex and less predictable, obtaining detailed information is often no longer possible, or the effort not justifiable. An example is high-quality video processing, where e.g. decoding times of frames are highly content-dependent and variable. In this talk, we will present a resource management approach based on abstractions of resource states in the form of a small discrete number of levels as the basis for dealing with variability, providing a small-sized view of the system state. A central resource manager takes adaptation decisions only on these levels, while real-time methods provide temporal isolation to ensure the assigned levels are maintained.
 

Biography of the speaker: Gerhard Fohler has held the Chair for Real-time Systems at TU Kaiserslautern since 2006. He received his Dipl.-Ing. and Ph.D. degrees with honors from TU Vienna under Prof. Hermann Kopetz, and was then a postdoctoral researcher at the University of Massachusetts at Amherst, USA. Before joining TU Kaiserslautern, he was with MDH, Sweden, where he was promoted to full professor. His research addresses issues in the field of real-time embedded systems, with emphasis on adaptive real-time systems. Recently, it has come to include related issues in real-time control, real-time networking, real-time media processing, and wireless sensor networks. He has been involved in a number of EU projects, as coordinator and partner, and was a core partner of the EU IST Network of Excellence ARTIST. He is Chairman of the Technical Committee on Real-time Systems of Euromicro, which is responsible for ECRTS, the prime European conference on real-time systems, and a member of the executive board of the real-time and embedded committees of the IEEE, where he chairs the sub-committee on conference affairs. He was program chair of the leading real-time conferences and is associate editor of Springer's Real-Time Systems Journal. He has served as expert reviewer for the EU IST embedded systems unit and other funding agencies. He is a Senior Member of the IEEE.

 

Using an Evidence-based Approach to Inform Practice - Fact or Fiction?

Speaker: Prof. Barbara Kitchenham, Keele University, United Kingdom
Time: May 28th, 10.15
Venue: Lunds Tekniska högskola, E-huset sal E:1406, Ole Römers väg 3D
 

Abstract: In 2004, I wrote a paper for the International Conference in Software Engineering (ICSE) with Magne Jørgensen and Tore Dybå proposing the idea of Evidence-Based Software Engineering (EBSE) using an analogy with Evidence-Based Medicine (EBM). We followed that with another paper for IEEE Software that considered EBSE from the viewpoint of practitioners. Even then we noted some problems applying EBM methods in software engineering. In this presentation, I review the basic problems of using an evidence-based approach, and discuss the extent to which these issues have been addressed. I also consider other ways in which practice could be influenced including the need both to catalogue evidence-based results of particular value to industry and to include evidence-based results in the education of software engineers.
 

Biography of the speaker: Barbara Kitchenham is Professor of Quantitative Software Engineering at Keele University in the UK. She has worked in software engineering for over 40 years both in industry and academia. Her main research interest is software measurement and experimentation in the context of project management, quality control, risk management, and evaluation of software technologies. Her most recent research has focused on the application of evidence-based practice to software engineering. She has published over 100 software engineering journal and conference papers. She has been ranked among the top 10 software engineering scholars in the Journal of Systems and Software series of articles assessing scholars and institutions every year since 1999. She co-authored the paper entitled “Evidence-based Software Engineering” that was presented at the International Conference on Software Engineering in 2004, which this year received the 2014 ACM SIGSOFT Impact Paper Award.  The related paper “Evidence-Based Software Engineering for practitioners” was included in IEEE Software’s 25th Anniversary Top Picks. She also co-authored “Preliminary guidelines for empirical research in software engineering” which was published in IEEE Transactions on Software Engineering and was ranked first in the Information and Software Technology Journal analysis of the most cited articles in software engineering journals for 2002.

 

Precious bits, sharing the spectrum, on elegant energy

Speaker: Liesbet Van Der Perre, KU Leuven, Belgium
Time: May 28th, 13.15-14.00
Venue: Lunds tekniska högskola, E:B, E-building, Faculty of Engineering
 

Abstract: Smart mobile devices offer ever nicer and ever more applications and services. Consequently, wireless communication witnesses continued spectacular growth. The innovative power of nano-electronics has brought about this digital revolution. This huge success has its bounds, however: you cannot fool physics! The limits of radio propagation and technology scaling are a fact, and the confrontation with the scarcity of resources (energy and spectrum) is evident. Researchers join forces to embark on a fascinating journey, conquering appealing new wireless capacity territories.
 

Biography of the speaker: Liesbet Van der Perre is active in a very competitive area, wireless communications. A combination of deep scientific knowledge, fearless analytical skills, and strong leadership qualities makes her thrive in an environment where basic research is mixed with highly specialized commercial projects. Integrated circuits developed by her have been nominated among the best in the last fifty years. Traces of her designs may hide in your pocket right now, making your smartphone perform better and its batteries last longer. She has participated in many European research projects, often in leading roles, where her experience and expertise have been a very valuable asset. She has contributed to the research at Lund University in different ways, both through joint projects and as scientific advisor in several center constellations. Liesbet Van der Perre is currently a director at Imec, Belgium, a world-leading research institute in nanoelectronics, and professor at KU Leuven, Belgium, where she teaches advanced wireless systems. She is author and co-author of over 250 scientific publications.

 

Honorary doctorate lecture: Massive MIMO -- a New Philosophy for Wireless Technology

Speaker: Thomas L. Marzetta, Bell Laboratories, USA
Time: May 22nd, 15.00 (Coffee will be served at 14.30)
Venue: Visionen, building B, entrance 25 (LiU, Valla campus)

Thomas L. Marzetta at Bell Laboratories, USA, will be receiving an honorary doctorate from LiTH in May 2015.

Biography of the speaker: Thomas Marzetta is the originator of Massive MIMO. He is Group Leader of Large Scale Antenna Systems at Bell Labs, Alcatel-Lucent and Co-Head of their FutureX Massive MIMO project.  Dr. Marzetta was born in Washington, D.C. He received the PhD and SB in Electrical Engineering from Massachusetts Institute of Technology in 1978 and 1972, and the MS in Systems Engineering from University of Pennsylvania in 1973. He worked for Schlumberger-Doll Research in petroleum exploration and for Nichols Research Corporation in defense research before joining Bell Labs in 1995 where he served as the Director of the Communications and Statistical Sciences Department within the former Math Center. Currently Dr. Marzetta serves as Coordinator of the GreenTouch Consortium's Large Scale Antenna Systems Project, and as Member of the Advisory Board of MAMMOET (Massive MIMO for Efficient Transmission), an EU-sponsored FP7 project.  For his achievements in Massive MIMO he has received the 2015 IEEE W. R. G. Baker Award and the 2014 Thomas Alva Edison Patent Award, among others. He was elected a Fellow of the IEEE in 2003, and he became a Bell Labs Fellow in 2014. In May 2015 he will receive an Honorary Doctorate from Linköping University.

 

A Lightspeed Data Center Network

Speaker: Prof. William J Dally, Stanford University and Senior Vice President of Research at NVIDIA Corporation
Time: April 21st at 15.00
Venue: Visionen, B-house, Campus Valla, Linköping University
 

Abstract: Emerging data center applications demand low-latency and high-bandwidth networks - similar to those found in high-performance computers. This talk walks through a thought experiment of what a data center network using best-practice HPC network design would look like. It shows that a "dragonfly network" using global adaptive routing and speculative reservations for congestion avoidance can offer network latencies that are dominated by the time-of-flight over the network cables. Because of reduced buffering and better channel utilization, such a network would have lower component cost than a conventional network with comparable performance.
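For a sense of scale, time-of-flight alone sets a floor on network latency; a back-of-the-envelope sketch (the 0.7c propagation factor is an assumed typical value for cables, not a figure from the talk):

```python
C = 299_792_458.0     # speed of light in vacuum, m/s
PROP_FACTOR = 0.7     # assumed fraction of c at which signals travel in a cable

def time_of_flight_us(cable_m):
    """Propagation delay over a cable of the given length, in microseconds."""
    return cable_m / (PROP_FACTOR * C) * 1e6

# Even at data-center scale, flight time is a fraction of a microsecond:
for length in (10, 100, 300):
    print(f"{length:>3} m -> {time_of_flight_us(length):.3f} us")
```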
 

Biography of the speaker: Bill is Chief Scientist and Senior Vice President of Research at NVIDIA Corporation and a Professor (Research) and former chair of Computer Science at Stanford University. Bill and his group have developed system architecture, network architecture, signaling, routing, and synchronization technology that can be found in most large parallel computers today. While at Bell Labs Bill contributed to the BELLMAC32 microprocessor and designed the MARS hardware accelerator. At Caltech he designed the MOSSIM Simulation Engine and the Torus Routing Chip which pioneered wormhole routing and virtual-channel flow control. At the Massachusetts Institute of Technology his group built the J-Machine and the M-Machine, experimental parallel computer systems that pioneered the separation of mechanisms from programming models and demonstrated very low overhead synchronization and communication mechanisms.  At Stanford University his group has developed the Imagine processor, which introduced the concepts of stream processing and partitioned register organizations, the Merrimac supercomputer, which led to GPU computing, and the ELM low-power processor.  Bill is a Member of the National Academy of Engineering, a Fellow of the IEEE, a Fellow of the ACM, and a Fellow of the American Academy of Arts and Sciences.  He has received the ACM Eckert-Mauchly Award, the IEEE Seymour Cray Award, and the ACM Maurice Wilkes award.  He currently leads projects on computer architecture, network architecture, circuit design, and programming systems. He has published over 200 papers in these areas, holds over 100 issued patents, and is an author of the textbooks, Digital Design: A Systems Approach, Digital Systems Engineering, and Principles and Practices of Interconnection Networks.

 

Network Optimization with Massive MIMO

Speaker:  Prof. Giuseppe Caire, TU Berlin and USC, USA
Time:  March 6, 2015, at 10.30am

 

Programs = Data = First-Class Citizens in a Computational World

Speaker: Prof. Neil Jones, University of Copenhagen, Denmark
Time: Thursday February 26, 2015, at 13:15
Venue: Wigforssalen, Halmstad University

Abstract: From a programming perspective, Alan Turing's epochal 1936 paper on computable functions introduced several new concepts and originated a great many now-common programming techniques, including the invention of what are today known as self-interpreters, using programs as data. The 'blob' model of computation is a recent stored-program computational model, biologically motivated and without pointers or memory addresses. Novelties of the blob model: programs are truly first-class citizens, capable of being automatically executed, compiled, or interpreted. The model is Turing complete in a strong sense: a universal interpretation algorithm exists, able to run any program in a natural way and without arcane data encodings. The model appears closer to being physically realisable than earlier computation models. In part this owes to strong finiteness due to early binding, and to a strong adjacency property: the active instruction is always adjacent to the piece of data on which it operates. Joint work with Jakob Grue Simonsen.

Biography of the speaker: Neil Jones is professor emeritus at the University of Copenhagen, Denmark. His research follows two directions: programming languages (compiling, program analysis, partial evaluation, semantics), and the theory of computation and computational complexity. He has published books and a number of articles in both areas. Educated in the U.S. and Canada, Neil Jones has been assistant, associate, or full professor at the University of Western Ontario, Pennsylvania State University, the University of Kansas, Aarhus University in Denmark, and the University of Copenhagen.

 

A Generalized Form of Autotuning

Speaker: Prof. Uwe Aßmann, Technische Universität Dresden
Time: Thursday February 26, 2015, at 10.15
Venue: E:2116, E-house, Lund University/CS

Abstract: So far, autotuning has been a form of continuous optimization for specific kernels and algorithms. In the Collaborative Research Center "Highly-Adaptive Energy-Efficient Computing (HAEC)", we develop a generalized form of autotuning for software product lines. The approach is based on cost-utility functions and relations, which are specified in quality contracts. A more specific form treats energy-utility functions, which describe constraints on energy behavior. From these contracts, constraint-based systems are generated to be solved by constraint solvers. Generalized autotuning attributes every variant of a software product line with quality and energy contracts, and then decides at run time which variant of the application is the most appropriate with regard to a specific objective function. This generalizes autotuning from the level of specific kernels to dynamic software product lines.
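The run-time selection step can be pictured as a small utility maximization over contract-annotated variants. A hypothetical sketch (the variant names, contract fields, and values are invented for illustration; the actual HAEC approach uses constraint solvers over generated constraint systems):

```python
# Each variant of the product line carries a (quality, energy) contract.
variants = {
    "fast":     {"quality": 0.95, "energy": 9.0},
    "balanced": {"quality": 0.85, "energy": 5.0},
    "frugal":   {"quality": 0.70, "energy": 2.0},
}

def select_variant(variants, objective):
    """Pick the variant whose contract maximizes the given objective function."""
    return max(variants, key=lambda name: objective(variants[name]))

# One possible objective: quality delivered per unit of energy spent.
best = select_variant(variants, lambda c: c["quality"] / c["energy"])
print(best)
```

Swapping in a different objective function (e.g. raw quality) selects a different variant, which is the sense in which the decision is taken at run time with regard to a specific objective.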

Biography of the speaker: Uwe Aßmann holds the Chair of Software Engineering at the Technische Universität Dresden.  He has obtained a PhD in compiler optimization and a habilitation on "Invasive Software Composition" (ISC), a composition technology for code fragments enabling flexible software reuse.  ISC unifies generic, connector-, view-, and aspect-based programming for arbitrary program or modeling languages. The technology is demonstrated by the Reuseware environment, a meta-environment for the generation of software tools (http://www.reuseware.org).

His group is a member of the research centre "Center for Advancing Electronics Dresden (cfAED)", working on novel code composition techniques for many-core architectures and modern hardware structures. In the subproject "Highly-Adaptive Energy-Efficient Computing (HAEC)", the group develops energy autotuning (EAT), a technique to dynamically adapt the energy consumption of an application to the required quality of service, the context of the system, and the hardware platforms.

 

Cyber-physical Systems: Opportunities, Problems and (some) Solutions

Speaker: Peter Marwedel (TU Dortmund)
Time: October 24 2014
 

Abstract: The term “cyber-physical systems” characterizes the integration of information and communication technologies (ICT) with their physical environment. This integration results in a huge potential for the development of intelligent systems in a large set of industrial sectors. The potential will be covered in the first part of the talk. Sectors comprise industrial automation (“industry 4.0”), traffic, consumer devices, the smart grid, the health sector, urban living and computer-based analysis in science and engineering. A multitude of goals can be supported in this way, e.g., higher standards of living, higher efficiency of many processes, the generation of knowledge, and safety for society. However, the realization of this integration implies manifold challenges. Challenges covered in the second part of this talk include security, timing, safety, reliability, energy efficiency, interfacing, and the discovery of information in huge amounts of data. Also, the inherent multidisciplinarity poses challenges for knowledge acquisition and application. In the third and final part of the talk, we will present some of our contributions addressing these issues. These contributions include techniques for improving energy efficiency, the integration of a timing model into the code-generation process, tradeoffs between timeliness and reliability, and approaches to education that cross the boundaries of the involved disciplines.

 

Exploring Big Urban Data

Speaker: Claudio Silva (New York University, USA)
Time: October 23 2014


Abstract: For the first time in history, more than half of the world's population lives in urban areas; in a few decades, the world's population will exceed 9 billion, 70 percent of whom will live in cities. Given the growing volume of data that is being captured by cities, the exploration of urban data will be essential to inform both policy and administration, and enable cities to deliver services effectively, efficiently, and sustainably while keeping their citizens safe, healthy, prosperous, and well-informed.   Urban data analysis is a growing research field that will not only push computer science research in new directions, but will also enable many others, including urban planners, social scientists, transportation experts, and so on, to understand how cities work at unprecedented detail.

An important long-term goal of our research is to enable interdisciplinary teams to “crack the code of cities”.  Over the past 3 years, we have been working on methods and systems that support urban data analysis, with a focus on spatio-temporal aspects.  We will describe these efforts, in particular our work on analyzing the NYC taxi dataset, which contains information about over 850 million yellow cab trips that took place in NYC from 2009 to 2013. We will also discuss a number of challenges that have led us to new research paths, pushing us to design new data management, data analysis and visualization techniques.

This work was supported in part by the National Science Foundation, a Google Faculty Research award, the Moore-Sloan Data Science Environment at NYU, IBM Faculty Awards, NYU School of Engineering and Center for Urban Science and Progress.

 

Advances in active MIMO sensing

Speaker: Prof. Sergiy Vorobyov, Aalto University, Finland and University of Alberta, Canada
Time: October 13, 2014, at 10.15
Place: Room Signalen, building B, entrance 27, Linköping University/ISY
 

Abstract: We start with background on MIMO radar: MIMO radar models for the different antenna configurations. For the configuration with colocated transmit antennas, the tradeoff between waveform diversity and coherent processing is of great importance. Indeed, while MIMO radar allows for using different waveforms and thus extending the aperture of the virtual array, the SNR gain of MIMO radar is low compared to that of phased-array radar. The tradeoff between the MIMO and phased-array radars is the phased-MIMO radar. We will explain why the phased-MIMO radar enjoys the advantages of the MIMO radar without sacrificing the main advantage of the phased-array radar. Coherent processing gain can be achieved by designing weight vectors for subarrays transmitting different waveforms to form beams toward a certain direction in space. Substantial improvements offered by the phased-MIMO radar will be demonstrated and explained. Moreover, it will be shown how transmit beamspace energy focusing can be used for the direction-finding problem. Assuming that the angular directions of the targets lie within a certain spatial sector, the energy of multiple transmitted orthogonal waveforms can be focused within that spatial sector using transmit beamformers designed to improve the SNR gain at each receive antenna and to guarantee that matched-filtering the received data to the waveforms yields multiple data sets with the rotational-invariance property. High-resolution and search-free techniques such as MUSIC, ESPRIT, and PARAFAC can then be used for direction finding. Unlike classic ESPRIT-based direction-finding techniques, we propose to achieve the rotational-invariance property in a different manner, combined also with transmit energy focusing. This enables better estimation performance at lower computational cost.
 

Speaker's bio: Sergiy Vorobyov is a Professor with the Department of Signal Processing and Acoustics, Aalto University, Finland, and an adjunct Professor at the Department of Electrical and Computer Engineering, University of Alberta, Edmonton, Canada. He has been with the University of Alberta as an Assistant Professor from 2006 to 2010, Associate Professor from 2010 to 2012, and Full Professor since 2012. Since his graduation, he has also held various research and faculty positions at Kharkiv National University of Radio Electronics, Ukraine; the Institute of Physical and Chemical Research (RIKEN), Japan; McMaster University, Canada; Duisburg-Essen University and Darmstadt University of Technology, Germany; and the Joint Research Institute between Heriot-Watt University and Edinburgh University, U.K. He has also held short-term visiting positions at the Technion, Haifa, Israel, and Ilmenau University of Technology, Ilmenau, Germany. His research interests include statistical and array signal processing; applications of linear algebra, optimization, and game theory methods in signal processing and communications; estimation, detection and sampling theories; and cognitive systems.

Dr. Vorobyov is a recipient of the 2004 IEEE Signal Processing Society Best Paper Award, the 2007 Alberta Ingenuity New Faculty Award, the 2011 Carl Zeiss Award (Germany), the 2012 NSERC Discovery Accelerator Award, and other awards. He served as an Associate Editor for the IEEE TRANSACTIONS ON SIGNAL PROCESSING from 2006 to 2010 and for the IEEE SIGNAL PROCESSING LETTERS from 2007 to 2009. He was a member of the Sensor Array and Multi-Channel Signal Processing Committee of the IEEE Signal Processing Society from 2007 to 2012, and has been a member of the Signal Processing for Communications and Networking Committee since 2010. He has served as the Track Chair for Asilomar 2011, Pacific Grove, CA, the Technical Co-Chair for IEEE CAMSAP 2011, Puerto Rico, and the Tutorial Chair for ISWCS 2013, Ilmenau, Germany.

 

Visual Analytics in Cohort Study Data

Speaker: Bernhard Preim, University of Magdeburg
Time: Wed 27/8 at 15:00-16:00
Place: VR-Arena, Norrköping Visualization Center
 

Abstract: Epidemiology aims at understanding relations between life style, genetics, environmental factors and the outbreak of diseases. Recent large-scale cohort studies involving a wealth of sociodemographic data and medical image data bear a great potential to identify complex interactions between different factors and non-linear relations between risk factors and the likelihood and severity of diseases. A combination of data analysis (dimensionality reduction and clustering), image analysis (segmentation and quantification of target structures) and interactive visualization supports the generation of new hypotheses.

As a case study, we discuss back pain and its relation to socio-demographic and medical image data. The epidemiological data is part of SHIP, the Study of Health in Pomerania.
 

Speaker's bio: Bernhard Preim was born in 1969 in Magdeburg, Germany. He received the diploma in computer science in 1994 (minor in mathematics) and a Ph.D. in 1998 from the Otto-von-Guericke University of Magdeburg (PhD thesis "Interactive Illustrations and Animations for the Exploration of Spatial Relations"). In 1999 he finished his work on a German textbook on Human-Computer Interaction, published by Springer. He then moved to Bremen, where he joined the staff of MeVis. In close collaboration with radiologists and surgeons he directed the work on "computer-aided planning in liver surgery" and initiated several projects funded by the German Research Council in the area of computer-aided surgery. In June 2002 he received the Habilitation degree (venia legendi) for computer science from the University of Bremen. Since March 2003 he has been full professor for "Visualization" at the computer science department of the Otto-von-Guericke University of Magdeburg, heading a research group focused on medical visualization and applications in surgical education and surgery planning. These developments are summarized in a comprehensive textbook, "Visualization in Medicine" (co-author: Dirk Bartz). His continuous interest in HCI led to another textbook, "Interaktive Systeme" (co-author: R. Dachselt) (Springer, 2010).


Bernhard Preim is a member of the ACM and of the German Chapter of the ACM, where he served as Vice-President (2001-03) and Past Vice-President (2003-05). He is speaker of the working group Medical Visualization in the German Society for Computer Science. He was Co-Chair and Co-Organizer of the first and second Eurographics Workshop on Visual Computing in Biology and Medicine (VCBM) and is now a member of the steering committee of that workshop. He is the chair of the scientific advisory board of ICCAS (International Competence Center on Computer-Assisted Surgery, since 2010) and a member of the advisory board of the Fraunhofer Heinrich Hertz Institute. He is also regularly a Visiting Professor at the University of Bremen, where he closely collaborates with Fraunhofer MEVIS. At the University of Magdeburg, Bernhard Preim has been a member of the Board since 2008. He is married to Uta Preim (née Hahn), a medical doctor, and has two children.

 

Cognitive Radio Transceiver Chips

Speaker: Dr. Eric Klumperink
Time: Wednesday 14 May 2014, 10.15 - 11.30
Place: Room ‘Signalen’ in the B-building, Linköping University
 

Abstract: A Cognitive Radio transceiver senses its radio environment and adaptively utilizes free parts of the radio spectrum. CMOS IC technology is the mainstream technology to implement smart signal processing, and for reasons of cost and size it is attractive to also integrate the radio frequency (RF) hardware in CMOS. This lecture discusses radio transceiver ICs designed for cognitive radio applications, with a focus on analog RF. Cognitive radio calls for new functionality, e.g. spectrum sensing, more agility in the radio transmitter, and more flexibility in the receiver. Moreover, the technical requirements on the building blocks are more challenging than for traditional single-standard applications, e.g. in bandwidth, programmability, sensing sensitivity, blocker tolerance, linearity and spurious emissions. Circuit ideas that address these challenges will be discussed, and examples of chips and their achieved performance will be given.
 

Speaker bio: Eric Klumperink received his PhD from the University of Twente in Enschede, The Netherlands, in 1997. He is currently an Associate Professor at the same university, where he teaches Analog and RF CMOS IC Design and guides research projects focusing on Cognitive Radio, Software Defined Radio and Beamforming. Eric served as Associate Editor for TCAS-I and II, and for the Journal of Solid-State Circuits. He is a technical program committee member of ISSCC and RFIC and is Respected Lecturer for IEEE. He holds several patents, has authored and co-authored more than 150 international refereed journal and conference papers, and is a co-recipient of the ISSCC 2002 and ISSCC 2009 “Van Vessem Outstanding Paper Award”.

 

Breaking the curve

Speaker: Björn Ekelund (Ericsson)
Time: 2013-10-22, 10.30-11.30
Place: Matteannexet, LTH (Sölvegatan 20, Lund) 
 

Abstract: “The computing pendulum swings back; From far-away server halls to ubiquitous intelligence.” We have seen the world swing back and forth from centralized (IBM370 in the 80’s and Google in the 00’s) to distributed (PC in the 90’s) paradigms and it is now time for a new swing. With the current and forecasted rate of development in digital services and both human and machine related data generation, a centralized cloud computing model is becoming unsustainable. To address this, both devices and networks will have to employ new types of intelligence and communication. The distributed mobile cloud is our connected future.

Björn Ekelund has an M.Sc.E.E. and a Ph.L. in telecommunications microelectronics, both from Lund University. Mr. Ekelund has held leading positions in telecommunications research and technology for over 25 years and has contributed to the creation and introduction of all major cellular standards. He currently heads the Ecosystem and Strategy unit in Ericsson Business Unit Modems and is also part of the Ericsson Group technology strategy staff. In parallel with his work at Ericsson, he is the interim director for MAPCI – The Mobile and Pervasive Computing Institute at Lund University.

 

Safe Semi-Autonomous Control of Unmanned Ground Vehicles

Speaker: Karl Iagnemma (MIT, Massachusetts, USA)
Time: 2013-10-23, 08.00-09.00
Place: Matteannexet, LTH (Sölvegatan 20, Lund)
 

Abstract: Semi-autonomous teleoperated control of unmanned ground vehicles is a challenging task for several reasons. First, remote operators often experience reduced situational awareness due to limited sensory feedback. Second, the presence of communications latency can degrade performance and reduce achievable task speed and complexity. Finally, bandwidth limits can fundamentally restrict the type and quality of sensory data that can be transmitted to the operator. This talk will discuss the development of various technologies at MIT aimed at enabling safe teleoperation of unmanned ground vehicles over degraded communications links. An approach to assistive vehicle control that acts as an "intelligent co-pilot" to mitigate vehicle collisions and instability will be described. Methods for visual feature extraction to enable teleoperation over extremely low-bandwidth communications links will also be discussed. Finally, learning-based methods for adaptively sensing terrain properties will be discussed. The talk will conclude with a description of applications to fully autonomous vehicle operation.

Karl Iagnemma is Director of the Robotic Mobility Group at the Massachusetts Institute of Technology. He holds a B.S. from the University of Michigan, and an M.S. and Ph.D. from MIT, where he was a National Science Foundation Graduate Fellow. He has performed postdoctoral research at MIT, and been a visiting researcher at the NASA JPL, the National Technical University of Athens (Greece), and the University of Halmstad (Sweden). Dr. Iagnemma’s primary research interests are in the areas of design, modeling, motion planning, and control of robotic vehicles in complex terrain. He is author of the book, "Mobile Robots in Rough Terrain: Estimation, Planning and Control with Application to Planetary Rovers" (Springer, 2004). He has recently led research programs for organizations including DARPA, JPL, Nissan, Samsung, the U.S. Army TARDEC, the Army Research Office, the NASA Mars Office, Ford Motor Company, and many others. He has authored or co-authored over 100 conference and journal papers on a wide range of robotic topics, has received numerous issued patents in the areas of robotic system design and control, and has received several best paper awards for his research.

 

Control Challenges to Renewables Integration in Smart Grid

Speaker: Dr. Paul K. Houpt
Affiliation: GE Global Research, Automation and Controls Laboratory

LUND
Time: Wednesday 16 May, 10.30
Place: M:B

LINKÖPING
Time: Monday 21 May, 15.15
Place: Visionen, Building B, Campus Valla
 

Abstract: The expected emergence of high renewable penetration in electric power generation and delivery worldwide presents major challenges to the controls community. In accommodating the wide variability of supply from wind and solar energy sources, new control strategies must also manage demand arising from new loads such as EVs. To accommodate growth in demand, the existing infrastructure must be exploited, and means must be provided to move the grid closer to thermal limits while retaining stability. While many tools for control are emerging, such as energy storage and flexible demand management, e.g. through third-party aggregators, the problems are challenging from both algorithmic and hardware perspectives. In this talk we will overview some of the challenges in modeling the grid for use in controls and discuss attributes of importance to controls engineers in both sensing and actuation, made possible by network-wide measurements from PMUs (phase) and solid-state power control (real and reactive) with the inverters used in wind and solar ‘farms.’ While no definitive solutions are provided, some promising directions are identified, along with suggestions for what is likely to be in the portfolio of enablers for smart-grid controls. This includes an example vision of how all the bits might come together in a future marriage of the smart grid with a rail transportation system. This presentation is based in part on findings of the Smart Grid working group of an IEEE CSS-sponsored workshop on the Future of Controls, held in Berchtesgaden, Germany, in 2009.
 

Speaker’s bio: Paul earned his B.S. from Syracuse University, his M.S. from New York University, and his Ph.D. from M.I.T., all in Electrical Engineering. From 1966 to 1970, he was a member of technical staff in the Power Systems division of Bell Laboratories in Whippany, NJ. At Bell, an assignment to develop a fuel-optimal control strategy for the Apollo lunar lander spawned a life-long passion for controls. He was a post-doc in MIT’s Laboratory for Information and Decision Systems from 1974 to 1978, where his research focused on freeway traffic control and model-based highway incident detection. Paul transitioned to the Mechanical Engineering faculty at MIT in 1978 as Detroit Diesel Allison Associate Professor, where his research and teaching concentrated on control systems for vehicular propulsion (gas and diesel engines), wind-power generation, and manufacturing process control for semiconductor materials. From 1985 until his retirement in February 2012, he was associated with the Automation and Controls Laboratory at GE Global Research, where he was Principal Scientist, Controls. At GE, Paul served in several technical-contributor and management roles, all associated with the development of control systems for GE products and manufacturing processes. During his last ten years there, he led and contributed to teams developing freight-locomotive engine and system controls to save fuel and reduce emissions. Dr. Houpt has served on program committees for several American Control Conferences and Conference on Decision and Control conferences, served as an associate editor for the IEEE Transactions on Automatic Control, was twice elected to the IEEE Control Systems Society Board of Governors, and currently chairs the Control Systems Technology Award committee. Since 1995, he has been a member of the advisory board of the Institute for Systems Research at the University of Maryland. Dr. Houpt has published widely on vehicular system controls and diagnostics, process control, and gas-turbine control. He has been issued 15 patents (10 additional pending) on developments in locomotive controls, process controls, and wayside rail diagnostics. In 2005, Dr. Houpt was elected a Fellow of the IEEE for ‘Contributions to the Control of Transportation Vehicles and Systems,’ and also received GE Global Research’s Dushman Award, its highest team award, for the team’s contributions to the successful commercialization of the GE Evolution Locomotive.

In 2009 Paul and the team he led were again honored with the Dushman Award for the commercialization of Trip Optimizer, an optimal “cruise-control” for freight trains. This system will save 10-25% in fuel use and emissions production for heavy-haul freight trains. Since 2009 Paul has been exploring controls opportunities in Smart Grid, the focus of this talk.

 

Dead reckoning for Vehicles and Pedestrians

Speaker: Johann Borenstein (Univ. of Michigan, US)
Time: 2012-05-03, 11.00-11.50
Place: Collegium, Linköping

 

Structured Computational Polymers: From Smart Clothing to Squishy Bots

Speaker: Richard Voyles (University of Denver, US)
Time: 2012-05-03, 10.10-11.00
Place: Collegium Linköping

 

Program verification using Dafny

Speaker: K. Rustan M. Leino (http://research.microsoft.com/en-us/um/people/leino/)
Affiliation: Microsoft Research, Redmond, WA, USA
Time: Thursday 22 March, 10.15-12.00
Place: Alan Turing, Building E (IDA), Campus Valla, Linköping
 

Abstract: Every day, software engineers reason about the programs they write and the programs others have written. But the automatic tools at their disposal for helping with this reasoning are still relatively weak, often providing only syntactic checks and type checks. At the other extreme, program verification aims at full semantic reasoning about programs, but the associated tooling can be intimidating to users.

Dafny (http://research.microsoft.com/dafny) is a programming language that integrates program verification into the development experience. The language is class-based and sequential, and it offers specification features as part of the language (à la Eiffel). Because the Dafny verifier runs continuously in the background, the consistency of a program and its specifications is always enforced.
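Outside Dafny, the flavor of language-integrated specifications can be imitated loosely, though only checked at run time rather than proved statically as Dafny does. The following is a hypothetical Python sketch (the decorator and function names are ours, not part of Dafny or any Microsoft tool):

```python
# Loose, run-time-only imitation of Dafny-style pre/postconditions.
# Hypothetical helper: Dafny verifies such specifications statically,
# whereas this sketch merely asserts them on each call.

def requires_ensures(pre, post):
    def wrap(f):
        def wrapped(*args):
            assert pre(*args), "precondition violated"
            result = f(*args)
            assert post(result, *args), "postcondition violated"
            return result
        return wrapped
    return wrap

@requires_ensures(pre=lambda x: x >= 0,
                  post=lambda r, x: r * r <= x < (r + 1) * (r + 1))
def isqrt(x):
    # Integer square root by simple linear search.
    r = 0
    while (r + 1) * (r + 1) <= x:
        r += 1
    return r

print(isqrt(10))  # 3
```

The contract here (result squared is at most x, and the next integer's square exceeds x) is exactly the kind of specification a Dafny programmer would attach to the method itself, with the verifier discharging it once and for all in the background.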

In this talk, I will first give a sampler of some formal-methods based tools that are used at Microsoft. Then, for the main part of the talk, I will present Dafny through a series of live demos.
 

Speaker’s bio: Rustan Leino is a Principal Researcher in the Research in Software Engineering (RiSE) group at Microsoft Research. He is a world leader in building automatic program verifiers and is generally known for his work on programming methods and program verification tools. He has led a number of programming language and verification projects, including Spec# (which extends C# with contracts and was a forerunner of the Code Contracts in Microsoft .NET 4.0), Chalice (for concurrent programs), Dafny (for functional-correctness verification), and, previously, ESC/Java. He is the architect of the Boogie program verification framework, which underlies more than a dozen program verifiers for C, Spec#, and other languages.

Before getting his Ph.D. (Caltech, 1995), Leino designed and wrote object-oriented software as a technical lead in the Windows NT group at Microsoft. Leino collects thinking puzzles on a popular webpage and has started the Verification Corner video blog on channel9.msdn.com. In his spare time, he plays music and teaches cardio exercise classes.

 

Algebraic Structure in Network Information Theory

Speaker: Prof. Michael Gastpar
Affiliation: EPFL, Switzerland
Time: Thursday 27 October in Linköping, at 10.15
Place: Planck, Physics building, Campus Valla Linköping University.

This Distinguished Lecture was arranged in cooperation with the IEEE VT/COM/IT Sweden Chapter Board and the ACCESS Linnaeus center, and was hosted by Prof. Erik Larsson.
 

Abstract: Information Theory has impacted the design of communication systems in a profound fashion, both by providing fundamental performance bounds as well as architectural guidance. For point-to-point connections and simple network situations, information-theoretic bounds are based purely on statistical arguments, and algebra enters the game as an enabler of practically implementable (though typically suboptimal) codes.

In emerging communication network scenarios where interference between multiple flows of information plays a fundamental role, recent insights suggest that statistical arguments alone are insufficient to characterize the fundamental behavior. Rather, algebraic arguments play a key role. Important recent examples include classical network coding (in multiple-unicast configurations), physical-layer network coding in relay networks, interference alignment, and more. In this talk, we discuss elements of an emerging algebraic network information theory.

Based in part on joint work with Uri Erez, Bobak Nazer, Shlomo Shamai, and Jiening Zhan.
 

Biography: Michael Gastpar (Ph.D. EPFL, 2002, M.S. UIUC, 1999, Dipl. El-Ing, ETH, 1997) is currently a Professor at EPFL and an Adjunct Associate Professor at the University of California, Berkeley. He was an Assistant Professor 2003-2008 and a tenured Associate Professor 2008-2011 at the University of California, Berkeley. He also holds a faculty position at Delft University of Technology, The Netherlands, and he spent time as a researcher at Bell Laboratories, Lucent Technologies, Murray Hill, NJ. His research interests are in network information theory and related coding and signal processing techniques, with applications to sensor networks and neuroscience. He won the 2002 EPFL Best Thesis Award, an NSF CAREER award in 2004, an Okawa Foundation Research Grant in 2008, and an ERC Starting Grant in 2010. He is an Information Theory Society Distinguished Lecturer (2009-2011). He has served as an Associate Editor for Shannon Theory for the IEEE Transactions on Information Theory (2008-11), and as Technical Program Committee Co-Chair for the 2010 International Symposium on Information Theory, Austin, TX.

http://people.epfl.ch/michael.gastpar

 

Social Sensing Challenges for a Smarter Planet

Speaker: Tarek Abdelzaher (UIUC)
Time: 2011-10-17, 11.30-12.30
Place: Hörsalen, Kårhuset, Lund University, Lund
 

Abstract: The vision of smarter cities that better conserve their resources and better streamline their services relies in part on the increased availability of data about the real-time state of such resources and services, and the increased ability to perform large-scale data analytics. A central architectural component is therefore an infrastructure for data collection and information distillation that relies not only on fixed sensors but, more importantly, on people (and their mobile devices) as data sources. This talk motivates such an infrastructure by example applications, then discusses the research challenges brought forth by the need to collect reliable data in real time from large groups of individuals who may be unknown, unreliable, or not motivated to collect and share their data. Mathematical formulations of the underlying problems are presented together with analytic solutions. Initial evaluation results are shown from experimental prototypes deployed in controlled settings.
 

Bio: Tarek Abdelzaher received his B.Sc. and M.Sc. degrees in Electrical and Computer Engineering from Ain Shams University, Cairo, Egypt, in 1990 and 1994 respectively. He received his Ph.D. from the University of Michigan in 1999 for work on Quality of Service Adaptation in Real-Time Systems. He was an Assistant Professor at the University of Virginia until 2005, where he founded the Software Predictability Group. He is currently a Professor and Willett Faculty Scholar at the Department of Computer Science, the University of Illinois at Urbana-Champaign. He has authored/coauthored more than 150 refereed publications in real-time computing, distributed systems, sensor networks, and control. He is Editor-in-Chief of the Journal of Real-Time Systems, and has served as Associate Editor of the IEEE Transactions on Mobile Computing, IEEE Transactions on Parallel and Distributed Systems, IEEE Embedded Systems Letters, the ACM Transactions on Sensor Networks, and the Ad Hoc Networks Journal. He was Program Chair of RTAS 2004, RTSS 2006, IPSN 2010, ICDCS 2010 and ICAC 2011, as well as General Chair of RTAS 2005, IPSN 2007, RTSS 2007, DCoSS 2008, and SenSys 2008. Abdelzaher's research interests lie broadly in understanding and controlling performance and temporal properties of networked embedded and software systems in the face of increasing complexity, distribution, data dependencies, and degree of embedding in an external physical environment. Tarek Abdelzaher is a member of IEEE and ACM.

 

Future Directions for Wireless Communication Research

Speaker: Thomas Marzetta (Bell-Laboratories, Alcatel-Lucent)
Time: 2011-10-17, 10.30-11.30
Place: Hörsalen Kårhuset, Lund University, Lund
 

Abstract: New applications – perhaps requiring sustained throughputs of hundreds of megabits per second to and from each terminal, with millisecond latency – will inevitably drive the development of revolutionary wireless communication technologies. Large-Scale Antenna Systems (LSAS) is a step in that direction, where unprecedented numbers of service antennas focus directed beams toward a multiplicity of terminals on the forward link, and selectively collect signals from these terminals on the reverse link. The GreenTouch Consortium is promoting LSAS as a means of increasing wireless energy efficiency by three orders of magnitude; a major challenge here is to match improvements in radiated energy efficiency with commensurate improvements in internal energy efficiency. In principle LSAS is adaptable to massive sensor-array telemetry: contrary to the premises of wireless sensor networks, there are types of data – computed tomography, synthetic-aperture radar, and 3D reflection seismology, for example – where one has to acquire raw data from hundreds of thousands of points in space for joint processing at a central access point, without any compression, pruning, or pre-processing. The communication theory that underlies today’s most advanced wireless concepts is based on very elementary notions of electromagnetic propagation and phenomenology. To make further breakthroughs, communication theorists will have to master propagation theory and explore and exploit neglected topics such as near-field effects, mutual coupling of antennas, evanescent waves, super-directivity, and meta-materials. Fortunately, much physical insight is easily obtainable by employing the cornerstone of linear system theory: the Fourier transform.
 

Bio: Thomas L. Marzetta was born in Washington, D.C. He received the PhD in electrical engineering from the Massachusetts Institute of Technology in 1978. His dissertation extended, to two dimensions, the three-way equivalence of autocorrelation sequences, minimum-phase prediction error filters, and reflection coefficient sequences. He worked for Schlumberger-Doll Research (1978–1987) to modernize geophysical signal processing for petroleum exploration. He headed a group at Nichols Research Corporation (1987–1995) which improved automatic target recognition, radar signal processing, and video motion detection. He joined Bell Laboratories in 1995 (formerly part of AT&T, then Lucent Technologies, now Alcatel-Lucent). He has had research supervisory responsibilities in communication theory, statistics, and signal processing. He specializes in multiple-antenna wireless, with a particular emphasis on the acquisition and exploitation of channel-state information. He is the originator of Large-Scale Antenna Systems, which can provide huge improvements in wireless energy efficiency and spectral efficiency.

Dr. Marzetta was a member of the IEEE Signal Processing Society Technical Committee on Multidimensional Signal Processing, a member of the Sensor Array and Multichannel Technical Committee, an associate editor for the IEEE Transactions on Signal Processing, an associate editor for the IEEE Transactions on Image Processing, and a guest associate editor for the IEEE Transactions on Information Theory Special Issue on Signal Processing Techniques for Space-Time Coded Transmissions (Oct. 2002) and for the IEEE Transactions on Information Theory Special Issue on Space-Time Transmission, Reception, Coding, and Signal Design (Oct. 2003).

Dr. Marzetta was the recipient of the 1981 ASSP Paper Award from the IEEE Signal Processing Society. He was elected a Fellow of the IEEE in Jan. 2003.

 

Towards intelligent vision systems: the role of signal theory, statistical models, and learning

Speaker: Rudolf Mester (Goethe Universität, Frankfurt am Main, Germany)
Time: 2010-11-12, 12.20-13.00
Place: Linköping University

 

Wireless Communications ICs: Trends for 3G and LTE

Speaker: Sven Mattisson (Ericsson Research, Lund)
Time: 2010-11-11, 10.45-11.45
Place: Linköping University

Estimation over MIMO Fading Channels: Outage and Diversity Analysis